한국정보통신학회논문지 = Journal of the Korea Institute of Information and Communication Engineering, v.26 no.12, 2022, pp. 1769-1776
안영필 (Department of Computer Science, Chungbuk National University) , 박현준 (Division of Software Convergence, Cheongju University)
Automatically summarizing long texts is an important technique, and the BART model is one of the most widely used models for the summarization task. In general, to build a summarization model for a specific domain, fine-tuning is performed by re-training a language model that was pre-trained on a large data...
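As a minimal sketch of the fine-tuning recipe the abstract describes, the snippet below adapts a pretrained BART checkpoint to a toy domain document/summary pair using the Hugging Face Transformers library cited in the references. The checkpoint name "facebook/bart-base", the AdamW learning rate, and the example texts are illustrative assumptions, not the paper's actual configuration.

import torch
from transformers import BartForConditionalGeneration, BartTokenizer

# Load a publicly available pretrained BART checkpoint (an assumption;
# the paper's own checkpoint and hyperparameters may differ).
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# A single (document, reference summary) pair stands in for a domain dataset.
document = "Long source text from the target domain ..."
reference = "Short reference summary ..."

inputs = tokenizer(document, max_length=1024, truncation=True, return_tensors="pt")
labels = tokenizer(text_target=reference, max_length=128, truncation=True,
                   return_tensors="pt").input_ids

# One fine-tuning step: BART computes cross-entropy loss over the summary tokens.
model.train()
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()

# After fine-tuning, summaries are generated with beam search decoding.
model.eval()
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))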
M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer, "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension," arXiv:1910.13461, 2019.
H. Zhuge, Multi-dimensional summarization in cyber-physical society, MA: Morgan Kaufmann, 2016.
J. Zhang, Y. Zhao, M. Saleh, and P. Liu, "PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization," in Proceedings of the International Conference on Machine Learning, Online, pp. 11328-11339, 2020.
J. Devlin, M. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding," in Proceedings of NAACL-HLT, Minneapolis, MN, USA, pp. 4171-4186, 2019.
A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever. (2018). Improving language understanding by generative pre-training [Internet]. Available: https://www.cs.ubc.ca/~amuham01/LING530/papers/radford2018improving.pdf.
A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. (2019). Language models are unsupervised multitask learners. OpenAI blog [Internet]. Available: https://life-extension.github.io/2020/05/27/GPT%E6%8A%80%E6%9C%AF%E5%88%9D%E6%8E%A2/language-models.pdf.
K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation," in Proceedings of EMNLP, Doha, Qatar, pp. 1724-1734, 2014.
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention Is All You Need," in Proceedings of Advances in Neural Information Processing Systems, Long Beach, CA, USA, pp. 5998-6008, 2017.
V. Nair and G. E. Hinton, "Rectified Linear Units Improve Restricted Boltzmann Machines," in Proceedings of the 27th International Conference on Machine Learning, Haifa, Israel, 2010.
D. Hendrycks and K. Gimpel, "Gaussian Error Linear Units (GELUs)," arXiv:1606.08415, 2016.
D. Bahdanau, K. H. Cho, and Y. Bengio, "Neural Machine Translation by Jointly Learning to Align and Translate," arXiv:1409.0473, 2015.
J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, "Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling," in Proceedings of the NIPS 2014 Workshop on Deep Learning, Montreal, Canada, 2014.
H. Sak, A. W. Senior, and F. Beaufays, "Long Short-Term Memory Recurrent Neural Network Architectures for Large Scale Acoustic Modeling," in Proceedings of Interspeech, Singapore, pp. 338-342, Sep. 2014.
A. Galassi, M. Lippi, and P. Torroni, "Attention in Natural Language Processing," IEEE Transactions on Neural Networks and Learning Systems, vol. 32, no. 10, pp. 4291-4308, Oct. 2021.
K. Cho, A. Courville, and Y. Bengio, "Describing Multimedia Content Using Attention-Based Encoder-Decoder Networks," IEEE Transactions on Multimedia, vol. 17, no. 11, pp. 1875-1886, Nov. 2015.
Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov, "RoBERTa: A Robustly Optimized BERT Pretraining Approach," arXiv:1907.11692, 2019.
A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby, "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale," arXiv:2010.11929, 2020.
H. Touvron, M. Cord, M. Douze, F. Massa, A. Sablayrolles, and H. Jegou, "Training data-efficient image transformers & distillation through attention," in Proceedings of the International Conference on Machine Learning, Virtual, pp. 10347-10357, 2021.
H. Zhao, J. Jia, and V. Koltun, "Exploring Self-Attention for Image Recognition," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, pp. 10076-10085, 2020.
Z. Lin, Y. Wu, S. V. Peri, W. Sun, G. Singh, F. Deng, J. Jiang, and S. Ahn, "SPACE: Unsupervised Object-Oriented Scene Representation via Spatial Attention and Decomposition," in Proceedings of the International Conference on Learning Representations, Virtual, 2020.
N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko, "End-to-End Object Detection with Transformers," in Proceedings of the European Conference on Computer Vision, Virtual, pp. 213-229, 2020.
X. Zhu, W. Su, L. Lu, B. Li, X. Wang, and J. Dai, "Deformable DETR: Deformable Transformers for End-to-End Object Detection," arXiv:2010.04159, 2020.
K. Gregor, I. Danihelka, A. Graves, D. Rezende, and D. Wierstra, "DRAW: A Recurrent Neural Network for Image Generation," in Proceedings of the International Conference on Machine Learning, Lille, France, pp. 1462-1471, 2015.
N. Parmar, A. Vaswani, J. Uszkoreit, L. Kaiser, N. Shazeer, A. Ku, and D. Tran, "Image Transformer," in Proceedings of the International Conference on Machine Learning, Stockholm, Sweden, pp. 4055-4064, 2018.
M. Chen, A. Radford, R. Child, J. Wu, H. Jun, D. Luan, and I. Sutskever, "Generative Pretraining from Pixels," in Proceedings of the International Conference on Machine Learning, Virtual, pp. 1691-1703, 2020.
R. Mihalcea and P. Tarau, "TextRank: Bringing Order into Text," in Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, Barcelona, Spain, pp. 404-411, 2004.
Y. Liu and M. Lapata, "Text Summarization with Pretrained Encoders," in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3730-3740, 2019.
D. Huang, L. Cui, S. Yang, G. Bao, K. Wang, J. Xie, and Y. Zhang, "What Have We Achieved on Text Summarization?," in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, pp. 446-469, 2020.
C. Sun, X. Qiu, Y. Xu, and X. Huang, "How to Fine-Tune BERT for Text Classification?," in Proceedings of the China National Conference on Chinese Computational Linguistics, Kunming, China, pp. 194-206, 2019.
Z. Du, Y. Qian, X. Liu, M. Ding, J. Qiu, Z. Yang, and J. Tang, "All NLP Tasks Are Generation Tasks: A General Pretraining Framework," arXiv:2103.10360v1, 2021.
X. Liang, L. Wu, J. Li, Y. Wang, Q. Meng, T. Qin, W. Chen, M. Zhang, and T. Liu, "R-Drop: Regularized Dropout for Neural Networks," in Proceedings of Advances in Neural Information Processing Systems 34 (NeurIPS 2021), Online, 2021.
K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, pp. 770-778, 2016.
A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, "PyTorch: An Imperative Style, High-Performance Deep Learning Library," in Proceedings of Advances in Neural Information Processing Systems, Vancouver, Canada, vol. 32, pp. 8026-8037, 2019.
T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, J. Davison, S. Shleifer, P. von Platen, C. Ma, Y. Jernite, J. Plu, C. Xu, T. L. Scao, S. Gugger, M. Drame, Q. Lhoest, and A. M. Rush, "HuggingFace's Transformers: State-of-the-art Natural Language Processing," arXiv:1910.03771, 2019.
S. Narayan, S. B. Cohen, and M. Lapata, "Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization," in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, pp. 1797-1807, 2018.
C. Y. Lin, "ROUGE: A Package for Automatic Evaluation of Summaries," in Proceedings of the ACL Workshop on Text Summarization Branches Out, Barcelona, Spain, pp. 74-81, 2004.
D. P. Kingma and J. Ba, "Adam: A Method for Stochastic Optimization," in Proceedings of the International Conference on Learning Representations, San Diego, CA, USA, 2015.
Y. Chen, "Convolutional Neural Network for Sentence Classification," M. S. thesis, University of Waterloo, 2015.
J. Song, "UFO-ViT: High Performance Linear Vision Transformer without Softmax," arXiv:2109.14382, 2021.
G. Zhao, X. Sun, J. Xu, Z. Zhang, and L. Luo, "MUSE: Parallel Multi-scale Attention for Sequence to Sequence Learning," arXiv:1911.09483, 2019.
External-Attention-pytorch/SSDPE, GitHub [Internet]. Available: https://github.com/xmu-xiaoma666/External-Attention-pytorch#3-simplified-self-attention-usage.
M. H. Guo, Z. N. Liu, T. J. Mu, and S. M. Hu, "Beyond Self-Attention: External Attention Using Two Linear Layers for Visual Tasks," IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1-13, Oct. 2022.