
[Domestic journal article] 텍스트 요약을 위한 어텐션 기반 BART 모델 미세조정
Fine-tuning of Attention-based BART Model for Text Summarization

한국정보통신학회논문지 = Journal of the Korea Institute of Information and Communication Engineering, v.26 no.12, 2022, pp. 1769-1776

안영필 (Department of Computer Science, Chungbuk National University), 박현준 (Division of Software Convergence, Cheongju University)

초록 (Abstract)

Automatic summarization of text made up of long sentences is an important technique. The BART model shows good performance on this summarization task and is one of the most widely used models. In general, to build a summarization model for a specific domain, a language model trained on a large dataset is fine-tuned by re-training it for that domain. Such fine-tuning is usually done by changing the number of nodes in the final fully connected layer. In this paper, however, we propose fine-tuning by adding attention layers, which have recently been applied to a variety of models with good results. To evaluate the proposed method, we ran various experiments during fine-tuning, such as stacking the added layers deeper and fine-tuning without skip connections. The best performance was obtained when two attention layers with skip connections were added to the BART language model.

Abstract

Automatically summarizing long sentences is an important technique. The BART model is one of the widely used models in the summarization task. In general, in order to generate a summarization model of a specific domain, fine-tuning is performed by re-training a language model trained on a large data...
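The approach the abstract describes can be pictured in code: rather than fine-tuning BART only through its final fully connected layer, a small number of attention layers, each wrapped in a skip (residual) connection, are stacked on top of the pretrained model before fine-tuning. The sketch below is only an illustration of that idea, assuming PyTorch and HuggingFace Transformers (both appear in the reference list); the facebook/bart-base checkpoint, the 8 attention heads, and the placement of the added layers on the decoder's final hidden states are illustrative choices, not details taken from the paper.

import torch
import torch.nn as nn
from transformers import BartForConditionalGeneration, BartTokenizer


class BartWithExtraAttention(nn.Module):
    """BART plus extra self-attention layers, each wrapped in a skip connection."""

    def __init__(self, model_name="facebook/bart-base", num_extra_layers=2):
        super().__init__()
        self.bart = BartForConditionalGeneration.from_pretrained(model_name)
        d_model = self.bart.config.d_model
        # Added attention layers (head count is a hypothetical choice, not from the paper).
        self.extra_attn = nn.ModuleList(
            [nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
             for _ in range(num_extra_layers)]
        )
        self.norms = nn.ModuleList(
            [nn.LayerNorm(d_model) for _ in range(num_extra_layers)]
        )

    def forward(self, input_ids, attention_mask, decoder_input_ids):
        # Pretrained encoder-decoder forward pass; take the decoder's hidden states.
        outputs = self.bart.model(
            input_ids=input_ids,
            attention_mask=attention_mask,
            decoder_input_ids=decoder_input_ids,
        )
        hidden = outputs.last_hidden_state  # (batch, target_len, d_model)

        # Extra attention layers; the residual add is the skip connection.
        for attn, norm in zip(self.extra_attn, self.norms):
            attended, _ = attn(hidden, hidden, hidden)
            hidden = norm(hidden + attended)

        # Reuse BART's pretrained LM head to map back to vocabulary logits.
        return self.bart.lm_head(hidden) + self.bart.final_logits_bias


# Toy forward pass: tokenize a document and a summary prefix, get logits.
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartWithExtraAttention()
enc = tokenizer(["A long article to be summarized ..."], return_tensors="pt")
dec = tokenizer(["A short summary"], return_tensors="pt")
with torch.no_grad():
    logits = model(enc.input_ids, enc.attention_mask, dec.input_ids)
print(logits.shape)  # (1, summary_length, vocab_size)

During fine-tuning, the cross-entropy loss on summary tokens would drive the updates; whether only the added layers or the whole model is re-trained is a design choice the abstract does not specify.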

참고문헌 (References, 42)

  1. M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer, "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension," arXiv:1910.13461, 2019. 

  2. H. Zhuge, Multi-dimensional summarization in cyber-physical society, MA: Morgan Kaufmann, 2016. 

  3. J. Zhang, Y. Zhao, M. Saleh, and P. Liu, "PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization," in Proceeding of International Conference on Machine Learning, Online, pp. 11328-11339, 2020. 

  4. J. Devlin, M. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding," in Proceeding of NAACL-HLT, Minneapolis: MN, USA, pp. 4171-4186, 2019. 

  5. A. Radford, K. Narasimhan, T. Salimans and I. Sutskever. (2018). Improving language understanding by generative pre-training [Internet]. Available: https://www.cs.ubc.ca/~amuham01/LING530/papers/radford2018improving.pdf. 

  6. A. Radford, J. Wu, R. Child, D. Luan, D. Amodei and I. Sutskever. (2019). Language models are unsupervised multitask learners. OpenAI blog [Internet]. Available: https://life-extension.github.io/2020/05/27/GPT%E6%8A%80%E6%9C%AF%E5%88%9D%E6%8E%A2/language-models.pdf. 

  7. K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation," in Proceeding of EMNLP, Doha, Qatar, pp. 1724-1734, 2014. 

  8. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention is All you Need," in Proceeding of Advances in neural information processing systems, Long Beach: CA, USA, pp. 5998-6008, 2017. 

  9. V. Nair and G. E. Hinton, "Rectified Linear Units Improve Restricted Boltzmann Machines," in Proceeding of the 27th International Conference on Machine Learning, Haifa, Israel, 2010. 

  10. D. Hendrycks and K. Gimpel, "Gaussian Error Linear Units (GELUs)," arXiv:1606.08415, 2016. 

  11. D. Bahdanau, K. H. Cho, and Y. Bengio, "Neural Machine Translation by Jointly Learning to Align and Translate," arXiv:1409.0473, 2015. 

  12. J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, "Empirical evaluation of gated recurrent neural networks on sequence modeling," in Proceeding of NIPS 2014 Workshop on Deep Learning, Online, 2014. 

  13. H. Sak, A.W. Senior, and F. Beaufays, "Long Short-Term Memory Recurrent Neural Network Architectures for Large Scale Acoustic Modeling," Interspeech, pp. 338-342, Jan. 2014. 

  14. A. Galassi, M. Lippi, and P. Torroni, "Attention in Natural Language Processing," IEEE Transactions on Neural Networks and Learning Systems, vol. 32, no. 10, pp. 4291-4308, Sep. 2020. 

  15. K. Cho, A. Courville, and Y. Bengio, "Describing Multimedia Content Using Attention-Based Encoder-Decoder Networks," IEEE Transactions on Multimedia, vol. 17, no. 11, pp. 1875-1886, Nov. 2015. 

  16. Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov, "RoBERTa: A Robustly Optimized BERT Pretraining Approach," in Proceedings of ICLR 2020 Conference, Virtual, pp. 1-15, 2020. 

  17. A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby, "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale," arXiv:2010.11929, 2020. 

  18. H. Touvron, M. Cord, M. Douze, F. Massa, A. Sablayrolles, and H. Jegou, "Training data-efficient image transformers & distillation through attention," in Proceeding of International Conference on Machine Learning, Virtual, pp. 10347-10357, 2021. 

  19. H. Zhao, J. Jia, and V. Koltun, "Exploring Self-Attention for Image Recognition," in Proceeding of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle: WA, USA, pp. 10076-10085, 2020. 

  20. Z. Lin, Y. Wu, S. V. Peri, W. Sun, G. Singh, F. Deng, J. Jiang, and S. Ahn, "SPACE: Unsupervised Object-Oriented Scene Representation via Spatial Attention and Decomposition," in Proceeding of International Conference on Learning Representations, Virtual, 2019. 

  21. N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko, "End-to-End Object Detection with Transformers," in Proceeding of European Conference on Computer Vision, Virtual, pp. 213-229, 2020. 

  22. X. Zhu, W. Su, L. Lu, B. Li, X. Wang, and J. Dai, "Deformable DETR: Deformable Transformers for End-to-End Object Detection," arXiv:2010.04159, 2020. 

  23. K. Gregor, I. Danihelka, A. Graves, D. Rezende, and D. Wierstra, "DRAW: A Recurrent Neural Network for Image Generation," in Proceeding of International Conference on Machine Learning, Lille, France, pp. 1462-1471, 2015. 

  24. N. J. Parmar, A. Vaswani, J. Uszkoreit, L. Kaiser, N. Shazeer, A. Ku, and D. Tran, "Image Transformer," in Proceeding of International Conference on Machine Learning, Stockholm, Sweden, pp. 4055-4064, 2018. 

  25. M. Chen, A. Radford, R. Child, J. Wu, H. Jun, D. Luan, and I. Sutskever, "Generative Pretraining from Pixels," in Proceeding of International Conference on Machine Learning, Virtual, pp. 1691-1703, 2020. 

  26. R. Mihalcea and P. Tarau, "Textrank: Bringing Order into Text," in Proceeding of the 2004 Conference on Empirical Methods in Natural Language Processing, Barcelona, Spain, pp. 404-411, 2004. 

  27. Y. Liu and M. Lapata, "Text Summarization with Pretrained Encoders," in Proceeding of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, pp. 3730-3740, 2019. 

  28. D. Huang, L. Cui, S. Yang, G. Bao, K. Wang, J. Xie, and Y. Zhang, "What Have We Achieved on Text Summarization?," in Proceeding of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, pp. 446-469, 2020. 

  29. C. Sun, X. Qiu, Y. Xu, and X. Huang, "How to Fine-Tune BERT for Text Classification?" in Proceeding of China National Conference on Chinese Computational Linguistics, Kunming, China, pp. 194-206, 2019. 

  30. Z. Du, Y. Qian, X. Liu, M. Ding, J. Qiu, Z. Yang, and J. Tang, "All NLP Tasks Are Generation Tasks: A General Pretraining Framework," arXiv:2103.10360, 2021. 

  31. X. Liang, L. Wu, J. Li, Y. Wang, Q. Meng, T. Qin, W. Chen, M. Zhang, and T. Liu, "R-Drop: Regularized Dropout for Neural Networks," in Proceedings of Advances in Neural Information Processing Systems 34 (NeurIPS 2021), Online, 2021. 

  32. K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," in Proceeding of the IEEE conference on computer vision and pattern recognition, Las Vegas: NV, USA, pp. 770-778, 2016. 

  33. A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, "PyTorch: An Imperative Style, High-Performance Deep Learning Library," in Proceedings of Advances in Neural Information Processing Systems, Vancouver, Canada, vol. 32, pp. 8026-8037, 2019. 

  34. T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, J. Davison, S. Shleifer, P. von Platen, C. Ma, Y. Jernite, J. Plu, C. Xu, T. L. Scao, S. Gugger, M. Drame, Q. Lhoest, and A. M. Rush, "HuggingFace's Transformers: State-of-the-art Natural Language Processing," arXiv:1910.03771, 2019. 

  35. S. Narayan, S. B. Cohen, and M. Lapata, "Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization," in Proceeding of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, pp. 1797-1807, 2018. 

  36. C. Y. Lin, "ROUGE: A Package for Automatic Evaluation of Summaries," in Proceeding of Workshop on Text Summarization of ACL, Barcelona, Spain, pp. 74-81, 2004. 

  37. D. P. Kingma and J. Ba, "Adam: A Method for Stochastic Optimization," in Proceeding of International Conference on Learning Representations, San Diego: CA, USA, 2015. 

  38. Y. Chen, "Convolutional Neural Network for Sentence Classification," M. S. thesis, University of Waterloo, 2015. 

  39. J. Song, "UFO-ViT: High Performance Linear Vision Transformer without Softmax," arXiv:2109.14382, 2021. 

  40. G. Zhao, X. Sun, J. Xu, Z. Zhang, and L. Luo, "MUSE: Parallel Multi-scale Attention for Sequence to Sequence Learning," arXiv:1911.09483, 2019. 

  41. External-Attention-pytorch/SSDPE, Github [Internet]. Available: https://github.com/xmu-xiaoma666/External-Attention-pytorch#3-simplified-self-attention-usage. 

  42. M. H. Guo, Z. N. Liu, T. J. Mu, and S. M. Hu, "Beyond Self-Attention: External Attention Using Two Linear Layers for Visual Tasks," IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1-13, Oct. 2022. 
