한국IT서비스학회지 = Journal of Information Technology Services, v.19 no.5, 2020, pp.83-91
이치훈 (T3Q Co., Ltd., AI Research Institute), 이연지 (T3Q Co., Ltd., AI Research Institute), 이동희 (School of Business Administration, Kookmin University)
Language models such as BERT have been an important factor in deep learning-based natural language processing. Pre-training transformer-based language models is computationally expensive, since they consist of deep and broad architectures whose layers use an attention mechanism, and also r...
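The abstract is cut off in this record, but its point is that the expensive pre-training step can be reused by fine-tuning an existing checkpoint on a downstream task. The following is a minimal sketch of that general approach, not the paper's implementation: it assumes the Hugging Face transformers and datasets libraries, the bert-base-multilingual-cased checkpoint, the squad_kor_v1 (KorQuAD 1.0) dataset mirror, and illustrative hyperparameters, none of which are confirmed by the record above.

```python
# Minimal sketch (illustrative assumptions, not the paper's code): fine-tune an
# already pretrained BERT checkpoint for extractive question answering instead
# of pre-training a transformer from scratch.
from transformers import (AutoModelForQuestionAnswering, AutoTokenizer,
                          Trainer, TrainingArguments, default_data_collator)
from datasets import load_dataset

checkpoint = "bert-base-multilingual-cased"            # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(checkpoint)

# KorQuAD 1.0 as mirrored on the Hugging Face Hub (assumption about availability);
# any SQuAD-style dataset with question/context/answers fields works the same way.
raw = load_dataset("squad_kor_v1")

def preprocess(batch):
    # Tokenize question/context pairs and map each character-level answer span
    # onto token start/end positions (simplified: no sliding-window stride).
    enc = tokenizer(batch["question"], batch["context"],
                    truncation="only_second", max_length=384,
                    padding="max_length", return_offsets_mapping=True)
    starts, ends = [], []
    for i, offsets in enumerate(enc["offset_mapping"]):
        ans = batch["answers"][i]
        s_char = ans["answer_start"][0]
        e_char = s_char + len(ans["text"][0])
        seq_ids = enc.sequence_ids(i)
        # Token range covering the context (sequence id 1).
        c_start = seq_ids.index(1)
        c_end = len(seq_ids) - 1 - seq_ids[::-1].index(1)
        if offsets[c_start][0] > s_char or offsets[c_end][1] < e_char:
            # Answer was truncated away: point both labels at [CLS].
            starts.append(0)
            ends.append(0)
        else:
            t = c_start
            while t <= c_end and offsets[t][0] <= s_char:
                t += 1
            starts.append(t - 1)
            t = c_end
            while t >= c_start and offsets[t][1] >= e_char:
                t -= 1
            ends.append(t + 1)
    enc["start_positions"] = starts
    enc["end_positions"] = ends
    enc.pop("offset_mapping")
    return enc

train = raw["train"].map(preprocess, batched=True,
                         remove_columns=raw["train"].column_names)

args = TrainingArguments(output_dir="bert-qa-finetuned",
                         per_device_train_batch_size=16,
                         learning_rate=3e-5, num_train_epochs=2)

Trainer(model=model, args=args, train_dataset=train,
        data_collator=default_data_collator, tokenizer=tokenizer).train()
```

Only the fine-tuning step is sketched here; the point is that the pretrained weights are reused rather than re-learned, so the costly pre-training described in the abstract happens once, upstream.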