Smart Media Journal, vol. 12, no. 5, pp. 17-27, 2023.
Min-Kyu Jeon (Graduate School of Business IT, Kookmin University) and Namgyu Kim (Graduate School of Business IT, Kookmin University)
Recently, automatic text summarization, which automatically summarizes only the information meaningful to users, has been studied steadily. In particular, research on text summarization using the Transformer, an artificial neural network model, has mainly been conducted. Among various studies, the GSG method,...
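The GSG (Gap Sentence Generation) objective mentioned in the abstract, introduced by PEGASUS (Zhang et al., cited below), masks whole sentences in a document and trains the model to regenerate them, so that pre-training closely resembles abstractive summarization. The following is a minimal sketch of that idea, not the authors' or PEGASUS's actual implementation: sentence importance is approximated with a simple unigram-overlap proxy for ROUGE-1 F1, and the helper names, the `[MASK1]` token, and the 30% gap ratio are illustrative assumptions.

```python
# Minimal sketch of PEGASUS-style Gap Sentence Generation (GSG) preprocessing.
# Hypothetical helper names; ROUGE-1 F1 is approximated with unigram overlap.

def rouge1_f1(candidate: str, reference: str) -> float:
    """Rough ROUGE-1 F1 proxy based on unigram set overlap."""
    cand, ref = candidate.lower().split(), reference.lower().split()
    if not cand or not ref:
        return 0.0
    overlap = len(set(cand) & set(ref))
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(cand), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def make_gsg_example(sentences: list[str], gap_ratio: float = 0.3):
    """Mask the highest-scoring sentences; return (encoder input, target)."""
    # Score each sentence against the rest of the document.
    scored = []
    for i, sent in enumerate(sentences):
        rest = " ".join(s for j, s in enumerate(sentences) if j != i)
        scored.append((rouge1_f1(sent, rest), i))
    # Select roughly gap_ratio of the sentences as "gap sentences".
    n_gaps = max(1, round(len(sentences) * gap_ratio))
    gap_idx = {i for _, i in sorted(scored, reverse=True)[:n_gaps]}
    # The masked document is the input; the removed sentences are the target.
    masked = " ".join("[MASK1]" if i in gap_idx else s
                      for i, s in enumerate(sentences))
    target = " ".join(s for i, s in enumerate(sentences) if i in gap_idx)
    return masked, target
```

Calling `make_gsg_example` on a list of sentences yields the masked document as the encoder input and the concatenated removed sentences as the decoder target, mirroring the pseudo-summarization setup the GSG objective is built around.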
Y. Liu, "Fine-tune BERT for extractive?summarization," arXiv:1903.10318, 2019.
J. Xu and G. Durrett, "Neural extractive text?summarization with syntactic compression,"?arXiv:1902.00863, 2019.
M. Zhong, P. Liu, Y. Chen, D. Wang, X. Qiu,?and X. Huang, "Extractive summarization as text?matching," arXiv:2004.08795, 2020
R. Nallapati, B. Zhou, C. Gulcehre, and B. Xiang,?"Abstractive text summarization using?sequence-to-sequence rnns and beyond,"?arXiv:1602.06023, 2016
A. M. Rush, S. Chopra, and J. Weston, "A?neural attention model for abstractive sentence?summarization," arXiv:1509.00685, 2015
A. See, P. J. Liu, and C. D. Manning, "Get to?the point: Summarization with pointer-generator?networks," arXiv:1704.04368, 2017
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit,?L. Jones, A. N. Gomez, L Kaiser, and I.?Polosukhin, "Attention is all you need," Advances?in Neural Information Processing Systems, vol.?30, 2017.
J. Zhang, Y. Zhao, M. Saleh, and P. Liu,?"Pegasus: Pre-training with extracted?gap-sentences for abstractive summarization,"?Proceedings of the 37th International Conference?on Machine Learning, Vol. 119, pp. 11328-11339,?2020.
C. Lin, "Rouge: A package for automatic?evaluation of summaries," Text Summarization?Branches Out, pp. 74-81, Barcelona, Spain, Jul.?2004.
H. P. Luhn, "A statistical approach to?mechanized encoding and searching of literary?information," IBM Journal of Research and?Development, vol. 1, no. 4, pp. 309-317, 1957.
M. A. Fattah and F. Ren, "GA, MR, FFNN,?PNN and GMM based models for automatic?text summarization," Comput. Speech Lang., vol.?23, no. 1, pp. 126-144, 2009.
R. Mihalcea and P. Tarau, "Textrank: Bringing?order into text," Proceedings of the 2004?Conference on Empirical Methods in Natural?Language Processing, pp. 404-411, Barcelona,?Spain, Jul. 2004.
L. Page, S. Brin, R. Motwani, and T. Winograd,?"The PageRank Citation Ranking: Bringing?Order to the Web," Stanford University?technical report, 1998.
J. Cha, J. Kim, and P. Kim, "An improved automatic document summarization method based on lexical chains considering semantic relatedness between words," Smart Media Journal, vol. 6, no. 1, pp. 22-29, 2017.
R. Nallapati, B. Zhou, and M. Ma, "Classify or select: Neural architectures for extractive document summarization," arXiv:1611.04244, 2016.
A. Khan and N. Salim, "A review on abstractive summarization methods," Journal of Theoretical and Applied Information Technology, vol. 59, no. 1, pp. 64-72, 2014.
T. Lee, C. Seon, Y. Jung, and S. Kang, "Automatic document summarization applying selective copying for out-of-vocabulary words," Smart Media Journal, vol. 8, no. 2, pp. 58-65, 2019.
S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735-1780, 1997.
I. Sutskever, O. Vinyals, and Q. V. Le, "Sequence to sequence learning with neural networks," Advances in Neural Information Processing Systems, vol. 27, 2014.
D. Bahdanau, K. Cho, and Y. Bengio, "Neural machine translation by jointly learning to align and translate," arXiv:1409.0473, 2014.
A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever, "Improving language understanding by generative pre-training," OpenAI technical report, 2018.
A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, "Language models are unsupervised multitask learners," OpenAI Blog, vol. 1, no. 8, p. 9, 2019.
T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, and A. Askell, "Language models are few-shot learners," Advances in Neural Information Processing Systems, vol. 33, pp. 1877-1901, 2020.
R. Thoppilan, D. De Freitas, J. Hall, N. Shazeer, A. Kulshreshtha, H. Cheng, A. Jin, T. Bos, L. Baker, and Y. Du, "LaMDA: Language models for dialog applications," arXiv:2201.08239, 2022.
H. Touvron, T. Lavril, G. Izacard, X. Martinet, M. Lachaux, T. Lacroix, B. Roziere, N. Goyal, E. Hambro, and F. Azhar, "LLaMA: Open and efficient foundation language models," arXiv:2302.13971, 2023.
A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, and S. Gehrmann, "PaLM: Scaling language modeling with pathways," arXiv:2204.02311, 2022.
J. Devlin, M. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," arXiv:1810.04805, 2018.
M. Joshi, D. Chen, Y. Liu, D. S. Weld, L. Zettlemoyer, and O. Levy, "SpanBERT: Improving pre-training by representing and predicting spans," Transactions of the Association for Computational Linguistics, vol. 8, pp. 64-77, 2020.
E. Kim, J. Shin, and M. Lim, "A key sentence extraction method considering sentence importance based on ELMo embeddings," Smart Media Journal, vol. 10, no. 1, pp. 39-46, 2021.
N. Reimers and I. Gurevych, "Sentence-BERT: Sentence embeddings using Siamese BERT-networks," arXiv:1908.10084, 2019.
C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, "Exploring the limits of transfer learning with a unified text-to-text transformer," The Journal of Machine Learning Research, vol. 21, no. 1, pp. 5485-5551, 2020.