지능정보연구 = Journal of Intelligence and Information Systems, Vol.25, No.2, 2019, pp.141-166
Yeoil Yun (School of Business Administration, Kookmin University), Eunjeong Ko (Graduate School of Business IT, Kookmin University), Namgyu Kim (School of Business Administration, Kookmin University)
Recently, channels such as social media and SNS generate enormous amounts of data. Among all kinds of data, the portion of unstructured data represented as text has increased geometrically. Because it is difficult to review all of this text data, it is important to access those data rapidly and gra...
* The question-and-answer pairs below were extracted automatically by AI and may contain inaccurate sentences; please use them with care.
| Question | Answer extracted from the paper |
|---|---|
| What kind of field is text mining? | Text mining applies existing data-mining techniques to text data; it is defined as the discovery of previously unknown facts from text data and the derivation of insights from them (Tan, 1999). Text mining requires preprocessing steps such as collecting, segmenting, and cleaning the text data, and a structuring step that converts natural-language text into a structured form must precede the analysis. |
| What is document summarization? | Document summarization can be defined as the process of finding the important information in one or more documents and condensing it into a short document that contains that information (Eduard, 2015). The resulting summary should convey the information of the original objectively, reflecting it as fully as possible while minimizing information loss. Accordingly, research on automatic summarization, which removes human subjectivity from the process, has attracted attention and is being actively pursued (Nenkova, 2012). |
| Word2Vec uses two algorithms, CBOW (Continuous Bag of Words) and Skip-Gram; what does each do? | Word2Vec (Mikolov et al., 2013) vectorizes terms using the CBOW (Continuous Bag of Words) and Skip-Gram algorithms. CBOW is a learning model that derives the vector of a target term from the terms appearing around it, while Skip-Gram derives vectors for the surrounding context terms from the target term. Word2Vec has been recognized for its originality in structuring text at the level of individual terms, which earlier embedding models had not attempted, and is used in a wide range of fields. |
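The contrast between the two objectives above can be sketched by listing the (input → prediction) pairs each model would build from the same context window. The sketch below is illustrative only: the function names and the toy sentence are not from the paper, and a real Word2Vec implementation would additionally train a neural network over such pairs.

```python
# Minimal sketch (hypothetical helper names, toy corpus) contrasting the
# training pairs CBOW and Skip-Gram derive from the same sliding window.

def cbow_pairs(tokens, window=2):
    """CBOW: predict the target word from its surrounding context words."""
    pairs = []
    for i, target in enumerate(tokens):
        context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
        if context:
            pairs.append((tuple(context), target))  # context -> target
    return pairs

def skipgram_pairs(tokens, window=2):
    """Skip-Gram: predict each surrounding context word from the target word."""
    pairs = []
    for i, target in enumerate(tokens):
        for ctx in tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]:
            pairs.append((target, ctx))  # target -> context
    return pairs

tokens = "text mining extracts insight from documents".split()
print(cbow_pairs(tokens)[0])      # (('mining', 'extracts'), 'text')
print(skipgram_pairs(tokens)[:2])  # [('text', 'mining'), ('text', 'extracts')]
```

Note how the two functions invert the direction of prediction over the identical window; this inversion is the whole difference between the two Word2Vec training modes.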
Bingham, E. and H. Mannila, "Random projection in dimensionality reduction: applications to image and text," Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining, (2001), 245-250.
Chen, Y. and M. Bansal, "Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting," Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, (2018), 675-686.
Chopra, S., M. Auli and A. Rush, "Abstractive Sentence Summarization with Attentive Recurrent Neural Networks," Proceedings of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, (2016), 93-98.
Eduard, H., and C. Lin, "Automated text summarization and the SUMMARIST system," Proceedings of a workshop, (1998), 197-214.
Eduard, H., The Oxford Handbook of Computational Linguistics 2nd edition, Oxford University Press, Oxford, 2015.
Erk, K. and S. Pado, "A structured vector space model for word meaning in context," Proceedings of the Conference on Empirical Methods in Natural Language Processing, (2008), 897-906.
Gao, J., Y. He, X. Zhang and Y. Xia, "Duplicate Short Text Detection Based on Word2Vec," 2017 8th IEEE International Conference on Software Engineering and Service Science, (2017), 33-38.
Goldstein, J., M. Kantrowitz, V. Mittal and J. Carbonell, "Summarizing text documents: sentence selection and evaluation metrics," Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval, (1999), 121-128.
Gong, Y. and X. Liu, "Generic text summarization using relevance measure and latent semantic analysis," Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval, (2001), 19-25.
Gupta, V. and G. Lehal, "A Survey of Text Summarization Extractive Techniques," Journal of Emerging Technologies in Web Intelligence, Vol.2, No.3(2010), 258-268.
Neto, J. L., A. A. Freitas and C. A. Kaestner, "Automatic Text Summarization Using a Machine Learning Approach," Brazilian Symposium on Artificial Intelligence, (2002), 205-215.
Kageback, M., O. Mogren, N. Tahmasebi and D. Dubhashi, "Extractive Summarization using Continuous Vector Space Models," Proceedings of the 2nd Workshop on Continuous Vector Space Models and their Compositionality, (2014), 31-39.
Kim, J., J. Kim and D. Hwang, "Korean Text Summarization Using an Aggregate Similarity," Proceedings of the fifth international workshop on Information retrieval with Asian languages, (2000), 111-118.
Ko, E. and N. Kim, "Automatic Quality Evaluation with Completeness and Succinctness for Text Summarization," Journal of Intelligence and Information Systems, Vol.24, No.2(2018), 125-148.
Li, W., X. Xiao, Y. Lyu and Y. Wang, "Improving Neural Abstractive Document Summarization with Structural Regularization," Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, (2018), 4078-4097.
Baroni, M., G. Dinu and G. Kruszewski, "Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors," Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, (2014), 238-247.
Mikolov, T., I. Sutskever, K. Chen, G. Corrado and J. Dean, "Distributed representations of words and phrases and their compositionality," Proceedings of the 26th International Conference on Neural Information Processing Systems, Vol.2, (2013), 3111-3119.
Mittal, N., B. Agarwal, H. Mantri, R. Goyal and M. Jain, "Extractive Text Summarization," International Journal of Current Engineering and Technology, Vol.4, No.2(2014), 870-872.
Fattah, M. A. and F. Ren, "GA, MR, FFNN, PNN and GMM based models for automatic text summarization," Computer Speech & Language, Vol.23, (2009), 126-144.
Nallapati, R., B. Zhou, C. Santos, C. Gulcehre and B. Xiang, "Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond," Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, (2016), 280-290.
Nenkova, A. and K. McKeown, "A Survey of Text Summarization Techniques," Mining Text Data, (2012), 43-76.
Levy, O., Y. Goldberg and I. Dagan, "Improving Distributional Similarity with Lessons Learned from Word Embeddings," Transactions of the Association for Computational Linguistics, Vol.3, (2015), 211-225.
Arora, R. and B. Ravindran, "Latent Dirichlet Allocation and Singular Value Decomposition Based Multi-document Summarization," 2008 8th IEEE International Conference on Data Mining, (2008), 713-718.
Aliguliyev, R. M., "A new sentence similarity measure and sentence based extractive technique for automatic text summarization," Expert Systems with Applications, Vol.36, (2009), 7764-7772.
Salton, G., A. Wong and C. S. Yang, "A Vector Space Model for Automatic Indexing," Communications of the ACM, Vol.18, No.11(1975), 613-620.
Singhal, A., "Modern Information Retrieval: A Brief Overview," Bulletin of the IEEE Computer Society Technical Committee on Data Engineering, (2001), 35-43.
Sonawane, S., A. Ghotkar and S. Hinge, "Context-Based Multi-document Summarization," Contemporary Advances in Innovative and Applicable Information Technology, (2018), 153-165.
Tan, A., "Text Mining: The state of the art and the challenges," Proceedings of the Pacific Asia Conference on Knowledge Discovery and Data Mining PAKDD'99 workshop on Knowledge Discovery from Advanced Databases, (1999), 65-70.
Wan, X. and J. Yang, "Multi-document summarization using cluster-based link analysis," Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval, (2008), 299-306.
Wen, Z., T. Yoshida and X. Tang, "A comparative study of TF*IDF, LSI and multi-words for text classification," Expert Systems with Applications, Vol.38, No.3(2011), 2758-2765.
Yeh, J. Y., H. Ke and W. Yang, "iSpreadRank: Ranking sentences for extraction-based summarization using feature weight propagation in the sentence similarity network," Expert Systems with Applications, Vol.35, (2008), 1451-1462.
Zhang, F., J. Yao and R. Yan, "On the Abstractiveness of Neural Document Summarization," Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, (2018), 785-790.
Zhang, P. and C. Li, "Automatic text summarization based on sentences clustering and extraction," 2009 2nd IEEE International Conference on Computer Science and Information Technology, (2009), 167-170.
Published in an open-access journal.