KIPS Transactions on Software and Data Engineering, Vol.8, No.11, 2019, pp.441-448
Hyun-Young Lee (Department of Computer Engineering, Kookmin University), Seung-Shik Kang (School of Software, Kookmin University)
Previous research on automatic word spacing of Korean sentences corrected spacing errors by using n-gram based statistical techniques or a morphological analyzer to insert blanks at word boundaries. In this paper, we propose an end-to-end automatic word spacing method using a deep neural netw...
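The end-to-end formulation described in the abstract is commonly cast as per-syllable binary sequence labeling: each syllable of an unspaced input is tagged with whether a space should follow it, and a neural model is trained on pairs produced from correctly spaced text. The sketch below shows only that data encoding and its inverse, as an illustration of the task setup; it is not the authors' actual pipeline, and the function names are hypothetical.

```python
def encode(sentence):
    """Turn a correctly spaced sentence into (syllables, labels),
    where labels[i] == 1 iff a space follows syllable i."""
    syllables, labels = [], []
    for i, ch in enumerate(sentence):
        if ch == ' ':
            continue  # spaces become labels, not tokens
        syllables.append(ch)
        follows_space = i + 1 < len(sentence) and sentence[i + 1] == ' '
        labels.append(1 if follows_space else 0)
    return syllables, labels

def decode(syllables, labels):
    """Rebuild a spaced sentence from syllables and predicted labels."""
    out = []
    for syl, lab in zip(syllables, labels):
        out.append(syl)
        if lab:
            out.append(' ')
    return ''.join(out).rstrip()
```

A tagger trained on such pairs takes the unspaced syllable sequence and predicts the label sequence, from which `decode` restores the spacing.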