지능정보연구 = Journal of Intelligence and Information Systems, Vol.27, No.2, 2021, pp.17-32
Garam Kang (Department of Big Data Application, Graduate School, Kyung Hee University), Ohbyung Kwon (School of Management, Kyung Hee University)
Speaker recognition is generally divided into speaker identification and speaker verification. Speaker recognition plays an important role in automatic voice systems, and its importance has become more prominent with the recent development of portable devices,...
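To illustrate the identification/verification distinction the abstract draws (this is a generic sketch, not code from the paper): verification compares a test utterance's embedding against one enrolled speaker and accepts or rejects, while identification picks the closest match among all enrolled speakers. The embeddings, names, and threshold below are hypothetical stand-ins for fixed-length speaker representations such as the x-vectors cited in the references.

```python
import math

def cosine(a, b):
    # cosine similarity between two fixed-length speaker embeddings
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def verify(test_emb, enrolled_emb, threshold=0.7):
    # speaker verification: accept or reject a single claimed identity
    return cosine(test_emb, enrolled_emb) >= threshold

def identify(test_emb, enrolled):
    # speaker identification: choose the closest of many enrolled speakers
    # `enrolled` maps speaker name -> embedding (hypothetical data)
    return max(enrolled, key=lambda name: cosine(test_emb, enrolled[name]))
```

In practice the embeddings would come from a trained front-end (e.g., an x-vector DNN) and the scoring would typically use PLDA rather than raw cosine similarity; this sketch only shows how the two task formulations differ.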
Ahn, J., "The use of new forms of honorific final ending in Modern Korean". The Linguistic Association of Korea Journal, Vol.25, No.3 (2017), 173-192.
Ai, H., W. Xia., and Q. Zhang., "Speaker Recognition Based on Lightweight Neural Network for Smart Home Solutions". International Symposium on Cyberspace Safety and Security, No.12(2019), 421-431.
Alluri, K., V. Raju., S. Gangashetty., and A. K. Vuppala., "Analysis of Source and System Features for Speaker Recognition in Emotional Conditions". IEEE Region 10 Conference, (2016), 2847-2850.
Bhattacharya, G., M. Alam., and P. Kenny., "Deep speaker recognition: Modular or monolithic?". INTERSPEECH, No.9(2019), 1143-1147.
Bu, S., and S. B. Cho, "Speaker Identification Method based on Convolutional Neural Network with STFT Sound-Map". KIISE Transactions on Computing Practices, Vol.24, No.6(2018), 289-294.
Chae, S., "Noise Robust Text-Dependent Speaker Verification Using Teacher-Student Learning Framework". Department of Electrical Engineering and Computer Science, College of Engineering, Seoul National University, (2019).
Chae, S., "Theories and Methods of Sociolinguistic Research". Saegugeosaenghwal, Vol.14, No.4(2004), 83-103.
Chakroun, R., and M. Frikha., "A New Text Independent Speaker Recognition System with Short Utterances Using SVM". European, Mediterranean, and Middle Eastern Conference on Information Systems, No.11(2020), 566-574.
Chen, G., S. Chen., L. Fan., X. Du., Z. Zhao., F. Song., and Y. Liu., "Who Is Real Bob? Adversarial Attacks on Speaker Recognition Systems". arXiv:1911.01840, No.4(2020).
Choi, J., "Classification of Continuous Speech Speakers by Multilayer Perceptron Network". Proceedings of the Korean Institute of Information and Communication Sciences Conference, No.5(2017), 682-683.
Choi, J., "Speech-dependent Speaker Identification Using Mel Frequency Cepstrum Coefficients for Continuous Speech Recognition". Journal of KIIT, Vol.14, No.10(2016), 67-72.
Dehak, N., P. Kenny., R. Dehak., P. Dumouchel., and P. Ouellet., "Front-end Factor Analysis for Speaker Verification". IEEE Transactions on Audio, Speech, and Language Processing, Vol.19, No.4(2011), 788-798.
Devi, K., N. Singh., and K. Thongam., "Automatic Speaker Recognition from Speech Signals Using Self Organizing Feature Map and Hybrid Neural Network". Microprocessors and Microsystems, Vol.79(2020), 103264.
Garcia-Romero, D., D. Snyder., G. Sell., A. McCree., D. Povey., and S. Khudanpur., "X-vector DNN Refinement with Full-Length Recordings for Speaker Recognition". INTERSPEECH, No.9(2019), 1493-1496.
Ha, B., and M. Huh., "The Effect of Pitch, Duration, and Intensity on a Perception of Speech". Journal of Speech-Language & Hearing Disorders, Vol.27, No.3(2018), 45-54.
Han, G., Study on the Endings of Modern Hangeul, Yuk Rack, (2004).
Han, S., "A Study on the Use of Final Endings in Korean Language Conversation". The Journal of Humanities and Social Science, Vol.11, No.4(2020), 2315-2327.
Huanjun, B., X. Mingxing., and F. Thomas., "Emotion Attribute Projection for Speaker Recognition on Emotional Speech". EUROSPEECH, No.8(2007), 758-761.
Ioffe, S., "Probabilistic Linear Discriminant Analysis". Computer Vision - ECCV, No.5(2006), 531-542.
Jang, K., "A Study on Stylistic Features of Ending-Components in Korean". Youkrack, (2010).
Jo. M., "Pragmatic Strategy and Intonation of "-geodeun", the Final Endings: Focusing on the age variation of Those in 10s, 20s, 30s". Korean Linguistics, Vol.65, No.11(2014), 237-262.
Jung. H., S. Yoon., and N. Park., "Speaker Recognition Using Convolutional Siamese Neural Networks". The transactions of The Korean Institute of Electrical Engineers. Vol.69, No.1(2020), 164-169.
Kang, B., "Lexical Differences between Utterances of Men and Women : A Corpus Based Classification Study". Korean Linguistics, Vol.58, No.2(2013), 1-30.
Kang, H., and M. H. Kim, "A Multivariate Analytical Study of Variation Patterns of Honorific Final Endings in KakaoTalk Dialogue". The Sociolinguistic Journal of Korea, Vol.26, No.1(2018), 1-30.
Kim, J., "A study of awareness and generation of Korean language leaners on attitude of speaker in terms of boundary tone". EWHA WOMANS UNIVERSITY, (2018).
Kim, J., M. S. Yoon, S. J. Kim, M. S. Chang, and J. E. Cha, "Utterance Types in Typically Developing Preschoolers". Korean Journal of Communication Disorders, Vol.17, No.3(2012), 488-498.
Kim, S., "The Function and Meaning of the Final Ending -ni". Urimal Studies, Vol.15(2004), 53-78.
Kim, S., and J. Kim, "Development of Final Ending of Three to Four-Year-Old Children". Communication Sciences & Disorders, Vol.9(2004), 22-35.
Kwon, O., J. Kim., H. Y. Cho., K. A. Hong., J. M. Han., Y. W. Kim., and S. Choi., "KHU-SentiwordNet: Developing a Korean SentiwordNet Combining Empty Morpheme". Proceedings of the 2019 Conference on Korea IT Service, (2019), 194-197.
Mohdiwale, S., and T. Sahu., "Nearest Neighbor Classification Approach for Bilingual Speaker and Gender Recognition". Advances in Biometrics, (2019), 249-266.
Pack, J., "Study on the Recognition of Honorification among Korean Native Speakers -focused on Koreans in their 20s, 30s-". Hanminjok Emunhak, Vol.73, No.8(2016), 119-154.
Povey, D., X. Zhang., and S. Khudanpur., "Parallel training of deep neural networks with natural gradient and parameter averaging". ICLR, No.11(2014).
Ramachandran, R., K. Farrell., R. Ramachandran., and R. Mammone., "Speaker recognition-general classifier approaches and data fusion methods". Pattern recognition, Vol.35, No.12 (2002), 2801-2821.
Seo, Y., and H. Kim., "Recent Speaker Recognition Technology Trend". The Magazine of the IEIE, Vol.41, No.3(2014), 40-49.
Sing, P., M. Embi., and H. Hashim., "Ask the Assistant: Using Google Assistant in classroom reading comprehension activities". International Journal of New Technology and Research, Vol.5, No.7(2019), 39-43.
Snyder, D., D. Garcia-Romero., D. Povey., and S. Khudanpur., "Deep Neural Network Embeddings for Text-Independent Speaker Verification". Interspeech, No.8(2017), 999-1003.
Snyder, D., D. Garcia-Romero., G. Sell., A. McCree., D. Povey., and S. Khudanpur., "Speaker Recognition for Multi-speaker Conversations Using X-vectors". IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), No.5(2019), 5796-5800.
Snyder, D., D. Garcia-Romero., G. Sell., D. Povey., and S. Khudanpur., "X-vectors: Robust DNN Embeddings for Speaker Recognition". IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), No.4(2018), 5329-5333.
So, S., "Development of speaker classification model using text-independent utterance based on deep neural network", HANYANG UNIVERSITY, (2019).
Song, J., "Semantic Functions of the Korean Sentence-Terminal Suffix -ney", Journal of Korean Linguistics, Vol.76, No.12(2015), 123-159.
Wang, N., P. Ching., N. Zheng., and T. Lee., "Robust speaker recognition using denoised vocal source and vocal tract features". IEEE transactions on audio, speech, and language processing, Vol.19, No.1(2011), 196-205.
Yun, H., and Z. Jin., "Exploring listeners' perception on evidential grammatical markers: Comparison between Seoul and Yanbian dialect users". Language and Information, Vol.24, No.1(2020), 29-45.