말소리와 음성과학 = Phonetics and Speech Sciences, v.15 no.4, 2023, pp. 71-80
Park Jin (Department of Speech-Language Rehabilitation, Catholic Kwandong University), Lee Chang-gyun (Department of Business Administration, Catholic Kwandong University)
This study primarily aimed to develop an automated stuttering identification and classification method using artificial intelligence technology. In particular, this study aimed to develop a deep learning-based identification model utilizing the convolutional neural networks (CNNs) algorithm for Kore...
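The abstract describes a CNN-based identification model over speech features. As a minimal illustrative sketch only (the paper's actual architecture, feature shapes, and trained weights are not given here), the forward pass of a tiny 1-D CNN over MFCC-like frames with global max-pooling and a logistic head might look like this; all names, shapes, and the random dummy input are assumptions:

```python
import numpy as np

def conv1d(x, kernels, bias):
    """Valid 1-D convolution over time: x (T, F), kernels (K, W, F) -> (T-W+1, K), ReLU applied."""
    K, W, F = kernels.shape
    T = x.shape[0]
    out = np.zeros((T - W + 1, K))
    for t in range(T - W + 1):
        window = x[t:t + W]  # (W, F) slice of frames
        out[t] = np.tensordot(kernels, window, axes=([1, 2], [0, 1])) + bias
    return np.maximum(out, 0.0)  # ReLU nonlinearity

def classify(x, kernels, bias, w, b):
    """Conv + global max-pool over time + logistic head -> P(disfluent)."""
    h = conv1d(x, kernels, bias)
    pooled = h.max(axis=0)           # global max-pooling over time
    logit = pooled @ w + b
    return 1.0 / (1.0 + np.exp(-logit))

# Dummy input standing in for real MFCC features (100 frames x 13 coefficients).
rng = np.random.default_rng(0)
mfcc = rng.standard_normal((100, 13))
kernels = rng.standard_normal((8, 5, 13)) * 0.1  # 8 filters, width 5
bias = np.zeros(8)
w = rng.standard_normal(8) * 0.1
b = 0.0
p = classify(mfcc, kernels, bias, w, b)
print(0.0 < p < 1.0)  # the head outputs a valid probability
```

In practice such a model would be built in a deep-learning framework and trained on labeled fluent/disfluent speech segments; this sketch only shows the data flow the abstract's description implies.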