Personalized Speech Classification Scheme for the Smart Speaker Accessibility Improvement of Speech-Impaired People
With the spread of smart speakers based on speech recognition and deep learning technologies, not only non-disabled people but also blind or physically disabled people can easily control home appliances such as lights and TVs by voice through linked home network services, which has greatly improved their quality of life. However, speech-impaired people cannot use these services, because articulation disorders or dysarthria make their pronunciation inaccurate. In this paper, we propose a personalized speech classification technique that enables speech-impaired people to use a subset of the services provided by a smart speaker. Our goal is to raise the recognition rate and accuracy for sentences spoken by a speech-impaired user with only a small amount of data and a short training time, so that the smart speaker's services become practically usable. We fine-tune a ResNet18 model and additionally apply data augmentation and the one-cycle learning-rate optimization technique. Experiments show that after recording each of 30 smart speaker commands 10 times and training for less than 3 minutes, the speech classification accuracy reaches about 95.2%.
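The abstract also mentions data augmentation. One common choice for spectrogram inputs is SpecAugment-style time and frequency masking, sketched below; this is an illustrative assumption rather than the paper's confirmed augmentation pipeline, and the mask widths are arbitrary.

```python
import torch

def spec_augment(spec: torch.Tensor,
                 freq_mask: int = 12,
                 time_mask: int = 16) -> torch.Tensor:
    """Zero out one random frequency band and one random time span
    in a (freq, time) spectrogram, SpecAugment-style."""
    out = spec.clone()
    n_freq, n_time = out.shape
    f0 = torch.randint(0, n_freq - freq_mask, (1,)).item()
    out[f0:f0 + freq_mask, :] = 0.0          # frequency masking
    t0 = torch.randint(0, n_time - time_mask, (1,)).item()
    out[:, t0:t0 + time_mask] = 0.0          # time masking
    return out

spec = torch.rand(128, 100)  # dummy log-mel spectrogram
aug = spec_augment(spec)
```

Masking-based augmentation multiplies the effective size of a tiny per-user dataset (here, 10 recordings per command) without requiring any additional recording effort from the speech-impaired user.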