
Abstract

In this paper, we apply several pattern recognition algorithms to an emotion recognition system with speech signals and compare the results. Firstly, we need emotional speech databases. Also, speech features for emotion recognition are determined in the database analysis step. Secondly, recognition algo...


Proposed Method

  • He fed the results from the sub neural networks into decision logic. For this research, two different cases were studied: the closed case using training data and the open case using new speech sounds. The closed case resulted in an average recognition rate of 70 percent, while the open case showed an average of only 30 percent.
  • His study was limited to conscious emotional expression, which was far easier to recognize. He extracted features of 8 emotions - joy, tease, fear, sadness, disgust, anger, surprise, and neutrality - referring to them as conscious emotions. Those features were categorized into prosodic and phonetic features.
  • Many of the studies used pitch as a common feature, focusing on reflecting its patterns in the feature extraction stage [6, 7]. In this paper, we apply several pattern recognition algorithms to an emotion recognition system with speech signals and compare the results. Firstly, we need emotional speech databases.
  • In this study, we used an ANN to classify emotions. ANNs provide a general, practical method for learning real-valued, discrete-valued, and vector-valued functions from examples (a minimal classification sketch follows this list).
  • 100 test data items were randomly selected. The experiment was repeated 3 times (the PCA algorithm was implemented in MATLAB). So, the result is similar to that of the LBG.
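
The ANN classification and PCA comparison outlined in the list above could be sketched roughly as follows with scikit-learn; the feature dimensionality, hidden-layer size, emotion label set, and random data are illustrative assumptions rather than the configuration reported in the paper.

```python
# Minimal sketch of ANN-based emotion classification with an optional PCA step,
# assuming each utterance has already been reduced to a fixed-length feature
# vector (pitch statistics, intensity, etc.). Shapes, labels, and network size
# are placeholders, not the paper's actual setup.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

EMOTIONS = ["neutrality", "joy", "sadness", "anger"]  # hypothetical 4-emotion set

# Placeholder data: 400 utterances x 12 features (feature count is assumed).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 12))
y = rng.integers(0, len(EMOTIONS), size=400)

# Hold out 100 randomly selected test items, as in the experiment described above.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=100, random_state=0)

# Optional PCA step, analogous to the PCA comparison mentioned in the list.
pca = PCA(n_components=6).fit(X_train)
X_train_p, X_test_p = pca.transform(X_train), pca.transform(X_test)

# A small feed-forward ANN (multi-layer perceptron) classifier.
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(X_train_p, y_train)

print("test accuracy:", ann.score(X_test_p, y_test))
```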

Data

  • 10 male graduate students (ages 24-31) were asked to produce 400 speech samples (10 subjects x 4 emotions x 10 sentences) with 4 emotions. They are ordinary Koreans but from different provinces.
  • Instead, a representative of each of the 4 emotions was selected by the LBG algorithm (that is, the center of each cluster; see the sketch after this list). 100 test data items were randomly selected. The experiment was repeated 3 times (the PCA algorithm was implemented in MATLAB).
  • Therefore a mean age was not calculated. The recording format was 11 kHz, 16-bit, mono. Since loudness depends on the gap between the subject and the microphone, the gap was fixed at 10 cm.
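
The LBG-based selection of representatives mentioned above could look roughly like the following sketch: a small Linde-Buzo-Gray codebook routine picks a cluster center per emotion, and a test vector is assigned to the emotion whose representative is nearest. The feature vectors, dimensionality, and class separation are synthetic placeholders, not the paper's data.

```python
# Sketch of LBG-style representative selection and nearest-representative
# classification. The corpus below is synthetic and purely illustrative.
import numpy as np

def lbg_codebook(vectors, size=1, eps=1e-3, n_iter=20):
    """Linde-Buzo-Gray codebook: start from the global mean, split each
    centroid with a small perturbation, then refine with k-means passes."""
    codebook = vectors.mean(axis=0, keepdims=True)
    while codebook.shape[0] < size:
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(n_iter):
            # assign every vector to its nearest centroid
            dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
            nearest = dists.argmin(axis=1)
            # re-estimate each centroid; keep the old one if its cell is empty
            for k in range(codebook.shape[0]):
                members = vectors[nearest == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook

rng = np.random.default_rng(1)
# Placeholder corpus: 4 emotions, 100 utterances each, 12-dim feature vectors.
features = {e: rng.normal(loc=3.0 * e, size=(100, 12)) for e in range(4)}

# One representative per emotion (codebook of size 1, i.e. the cluster center).
reps = {e: lbg_codebook(v, size=1)[0] for e, v in features.items()}

def classify(x):
    """Assign x to the emotion whose representative vector is nearest."""
    return min(reps, key=lambda e: np.linalg.norm(x - reps[e]))

print(classify(features[2][0]))  # prints 2 for this well-separated synthetic data
```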

Theory/Model

  • Also, the variance was obtained from the same data. Loudness (intensity) was obtained using a magnitude estimation method. Section Number, IR, and CR were obtained using the methods of our former paper [8] (see the sketch below).
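
The pitch and loudness features above are not specified at the implementation level in this summary; the following is a rough sketch of extracting pitch mean/variance and a frame-level intensity proxy with librosa. The library choice, the file name utterance.wav, and the RMS-energy loudness proxy are assumptions (the paper uses a magnitude estimation method for loudness, and the Section Number, IR, and CR features come from [8] and are not reproduced here).

```python
# Sketch of extracting pitch mean/variance and an intensity contour from a
# speech file. librosa and the file name are assumptions for illustration.
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=11025)   # 11 kHz mono, as in the corpus

# Fundamental frequency (pitch) track via the pYIN estimator.
f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                             fmax=librosa.note_to_hz("C6"), sr=sr)
pitch = f0[voiced & ~np.isnan(f0)]
pitch_mean, pitch_var = pitch.mean(), pitch.var()

# Frame-level intensity (RMS energy) as a simple loudness proxy; the paper
# itself uses a magnitude estimation method instead.
rms = librosa.feature.rms(y=y)[0]
intensity_mean, intensity_var = rms.mean(), rms.var()

print(pitch_mean, pitch_var, intensity_mean, intensity_var)
```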

References (9)

  1. F. Nasoz, K. Alvarez, C. L. Lisetti, and N. Finkelstein, 'Emotion Recognition from Physiological Signals Using Wireless Sensors for Presence Technologies,' Springer-Verlag, London, 2003

  2. V. Hozjan and Z. Kacic, 'Context-Independent Multilingual Emotion Recognition from Speech Signals,' International Journal of Speech Technology, pp. 311-320, 2003

  3. L. S. Chen, H. Tao, T. S. Huang, T. Miyasato, and R. Nakatsu, 'Emotion Recognition from Audiovisual Information,' IEEE Second Workshop on Multimedia Signal Processing, 1998

  4. J. Nicholson, K. Takahashi, and R. Nakatsu, 'Emotion Recognition in Speech Using Neural Networks,' Proc. of ICONIP, Vol. 2, 1996

  5. A. Batliner, K. Fischer, R. Huber, J. Spilker, and E. Noth, 'Desperately Seeking Emotions: Actors, Wizards and Human Beings,' Proceedings of the ISCA Workshop on Speech and Emotion

  6. T. Moriyama and S. Ozawa, 'Emotion Recognition and Synthesis System on Speech,' IEEE International Conference on Multimedia Computing and Systems, Vol. 1, 1999

  7. D. Galanis, V. Darsinos, and G. Kokkinakis, 'Investigating Emotional Speech Parameters for Speech Synthesis,' Proc. of ICECS, Vol. 2, pp. 3-16, Oct. 1996

  8. C. H. Park, K. S. Byun, and K. B. Sim, 'The Implementation of the Emotion Recognition from Speech and Facial Expression System,' Proc. of ICNC, Part 2, pp. 85-88, Aug. 2005

  9. T. M. Mitchell, Machine Learning, McGraw-Hill International Edition, Singapore, 1997
