KIISE Transactions on Computing Practices, Vol. 23, No. 6, pp. 386-391, 2017.
Kyung-Wha Park (Interdisciplinary Program in Neuroscience, Seoul National University), Byoung-Hee Kim (Department of Computer Science and Engineering, Seoul National University), Eun-Sol Kim (Department of Computer Science and Engineering, Seoul National University), Hwiyeol Jo (Department of Computer Science and Engineering, Seoul National University), Byoung-Tak Zhang (Department of Computer Science and Engineering, Seoul National University)
In this paper, we propose a system that automatically assigns experience-based emotion tags to wearable sensor data collected in daily life. Four types of emotion tags are defined, covering both the user's own emotions and the information the user sees and hears. Based on the colle...
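The abstract describes a pipeline that maps features extracted from wearable sensor data to one of four emotion tags. As an illustrative sketch only (the tag names, the 2-D features, and the nearest-centroid classifier below are assumptions for demonstration, not the paper's actual method), such a tagger could look like:

```python
import math

# Hypothetical tag set: the paper defines four experience-based emotion
# tags, but this record does not give their exact names.
TAGS = ["positive-self", "negative-self", "positive-context", "negative-context"]

def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_tag(x, centroids):
    """Assign the tag whose centroid is closest (Euclidean) to x."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return min(centroids, key=lambda tag: dist(x, centroids[tag]))

# Toy labeled examples: 2-D features (e.g., a heart-rate level and an
# audio-energy level), purely illustrative values.
train = {
    "positive-self":    [[0.9, 0.8], [0.8, 0.9]],
    "negative-self":    [[0.1, 0.2], [0.2, 0.1]],
    "positive-context": [[0.9, 0.1], [0.8, 0.2]],
    "negative-context": [[0.1, 0.9], [0.2, 0.8]],
}
centroids = {tag: centroid(vs) for tag, vs in train.items()}

print(nearest_tag([0.85, 0.85], centroids))  # nearest centroid: positive-self
```

In practice a system like the one described would replace the toy features with real sensor-derived features and the nearest-centroid rule with a learned classifier; this sketch only shows the shape of the tagging step.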