In human life, emotion plays an important role: it is conveyed not only through spoken language but also through non-verbal cues such as facial expressions, gestures, and biosignal data. Here, a biosignal denotes any electrical signal in a living being that can be continually measured and monitored, although the term may refer to both electrical and non-electrical signals. Among the best-known biosignals are the electroencephalogram (EEG), electrocardiogram (ECG), skin temperature (SKT), and galvanic skin response (GSR). Emotion recognition can be defined as a technique by which a machine collects and processes various biosignal data from a human to classify his or her emotional states. As one such study, a medical EEG system based on a personal computer (PC) was presented to record a dataset of brain waves for the analysis of human emotional states, but it has a serious limitation: it cannot measure brain waves in daily life. Recently, an emotion recognition method using a wearable ECG device that enables heart rate variability (HRV) measurement in daily life was presented, but its accuracy in classifying five emotional states is limited because only features extracted from a single biosignal (e.g., HRV) were utilized.
Therefore, we propose an Internet of Things (IoT)-aided emotion recognition system that uses wearable IoT devices to collect multimodal biosignal data seamlessly in daily life and a multiclass support vector machine (SVM) model with a non-linear radial basis function (RBF) kernel to classify three emotional states (disgust, fear, and sadness) more accurately from the multimodal biosignal data. As wearable IoT devices, we consider two commercial products, a headset and a wristband, to measure the multimodal biosignal data: the brain-wave (alpha, beta, gamma, delta, and theta) data from the headset and the SKT, GSR, and heart rate (HR) data from the wristband. The architecture of the proposed IoT-aided emotion recognition system is designed using the oneM2M specifications, the global standards initiative for Machine-to-Machine (M2M) communications and the IoT. A smartphone application modelled as an application dedicated node-application entity (ADN-AE) receives the multimodal biosignal data measured by the two wearable IoT devices and uploads them to a oneM2M-compliant IoT platform modelled as an infrastructure node-common service entity (IN-CSE) through application entity identifier (AE-ID) registration, container creation, and content-instance creation procedures. In exploratory data analysis (EDA) for each emotional state, the uploaded data are illustrated by data visualization, summary statistics, histograms, etc., and their correlation coefficients are investigated with a scatter matrix to understand their characteristics and relationships. Based on the EDA results, we create datasets and labels by playing three movie clips, each representing one of the three emotional states, to thirty participants wearing the headset and wristband, and we use them to design a multiclass SVM model, i.e., the extension of the SVM to the multiclass problem. We select the one-versus-rest approach for the multiclass SVM model and find its optimal parameters, such as the RBF kernel parameter γ = 10^3 and the regularization parameter C = 10^3, by empirical analysis. The implementation results of the proposed IoT-aided emotion recognition system verified that the multimodal biosignal data were continuously stored in the database of the oneM2M-compliant IoT platform. The experimental results of the proposed multiclass SVM model showed that it achieved an average accuracy of about 98% under cross-validation when feature selection was applied to the multimodal biosignal data.
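The content-instance creation procedure described above can be sketched as building a oneM2M HTTP request. This is a minimal illustration, not the thesis implementation: the CSE URL, AE-ID, and container name are hypothetical, and the headers follow the oneM2M HTTP binding (`Content-Type` with `ty=4` marking a contentInstance create).

```python
import json

# Hypothetical IN-CSE endpoint with a container named "biosignal"
# already created under the registered AE (names are illustrative).
CSE_URL = "http://127.0.0.1:8080/~/in-cse/in-name/biosignal"

def build_content_instance(sample: dict) -> tuple:
    """Build headers and body for a oneM2M <contentInstance> create request."""
    headers = {
        "X-M2M-Origin": "CAdnAe1",                # AE-ID obtained at registration
        "X-M2M-RI": "req-001",                    # request identifier
        "Content-Type": "application/json;ty=4",  # ty=4 -> contentInstance
    }
    # The measured sample is serialized into the "con" (content) attribute.
    body = json.dumps({"m2m:cin": {"con": json.dumps(sample)}})
    return headers, body

headers, body = build_content_instance({"skt": 32.1, "gsr": 0.84, "hr": 72})
# The actual upload would then be an HTTP POST of `body` with `headers`
# to CSE_URL, e.g. via requests.post(CSE_URL, headers=headers, data=body).
```

One such request per measurement interval is what keeps the platform database continuously filled with biosignal samples.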
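The EDA step (summary statistics, histograms, correlation coefficients) can be sketched with pandas. The values below are random placeholders standing in for the real biosignal recordings; the column names mirror a subset of the measured features.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for data retrieved from the IoT platform.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "alpha": rng.normal(10, 2, 200),    # brain-wave band power
    "beta":  rng.normal(18, 3, 200),
    "skt":   rng.normal(32, 0.5, 200),  # skin temperature (deg C)
    "gsr":   rng.normal(0.8, 0.1, 200), # galvanic skin response
    "hr":    rng.normal(72, 8, 200),    # heart rate (bpm)
})

summary = df.describe()  # per-feature summary statistics
corr = df.corr()         # linear cross-correlation matrix
# Histograms (df.hist) and a scatter matrix
# (pandas.plotting.scatter_matrix) would complete the visual EDA.
print(summary.loc["mean"])
```

Features whose correlations and distributions separate well across the three emotional states are the ones kept by the feature selection.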
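The one-versus-rest RBF-kernel SVM with grid-searched γ and C can be sketched with scikit-learn. The dataset here is synthetic (the real one comes from the thirty participants), so the scores are not comparable to the reported 98%.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the 8-feature multimodal biosignal dataset
# (5 brain-wave bands + SKT, GSR, HR) with 3 emotion labels.
X, y = make_classification(n_samples=300, n_features=8, n_informative=6,
                           n_classes=3, random_state=0)

# One-vs-rest multiclass SVM with an RBF kernel; gamma and C are tuned
# by cross-validated grid search over a logarithmic range.
model = OneVsRestClassifier(SVC(kernel="rbf"))
grid = GridSearchCV(
    model,
    {"estimator__gamma": [0.1, 1, 10, 100, 1000],
     "estimator__C":     [0.1, 1, 10, 100, 1000]},
    cv=5,
)
grid.fit(X, y)

scores = cross_val_score(grid.best_estimator_, X, y, cv=5)
print(grid.best_params_, scores.mean())
```

The same grid-search-plus-cross-validation loop, applied to the real data, is what yields the reported optimal pair γ = 10^3, C = 10^3.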
Keywords: Emotion Recognition, Internet of Things, Biosignal, oneM2M, Support Vector Machine
Thesis Information
Author
임재현
Degree-granting Institution
Namseoul University
Degree Type
Domestic Master's
Department
Major in Internet of Things and Data Science, Department of Information and Communication Engineering