Context awareness has transformed the evolution of ubiquitous computing environments, and wireless sensor network technology has opened up new approaches for many applications. In particular, activity recognition is a key element in recognizing a specific user's context for providing application services, with a wide range of uses in the medical, entertainment, and military fields, and it contributes greatly to improving efficiency and accuracy as the radius of use expands. From smartphone sensor data, sample frames of 512 samples were obtained with 50% overlap between frames, and time-domain features were extracted from the data using the machine-learning tool WEKA Experimenter (University of Waikato, version 3.6.10), achieving 99.33% accuracy in activity recognition. In addition, comparative experiments were conducted between the C4.5 decision tree used in the WEKA Experimenter and other methods, namely BN, NB, SMO, and Logistic Regression.
Activity recognition is a key component in identifying the context of a user for providing services based on the application, such as medical, entertainment, and tactical scenarios. Instead of applying numerous sensor devices, as observed in many previous investigations, we propose the use of a smartphone with its built-in multimodal sensors as an unobtrusive sensing device for recognizing six physical daily activities. As an improvement over previous works, accelerometer, gyroscope, and magnetometer data are fused to recognize activities more reliably. The evaluation indicates that the IBK classifier using a window size of 2 s with 50% overlap yields the highest accuracy (up to 99.33%). To achieve this peak accuracy, simple time-domain and frequency-domain features were extracted from the smartphone's raw sensor data.
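The windowing scheme described above (2 s frames at 50 Hz with 50% overlap) and the time-domain features named in this record can be sketched as follows. This is a minimal illustration on synthetic signals; the helper names and data are assumptions, not the authors' code.

```python
import numpy as np

def sliding_windows(signal, window, step):
    """Segment a 1-D signal into fixed-size frames with a given hop."""
    return [signal[i:i + window]
            for i in range(0, len(signal) - window + 1, step)]

def time_domain_features(ax, ay, az):
    """Mean, standard deviation, energy, and pairwise axis correlation
    for one frame of triaxial accelerometer data."""
    feats = []
    for axis in (ax, ay, az):
        feats += [axis.mean(), axis.std(), np.sum(axis ** 2) / len(axis)]
    for a, b in ((ax, ay), (ay, az), (ax, az)):
        feats.append(np.corrcoef(a, b)[0, 1])
    return np.array(feats)

fs, win = 50, 100                          # 2 s at 50 Hz; hop = 50 gives 50% overlap
t = np.arange(0, 10, 1 / fs)               # 10 s of synthetic data (500 samples)
ax = np.sin(2 * np.pi * 1.5 * t)           # walking-like oscillation
ay = 0.5 * np.sin(2 * np.pi * 1.5 * t + 1.0)
az = 9.8 + 0.1 * np.random.default_rng(0).normal(size=len(t))  # gravity + noise
frames = sliding_windows(t, win, win // 2)
print(len(frames))                         # 9 overlapping 2 s frames in 10 s
vec = time_domain_features(ax[:win], ay[:win], az[:win])
print(len(vec))                            # 12 = 3 axes x 3 stats + 3 correlations
```

Each frame thus yields one fixed-length feature vector that a classifier can consume, which is why overlapping windows trade a small amount of redundancy for smoother coverage of activity transitions.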
Problem Definition
One of the aims of this study is to investigate how well the multimodal sensors in a smartphone can be used for movement recognition, similar to previous accelerometer-based approaches. Various classification models have been applied to the problem of human activity recognition.
The purpose of this experiment was to compare the performance of different classifiers using all the features for each activity. However, it is difficult to directly compare different classification algorithms due to the lack of universally accepted quantitative performance evaluation measures.
This research focuses on evaluating classifier accuracy and providing reliable results for selecting the best set of sensors and features to optimize the performance of smartphone activity-recognition applications.
This work contributes to the physical activity recognition domain using unobtrusive devices through multi-sensor information fusion. Three smartphone sensors, i.e., the accelerometer, gyroscope, and magnetometer, are fused for this purpose.
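Feature-level fusion of the three sensors can be illustrated as below. The per-sensor statistics and array shapes here are assumptions made for the sketch, not the paper's exact feature set.

```python
import numpy as np

def basic_stats(window):
    """Per-axis mean and standard deviation for one sensor frame (n x 3)."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

rng = np.random.default_rng(0)
acc = rng.normal(size=(100, 3))   # accelerometer frame: 2 s at 50 Hz
gyr = rng.normal(size=(100, 3))   # gyroscope frame
mag = rng.normal(size=(100, 3))   # magnetometer frame

# feature-level fusion: concatenate the per-sensor feature vectors
fused = np.concatenate([basic_stats(s) for s in (acc, gyr, mag)])
print(fused.shape)                # (18,) = 3 sensors x 3 axes x 2 statistics
```

Concatenating per-sensor features before classification is the simplest fusion strategy; the classifier then learns which sensor's features matter for which activity.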
Proposed Method
However, there is no universally accepted method of recognizing a particular set of activities, and all approaches have associated limitations and benefits. For this study, in order to identify which machine learning algorithm provided the most accurate activity detection, six different classification algorithms were applied to the data. These include: BN (Bayesian Network), NB (Naïve Bayes), J48 (C4.5 Decision Tree), SMO (Sequential Minimal Optimization), Logistic Regression, and IBK (Nearest Neighbor with K=1). To identify which machine learning model achieved the best accuracy, a 10-fold stratified cross validation with ten iterations was performed using the WEKA Experimenter. In practice, 10-fold cross validation is the most widely used methodology for estimating the accuracy of a classifier[18].
In a previous study, the extracted features included the mean, standard deviation, energy, and correlation. To perform the classification task, the authors analyzed the performance of base-level and meta-level classifiers on two subjects, and achieved high accuracy. The sampling frequency was 50 Hz and the window size was 5.
The magnetometer also helps determine orientation as well as absolute heading information. The purpose of this experiment was to evaluate the performance of different sensors and their combinations using simple time-domain and frequency-domain features, in order to assess which combination is best for activity recognition. Six different classification models were evaluated with four different combinations of sensors.
A custom smartphone application was designed for data collection and annotation, as shown in Fig. 1. This application collects data from an accelerometer, a gyroscope, and a magnetometer at 50 Hz and stores it on an SD card for offline analysis. Previous studies have claimed that a sampling frequency between 22 Hz and 100 Hz is suitable for classifying different physical activities[12].
To collect the activity dataset, four healthy subjects (2 males and 2 females) of different ages (between 25 and 30), heights, and weights participated in this study. The characteristics of the participants are shown in Table 1.
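The comparison itself was run in the Java-based WEKA Experimenter. A rough scikit-learn analogue of the protocol (10-fold stratified cross validation repeated ten times) might look like the sketch below; the classifier stand-ins are approximate counterparts of the WEKA schemes (no direct scikit-learn equivalent of BN is included), and the dataset is synthetic.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# synthetic stand-in for the six-activity feature dataset
X, y = make_classification(n_samples=600, n_features=12, n_informative=8,
                           n_classes=6, random_state=0)

classifiers = {
    "NB (GaussianNB)": GaussianNB(),
    "J48-like (CART tree)": DecisionTreeClassifier(random_state=0),
    "SMO-like (SVC)": SVC(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "IBK (1-NN)": KNeighborsClassifier(n_neighbors=1),
}

# 10-fold stratified cross validation, repeated 10 times, as in the paper
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=cv)   # 100 accuracy values
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Repeating the stratified 10-fold split ten times, as the paper does, reduces the variance of the accuracy estimate compared to a single cross-validation run.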
Dataset
Participants were graduate students of Mokpo National University, Department of Electronics Engineering.
Performance/Effects
The 10-fold stratified cross validation with 10 iterations is used to evaluate the performance of the different classifiers. After 10 iterations, the average classification accuracy is computed and reported as the overall accuracy.
By investigating each activity’s recognition rate, it can be inferred that the classification models distinguish between the device placements and user activities with an overall accuracy of greater than 95%.
It can be observed that in most cases high levels of accuracy were achieved. For the two most common activities, standing and sitting, BN, IBK, J48, and SMO achieved an accuracy of 100%.
In conclusion, the IBK classifier provided the highest overall recognition accuracy of 99.33%, classifying the activities sitting and standing with an accuracy of 100%.
The IBK classifier was shown to be the best classifier in this case, with an overall recognition accuracy of 99.33% using the full feature set and 99.4% when evaluated with only time-domain features.
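Per-activity recognition rates of the kind reported here are the per-class recalls of a confusion matrix. The sketch below uses hypothetical numbers and assumed activity names, not the paper's actual results, to show how such rates are computed.

```python
import numpy as np

# hypothetical confusion matrix for six activities (rows = truth, cols = predicted)
activities = ["walking", "running", "sitting", "standing", "upstairs", "downstairs"]
cm = np.array([
    [98,  1,   0,   0,  1,  0],
    [ 1, 99,   0,   0,  0,  0],
    [ 0,  0, 100,   0,  0,  0],
    [ 0,  0,   0, 100,  0,  0],
    [ 2,  0,   0,   0, 97,  1],
    [ 1,  0,   0,   0,  2, 97],
])

# per-activity recognition rate = diagonal / row sum (class recall)
per_class = cm.diagonal() / cm.sum(axis=1)
overall = cm.diagonal().sum() / cm.sum()
for name, rate in zip(activities, per_class):
    print(f"{name}: {rate:.1%}")
print(f"overall: {overall:.2%}")   # 98.50% for this hypothetical matrix
```

Inspecting per-class recalls rather than only the overall accuracy reveals which activity pairs (e.g., the two stair activities) a classifier tends to confuse.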
References (19)
L. G. Villanueva, S. Cagnoni, and L. Ascari, "Design of a wearable sensing system for human motion monitoring in physical rehabilitation," J. Sensors, vol. 13, no. 6, pp. 7735-7755, 2013.
M. V. Albert, S. Toledo, M. Shapiro, and K. Kording, "Using mobile phones for activity recognition in Parkinson's patients," Frontiers in Neurology, vol. 3, Nov. 2012.
L. Bao and S. S. Intille, "Activity recognition from user-annotated acceleration data," in Proc. Pervasive Computing, vol. 3001, pp. 1-17, Linz/Vienna, Austria, Apr. 2004.
D. Gordon, J.-H. Hanne, M. Berchtold, T. Miyaki, and M. Beigl, "Recognizing group activities using wearable sensors," in Proc. Mobile and Ubiquitous Systems: Computing, Networking, and Services, vol. 104, pp. 350- 361, 2012.
N. Ravi, N. Dandekar, P. Mysore, and M. L. Littman, "Activity recognition from accelerometer data," in AAAI, vol. 5, pp. 1541-1546, 2005.
J. R. Kwapisz, G. M. Weiss, and S. A. Moore, "Activity recognition using cell phone accelerometers," ACM SIGKDD Explorations Newsletter, vol. 12, pp. 74-82, 2011.
M. A. Awan, Z. Guangbin, and S.-D. Kim, "A dynamic approach to recognize activities in WSN," Int. J. Distrib. Sensor Netw., 2013.
O. D. Lara, A. J. Perez, M. A. Labrador, and J. D. Posada, "Centinela: A human activity recognition system based on acceleration and vital sign data," Pervasive and Mobile Computing, vol. 8, pp. 717-729, 2011.
Z. Zhao, Y. Chen, J. Liu, Z. Shen, and M. Liu, "Cross-people mobile-phone based activity recognition," in Proc. 22nd Int. Joint Conf. Artificial Intelligence, vol. 3, pp. 2545- 2550, 2011.
T. M. Do, S. W. Loke, and F. Liu, "HealthyLife: An activity recognition system with smartphone using logic-based stream reasoning," in Mobile and Ubiquitous Systems: Computing, Networking, and Services, pp. 188-199, 2013.
N. Kern, B. Schiele, and A. Schmidt, "Recognizing context for annotating a live life recording," Personal and Ubiquitous Comput., vol. 11, pp. 251-263, 2007.
C. V. Bouten, K. T. Koekkoek, M. Verduin, R. Kodde, and J. D. Janssen, "A triaxial accelerometer and portable data processing unit for the assessment of daily physical activity," IEEE Trans. Biomedical Eng., vol. 44, pp. 136-147, 1997.
W. Wu, S. Dasgupta, E. E. Ramirez, C. Peterson, and G. J. Norman, "Classification accuracies of physical activities using smartphone motion sensors," J. Medical Internet Research, vol. 14, 2012.
S. J. Preece, J. Y. Goulermas, L. P. Kenney, and D. Howard, "A comparison of feature extraction methods for the classification of dynamic activities from accelerometer data," IEEE Trans. Biomedical Eng., vol. 56, pp. 871-879, 2009.
I. Cleland, B. Kikhia, C. Nugent, A. Boytsov, J. Hallberg, K. Synnes, et al., "Optimal placement of accelerometers for the detection of everyday activities," J. Sensors, vol. 13, pp. 9183-9200, 2013.
R. Herren, A. Sparti, K. Aminian, and Y. Schutz, "The prediction of speed and incline in outdoor running in humans using accelerometry," Medicine and Science in Sports and Exercise, vol. 31, pp. 1053-1059, 1999.
Waikato environment for knowledge analysis (WEKA), Available: http://www.cs.waikato.ac.nz/ml/weka
I. H. Witten and E. Frank, Data Mining: Practical Machine Learning Tools and Techniques, 2nd Ed., Elsevier, 2005.
M. Shoaib, J. Scholten, and P. Havinga, "Towards physical activity recognition using smartphone sensors," in Proc. IEEE 10th Int. Conf. Ubiquitous Intelligence & Computing, Vietri sul Mare, Italy, pp. 80-87, 2013.