A smart robot embedded with a GMM-UBM (Gaussian mixture model / universal background model) machine-learning scheme is presented in this article. The authors have designed a smart robot for farmers that is controlled using machine-learning concepts. Machine-learning techniques are applied to develop a smart robot that helps farmers recognize environmental conditions, e.g., weather, and protect rice or other plants from disease. The smart robot is implemented to detect and recognize the environmental conditions within a fixed area. It senses through vision devices, such as a camera, which act like a human eye to distinguish various types of targets. QR codes are deployed to simulate working conditions, allowing the robot to separate the conditions and act on each one precisely. In addition, the smart robot is embedded with the GMM-UBM algorithm to improve recognition accuracy through machine learning. The smart robot, which mainly combines AI (artificial intelligence) techniques, consists of the following equipment: 1) a movement control subsystem, 2) a sensor control subsystem, and 3) an analysis subsystem. The researchers determine the condition of the message options via QR code. Once a QR code tag has been read, its contents are processed into a text message and saved to a memory device. The data analysis subsystem then reads the text and directs the robot to move according to the specified conditions. The QR code data allows the smart robot to accurately collect many kinds of preferred data (e.g., climate data) at the specified locations on the farm.
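The GMM-UBM recognition step described above can be illustrated with a minimal scoring sketch: a target model and a universal background model each assign a log-likelihood to the observed feature frames, and the decision is based on their log-likelihood ratio. This assumes diagonal-covariance mixtures; the model parameters below are illustrative, not taken from the paper.

```python
import math

def gauss_logpdf(x, mean, var):
    # Log density of a one-dimensional Gaussian; diagonal-covariance
    # models apply this independently per feature dimension.
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def gmm_loglik(frames, weights, means, variances):
    # Average per-frame log-likelihood under a diagonal-covariance GMM.
    total = 0.0
    for frame in frames:
        comp = []
        for w, mu, var in zip(weights, means, variances):
            lp = math.log(w) + sum(
                gauss_logpdf(x, m, v) for x, m, v in zip(frame, mu, var)
            )
            comp.append(lp)
        peak = max(comp)  # log-sum-exp for numerical stability
        total += peak + math.log(sum(math.exp(c - peak) for c in comp))
    return total / len(frames)

def llr_score(frames, target, ubm):
    # GMM-UBM verification score: positive when the target model explains
    # the observations better than the universal background model.
    return gmm_loglik(frames, *target) - gmm_loglik(frames, *ubm)
```

In a full GMM-UBM pipeline the target model would be MAP-adapted from the UBM on enrollment data; here both models are simply given as (weights, means, variances) tuples to keep the scoring logic visible.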
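The QR-code workflow in the abstract (decode tag to text, store it, then let the analysis subsystem derive a movement command) can be sketched as a simple payload parser and dispatcher. The payload format and the task names here are hypothetical, chosen only to illustrate the flow; the paper does not specify the encoding.

```python
def parse_qr_payload(text):
    # Split a decoded QR string such as "zone=A;task=climate" into a
    # condition dictionary the analysis subsystem can act on.
    fields = {}
    for part in text.strip().split(";"):
        if "=" in part:
            key, value = part.split("=", 1)
            fields[key.strip().lower()] = value.strip()
    return fields

def plan_action(fields):
    # Map a parsed condition to a movement command.
    # The rules below are illustrative, not from the paper.
    task = fields.get("task", "idle")
    zone = fields.get("zone", "home")
    if task == "climate":
        return f"move to zone {zone} and sample climate sensors"
    if task == "disease":
        return f"move to zone {zone} and capture leaf images"
    return "hold position"
```

A real deployment would decode the tag with a vision library (e.g., a ZXing- or OpenCV-based reader) and persist the text to storage before this parsing step, as the abstract describes.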