The Journal of Korea Robotics Society, vol. 17, no. 3, 2022, pp. 264-272
전해인 (Department of Artificial Intelligence, Kyungpook National University) , 강정훈 (Department of Artificial Intelligence, Kyungpook National University) , 강보영 (Department of Robot and Smart System Engineering, Kyungpook National University)
Human-robot cooperative tasks are increasingly required in our daily life with the development of robotics and artificial intelligence technology. Interactive reinforcement learning strategies suggest that robots learn tasks by receiving feedback from an experienced human trainer during a training process...
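The interactive reinforcement learning setting described above can be illustrated with a minimal sketch: a tabular Q-learner whose update blends the environment reward with scalar feedback from a (here simulated) human trainer, in the spirit of the feedback-shaping approaches in the reference list. The 1-D chain task, the `BETA` weight, and the `human_feedback` function are illustrative assumptions, not the method of the paper itself.

```python
import random

# Toy interactive RL sketch: Q-learning on a 1-D chain where the agent
# must walk right from state 0 to the goal state. The learning signal
# blends the environment reward with simulated human trainer feedback.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]            # move left / move right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
BETA = 0.5                    # weight of trainer feedback vs. env reward

def human_feedback(state, action):
    """Simulated trainer: approves actions that head toward the goal."""
    return 1.0 if action == +1 else -1.0

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
random.seed(0)

for episode in range(200):
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        env_r = 1.0 if s2 == GOAL else 0.0
        # blend environment reward with human feedback (reward shaping)
        r = env_r + BETA * human_feedback(s, a)
        target = r + GAMMA * max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

# The greedy policy learned with shaped rewards moves right everywhere.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
print(policy)
```

The trainer's feedback densifies an otherwise sparse goal reward, which is the core argument for human-in-the-loop training speed-ups in the cited work.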
M. Tonkin, J. Vitale, S. Herse, M.-A. Williams, W. Judge, and X. Wang, "Design Methodology for the UX of HRI: A Field Study of a Commercial Social Robot at an Airport," 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago IL, USA, pp. 407-415, 2018, DOI: 10.1145/3171221.3171270.
T. Morita, N. Kashiwagi, A. Yorozu, H. Suzuki, and T. Yamaguchi, "Evaluation of a multi-robot cafe based on service quality dimensions," The Review of Socionetwork Strategies, vol. 14, no. 1, pp. 55-76, 2020, DOI: 10.1007/s12626-019-00049-x.
J. Kober, J. A. Bagnell, and J. Peters, "Reinforcement learning in robotics: A survey," The International Journal of Robotics Research, vol. 32, no. 11, 2013, DOI: 10.1177/0278364913495721.
B. Price and C. Boutilier, "Accelerating reinforcement learning through implicit imitation," Journal of Artificial Intelligence Research, vol. 19, pp. 569-629, 2003, DOI: 10.1613/jair.898.
T. Brys, A. Harutyunyan, H. B. Suay, S. Chernova, M. E. Taylor, and A. Nowe, "Reinforcement learning from demonstration through shaping," 24th International Conference on Artificial Intelligence, pp. 3352-3358, 2015, [Online], https://dl.acm.org/doi/abs/10.5555/2832581.2832716.
M. Ullerstam and M. Mizukawa, "Teaching robots behavior patterns by using reinforcement learning: how to raise pet robots with a remote control," SICE 2004 Annual Conference, Sapporo, Japan, 2004, [Online], https://ieeexplore.ieee.org/document/1491384.
S. Griffith, K. Subramanian, J. Scholz, C. L. Isbell, and A. L. Thomaz, "Policy shaping: Integrating human feedback with reinforcement learning," Advances in neural information processing systems 26 (NIPS 2013), 2013, [Online], https://proceedings.neurips.cc/paper/2013/hash/e034fb6b66aacc1d48f445ddfb08da98-Abstract.html.
V. Veeriah, P. M. Pilarski, and R. S. Sutton, "Face valuing: Training user interfaces with facial expressions and reinforcement learning," arXiv:1606.02807, 2016, [Online], http://arxiv.org/abs/1606.02807.
R. Arakawa, S. Kobayashi, Y. Unno, Y. Tsuboi, and S. Maeda, "DQN-TAMER: Human-in-the-loop reinforcement learning with intractable feedback," arXiv:1810.11748, 2018, [Online], https://arxiv.org/abs/1810.11748.
NAO the humanoid and programmable robot | SoftBank Robotics, [Online], https://www.softbankrobotics.com/emea/en/nao, Accessed: Jun. 7, 2022.
H.-S. Lee and B.-Y. Kang, "Continuous emotion estimation of facial expressions on JAFFE and CK+ datasets for human-robot interaction," Intelligent Service Robotics, vol. 13, no. 1, 2020, DOI: 10.1007/s11370-019-00301-x.
P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews, "The extended cohn-kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression," 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, San Francisco, CA, USA, 2010, DOI: 10.1109/CVPRW.2010.5543262.
J. A. Russell, "A circumplex model of affect," Journal of Personality and Social Psychology, vol. 39, no. 6, pp. 1161-1178, 1980, DOI: 10.1037/h0077714.
H. Zou and L. Xue, "A selective overview of sparse principal component analysis," Proceedings of the IEEE, vol. 106, no. 8, pp. 1311-1320, Aug., 2018, DOI: 10.1109/JPROC.2018.2846588.
R. Ewing and K. Park, "Linear regression," Basic Quantitative Research Methods for Urban Planners, Routledge, pp. 220-269, 2020, [Online], https://books.google.co.kr/books?id=Gzz3DwAAQBAJ.
V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis, "Human-level control through deep reinforcement learning," Nature, vol. 518, no. 7540, pp. 529-533, 2015, DOI: 10.1038/nature14236.
T.-T. Wong, "Performance evaluation of classification algorithms by k-fold and leave-one-out cross validation," Pattern Recognition, vol. 48, no. 9, pp. 2839-2846, May, 2015, DOI: 10.1016/j.patcog.2015.03.009.
J. Duchi, E. Hazan, and Y. Singer, "Adaptive Subgradient Methods for Online Learning and Stochastic Optimization," Journal of Machine Learning Research, vol. 12, pp. 2121-2159, 2011, [Online], https://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf.
E. M. Dogo, O. J. Afolabi, N. I. Nwulu, B. Twala, and C. O. Aigbavboa, "A Comparative Analysis of Gradient Descent-Based Optimization Algorithms on Convolutional Neural Networks," 2018 International Conference on Computational Techniques, Electronics and Mechanical Systems (CTEMS), Belgaum, India, 2018, DOI: 10.1109/CTEMS.2018.8769211.
I. Kandel, M. Castelli, and A. Popovic, "Comparative study of first order optimizers for image classification using convolutional neural networks on histopathology images," Journal of Imaging, vol. 6, no. 9, 2020, DOI: 10.3390/jimaging6090092.