The Journal of Korea Robotics Society, vol. 17, no. 1, 2022, pp. 76-85
김수영 (Dept. of Mechanical Engineering, Ulsan National Institute of Science and Technology), 손흥선 (Dept. of Mechanical Engineering, Ulsan National Institute of Science and Technology)
One of the most fundamental challenges in designing controllers for dynamic systems is the adjustment of controller parameters. Usually the system model is used to obtain an initial controller, but the controller parameters must ultimately be tuned manually on the real system to achieve the best ...
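The tuning problem the abstract describes can be illustrated with a minimal sketch: an initial controller derived from a nominal model is refined by adjusting its gains against the (here, simulated) real system. The plant, the PI controller, the specific gain values, and the tracking-error cost are all invented for this illustration and are not taken from the paper.

```python
def simulate(kp, ki, a=1.0, b=0.5, dt=0.01, steps=500, setpoint=1.0):
    """Run a PI controller on a toy first-order plant and return the
    accumulated absolute tracking error (lower is better)."""
    x, integral, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        error = setpoint - x
        integral += error * dt
        u = kp * error + ki * integral   # PI control law
        x += dt * (-a * x + b * u)       # plant dynamics x' = -a*x + b*u
        cost += abs(error) * dt
    return cost

# Gains suggested by the nominal model vs. gains re-tuned on the "real" system;
# re-tuning trades manual effort for a smaller tracking cost.
initial_cost = simulate(kp=1.0, ki=0.5)
tuned_cost = simulate(kp=8.0, ki=4.0)
print(initial_cost, tuned_cost)
```

Safe-tuning methods such as those surveyed in the references below automate exactly this gain-adjustment loop while constraining the exploration so the real system is never driven into unsafe states.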
F. Berkenkamp, A. P. Schoellig, and A. Krause, "Safe and automatic controller tuning with Gaussian processes," Workshop on Machine Learning in Planning and Control of Robot Motion, 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015, [Online], https://www.dynsyslab.org/wp-content/papercite-data/pdf/berkenkamp-icra16.pdf.
N. O. Lambert, D. S. Drew, J. Yaconelli, R. Calandra, S. Levine, and K. S. J. Pister, "Low level control of a quadrotor with deep model-based reinforcement learning," arXiv:1901.03737v2 [cs.RO], 2019, [Online], https://arxiv.org/pdf/1901.03737.pdf.
F. Berkenkamp, M. Turchetta, A. P. Schoellig, and A. Krause, "Safe model-based reinforcement learning with stability guarantees," IEEE Transactions on Automatic Control, vol. 64, no. 7, Jul., 2017, DOI: 10.1109/TAC.2018.2876389.
A. Y. Zomaya, "Reinforcement Learning to Adaptive Control of Nonlinear Systems," IEEE Transactions on Systems, Man, and Cybernetics, vol. 24, no. 2, Feb., 1994, DOI: 10.1109/21.281435.
X.-S. Wang, Y.-H. Cheng, and W. Sun, "A proposal of adaptive PID controller based on reinforcement learning," Journal of China Univ. Mining and Technology, vol. 17, no. 1, 2007, [Online], http://www.paper.edu.cn/scholar/showpdf/MUT2MNzINTD0cx2h.
M. N. Howell and M. C. Best, "On-line PID tuning for engine idle-speed control using continuous action reinforcement learning automata," Control Engineering Practice, vol. 8, no. 2, Feb., 2000, DOI: 10.1016/S0967-0661(99)00141-0.
Z. S. Jin, H. C. Li, and H. M. Gao, "An intelligent weld control strategy based on reinforcement learning approach," The International Journal of Advanced Manufacturing Technology, Feb., 2019, DOI: 10.1007/s00170-018-2864-2.
J. Achiam, D. Held, A. Tamar, and P. Abbeel, "Constrained policy optimization," arXiv:1705.10528v1 [cs.LG], 2017, [Online], https://arxiv.org/pdf/1705.10528.pdf.
S. Gangapurwala, A. Mitchell, and I. Havoutis, "Guided constrained policy optimization for dynamic quadrupedal robot locomotion," IEEE Robotics and Automation Letters, vol. 5, no. 2, Apr., 2020, DOI: 10.1109/LRA.2020.2979656.
Y. Sui, A. Gotovos, J. W. Burdick, and A. Krause, "Safe exploration for optimization with Gaussian processes," 32nd International Conference on Machine Learning, 2015, [Online], http://proceedings.mlr.press/v37/sui15.pdf.
A. Aswani, H. Gonzalez, S. S. Sastry, and C. Tomlin, "Provably safe and robust learning-based model predictive control," Automatica, vol. 49, no. 5, May, 2013, DOI: 10.1016/j.automatica.2013.02.003.
T. M. Moldovan and P. Abbeel, "Safe exploration in Markov decision processes," arXiv:1205.4810v3 [cs.LG], 2012, [Online], https://arxiv.org/pdf/1205.4810.pdf.
A. M. Lyapunov, The General Problem of the Stability of Motion, Taylor and Francis Ltd, London, UK, 1992, [Online], https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.910.9566&rep=rep1&type=pdf.
A. D. Ames, X. Xu, J. W. Grizzle, and P. Tabuada, "Control barrier function based quadratic programs for safety critical systems," IEEE Transactions on Automatic Control, vol. 62, no. 8, Aug., 2017, DOI: 10.1109/TAC.2016.2638961.
K. Galloway, K. Sreenath, A. D. Ames, and J. W. Grizzle, "Torque saturation in bipedal robotic walking through control Lyapunov function-based quadratic programs," IEEE Access, vol. 3, pp. 323-332, 2015, DOI: 10.1109/ACCESS.2015.2419630.
A. Vahidi and A. Eskandarian, "Research advances in intelligent collision avoidance and adaptive cruise control," IEEE Transactions on Intelligent Transportation Systems, vol. 4, no. 3, pp. 143-153, Sep., 2003, DOI: 10.1109/TITS.2003.821292.
S. Li, K. Li, R. Rajamani, and J. Wang, "Model predictive multiobjective vehicular adaptive cruise control," IEEE Transactions on Control Systems Technology, vol. 19, no. 3, pp. 556-566, 2011, DOI: 10.1109/TCST.2010.2049203.
G. J. L. Naus, J. Ploeg, M. J. G. Van de Molengraft, W. P. M. H. Heemels, and M. Steinbuch, "Design and implementation of parameterized adaptive cruise control: An explicit model predictive control approach," Control Engineering Practice, vol. 18, no. 8, pp. 882-892, Aug., 2010, DOI: 10.1016/j.conengprac.2010.03.012.
P. A. Ioannou and C. C. Chien, "Autonomous intelligent cruise control," IEEE Transactions on Vehicular Technology, vol. 42, no. 4, pp. 657-672, Nov., 1993, DOI: 10.1109/25.260745.
J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, "Proximal policy optimization algorithms," arXiv:1707.06347v2 [cs.LG], 2017, [Online], https://arxiv.org/pdf/1707.06347.pdf.
J.-M. Kai, G. Allibert, M.-D. Hua, and T. Hamel, "Nonlinear feedback control of quadrotors exploiting first-order drag effects," IFAC-PapersOnLine, Jul., 2017, DOI: 10.1016/j.ifacol.2017.08.1267.
E. D. Sontag, "A Lyapunov-like characterization of asymptotic controllability," SIAM Journal on Control and Optimization, vol. 21, no. 3, 1983, DOI: 10.1137/0321028.
P. Auer, "Using confidence bounds for exploitation-exploration trade-offs," The Journal of Machine Learning Research, vol. 3, pp. 397-422, 2002, [Online], https://www.jmlr.org/papers/volume3/auer02a/auer02a.pdf.