Hwang, Illhoe; Cho, Hyemin; Hong, Sangpyo; Lee, Junhui; Kim, SeokJoong; Jang, Young Jae
(Korea Advanced Institute of Science and Technology (KAIST), Dept. of Industrial and Systems Engineering, Daejeon, Korea)
We present a reinforcement learning-based algorithm for route guidance and vehicle assignment of an overhead hoist transport system, a typical form of automated material handling system in semiconductor fabrication facilities (fabs). As the size of the fab increases, so does the number of vehicles required to operate in it. The algorithm most commonly used in industry, a mathematical optimization-based algorithm that constantly seeks the shortest routes, has proven ineffective for fabs operating around 1,000 or more vehicles. In this paper, we introduce a Q-learning-based reinforcement learning algorithm for route guidance and vehicle assignment. The algorithm dynamically reroutes vehicles based on congestion and traffic conditions, and assigns vehicles to tasks based on the overall congestion of the track. We show that the proposed algorithm is considerably more effective than the existing algorithm in an actual fab-scale experiment. Moreover, we illustrate that the Q-learning-based algorithm is also effective for designing track layouts.
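The core idea of congestion-aware route guidance via Q-learning can be illustrated with a small sketch. This is not the paper's implementation; it is a minimal tabular Q-learning example on a hypothetical four-node track graph with made-up congestion penalties, showing how learned cost-to-go values steer vehicles away from a congested edge even when an uncongested shortest path exists in hop count.

```python
import random

random.seed(0)

# Toy directed track graph: node -> list of reachable next nodes (hypothetical).
GRAPH = {0: [1, 2], 1: [3], 2: [3], 3: []}
GOAL = 3

# Hypothetical per-edge congestion penalty (higher = more congested).
CONGESTION = {(0, 1): 0.0, (1, 3): 2.0, (0, 2): 0.5, (2, 3): 0.0}

# Q[(state, action)] estimates the cost-to-go; an action is the next node.
Q = {}
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2  # learning rate, discount, exploration

def step_cost(u, v):
    # One unit of travel time plus the edge's congestion penalty.
    return 1.0 + CONGESTION[(u, v)]

def choose(state):
    # Epsilon-greedy: mostly pick the action with the lowest estimated cost.
    acts = GRAPH[state]
    if random.random() < EPS:
        return random.choice(acts)
    return min(acts, key=lambda a: Q.get((state, a), 0.0))

def train(episodes=2000):
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            a = choose(s)
            cost = step_cost(s, a)
            # Minimum over successor actions, since we minimize cost.
            future = 0.0 if a == GOAL else min(Q.get((a, n), 0.0) for n in GRAPH[a])
            old = Q.get((s, a), 0.0)
            Q[(s, a)] = old + ALPHA * (cost + GAMMA * future - old)
            s = a

train()

# Greedy routing after training prefers the less congested route
# 0 -> 2 -> 3 (cost 2.5) over 0 -> 1 -> 3 (cost 4.0).
route = [0]
while route[-1] != GOAL:
    s = route[-1]
    route.append(min(GRAPH[s], key=lambda a: Q.get((s, a), float("inf"))))
print(route)  # [0, 2, 3]
```

In a real fab, the state would also encode live traffic measurements so that the learned policy reroutes vehicles as congestion shifts, rather than always committing to a fixed shortest path.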