Creating a global Reinforcement Learning (RL) model from subnetwork RL agents
IPC Classification
Country / Type: United States (US) Patent, Published
International Patent Classification (IPC, 7th ed.): H04L-041/16; H04L-041/12
Application Number: 18119586 (2023-03-09)
Publication Number: 20230216747 (2023-07-06)
Inventors / Address: Barber, Christopher; Altamimi, Sa'di; Shirmohammadi, Shervin; Côté, David
Applicant / Address: Barber, Christopher
Citation Information: Times cited: 0 / Patents cited: 0
Abstract
Methods are provided for recommending actions to improve operability of a network. In one implementation, a method includes acknowledging a plurality of subnetworks in a whole network, each subnetwork including multiple nodes and being represented by a tunnel group having multiple end-to-end tunnels through the subnetwork. The method also includes selecting a first group of subnetworks from the plurality of subnetworks and generating a Reinforcement Learning (RL) agent for each subnetwork of the first group. Each RL agent is based on observations of end-to-end metrics of the end-to-end tunnels of the respective subnetwork. The observations are independent of specific topology information of the subnetwork. Also, the method includes training a global model based on the RL agents of the first group of subnetworks and applying the global model to an Action Recommendation Engine (ARE) configured for recommending actions that can be taken to improve a state of the whole network.
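The pipeline described in the abstract (per-subnetwork RL agents trained on topology-independent end-to-end tunnel metrics, then combined into a global model used by an Action Recommendation Engine) can be sketched as follows. The patent does not specify the RL algorithm or the aggregation scheme; the tabular Q-learning agents, the Q-table averaging, and all names below are illustrative assumptions.

```python
# Hypothetical sketch: per-subnetwork RL agents trained on topology-independent
# end-to-end tunnel metrics, then averaged into a single global model.
# The averaging scheme and all names are assumptions, not from the patent.
import random

ACTIONS = ["keep_primary", "switch_to_backup"]  # per-tunnel-group actions

def observe(tunnel_group):
    """Topology-independent end-to-end observation (here, a coarse delay bucket)."""
    delay = sum(t["delay_ms"] for t in tunnel_group) / len(tunnel_group)
    return 0 if delay < 50 else 1  # state 0 = "healthy", 1 = "degraded"

def train_agent(tunnel_group, episodes=200, alpha=0.1):
    """Tabular Q-learning agent for one subnetwork's tunnel group."""
    q = {(s, a): 0.0 for s in (0, 1) for a in ACTIONS}
    for _ in range(episodes):
        s = observe(tunnel_group)
        a = random.choice(ACTIONS)  # exploratory policy
        # Assumed reward: switching helps when degraded, hurts when healthy.
        r = 1.0 if (s == 1) == (a == "switch_to_backup") else -1.0
        q[(s, a)] += alpha * (r - q[(s, a)])
    return q

def train_global_model(agents):
    """Average the agents' Q-tables into one shared (global) model."""
    return {k: sum(a[k] for a in agents) / len(agents) for k in agents[0]}

# Two toy subnetworks, each represented only by its end-to-end tunnel metrics.
subnets = [
    [{"delay_ms": 20}, {"delay_ms": 30}],   # healthy tunnel group
    [{"delay_ms": 90}, {"delay_ms": 120}],  # degraded tunnel group
]
agents = [train_agent(tg) for tg in subnets]
global_q = train_global_model(agents)

def recommend(tunnel_group):
    """ARE-style recommendation: best action under the global model."""
    s = observe(tunnel_group)
    return max(ACTIONS, key=lambda a: global_q[(s, a)])

print(recommend([{"delay_ms": 150}]))  # degraded group -> "switch_to_backup"
```

Because each agent observes only end-to-end metrics of its own tunnel group, the averaged model carries no subnetwork-specific topology, matching the abstract's claim that the observations are independent of specific topology information.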
Representative Claims
1. A non-transitory computer-readable medium configured to store computer logic having instructions that, when executed, enable a processing device to perform the steps of: acknowledging a plurality of subnetworks among a whole network, each subnetwork including a plurality of nodes and being represented by a tunnel group having a plurality of end-to-end tunnels through the respective subnetwork; selecting a first group of subnetworks from the plurality of subnetworks; generating a Reinforcement Learning (RL) agent for each subnetwork of the first group, each RL agent based on observations of end-to-end metrics of the end-to-end tunnels of the respective subnetwork, the observations being independent of specific topology information of the respective subnetwork; training a global model based on the RL agents of the first group of subnetworks; and applying the global model to an Action Recommendation Engine (ARE) configured for recommending actions that can be taken to improve a state of the whole network.

2. The non-transitory computer-readable medium of claim 1, wherein, before applying the global model to the ARE, the instructions further enable the processing device to test the global model on a second group of subnetworks selected from the plurality of subnetworks.

3. The non-transitory computer-readable medium of claim 2, wherein, based on the testing of the global model, the instructions further enable the processing device to tune or retrain one or more of the RL agents and/or the global model as needed.

4. The non-transitory computer-readable medium of claim 2, wherein the instructions further enable the processing device to perform the steps of: matching remaining subnetworks with the first group of subnetworks based on similarities in topology; and applying the RL agents of the first group of subnetworks to the remaining subnetworks that match the first group of subnetworks.

5. The non-transitory computer-readable medium of claim 2, wherein the steps of training and testing are performed on one or more of a real-world network, a virtual network, and a simulated network.

6. The non-transitory computer-readable medium of claim 1, wherein the observations are based on one or more of tickets, logs, user feedback, expert rules, and simulator output.

7. The non-transitory computer-readable medium of claim 1, wherein the step of generating the RL agent for each subnetwork includes: using one or more of an online RL technique and an offline RL technique; and iterating the step of generating the RL agent one or more times based on additional observations of end-to-end metrics.

8. The non-transitory computer-readable medium of claim 1, wherein the global model is a decentralized RL model.

9. The non-transitory computer-readable medium of claim 1, wherein the end-to-end metrics are related to Key Performance Indicator (KPI) metrics.

10. The non-transitory computer-readable medium of claim 9, wherein the end-to-end metrics are further related to aggregated information associated with a topology of the respective subnetwork.

11. The non-transitory computer-readable medium of claim 10, wherein the aggregated information includes one or more of the number of hops along each tunnel, the number of nodes along each tunnel, and the cost of transmitting data traffic along each tunnel.

12. The non-transitory computer-readable medium of claim 1, wherein the instructions further enable the processing device to provide the recommended action to a network engineer of a Network Operations Center (NOC) that utilizes the ARE.

13. The non-transitory computer-readable medium of claim 1, wherein, with respect to each subnetwork, the end-to-end tunnels are arranged from a client device to one or more servers associated with a video service provider.

14. The non-transitory computer-readable medium of claim 1, wherein, with respect to each tunnel group, the ARE is configured to switch an end-to-end primary tunnel to an end-to-end secondary tunnel selected from one or more backup tunnels of the respective tunnel group in order to optimize traffic in the whole network.

15. The non-transitory computer-readable medium of claim 1, wherein the whole network includes a training environment modeled as a Decoupled Partially-Observable Markov Decision Process (Dec-POMDP).

16. The non-transitory computer-readable medium of claim 1, wherein the observations that are independent of specific topology information include observations independent of a) conditions of the nodes, b) conditions of links arranged between the nodes, and c) actions by other RL agents.

17. The non-transitory computer-readable medium of claim 1, wherein the observations related to end-to-end metrics include one or more observations related to Quality of Service (QoS) metrics, delay, jitter, packet loss, Quality of Experience (QoE), bitrate, buffer level, startup delay, number of hops per tunnel, and number of nodes per tunnel.

18. The non-transitory computer-readable medium of claim 1, wherein the step of training the global model includes calculating an RL reward based on a Quality of Experience (QoE) metric and an operating expense (OPEX) metric.

19. The non-transitory computer-readable medium of claim 1, wherein the instructions further enable the processing device to: use the global model during inference or production in a real-world environment; and use one or more of a tuning technique, a transfer learning technique, a zero-shot learning technique, and a retraining technique to modify the global model as needed.

20. A non-transitory computer-readable medium configured to store computer logic having instructions that, when executed, enable a processing device to: receive a global model in an Action Recommendation Engine (ARE), the global model created by acknowledging a plurality of subnetworks in a whole network, each subnetwork including a plurality of nodes and being represented by a tunnel group having a plurality of end-to-end tunnels through the respective subnetwork, selecting a first group of subnetworks from the plurality of subnetworks, generating a Reinforcement Learning (RL) agent for each subnetwork of the first group, wherein each RL agent is based on observations of end-to-end metrics of the end-to-end tunnels of the respective subnetwork, and wherein the observations are independent of specific topology information of the respective subnetwork, and training the global model based on the RL agents of the first group of subnetworks; utilize the global model during inference or production in a real-world environment; and recommend one or more actions to be taken as needed to improve a state of the whole network.
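Claim 18 states that training the global model includes calculating an RL reward from a Quality of Experience (QoE) metric and an operating expense (OPEX) metric, but the patent text here does not give the combining formula. A minimal sketch, assuming a simple weighted difference (the weights and function name are hypothetical):

```python
# Illustrative sketch of the claim-18 reward. The patent combines a QoE metric
# and an OPEX metric; the weighted difference below is an assumed formula.

def rl_reward(qoe, opex, w_qoe=1.0, w_opex=0.5):
    """Reward rises with user-perceived quality and falls with operating cost."""
    return w_qoe * qoe - w_opex * opex

# A tunnel switch that raises QoE from 0.6 to 0.9 at a small OPEX increase
# (0.1 -> 0.2) yields a higher reward, so the agent would be pushed toward it.
before = rl_reward(qoe=0.6, opex=0.1)  # 1.0*0.6 - 0.5*0.1 = 0.55
after = rl_reward(qoe=0.9, opex=0.2)   # 1.0*0.9 - 0.5*0.2 = 0.80
print(after > before)  # True
```

Any monotone combination (higher QoE up, higher OPEX down) would serve the same role; the weights set how aggressively the agent trades cost for quality.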