System and method of using a machine learning algorithm to meet SLA requirements
IPC Classification
Country / Type
United States (US) Patent
Granted
International Patent Classification (IPC, 7th edition)
G06F-009/455
H04L-012/24
H04L-012/26
Application No.
US-0170040
(2016-06-01)
Registration No.
US-10091070
(2018-10-02)
Inventors / Address
Chopra, Danish
Narang, Anshu
Bhullar, Inderpreet
Patel, Hemant
Srinivasa, Shashidhar
Applicant / Address
CISCO TECHNOLOGY, INC.
Agent / Address
Polsinelli PC
Citation Information
Times cited: 0
Cited patents: 35
Abstract
A method includes collecting, at a monitoring and recovery node, virtual network function key performance index data through multiple channels from a corresponding containerized virtual network function. The method includes maintaining, at the monitoring and recovery node, state information of the corresponding containerized virtual network function and running, at the monitoring and recovery node, a machine learning algorithm that, once trained, learns and predicts whether the corresponding containerized virtual network function requires one of a scaling, a healing, or a context switching to a sister virtual network function to yield a determination and meet the service level agreement of a network service.
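As a rough illustration of the decision flow the abstract describes, a monitoring and recovery node could maintain per-VNF state, fold in KPI readings gathered over multiple channels, and dispatch one of the three recovery actions. This is a minimal sketch under my own naming: `Action`, `monitor_step`, and the `predict` callable are hypothetical and not taken from the patent.

```python
from enum import Enum


class Action(Enum):
    """The three recovery actions named in the abstract, plus a no-op."""
    SCALE = "scaling"
    HEAL = "healing"
    CONTEXT_SWITCH = "context switch to sister VNF"
    NONE = "none"


def monitor_step(kpi_samples, predict, state):
    """One pass of a hypothetical monitoring and recovery node.

    kpi_samples: KPI readings for one containerized VNF, gathered over
                 multiple channels (e.g. IPSLA, NETCONF, SNMP).
    predict:     a trained model mapping KPI data plus state to an Action.
    state:       per-VNF state information maintained by the node.
    """
    state["last_kpis"] = kpi_samples     # maintain VNF state information
    return predict(kpi_samples, state)   # scale, heal, switch, or no action
```

For example, with a stub model that always predicts healing, `monitor_step([0.9, 0.7], lambda k, s: Action.HEAL, {})` returns `Action.HEAL` and records the KPI samples in the state dictionary.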
Representative Claims
1. A method comprising: collecting a virtual network function key performance index data from a corresponding containerized virtual network function; maintaining state information of the corresponding containerized virtual network function; running a machine learning algorithm that, once trained, learns and predicts whether the corresponding containerized virtual network function requires one of a scaling, a healing or a context switching to a sister virtual network function to yield a determination; executing the predicted scaling, the predicted healing, or the predicted context switching of the virtual network function in response to the predictions; wherein the machine learning algorithm comprises:

T(s) = (Σ(M(v) + R(a))) % T(m)
R(a) = Rvnf / Rtotal <= global median resource usage
T(m) = M(v)max + R(a)max,

where T(s) is a threshold for the scaling, the healing or the context switching to the sister virtual network function for the corresponding containerized virtual network function; M(v) is a metric variable; R(a) comprises an absolute individual resource usage for the corresponding containerized virtual network function out of multiple containerized virtual network functions; Rvnf comprises a resource usage for a given virtual network function; Rtotal comprises a total resource usage for a network service comprising a group of virtual network functions; T(m) is a threshold maximum; and Σ comprises a summation from i = 1 to N, wherein N is a number of times the threshold T(s) for the scaling, the healing or the context switching has succeeded.

2. The method of claim 1, wherein the collecting and running steps occur at a monitoring and recovery node.

3. The method of claim 1, further comprising, when the T(s) threshold is met N times, and the determination indicates an action should be taken, providing an instruction to a provisioning node to perform one of the scaling, the healing and the context switching for the corresponding containerized virtual network function.

4. The method of claim 1, wherein the step of collecting the virtual network function key performance index data occurs through multiple channels.

5. The method of claim 4, wherein the multiple channels comprise at least two of IPSLA, NETCONF, and SNMP.

6. The method of claim 3, wherein when the machine learning algorithm predicts that the corresponding containerized virtual network function requires scaling, providing an instruction to the provisioning node to add a new virtual network function instance.

7. The method of claim 6, wherein the provisioning node adds the new virtual network function instance using a new Docker container or a virtual machine.

8. A system comprising: a processor; and a computer-readable medium storing instructions which, when executed by the processor, cause the processor to perform operations comprising: collecting a virtual network function key performance index data from a corresponding containerized virtual network function; maintaining state information of the corresponding containerized virtual network function; running a machine learning algorithm that, once trained, learns and predicts whether the corresponding containerized virtual network function requires one of a scaling, a healing or a context switching to a sister virtual network function to yield a determination; executing the predicted scaling, the predicted healing, or the predicted context switching of the virtual network function in response to the predictions; wherein the machine learning algorithm comprises:

T(s) = (Σ(M(v) + R(a))) % T(m)
R(a) = Rvnf / Rtotal <= global median resource usage
T(m) = M(v)max + R(a)max,

where T(s) is a threshold for the scaling, the healing or the context switching to the sister virtual network function for the corresponding containerized virtual network function; M(v) is a metric variable; R(a) comprises an absolute individual resource usage for the corresponding containerized virtual network function out of multiple containerized virtual network functions; Rvnf comprises a resource usage for a given virtual network function; Rtotal comprises a total resource usage for a network service comprising a group of virtual network functions; T(m) is a threshold maximum; and Σ comprises a summation from i = 1 to N, wherein N is a number of times the threshold T(s) for the scaling, the healing or the context switching has succeeded.

9. The system of claim 8, wherein the collecting and running steps occur at a monitoring and recovery node.

10. The system of claim 8, wherein the computer-readable medium stores instructions which, when executed by the processor, cause the processor to perform further operations comprising, when the T(s) threshold for scaling is met N times, and the determination indicates an action should be taken, providing an instruction to a provisioning node to perform one of the scaling, the healing and the context switching for the corresponding containerized virtual network function.

11. The system of claim 8, wherein the step of collecting the virtual network function key performance index data occurs through multiple channels.

12. The system of claim 11, wherein the multiple channels comprise at least two of IPSLA, NETCONF, and SNMP.

13. The system of claim 10, wherein when the machine learning algorithm predicts that the corresponding containerized virtual network function requires scaling, providing an instruction to the provisioning node to add a new virtual network function instance.

14. The system of claim 13, wherein the provisioning node adds the new virtual network function instance using a new Docker container or a virtual machine.

15. A computer-readable storage device storing instructions which, when executed by a processor, cause the processor to perform operations comprising: collecting a virtual network function key performance index data from a corresponding containerized virtual network function; maintaining state information of the corresponding containerized virtual network function; running a machine learning algorithm that, once trained, learns and predicts whether the corresponding containerized virtual network function requires one of a scaling, a healing or a context switching to a sister virtual network function to yield a determination; executing the predicted scaling, the predicted healing, or the predicted context switching of the virtual network function in response to the predictions; wherein the machine learning algorithm comprises:

T(s) = (Σ(M(v) + R(a))) % T(m)
R(a) = Rvnf / Rtotal <= global median resource usage
T(m) = M(v)max + R(a)max,

where T(s) is a threshold for the scaling, the healing or the context switching to the sister virtual network function for the corresponding containerized virtual network function; M(v) is a metric variable; R(a) comprises an absolute individual resource usage for the corresponding containerized virtual network function out of multiple containerized virtual network functions; Rvnf comprises a resource usage for a given virtual network function; Rtotal comprises a total resource usage for a network service comprising a group of virtual network functions; T(m) is a threshold maximum; and Σ comprises a summation from i = 1 to N, wherein N is a number of times the threshold T(s) for the scaling, the healing or the context switching has succeeded.

16. The computer-readable storage device of claim 15, wherein the collecting and running steps occur at a monitoring and recovery node.

17. The computer-readable storage device of claim 15, wherein the computer-readable device stores additional instructions which, when executed by the processor, cause the processor to perform further operations comprising, when the T(s) threshold for scaling is met N times, and the determination indicates an action should be taken, providing an instruction to a provisioning node to perform one of the scaling, the healing and the context switching for the corresponding containerized virtual network function.

18. The computer-readable storage device of claim 15, wherein the step of collecting the virtual network function key performance index data occurs through multiple channels.

19. The computer-readable storage device of claim 18, wherein the multiple channels comprise at least two of IPSLA, NETCONF, and SNMP.

20. The computer-readable storage device of claim 15, wherein when the machine learning algorithm predicts that the corresponding containerized virtual network function requires scaling, providing an instruction to the provisioning node to add a new virtual network function instance.
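Read literally, the threshold in claim 1 can be sketched in a few lines of Python. The function names are mine, the claim does not prescribe an implementation, and treating "%" as the modulo operator is an assumption drawn from the claim's structure (T(s) is the running sum taken against the threshold maximum T(m)).

```python
def resource_ratio(r_vnf, r_total):
    """R(a): resource usage of one containerized VNF (Rvnf) relative to
    the total usage (Rtotal) of the network service it belongs to. Per
    the claim, this ratio is kept at or below the global median usage."""
    return r_vnf / r_total


def scaling_threshold(samples, mv_max, ra_max):
    """T(s) = (Σ(M(v) + R(a))) % T(m), with T(m) = M(v)max + R(a)max.

    samples holds one (M(v), R(a)) pair per observation i = 1..N, where
    N is the number of times the threshold has succeeded so far."""
    t_m = mv_max + ra_max                        # T(m): threshold maximum
    total = sum(mv + ra for mv, ra in samples)   # Σ(M(v) + R(a))
    return total % t_m                           # T(s)
```

For instance, two observations (M(v), R(a)) = (2, 0.5) and (3, 0.5) with M(v)max = 4 and R(a)max = 1 give T(m) = 5 and T(s) = 6.0 % 5 = 1.0.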
Patents cited by this patent (35)
Nilsson, Mattias; Saabas, Ando; Vafin, Renat; Vaalgamaa, Markus; Dumitras, Adriana; Tamme, Teele; Veski, Andre, Adapting parameters of a call in progress with a model that predicts call quality.
Klein, Robert; Brisebois, Arthur, Adaptive R99 and HS PS (high speed packet-switched) link diversity for coverage and capacity enhancement of circuit-switched calls.
Williams, Jr., David Russell; Gutzwiller, Luke Robert; Hazen, Megan Ursula; Anderson, Brigham Sterling; McIntyre, Alan; Abeles, Tom, Classifying data with deep learning neural records incrementally refined through expert input.
Castelli, Vittorio; Hutchins, Sharmila Thadhani; Li, Chung-Sheng; Turek, John Joseph Edward, Modifying an unreliable training set for supervised classification.
Mabe, Fred D.; Worden, Ian R.; Nicholas, David C.; Clark, Stephen M.; Anderson, Albert J.; Stevens, James A., Network routing process for regulating traffic through advantaged and disadvantaged nodes.
Banka, Tarun; Dutta, Debojyoti; Sen, Mainak; Duraisamy, Nagarajan; Pandey, Manoj Kumar, Reporting statistics on the health of a sensor node in a sensor network.