Adaptive learning enhancement to automated model maintenance
IPC Classification
Country / Type
United States (US) Patent
Status: Granted
International Patent Classification (IPC, 7th edition)
G06E-001/00
G06E-003/00
G06F-015/18
G06G-007/00
G05B-013/02
Application Number
US-0851630
(2004-05-21)
Inventors / Address
Meng, Zhuo
Duan, Baofu
Pao, Yoh Han
Applicant / Address
Computer Associates Think, Inc.
Agent / Address
Baker Botts L.L.P.
Citation Information
Times cited: 16
Cited patents: 39
Abstract
An adaptive learning method for automated maintenance of a neural net model is provided. The neural net model is trained with an initial set of training data. Partial products of the trained model are stored. When new training data are available, the trained model is updated by using the stored partial products and the new training data to compute weights for the updated model.
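The core idea of the abstract can be sketched in a few lines for a model that is linear in its weights (as a functional link net is). The key property is that the stored "partial products" have a size that depends only on the number of model nodes, not on the amount of training data, so the old raw data need not be kept. This is a minimal illustration under that assumption, not the patented implementation; the function names `train_initial` and `update` are hypothetical.

```python
import numpy as np

def train_initial(F, y):
    """Train on the initial data and return (weights, partial products).

    F is the matrix of node outputs (one row per training pattern),
    y the target vector. The stored partial products G = F^T F and
    h = F^T y grow with the number of nodes only, not the data size.
    """
    G = F.T @ F
    h = F.T @ y
    w = np.linalg.solve(G, h)  # least-squares weights
    return w, (G, h)

def update(partials, F_new, y_new):
    """Fold new training data into the stored partial products and
    recompute the least-squares weights without the old raw data."""
    G, h = partials
    G = G + F_new.T @ F_new
    h = h + F_new.T @ y_new
    w = np.linalg.solve(G, h)
    return w, (G, h)
```

The updated weights equal the batch least-squares solution over the combined old and new data, which is what claim 2 below asserts for the patented method.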
Representative Claims
What is claimed is:

1. An adaptive learning method for automated maintenance of a neural net model, comprising: training a neural net model with an initial set of training data, the neural net model having one or more original weights associated with the neural net model; storing partial products of the trained model, the partial products comprising a portion of the initial training set of training data and a new set of training data; and updating the trained model by using the stored partial products and new training data to compute adjusted weights for the updated model.

2. The method of claim 1, wherein the adjusted weights of the updated model are a least-squares solution for training the neural net model with a combined set consisting of (i) the new training data and (ii) the initial set of training data.

3. The method of claim 1, wherein an amount of information corresponding to the partial products of the trained model depends on the size of the neural net model but not on the size of the initial set of training data.

4. The method of claim 1, wherein the trained model is updated by using the stored partial products along with a forgetting factor α.

5. The method of claim 1, wherein the neural net model includes a functional link net.

6. The method of claim 5, wherein the weights of the updated model are computed using an orthogonal least squares technique.

7. The method of claim 5, wherein the updated model has more functional link nodes than the trained model.

8. The method of claim 5, further comprising computing a least-squares error of the updated model.

9. The method of claim 5, wherein the updated model has fewer functional link nodes than the trained model.

10. The method of claim 5, further comprising: determining a plurality of candidate functions, wherein selected ones of the candidate functions are used to create the functional link net model.

11. The method of claim 10, further comprising generating reserve candidate functions after the neural net model is trained, until a number of unused candidate functions reaches a predetermined threshold number.

12. The method of claim 11, wherein selected ones of the unused candidate functions are used to expand the functional link net model.

13. The method of claim 5, further comprising: determining whether the new training data falls in a range of the initial set of training data; and creating one or more local nets by using the new training data, if the new training data does not fall in the range of the initial set of training data.

14. The method of claim 1, further comprising: determining additional partial products by using the new training data; determining updated partial products for the updated model by using the stored partial products and the additional partial products; and storing the updated partial products.

15. The method of claim 14, further comprising updating further the updated model, when additional new training data become available, by using the additional new training data and the updated partial products.

16. The method of claim 1, wherein the new training data includes streaming data, and the method is used to update the trained neural net model in real time with the streaming new training data.

17. The method of claim 1, further comprising: receiving streaming new training data; computing additional partial products corresponding to the new training data; and computing the weights for the updated model by using the additional partial products corresponding to the new training data.

18. A computer system for automated maintenance of a neural net model, comprising: a memory operable to store partial products of a neural net model, the partial products comprising a portion of an initial training set of training data and a new set of training data; and a processor coupled to the memory and operable to: train the neural net model with an initial set of training data, the neural net model having one or more original weights associated with the neural net model; and update the trained neural net model by using the stored partial products and new training data to compute adjusted weights for the updated model.

19. The system of claim 18, wherein the adjusted weights of the updated model are a least-squares solution for training the neural net model with a combined set consisting of (i) the new training data and (ii) the initial set of training data.

20. The system of claim 18, wherein the trained model is updated by using the stored partial products along with a forgetting factor α.

21. The system of claim 18, wherein the neural net model includes a functional link net.

22. The system of claim 21, wherein the weights of the updated model are computed using an orthogonal least squares technique.

23. The system of claim 21, wherein the updated model has more functional link nodes than the trained model.

24. The system of claim 21, further comprising: determining a plurality of candidate functions, wherein selected ones of the candidate functions are used to create the functional link net model.

25. The system of claim 24, further comprising generating reserve candidate functions after the neural net model is trained, until a number of unused candidate functions reaches a predetermined threshold number.

26. The system of claim 18, further comprising: determining additional partial products by using the new training data; determining updated partial products for the updated model by using the stored partial products and the additional partial products; and storing the updated partial products.

27. The system of claim 26, further comprising updating further the updated model, when additional new training data become available, by using the additional new training data and the updated partial products.

28. Logic for automated maintenance of a neural net model, the logic encoded in a medium and operable to: train a neural net model with an initial set of training data, the neural net model having one or more original weights associated with the neural net model; store partial products of the trained model, the partial products comprising a portion of the initial training set of training data and a new set of training data; and update the trained model by using the stored partial products and new training data to compute adjusted weights for the updated model.

29. A computer data signal transmitted in one or more segments in a transmission medium which embodies instructions executable by a computer to: train a neural net model with an initial set of training data, the neural net model having one or more original weights associated with the neural net model; store partial products of the trained model, the partial products comprising a portion of the initial training set of training data and a new set of training data; and update the trained model by using the stored partial products and new training data to compute adjusted weights for the updated model.
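Claims 4 and 20 add a forgetting factor α to the partial-products update. The usual reading of such a factor is exponential discounting: before folding in new data, the stored partial products are scaled by 0 < α ≤ 1 so that older training data count progressively less, with α = 1 recovering the plain combined solution. The sketch below illustrates that reading under the same linear-in-weights assumption as above; the function name and default α are hypothetical.

```python
import numpy as np

def update_with_forgetting(G, h, F_new, y_new, alpha=0.95):
    """Update stored partial products G = F^T F and h = F^T y with
    new data, discounting the old data by a forgetting factor alpha.

    alpha = 1.0 gives the plain least-squares solution over the
    combined data; smaller alpha biases the weights toward recent data.
    """
    G = alpha * G + F_new.T @ F_new
    h = alpha * h + F_new.T @ y_new
    w = np.linalg.solve(G, h)
    return w, G, h
```

With streaming data (claims 16 and 17), this update can be applied repeatedly as each new batch arrives, since only G and h carry over between updates.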
Patents cited by this patent (39)
Frye Robert C. (Piscataway NJ) Harry Thomas R. (Trenton NJ) Lory Earl R. (Pennington NJ) Rietman Edward A. (Madison NJ), Active neural network control of wafer attributes in a plasma etch process.
Willis Frederick G. (Ann Arbor MI) Radtke Richard R. (Plymouth MI) Ellison Joseph (Detroit MI) Fozo Steven R. (Westland MI) Kern Glenn A. (Ann Arbor MI), Adaptive strategy to control internal combustion engine.
Miyano Hideyo (Niza JPX) Suzaki Yukihiko (Nerima JPX) Takahashi Fumitaka (Hoya JPX) Ogasawara Ken-ichi (Fujimi JPX), Control unit of an internal combustion engine control unit utilizing a neural network to reduce deviations between exhau.
LeClair Steven R. (Spring Valley OH) Pao Yoh-han (Cleveland Heights OH) Westhoven Timothy E. (Huntington Beach CA) Al-Kamhawi Hilmi N. (Columbus OH) Chen C. L. Philip (Kettering OH) Jackson Allen G. , Inductive-deductive process design for machined parts.
Kramer Mark A. (Winchester MA) Leonard James A. (Cambridge MA) Ungar Lyle H. (Philadelphia PA), Method and apparatus for pattern mapping system with self-reliability check.
Higuerey, Evelitsa E.; Schweizerhof, Aaron L., Method and apparatus for predicting a characteristic of a product attribute formed by a machining process using a model of the process.
Vinberg Anders ; Cass Ronald J. ; Huddleston David E. ; Pao John D. ; Barthram Phil K.,GBX ; Bayer Christopher W.,GBX, Method and apparatus for system state monitoring using pattern recognition and neural networks.
Alexandro ; Jr. Frank J. (Kirkland WA) Colley Robert W. (Menlo Park CA) Ipakchi Ali (San Carlos CA) Khadem Mostafa (Los Altos CA), Method and apparatus utilizing neural networks to predict a specified signal value within a multi-element system.
Frerichs Donald K. (Shaker Heights OH) Kaya Azmi (Akron OH) Keyes ; IV Marion A. (Chagrin Falls OH), Method and procedure for neural control of dynamic processes.
Duranton Marc (Boissy-Saint-Leger FRX) Gobert Jean (Maisons Alfort FRX) Sirat Jacques-Ariel (Limeil-Brevannes FRX), Neural network system and circuit for use therein.
Axelby George S. (North Linthicum MD) Geldiay Vedat (Silver Spring MD) Moulds ; III Clinton W. (Millersville MD), Predictive model reference adaptive controller.
Nomura Masahide (Hitachi JPX) Saito Tadayoshi (Hitachi JPX) Matsumoto Hiroshi (Ibaraka JPX) Shimoda Makoto (Katsuta JPX) Kondoh Masakazu (Hitachi JPX) Miyagaki Hisanori (Ohta JPX) Sugano Akira (Katsu, Process control method and system for performing control of a controlled system by use of a neural network.
Dara, Rozita A.; Khan, Mohammad Tauseef; Azim, Jawad; Cicchello, Orlando; Cort, Gary P., Method and system for labelling unlabeled data records in nodes of a self-organizing map for use in training a classifier for data classification in customer relationship management systems.
Unsal, Cem, System and method for selecting data sample groups for machine learning of context of data fields for various document types and/or for test data generation for quality assurance systems.
Nasle, Adib, Systems and methods for automatic real-time capacity assessment for use in real-time power analytics of an electrical power distribution system.