Systems and methods for training neural networks based on concurrent use of current and recorded data
IPC Classification
Country/Type | United States (US) Patent | Registered
IPC (7th edition) |
Application No. | US-0845567 (2010-07-28)
Registration No. | US-8489528 (2013-07-16)
Inventors |
- Chowdhary, Girish V.
- Johnson, Eric N.
- Oh, Seung Min
Applicant |
- Georgia Tech Research Corporation
Agent |
Citation info | Cited by: 1 | Cites: 5
Abstract
Various embodiments of the invention are neural network adaptive control systems and methods configured to concurrently consider both recorded and current data, so that persistent excitation is not required. A neural network adaptive control system of the present invention can specifically select and record data that has as many linearly independent elements as the dimension of the basis of the uncertainty. Using this recorded data along with current data, the neural network adaptive control system can guarantee global exponential parameter convergence in adaptive parameter estimation problems. Other embodiments of the neural network adaptive control system are also disclosed.
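The abstract describes concurrent processing of current and recorded data for a linearly parameterized uncertainty. The following is a minimal NumPy sketch of that idea, not the patent's actual update law: the function name, the gradient-style update, and the gain `gamma` are illustrative assumptions. The recorded pairs are chosen so their basis vectors span the uncertainty's basis, which is the linear-independence (rank) condition the abstract states for exponential convergence.

```python
import numpy as np

def concurrent_learning_step(W_hat, phi, y, history, gamma=0.1):
    """One concurrent-learning gradient step for a linearly
    parameterized uncertainty y ~ W_hat @ phi(x).

    W_hat   : current weight estimate, shape (m,)
    phi     : basis vector for the current measurement, shape (m,)
    y       : current measured output
    history : recorded (phi_j, y_j) pairs, selected so the phi_j
              contain as many linearly independent elements as the
              dimension of the basis (the abstract's rank condition)
    """
    # Update driven by the current data point.
    dW = gamma * (y - W_hat @ phi) * phi
    # Updates driven by the recorded data, processed concurrently.
    for phi_j, y_j in history:
        dW = dW + gamma * (y_j - W_hat @ phi_j) * phi_j
    return W_hat + dW
```

Because the recorded pairs span the basis, the estimate converges even when the current input is constant, i.e. not persistently exciting.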
Representative Claims
1. A computer program product embodied in a non-transitory computer-readable medium, the computer program product comprising an algorithm adapted to effectuate a method comprising: providing a neural network comprising a plurality of estimated weights for estimating a linearly parameterized uncertainty; receiving past data in the neural network; recording one or more of the past data for future use; receiving current data in the neural network; and updating the estimated weights of the neural network, with a computer processor, based on concurrent processing of the current data and the selected past data, wherein convergence of the estimated weights to ideal weights is guaranteed when the recorded past data contains as many linearly independent elements as a dimension of a basis for the uncertainty.
2. The computer program product of claim 1, the neural network being integrated into an adaptive flight controller.
3. The computer program product of claim 1, wherein the estimated weights of the neural network are guaranteed to approach and remain bounded in a compact neighborhood of the ideal weights, if the uncertainty is unstructured.
4. The computer program product of claim 1, further comprising applying the estimated weights of the neural network to guarantee global exponential tracking error convergence in a model reference adaptive control architecture, when the uncertainty is structured.
5. The computer program product of claim 1, further comprising applying the estimated weights of the neural network to guarantee uniform ultimate boundedness of a tracking error in a model reference adaptive control architecture, when the uncertainty is unknown.
6. The computer program product of claim 1, the neural network being a single hidden layer neural network.
7. The computer program product of claim 1, the neural network being a radial basis function neural network, a series or parallel implementation of a single hidden layer neural network and a radial basis function neural network, a parameterized function approximator, or a linearly parameterized function approximator with a known basis.
8. The computer program product of claim 1, wherein the convergence of the estimated weights is independent of whether the inputs to the neural network are persistently exciting.
9. The computer program product of claim 1, the recorded past data being selected to contain as many linearly independent elements as the dimension of the basis for the linearly parameterized uncertainty.
10. The computer program product of claim 1, the method further comprising projecting a weight training algorithm based on the recorded past data onto the null space of a weight training vector obtained from the current data to update the weight training algorithm.
11. The computer program product of claim 10, the updated weight training algorithm having a matrix rank greater than one.
12. A system comprising: a neural network comprising a plurality of weights applicable to received data for estimating a parameterized uncertainty; a storage device for storing recorded data of the neural network, the recorded data being selected to contain as many linearly independent elements as a dimension of a basis for the uncertainty; and a processor in communication with the storage device, for receiving current data and for updating the weights of the neural network by concurrently processing the recorded data along with the current data by orthogonally projecting a weight training algorithm of the recorded data onto the null space of a weight training vector of the current data; wherein convergence of the weights of the neural network is guaranteed regardless of whether the inputs to the neural network are persistently exciting.
13. The system of claim 12, the neural network having a single hidden layer.
14. The system of claim 12, the weight training algorithm having a matrix rank greater than one.
15. A computer-implemented method comprising: receiving data in a neural network, the neural network comprising a plurality of weights for estimating a linearly parameterized uncertainty; recording one or more of the received data, wherein the recorded data contains as many linearly independent elements as a dimension of a basis for the uncertainty; receiving current data in the neural network; and updating the weights of the neural network, with a computer processor, based on an orthogonal projection of a weight training algorithm devised from the recorded data onto the null space of a weight training vector of the current data, wherein convergence of the weights of the neural network is guaranteed.
16. The computer-implemented method of claim 15, the neural network remaining active in the absence of persistent excitation.
17. The computer-implemented method of claim 15, the neural network having a single hidden layer.
18. The computer-implemented method of claim 15, the weight training algorithm having a matrix rank greater than one.
19. The computer-implemented method of claim 15, wherein weight updates based on the recorded data are restricted to the null space of the weight updates based on current data.
Patents cited by this patent (5)
-
Calise, Anthony J.; Hovakimyan, Naira; Idan, Moshe, Adaptive control system having direct output feedback and related apparatuses and methods.
-
Johnson, Eric Norman; Calise, Anthony J., Adaptive control system having hedge unit and related apparatus and methods.
-
Johnson, Eric Norman; Calise, Anthony J., Adaptive control system having hedge unit and related apparatus and methods.
-
de Vries, Aalbert, Method and system for training a neural network with adaptive weight updating and adaptive pruning in principal componen.
-
Lo, James Ting-Ho, Nonadaptively trained adaptive neural systems.
Patents citing this patent (1)
-
Nguyen, Nhan T.; Burken, John J.; Hanson, Curtis E., Control systems with normalized and covariance adaptation by optimal control modification.