An artificial neuron integrates current and prior information, each of which predicts the state of a part of the world. The neuron's output corresponds to the discrepancy between the two predictions, or prediction error. Inputs contributing prior information are selected in order to minimize the error, which can occur through an anti-Hebbian-type plasticity rule. Current information sources are selected to maximize errors, which can occur through a Hebbian-type rule. This ensures that the neuron receives new information from its external world that is not redundant with the prior information that the neuron already possesses. By learning on its own to make predictions, a neuron or network of these neurons acquires information necessary to generate intelligent and advantageous outputs.
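The update scheme described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the learning rate, input dimensions, and activity values are assumptions chosen for the demo, which holds the current-information weights fixed and shows the anti-Hebbian side driving the prediction error toward zero.

```python
import numpy as np

eta = 0.05  # hypothetical learning rate; the patent does not specify one

def plasticity_step(w_current, w_prior, x_current, x_prior):
    """One update of the prediction-error neuron.

    The output is the discrepancy between the two weighted sums.
    First-class (current-information) weights follow a Hebbian-type rule,
    which drives the discrepancy up. Second-class (prior-information)
    weights grow with the input-output correlation; because their sum is
    subtracted, this acts anti-Hebbian and drives the discrepancy down.
    """
    error = w_current @ x_current - w_prior @ x_prior  # neuron's output
    w_current = w_current + eta * error * x_current    # Hebbian-type rule
    w_prior = w_prior + eta * error * x_prior          # anti-Hebbian effect
    return w_current, w_prior, error

# Demo: hold the current-information weights fixed (discard their update)
# and apply only the prior-side updates; the prediction error decays.
w_c = np.array([1.0, 0.5])
w_p = np.array([0.0, 0.0])
x = np.array([0.8, 0.2])   # same activity pattern on both pathways
errors = []
for _ in range(200):
    _, w_p, e = plasticity_step(w_c, w_p, x, x)
    errors.append(abs(e))
print(round(errors[0], 3), round(errors[-1], 5))  # error shrinks toward zero
```

The prior weights converge to the point where the second-class weighted sum reproduces the first-class one, so the output carries only what the prior does not already predict.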
Representative Claims
1. A method for processing information using at least one artificial neuron, each of said at least one artificial neuron receiving a first and a second class of input, wherein at least one of said first and said second class comprises a plurality of inputs, and for generating an output from each of said at least one artificial neuron, the method comprising for each of said at least one artificial neuron: providing said first class of input, each input having a weight and an activity; providing said second class of input, each input having a weight and an activity; selecting a value for said weight for each input in said first class to maximize a discrepancy corresponding to said output of said artificial neuron, said discrepancy comprising a difference between the weighted sum of activities in the first class of inputs and the weighted sum of activities in the second class of inputs; selecting a value for said weight for each input in said second class to minimize said discrepancy corresponding to said output of said artificial neuron; and generating said output corresponding to said discrepancy, said discrepancy being between a weighted sum based on the selected weight values of said first class of input and a weighted sum based on the selected weight values of said second class of input; wherein said artificial neuron predicts an aspect of the world that is informative to said artificial neuron.

2. The method of claim 1, wherein said selection of weights for said first class of input is guided by a Hebbian-type plasticity rule.

3. The method of claim 1, wherein said selection of weights for said second class of input is guided by an anti-Hebbian-type plasticity rule.

4. The method of claim 1, wherein selection of each said weight in said second class of input is performed using linear mean squared estimation.

5. The method of claim 1, said artificial neuron further comprising a membrane, wherein each of said inputs comprises a distinct set of a plurality of ion channels within said artificial neuron's membrane, each of said weights corresponding to a number of functional ion channels associated with said input, said discrepancy corresponding to a voltage difference across said membrane.

6. The method of claim 1, wherein said weights for said first class of input are selected to maximize a function of said discrepancy and a correlation between a weighted sum of said first class of inputs and a rewarding goal.

7. A computer-readable storage medium containing program code executable by a processor for processing information using at least one artificial neuron, each of said at least one artificial neuron receiving a first and a second class of input, wherein at least one of said first and said second class comprises a plurality of inputs, and for generating an output, the program code comprising for each of said at least one artificial neuron: program code for providing said first class of input, each input having a weight and an activity; program code for providing said second class of input, each input having a weight and an activity; program code for selecting a value for said weight for each input in said first class to maximize a discrepancy corresponding to said output of said artificial neuron, said discrepancy comprising a difference between the weighted sum of activities in the first class of inputs and the weighted sum of activities in the second class of inputs; program code for selecting a value for said weight for each input in said second class to minimize said discrepancy corresponding to said output of said artificial neuron; and program code for generating said output corresponding to said discrepancy, said discrepancy being between a weighted sum based on the selected weight values of said first class of input and a weighted sum based on the selected weight values of said second class of input; wherein said artificial neuron predicts an aspect of the world that is informative to said at least one artificial neuron.

8. The computer-readable storage medium of claim 7, wherein said selection of weights for said first class of input is guided by a Hebbian-type plasticity rule.

9. The computer-readable storage medium of claim 7, wherein said selection of weights for said second class of input is guided by an anti-Hebbian-type plasticity rule.

10. The computer-readable storage medium of claim 7, wherein selection of each said weight for said second class of input is performed using linear mean squared estimation.

11. The computer-readable storage medium of claim 7, said artificial neuron further comprising a membrane, wherein each of said inputs comprises a distinct set of a plurality of ion channels within said artificial neuron's membrane, each of said weights corresponding to a number of functional ion channels associated with said input, said discrepancy corresponding to a voltage difference across said membrane.

12. The computer-readable storage medium of claim 7, wherein said weights for said first class of input are selected to maximize a function of said discrepancy and a correlation between a weighted sum of said first class of inputs and a rewarding goal.

13. The computer-readable storage medium of claim 7, wherein activities of said second class of input comprise a plurality of memories of past outputs of said artificial neuron.

14. An apparatus for processing information comprising: at least one artificial neuron, each of said at least one artificial neuron configured to receive a first and a second class of input, wherein at least one of said first and said second class comprises a plurality of inputs, and each of said at least one artificial neuron configured to generate an output; said artificial neuron providing said first class of input, each input of said first class of input having a weight and an activity; said artificial neuron providing said second class of input, each input of said second class of input having a weight and an activity; said artificial neuron selecting a value for said weight for each input in said first class to maximize a discrepancy corresponding to said output of said artificial neuron; said artificial neuron selecting a value for said weight for each input in said second class to minimize said discrepancy corresponding to said output of said artificial neuron; and said artificial neuron generating said output corresponding to said discrepancy, said discrepancy comprising a difference between a weighted sum of activities of said first class of input and a weighted sum of activities in said second class of input; wherein said artificial neuron predicts an aspect of the world that is informative to said artificial neuron.

15. The apparatus of claim 14, wherein said selection of weights for said first class of input is guided by a Hebbian-type plasticity rule.

16. The apparatus of claim 14, wherein said selection of weights for said second class of input is guided by an anti-Hebbian-type plasticity rule.

17. The apparatus of claim 14, wherein activities of said second class of input comprise a plurality of memories of past outputs of said at least one artificial neuron.

18. The apparatus of claim 14, said artificial neuron further comprising a membrane, wherein each of said inputs comprises a distinct set of a plurality of ion channels within said artificial neuron's membrane, each of said weights corresponding to a number of functional ion channels associated with said input, said discrepancy corresponding to a voltage difference across said membrane.

19. The apparatus of claim 14, wherein said weights for said first class of input are selected to maximize a function of said discrepancy and a correlation between a weighted sum of said first class of inputs and a rewarding goal.
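Claims 4 and 10 call for choosing the second-class weights by linear mean squared estimation. One way to read this is as an ordinary least-squares fit: the prior weights are set so the second-class weighted sum best predicts the first-class drive, minimizing the mean squared discrepancy. The sketch below assumes synthetic data and dimensions not given in the claims.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup: 3 second-class (prior) inputs observed over 500 steps.
n_samples, n_prior = 500, 3
X_prior = rng.normal(size=(n_samples, n_prior))  # second-class activities
true_w = np.array([0.7, -0.3, 0.1])              # hypothetical ground truth
# First-class weighted sum that the prior pathway should predict,
# with a small amount of unpredictable noise.
target = X_prior @ true_w + 0.01 * rng.normal(size=n_samples)

# Linear mean squared estimate: least-squares weights minimizing
# E[(target - w_prior . x_prior)^2].
w_prior, *_ = np.linalg.lstsq(X_prior, target, rcond=None)

# The residual is the neuron's output: the part of the first-class
# drive the prior inputs cannot predict.
residual = target - X_prior @ w_prior
print(np.round(w_prior, 2))
```

Under this reading, the residual plays the role of the claimed discrepancy, and only the noise term survives in it once the weights are optimal.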
Patents cited by this patent (8)
Guiver, John P.; Klimasauskas, Casimir C., Apparatus and method for selecting a working data set for model development.
DeYong, Mark R. (Las Cruces, NM); Findley, Randall L. (Austin, TX); Eskridge, Thomas C. (Las Cruces, NM); Fields, Christopher A. (Rockville, MD), Asynchronous temporal neural processing element.