Neural network and method of neural network training
IPC Classification
Country/Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.): G06N-003/08; G06N-003/04; G06N-003/063
Application Number: US-0178137 (2016-06-09)
Registration Number: US-9619749 (2017-04-11)
Inventor / Address: Pescianschi, Dmitri
Applicant / Address: Progress, Inc.
Agent / Address: Quinn IP Law
Citation Information: cited by 0 patents; cites 13 patents
Abstract
A neural network includes a plurality of inputs for receiving input signals, and synapses connected to the inputs and having corrective weights established by a memory element that retains a respective weight value. The network additionally includes distributors. Each distributor is connected to one of the inputs for receiving the respective input signal and selects one or more corrective weights in correlation with the input value. The network also includes neurons. Each neuron has an output connected with at least one of the inputs via one synapse and generates a neuron sum by summing corrective weights selected from each synapse connected to the respective neuron. The output of each neuron provides the respective neuron sum to establish an operational output signal of the network. A method of operating a neural network includes processing data via the network using modified corrective weight values established by a separate analogous neural network during its training.
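The architecture described in the abstract lends itself to a compact sketch. The following Python model is illustrative only, not code from the patent: the class name `PNet`, the interval-based distributor, and all parameters are assumptions chosen to mirror the abstract's inputs, distributors, synapses with corrective weights, and summing neurons.

```python
import numpy as np

class PNet:
    """Illustrative sketch: distributors select corrective weights by
    input value; neurons sum the selected weights (no multiplication)."""

    def __init__(self, n_inputs, n_neurons, n_intervals=10, lo=0.0, hi=1.0):
        self.n_intervals, self.lo, self.hi = n_intervals, lo, hi
        # memory elements: one corrective weight per (input, interval, neuron)
        self.weights = np.zeros((n_inputs, n_intervals, n_neurons))

    def distribute(self, x):
        # distributor: map each input value onto an interval index
        idx = ((np.asarray(x, dtype=float) - self.lo)
               / (self.hi - self.lo) * self.n_intervals).astype(int)
        return np.clip(idx, 0, self.n_intervals - 1)

    def forward(self, x):
        idx = self.distribute(x)
        # each neuron adds up the selected corrective weight of every synapse
        return self.weights[np.arange(len(idx)), idx].sum(axis=0)
```

An untrained network (all corrective weights zero) produces a zero neuron sum for any input; training, per the claims below, adjusts only the selected weights.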
Representative Claims
1. A neural network comprising: a plurality of inputs of the neural network, each input configured to receive an input signal having an input value; a plurality of synapses, wherein each synapse is connected to one of the plurality of inputs and includes a plurality of corrective weights, wherein each corrective weight is established by a memory element that retains a respective weight value; a set of distributors, wherein each distributor is operatively connected to one of the plurality of inputs for receiving the respective input signal and is configured to select one or more corrective weights from the plurality of corrective weights in correlation with the input value; and a set of neurons, wherein: each neuron has at least one output and is connected with at least one of the plurality of inputs via one of the plurality of synapses; each neuron is configured to add up the weight values of the corrective weights selected from each synapse connected to the respective neuron and thereby generate a neuron sum; and the output of each neuron provides the respective neuron sum to establish an operational output signal of the neural network.

2. The neural network of claim 1, further comprising a weight correction calculator configured to receive a desired output signal having a value, determine a deviation of the neuron sum from the desired output signal value, and modify respective corrective weight values established by the corresponding memory elements using the determined deviation, such that adding up the modified corrective weight values to determine the neuron sum minimizes the deviation of the neuron sum from the desired output signal value to thereby generate a trained neural network.

3. The neural network of claim 2, wherein the trained neural network is configured to receive supplementary training using solely a supplementary input signal having a value and a corresponding supplementary desired output signal.

4. The neural network of claim 3, wherein, during training or before the supplementary training of the neural network, each of the plurality of synapses is configured to accept one or more additional corrective weights established by the respective memory elements.

5. The neural network of claim 2, wherein the neural network is configured to remove from the respective synapses, during or after training of the neural network, one or more corrective weights established by the respective memory elements to retain only a number of memory elements required to operate the neural network.

6. The neural network of claim 2, wherein the neural network is configured to accept at least one of an additional input, an additional synapse, and an additional neuron before or during training of the neural network to thereby expand operational parameters of the neural network.

7. The neural network of claim 2, wherein the neural network is configured to remove at least one of an input, a synapse, and a neuron that is not being used by the neural network before, during, or after training of the neural network to thereby simplify a structure of the neural network and modify operational parameters of the neural network.

8. The neural network of claim 2, wherein each memory element is established by an electrical device characterized by at least one of an electrical and a magnetic characteristic configured to define a respective weight value, wherein the respective at least one of the electrical and the magnetic characteristic of each electrical device is configured to be varied during training of the neural network, and wherein the weight correction calculator modifies the respective corrective weight values by varying the respective at least one of the electrical and the magnetic characteristic of the corresponding electrical devices.

9. The neural network of claim 8, wherein the electrical device is configured as one of a resistor, a memistor, a memristor, a transistor, a capacitor, a field-effect transistor, a photoresistor, or a magnetic dependent resistor.

10. The neural network of claim 2, wherein each memory element is established by a block of electrical resistors and includes a selector device configured to select one or more electrical resistors from the block using the determined deviation to establish each corrective weight.

11. The neural network of claim 10, wherein each memory element is additionally established by a block of electrical capacitors, and wherein the selector device is additionally configured to select capacitors using the determined deviation to establish each corrective weight.

12. The neural network of claim 2, wherein the neural network is configured as one of an analog, digital, and digital-analog network, such that at least one of the plurality of inputs, the plurality of synapses, the memory elements, the set of distributors, the set of neurons, the weight correction calculator, and the desired output signal is configured to operate in an analog, digital, and digital-analog format.

13. The neural network of claim 12, wherein the neural network is configured as the analog network, and wherein each neuron is established by one of a series and a parallel communication channel.

14. The neural network of claim 2, wherein the weight correction calculator is established as a set of differential amplifiers, and wherein each differential amplifier is configured to generate a respective correction signal.

15. The neural network of claim 1, wherein each of the distributors is a demultiplexer configured to select one or more corrective weights from the plurality of corrective weights in response to the received input signal.

16. The neural network of claim 1, wherein each distributor is configured to convert the received input signal into a binary code and select one or more corrective weights from the plurality of corrective weights in correlation with the binary code.

17. The neural network of claim 1, wherein the neural network is programmed into an electronic device having a memory, and wherein each memory element is stored in the memory of the electronic device.

18. A method of operating a utility neural network, the method comprising: processing data via the utility neural network using modified corrective weight values established by a separate analogous neural network during training thereof; and establishing an operational output signal of the utility neural network using the modified corrective weight values established by the separate analogous neural network; wherein the separate analogous neural network was trained via: receiving, via an input to the neural network, a training input signal having a training input value; communicating the training input signal to a distributor operatively connected to the input; selecting, via the distributor, in correlation with the training input value, one or more corrective weights from a plurality of corrective weights, wherein each corrective weight is defined by a weight value and is positioned on a synapse connected to the input; adding up the weight values of the selected corrective weights, via a neuron connected with the input via the synapse and having at least one output, to generate a neuron sum; receiving, via a weight correction calculator, a desired output signal having a value; determining, via the weight correction calculator, a deviation of the neuron sum from the desired output signal value; and modifying, via the weight correction calculator, respective corrective weight values using the determined deviation to establish the modified corrective weight values, such that adding up the modified corrective weight values to determine the neuron sum minimizes the deviation of the neuron sum from the desired output signal value to thereby train the neural network.

19. The method according to claim 18, wherein the utility neural network and the trained separate neural network include a matching neural network structure including a number of inputs, corrective weights, distributors, neurons, and synapses.

20. The method according to claim 18, wherein, in each of the utility neural network and the trained separate neural network, each corrective weight is established by a memory element that retains a respective weight value, and wherein in the separate neural network the memory element retains its respective weight value following training.
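Claims 18-20 describe training one network and then operating a separate, identically structured "utility" network on the trained corrective weight values. A minimal self-contained sketch, assuming a single neuron and an interval-based distributor (all names and values are illustrative, not from the patent):

```python
import numpy as np

def distribute(x, n_intervals=10):
    # distributor: input value in [0, 1] -> interval index
    return min(int(x * n_intervals), n_intervals - 1)

n_inputs, n_intervals = 3, 10
# one corrective weight (memory element) per (input, interval)
trained = np.zeros((n_inputs, n_intervals))

x = [0.15, 0.42, 0.87]
desired = 3.0

# one training pass: deviation of the neuron sum from the desired output
idx = [distribute(v) for v in x]
neuron_sum = sum(trained[i, j] for i, j in enumerate(idx))
deviation = desired - neuron_sum
for i, j in enumerate(idx):
    trained[i, j] += deviation / n_inputs  # spread the correction equally

# utility network: same structure, operates on the trained weight values
utility = trained.copy()
idx = [distribute(v) for v in x]
print(sum(utility[i, j] for i, j in enumerate(idx)))  # prints 3.0
```

Because only the weights selected for this input were corrected, a single pass drives the neuron sum to the desired value for this input; the memory elements retain those values for the utility network, as in claim 20.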
Patents cited by this patent (13)
Sachse Wolfgang H. (Ithaca NY) Grabec D. Igor (Ljubljana YUX), Adaptive, neural-based signal processor.
Villarreal James A. (Friendswood TX) Shelton Robert O. (Houston TX), Neural network for processing both spatial and temporal data with time based back-propagation.
Huang Hsin-Hao (Kaohsiung TWX) Lin Shui-Shun (Tallahassee FL) Knapp Gerald M. (Baton Rouge LA) Wang Hsu-Pin (Tallahassee FL), Supervised training of a neural network.