IPC Classification Information

Country / Type | United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.) |
Application Number | US-0112069 (2002-03-27)
Registration Number | US-7313550 (2007-12-25)
Inventors / Address |
- Kulkarni, Bhaskar Dattatray
- Tambe, Sanjeev Shrikrishna
- Lonari, Jayaram Budhaji
- Valecha, Neelamkumar
- Dheshmukh, Sanjay Vasantrao
- Shenoy, Bhavanishankar
- Ravichandran, Sivaraman
Applicant / Address |
- Council of Scientific & Industrial Research
Agent / Address |
Citation Information | Times cited: 9 / Patents cited: 6
Abstract
A method is described for improving the prediction accuracy and generalization performance of artificial neural network models in the presence of input-output example data containing instrumental noise and/or measurement errors. The presence of noise and/or errors in the input-output example data used for training the network models creates difficulties in accurately learning the nonlinear relationships existing between the inputs and the outputs. To effectively learn the noisy relationships, the methodology envisages creation of a large noise-superimposed sample input-output dataset using computer simulations. Here, a specific amount of Gaussian noise is added to each input/output variable in the example set, and the enlarged sample data set created thereby is used as the training set for constructing the artificial neural network model. The amount of noise to be added is specific to each input/output variable, and its optimal value is determined using a stochastic search and optimization technique, namely, genetic algorithms. The network trained on the noise-superimposed enlarged training set shows significant improvements in its prediction accuracy and generalization performance. The invented methodology is illustrated by its successful application to example data comprising instrumental errors and/or measurement noise from an industrial polymerization reactor and a continuous stirred tank reactor (CSTR).
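The core idea in the abstract, generating an enlarged training set by adding variable-specific Gaussian noise to each example pattern, can be sketched as follows. This is a minimal illustration, not the patented implementation: the function name, the `M` samples-per-pattern parameter, and the interpretation of a tolerance as a fractional standard deviation are all assumptions for the sketch.

```python
import numpy as np

def superimpose_noise(X, Y, x_tol, y_tol, M=25, seed=0):
    """Create a noise-superimposed enlarged dataset (illustrative sketch).

    X, Y   : example input/output arrays, shape (N, n_in) and (N, n_out)
    x_tol  : per-input-variable noise tolerance (fraction of the value) - assumed form
    y_tol  : per-output-variable noise tolerance (fraction of the value) - assumed form
    M      : number of noisy samples generated per example pattern
    """
    rng = np.random.default_rng(seed)
    X_big, Y_big = [], []
    for x, y in zip(np.asarray(X, float), np.asarray(Y, float)):
        # Gaussian noise whose standard deviation is proportional to the
        # variable-specific tolerance (one common interpretation).
        sx = np.abs(x) * np.asarray(x_tol)
        sy = np.abs(y) * np.asarray(y_tol)
        X_big.append(x + rng.normal(0.0, sx, size=(M, x.size)))
        Y_big.append(y + rng.normal(0.0, sy, size=(M, y.size)))
    return np.vstack(X_big), np.vstack(Y_big)
```

The enlarged arrays (M times the original number of patterns) would then serve as the training set for the neural network model, with the original example set retained as a test set, as claim 3 below describes.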
Representative Claims
The invention claimed is:

1. A computer-implemented method for improving the prediction accuracy and generalization performance of nonlinear artificial neural network models when an example set of input-output data, available for constructing the network model, comprises at least one of instrumental noise and measurement errors, said method comprising the steps of:
(a) generating a noise-superimposed enlarged input-output sample data set using computer simulations;
(b) generating, for each input-output pattern in the example set, M number of noise-superimposed sample input-output patterns using computer simulations;
(c) generating noise-superimposed sample input-output patterns using noise tolerance values, which are specific to each input-output variable;
(d) generating Gaussian distributed random numbers using computer simulations to create noise-superimposed sample input-output patterns;
(e) determining an exact amount of Gaussian noise to be added to each input-output variable in the example set by using a stochastic search and optimization technique; and
(f) constructing the nonlinear artificial neural network model using a training set, the training set comprising the computer-generated noise-superimposed sample input-output patterns,
wherein the nonlinear artificial neural network models comprise a model of an industrial polymerization process or a jacketed non-isothermal continuous stirred tank reactor, wherein input-output variables represent at least two of a temperature, a pressure level, a liquid, and a gas, wherein at least one input variable affects at least one output variable differently from another input variable, and wherein the exact amount of Gaussian noise for each of the input variables is determined using the stochastic search and optimization technique based on the effect of the respective input variable on said at least one output variable.

2. The method according to claim 1, wherein the exact amount of Gaussian noise to be added to each input-output variable of the example set, as determined by the genetic algorithms, is globally and not locally optimal.

3. The method according to claim 1, wherein the generalization performance of the artificial neural network model is monitored using the example set as a test set.

4. The method according to claim 1, wherein the artificial neural network architecture is feed-forward, and wherein, in the feed-forward artificial neural network architecture, information flow within the network is unidirectional, from input layer to output layer.

5. The method according to claim 1, wherein the neural network architecture is feed-forward, and wherein the feed-forward neural network architecture comprises at least one of multilayer perceptron (MLP) networks, radial basis function networks (RBFN), and counter-propagation neural networks (CPNN).

6. The method according to claim 1, wherein the algorithms used for constructing or training the artificial neural network model include error-back-propagation, conjugate gradient, Quickprop, and RPROP.

7. The method according to claim 1, wherein the stochastic search and optimization technique used to optimize the noise tolerances refers to genetic algorithms and related methods, namely, simulated annealing (SA), simultaneous perturbation stochastic approximation (SPSA), evolutionary algorithms (EA), and memetic algorithms (MA).

8. The method according to claim 1, wherein an enlarged noise-superimposed sample input-output data set is created using computer simulations from the small-sized example input-output set.
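Step (e) and claim 7 refer to a stochastic search over the per-variable noise tolerances. A minimal genetic-algorithm loop for such a search might look like the sketch below; the function name, population size, selection scheme, and mutation scale are all illustrative assumptions, not details from the patent, and `fitness` stands in for whatever cross-validation error the trained network yields under a candidate tolerance vector.

```python
import numpy as np

def ga_optimize_tolerances(fitness, n_vars, pop=20, gens=30,
                           lo=0.001, hi=0.1, seed=0):
    """Minimal genetic-algorithm sketch for per-variable noise tolerances.

    fitness : callable mapping a tolerance vector to a score (lower = better);
              in practice this would train/evaluate the network (assumed here).
    n_vars  : number of input-output variables, one tolerance each
    """
    rng = np.random.default_rng(seed)
    P = rng.uniform(lo, hi, size=(pop, n_vars))   # initial random population
    for _ in range(gens):
        scores = np.array([fitness(t) for t in P])
        elite = P[np.argsort(scores)[: pop // 2]]  # truncation selection
        # crossover: average randomly paired elite parents
        pa = elite[rng.integers(len(elite), size=pop - len(elite))]
        pb = elite[rng.integers(len(elite), size=pop - len(elite))]
        kids = 0.5 * (pa + pb)
        # mutation: small Gaussian perturbation, clipped to the search bounds
        kids += rng.normal(0.0, 0.005, size=kids.shape)
        P = np.vstack([elite, np.clip(kids, lo, hi)])
    scores = np.array([fitness(t) for t in P])
    return P[np.argmin(scores)]                    # best tolerance vector found
```

Averaging crossover and truncation selection are among the simplest GA operators; the patent's claim 7 notes that simulated annealing, SPSA, evolutionary algorithms, or memetic algorithms could fill the same role.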