IPC Classification Information
Country/Type | United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.) |
Application No. | US-0388858 (1999-09-01)
Inventors / Address |
- Narayan Srinivasa
- Yuri Owechko
Applicant / Address |
Agent / Address |
Citation Info | Cited by: 45 patents; Cites: 10 patents
Abstract
A boosting and pruning system and method for utilizing a plurality of neural networks, preferably those based on adaptive resonance theory (ART), in order to increase pattern classification accuracy is presented. The method utilizes a plurality of N randomly ordered copies of the input data, which is passed to a plurality of sets of booster networks. Each of the plurality of N randomly ordered copies of the input data is divided into a plurality of portions, preferably with an equal allocation of the data corresponding to each class for which recognition is desired. The plurality of portions is used to train the set of booster networks. The rules generated by the set of booster networks are then pruned in an intra-booster pruning step, which uses a pair-wise Fuzzy AND operation to determine rule overlap and to eliminate rules which are sufficiently similar. This process results in a set of intra-booster pruned booster networks. A similar pruning process is applied in an inter-booster pruning process, which eliminates rules from the intra-booster pruned networks with sufficient overlap. The final, derivative booster network captures the essence of the plurality of sets of booster networks and provides for higher classification accuracy than available using a single network.
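The intra-booster pruning step described above can be sketched in code. This is a minimal, hypothetical illustration only: it assumes rules are represented as non-negative weight vectors (as ART category prototypes typically are), takes the pair-wise Fuzzy AND to be the element-wise minimum, and uses an illustrative overlap ratio and threshold; the function names and the 0.9 threshold are assumptions, not taken from the patent.

```python
# Hypothetical sketch of intra-booster rule pruning via pair-wise Fuzzy AND.
# Rules are modeled as non-negative weight vectors (e.g., ART category
# prototypes); the overlap measure and threshold are illustrative assumptions.

def fuzzy_and(a, b):
    """Element-wise fuzzy AND (minimum) of two rule weight vectors."""
    return [min(x, y) for x, y in zip(a, b)]

def overlap(a, b):
    """Overlap ratio: |a AND b| / min(|a|, |b|), using the city-block norm."""
    inter = sum(fuzzy_and(a, b))
    return inter / min(sum(a), sum(b))

def prune_rules(rules, threshold=0.9):
    """Keep a rule only if it is not sufficiently similar to any kept rule."""
    kept = []
    for rule in rules:
        if all(overlap(rule, k) < threshold for k in kept):
            kept.append(rule)
    return kept

rules = [[0.9, 0.1, 0.0],
         [0.85, 0.15, 0.0],   # nearly identical to the first rule
         [0.0, 0.2, 0.8]]
print(len(prune_rules(rules)))  # 2: the near-duplicate rule is eliminated
```

The same pair-wise comparison, applied across the rule sets of different boosters rather than within one booster, would correspond to the inter-booster pruning step.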
Representative Claim
1. A neural network classifier boosting and pruning method including the steps of:
(a) providing a set of training data having inputs with correct classifications corresponding to the inputs;
(b) ordering the set of training data into a plurality Y of differently ordered data sets, where Y is the total number of differently ordered data sets and y represents a particular one of the Y differently ordered data sets;
(c) dividing each of the Y differently ordered data sets into a set of N particular data portions Dn,y, where N represents the total number of data portions into which the particular data set y was divided and n identifies a particular data portion among the N data portions derived from a particular data set y;
(d) associating a plurality X,y of booster classifiers Bx,y for each particular data set y, where X represents the total number of booster types associated with each particular data set y and x represents a particular booster type, the plurality X,y of booster classifiers Bx,y for each particular data set y including a terminal booster;
(e) training one of each plurality X,y of booster classifiers Bx,y with a particular data portion Dn,y, resulting in a plurality of rules;
(f) testing the booster classifier Bx,y trained in step (e) utilizing another particular data portion Dn,y, said testing resulting in correctly classified data and mistakenly classified data;
(g) creating a particular training data set Tx,y corresponding to each particular data set y, the contents of which are defined by: [EQU2] where M(Bx,y(.cndot.)) identifies data upon which mistakes were made by the associated booster classifier Bx,y, C(Bx,y(.cndot.)) identifies data for which correct classifications were made by the associated booster classifier Bx,y, and w represents independently selectable apportionment factors which apportion the amount of mistakenly classified data and the amount of correctly classified data, and j is a summation index which ranges from 1 to x-1, where x-1>0, and represents the total number of booster classifiers Bx,y from which correct examples are used for the particular training set Tx,y;
(h) training one of the plurality X,y of booster classifiers Bx,y associated with each particular data set y with the particular training data set Tx,y created in step (g) which corresponds to the same particular data set y, resulting in a plurality of rules;
(i) testing the booster classifier Bx,y trained in step (h) utilizing another particular data portion Dn,y from the same particular data set y, said testing resulting in correctly classified data and mistakenly classified data;
(j) repeating steps (g) through (i) X-3 additional times until a total of X-1 booster classifiers Bx,y have been trained and tested for each particular data set y;
(k) testing the X-1 booster classifiers Bx,y trained in steps (e) through (j) for each particular data set y utilizing the particular data portion y associated with each of the X-1 booster classifiers Bx,y, said testing resulting in correctly classified data and mistakenly classified data;
(l) creating a residual training set TR,y for each particular data set y including the mistakenly classified data resulting from step (k);
(m) training the terminal booster for each particular data set y utilizing the residual training set TR,y created in step (l), resulting in a plurality of rules for use in a classification apparatus for receiving data for classification and, based on the classification data, outputting classifications.
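Step (g) above builds each successive booster's training set from the previous booster's mistakes plus apportioned correct examples from earlier boosters; the exact composition is given by the elided equation [EQU2]. The sketch below is a rough, speculative rendering of that idea only: the classifier interface, the apportionment weights `w_mistake` and `w_correct`, and the sampling scheme are all assumptions, not the patent's formula.

```python
# Speculative sketch of the step (g) training-set construction: mix the
# latest booster's misclassified examples with fractions of the data that
# earlier boosters classified correctly. Weights and sampling are assumed.

import random

def build_training_set(boosters, data, w_mistake=1.0, w_correct=0.25, seed=0):
    """Return a training set T for the next booster.

    boosters: list of callables x -> predicted label; the last is the
              most recently trained booster.
    data:     list of (x, label) pairs.
    """
    rng = random.Random(seed)
    latest = boosters[-1]
    # M(B(.)): examples the latest booster got wrong, kept with weight w_mistake.
    mistakes = [(x, y) for x, y in data if latest(x) != y]
    t = [ex for ex in mistakes if rng.random() < w_mistake]
    # Sum over j = 1..x-1 of C(B_j(.)): correct examples of earlier boosters,
    # each kept with weight w_correct.
    for booster in boosters[:-1]:
        correct = [(x, y) for x, y in data if booster(x) == y]
        t += [ex for ex in correct if rng.random() < w_correct]
    return t

b1 = lambda x: int(x > 0.5)                # toy threshold classifier
data = [(0.2, 0), (0.7, 1), (0.6, 0)]      # b1 misclassifies the last point
print(build_training_set([b1], data))      # [(0.6, 0)]
```

With `w_mistake=1.0` every mistake is retained, so the next booster focuses on what its predecessor got wrong, which is the essence of the boosting scheme the claim describes.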