IPC Classification Information
Country/Type | United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.) |
Application number | US-0680628 (2000-10-06)
Inventors / Address |
- Hung, Henry
- Laskoskie, Clarence
Applicant / Address |
Agent / Address |
Citation information | Cited by: 50 / Patents cited: 2
Abstract
A variable chirp optical modulator is provided. An optical waveguide is split for part of its length into first and second waveguide arms. Electrode pairs are positioned to be proximate a first portion of corresponding waveguide arms. The lengths of each of the electrodes are different and are selected to provide a predetermined level of chirp.
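The abstract turns on one relationship: making the two electrode lengths unequal fixes the modulator's chirp. Below is a minimal sketch of that relationship, assuming a dual-arm interferometric modulator at quadrature with push-pull drive and a per-arm phase depth proportional to electrode length; the function and the small-signal formula are illustrative assumptions, not taken from the patent.

```python
def chirp_parameter(l1: float, l2: float) -> float:
    """Small-signal chirp (alpha) of a dual-arm modulator driven push-pull,
    assuming each arm's phase-modulation depth is proportional to its
    electrode length (an illustrative assumption, not from the patent).

    Equal lengths give alpha = 0 (chirp-free); unequal lengths give a
    fixed, predetermined chirp set by the length asymmetry.
    """
    if l1 <= 0 or l2 <= 0:
        raise ValueError("electrode lengths must be positive")
    return (l1 - l2) / (l1 + l2)

# Example: a 10% length asymmetry yields alpha of about 0.05.
print(chirp_parameter(1.05e-3, 0.95e-3))  # -> 0.05
```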
Representative Claims
… the step of splitting the training data further comprises the steps of:
a. computing for each feature vector in the training data a difference vector between the hyperplane point and the feature vector;
b. calculating an inner product of the hyperplane normal and the difference vector calculated in step a;
c. identifying an algebraic sign of the inner product; and
d. determining the side to be left if the algebraic sign is negative and right if the algebraic sign is positive.

5. The method of claim 1 further comprising the steps of:
a. traversing the classifier tree with a feature vector to be classified until a leaf node is reached, wherein the tree is traversed to the left or to the right at each branch node according to whether the feature vector resides to the left or the right of the branch node hyperplane;
b. deciding, upon arriving at a leaf node, how to classify the feature vector, wherein the decision how to classify the feature vector is made by computing log likelihoods for the feature vector for each mixture in the leaf node classifier and classifying the feature vector in the class identified by the mixture having the highest log likelihood; and
c. whereby a feature vector is classified.

6. A method utilizing machine-readable data storage of a set of machine-executable instructions for using a data processing system to perform a method of machine learning, in which the learning is an assignment of a feature vector to a classification carried out by creating and using first a binary classifier tree having nodes, which nodes further comprise branch nodes and leaf nodes, and second a Bayesian classifier, to obtain a result, the method comprising the steps of:
a. determining a kappa threshold and a minimum dataset size;
b. using a binary tree classifier to create a node, wherein the node comprises training data having a dataset size, the training data comprising multiple sets of feature vectors and corresponding classifications;
c. assuming that the node just constructed is a leaf node;
d. constructing a Bayesian leaf node classifier for all of the training data in the node by use of the Expectation-Maximization ("EM") algorithm;
e. forming a confusion matrix comprising m² numbers, where m is the number of classes represented by mixtures in a leaf node classifier and the ij-th element of the matrix comprises the number of training feature vectors belonging to class i and classified in class j;
f. computing a kappa statistic from the numbers in the confusion matrix;
g. comparing the kappa statistic to the kappa threshold;
h. comparing the dataset size of the training data to the minimum dataset size; and
i. determining the node to be a branch node if the dataset size of the training data exceeds the minimum dataset size;
j. creating, for each node determined to be a branch node, a branch node hyperplane comprising a hyperplane point and a hyperplane normal, this step further comprising the steps of:
(1) constructing a branch mixture comprising two kernels, a first branch mixture kernel and a second branch mixture kernel, wherein the branch mixture is constructed by using the EM algorithm with all the training data in the branch node;
(2) calculating the hyperplane point by identifying the point along the line connecting the centers of the two branch mixture kernels such that the kernel log likelihoods at the hyperplane point are equal, wherein this step fails if the hyperplane point does not exist;
(3) constructing, if the hyperplane point exists, the equivocation hyperconic section comprising a locus of points for which the Mahalanobis distances between the points and the branch mixture kernels are the same; and
(4) calculating the hyperplane normal as the normal to the equivocation hyperconic section at the hyperplane point;
k. determining the node to be a leaf node if the hyperplane point does not exist;
l. splitting the training data for the branch node into two subsets, a left subset and a right subset, according to which side of the branch node hyperplane each element of training data resides, this step comprising the further steps of:
(1) computing for each feature vector in the training data a difference vector between the hyperplane point and the feature vector;
(2) calculating an inner product of the hyperplane normal and the difference vector calculated in step l.(1);
(3) identifying an algebraic sign of the inner product;
(4) determining the side to be left if the algebraic sign is negative and right if the algebraic sign is positive;
m. passing the subsets to the next nodes recursively to be created by passing the left subset to a next left node and the right subset to a next right node;
n. creating, for each node determined to be a branch node, a left node pointer and a right node pointer for pointing to a next layer of nodes recursively to be created in the binary classifier tree; and
o. repeating steps b through h recursively until the application of step c or step e results in a determination that the current node is a leaf node.

7. The method of claim 6 further comprising the step of determining how a feature vector is to be classified in a class identified by a mixture in the leaf node classifier, this step comprising the further steps of:
a. traversing the classifier tree with a feature vector to be classified until a leaf node is reached, this step comprising the further steps of:
(1) calculating for a branch node the inner product of the hyperplane normal with the difference between the hyperplane point and the feature vector;
(2) determining an algebraic sign of the inner product;
(3) traversing the classifier tree to a next node, wherein the traversal is to the left if the algebraic sign is negative and to the right if the algebraic sign is positive;
(4) repeating steps 7.a.(1) through 7.a.(3) until a leaf node is reached;
b. deciding, upon arriving at a leaf node, how to classify the feature vector, this step comprising the further steps of:
(1) computing a kernel log likelihood for the feature vector with respect to each kernel in each mixture in the leaf node classifier, wherein the kernel log likelihood is computed by use of the Mahalanobis distance between the feature vector and the kernel, the size of the kernel, and normalizing factors, wherein the kernel log likelihood of a feature vector x with respect to a kernel with mean m, covariance C and dimension n is:
-(1/2)(x-m)^T C^(-1) (x-m) - (1/2) log|C| - (n/2) log 2π;
(2) computing a mixture log likelihood for each mixture in the leaf node classifier, the log likelihood for each mixture comprising the log of the weighted sum of antilogarithms of kernel log likelihoods for all kernels in the mixture, wherein the mixture log likelihood of feature vector x with respect to a mixture is:
log Σ_kernel weight(kernel) · e^(kernel log likelihood(x, kernel)); and
(3) classifying the feature vector, wherein the feature vector is classified by assigning it to the class identified by the mixture having the highest mixture log likelihood among all of the mixtures in the leaf node classifier;
(4) whereby a feature vector is classified.

8. The method of claim 6 further comprising the step of determining how a feature vector is to be classified in a class identified by a mixture in the leaf node classifier, this step comprising the further steps of:
a. traversing the classifier tree with a feature vector to be classified until a leaf node is reached, this step comprising the further steps of:
(1) calculating an inner product of the hyperplane normal and the difference vector between the hyperplane point and the feature vector;
(2) determining an algebraic sign of the inner product;
(3) traversing the classifier tree to a next node, wherein the traversal is to the left if the algebraic sign is negative and to the right if the algebraic sign is positive;
(4) repeating steps 8.a.(1) through 8.a.(3) until a leaf node is reached …
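Claim 6, steps e through g, decides leaf versus branch from a kappa statistic over the m × m confusion matrix. The claim does not give the formula; the sketch below assumes Cohen's kappa, the usual choice for chance-corrected agreement.

```python
import numpy as np

def kappa_statistic(confusion: np.ndarray) -> float:
    """Cohen's kappa from an m x m confusion matrix whose (i, j) entry
    counts training vectors of class i classified as class j.
    (The claim names a kappa statistic but no formula; Cohen's kappa
    is an assumption here.)"""
    n = confusion.sum()
    p_observed = np.trace(confusion) / n  # agreement on the diagonal
    # chance agreement from row and column marginals
    p_expected = (confusion.sum(axis=1) @ confusion.sum(axis=0)) / n**2
    return float((p_observed - p_expected) / (1.0 - p_expected))
```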
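Claim 6, step j, constructs the hyperplane point and normal from the two branch-mixture kernels. Below is one way to realize it, assuming Gaussian kernels: bisection finds the equal-likelihood point on the segment between the kernel centers, and the gradient of the log-likelihood difference serves as the normal. Note the sketch equates log likelihoods rather than raw Mahalanobis distances; the two coincide up to the determinant normalization. All names are mine.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import multivariate_normal

def hyperplane_point_and_normal(m1, c1, m2, c2):
    """Sketch of claim 6.j for Gaussian kernels. Returns (point, normal),
    or None when no equal-likelihood point exists on the segment between
    the centers (claim 6.k: the node then becomes a leaf)."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)

    def ll_diff(t):
        # difference of the two kernel log likelihoods along the segment
        x = m1 + t * (m2 - m1)
        return (multivariate_normal.logpdf(x, mean=m1, cov=c1)
                - multivariate_normal.logpdf(x, mean=m2, cov=c2))

    if ll_diff(0.0) * ll_diff(1.0) > 0:  # no sign change: no crossing
        return None
    t_star = brentq(ll_diff, 0.0, 1.0)   # bisection for the equal point
    point = m1 + t_star * (m2 - m1)
    # The equivocation surface is the zero set of (ll1 - ll2); its normal
    # at the point is the gradient: -C1^(-1)(x-m1) + C2^(-1)(x-m2).
    normal = (-np.linalg.solve(c1, point - m1)
              + np.linalg.solve(c2, point - m2))
    return point, normal / np.linalg.norm(normal)
```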
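The splitting test of claims 4 and 6.l, and the traversal of claims 5, 7, and 8, reduce to a sign test on a single inner product. A minimal sketch; the dict node layout and the direction of the difference vector are conventions the claims leave open.

```python
import numpy as np

def side_of_hyperplane(point, normal, x):
    """Return "left" or "right" for feature vector x relative to a branch
    node's hyperplane, following the claimed steps. Taking the difference
    as (point - x) is a sign convention the claims do not fix."""
    diff = np.asarray(point) - np.asarray(x)          # step (1): difference vector
    inner = float(np.dot(np.asarray(normal), diff))   # step (2): inner product
    return "left" if inner < 0 else "right"           # steps (3)-(4): sign decides

def traverse_to_leaf(node, x):
    """Claims 5/7/8: walk branch nodes by the sign test until a leaf.
    Branch nodes carry "point", "normal", "left", "right"; leaf nodes
    carry a "classifier" (layout is illustrative)."""
    while "classifier" not in node:
        node = node[side_of_hyperplane(node["point"], node["normal"], x)]
    return node
```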
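Claim 7.b spells out the leaf classifier: a Gaussian kernel log likelihood, a log of the weighted sum of antilogarithms per mixture, and an argmax over mixtures. The sketch below uses SciPy's logsumexp for numerical stability; the (weight, mean, cov) layout is an assumption, not from the patent.

```python
import numpy as np
from scipy.special import logsumexp

def kernel_log_likelihood(x, mean, cov):
    """Gaussian log density, as in claim 7.b.(1):
    -(1/2)(x-m)^T C^(-1) (x-m) - (1/2) log|C| - (n/2) log 2π."""
    diff = np.asarray(x, float) - np.asarray(mean, float)
    maha = diff @ np.linalg.solve(cov, diff)  # squared Mahalanobis distance
    _, logdet = np.linalg.slogdet(cov)        # log|C|
    return -0.5 * (maha + logdet + len(diff) * np.log(2 * np.pi))

def mixture_log_likelihood(x, kernels):
    """Claim 7.b.(2): log of the weighted sum of antilogarithms of the
    kernel log likelihoods, computed stably with log-sum-exp.
    kernels is a list of (weight, mean, cov) triples (layout is mine)."""
    lls = np.array([kernel_log_likelihood(x, m, c) for _, m, c in kernels])
    weights = np.array([w for w, _, _ in kernels])
    return logsumexp(lls, b=weights)

def classify_at_leaf(x, class_mixtures):
    """Claim 7.b.(3): assign x to the class whose mixture scores highest.
    class_mixtures maps class label -> list of (weight, mean, cov)."""
    return max(class_mixtures,
               key=lambda c: mixture_log_likelihood(x, class_mixtures[c]))
```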