IPC Classification Information
Country/Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.): (not listed)
Application No.: US-0398116 (filed 2009-03-04)
Registration No.: US-8155807 (granted 2012-04-10)
Inventors:
- Doria, David M.
- Frankot, Robert T.
Applicant: (not listed)
Agent: Christie, Parker & Hale, LLP
Citation information: cited by 1 patent; cites 8 patents
Abstract
A method of predicting a target type in a set of target types from at least one image is provided. At least one image is obtained. A first and second set of confidence values and associated azimuth angles are determined for each target type in the set of target types from the at least one image. The first and second set of confidence values are fused for each of the azimuth angles to produce a fused curve for each target type in the set of target types. When multiple images are obtained, first and second set of possible detections are compiled corresponding to regions of interest in the multiple images. The possible detections are associated by regions of interest. The fused curves are produced for every region of interest. In the embodiments, the target type is predicted from the set of target types based on criteria concerning the fused curve.
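The fusion-and-prediction flow described in the abstract can be sketched as follows. This is a minimal illustration, assuming both confidence sets are sampled on a shared, uniform azimuth grid and fused by summation; the function and variable names are illustrative, not from the patent.

```python
import numpy as np

def fuse_and_predict(conf_a, conf_b, azimuths, window_deg=360.0):
    """Fuse two confidence-vs-azimuth curves per target type (summation
    fusion) and predict the type with the largest area under its fused
    curve within a sliding azimuth-angle window.

    conf_a, conf_b : dicts mapping target type -> confidence array
                     sampled at the shared `azimuths` grid (degrees).
    """
    step = azimuths[1] - azimuths[0]              # uniform grid spacing, degrees
    win = max(1, int(round(window_deg / step)))   # window length in samples
    best_type, best_area = None, float("-inf")
    for t in conf_a:
        fused = conf_a[t] + conf_b[t]             # summation fusion
        # wrap the curve so the window can slide across the 0/360 boundary
        ext = np.concatenate([fused, fused[:win - 1]])
        areas = np.convolve(ext, np.ones(win), mode="valid") * step
        if areas.max() > best_area:
            best_type, best_area = t, float(areas.max())
    return best_type, best_area
```

With a 360° window this reduces to comparing total areas under the fused curves; narrower windows favor target types whose evidence concentrates around one azimuth.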
Representative Claims
1. A method of predicting a target type in a set of target types from multiple images, comprising: obtaining first and second images; compiling a first set of possible detections corresponding to regions of interest within the first image and a second set of possible detections corresponding to regions of interest within the second image; associating each of the regions of interest from the first set of possible detections with a corresponding one of the regions of interest from the second set of possible detections; determining a first set of confidence values and associated azimuth angles for each target type in the set of target types from the first image; determining a second set of confidence values and associated azimuth angles for each target type in the set of target types from the second image; fusing the first set of confidence values and the second set of confidence values in accordance with the associated regions of interest for each of the azimuth angles to produce a fused curve for each target type in the set of target types, each point in the fused curve being derived from one of the first set of confidence values and a corresponding one of the second set of confidence values for each azimuth angle; and predicting the target type from the set of target types based on at least one of a maximum value of the fused curve, a maximum area under the fused curve and a maximum area under the fused curve within an azimuth angle window.

2. The method of claim 1, the method further comprising: normalizing each of the first set of confidence values by subtracting a mean value of the first set of confidence values from each of the first set of confidence values and dividing the result by a standard deviation of the first set of confidence values; and normalizing each of the second set of confidence values by subtracting a mean value of the second set of confidence values from each of the second set of confidence values and dividing the result by a standard deviation of the second set of confidence values.

3. The method of claim 1, the method further comprising: aligning the first set of confidence values with the second set of confidence values for each target type in the set of target types by adding an angular offset to each of the azimuth angles associated with the second set of confidence values in proportion to a relative offset between the azimuth angles associated with the first set of confidence values and the azimuth angles associated with the second set of confidence values.

4. The method of claim 3, wherein the target type is further predicted by: determining an area under the fused curve for each of a number of azimuth angle windows and for each target type in the set of target types; determining a maximum area under the fused curve from among the areas corresponding to each of the azimuth angle windows for each target type in the set of target types; and comparing each of the maximum areas to determine an absolute maximum area, the target type with the absolute maximum area being the predicted target type.

5. The method of claim 1, wherein the first image and the second image are obtained from a first sensor.

6. The method of claim 3, wherein the first image is obtained from the first sensor and the second image is obtained from a second sensor.

7. The method of claim 1, wherein the azimuth angle window is 360°.
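The normalization in claim 2 is a per-source z-score, which puts both sensors' confidence scales on a common footing before fusion. A minimal sketch (the function name is illustrative, not from the patent):

```python
import numpy as np

def normalize_confidences(conf):
    """Normalize a set of confidence values as in claim 2: subtract the
    set's mean from each value and divide by the set's standard deviation."""
    conf = np.asarray(conf, dtype=float)
    return (conf - conf.mean()) / conf.std()
```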
8. The method of claim 1, wherein the azimuth angle window is less than or equal to 1°.

9. The method of claim 1, wherein the azimuth angle window is greater than 1° and less than 360°.

10. The method of claim 1, wherein when the target has a first characteristic, the azimuth angle window is less than or equal to 1°; when the target has a second characteristic, the azimuth angle window is 360°; and when the target has a third characteristic, the azimuth angle window is greater than 1° and less than 360°.

11. The method of claim 10, wherein the first characteristic is asymmetry, the second characteristic is symmetry, and the third characteristic is semi-symmetry and semi-asymmetry.

12. The method of claim 1, wherein the fused curve is a summation of the first set of confidence values and the second set of confidence values.

13. The method of claim 1, wherein the fused curve is a product of the first set of confidence values and the second set of confidence values.

14. The method of claim 1, further comprising: estimating uncertainty in the first set of confidence values and associated azimuth angles and applying a density function of the estimated uncertainty to generate a weighting function; weighting the first set of confidence values and associated azimuth angles in accordance with the weighting function; estimating uncertainty in the second set of confidence values and associated azimuth angles and applying a density function of the estimated uncertainty to generate a weighting function; and weighting the second set of confidence values and associated azimuth angles in accordance with the weighting function.
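Claim 14 weights each confidence curve by a density function of its estimated uncertainty. The patent does not fix a particular density, so the sketch below assumes a Gaussian density over azimuth uncertainty; the function name and the choice of density are assumptions for illustration.

```python
import numpy as np

def weight_by_uncertainty(conf, azimuths, sigma_deg):
    """Weight a confidence-vs-azimuth curve by a Gaussian density of the
    estimated azimuth uncertainty (claim 14 style): each output sample is
    a density-weighted average over its angular neighborhood."""
    conf = np.asarray(conf, dtype=float)
    # circular angular distance between every pair of grid azimuths
    d = np.abs(azimuths[:, None] - azimuths[None, :])
    d = np.minimum(d, 360.0 - d)
    w = np.exp(-0.5 * (d / sigma_deg) ** 2)
    w /= w.sum(axis=1, keepdims=True)   # rows sum to 1
    return w @ conf
```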
15. A method of predicting a target type in a set of target types from an image, the method comprising: obtaining the image; determining a first set of confidence values and associated azimuth angles for each target type in the set of target types from the image; determining a second set of confidence values and associated azimuth angles for each target type in the set of target types from the image; fusing the first set of confidence values and the second set of confidence values for each of the azimuth angles to produce a fused curve for each target type in the set of target types, each point in the fused curve being derived from one of the first set of confidence values and a corresponding one of the second set of confidence values for each azimuth angle; and predicting the target type from the set of target types based on at least one of a maximum value of the fused curve, a maximum area under the fused curve, and a maximum area under the fused curve within an azimuth angle window.

16. The method of claim 15, wherein the target type is further predicted by: determining an area under the fused curve for each of the azimuth angle windows and for each target type in the set of target types; determining a maximum area under the fused curve from among the areas corresponding to each of the azimuth angle windows for each target type in the set of target types; and comparing each of the maximum areas to determine an absolute maximum area, the target type with the absolute maximum area being the predicted target type.

17. The method of claim 15, wherein the azimuth angle window is 360°.

18. The method of claim 15, wherein the azimuth angle window is less than or equal to 1°.

19. The method of claim 15, wherein the azimuth angle window is greater than 1° and less than 360°.

20. The method of claim 15, wherein when the target has a first characteristic, the azimuth angle window is less than or equal to 1°; when the target has a second characteristic, the azimuth angle window is 360°; and when the target has a third characteristic, the azimuth angle window is greater than 1° and less than 360°.

21. The method of claim 20, wherein the first characteristic is asymmetry, the second characteristic is symmetry, and the third characteristic is semi-symmetry and semi-asymmetry.

22. The method of claim 15, wherein the fused curve is a summation of the first set of confidence values and the second set of confidence values.

23. The method of claim 15, wherein the fused curve is a product of the first set of confidence values and the second set of confidence values.

24. An automatic target recognition fusion system comprising: at least one sensor configured to obtain images of a scene; at least one fusion processor configured to compile a first set of possible detections corresponding to regions of interest within a first image, compile a second set of possible detections corresponding to regions of interest within a second image, and associate each one of the regions of interest from the first set of possible detections with a corresponding one of the regions of interest from the second set of possible detections; and at least one automatic target recognition processor configured to determine a first set of confidence values and associated azimuth angles for each target type in a set of target types from the first image and determine a second set of confidence values and associated azimuth angles for each target type in the set of target types from the second image, wherein the fusion processor is further configured to fuse the first set of confidence values and the second set of confidence values in accordance with the associated regions of interest for each of the azimuth angles to produce a fused curve for each target type in the set of target types, each point in the fused curve being derived from one of the first set of confidence values and a corresponding one of the second set of confidence values for each azimuth angle, and predict the target type from the set of target types based on at least one of a maximum value of the fused curve, a maximum area under the fused curve and a maximum area under the fused curve within an azimuth angle window.

25. The automatic target recognition fusion system of claim 24, wherein the fusion processor is further configured to normalize each of the first set of confidence values by subtracting a mean value of the first set of confidence values from each of the first set of confidence values and dividing the result by a standard deviation of the first set of confidence values, and normalize each of the second set of confidence values by subtracting a mean value of the second set of confidence values from each of the second set of confidence values and dividing the result by a standard deviation of the second set of confidence values.

26. The automatic target recognition fusion system of claim 24, wherein the fusion processor is further configured to align the first set of confidence values with the second set of confidence values for each target type in the set of target types by adding an angular offset to each of the azimuth angles associated with the second set of confidence values in proportion to a relative offset between the azimuth angles associated with the first set of confidence values and the azimuth angles associated with the second set of confidence values.
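Claims 3 and 26 add an angular offset to the second curve's azimuth axis so that both curves share a common azimuth reference before fusing. Assuming the offset is known from sensor geometry and the grid is uniform, this reduces to a circular shift; the sketch below is an illustration under those assumptions, not the patent's implementation.

```python
import numpy as np

def align_curves(conf_second, azimuths, offset_deg):
    """Apply a known angular offset to the second set of confidence values
    (claims 3 and 26) by circularly shifting it along the azimuth grid.
    The offset is rounded to the nearest grid step."""
    step = azimuths[1] - azimuths[0]
    shift = int(round(offset_deg / step))
    return np.roll(np.asarray(conf_second, dtype=float), shift)
```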