IPC Classification Information
Country / Type | United States (US) Patent | Granted
International Patent Classification (IPC, 7th ed.) |
Application Number | US-0746134 (2007-05-09)
Registration Number | US-8260036 (2012-09-04)
Inventors / Address | Hamza, Rida M.; Blaser, Robert A.
Applicant / Address | Honeywell International Inc.
Agent / Address | Shumaker & Sieffert, P.A.
Citation Information | Times cited: 13 | Patents cited: 5
Abstract
Methods and apparatus are provided for detecting and tracking a target. Images are captured from a field of view by at least two cameras mounted on one or more platforms. These images are analyzed to identify landmarks within the images which can be used to track the target's position from frame to frame. The images are fused (merged) with information about the target or platform position from at least one sensor to detect and track the target. The target's position with respect to the position of the platform is displayed, or the position of the platform relative to the target is displayed.
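The abstract describes fusing camera-derived position estimates with platform or target position information from another sensor before displaying the relative position. The following is a minimal Python sketch of that general idea, not code from the patent: the function names, the variance-weighted fusion rule, and all numeric values are illustrative assumptions.

```python
# Hypothetical sketch (not from the patent): fuse a camera-derived target
# position with an inertial/navigation position reading, then express the
# target's position relative to the platform for display.
import numpy as np

def fuse_estimates(camera_pos, sensor_pos, camera_var, sensor_var):
    """Variance-weighted fusion of two independent position estimates."""
    w_cam = sensor_var / (camera_var + sensor_var)
    w_sen = camera_var / (camera_var + sensor_var)
    return w_cam * np.asarray(camera_pos) + w_sen * np.asarray(sensor_pos)

def relative_position(target_pos, platform_pos):
    """Target position expressed relative to the platform (for display)."""
    return np.asarray(target_pos) - np.asarray(platform_pos)

if __name__ == "__main__":
    cam_estimate = [10.2, 4.9, 0.0]   # from landmark tracking in the images
    ins_estimate = [10.0, 5.1, 0.0]   # from an inertial/navigation sensor
    fused = fuse_estimates(cam_estimate, ins_estimate, camera_var=0.5, sensor_var=0.2)
    print("fused target position:", fused)
    print("target relative to platform:", relative_position(fused, [0.0, 0.0, 0.0]))
```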
Representative Claims
1. A method comprising: with at least two cameras mounted on one or more moving platforms at different perspectives, capturing an image of a field of view comprising a target; visually triangulating the field of view based on the images from the at least two cameras using a perspective plane approach, wherein visually triangulating the field of view comprises: determining a tilt angle, a tilt vector, and an elevation angle of each camera based on information from an inertial sensor; for each camera, constructing an imaging plane vector (xc) equation set as a function of target coordinates of the target observed by the respective camera, and the tilt angle, the tilt vector, and the elevation angle of the camera; for each camera, solving the imaging plane vector (xc) equation set to extract an observation vector (x0); and mapping the observation vector (x0) of each camera into common coordinates; determining a position of the target relative to a position of the one or more moving platforms based on the mapped observation vectors (x0); and displaying the position of the target with respect to the position of the one or more platforms.
2. The method of claim 1, further comprising predicting a future position of the target prior to capturing a next image of the field of view.
3. The method of claim 2, wherein the predicting is performed via a Kalman filter and information from the inertial sensor.
4. The method of claim 1, wherein the inertial sensor comprises a plurality of inertial sensors, each inertial sensor being associated with a respective one of the cameras, the method further comprising, for each camera, initializing a camera position using the respective inertial sensor.
5. The method of claim 1, further comprising displaying the position of the one or more platforms with respect to the target.
6. The method of claim 1, further comprising illuminating the field of view with a structured light source prior to capturing the images.
7. The method of claim 6, wherein the structured light source produces a lighting pattern.
8. The method of claim 1, further comprising iterating an estimation of the position of the target and the position of the one or more platforms.
9. The method of claim 1, the method further comprising determining at least one of an absolute position, relative position, absolute attitude, relative attitude, acceleration, velocity, or attitude of the one or more platforms, or absolute navigation information or relative navigation information based on the inertial sensor, or determining synthetic vision information from a synthetic vision database.
10. The method of claim 9, further comprising processing information about the one or more platforms using a Kalman filter in the event of failure of the inertial sensor.
11. The method of claim 1, further comprising: for each camera, merging information about the camera from the inertial sensor with the image captured by the cameras; and checking the position of the target based on the merged information.
12. The method of claim 1, wherein mapping the observation vector (x0) of each camera into common coordinates comprises mapping the observation vector (x0) of each camera into common coordinates using a scaling parameter of a feature within the images captured by each of the cameras.
13. The method of claim 12, further comprising searching the images captured by the cameras for the feature using scale invariant features technologies.
14. A method comprising: with at least two cameras mounted on a moving platform at different perspectives, capturing an image of a field of view comprising a target; visually triangulating the field of view based on the images from the at least two cameras using the perspective plane approach, wherein visually triangulating the field of view comprises: determining the tilt angle, tilt vector, and elevation angle of each camera based on information from an inertial sensor; for each camera, constructing an imaging plane vector (xc) equation set as a function of target coordinates of the target observed by the respective camera, and the tilt angle, the tilt vector, and the elevation angle of the camera; for each camera, solving the equation set to extract the observed object plane vector (x0); and mapping the observation vector (x0) of each camera into common coordinates; determining a position of the platform relative to a position of the target based on the mapped observation vectors; and displaying the position of the platform with respect to the position of the target.
15. The method of claim 14, further comprising predicting a future position of the platform using a Kalman filter.
16. A system comprising: at least two cameras mounted on a moving platform, wherein the at least two cameras are each configured to capture an image of a field of view comprising a target; an inertial sensor; a processor configured to visually triangulate the field of view based on the images from the at least two cameras using a perspective plane approach, wherein the processor is configured to visually triangulate the field of view by at least: determining a tilt angle, a tilt vector, and an elevation angle of each camera based on information from an inertial sensor, for each camera, constructing an imaging plane vector (xc) equation set as a function of target coordinates of the target observed by the respective camera, and the tilt angle, the tilt vector, and the elevation angle of the camera, for each camera, solving the equation set to extract the observed object plane vector (x0), and mapping the observation vector (x0) of each camera into common coordinates, and wherein the processor is configured to determine a position of the target relative to a position of the moving platform based on the mapped observation vectors; and a display device configured to display the position of the target with respect to the position of the moving platform or the position of the moving platform with respect to the position of the target.
17. The system of claim 16, wherein the processor is configured to predict a future position of the target prior to the at least two cameras capturing a next image of the field of view.
18. The system of claim 17, wherein the processor is configured to predict a future position of the target using a Kalman filter.
19. The system of claim 16, wherein the inertial sensor comprises a plurality of inertial sensors, each inertial sensor being associated with a respective one of the cameras, wherein the processor is configured to initialize a camera position of each camera using the respective inertial sensor.
20. The system of claim 16, wherein the inertial sensor is configured to provide information indicative of at least one of an absolute position, relative position, absolute attitude, relative attitude, acceleration, velocity, or attitude of the platform, or absolute navigation information, relative navigation information, or synthetic vision information from a synthetic vision database.
21. The system of claim 20, further comprising a light source configured to illuminate the field of view with structured light prior to capturing the images.
22. The system of claim 21, wherein the light source is configured to produce a lighting pattern.
23. The system of claim 16, wherein the processor is configured to predict a next position of the platform using a Kalman filter.
24. The system of claim 16, wherein the inertial sensor is configured to provide information indicative of at least one of an absolute position, relative position, absolute attitude, relative attitude, acceleration, velocity, or attitude of the moving platform, or absolute navigation information, relative navigation information, or synthetic vision information from a synthetic vision database.
25. The system of claim 16, wherein the processor is configured to map the observation vector (x0) of each camera into common coordinates by at least mapping the observation vector (x0) of each camera into common coordinates using a scaling parameter of a feature within the images captured by each of the cameras.
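Claims 1, 14, and 16 recite a perspective-plane visual triangulation in which each camera's tilt angle, tilt vector, and elevation angle (taken from an inertial sensor) are used to map its observation vector (x0) into common coordinates before the target or platform position is determined. The Python sketch below is a hypothetical illustration of that general idea, not the patent's actual equation set: it assumes particular tilt/elevation rotation conventions, rotates each camera's ray into a shared frame, and triangulates the target as the least-squares intersection of the rays. The Kalman-filter prediction recited in claims 2-3, 15, 17-18, and 23 would operate on these per-frame estimates.

```python
# Hypothetical sketch of perspective-plane style triangulation from two
# cameras whose orientation angles come from an inertial sensor.
import numpy as np

def rotation_from_angles(tilt, elevation):
    """Rotation taking a camera-frame vector into common coordinates.
    Assumes (hypothetically) tilt is a rotation about the z-axis and
    elevation a rotation about the x-axis; the patent's conventions may differ."""
    ct, st = np.cos(tilt), np.sin(tilt)
    ce, se = np.cos(elevation), np.sin(elevation)
    rot_z = np.array([[ct, -st, 0.0],
                      [st,  ct, 0.0],
                      [0.0, 0.0, 1.0]])
    rot_x = np.array([[1.0, 0.0, 0.0],
                      [0.0,  ce, -se],
                      [0.0,  se,  ce]])
    return rot_z @ rot_x

def triangulate(origins, directions):
    """Least-squares point closest to a set of 3-D rays (origin + t * direction)."""
    a = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        p = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        a += p
        b += p @ o
    return np.linalg.solve(a, b)

if __name__ == "__main__":
    # Two cameras on the platform(s); angles come from the inertial sensor(s).
    origins = [np.array([0.0, 0.0, 0.0]), np.array([4.0, 0.0, 0.0])]
    camera_rays = [np.array([0.3, 0.1, 1.0]), np.array([-0.2, 0.1, 1.0])]  # camera-frame rays
    angles = [(0.10, 0.05), (-0.08, 0.04)]  # (tilt, elevation) per camera, radians
    common = [rotation_from_angles(t, e) @ r for (t, e), r in zip(angles, camera_rays)]
    target = triangulate(origins, common)
    print("estimated target position (common coordinates):", target)
    print("target relative to first platform position:", target - origins[0])
```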