Fusion of far infrared and visible images in enhanced obstacle detection in automotive applications
IPC Classification
Country / Type
United States (US) Patent
Status: Granted
International Patent Classification (IPC, 7th edition)
G06K-009/00
B60R-001/00
G06K-009/20
G06K-009/62
G06T-007/20
G08G-001/16
Application number
US-0866362
(2015-09-25)
Registration number
US-9443154
(2016-09-13)
Inventors / Address
Stein, Gideon
Shashua, Amnon
Gdalyahu, Yoram
Applicant / Address
Mobileye Vision Technologies Ltd.
Agent / Address
Morrison & Foerster LLP
Citation information
Cited by: 1
Patents cited: 61
Abstract
A computerized system mountable on a vehicle operable to detect an object by processing first image frames from a first camera and second image frames from a second camera. A first range is determined to said detected object using the first image frames. An image location of the detected object in the first image frames is projected onto an image location in the second image frames. A second range is determined to the detected object based on both the first and second image frames. The detected object is tracked in both the first and second image frames. When the detected object leaves a field of view of the first camera, a third range is determined responsive to the second range and the second image frames.
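The abstract's range hand-off — an initial range from the first camera, then a third range propagated from the second camera's frames alone once the object leaves the first field of view — can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes a simple pinhole model, a known physical object width, and that the tracker reports the object's image width in each frame; all names and values are hypothetical.

```python
# Minimal sketch (not the patented method) of the range hand-off described
# in the abstract. Focal length is in pixels, object width in meters.

def range_from_width(focal_px: float, object_width_m: float, image_width_px: float) -> float:
    """Pinhole-model range to an object of known width: Z = f * W / w."""
    return focal_px * object_width_m / image_width_px

def propagate_range(prev_range_m: float, prev_width_px: float, curr_width_px: float) -> float:
    """Update the range from the scale change of the tracked patch:
    Z_new = Z_old * (w_old / w_new)."""
    return prev_range_m * prev_width_px / curr_width_px

# A 1.8 m-wide vehicle imaged at 40 px by a 1000 px focal-length camera:
z1 = range_from_width(1000.0, 1.8, 40.0)   # initial range, 45.0 m

# After the hand-off, only the second camera's frames are needed: the
# tracked patch grows from 60 px to 75 px, so the range shrinks.
z3 = propagate_range(45.0, 60.0, 75.0)      # 36.0 m
```

The scale-change update is what lets the system keep estimating distance after the object exits the first (e.g. narrow) field of view, since it needs no second detection or stereo baseline, only the tracked patch in the remaining camera.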
Representative claims
1. A method performable by a computerized system mounted to travel with a vehicle, the system comprising a first camera having a first field of view and a second camera having a second field of view, the method comprising: acquiring multiple first image frames via the first camera, acquiring multiple second image frames via the second camera, processing the first image frames to detect an object, processing the first image frames to determine a first distance between the vehicle and the detected object, projecting an image location of the detected object in at least one of the first image frames onto a location in at least one of the second image frames, processing the first image frames and the second image frames to determine a second distance between the vehicle and the detected object, tracking the detected object in the first image frames and the second image frames, and processing the second image frames and the second distance to determine a third distance between the vehicle and the detected object, whereby estimated distances between the vehicle and the detected object are provided when the detected object is inside the first field of view and when the detected object is outside the first field of view.

2. The method of claim 1, wherein projecting an image location of the detected object in at least one of the first image frames onto a location in at least one of the second image frames comprises aligning a patch of at least one first image and a patch of at least one second image.

3. The method of claim 2, wherein aligning a patch of at least one first image and a patch of at least one second image comprises at least one of gray scale value histogram computation, sub-patch correlation and Hausdorf distance computation.

4. The method of claim 1, wherein processing the first image frames and the second image frames to determine a second distance comprises stereo analysis.

5. The method of claim 1, wherein the first camera continues tracking the detected object when the second camera terminates tracking the detected object and wherein the second camera continues tracking the detected object when the first camera terminates tracking the detected object.

6. The method of claim 1, comprising adjusting a gain or exposure of the second camera responsive to detecting the object in first camera image frame.

7. The method of claim 1, wherein the locating the detected object in the second camera image frame includes searching for the image of the detected object in the second camera image frame around an epipolar line corresponding to the detected object in the first camera image frame.

8. A computerized system mountable on a vehicle, the system comprising: a first camera and a second camera connectable to a processor, the processor operable to: acquire multiple first image frames from the first camera, acquire multiple second image frames from the second camera, detect an object by processing the first image frames to produce thereby a detected object, determine a first distance to the detected object using the first image frames, project an image location of the detected object in at least one of the first image frames onto an image location in at least one of the second image frames, determine a second distance from the vehicle to the detected object based on both the first image frames and the second image frames, track the detected object in both the first image frames and second image frames and when the detected object leaves a field of view of the first camera, determine a third distance to the detected object responsive to the second distance and the second image frames.

9. The system of claim 8, wherein the first camera has a field of view less than thirty five degrees.

10. The system of claim 8, wherein the second camera has a field of view greater than twenty five degrees.

11. The system of claim 8, wherein the vehicle has a travel direction and the first and second cameras are positioned to acquire images in the vehicle travel direction.

12. The system of claim 8, wherein the vehicle has a travel direction and the first and second cameras are positioned to acquire images in a direction other than the vehicle travel direction.

13. The system of claim 8, wherein the first and second cameras are relatively displaced along a vertical axis.

14. The system of claim 8, wherein the vehicle has a travel direction and the first and second cameras are relatively displaced along an axis perpendicular to the vehicle travel direction.

15. The system of claim 8, wherein the first camera comprises a filter that blocks visible light and passes near infrared light.
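Claim 7's epipolar-line search can be illustrated with standard two-view geometry: a detection at homogeneous pixel x1 in the first camera constrains the matching location in the second camera to the line l2 = F·x1, so candidate locations far from that line can be discarded. This is a generic sketch under an assumed, pre-calibrated fundamental matrix F (here a toy rectified-stereo F), not the patent's implementation; the function names are hypothetical.

```python
import numpy as np

# Generic epipolar-constraint sketch (illustrative, not the patented code).
# x1, x2 are homogeneous pixel coordinates (u, v, 1); F is the fundamental
# matrix relating the first camera's image to the second camera's image.

def epipolar_line(F: np.ndarray, x1: np.ndarray) -> np.ndarray:
    """Epipolar line in image 2 as homogeneous coefficients (a, b, c)."""
    return F @ x1

def distance_to_line(line: np.ndarray, x2: np.ndarray) -> float:
    """Perpendicular pixel distance from point x2 to the line ax+by+c=0."""
    a, b, _ = line
    return abs(line @ x2) / np.hypot(a, b)

def filter_candidates(F, x1, candidates, tol_px=3.0):
    """Keep only candidate points in image 2 near the epipolar line of x1."""
    l2 = epipolar_line(F, x1)
    return [x2 for x2 in candidates if distance_to_line(l2, x2) <= tol_px]

# Toy rectified-stereo fundamental matrix: epipolar lines are horizontal,
# so a detection at row v in image 1 must lie on row v in image 2.
F = np.array([[0., 0., 0.],
              [0., 0., -1.],
              [0., 1., 0.]])
x1 = np.array([100., 50., 1.])
kept = filter_candidates(F, x1, [np.array([200., 50., 1.]),
                                 np.array([200., 60., 1.])])
# Only the candidate on row 50 survives.
```

Restricting the search to a band around the epipolar line is what makes projecting a detection from one camera into the other tractable without an exhaustive 2-D search.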
Patents cited by this patent (61)
Daily Michael J. (Thousand Oaks CA), Apparatus and method for self-calibrating visual time-to-contact sensor.
Kakinami Toshiaki,JPX ; Saiki Mitsuyoshi,JPX ; Soshi Kunihiko,JPX ; Satonaka Hisashi,JPX, Apparatus for detecting an object located ahead of a vehicle using plural cameras with different fields of view.
Stein, Gideon S.; Shashua, Amnon; Gdalyahu, Yoram; Liyatan, Harel, Fusion of far infrared and visible images in enhanced obstacle detection in automotive applications.
Stein, Gideon; Shashua, Amnon; Gdalyahu, Yoram; Livyatan, Harel, Fusion of far infrared and visible images in enhanced obstacle detection in automotive applications.
Takano Kazuaki,JPX ; Monzi Tatsuhiko,JPX ; Tanaka Yasunari,JPX ; Ondoh Eiryoh,JPX ; Shioya Makoto,JPX, Imaging system for a vehicle which compares a reference image which includes a mark which is fixed to said vehicle to su.
Totsuka Takashi,JPX ; Yokoyama Taku,JPX ; Mitsunaga Tomoo,JPX, Key signal generating apparatus and picture synthesis apparatus, and key signal generating method and picture synthesis method.
Liu Lingnan, Method and apparatus for automatic discriminating and locating patterns such as finder patterns, or portions thereof, in machine-readable symbols.
John Carroll ; Shiuh-Yung James Chen, Method and apparatus for three-dimensional reconstruction of coronary vessels from angiographic images and analytical techniques applied thereto.
Dufour Jean-Yves (Chatillon sous Bagneux FRX) Le Gall Serge (Malakoff FRX) Waldburger Hugues (Meudon FRX), Method and device for the real-time localization of rectilinear contours in a digitized image, notably for shape recogni.
Hahn,Wolfgang; Weidner,Thomas, Method and device for visualizing a motor vehicle environment with environment-dependent fusion of an infrared image and a visual image.
Valerie Shuman ; Cynthia Paulauskas ; T. Russell Shields ; Richard J. Weiland ; John C. Jasper, Method and system for an in-vehicle computer architecture.
Shuman Valerie ; Paulauskas Cynthia ; Shields T. Russell ; Weiland Richard J. ; Jasper John C., Method and system for an in-vehicle computing architecture.
Shuman, Valerie; Paulauskas, Cynthia; Shields, T. Russell; Weiland, Richard J.; Jasper, John C., Method and system for an in-vehicle computing architecture.
Sogawa,Yoshiyuki; Murakami,Keiichi; Tozawa,Yoshio, Method for examining shooting direction of camera apparatus, device thereof and structure for installing sensor.
Borcherts Robert H. (Ann Arbor MI) Jurzak Jacek L. (Rochester Hills MI) Liou Shih-Ping (Ann Arbor MI) Yeh Tse-Liang A. (Rochester Hills MI), System and method for automatically steering a vehicle within a lane in a road.
Saitoh Hiroshi (Kanagawa JPX) Noso Kazunori (Kanagawa JPX) Kurami Kunihiko (Kanagawa JPX) Kishi Norimasa (Kanagawa JPX), System and method for calculating movement direction and position of an unmanned vehicle.
Shaw David C. H. (3312 E. Mandeville Pl. Orange CA 92667) Shaw Judy Z. Z. (3312 E. Mandeville Pl. Orange CA 92667), Vehicle collision avoidance system.