IPC Classification Information

Country / Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.): (not listed)
Application Number: UP-0013513 (2004-12-17)
Registration Number: US-7834905 (2011-01-16)
Priority Information: DE-102 27 171 (2002-06-18)
Inventors / Address:
- Hahn, Wolfgang
- Weidner, Thomas
Applicant / Address:
- Bayerische Motoren Werke Aktiengesellschaft
Agent / Address: (not listed)
Citation Information: Times cited: 6 / Patents cited: 15
Abstract

The invention relates to an improved method for visualizing the environment of a vehicle, especially in the dark. The invention also relates to a night vision system, which especially provides a visual image of the environment or the digital data thereof. Preferably, the visual image is a color image which indicates the visually perceptible objects of the environment. The system also provides an infrared image of the environment or the digital data thereof. The infrared image indicates the infrared radiation radiated by the visually perceptible and/or other objects. In a preferred form of embodiment, a merged image of the visual image and the infrared image of largely identical sections of the environment of the vehicle is represented on a display comprising at least one merged region and at least one region which is not merged or not merged to the same extent or not merged with the same weighting.
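The abstract describes a display that blends a visual and an infrared image with region-dependent weights, so that some regions are fully merged and others are not merged (or merged with a different weighting). A minimal sketch of that kind of pixel-wise weighted fusion is shown below; it is an illustrative reconstruction, not the patented implementation, and the array names, value ranges, and the fixed weight map are assumptions.

```python
import numpy as np

def fuse(visual: np.ndarray, infrared: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """Pixel-wise weighted average of a visual and an infrared image.

    visual, infrared: grayscale images of the same, already registered
    cutout of the scene, values in [0, 1].
    weight: per-pixel weight of the visual channel in [0, 1]; where
    weight == 1 the output is purely visual (a "not merged" region),
    elsewhere the two spectral ranges are blended.
    """
    return weight * visual + (1.0 - weight) * infrared

# Toy 2x2 example: left column stays purely visual, right column is an
# even blend of the two spectral ranges.
visual = np.array([[0.8, 0.8], [0.2, 0.2]])
infrared = np.array([[0.0, 0.0], [1.0, 1.0]])
weight = np.array([[1.0, 0.5], [1.0, 0.5]])
merged = fuse(visual, infrared, weight)
# merged[0, 0] == 0.8 (unmerged), merged[0, 1] == 0.4 (averaged)
```

The same weight map could encode the claimed preference for high-information regions by deriving `weight` from a local contrast measure instead of using constants.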
Representative Claims
What is claimed is:

1. Method of visualizing an environment of a vehicle in darkness, the method comprising the acts of: providing a visual image or its digital data of the environment, preferably a colored visual image, the visual image showing visible objects; and providing an infrared image or its digital data of the environment, the infrared image showing the infrared radiation emanating from the visible and/or other objects; wherein the visual image or its digital data are provided by a visual camera sensitive in a visual spectral range, preferably a color-sensitive visual camera, or a first sensor, and the infrared image or its digital data are provided by an infrared camera sensitive in the infrared spectral range or a second sensor; wherein the visual camera or the first sensor or its lens system has a first optical axis, and the infrared camera or the second sensor or its lens system has a second optical axis, which are offset parallel to one another so that the cameras or sensors provide at least partially different cutouts of the environment of the vehicle in the form of a first and a second cutout; wherein the provided first cutout and the provided second cutout are completely or partially superposed or merged by a superposing or merging device with respect to pixels and/or areas, and wherein during the merging at least one distance-dependent adaptation parameter obtained during calibration for different distances, particularly at least one registration or transformation parameter, is taken into account, and the adaptation parameter or parameters are stored during the calibration in a data memory in the vehicle; and wherein the distance between the vehicle and a vehicle driving in front of it is determined by a transmitting and receiving device and it is checked by the superposing or merging device whether the actually used distance-dependent adaptation parameter is suitable for the determined distance for providing a ghost-image-free merged image and, in the event of a lacking suitability, at least one additional suitable distance-dependent adaptation parameter is determined and the latter is used for providing a partially merged image which shows the vehicle driving ahead.

2. Method according to claim 1, wherein the at least one distance-dependent adaptation parameter is obtained by a first calibration for a first distance or a first distance range and at least one additional calibration for at least one other distance or one other distance range.

3. Method according to claim 2, wherein the first distance or the first distance range corresponds to a driving situation typical of city driving, particularly a distance range of from approximately 15 to 75 m.

4. Method according to claim 3, wherein a second distance range corresponds to a driving situation typical of highway driving, particularly a distance range of from approximately 30 to 150 m.

5. Method according to claim 4, wherein a third distance or a third distance range corresponds to a driving situation typical of expressway driving, particularly a distance range of from approximately 50 to 250 m.

6. Method according to claim 3, wherein a third distance or a third distance range corresponds to a driving situation typical of expressway driving, particularly a distance range of from approximately 50 to 250 m.

7. Method according to claim 2, wherein a second distance range corresponds to a driving situation typical of highway driving, particularly a distance range of from approximately 30 to 150 m.

8. Method according to claim 7, wherein a third distance or a third distance range corresponds to a driving situation typical of expressway driving, particularly a distance range of from approximately 50 to 250 m.

9. Method according to claim 2, wherein a third distance or a third distance range corresponds to a driving situation typical of expressway driving, particularly a distance range of from approximately 50 to 250 m.

10. Method of visualizing an environment of a vehicle in darkness, the method comprising the acts of: providing, by a visual camera, a visual image or its digital data of the environment, preferably a colored visual image, the visual image showing visible objects; providing, by an infrared camera, an infrared image or its digital data of the environment, the infrared image showing the infrared radiation emanating from the visible and/or other objects; and merging or superposing, by a superposing or merging device, the visual image and the infrared image during which a merged image is created which has at least a first merged image area and a second merged image area, the first merged image area being formed by using at least a first distance-dependent adaptation parameter, and the second merged image area is formed by using at least a second distance-dependent adaptation parameter, wherein the distance between the vehicle and a vehicle driving in front of it is determined by a transmitting and receiving device and it is checked by the superposing or merging device whether the actually used distance-dependent adaptation parameter is suitable for the determined distance for providing a ghost-image-free merged image and, in the event of a lacking suitability, at least one additional suitable distance-dependent adaptation parameter is determined and the latter is used for providing a partially merged image which shows the vehicle driving ahead.

11. Method according to claim 10, wherein infrared radiation emanating from the visible objects and/or the additional objects, and detected, has a wavelength in the range of from approximately 7 to 14 µm, preferably approximately 7.5 to 10.5 µm.

12. Method according to claim 10, wherein infrared radiation emanating from the visible objects and/or the additional objects, and detected, has a wavelength in the range of from approximately 3 µm to approximately 5 µm.

13. Method according to claim 10, wherein infrared radiation emanating from the visible objects and/or the additional objects and detected has a wavelength in the range of from approximately 800 nm to approximately 2.5 µm.

14. Method according to claim 10, wherein the visual image of the environment of the vehicle present in the form of digital data is normalized by using a calibrating device.

15. Method according to claim 14, wherein the infrared image of the cutout of the environment present in the form of digital data is normalized by using the calibrating device.

16. Method according to claim 15, wherein the calibrating device emits visible radiation and infrared radiation.

17. Method according to claim 16, wherein the calibrating device has several incandescent lamps arranged in a checkerboard-type manner.

18. Method according to claim 15, wherein the calibrating device has several incandescent lamps arranged in a checkerboard-type manner.

19. Method according to claim 15, wherein infrared pixels and visual pixels or such pixel areas are weighted differently.

20. Method according to claim 14, wherein the calibrating device has several incandescent lamps arranged in a checkerboard-type manner.

21. Method according to claim 14, wherein infrared pixels and visual pixels or such pixel areas are weighted differently.

22. Method according to claim 21, wherein regions with large amounts of information in comparison to regions with small amounts of information of the visual image and/or of the infrared image are weighted higher during the superposition or averaging.

23. A system for visualizing an environment of a vehicle in darkness, wherein the system implements the method according to claim 10.

24. The system according to claim 23, comprising the visual camera that is a color visual camera, the infrared camera, a first normalization device for normalizing a colored visual image of a cutout of the environment of the vehicle provided by the color visual camera, a second normalization device for normalizing the infrared image of the cutout of the environment of the vehicle provided by the infrared camera, an aligning device for generating image pairs identical with respect to time and location from visual images and infrared images, and the superposing or merging device which superposes the image pairs largely identical with respect to time and location in a pixel-type or area-type manner and/or forms average values.

25. A calibrating device for calibrating a system according to claim 24, the calibration device having at least one radiation source, which source emits visible radiation as well as infrared radiation.

26. The calibration device according to claim 25, wherein the radiation source is an incandescent lamp.

27. A calibrating device for calibrating a system according to claim 23, the calibration device having at least one radiation source, which source emits visible radiation as well as infrared radiation.

28. The calibration device according to claim 27, wherein the radiation source is an incandescent lamp.

29. Method of visualizing an environment of a vehicle in darkness, the method comprising the acts of: providing, by a visual camera, a visual image or its digital data of the environment, preferably a colored visual image, the visual image showing visible objects; providing, by an infrared camera, an infrared image or its digital data of the environment, the infrared image showing the infrared radiation emanating from the visible and/or other objects; and merging or superposing, by a superposing or merging device, the visual image and the infrared image, during which a merged image is created which has at least a first area which shows a portion of the visual image and a portion of the infrared image, and has at least a second area which is formed by the merging of another portion of the visual image and of the corresponding portion of the infrared image; wherein the distance between the vehicle and a vehicle driving in front of it is determined by a transmitting and receiving device and it is checked by the superposing or merging device whether the actually used distance-dependent adaptation parameter is suitable for the determined distance for providing a ghost-image-free merged image and, in the event of a lacking suitability, only the corresponding portion of the visual image or of the infrared image is partially illustrated in the merged image.

30. Method of visualizing an environment of a vehicle in darkness, the method comprising the acts of: providing a visual image or its digital data of the environment, preferably a colored visual image, the visual image showing visible objects; and providing an infrared image or its digital data of the environment, the infrared image showing the infrared radiation emanating from the visible and/or other objects; wherein the visual image or its digital data are provided by a visual camera sensitive in a visual spectral range, preferably a color-sensitive visual camera, or a first sensor, and the infrared image or its digital data are provided by an infrared camera sensitive in the infrared spectral range or a second sensor; wherein the visual camera or the first sensor or its lens system has a first optical axis, and the infrared camera or the second sensor or its lens system has a second optical axis, which are offset parallel to one another so that the cameras or sensors provide at least partially different cutouts of the environment of the vehicle in the form of a first and a second cutout; wherein the visual image or a normalized visual image is aligned with respect to the infrared image or the normalized infrared image or vice-versa by the processing of digital data of the images by a superposing or merging device, so that image pairs of both spectral ranges are provided which are identical with respect to time and location; wherein a weighted superposition or averaging is carried out by the superposing or merging device for one or more pixels largely having the same location from the visual image and the infrared image; and wherein regions with large amounts of information in comparison to regions with small amounts of information of the visual image and/or of the infrared image are weighted higher during the superposition or averaging.

31. Method according to claim 30, wherein an image repetition rate of the visual camera or of the first sensor and of the infrared camera or of the second sensor are at least largely identical.

32. Method according to claim 30, wherein same-location pixels or pixel areas of the images of the different spectral ranges which are largely identical with respect to time and location are superposed by the processing of their digital data, or that an averaging takes place.

33. Method according to claim 32, wherein brightness values and/or color values of the pixels or pixel areas are superposed or are used for an averaging.

34. Method according to claim 30, wherein the weighting takes place taking into account the brightness and/or the visual condition in the environment of the vehicle.
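Claims 1-9 recite calibrating registration (adaptation) parameters for several distance ranges (city: roughly 15-75 m, highway: 30-150 m, expressway: 50-250 m), storing them in an in-vehicle memory, and at run time checking whether the currently used parameter set suits the radar-measured distance to the vehicle ahead, switching sets when it does not. The selection logic can be sketched as follows; the range boundaries come from the claims, while the class, function names, offset values, and fallback rule are illustrative assumptions, not the patented implementation.

```python
from dataclasses import dataclass

@dataclass
class AdaptationParams:
    """Distance-dependent registration parameters obtained during
    calibration (the pixel offsets here are illustrative placeholders)."""
    min_dist_m: float
    max_dist_m: float
    x_offset_px: int   # horizontal shift aligning the IR cutout onto the visual one
    y_offset_px: int   # vertical shift

# One stored parameter set per driving situation, ranges as recited in claims 3-5.
CALIBRATION_TABLE = [
    AdaptationParams(15.0, 75.0, x_offset_px=12, y_offset_px=3),   # city
    AdaptationParams(30.0, 150.0, x_offset_px=7, y_offset_px=2),   # highway
    AdaptationParams(50.0, 250.0, x_offset_px=4, y_offset_px=1),   # expressway
]

def select_params(distance_m: float, current: AdaptationParams) -> AdaptationParams:
    """Keep the current parameter set if it still covers the measured
    distance (no ghost images expected); otherwise pick the stored set
    whose calibrated range contains the distance."""
    if current.min_dist_m <= distance_m <= current.max_dist_m:
        return current
    for params in CALIBRATION_TABLE:
        if params.min_dist_m <= distance_m <= params.max_dist_m:
            return params
    # Fall back to the set whose range boundary is nearest (an assumed policy;
    # the claims instead show only the unmerged portion in this case).
    return min(CALIBRATION_TABLE,
               key=lambda p: min(abs(distance_m - p.min_dist_m),
                                 abs(distance_m - p.max_dist_m)))

# A measured gap of 180 m falls outside the city range, so the
# expressway parameter set is chosen instead.
chosen = select_params(180.0, CALIBRATION_TABLE[0])
```

Keeping the table small and checking the current set first reflects the claimed behavior: the merging device only re-selects parameters when the active set would produce ghost images at the measured distance.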