IPC Classification Information
Country / Status | United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.) |
Application Number | US-0059811 (2016-03-03)
Registration Number | US-9866816 (2018-01-09)
Inventor / Address |
Applicant / Address | 4D Intellectual Properties, LLC
Agent / Address | Patterson Thuente Pederson, P.A.
Citation Information | Cited by: 0 / Patents cited: 246
Abstract
An active-pulsed four-dimensional camera system that utilizes a precisely-controlled light source produces spatial information and human-viewed or computer-analyzed images. The acquisition of four-dimensional optical information is performed at a sufficient rate to provide accurate image and spatial information for in-motion applications where the camera is in motion and/or objects being imaged, detected and classified are in motion. Embodiments allow for the reduction or removal of image-blocking conditions like fog, snow, rain, sleet and dust from the processed images. Embodiments provide for operation in daytime or nighttime conditions and can be utilized for day or night full-motion video capture with features like shadow removal. Multi-angle image analysis is taught as a method for classifying and identifying objects and surface features based on their optical reflective characteristics.
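The core of any active-pulsed camera of this kind is the time-of-flight relation: a light pulse travels to the object and back, so the one-way distance is half the round-trip time multiplied by the speed of light. A minimal sketch of that relation (the function name and example timing are illustrative assumptions, not taken from the patent):

```python
# Basic time-of-flight relation used by active-pulsed depth cameras.
C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to a reflecting surface given the pulse round-trip time."""
    return C * round_trip_seconds / 2.0

# A return detected 100 ns after emission corresponds to ~15 m.
print(tof_distance(100e-9))  # ≈ 14.99 m
```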
Representative Claims
1. An active pulsed four-dimensional (4D) camera system for acquiring information about a scene comprising: at least one emitter configured to generate and emit light within a defined frequency range throughout a field of view of the scene, wherein the emitted light is an active sequence of pulses; an array of detectors configured to receive light within the defined frequency range for a field of view of the array, the field of view of the scene defined by the field of view of the array and each detector in the array having an individual field of view that is a different subset of the field of view of the array; control circuitry operably coupled to the at least one emitter and the array of detectors and configured to cause the at least one emitter to begin to emit the active sequence of pulses at a first time and to cause the array of detectors to begin to receive light at a second time after the first time in an emitter/detector cycle, wherein the control circuitry is configured to vary an elapsed time between the first time and the second time in K successive emitter/detector cycles, wherein K is greater than 4 and a total elapsed time of the K successive emitter/detector cycles is less than 50 microseconds; and a processing system operably coupled to the array of detectors and the control circuitry and configured to: generate and store digital information corresponding to light received by each detector in the array of detectors, the digital information for each detector in the array of detectors being sampled and stored in one of K frame buffers corresponding to one of the K emitter/detector cycles, and analyze the digital information for each detector and construct a representation of at least a portion of the scene based at least in part on time-of-flight (TOF) data in the digital information corresponding to the sequence of pulses received by different ones of the K frame buffers and on relative timing differences of the sequence of pulses for each of the K emitter/detector cycles.
2. The system of claim 1, wherein the processing system is configured to construct a depth map based on the TOF data corresponding to multiple detectors as part of the representation of at least the portion of the scene.
3. The system of claim 1, wherein the processing system is further configured to output an image of the at least a portion of the scene.
4. The system of claim 1, wherein the processing system is configured to construct the representation by analyzing the digital information for color information from different ones of the K frame buffers.
5. The system of claim 1, wherein the emitter comprises at least one light-emitting diode (LED).
6. The system of claim 5, wherein the emitter comprises a plurality of LEDs having multiple color components.
7. The system of claim 1, wherein the emitter comprises at least one vehicle headlamp.
8. The system of claim 1, wherein the defined frequency range of the at least one emitter is selected from the group consisting of: 100 nanometers (nm) to 400 nm; 400 nm to 700 nm; 700 nm to 1400 nm; 1400 nm to 8000 nm; 8 micrometers (micron) to 15 micron; 15 micron to 1000 micron; and 0.1 mm to 1 mm.
9. The system of claim 1, wherein the array of detectors comprises a bandpass filter to eliminate received light outside of the defined frequency range.
10. The system of claim 1, wherein the array of detectors is configured to receive reflected light associated with the active sequence of K pulses to distinguish from ambient light.
11. The system of claim 1, wherein the emitted light is non-uniform in the defined frequency range.
12. The system of claim 1, wherein each detector in the array of detectors comprises a photodiode and a detector integration circuit, wherein the photodiode is configured to transfer a charge to the detector integration circuit when the photodiode receives light during the emitter/detector cycles.
13. The system of claim 12, wherein the detector integration circuit of each detector in the array of detectors is configured to transfer the charge from a first emitter/detector cycle to the one of the K frame buffers for that detector before a second emitter/detector cycle begins in the K successive emitter/detector cycles.
14. The system of claim 1, wherein the individual field of view of each detector comprises a pixel.
15. The system of claim 1, wherein individual fields of view of adjacent detectors can partially overlap.
16. The system of claim 1, wherein the processing circuitry is further configured to detect attenuation of at least one of the emitted light or the light received by the array of detectors and reduce an effect of the attenuation in constructing the representation.
17. The system of claim 1, wherein the control circuitry is configured to vary the elapsed time by varying the first time for the sequence of successive pulses in successive ones of the K successive emitter/detector cycles.
18. The system of claim 1, wherein the array of detectors utilizes a Bayer pattern for color filtering.
19. The system of claim 18, wherein the digital information for each color plane is converted from a sparse color map to a dense color map by demosaicing.
20. A method of acquiring information about a scene using an active pulsed four-dimensional (4D) camera system, the method comprising: emitting light from the 4D camera system as an active sequence of pulses of light within a defined frequency range throughout a field of view of the scene; after a first elapsed time from a start of the emitting, detecting light by the 4D camera system within the defined frequency range for the field of view; converting the light from the detecting to digital information; storing the digital information in one of K frame buffers in the 4D camera system; repeating the emitting, the detecting, the converting and the storing for K successive cycles with a second elapsed time between the emittings in successive cycles, wherein the first elapsed time and the second elapsed time vary between each of the K successive cycles, wherein K is greater than 4; and analyzing the digital information stored in the K frame buffers and constructing a digital representation of at least a portion of the scene based at least in part on time-of-flight (TOF) data in the digital information corresponding to the sequence of pulses received by different ones of the K frame buffers and on relative timing differences of the sequence of pulses for each of the K successive cycles.
21. The method of claim 20, further comprising constructing a depth map of distances between the 4D camera system and the at least a portion of the scene by analyzing the digital information in the K frame buffers for distance information.
22. The method of claim 20, further comprising outputting an image of the at least the portion of the scene.
23. The method of claim 20, wherein constructing the digital representation of the at least the portion of the scene includes analyzing the digital information from the K successive cycles for color information.
24. The method of claim 20, wherein the converting in one of the K successive cycles is complete before the emitting of a next one of the K successive cycles begins.
25. The method of claim 20, further comprising: detecting attenuation of at least one of the emitted light or the light detected by the 4D camera system; and reducing an effect of the detected attenuation in constructing the digital representation.
26. An active-pulsed four-dimensional (4D) camera system for acquiring information from a scene comprising: at least one emitter configured to emit K successive cycles of a plurality of active light pulses throughout the scene, wherein a time between a start of one of the K successive cycles and a start of a next one of the K successive cycles is varied throughout the K successive cycles; an array of detectors configured to receive reflected light related to the active light pulses emitted in each of the K successive cycles, wherein a time between the start of emitting the light pulses and a start of receiving reflected light is varied in each of the K successive cycles; a plurality of buffers configured to store digital information related to the reflected light received by the array of detectors; and a processing system operably coupled to the plurality of buffers and configured to: analyze the digital information to determine distance information and color information for the scene based at least in part on the time between the start of one of the K successive cycles and the start of the next one of the K successive cycles being varied throughout the K successive cycles, and construct a digital representation of the scene based on the determined distance information and the determined color information.
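The acquisition scheme the claims describe, K emitter/detector cycles whose emitter-to-detector delay is varied so that returns from different distances land in different frame buffers, can be illustrated with a toy range-gating simulation. Everything below (the binary gate model, the 20 ns delay stagger, the helper names) is an assumption made for illustration, not the patent's implementation:

```python
C = 299_792_458.0  # speed of light, m/s

def simulate_cycles(target_distance_m, delays_s, gate_s):
    """Return K frame-buffer intensities for a single detector.

    In cycle k the detector opens delays_s[k] seconds after the pulse is
    emitted and integrates for gate_s seconds, so the return lands in
    buffer k only if the round-trip time falls inside that gate.
    """
    round_trip = 2.0 * target_distance_m / C
    return [1.0 if d <= round_trip < d + gate_s else 0.0 for d in delays_s]

def estimate_distance(buffers, delays_s, gate_s):
    """Pick the buffer that caught the return; report its range-bin center."""
    k = max(range(len(buffers)), key=lambda i: buffers[i])
    round_trip = delays_s[k] + gate_s / 2.0
    return C * round_trip / 2.0

# K = 5 cycles with delays staggered by 20 ns (each gate spans ~3 m of range).
delays = [k * 20e-9 for k in range(5)]
buffers = simulate_cycles(7.0, delays, gate_s=20e-9)
print(buffers)                                          # the return falls in bin 2
print(estimate_distance(buffers, delays, gate_s=20e-9))  # ≈ 7.49 m (bin center)
```

A real system would integrate analog charge per cycle (claims 12-13) and interpolate between bins rather than report a bin center, but the sketch shows why varying the delay across the K cycles encodes distance in which buffer the pulse energy appears.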