Within examples, systems and methods of generating a synthetic image representative of an environment of a vehicle are described comprising generating a first image using infrared information from an infrared (IR) camera, generating a second image using laser point cloud data from a LIDAR, generating an embedded point cloud representative of the environment based on a combination of the first image and the second image, receiving navigation information traversed by the vehicle, transforming the embedded point cloud into a geo-referenced coordinate space based on the navigation information, and combining the transformed embedded point cloud with imagery of terrain of the environment to generate the synthetic image representative of the environment of the vehicle.
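The abstract's geo-referencing step (transforming the embedded point cloud into a geo-referenced coordinate space using the vehicle's navigation information) can be sketched as below. All names, the navigation-fix format, and the yaw-only rotation are illustrative assumptions, not the patent's implementation; a real system would apply the full roll/pitch/yaw attitude and a geodetic datum.

```python
import math

def geo_reference(points, nav):
    """Transform sensor-frame points into a geo-referenced frame.

    points: list of (x, y, z, intensity) tuples in the vehicle/sensor frame.
    nav: hypothetical navigation fix {'x', 'y', 'z', 'yaw'} giving east,
         north, and up offsets in metres and heading in radians.
    Sketch only: rotates about the vertical axis, then translates.
    """
    c, s = math.cos(nav['yaw']), math.sin(nav['yaw'])
    out = []
    for x, y, z, i in points:
        gx = c * x - s * y + nav['x']  # rotate into the nav frame, then translate
        gy = s * x + c * y + nav['y']
        gz = z + nav['z']
        out.append((gx, gy, gz, i))
    return out

# Example: one return 10 m ahead of a vehicle heading 90 degrees (pi/2 yaw).
fix = {'x': 100.0, 'y': 200.0, 'z': 50.0, 'yaw': math.pi / 2}
cloud = geo_reference([(10.0, 0.0, 0.0, 0.8)], fix)
```

The transformed points can then be combined with terrain imagery (e.g. by overlaying a terrain texture keyed to the same coordinate space) to render the synthetic image the abstract describes.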
Representative Claims
1. A method of adjusting parameters of sensors of a vehicle, the method comprising: generating a first image of an environment of the vehicle using infrared information from an infrared (IR) camera on the vehicle; generating a second image of the environment using laser point cloud data from a light detection and ranging (LIDAR) on the vehicle; generating an embedded point cloud representative of the environment based on a combination of the first image and the second image such that additional data is embedded into the laser point cloud data; determining a level of obscuration of the embedded point cloud; adjusting parameters of one or more of the IR camera and the LIDAR on the vehicle based on the level of obscuration for adaptive feedback control of sensor parameters; storing the infrared information from the IR camera and the laser point cloud data from the LIDAR into a dataset; determining, as data of the infrared information from the IR camera and the laser point cloud data from the LIDAR is received, whether the data indicates additional spatial resolution of a representation of the environment based on the level of obscuration; and based on the data indicating the additional spatial resolution, generating a new higher resolution data point in the dataset.

2. The method of claim 1, wherein determining the level of obscuration of the embedded point cloud comprises: receiving one or more outputs of other on-board sensors of the vehicle that indicate a level of precipitation in the environment of the vehicle.

3. The method of claim 1, wherein adjusting parameters of the LIDAR on the vehicle based on the level of obscuration comprises: reducing a power of the LIDAR.

4. The method of claim 1, wherein adjusting parameters of one or more of the IR camera and the LIDAR on the vehicle based on the level of obscuration comprises: adjusting one or more of a contrast, a gain, and a power of the one or more of the IR camera and the LIDAR.

5. The method of claim 1, wherein the first image and the second image are each generated and received at approximately the same point in time.

6. The method of claim 1, further comprising performing the method in real-time as the infrared information is received from the IR camera and the laser point cloud data is received from the LIDAR during operation of the vehicle traversing the environment.

7. The method of claim 1, further comprising: receiving navigation information of the environment traversed by the vehicle from data stored in a navigation database; and transforming the embedded point cloud into a geo-referenced coordinate space based on the navigation information.

8. The method of claim 7, further comprising: combining the transformed embedded point cloud with imagery of terrain of the environment to generate a synthetic image representative of the environment of the vehicle.

9. The method of claim 8, wherein combining the transformed embedded point cloud with imagery of terrain of the environment comprises: overlaying an image of the terrain onto the transformed embedded point cloud.

10. The method of claim 8, further comprising, while the vehicle is traversing the environment: receiving the infrared information from the IR camera; receiving the laser point cloud data from the LIDAR; receiving the navigation information; performing the method in real-time to generate the synthetic image representative of the environment of the vehicle while the vehicle is traversing the environment; and displaying the synthetic image of the terrain of the environment on a display of the vehicle.

11. The method of claim 8, further comprising displaying the synthetic image of the terrain of the environment on a multi-function display (MFD) of the vehicle.

12. The method of claim 8, further comprising displaying the synthetic image of the terrain of the environment on a head mounted display (HMD).

13. The method of claim 8, further comprising: generating the synthetic image based on the vehicle operating in a degraded visual environment (DVE) including near-zero to zero visibility conditions.

14. The method of claim 1, further comprising: for each laser data point of the laser point cloud data from the LIDAR, projecting the laser data point into a corresponding pixel location of the first image so as to map the laser point cloud data onto the first image.

15. The method of claim 14, further comprising: based on mapping the laser point cloud data onto the first image, generating the embedded point cloud representative of the environment based on a combination of the first image and the second image such that additional data is embedded into the laser point cloud data.

16. The method of claim 1, further comprising: based on the data not indicating the additional spatial resolution, evaluating the data for update.

17. The method of claim 16, wherein determining whether the data indicates additional spatial resolution of the representation of the environment comprises: determining sensor measurement accuracy, wherein the sensor measurement accuracy is based on the level of obscuration of the environment.

18. A non-transitory computer readable medium having stored thereon instructions that, when executed by a computing device, cause the computing device to perform functions comprising: generating a first image of an environment of a vehicle using infrared information from an infrared (IR) camera on the vehicle; generating a second image of the environment using laser point cloud data from a light detection and ranging (LIDAR) on the vehicle; generating an embedded point cloud representative of the environment based on a combination of the first image and the second image such that additional data is embedded into the laser point cloud data; determining a level of obscuration of the embedded point cloud; adjusting parameters of one or more of the IR camera and the LIDAR on the vehicle based on the level of obscuration for adaptive feedback control of sensor parameters; storing the infrared information from the IR camera and the laser point cloud data from the LIDAR into a dataset; determining, as data of the infrared information from the IR camera and the laser point cloud data from the LIDAR is received, whether the data indicates additional spatial resolution of a representation of the environment based on the level of obscuration; and based on the data indicating the additional spatial resolution, generating a new higher resolution data point in the dataset.

19. The non-transitory computer readable medium of claim 18, wherein adjusting parameters of the LIDAR on the vehicle based on the level of obscuration comprises: reducing a power of the LIDAR.

20. A system comprising: an infrared (IR) camera to collect infrared information of an environment of a vehicle; a light detection and ranging (LIDAR) to collect laser point cloud data of the environment of the vehicle; and a processor to adjust a parameter of one or more of the IR camera and the LIDAR based on outputs of the IR camera and the LIDAR, wherein adjustment of the parameter of the one or more of the IR camera and the LIDAR comprises: generating an embedded point cloud representative of the environment based on a combination of the infrared information and the laser point cloud data such that additional data is embedded into the laser point cloud data; determining a level of obscuration of the embedded point cloud; adjusting the parameter of the one or more of the IR camera and the LIDAR based on the level of obscuration for adaptive feedback control of sensor parameters; storing the infrared information from the IR camera and the laser point cloud data from the LIDAR into a dataset; determining, as data of the infrared information from the IR camera and the laser point cloud data from the LIDAR is received, whether the data indicates additional spatial resolution of a representation of the environment based on the level of obscuration; and based on the data indicating the additional spatial resolution, generating a new higher resolution data point in the dataset.
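Claims 1, 4, and 14-15 describe projecting each laser return into a pixel location of the IR image to embed additional data into the point cloud, then adjusting sensor parameters from an obscuration estimate as adaptive feedback. A minimal sketch under an assumed pinhole-camera model with illustrative intrinsics and thresholds (none of these values or function names come from the patent):

```python
def project_to_pixel(point, fx, fy, cx, cy):
    """Pinhole projection of a LIDAR return (x right, y down, z forward)
    into IR image pixel coordinates; returns None if behind the camera."""
    x, y, z = point
    if z <= 0:
        return None
    return (int(fx * x / z + cx), int(fy * y / z + cy))

def embed_point_cloud(laser_points, ir_image, fx=500.0, fy=500.0,
                      cx=320.0, cy=240.0):
    """Attach the co-located IR pixel value to each laser return.
    ir_image is a dict {(u, v): intensity} standing in for a real frame."""
    embedded = []
    for p in laser_points:
        uv = project_to_pixel(p, fx, fy, cx, cy)
        if uv is not None and uv in ir_image:
            embedded.append((*p, ir_image[uv]))  # (x, y, z, ir_intensity)
    return embedded

def adjust_sensors(obscuration, lidar_power, ir_gain,
                   threshold=0.5, power_step=0.1, gain_step=0.2):
    """Illustrative feedback rule: when the obscuration level (0..1) is
    high, reduce LIDAR power (e.g. to limit backscatter) and raise IR
    gain; the threshold and step sizes are assumptions, not the patent's."""
    if obscuration > threshold:
        lidar_power = max(0.0, lidar_power - power_step)
        ir_gain = ir_gain + gain_step
    return lidar_power, ir_gain
```

In this sketch the embedded tuples carry both range and IR intensity, so a downstream obscuration estimator (or the precipitation sensors of claim 2) can drive `adjust_sensors` each frame, closing the feedback loop the claims describe.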
Patents Cited by This Patent (3)
Tiana, Carlo A.; Bell, Douglas A.; Etherington, Timothy J., Image data combining systems and methods of multiple vision systems.