Within examples, systems and methods of generating a synthetic image representative of an environment of a vehicle are described comprising generating a first image using infrared information from an infrared (IR) camera, generating a second image using laser point cloud data from a LIDAR, generating an embedded point cloud representative of the environment based on a combination of the first image and the second image, receiving navigation information traversed by the vehicle, transforming the embedded point cloud into a geo-referenced coordinate space based on the navigation information, and combining the transformed embedded point cloud with imagery of terrain of the environment to generate the synthetic image representative of the environment of the vehicle.
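The core fusion step described above, projecting each LIDAR return into a pixel of the IR image so that image data can be embedded into the point cloud, can be sketched with a simple pinhole camera model. This is a minimal illustration, not the patented implementation: the function name, the camera intrinsics (fx, fy, cx, cy), and the assumption that the points are already expressed in the camera frame are all hypothetical.

```python
import numpy as np

def embed_point_cloud(points_xyz, ir_image, fx, fy, cx, cy):
    """Project each LIDAR point (camera-frame x, y, z with z forward) into
    the IR image using a pinhole model, and embed the IR intensity at the
    corresponding pixel into the point. Returns an (N, 4) array of
    [x, y, z, ir_intensity] for points that fall inside the image.
    Intrinsics (fx, fy, cx, cy) are assumed known from calibration."""
    h, w = ir_image.shape
    embedded = []
    for x, y, z in points_xyz:
        if z <= 0:  # point is behind the camera plane; cannot project
            continue
        u = int(round(fx * x / z + cx))  # image column
        v = int(round(fy * y / z + cy))  # image row
        if 0 <= u < w and 0 <= v < h:
            embedded.append([x, y, z, ir_image[v, u]])
    return np.array(embedded)
```

In a real system the extrinsic calibration between the LIDAR and the IR camera would first be applied to bring the laser points into the camera frame; that step is omitted here.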
Representative Claims
1. A method of generating a synthetic image representative of an environment of a vehicle, the method comprising: generating a first image of the environment using infrared information from an infrared (IR) camera on the vehicle; generating a second image of the environment using laser point cloud data from a light detection and ranging (LIDAR) on the vehicle; for each laser data point of the laser point cloud data from the LIDAR, projecting the laser data point into a corresponding pixel location of the first image so as to map the laser point cloud data onto the first image; based on mapping the laser point cloud data onto the first image, generating an embedded point cloud representative of the environment based on a combination of the first image and the second image such that additional data is embedded into the laser point cloud data; receiving navigation information of the environment traversed by the vehicle from data stored in a navigation database; transforming the embedded point cloud into a geo-referenced coordinate space based on the navigation information; and combining the transformed embedded point cloud with imagery of terrain of the environment to generate the synthetic image representative of the environment of the vehicle.

2. The method of claim 1, wherein the first image, the second image, and the navigation information are each generated and received at approximately the same point in time.

3. The method of claim 1, wherein transforming the embedded point cloud data into the geo-referenced coordinate space based on the navigation information comprises: adjusting the embedded point cloud data according to the IR camera and LIDAR attitude and linear offsets relative to a navigation system of the vehicle, wherein adjustments are made using six-degree-of-freedom (DOF) platform data including latitude, longitude, altitude, heading, pitch, and roll from the navigation system of the vehicle.

4.
The method of claim 1, wherein combining the transformed embedded point cloud with imagery of terrain of the environment comprises: overlaying an image of the terrain onto the transformed embedded point cloud.

5. The method of claim 1, further comprising displaying the synthetic image of the terrain of the environment on a multi-function display (MFD) of the vehicle.

6. The method of claim 1, further comprising displaying the synthetic image of the terrain of the environment on a head-mounted display (HMD).

7. The method of claim 1, further comprising performing the method in real-time as the infrared information is received from the IR camera and the laser point cloud data is received from the LIDAR during operation of the vehicle traversing the environment.

8. The method of claim 1, further comprising, while the vehicle is traversing the environment: receiving the infrared information from the IR camera; receiving the laser point cloud data from the LIDAR; receiving the navigation information from a navigation system on the vehicle; performing the method in real-time to generate the synthetic image representative of the environment of the vehicle while the vehicle is traversing the environment; and displaying the synthetic image of the terrain of the environment on a display of the vehicle.

9. The method of claim 1, further comprising: generating the synthetic image based on the vehicle operating in a degraded visual environment (DVE) including near-zero to zero visibility conditions.

10. The method of claim 1, further comprising: determining a level of obscuration of the embedded point cloud; and adjusting parameters of one or more of the IR camera and the LIDAR on the vehicle based on the level of obscuration for adaptive feedback control of sensor parameters.

11.
The method of claim 1, further comprising: storing the infrared information from the IR camera and the laser point cloud data from the LIDAR into a dataset; determining, as data of the infrared information from the IR camera and the laser point cloud data from the LIDAR is received, whether the data indicates additional spatial resolution of a representation of the environment; based on the data indicating the additional spatial resolution, generating a new higher-resolution data point in the dataset; and based on the data not indicating the additional spatial resolution, evaluating the data for update.

12. The method of claim 11, wherein determining whether the data indicates additional spatial resolution of the representation of the environment comprises: determining whether the data is of an alternate range or viewpoint to a targeted object or area.

13. The method of claim 11, wherein determining whether the data indicates additional spatial resolution of the representation of the environment comprises: determining sensor measurement accuracy, wherein the sensor measurement accuracy is based on a level of obscuration of the environment.

14.
A non-transitory computer readable medium having stored thereon instructions that, when executed by a computing device, cause the computing device to perform functions comprising: generating a first image of an environment using infrared information collected from an infrared (IR) camera on a vehicle; generating a second image of the environment using laser point cloud data collected from a light detection and ranging (LIDAR) on the vehicle; for each laser data point of the laser point cloud data from the LIDAR, projecting the laser data point into a corresponding pixel location of the first image so as to map the laser point cloud data onto the first image; based on mapping the laser point cloud data onto the first image, generating an embedded point cloud representative of the environment based on a combination of the first image and the second image such that additional data is embedded into the laser point cloud data; receiving navigation information of the environment traversed by the vehicle from data stored in a navigation database; transforming the embedded point cloud into a geo-referenced coordinate space based on the navigation information; and combining the transformed embedded point cloud with imagery of terrain of the environment to generate a synthetic image representative of the environment of the vehicle.

15. The non-transitory computer readable medium of claim 14, wherein combining the transformed embedded point cloud with imagery of terrain of the environment comprises: overlaying an image of the terrain onto the transformed embedded point cloud.

16. The non-transitory computer readable medium of claim 14, further comprising performing the functions in real-time as the infrared information is received from the IR camera and the laser point cloud data is received from the LIDAR during operation of the vehicle traversing the environment.

17.
A system comprising: an infrared (IR) camera to collect infrared information of an environment of a vehicle; a light detection and ranging (LIDAR) to collect laser point cloud data of the environment of the vehicle; a navigation system configured to determine navigation information of the vehicle; a processor to generate a synthetic representation of the environment of the vehicle, in real-time while the vehicle is traversing the environment, based on outputs of the IR camera, the LIDAR, and the navigation system, wherein generation of the synthetic representation comprises: for each laser data point of the laser point cloud data from the LIDAR, projecting the laser data point into a corresponding pixel location of a first image generated from the infrared information so as to map the laser point cloud data onto the first image; based on mapping the laser point cloud data onto the first image, generating an embedded point cloud representative of the environment based on a combination of the infrared information and the laser point cloud data such that additional data is embedded into the laser point cloud data; and combining the embedded point cloud with imagery of terrain of the environment; and a display to display the synthetic representation of the terrain of the environment.

18. The system of claim 17, wherein the processor is further configured to generate the synthetic representation based on the vehicle operating in a degraded visual environment (DVE) including near-zero to zero visibility conditions.

19. The system of claim 17, wherein the processor is further configured to: determine a level of obscuration of the embedded point cloud; and adjust parameters of one or more of the IR camera and the LIDAR on the vehicle based on the level of obscuration for adaptive feedback control of sensor parameters.
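Claim 3's adjustment of the embedded point cloud using six-DOF platform data can be sketched as a body-to-local-level rotation built from heading, pitch, and roll, plus the sensor's linear offset relative to the navigation system. The ZYX (yaw-pitch-roll) Euler convention and the function names are assumptions; the final conversion of the local frame to latitude/longitude/altitude is a separate geodetic step omitted here.

```python
import numpy as np

def body_to_local(points_body, heading, pitch, roll, offset=np.zeros(3)):
    """Rotate points from the vehicle body frame into a local level frame
    using heading, pitch, and roll (radians), after applying the sensor's
    linear offset (lever arm) relative to the navigation system."""
    ch, sh = np.cos(heading), np.sin(heading)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[ch, -sh, 0], [sh, ch, 0], [0, 0, 1]])  # heading (yaw)
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])  # pitch
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])  # roll
    R = Rz @ Ry @ Rx  # ZYX composition (an assumed convention)
    return (R @ (points_body + offset).T).T
```

For example, a point one unit ahead of the vehicle, with a 90-degree heading and zero pitch and roll, rotates onto the local east axis.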
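Claim 10's adaptive feedback loop, estimating obscuration from the embedded point cloud and adjusting sensor parameters, might look like the following. The obscuration metric (fraction of expected returns lost), the thresholds, and the parameter names are entirely illustrative; the patent does not fix a formula.

```python
def obscuration_level(embedded_cloud, expected_returns):
    """Estimate obscuration as the fraction of expected LIDAR returns that
    were lost (a hypothetical metric for illustration)."""
    return 1.0 - len(embedded_cloud) / max(expected_returns, 1)

def adjust_sensor_params(level, params):
    """Simple feedback rule: raise LIDAR pulse energy and IR integration
    time as obscuration grows, relax them when visibility is good.
    Thresholds and gain factors are illustrative assumptions."""
    if level > 0.5:
        params["lidar_pulse_energy"] *= 1.2
        params["ir_integration_ms"] *= 1.5
    elif level < 0.1:
        params["lidar_pulse_energy"] *= 0.9
    return params
```

In practice such a controller would also clamp parameters to hardware limits and smooth the obscuration estimate over time to avoid oscillation.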
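Claims 11-13 describe deciding, per incoming measurement, whether it adds spatial resolution (insert a new data point) or should instead be evaluated as an update to an existing point. One way to sketch this is a voxel-grid dataset keyed by quantized coordinates: an empty cell means new resolution, and an occupied cell is updated only if the new measurement looks more accurate. The cell-size grid and the shorter-range-wins heuristic are illustrative assumptions, not the patented criteria.

```python
def update_dataset(dataset, point, cell_size):
    """Insert `point` = (x, y, z, range) into `dataset` (a dict keyed by
    voxel index) if its cell is empty (new spatial resolution); otherwise
    evaluate for update, here by keeping the shorter-range measurement."""
    x, y, z, rng = point
    key = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
    existing = dataset.get(key)
    if existing is None:
        dataset[key] = point      # new higher-resolution data point
    elif rng < existing[3]:
        dataset[key] = point      # update: closer viewpoint assumed better
    return dataset
```

Claim 13's accuracy test could be folded in by scaling the comparison with an obscuration-dependent error estimate rather than raw range alone.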
Patents cited by this patent (2)
Whalen, Donald J.; Walker, Brad A.; Schultz, Scott E.; Heberlein, Ronald E.; Han, David I., Sensor-based navigation correction.