Single frame 4D detection using deep fusion of camera image, imaging RADAR and LiDAR point cloud
IPC Classification
Country / Type
United States (US) Patent
Granted
International Patent Classification (IPC, 7th edition)
G06K-009/00
G06K-009/62
G01S-013/86
G01S-007/41
G01S-007/40
G01S-013/42
G06K-009/46
G06T-007/80
G06T-007/70
G06T-007/20
G06T-007/62
G05D-001/02
G05D-001/00
Application Number
16781672 (2020-02-04)
Registration Number
11113584 (2021-09-07)
Inventors
Deng, Huazeng
Rao, Ajaya H S
Aithal, Ashwath
Chen, Xu
Tan, Ruoyu
Yalla, Veera Ganesh
Applicant
NIO USA, Inc.
Attorney / Agent
Sheridan Ross P.C.
Citation Information
Times cited: 0 / Patents cited: 0
Abstract
Embodiments of the present disclosure are directed to a method for object detection. The method includes receiving sensor data indicative of one or more objects for each of a camera subsystem, a LiDAR subsystem, and an imaging RADAR subsystem. The sensor data is received simultaneously and within one frame for each of the subsystems. The method also includes extracting one or more feature representations of the objects from camera image data, LiDAR point cloud data and imaging RADAR point cloud data and generating image feature maps, LiDAR feature maps and imaging RADAR feature maps. The method further includes combining the image feature maps, the LiDAR feature maps and the imaging RADAR feature maps to generate merged feature maps and generating object classification, object position, object dimensions, object heading and object velocity from the merged feature maps.
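The fusion step the abstract describes can be sketched with NumPy. This is a hypothetical illustration, not the patented implementation: the per-sensor feature maps are random stand-ins for the outputs of the camera, LiDAR and imaging-RADAR extractors, and the detection head is a toy linear layer. The key idea shown is that because all three maps share the same width, length and channel count, they can be merged by channel-wise concatenation before a single classifier/regressor consumes them.

```python
import numpy as np

H, W, C = 64, 64, 16  # common feature-map size (hypothetical)

rng = np.random.default_rng(0)
# Stand-ins for the per-sensor feature maps produced by each extractor
# (camera backbone, LiDAR encoder, imaging-RADAR encoder).
camera_feat = rng.standard_normal((H, W, C))
lidar_feat = rng.standard_normal((H, W, C))
radar_feat = rng.standard_normal((H, W, C))

# Deep fusion by channel-wise concatenation: the three maps share the
# same width, length and number of channels, so they stack cleanly.
merged = np.concatenate([camera_feat, lidar_feat, radar_feat], axis=-1)
assert merged.shape == (H, W, 3 * C)

# Toy detection head: one linear layer mapping each cell's fused feature
# vector to [class score, x, y, z, w, l, h, heading, vx, vy].
head_weights = rng.standard_normal((3 * C, 10)) * 0.01
predictions = merged.reshape(-1, 3 * C) @ head_weights
print(predictions.shape)  # (4096, 10)
```

In a real network the concatenation would be followed by learned convolutional layers rather than a single matrix multiply, but the shape bookkeeping is the same.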
Representative Claims
1. A method for object detection, the method comprising: receiving, by a processor, sensor data indicative of one or more objects for each of a camera subsystem, a LiDAR subsystem, and an imaging RADAR subsystem, wherein the sensor data includes camera image data, LiDAR point cloud data and imaging RADAR point cloud data and the sensor data is received simultaneously and within one frame for each of the camera subsystem, the LiDAR subsystem, and the imaging RADAR subsystem; extracting, by the processor, one or more feature representations of the objects from the camera image data, LiDAR point cloud data and imaging RADAR point cloud data; generating, by the processor, image feature maps from the extracted one or more feature representations of the objects from the camera image data, LiDAR feature maps from the LiDAR point cloud data, and imaging RADAR feature maps from the imaging RADAR point cloud data; combining, by the processor, the image feature maps, the LiDAR feature maps, and the imaging RADAR feature maps to generate merged feature maps; and generating, by the processor, object classification, object position, object dimensions, object heading and object velocity from the merged feature maps.

2. The method of claim 1, wherein each of the image feature maps, LiDAR feature maps and imaging RADAR feature maps are feature vectors that have a same dimension of width, length and a number of channels.

3. The method of claim 1, wherein the processor includes a feature extractor algorithm to extract the one or more feature representations of the objects from the camera image data, LiDAR point cloud data and imaging RADAR point cloud data and a classifier and regressor algorithm to generate the object classification, the object position, the object dimensions, the object heading and the object velocity.

4.
The method of claim 3, wherein the feature extractor algorithm includes a Deep Neural Network (DNN) algorithm, a Histogram of Oriented Gradients (HOG) algorithm, a Scale Invariant Feature Transform (SIFT) algorithm or a Speeded-Up Robust Feature (SURF) algorithm.

5. The method of claim 3, wherein the classifier and regressor algorithm includes a Deep Neural Network (DNN) algorithm, a Decision Tree (DT) algorithm or a Support Vector Machine (SVM) algorithm.

6. The method of claim 1, further comprising initializing and calibrating, by the processor, sensors from each of the camera subsystem, the LiDAR subsystem, and the imaging RADAR subsystem.

7. The method of claim 6, wherein calibrating the sensors from each of the camera subsystem, the LiDAR subsystem, and the imaging RADAR subsystem includes intrinsic and extrinsic calibrations.

8. A method for object detection, the method comprising: receiving, by a processor, sensor data indicative of one or more objects for each of a camera subsystem, a LiDAR subsystem, and an imaging RADAR subsystem, wherein the sensor data includes camera image data, LiDAR point cloud data and imaging RADAR point cloud data and the sensor data is received simultaneously and within one frame for each of the camera subsystem, the LiDAR subsystem, and the imaging RADAR subsystem; combining, by the processor, the camera image data, LiDAR point cloud data and imaging RADAR point cloud data to create fused raw data; extracting, by the processor, one or more feature representations of the objects from the fused raw data; generating, by the processor, fused feature maps from the extracted one or more feature representations of the objects from the fused raw data; and generating, by the processor, object classification, object position, object dimensions, object heading and object velocity from the fused feature maps.

9.
The method of claim 8, wherein the processor includes a feature extractor algorithm to extract the one or more feature representations of the objects from fused raw data and a classifier and regressor algorithm to generate the object classification, the object position, the object dimensions, the object heading and the object velocity.

10. The method of claim 9, wherein the feature extractor algorithm includes a Deep Neural Network (DNN) algorithm, a Histogram of Oriented Gradients (HOG) algorithm, a Scale Invariant Feature Transform (SIFT) algorithm or a Speeded-Up Robust Feature (SURF) algorithm.

11. The method of claim 9, wherein the classifier and regressor algorithm includes a Deep Neural Network (DNN) algorithm, a Decision Tree (DT) algorithm or a Support Vector Machine (SVM) algorithm.

12. The method of claim 8, further comprising initializing and calibrating, by the processor, sensors from each of the camera subsystem, the LiDAR subsystem, and the imaging RADAR subsystem.

13. The method of claim 12, wherein calibrating the sensors from each of the camera subsystem, the LiDAR subsystem, and the imaging RADAR subsystem includes intrinsic and extrinsic calibrations.

14.
A vehicle control system, comprising: a processor; and a memory coupled with and readable by the processor and storing therein a set of instructions, which when executed by the processor, cause the processor to detect objects in a single frame by: receiving sensor data indicative of one or more objects for each of a camera subsystem, a LiDAR subsystem, and an imaging RADAR subsystem, wherein the sensor data includes camera image data, LiDAR point cloud data and imaging RADAR point cloud data and the sensor data is received simultaneously and within one frame for each of the camera subsystem, the LiDAR subsystem, and the imaging RADAR subsystem; extracting one or more feature representations of the objects from the camera image data, LiDAR point cloud data and imaging RADAR point cloud data; generating image feature maps from the extracted one or more feature representations of the objects from the camera image data, LiDAR feature maps from the LiDAR point cloud data, and imaging RADAR feature maps from the imaging RADAR point cloud data; combining the image feature maps, the LiDAR feature maps, and the imaging RADAR feature maps to generate merged feature maps; and generating object classification, object position, object dimensions, object heading and object velocity from the merged feature maps.

15. The vehicle control system of claim 14, wherein each of the image feature maps, LiDAR feature maps and imaging RADAR feature maps are feature vectors that have a same dimension of width, length and a number of channels.

16. The vehicle control system of claim 14, wherein the processor includes a feature extractor algorithm to extract the one or more feature representations of the objects from the camera image data, LiDAR point cloud data and imaging RADAR point cloud data and a classifier and regressor algorithm to generate the object classification, the object position, the object dimensions, the object heading and the object velocity.

17.
The vehicle control system of claim 16, wherein the feature extractor algorithm includes a Deep Neural Network (DNN) algorithm, a Histogram of Oriented Gradients (HOG) algorithm, a Scale Invariant Feature Transform (SIFT) algorithm or a Speeded-Up Robust Feature (SURF) algorithm.

18. The vehicle control system of claim 16, wherein the classifier and regressor algorithm includes a Deep Neural Network (DNN) algorithm, a Decision Tree (DT) algorithm or a Support Vector Machine (SVM) algorithm.

19. The vehicle control system of claim 14, further comprising initializing and calibrating sensors from each of the camera subsystem, the LiDAR subsystem, and the imaging RADAR subsystem.

20. The vehicle control system of claim 19, wherein calibrating the sensors from each of the camera subsystem, the LiDAR subsystem, and the imaging RADAR subsystem includes intrinsic and extrinsic calibrations.
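Several dependent claims turn on intrinsic and extrinsic calibration of the sensors. The sketch below illustrates what those two calibrations do in a standard pinhole-camera model: the extrinsic transform (rotation `R`, translation `t`) maps a LiDAR point into the camera frame, and the intrinsic matrix `K` maps it onto the image plane. All numeric values here are made-up placeholders, not calibration data from the patent; a real system would estimate them offline.

```python
import numpy as np

# Hypothetical intrinsic matrix: focal lengths and principal point, in pixels.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

# Hypothetical extrinsic calibration: rigid transform from the LiDAR
# frame to the camera frame (identity rotation, small translation).
R = np.eye(3)
t = np.array([0.1, -0.2, 0.0])

def lidar_to_pixel(p_lidar):
    """Project a 3-D LiDAR point into camera pixel coordinates."""
    p_cam = R @ p_lidar + t      # extrinsic: LiDAR frame -> camera frame
    u, v, depth = K @ p_cam      # intrinsic: camera frame -> image plane
    return u / depth, v / depth  # perspective divide

u, v = lidar_to_pixel(np.array([2.0, 1.0, 10.0]))
print(u, v)  # 850.0 440.0
```

The same machinery aligns the imaging-RADAR point cloud with the camera image, which is what lets feature maps (or raw data, in the early-fusion claims) from different sensors be combined cell-for-cell.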