A system and method for collecting and processing sensor data for facilitating and/or enabling autonomous, semi-autonomous, and remote operation of a vehicle, including: collecting surroundings data at one or more sensors, and determining properties of the surroundings of the vehicle and/or the behavior of the vehicle based on the surroundings data at a computing system.
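As a rough, non-authoritative illustration of the data flow summarized above, the Python sketch below models a single sensing cycle: surroundings data is collected at one or more sensors, and a computing system derives properties of the surroundings from it. All names (VehicleSensors, OnboardComputer, SurroundingsProperties) and the interface shape are assumptions made for illustration, not details from the patent.

# Hypothetical sketch of the collect-and-process cycle described in the abstract.
from dataclasses import dataclass
from typing import Any, Dict, List, Tuple

@dataclass
class SurroundingsProperties:
    object_locations: List[Tuple[float, float]]  # positions of nearby objects (illustrative)
    lane_offsets: List[float]                     # lateral offsets of lane markings (illustrative)

class VehicleSensors:
    """Abstract source of raw surroundings data (cameras, radar, etc.)."""
    def read(self) -> Dict[str, Any]:
        raise NotImplementedError

class OnboardComputer:
    """Abstract computing system that derives surroundings properties."""
    def estimate_surroundings(self, raw: Dict[str, Any]) -> SurroundingsProperties:
        raise NotImplementedError

def sensing_cycle(sensors: VehicleSensors, computer: OnboardComputer) -> SurroundingsProperties:
    raw = sensors.read()                          # collect surroundings data at the sensors
    return computer.estimate_surroundings(raw)    # determine properties of the surroundings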
Representative Claims
1. A system for facilitating autonomous, semi-autonomous, and remote operation of a vehicle, comprising:
a first forward camera, arranged exterior to a cabin of the vehicle superior to a windshield of the vehicle and aligned along a centerline of a longitudinal axis of the cabin, that outputs a first video stream, wherein the first forward camera defines a first angular field of view (AFOV) and a first focal length (FL);
a second forward camera, arranged interior to the cabin proximal an upper portion of the windshield and aligned along the centerline, that outputs a second video stream, wherein the second forward camera defines a second AFOV narrower than the first AFOV;
a forward radar module, coupled to an exterior surface of the vehicle and defining a first ranging length greater than the first FL, that outputs a first object signature;
an onboard computing subsystem, arranged at the vehicle, comprising:
a central processing unit (CPU) that continuously processes the first video stream and outputs a first object localization dataset according to a set of explicitly programmed rules,
a graphical processing unit (GPU) cluster that continuously processes the first video stream and the second video stream, in parallel to and simultaneously with the CPU, and outputs a second object localization dataset based on a trained machine-learning model, and
a scoring module that generates a first comparison between the first object localization dataset and the second object localization dataset, and outputs a confidence metric based on the comparison and the first object signature, wherein the confidence metric is indicative of the usability of the first and second video streams for at least one of localization, mapping, and control of the vehicle;
wherein the onboard computing subsystem controls at least one of the first forward camera, the second forward camera, and the forward radar module based on the confidence metric.

2. The system of claim 1, further comprising:
a side camera, mounted to a mirror stem at a side of the vehicle and defining a third AFOV oriented toward a rear direction relative to the vehicle, and operable between an on-state and an off-state, wherein in the on-state the side camera outputs a third video stream and in the off-state the side camera is inoperative;
wherein the CPU processes the third video stream and the first video stream simultaneously during operation of the side camera in the on-state to output the first object localization dataset.

3. The system of claim 2, wherein the onboard computing subsystem automatically operates the side camera in the off-state in response to vehicle motion away from the side direction.

4. The system of claim 2, further comprising a side radar module, mounted at an external surface of the vehicle below a door at the side of the vehicle, that outputs a second object signature, and wherein the scoring module outputs the confidence metric based on the second object signature.

5. The system of claim 1, wherein the first object localization dataset comprises a first foreign vehicle location and the second object localization dataset comprises a second foreign vehicle location.

6. The system of claim 1, wherein at least one of the first forward camera and the second forward camera comprises a stereocamera.

7. The system of claim 1, wherein an orientation of the second forward camera is adjustable based on an instruction set received from a remote operator.

8. The system of claim 7, wherein the instruction set is automatically generated based on a head position of the remote operator.

9. The system of claim 1, further comprising an alert module that outputs an alert signature based on the confidence metric exceeding a threshold value, wherein the alert signature comprises at least one of a visible signature and an audible signature.
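Claims 1-9 above describe two perception paths running in parallel: a rule-based path on a CPU and a learned path on a GPU cluster, with a scoring module that compares the two object localization datasets and folds in the radar object signature to produce a confidence metric. The Python sketch below is one minimal, hypothetical way such a comparison could be computed; the box-based detection format, the mean-IoU agreement measure, the 0.8/0.2 weighting, and the 0.5 control threshold are all assumptions for illustration, not values disclosed in the patent.

# Hypothetical sketch of the dual-path scoring arrangement in claims 1-9.
# Detections are axis-aligned boxes (x_min, y_min, x_max, y_max); the agreement
# measure (mean best IoU) and the gating rule are illustrative assumptions.
from typing import List, Tuple

Box = Tuple[float, float, float, float]

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def compare_localizations(rule_based: List[Box], learned: List[Box]) -> float:
    """Mean best-match IoU between the two object localization datasets."""
    if not rule_based or not learned:
        return 0.0
    return sum(max(iou(r, l) for l in learned) for r in rule_based) / len(rule_based)

def confidence_metric(rule_based: List[Box], learned: List[Box], radar_confirms: bool) -> float:
    """Blend CPU/GPU agreement with the radar object signature (assumed weighting)."""
    agreement = compare_localizations(rule_based, learned)
    return 0.8 * agreement + 0.2 * (1.0 if radar_confirms else 0.0)

# Example: the onboard subsystem might gate use of the video streams on the metric.
cpu_boxes = [(10.0, 10.0, 50.0, 60.0)]   # rule-based (CPU) detections
gpu_boxes = [(12.0, 11.0, 52.0, 58.0)]   # learned (GPU) detections
metric = confidence_metric(cpu_boxes, gpu_boxes, radar_confirms=True)
use_streams_for_control = metric > 0.5   # threshold is an assumption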
10. A method for facilitating autonomous, semi-autonomous, and remote operation of a vehicle, comprising:
collecting a first video stream at a first camera arranged external to a cabin of the vehicle, the first camera comprising a first angular field of view (AFOV) oriented toward a forward direction relative to the vehicle;
collecting a second video stream at a second camera arranged within the cabin along a longitudinal centerline of the cabin, the second camera comprising a second AFOV oriented toward the forward direction, wherein the second AFOV is narrower than the first AFOV;
processing the first video stream at a central processing unit (CPU) arranged within the cabin to extract a first object localization dataset according to an explicitly programmed set of rules;
processing a combination of the first and second video streams at a graphical processing unit (GPU) cluster arranged within the cabin to extract a second object localization dataset according to a trained machine-learning model, simultaneously with processing the first video stream;
generating a comparison between the first object localization dataset and the second object localization dataset;
generating a confidence metric based on the comparison, wherein the confidence metric is indicative of the usability of the first and second video streams for at least one of localization, mapping, and control of the vehicle; and
controlling at least one of the first camera and the second camera based on the confidence metric.

11. The method of claim 10, further comprising: collecting a third video stream at a third camera arranged external to the cabin, the third camera comprising a third AFOV oriented toward a side direction relative to the vehicle, wherein the third AFOV is wider than the first AFOV; and processing the third video stream at the CPU contemporaneously with processing the first video stream to extract the first object localization dataset.

12. The method of claim 11, further comprising: receiving a navigation instruction indicative of vehicle motion away from the side direction; and in response to receiving the navigation instruction, ceasing collecting the third video stream and ceasing processing the third video stream.

13. The method of claim 11, further comprising: predicting a navigation instruction indicative of vehicle motion away from the side direction; and in response to predicting the navigation instruction, ceasing collecting the third video stream and ceasing processing the third video stream.

14. The method of claim 10, further comprising: collecting a fourth video stream at a fourth camera adjacent to the third camera, the fourth camera comprising a fourth AFOV oriented toward a rear direction relative to the vehicle, wherein the fourth AFOV is narrower than the first AFOV; and processing the fourth video stream at the GPU cluster contemporaneously with processing the combination of the first and second video streams to extract the second object localization dataset.

15. The method of claim 14, wherein the vehicle comprises a detachable trailer defining a length, and further comprising automatically determining the length based on the fourth video stream and automatically adjusting a focal length of the fourth camera based on the length.

16. The method of claim 10, wherein the first object localization dataset comprises a first lane location dataset, and wherein the second object localization dataset comprises a second lane location dataset.

17. The method of claim 10, wherein at least one of the first camera and the second camera comprises a stereocamera, and further comprising adjusting an orientation of the stereocamera based on a head orientation of a teleoperator in communication with the vehicle.

18. The method of claim 10, further comprising determining a first static portion of the first video stream and a second static portion of the second video stream, eliminating the first static portion of the first video stream prior to processing the first video stream, and eliminating the second static portion of the second video stream prior to processing the combination of the first video stream and the second video stream.

19. The method of claim 10, further comprising: collecting a range data stream at a set of rangefinders defining a total field of view including the first, second, third, and fourth fields of view; extracting an object signature from the range data stream; and generating the confidence metric based on the object signature.

20. The method of claim 10, further comprising automatically controlling the vehicle to pull over into a shoulder region of a roadway on which the vehicle is traveling, based on the confidence metric, holding the vehicle stationary for a predetermined time period, and automatically resuming travel on the roadway.
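Among the method claims, claim 18 calls for determining and eliminating static portions of each video stream before processing. A minimal, hypothetical sketch of that idea using frame differencing is shown below; the per-pixel change threshold and the zero-masking strategy are illustrative assumptions, not the technique disclosed in the patent.

# Hypothetical static-region masking in the spirit of claim 18: pixels that have
# not changed between consecutive frames are zeroed before further processing.
import numpy as np

def mask_static_portion(prev_frame: np.ndarray, curr_frame: np.ndarray,
                        threshold: float = 8.0) -> np.ndarray:
    """Return curr_frame with (approximately) static pixels zeroed out."""
    diff = np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))
    # Collapse the channel axis for color frames so the mask is per pixel.
    moving = diff.max(axis=-1) > threshold if diff.ndim == 3 else diff > threshold
    cond = moving[..., None] if curr_frame.ndim == 3 else moving
    return np.where(cond, curr_frame, 0)

# Example with two synthetic 4x4 grayscale frames:
prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1, 2] = 200                        # one pixel changed between frames
masked = mask_static_portion(prev, curr)
# masked is zero everywhere except the changed pixel at (1, 2)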
Patents Cited in This Patent (22)
Chiu, Chung-Cheng; Chung, Meng-Liang; Chen, Wen-Chung; Ku, Min-Yu, Apparatus and method for detecting obstacle through stereovision.
Levinson, Jesse Sol; Sibley, Gabriel Thurston; Rege, Ashutosh Gajanan, Machine-learning systems and techniques to optimize teleoperation and/or planner decisions.
Kleimenhagen, Karl W. (Peoria, IL); Kemner, Carl A. (Peoria Heights, IL); Bradbury, Walter J. (Peoria, IL); Koehrsen, Craig L. (Peoria, IL); Peterson, Joel L. (Peoria, IL); Schmidt, Larry E. (Peoria, IL); Stafford, Dar, System and method for operating an autonomous navigation system.