Predictive sensor array configuration system for an autonomous vehicle
IPC Classification Information
Country / Type
United States (US) Patent
Granted
International Patent Classification (IPC, 7th edition)
B60W-040/02
G01S-013/86
G01S-017/93
G01S-007/40
H04W-004/48
G01S-017/02
G01S-013/93
G01S-007/497
H04W-084/18
H04W-004/38
G05D-001/02
G01S-007/48
H04L-029/08
Application Number
US-0694493
(2017-09-01)
Registration Number
US-10220852
(2019-03-05)
Inventor / Address
Valois, Jean-Sebastien
Applicant / Address
Uber Technologies, Inc.
Agent / Address
Mahamedi IP Law LLP
Citation Information
Times cited: 0
Patents cited: 24
Abstract
An autonomous vehicle (AV) can include a set of sensors generating sensor data corresponding to a surrounding environment of the AV. The AV can further include a control system that determines imminent lighting conditions for one or more cameras of the set of sensors, and executes a set of configurations for the one or more cameras to preemptively compensate for the imminent lighting conditions.
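The core idea in the abstract — determine imminent lighting conditions, then reconfigure the cameras before the change arrives — can be sketched as follows. This is an illustrative assumption of how such a controller might look, not the patent's actual implementation; all names (`SubMap`, `CameraConfig`, `plan_camera_config`) and the 0.5-stop-per-unit scaling are hypothetical.

```python
# Hypothetical sketch of preemptive camera configuration: compute a brightness
# differential between current and imminent conditions and adjust the aperture
# before the transition. All names and constants are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class SubMap:
    """Recorded surface data for an upcoming road segment."""
    road_feature: str           # e.g. "tunnel", "overpass", "open_road"
    expected_brightness: float  # expected scene brightness, arbitrary units

@dataclass
class CameraConfig:
    aperture_f_stop: float

def plan_camera_config(current_brightness: float, upcoming: SubMap,
                       base_f_stop: float = 4.0) -> CameraConfig:
    """Derive a camera configuration that compensates for the imminent
    lighting conditions implied by the upcoming sub-map."""
    # Brightness differential between current and imminent conditions.
    differential = upcoming.expected_brightness - current_brightness
    # Darker ahead -> open the aperture (smaller f-stop); brighter -> close it.
    # The 0.5-stop-per-unit scaling is an arbitrary illustrative choice,
    # clamped to a typical f/1.4 .. f/16 lens range.
    f_stop = max(1.4, min(16.0, base_f_stop + 0.5 * differential))
    return CameraConfig(aperture_f_stop=f_stop)

# Example: approaching a tunnel that is much darker than the current road.
tunnel = SubMap(road_feature="tunnel", expected_brightness=-4.0)
config = plan_camera_config(current_brightness=2.0, upcoming=tunnel)
print(config.aperture_f_stop)  # aperture opened to the clamp floor, 1.4
```

The aperture here stands in for any exposure parameter (gain, shutter time); the point is only that the adjustment is computed from predicted rather than measured conditions.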
Representative Claims
1. An autonomous vehicle (AV) comprising: a set of sensors generating sensor data corresponding to a surrounding environment of the AV; a control system comprising one or more processors executing an instruction set, causing the control system to: based on the sensor data generated by the set of sensors, determine imminent lighting conditions for one or more cameras of the set of sensors; and execute a set of configurations for the one or more cameras to preemptively compensate for the imminent lighting conditions.

2. The AV of claim 1, wherein the executed instructions further cause the control system to: store a set of sub-maps comprising recorded surface data of a given region upon which the AV operates; wherein the executed instructions cause the control system to determine the imminent lighting conditions by analyzing a current sub-map from the stored set of sub-maps.

3. The AV of claim 2, wherein the stored set of sub-maps identify road features that affect the imminent lighting conditions, the road features comprising at least one of a tunnel, an overpass, or proximate buildings.

4. The AV of claim 2, wherein each sub-map in the set of sub-maps comprises at least one of recorded LIDAR data or recorded image data.

5. The AV of claim 1, further comprising: acceleration, braking, and steering systems; wherein the executed instructions further cause the control system to: dynamically process the sensor data from the set of sensors to autonomously operate the acceleration, braking, and steering systems along a current route; wherein the executed instructions cause the control system to determine the imminent lighting conditions and execute the set of configurations dynamically as the AV travels along the current route.

6. The AV of claim 1, wherein the executed instructions further cause the control system to: dynamically determine a brightness differential between current lighting conditions and the imminent lighting conditions; wherein the executed instructions cause the control system to determine the set of configurations based on the brightness differential.

7. The AV of claim 1, wherein executing the set of configurations for the one or more cameras comprises preemptively adjusting aperture settings of the one or more cameras based on the imminent lighting conditions.

8. The AV of claim 1, wherein the executed instructions cause the control system to determine the imminent lighting conditions by analyzing the sensor data from the set of sensors.

9. The AV of claim 1, wherein changes in the imminent lighting conditions correspond to at least one of shadows, lights, the Sun, or solar reflections.

10. The AV of claim 1, wherein the executed instructions cause the control system to determine the imminent lighting conditions by receiving data indicating the imminent lighting conditions from forward operating AVs in relation to the AV.

11. A non-transitory computer-readable medium storing instructions that, when executed by one or more processors of an autonomous vehicle (AV), cause the one or more processors to: based on sensor data generated by one or more sensors of the AV, determine imminent lighting conditions for one or more cameras of a set of sensors of the AV, wherein the sensor data corresponds to a surrounding environment of the AV; and execute a set of configurations for the one or more cameras to preemptively compensate for the imminent lighting conditions.

12. The non-transitory computer-readable medium of claim 11, wherein the executed instructions further cause the one or more processors to: store a set of sub-maps comprising recorded surface data of a given region upon which the AV operates; wherein the executed instructions cause the one or more processors to determine the imminent lighting conditions by analyzing a current sub-map from the stored set of sub-maps.

13. The non-transitory computer-readable medium of claim 12, wherein the stored set of sub-maps identify road features that affect the imminent lighting conditions, the road features comprising at least one of a tunnel, an overpass, or proximate buildings.

14. The non-transitory computer-readable medium of claim 12, wherein each sub-map in the set of sub-maps comprises at least one of recorded LIDAR data or recorded image data.

15. The non-transitory computer-readable medium of claim 11, wherein the AV further comprises acceleration, braking, and steering systems, and wherein the executed instructions further cause the one or more processors to: dynamically process the sensor data from the set of sensors to autonomously operate the acceleration, braking, and steering systems along a current route; wherein the executed instructions cause the one or more processors to determine the imminent lighting conditions and execute the set of configurations dynamically as the AV travels along the current route.

16. The non-transitory computer-readable medium of claim 11, wherein the executed instructions further cause the one or more processors to: dynamically determine a brightness differential between current lighting conditions and the imminent lighting conditions; wherein the executed instructions cause the one or more processors to determine the set of configurations based on the brightness differential.

17. The non-transitory computer-readable medium of claim 11, wherein executing the set of configurations for the one or more cameras comprises preemptively adjusting aperture settings of the one or more cameras based on the imminent lighting conditions.

18. The non-transitory computer-readable medium of claim 11, wherein the executed instructions cause the one or more processors to determine the imminent lighting conditions by analyzing the sensor data from the set of sensors.

19. The non-transitory computer-readable medium of claim 11, wherein changes in the imminent lighting conditions correspond to at least one of shadows, lights, the Sun, or solar reflections.

20. A computer-implemented method of preemptively configuring sensors of an autonomous vehicle (AV), the method being performed by one or more processors of the AV and comprising: based on sensor data generated by one or more sensors of the AV, determining imminent lighting conditions for one or more cameras of a set of sensors of the AV, wherein the sensor data corresponds to a surrounding environment of the AV; and executing a set of configurations for the one or more cameras to preemptively compensate for the imminent lighting conditions.
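Claims 2 and 3 describe predicting lighting changes from stored sub-maps that identify road features such as tunnels and overpasses. A minimal sketch of that look-ahead, under assumed data structures (the `(start_m, feature)` route model and the 150 m window are illustrative choices, not from the patent):

```python
# Illustrative sketch of sub-map-based prediction (claims 2-3): scan the
# stored route sub-maps ahead of the AV's current position for a road
# feature that implies a lighting change. Data model is an assumption.

FEATURE_LIGHTING = {
    "tunnel": "dark",
    "overpass": "shadowed",
    "open_road": "ambient",
}

def imminent_lighting(sub_maps, position_m, lookahead_m=150.0):
    """Return the lighting condition implied by the first road feature
    inside the look-ahead window, or None if no change is imminent."""
    for start_m, feature in sub_maps:
        if position_m < start_m <= position_m + lookahead_m:
            return FEATURE_LIGHTING.get(feature, "ambient")
    return None

# Route with a tunnel starting at 500 m and an overpass at 1200 m.
route = [(500.0, "tunnel"), (1200.0, "overpass")]
print(imminent_lighting(route, position_m=400.0))  # tunnel within 150 m -> "dark"
print(imminent_lighting(route, position_m=900.0))  # nothing in window -> None
```

In the claimed system the prediction would come from recorded LIDAR or image data in the sub-maps (claim 4) rather than a feature lookup table, but the control flow — predict from the map, then reconfigure the cameras — is the same.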
Patents Cited by This Patent (24)
Levinson, Jesse Sol; Douillard, Bertrand Robert; Sibley, Gabriel Thurston, Calibration for autonomous vehicle operation.
Elangovan, Vidya; Milnes, Kenneth A.; Heidmann, Timothy P., Detecting an object in an image using camera registration data indexed to location or camera sensors.
Ferguson, David I.; Templeton, Bradley, Use of relationship between activities of different traffic signals in a network to improve traffic signal state estimation.