Autonomous vehicles use various computing systems to transport passengers from one location to another. A control computer sends messages to the various systems of the vehicle in order to maneuver the vehicle safely to the destination. The control computer may display information on an electronic display in order to allow the passenger to understand what actions the vehicle may be taking in the immediate future. Various icons and images may be used to provide this information to the passenger.
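The abstract describes a control computer that chooses icons and images so the passenger can see what the vehicle will do next. A minimal sketch of such an action-and-object to icon mapping might look like the following; the table entries, function name, and icon names are all illustrative assumptions, not the patent's actual icon set.

```python
from typing import Optional

# Hypothetical lookup from (planned action, detected object type) to a
# display icon, in the spirit of the abstract. Entries are assumptions.
ICON_TABLE = {
    ("avoid_headroom", "vehicle"): "headroom_zone_icon",
    ("keep_distance", "vehicle"): "safe_following_distance_icon",
    ("stop", "traffic_signal"): "red_signal_icon",
    ("wait", None): "waiting_text",
}


def select_icon(action: str, object_type: Optional[str]) -> str:
    """Return the icon name for an action/object pair, falling back to a
    generic route marker when no specific entry matches."""
    return ICON_TABLE.get((action, object_type), "route_marker")
```

A table-driven lookup like this keeps the display logic declarative: new actions or object types only add rows, not branches.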
Representative Claims
1. A vehicle comprising: a plurality of control apparatuses including a braking apparatus, an acceleration apparatus, and a steering apparatus; a user input device for inputting destination information; a geographic position component for determining the current location of the vehicle; an object detection apparatus for detecting and identifying a type of an object in or proximate to a roadway; memory for storing a detailed roadway map including roadways, traffic signals, and intersections; an electronic display for displaying information to a passenger; and a processor programmed to: receive the destination information; identify a route to the destination; determine, from location information received from the geographic position component and the stored map information, the current geographic location of the vehicle; identify an object and object type based on object information received from the object detection apparatus; determine at least one action to be taken including controlling at least one of the control apparatuses based on the identified object, the current geographic location of the vehicle, and the route, wherein the at least one action to be taken includes avoiding a headroom zone in front of the identified object; select a headroom zone image to be displayed based on the at least one action to be taken and the identified object; and display the selected headroom zone image on the electronic display.

2. The vehicle of claim 1, wherein the processor is further programmed to: select a second image to be displayed, wherein the second image to be displayed is an icon representing a second vehicle; and display the second selected image on the electronic display.

3. The vehicle of claim 2, wherein the icon representing the second vehicle is selected based on the type of vehicle.

4. The vehicle of claim 1, wherein the processor is further programmed to display, on the display, an image indicating a portion of the route to be traveled by the vehicle in the next few seconds.

5. The vehicle of claim 1, wherein the identified object is a third vehicle and the at least one action to be taken further includes maintaining a safe following distance behind the third vehicle.

6. The vehicle of claim 5, wherein the processor is further programmed to: select a safe following distance image to be displayed; and display the safe following distance image on the electronic display.

7. The vehicle of claim 1, wherein the at least one action to be taken further includes waiting and the vehicle is further programmed to display text indicating that the vehicle is waiting.

8. The vehicle of claim 1, wherein the processor is further programmed to determine a geographic area to be displayed such that a larger geographic area is displayed where the vehicle is moving faster, and a smaller geographic area is displayed where the vehicle is moving slower.

9. The vehicle of claim 1, wherein the processor is further programmed to determine a geographic area to be displayed such that a larger geographic area is displayed where the roadway is associated with a relatively high speed limit, and a smaller geographic area is displayed where the roadway is associated with a relatively low speed limit.

10. The vehicle of claim 1, wherein the at least one action to be taken further includes stopping at an intersection and the processor is further programmed to select and display an image indicating where the vehicle will stop at the intersection.

11. The vehicle of claim 1, wherein the processor is further programmed to determine the geographic area to be displayed based on the action to be taken, where if the action to be taken is a turn, the geographic area includes a larger view in the direction opposite to the turn.

12. The vehicle of claim 1, wherein the identified object is a traffic signal and the processor is further programmed to: select an image indicating a traffic signal, and display the image indicating the traffic signal on the display proximate to the location of the traffic signal.

13. The vehicle of claim 12, wherein the processor is further programmed to: determine the state of the traffic signal, wherein the image indicating the traffic signal is selected based on the state of the traffic signal; and determine the at least one action to be taken based on the state of the traffic signal.

14. The vehicle of claim 1, wherein the at least one action to be taken further includes changing to a different lane, and the processor is further programmed to select and display an image indicating a turn signal.

15. A method for selecting images for display on a display apparatus of a vehicle, the method comprising: receiving destination information from a user input device; identifying a route to the destination; receiving location information from a geographic position component; accessing stored map information including roadways, traffic signals, and intersections; determining, from the location information and the stored map information, the current geographic location of the vehicle; identifying an object of a roadway and an object type based on object information received from an object detection apparatus; determining at least one action to be taken including controlling at least one of a plurality of control apparatuses including a braking apparatus, an acceleration apparatus, and a steering apparatus, wherein the action to be taken is determined based on the identified object, the current geographic location of the vehicle, and the route, wherein the at least one action to be taken includes avoiding a headroom zone in front of the identified object; selecting a headroom zone image based on the at least one action to be taken and the identified object; and displaying the selected headroom zone image on the electronic display.

16. The method of claim 15, further comprising determining a geographic area to be displayed such that a larger geographic area is displayed where the vehicle is moving faster, and a smaller geographic area is displayed where the vehicle is moving slower.

17. The method of claim 15, further comprising determining the geographic area to be displayed based on the action to be taken, where if the at least one action to be taken includes a turn, the geographic area includes a larger view in the direction opposite to the turn.
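Claims 8, 9, 11, 16, and 17 describe a display whose geographic extent scales with vehicle speed and the roadway's speed limit, and which widens the view on the side opposite a turn. The logic can be sketched as follows; all function names, constants, and units are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch of the display-scaling logic of claims 8, 9, 11,
# 16, and 17. Constants and names are assumptions for illustration only.

def display_radius_m(speed_mps: float, speed_limit_mps: float,
                     base_m: float = 50.0, gain_s: float = 4.0) -> float:
    """Radius of the displayed map area, in meters.

    A faster-moving vehicle (claim 8) or a roadway with a higher speed
    limit (claim 9) yields a larger displayed area; slower conditions
    yield a smaller one.
    """
    return base_m + gain_s * max(speed_mps, speed_limit_mps)


def view_offset_m(action: str, radius_m: float, bias: float = 0.3) -> float:
    """Lateral shift of the view center, in meters (negative = left).

    For a turn (claims 11 and 17), shift the view center toward the side
    opposite the turn, so that side shows a larger geographic area.
    """
    if action == "turn_left":
        return bias * radius_m   # shift right: larger view on the right
    if action == "turn_right":
        return -bias * radius_m  # shift left: larger view on the left
    return 0.0
```

Taking the maximum of current speed and speed limit is one plausible way to satisfy both claims 8 and 9 with a single zoom value; the patent does not specify how the two factors combine.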
Patents Cited by This Patent (26)
Kagawa Kazunori,JPX ; Tanaka Hiroaki,JPX, Auto-drive control unit for vehicles.
Trepagnier, Paul Gerard; Nagel, Jorge Emilio; Dooner, Matthew Taylor; Dewenter, Michael Thomas; Traft, Neil Michael; Drakunov, Sergey; Kinney, Powell; Lee, Aaron, Control and systems for autonomously driven vehicles.
Bhogal, Kulvir S.; Boss, Gregory J.; Hamilton, II, Rick A.; Polozoff, Alexandre, Method, system, and computer program product for determining and reporting tailgating incidents.
Bhogal, Kulvir S.; Boss, Gregory J.; Hamilton, II, Rick A.; Polozoff, Alexandre, Method, system, and computer program product for determining and reporting tailgating incidents.
Bhogal, Kulvir S.; Boss, Gregory J.; Hamilton, II, Rick A.; Polozoff, Alexandre, Method, system, and computer program product for determining and reporting tailgating incidents.
Trepagnier, Paul Gerard; Nagel, Jorge Emilio; Kinney, Powell McVay; Dooner, Matthew Taylor; Wilson, Bruce Mackie; Schneider, Jr., Carl Reimers; Goeller, Keith Brian, Navigation and control system for autonomous vehicles.
Rieth, Peter; Böhm, Jürgen; Linkenbach, Steffen; Hoffmann, Oliver; Nell, Joachim; Schirling, Andreas; Netz, Achim; Stauder, Peter; Kuhn, Matthias, Steering handle for motor vehicles and method for recording a physical parameter on a steering handle.
Gudat Adam J. ; Bradbury Walter J. ; Christensen Dana A. ; Kemner Carl A. ; Koehrsen Craig L. ; Kyrtsos Christos T. ; Lay Norman K. ; Peterson Joel L. ; Schmidt Larry E. ; Stafford Darrell E. ; Weinb, System and a method for enabling a vehicle to track a preset path.
Wilbrink, Tijs I.; Kelley, Edward E.; Walsh, William D., System and method for performing interventions in cars using communicated automotive information.
Norris, William Robert; Allard, James; Filippov, Mikhail O.; Haun, Robert Dale; Turner, Christopher David Glenn; Gilbertson, Seth; Norby, Andrew Julian, Systems and methods for switching between autonomous and manual operation of a vehicle.
Jenkins, Gary Kim (Arlington, TX); Evans, Bruno Jack (Euless, TX); Williams, Jr., David Collis (Burleson, TX); Bornowski, Arthur Steven (Garland, TX), Visual recognition system for LADAR sensors.
Burnette, Donald Jason; Chatham, Andrew Hughes; McNaughton, Matthew Paul, Automatic collection of quality control statistics for maps used in autonomous driving.
Mariet, Robertus Christianus Elisabeth; Clement, Manuel Christian; Nemec, Philip; Cullinane, Brian Douglas; Szybalski, Andrew Timothy; Dolgov, Dmitri A., Display screen or portion thereof with graphical user interface.
Mariet, Robertus Christianus Elisabeth; Clement, Manuel Christian; Nemec, Philip; Cullinane, Brian Douglas; Szybalski, Andrew Timothy; Dolgov, Dmitri A., Display screen or portion thereof with graphical user interface.
Mariet, Robertus Christianus Elisabeth; Clement, Manuel Christian; Nemec, Philip; Cullinane, Brian Douglas; Szybalski, Andrew Timothy; Dolgov, Dmitri A., Display screen or portion thereof with graphical user interface.
Mariet, Robertus Christianus Elisabeth; Clement, Manuel Christian; Nemec, Philip; Cullinane, Brian Douglas; Szybalski, Andrew Timothy; Dolgov, Dmitri A., Display screen or portion thereof with graphical user interface.
Mariet, Robertus Christianus Elisabeth; Clement, Manuel Christian; Nemec, Philip; Cullinane, Brian Douglas; Szybalski, Andrew Timothy; Dolgov, Dmitri A., Display screen or portion thereof with graphical user interface.
Mariet, Robertus Christianus Elisabeth; Clement, Manuel Christian; Nemec, Philip; Cullinane, Brian Douglas; Szybalski, Andrew Timothy; Dolgov, Dmitri A., Display screen or portion thereof with graphical user interface.
Mariet, Robertus Christianus Elisabeth; Clement, Manuel Christian; Nemec, Philip; Cullinane, Brian Douglas; Szybalski, Andrew Timothy; Dolgov, Dmitri A., Display screen or portion thereof with graphical user interface.
Mariet, Robertus Christianus Elisabeth; Clement, Manuel Christian; Nemec, Philip; Cullinane, Brian Douglas; Szybalski, Andrew Timothy; Dolgov, Dmitri A., Display screen or portion thereof with graphical user interface.
Mariet, Robertus Christianus Elisabeth; Clement, Manuel Christian; Nemec, Philip; Cullinane, Brian, User interface for displaying object-based indications in an autonomous driving system.
Mariet, Robertus Christianus Elisabeth; Clement, Manuel Christian; Nemec, Philip; Cullinane, Brian Douglas, User interface for displaying object-based indications in an autonomous driving system.
Mariet, Robertus Christianus Elisabeth; Clement, Manuel Christian; Nemec, Philip; Cullinane, Brian Douglas, User interface for displaying object-based indications in an autonomous driving system.