A method and apparatus are provided for optimizing one or more object detection parameters used by an autonomous vehicle to detect objects in images. The autonomous vehicle may capture the images using one or more sensors. The autonomous vehicle may then determine object labels and their corresponding object label parameters for the detected objects. The captured images and the object label parameters may be communicated to an object identification server. The object identification server may request that one or more reviewers identify objects in the captured images. The object identification server may then compare the identification of objects by reviewers with the identification of objects by the autonomous vehicle. Depending on the results of the comparison, the object identification server may recommend or perform the optimization of one or more of the object detection parameters.
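As a rough illustration of the data flow described above, the sketch below models the vehicle-to-server exchange in Python. All names (ObjectLabel, LabeledImage, review_and_compare) are hypothetical and are not taken from the patent; the per-label comparison step is kept abstract here and is expanded in the sketch that follows the claims.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ObjectLabel:
    """A label applied to one object in one captured image.

    The label parameters depend on the sensor type that captured the
    image (e.g. a 2-D pixel box for a camera, a 3-D box for a laser).
    """
    sensor_type: str              # e.g. "camera" or "laser"
    category: str                 # e.g. "vehicle", "pedestrian"
    parameters: Dict[str, float]  # bounding-box coordinates, size, etc.

@dataclass
class LabeledImage:
    frame_number: int             # identifies the image in the sequence
    sensor_type: str
    labels: List[ObjectLabel] = field(default_factory=list)

def review_and_compare(vehicle_images: List[LabeledImage],
                       reviewer_images: List[LabeledImage]) -> List[int]:
    """Server-side step: compare the vehicle's labels against the
    reviewers' labels frame by frame and return the frames that disagree."""
    mismatched_frames = []
    for v_img, r_img in zip(vehicle_images, reviewer_images):
        # A coarse check: the frame disagrees if the label counts differ.
        # A finer, per-label overlap test is sketched after the claims below.
        if len(v_img.labels) != len(r_img.labels):
            mismatched_frames.append(v_img.frame_number)
    return mismatched_frames
```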
Representative Claims
1. An apparatus for optimizing object detection performed by an autonomous vehicle, the apparatus comprising:
a memory; and
a processor in communication with the memory, the processor operative to:
receive a first plurality of images captured by an autonomous vehicle, wherein a first image in the first plurality of images includes at least one applied object label applied by the autonomous vehicle;
display the first image from the first plurality of images, wherein the first image comprises at least one object;
receive an object label for the at least one object displayed in the first image to obtain a received object label in a first image in a second plurality of images, wherein the second plurality of images corresponds to the first plurality of images;
compare the at least one received object label with at least one object label applied by the autonomous vehicle to the object in the first image in the first plurality of images;
determine whether the at least one received object label corresponds to the at least one object label applied by the autonomous vehicle;
compare a number of object labels applied by the autonomous vehicle in the first image in the first plurality of images with a number of object labels received in the first image in the second plurality of images;
determine whether the autonomous vehicle missed a predetermined number of the object labels in the first image in the first plurality of images based on the comparison of the number of object labels applied by the autonomous vehicle in the first image in the first plurality of images and the number of object labels received in the first image in the second plurality of images; and
optimize a plurality of object detection parameters of at least one of a plurality of sensors when an indication of a correspondence between the received object label and the object label applied by the autonomous vehicle is below a predetermined threshold and the autonomous vehicle missed the predetermined number of the object labels in the first image of the first plurality of images;
wherein the object label applied by the autonomous vehicle is defined by at least one object label parameter that corresponds to a type of sensor that captured the first image.

2. The apparatus of claim 1, wherein the first plurality of images comprises a first plurality of images captured by a first sensor of a first sensor type and a second plurality of images captured by a second sensor of a second sensor type.

3. The apparatus of claim 2, wherein the first sensor comprises a camera and the second sensor comprises a laser.

4. The apparatus of claim 2, wherein the first plurality of images captured by the first sensor are images captured from a forward perspective of the autonomous vehicle.

5. The apparatus of claim 2, wherein the second plurality of images captured by the second sensor are images captured from a panoramic perspective of the autonomous vehicle.

6. The apparatus of claim 1, wherein: the at least one object label comprises a plurality of parameters that define the object label; and the plurality of parameters depend on an image sensor type used to capture the first image from the first plurality of images captured by the autonomous vehicle.

7. The apparatus of claim 1, wherein the processor is further operative to determine whether the received object label corresponds to the object label applied by the autonomous vehicle by determining whether the object label applied by the autonomous vehicle overlaps any portion of the received object label.

8. The apparatus of claim 1, wherein the processor is further operative to determine whether the received object label corresponds to the object label applied by the autonomous vehicle by determining an object identification ratio derived from the received object label and the object label applied by the autonomous vehicle.

9. The apparatus of claim 1, wherein the processor is operative to determine whether the received object label corresponds to the object label applied by the autonomous vehicle based on: a first area represented by the intersection of an area of the received object label with an area of the object label applied by the autonomous vehicle; and a second area represented by the union of the area of the received object label with the area of the object label applied by the autonomous vehicle.

10. The apparatus of claim 1, wherein the object label applied by the autonomous vehicle is based on the plurality of object detection parameters.

11. A method for optimizing object detection performed by an autonomous vehicle, the method comprising:
storing, in a memory, a first plurality of images captured by an autonomous vehicle, wherein a first image in the first plurality of images includes at least one applied object label applied by the autonomous vehicle; and
displaying, with a processor in communication with a memory, a first image from the first plurality of images, wherein the first image comprises at least one object;
receiving an object label for the at least one object displayed in the first image from the first plurality of images to obtain a received object label in a first image for a second plurality of images;
comparing the at least one received object label with at least one object label applied by the autonomous vehicle to the object in the first image in the first plurality of images;
determining whether the received object label corresponds to the object label applied by the autonomous vehicle;
comparing a number of object labels applied by the autonomous vehicle in the first image in the first plurality of images with a number of object labels received in the first image in the second plurality of images;
determining whether the autonomous vehicle missed a predetermined number of object labels in the first image in the first plurality of images based on the comparison of the number of object labels applied by the autonomous vehicle in the first image from the first plurality of images and the number of object labels received in the first image from the second plurality of images; and
optimizing a plurality of object detection parameters of at least one sensor of a plurality of sensors when an indication of a correspondence between the received object label and the object label applied by the autonomous vehicle is below a predetermined correspondence threshold and the autonomous vehicle missed the predetermined number of object labels in the first image of the first plurality of images;
wherein the object label applied by the autonomous vehicle is defined by at least one object label parameter that corresponds to a type of sensor that captured the first image.

12. The method of claim 11, wherein the first plurality of images comprises a first plurality of images captured by a first sensor of a first sensor type and a second plurality of images captured by a second sensor of a second sensor type.

13. The method of claim 12, wherein the first sensor comprises a camera and the second sensor comprises a laser.

14. The method of claim 12, wherein the first plurality of images captured by the first sensor are images captured from a forward perspective of the autonomous vehicle.

15. The method of claim 12, wherein the second plurality of images captured by the second sensor are images captured from a panoramic perspective of the autonomous vehicle.

16. The method of claim 11, wherein: the at least one object label comprises a plurality of parameters that define the object label; and the plurality of parameters depend on an image sensor type used to capture the first image from the first plurality of images captured by the autonomous vehicle.

17. The method of claim 11, wherein determining whether the received object label corresponds to the object label applied by the autonomous vehicle comprises determining whether the object label applied by the autonomous vehicle overlaps any portion of the received object label.

18. The method of claim 11, wherein determining whether the received object label corresponds to the object label applied by the autonomous vehicle comprises determining an object identification ratio derived from the received object label and the object label applied by the autonomous vehicle.

19. The method of claim 11, wherein determining whether the received object label corresponds to the object label applied by the autonomous vehicle is based on: a first area represented by the intersection of an area of the received object label with an area of the object label applied by the autonomous vehicle; and a second area represented by the union of the area of the received object label with the area of the object label applied by the autonomous vehicle.

20. The method of claim 11, wherein the object label applied by the autonomous vehicle is based on the plurality of object detection parameters.

21. The method of claim 11, further comprising: associating the object label applied by the autonomous vehicle with an image frame number that identifies the first image in the first plurality of images.
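Claims 7 through 9 and 17 through 19 describe the correspondence test in terms of overlap, an object identification ratio, and the intersection and union of the label areas, and claims 1 and 11 trigger parameter optimization when that correspondence falls below a threshold and the vehicle missed a predetermined number of labels. The snippet below is a minimal sketch of that decision logic for 2-D camera labels only; the box format, the helper names, the reading of the identification ratio as intersection-over-union, and the default thresholds are illustrative assumptions rather than values taken from the patent.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max) in pixels

def intersection_area(a: Box, b: Box) -> float:
    """First area of claims 9/19: overlap of the two label areas."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0.0, w) * max(0.0, h)

def union_area(a: Box, b: Box) -> float:
    """Second area of claims 9/19: combined area of the two labels."""
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return area_a + area_b - intersection_area(a, b)

def identification_ratio(vehicle_box: Box, received_box: Box) -> float:
    """One plausible reading of the 'object identification ratio' of
    claims 8/18: intersection over union of the two label areas."""
    union = union_area(vehicle_box, received_box)
    return intersection_area(vehicle_box, received_box) / union if union else 0.0

def should_optimize(vehicle_boxes: List[Box],
                    received_boxes: List[Box],
                    correspondence_threshold: float = 0.5,  # assumed value
                    missed_label_limit: int = 1) -> bool:   # assumed value
    """Claims 1/11: optimize the detection parameters when the correspondence
    is below the threshold AND the vehicle missed enough object labels."""
    # Best correspondence score for each reviewer-provided label.
    scores = [max((identification_ratio(v, r) for v in vehicle_boxes), default=0.0)
              for r in received_boxes]
    low_correspondence = any(s < correspondence_threshold for s in scores)
    # Compare label counts to estimate how many objects the vehicle missed.
    missed = max(0, len(received_boxes) - len(vehicle_boxes))
    return low_correspondence and missed >= missed_label_limit
```

For example, should_optimize([(0, 0, 10, 10)], [(0, 0, 10, 10), (50, 50, 60, 60)]) returns True under the defaults: the second reviewer label has no matching vehicle label, so its best score is 0 and one object was missed.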
Patents cited by this patent (60)
Oda, Tamami; Watanabe, Tomo; Sato, Tsuyoshi, AUTOMATIC VEHICLE GUIDANCE SYSTEM, CONTROL APPARATUS IN AUTOMATIC VEHICLE GUIDANCE SYSTEM, AUTOMATIC VEHICLE GUIDANCE METHOD, AND COMPUTER-READABLE DATA RECORDED MEDIUM IN WHICH AUTOMATIC VEHICLE GUI.
Huang, Jihua; Lin, William C.; Chin, Yuen-Kwok, Adaptive vehicle control system with driving style recognition based on maneuvers at highway on/off ramps.
Isogai, Akira; Takagi, Kiyokazu, Automatic deceleration control system, vehicle-to-obstacle distance control system, and system program storage medium for vehicle.
Latarnik, Michael; Kolbe, Alexander; Honus, Klaus, Circuit configuration and method for controlling a traction slip control system with brake and/or engine management.
Holloway,Lane Thomas; Kobrosly,Walid M.; Malik,Nadeem; Quiller,Marques Benjamin, Limiting and controlling motor vehicle operation functions enabled for each of a group of drivers of the vehicle.
Bhogal, Kulvir S.; Peterson, Robert R.; Seacat, Lisa A.; Talbot, Mark W., Method and system to alert user of local law via the Global Positioning System (GPS).
Toepfer, Bernhard; Reiner, Michael; Klein, Bodo, Method of determining a wear-dependent braking force distribution.
Trepagnier, Paul Gerard; Nagel, Jorge Emilio; Kinney, Powell McVay; Dooner, Matthew Taylor; Wilson, Bruce Mackie; Schneider, Jr., Carl Reimers; Goeller, Keith Brian, Navigation and control system for autonomous vehicles.
O Connor, Michael L.; Bell, Thomas; Eglington, Michael L.; Leckie, Lars; Gutt, Gregory M.; Zimmerman, Kurt R., Rapid adjustment of trajectories for land vehicles.
Rao, Prithvi N.; Shin, Dong Hun; Whittaker, William L.; Kleimenhagen, Karl W.; Singh, Sanjiv J.; Kemner, Carl A., System and method for enabling an autonomous vehicle to track a desired path.
Barrett,David S.; Allard,James; Filippov,Misha; Pack,Robert Todd; Svendsen,Selma, System and method for processing safety signals in an autonomous vehicle.
Norris, William Robert; Allard, James; Filippov, Mikhail O.; Haun, Robert Dale; Turner, Christopher David Glenn; Gilbertson, Seth; Norby, Andrew Julian, Systems and methods for switching between autonomous and manual operation of a vehicle.
Nagda, Paresh; Li, Wenbin; Howlett, Julia; Fan, Rodric C.; Yang, Xinnong; Fay, James D., Using location data to determine traffic and route information.
Harumoto,Satoshi; Yamato,Toshitaka; Takeuchi,Hiroshi; Maeno,Yoshihiko; Miyamoto,Naotoshi; Sakiyama,Kazuhiro, Vehicle control apparatus, vehicle control method, and computer program.
Sweet, III, Charles Wheeler; Swart, Hugo, System and method of dynamically controlling parameters for processing sensor output data for collision avoidance and path planning.