The prevention of vehicle accidents is targeted. A road texture model is created based on a vehicle camera image. An initial vehicle location estimate is determined, and map imagery is obtained based on this location estimate. A refined vehicle location is determined using visual egomotion. In particular, 3D features of the vehicle image and the retrieved map imagery are identified and aligned. A map image is selected based on this alignment, and the location associated with the map image is modified by a displacement between the selected map image and the vehicle image to produce a refined vehicle location. A road boundary model is created based on the road texture model and the refined vehicle location, and a road departure model is created based on the road boundary model and vehicle odometry information. The operator of the vehicle is warned of a road departure based on the road departure model.
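The location-refinement step described in the abstract can be sketched as follows. This is an illustrative assumption, not the patented implementation: the function name, the choice of (east, north) displacement in metres, and the degree conversion are all hypothetical.

```python
import math

def refine_location(initial_estimate, displacement):
    """Apply a (d_east, d_north) displacement in metres, as might be
    recovered from 3D-feature alignment against a selected map image,
    to an initial (latitude, longitude) estimate in degrees."""
    lat0, lon0 = initial_estimate
    d_east, d_north = displacement
    m_per_deg_lat = 111_320.0  # approximate metres per degree of latitude
    m_per_deg_lon = m_per_deg_lat * math.cos(math.radians(lat0))
    return (lat0 + d_north / m_per_deg_lat,
            lon0 + d_east / m_per_deg_lon)
```

In practice the displacement would come from the egomotion alignment between the vehicle image and the selected map image; here it is simply passed in.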
Representative Claims
1. A method of providing a road departure warning alert in a vehicle comprising: creating a road texture model based on a vehicle image received from a vehicle camera, the road texture model classifying each of a plurality of portions of the vehicle image as road or non-road; smoothing the road texture model by: identifying outlier portions of the vehicle image, wherein each outlier portion comprises an image portion classified as non-road and surrounded by a threshold number of portions of the vehicle image classified as road; and re-classifying the identified outlier portions as road; retrieving map imagery based on an initial vehicle location estimate; determining a refined vehicle location using visual egomotion based on the vehicle image and the map imagery by: aligning 3D features between the vehicle image and each map image in the retrieved map imagery; selecting a map image based on the aligned 3D features; determining a displacement between a location associated with the selected map image and a corresponding location associated with the vehicle image; and applying the determined displacement to the initial vehicle location estimate to determine the refined vehicle location; creating a road boundary model describing the edges of the road on which the vehicle is located based on the smoothed road texture model and the refined vehicle location; creating a road departure model based on the road boundary model and vehicle odometry information; and warning a vehicle operator based on the road departure model.

2. The method of claim 1, wherein the vehicle image comprises an image of the road on which the vehicle is located in the direction in which the vehicle is moving.

3. The method of claim 1, wherein creating a road texture model further comprises: processing the vehicle image using a filter bank to produce a texton for each pixel in the vehicle image; and classifying each texton in the vehicle image using a set of classifiers.

4. The method of claim 3, further comprising: training the set of classifiers using the filter bank and a set of training images with pre-classified pixels.

5. The method of claim 3, wherein the set of classifiers comprises random forest classifiers.

6. The method of claim 3, further comprising: updating the set of classifiers based on the classification of textons representing pixels with known classifications.

7. The method of claim 1, wherein the retrieved map imagery comprises one of: satellite map imagery, aerial map imagery, and street view map imagery.

8. The method of claim 1, wherein the initial vehicle location estimate comprises location coordinates received from a GPS receiver.

9. The method of claim 1, wherein the 3D features are identified using edge detectors to detect edges in images.

10. The method of claim 1, wherein aligning the 3D features between the vehicle image and each map image comprises: projecting the 3D features onto the ground plane; and comparing the ground plane location of the vehicle image's 3D features to the ground plane location of each map image's 3D features.

11. The method of claim 1, wherein the selected map image comprises the map image with the most 3D features in common with the vehicle image.

12. The method of claim 1, wherein creating a road boundary model comprises: identifying the edges of the road based on an image of the road; identifying the width of the road based on the identified road edges; reducing the identified width of the road by a safety margin; and fitting one or more curves to the road edges and reduced width of the road to create a road boundary model.

13. The method of claim 12, wherein the image of the road comprises one or more of the vehicle image, a satellite map image of the road, an aerial map image of the road, and a street view map image of the road.

14. The method of claim 12, further comprising: identifying one or both of the center line of the road and the shape of the road; and fitting one or more curves to the center line of the road or the shape of the road.

15. The method of claim 1, wherein creating a road departure model comprises identifying the probability of a road departure based on the road boundary model and the vehicle odometry information.

16. The method of claim 15, wherein warning a vehicle operator based on the road departure model comprises warning the vehicle operator when the identified probability of a road departure exceeds a pre-determined threshold.

17. A system of providing a road departure warning alert in a vehicle comprising: a non-transitory computer-readable storage medium storing computer executable instructions comprising: a road texture module configured to create a road texture model based on a vehicle image received from a vehicle camera, the road texture model classifying each of a plurality of portions of the vehicle image as road or non-road; a smoothing module configured to smooth the road texture model by: identifying outlier portions of the vehicle image, wherein each outlier portion comprises an image portion classified as non-road and surrounded by a threshold number of portions of the vehicle image classified as road; and re-classifying the identified outlier portions as road; a map module configured to retrieve map imagery based on an initial vehicle location estimate; a location module configured to determine a refined vehicle location using visual egomotion based on the vehicle image and the map imagery by: aligning 3D features between the vehicle image and each map image in the retrieved map imagery; selecting a map image based on the aligned 3D features; determining a displacement between a location associated with the selected map image and a corresponding location associated with the vehicle image; and applying the determined displacement to the initial vehicle location estimate to determine the refined vehicle location; a road boundary module configured to create a road boundary model describing the edges of the road on which the vehicle is located based on the smoothed road texture model and the refined vehicle location; a road departure module configured to create a road departure model based on the road boundary model and vehicle odometry information; and a warning module configured to warn a vehicle operator based on the road departure model; and a processor configured to execute the computer executable instructions.

18. A method of refining a vehicle location estimate comprising: creating a road texture model based on a vehicle image received from a vehicle camera, the road texture model classifying each of a plurality of portions of the vehicle image as road or non-road; smoothing the road texture model by: identifying outlier portions of the vehicle image, wherein each outlier portion comprises an image portion classified as non-road and surrounded by a threshold number of portions of the vehicle image classified as road; and re-classifying the identified outlier portions as road; retrieving map imagery based on an initial vehicle location estimate; and determining a refined vehicle location using visual egomotion based on the vehicle image, the smoothed road texture model, and the map imagery at least in part by: aligning 3D features between the vehicle image and each map image in the retrieved map imagery; selecting a map image based on the aligned 3D features; determining a displacement between a location associated with the selected map image and a corresponding location associated with the vehicle image; and applying the determined displacement to the initial vehicle location estimate to determine the refined vehicle location.

19. The method of claim 18, wherein the vehicle comprises a GPS receiver, and wherein the initial vehicle location estimate comprises a GPS location estimate received from the GPS receiver.
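The smoothing step recited in claims 1, 17, and 18 can be sketched as follows. This is a minimal illustration assuming the road texture model is a 2D grid of "road"/"non-road" labels; the 8-neighbour window, the threshold value, and the function name are illustrative assumptions, not taken from the patent.

```python
def smooth_texture_model(grid, threshold=6):
    """Re-classify as road any non-road cell surrounded by at least
    `threshold` road-labelled 8-neighbours (outlier removal)."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]  # copy so classification is simultaneous
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != "non-road":
                continue
            road_neighbours = sum(
                1
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr or dc)
                and 0 <= r + dr < rows and 0 <= c + dc < cols
                and grid[r + dr][c + dc] == "road"
            )
            if road_neighbours >= threshold:
                out[r][c] = "road"  # isolated outlier inside a road region
    return out
```

A non-road cell fully enclosed by road is re-classified, while non-road cells at a genuine road boundary (with few road neighbours) are left unchanged.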
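The map-image selection of claims 10 and 11 can likewise be sketched: project 3D features onto the ground plane, then pick the map image sharing the most features with the vehicle image. The matching tolerance, data layout, and function name are hypothetical; the patent does not specify them.

```python
def select_map_image(vehicle_feats, map_images, tol=1.0):
    """vehicle_feats: list of (x, y) ground-plane feature locations from
    the vehicle image. map_images: dict mapping image name to its list of
    (x, y) ground-plane features. Returns the name of the map image with
    the most features within `tol` of a vehicle-image feature."""
    def shared_features(feats):
        return sum(
            1 for vx, vy in vehicle_feats
            if any(abs(vx - mx) <= tol and abs(vy - my) <= tol
                   for mx, my in feats)
        )
    return max(map_images, key=lambda name: shared_features(map_images[name]))
```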
Patents cited by this patent (14)
Breed, David S.; Johnson, Wendell C.; DuVall, Wilbur E., Accident avoidance system.
Panahpour Tehrani, Mehrdad; Ishikawa, Akio; Sakazawa, Shigeyuki, Apparatus, method and computer program for classifying pixels in a motion picture as foreground or background.
Kimura, Keiichi (JP), Navigation apparatus, method for map matching performed in the navigation apparatus, and computer-readable medium storing a program for executing the method.
Saneyoshi, Keiji; Hanawa, Keiji (Tokyo, JP), System for monitoring condition outside vehicle using imaged picture by a plurality of television cameras.