System and method for creating, storing and utilizing images of a geographic location
IPC classification information
Country / Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.): G06K-009/60; G08G-001/123; H04N-007/00; G01C-021/00
Application number: UP-0482314 (2009-06-10)
Registration number: US-7805025 (2010-10-21)
Inventors / Address: DiBernardo, Enrico; Goncalves, Luis F.
Applicant / Address: Vederi, LLC
Agent / Address: Christie, Parker & Hale, LLP
Citation information: cited by 17 patents; cites 28 patents
Abstract
A system and method for synthesizing images of a locale to generate a composite image that provides a panoramic view of the locale. A video camera moves along a street, recording images of objects along the street. A GPS receiver and an inertial navigation system provide the position of the camera as the images are being recorded. The images are indexed with the position data provided by the GPS receiver and inertial navigation system. The composite image is created on a column-by-column basis by determining which of the acquired images contains the desired pixel column, extracting the pixels associated with the column, and stacking the columns side by side. The composite images are stored in an image database and associated with a street name and the number range of the street depicted in the image. The image database covers a substantial portion of a geographic area, allowing a user to visually navigate the area from a user terminal.
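The abstract's column-by-column composite construction can be sketched as follows. This is a minimal illustration, not the patent's actual implementation: the function name, the use of the center pixel column of each frame, and the nearest-position frame selection are all assumptions made for clarity.

```python
import numpy as np

def build_composite(frames, frame_positions, column_positions):
    """Assemble a composite strip column by column.

    frames: list of H x W x 3 image arrays recorded along the trajectory.
    frame_positions: position along the street of each frame's camera.
    column_positions: desired real-world position of each output column.
    """
    height, width = frames[0].shape[:2]
    columns = []
    for pos in column_positions:
        # Determine which acquired frame best covers the desired column:
        # here, the frame whose recording position is closest to it.
        idx = int(np.argmin([abs(p - pos) for p in frame_positions]))
        # Extract the pixel column (center column, as a simplification).
        columns.append(frames[idx][:, width // 2, :])
    # Stack the extracted columns side by side into the composite image.
    return np.stack(columns, axis=1)
```

A composite built this way depicts a wider field of view than any single input frame, which is the property recited in claims 8, 20, 36, and 70.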
Representative claims
What is claimed is: 1. In a system including an image source and a user terminal having a screen and an input device, a method for enabling visual navigation of a geographic area from the user terminal, the method comprising: receiving a first user input specifying a first location in the geographic area; retrieving from the image source a first image associated with the first location, the image source providing a plurality of images depicting views of objects in the geographic area, the views being substantially elevations of the objects in the geographic area, wherein the images are associated with image frames acquired by an image recording device moving along a trajectory; displaying on the screen a direction identifier for indicating the viewing direction depicted in the first image; receiving a second user input specifying a navigation direction relative to the first location in the geographic area; determining a second location based on the user specified navigation direction; and retrieving from the image source a second image associated with the second location. 2. The method of claim 1 further comprising: displaying the first image on the screen of the user terminal; and updating the first image with the second image. 3. The method of claim 1, wherein the first image depicts a view of the objects in the geographic area from a first viewing direction, and the direction identifier identifies the first viewing direction. 4. The method of claim 3 further comprising: receiving a user command to change the first viewing direction to a second viewing direction; retrieving from the image source a third image depicting a view of the objects in the geographic area from the second viewing direction; and updating the direction identifier for identifying the second viewing direction. 5. The method of claim 1, wherein the direction identifier is an arrow. 6. 
The method of claim 1, wherein the first and second images are each a composite image, wherein each composite image is created based on a first one of the image frames acquired at a first point in the trajectory and a second one of the image frames acquired at a second point in the trajectory. 7. The method of claim 1, wherein the first and second images are each a composite image, wherein each composite image is created by processing pixel data of a plurality of the image frames. 8. The method of claim 1, wherein the first and second images each depict a wider field of view than is depicted in any one of the image frames. 9. The method of claim 1 further comprising: acquiring position information associated with the image recording device as the image recording device moves along the trajectory; and synchronizing the image frames acquired by the image recording device with the position information. 10. The method of claim 9, wherein the first and second images are associated to respectively the first and second locations, based on the synchronized position information. 11. The method of claim 1 further comprising: segmenting the trajectory on which the image recording device moves into a plurality of segments; correlating the plurality of segments to a plurality of street segments in a geographic information database; identifying one of the plurality of street segments based on the first user input specifying the first location; and retrieving the first image based on the identified one of the plurality of street segments. 12. The method of claim 11, wherein the correlating the plurality of segments includes correlating position data of the plurality of segments to position data of the plurality of street segments. 13. 
In a system including an image source and a user terminal having a screen and an input device, a method for enabling visual navigation of a geographic area from the user terminal, the method comprising: providing by the image source a plurality of images depicting views of objects in the geographic area, the views being substantially elevations of the objects in the geographic area, wherein the images are associated with image frames acquired by an image recording device moving along a trajectory; receiving by the user terminal a first user input specifying a first location in the geographic area; retrieving by the user terminal a first image associated with the first location, the first image being one of the plurality of images provided by the image source; displaying the first image on the screen of the user terminal; displaying on the screen a direction identifier for indicating the viewing direction depicted in the first image; receiving by the user terminal a second user input specifying a navigation direction relative to the first location in the geographic area; determining a second location based on the user specified navigation direction; retrieving by the user terminal a second image associated with the second location, the second image being one of the plurality of images provided by the image source; and updating the first image with the second image. 14. The method of claim 13, wherein the first image depicts a view of the objects in the geographic area from a first viewing direction, and the direction identifier identifies the first viewing direction. 15. 
The method of claim 14 further comprising: receiving by the user terminal a user command to change the first viewing direction to a second viewing direction; retrieving by the user terminal a third image depicting a view of the objects in the geographic area from the second viewing direction, the third image being one of the plurality of images provided by the image source; and updating by the user terminal the direction identifier for identifying the second viewing direction. 16. The method of claim 13, wherein the direction identifier is an arrow. 17. The method of claim 13, wherein the first and second images each provide a panoramic view of the objects at respectively the first and second locations. 18. The method of claim 13, wherein the first and second images are each a composite image, wherein each composite image is created based on a first one of the image frames acquired at a first point in the trajectory and a second one of the image frames acquired at a second point in the trajectory. 19. The method of claim 13, wherein the first and second images are each a composite image, wherein each composite image is created by processing pixel data of a plurality of the image frames. 20. The method of claim 13, wherein the first and second images each depict a wider field of view than is depicted in any one of the image frames. 21. 
A method for enabling visual navigation of a geographic area via a computer system coupled to an image source, the computer system including one or more computer devices, at least one of the computer devices having a display screen, the method comprising: providing by the image source a plurality of images depicting views of objects in the geographic area, the views being substantially elevations of the objects in the geographic area, wherein the images are associated with image frames acquired by an image recording device moving along a trajectory; receiving by the computer system a first user input specifying a first location in the geographic area; retrieving by the computer system a first image associated with the first location, the first image being one of the plurality of images provided by the image source; providing by the computer system the retrieved first image for displaying on a first display area of the display screen; invoking by the computer system a display of a direction identifier for indicating the viewing direction depicted in the first image; receiving by the computer system a second user input specifying a navigation direction relative to the first location in the geographic area; determining by the computer system a second location based on the user specified navigation direction; retrieving by the computer system a second image associated with the second location, the second image being one of the plurality of images provided by the image source; and providing by the computer system the retrieved second image for updating the first image with the second image. 22. The method of claim 21, wherein the first image depicts a view of the objects in the geographic area from a first viewing direction, and the direction identifier identifies the first viewing direction. 23. 
The method of claim 22 further comprising: receiving by the computer system a user command to change the first viewing direction to a second viewing direction; retrieving by the computer system a third image depicting a view of the objects in the geographic area from the second viewing direction, the third image being one of the plurality of images provided by the image source; and updating by the computer system the direction identifier for identifying the second viewing direction. 24. The method of claim 21, wherein the direction identifier is an arrow. 25. The method of claim 21, wherein the first location specified by the first user input is an address specifying information selected from the group consisting of street name, city, state, and zip code. 26. The method of claim 21 further comprising: invoking by the computer system a display of a navigation button indicating the navigation direction; and receiving by the computer system user selection of the navigation button. 27. The method of claim 21 further comprising: receiving by the computer system a user selection associated with a particular one of the objects depicted in the first image; and invoking by the computer system a display of information on the particular one of the objects in response to the user selection. 28. The method of claim 27, wherein the particular one of the objects is a retail establishment, the method further comprising: accessing a web page for the retail establishment; and invoking by the computer system a display of the web page on the display screen. 29. The method of claim 27 further comprising: invoking by the computer system a display of an icon in association with the particular one of the objects, wherein the user selection is actuation of the icon. 30. The method of claim 21 further comprising: invoking by the computer system a display of a map of at least a portion of the geographic area, wherein the direction identifier is displayed on the map. 31. 
The method of claim 30 further comprising: invoking by the computer system a display of a location identifier on the map for identifying the location depicted by the first or second image. 32. The method of claim 30 further comprising: receiving by the computer system a user selection of a location on the displayed map; retrieving by the computer system a third image associated with the selected location on the map, the third image being one of the plurality of images provided by the image source; and invoking by the computer system a display of the third image on the display screen. 33. The method of claim 21, wherein the first and second images each provide a panoramic view of the objects at respectively the first and second locations. 34. The method of claim 21, wherein the first and second images are each a composite image, wherein each composite image is created based on a first one of the image frames acquired at a first point in the trajectory and a second one of the image frames acquired at a second point in the trajectory. 35. The method of claim 21, wherein the first and second images are each a composite image, wherein each composite image is created by processing pixel data of a plurality of the image frames. 36. The method of claim 21, wherein the first and second images each depict a wider field of view than is depicted in any one of the image frames. 37. The method of claim 21 further comprising: acquiring position information associated with the image recording device as the image recording device moves along the trajectory; and synchronizing the image frames acquired by the image recording device with the position information. 38. The method of claim 37, wherein the first and second images are associated to respectively the first and second locations, based on the synchronized position information. 39. 
The method of claim 21 further comprising: segmenting the trajectory on which the image recording device moves into a plurality of segments; correlating the plurality of segments to a plurality of street segments in a geographic information database; identifying one of the plurality of street segments based on the first user input specifying the first location; and retrieving the first image based on the identified one of the plurality of street segments. 40. The method of claim 39, wherein the correlating the plurality of segments includes correlating position data of the plurality of segments to position data of the plurality of street segments. 41. The method of claim 21, wherein the one or more computer devices includes a server. 42. The method of claim 21, wherein the one or more computer devices includes a user terminal. 43. A user terminal coupled to an image source for visually navigating a geographic area, the user terminal including: a display screen; a processor coupled to the display screen; and a memory coupled to the processor and storing computer program instructions therein, the processor configured to execute the computer program instructions, the computer program instructions including: receiving a first user input specifying a first location in the geographic area; retrieving from the image source a first image associated with the first location, the image source providing a plurality of images depicting views of objects in the geographic area, the views being substantially elevations of the objects in the geographic area, wherein the images are associated with image frames acquired by an image recording device moving along a trajectory; displaying on the display screen a direction identifier for indicating the viewing direction depicted in the first image; receiving a second user input specifying a navigation direction relative to the first location in the geographic area; determining a second location based on the user specified navigation 
direction; retrieving from the image source a second image associated with the second location; and updating the first image with the second image. 44. The user terminal of claim 43, wherein the first image depicts a view of the objects in the geographic area from a first viewing direction, and the direction identifier identifies the first viewing direction. 45. The user terminal of claim 44, wherein the computer program instructions further include: receiving a user command to change the first viewing direction to a second viewing direction; retrieving from the image source a third image depicting a view of the objects in the geographic area from the second viewing direction; and updating the direction identifier for identifying the second viewing direction. 46. The user terminal of claim 44, wherein the direction identifier is an arrow. 47. The user terminal of claim 43, wherein the first location specified by the first user input is an address specifying information selected from the group consisting of street name, city, state, and zip code. 48. The user terminal of claim 43, wherein the computer program instructions further include: displaying a navigation button indicating the navigation direction; and receiving user selection of the navigation button. 49. The user terminal of claim 43, wherein the computer program instructions further include: receiving a user selection associated with a particular one of the objects depicted in the first image; and displaying information on the particular one of the objects in response to the user selection. 50. The user terminal of claim 49, wherein the particular one of the objects is a retail establishment, and the computer program instructions further include: accessing a web page for the retail establishment; and displaying the web page on the display screen. 51. 
The user terminal of claim 49, wherein the computer program instructions further include: displaying an icon in association with the particular one of the objects, wherein the user selection is actuation of the icon. 52. The user terminal of claim 43, wherein the computer program instructions further include: displaying a map of at least a portion of the geographic area, wherein the direction identifier is displayed on the map. 53. The user terminal of claim 52, wherein the computer program instructions further include: displaying on the map a location identifier identifying the location depicted by the first or second image. 54. The user terminal of claim 52, wherein the computer program instructions further include: receiving a user selection of a location on the displayed map; retrieving from the image source a third image associated with the selected location on the map; and displaying the third image on the display screen. 55. A system for enabling visual navigation of a geographic area, the system comprising: an image source providing a plurality of images depicting views of objects in the geographic area, the views being substantially elevations of the objects in the geographic area, wherein the images are associated with image frames acquired by an image recording device moving along a trajectory; and one or more computer devices coupled to the image source, at least one of the computer devices having a display screen, the one or more computer devices being configured to execute computer program instructions including: receiving a first user input specifying a first location in the geographic area; retrieving a first image associated with the first location, the first image being one of the plurality of images provided by the image source; providing the retrieved first image for displaying on a first display area of the display screen; invoking display of a direction identifier for indicating the viewing direction depicted in the first image; receiving a 
second user input specifying a navigation direction relative to the first location in the geographic area; determining a second location based on the user specified navigation direction; retrieving a second image associated with the second location, the second image being one of the plurality of images provided by the image source; and providing the retrieved second image for updating the first image with the second image. 56. The system of claim 55, wherein the first image depicts a view of the objects in the geographic area from a first viewing direction, and the direction identifier identifies the first viewing direction. 57. The system of claim 56, wherein the computer program instructions further include: receiving a user command to change the first viewing direction to a second viewing direction; retrieving a third image depicting a view of the objects in the geographic area from the second viewing direction, the third image being one of the plurality of images provided by the image source; and updating the direction identifier for identifying the second viewing direction. 58. The system of claim 55, wherein the direction identifier is an arrow. 59. The system of claim 55, wherein the first location specified by the first user input is an address specifying information selected from the group consisting of street name, city, state, and zip code. 60. The system of claim 55, wherein the computer program instructions further include: invoking display of a navigation button indicating the navigation direction; and receiving user selection of the navigation button. 61. The system of claim 55, wherein the computer program instructions further include: receiving a user selection associated with a particular one of the objects depicted in the first image; and invoking display of information on the particular one of the objects in response to the user selection. 62. 
The system of claim 61, wherein the particular one of the objects is a retail establishment, the computer program instructions further including: accessing a web page for the retail establishment; and invoking display of the web page on the display screen. 63. The system of claim 61, wherein the computer program instructions further include: invoking display of an icon in association with the particular one of the objects, wherein the user selection is actuation of the icon. 64. The system of claim 55, wherein the computer program instructions further include: invoking display of a map of at least a portion of the geographic area, wherein the direction identifier is displayed on the map. 65. The system of claim 64, wherein the computer program instructions further include: invoking display of a location identifier on the map for identifying the location depicted by the first or second image. 66. The system of claim 64, wherein the computer program instructions further include: receiving a user selection of a location on the displayed map; retrieving a third image associated with the selected location on the map, the third image being one of the plurality of images provided by the image source; and invoking display of the third image on the display screen. 67. The system of claim 55, wherein the first and second images each provide a panoramic view of the objects at respectively the first and second locations. 68. The system of claim 55, wherein the first and second images are each a composite image, wherein each composite image is created based on a first one of the image frames acquired at a first point in the trajectory and a second one of the image frames acquired at a second point in the trajectory. 69. The system of claim 55, wherein the first and second images are each a composite image, wherein each composite image is created by processing pixel data of a plurality of the image frames. 70. 
The system of claim 55, wherein the first and second images each depict a wider field of view than is depicted in any one of the image frames. 71. The system of claim 55, wherein the computer program instructions further include: acquiring position information associated with the image recording device as the image recording device moves along the trajectory; and synchronizing the image frames acquired by the image recording device with the position information. 72. The system of claim 71, wherein the first and second images are associated to respectively the first and second locations, based on the synchronized position information. 73. The system of claim 55, wherein the computer program instructions further include: segmenting the trajectory on which the image recording device moves into a plurality of segments; correlating the plurality of segments to a plurality of street segments in a geographic information database; identifying one of the plurality of street segments based on the first user input specifying the first location; and retrieving the first image based on the identified one of the plurality of street segments. 74. The system of claim 73, wherein the correlating the plurality of segments includes correlating position data of the plurality of segments to position data of the plurality of street segments.
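Claims 9, 37, and 71 recite acquiring position information as the recording device moves and synchronizing the image frames with it. One plausible reading is timestamp-based interpolation between GPS fixes, sketched below; the function name and the choice of linear interpolation are assumptions for illustration, not the patent's specified method.

```python
import bisect

def synchronize(frame_times, gps_times, gps_positions):
    """Tag each frame timestamp with an interpolated (x, y) position.

    gps_times must be sorted ascending; gps_positions are the (x, y)
    fixes recorded at those times.
    """
    tagged = []
    for t in frame_times:
        i = bisect.bisect_right(gps_times, t)
        if i == 0:
            # Frame precedes the first fix: clamp to the first position.
            tagged.append(gps_positions[0])
        elif i == len(gps_times):
            # Frame follows the last fix: clamp to the last position.
            tagged.append(gps_positions[-1])
        else:
            # Linearly interpolate between the bracketing fixes.
            t0, t1 = gps_times[i - 1], gps_times[i]
            w = (t - t0) / (t1 - t0)
            x0, y0 = gps_positions[i - 1]
            x1, y1 = gps_positions[i]
            tagged.append((x0 + w * (x1 - x0), y0 + w * (y1 - y0)))
    return tagged
```

Once frames carry positions, the first and second images can be associated with the first and second locations as recited in claims 10, 38, and 72.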
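Claims 11, 39, and 73 recite correlating trajectory segments to street segments in a geographic information database, and the abstract states that composite images are indexed by street name and number range. A hypothetical sketch of both steps follows; the data layouts, nearest-midpoint matching, and function names are illustrative assumptions only.

```python
def nearest_street_segment(segment_midpoint, street_segments):
    """Correlate one trajectory segment to the closest street segment.

    street_segments: list of (name, (lo, hi) number range, (x, y) midpoint).
    Matching by squared distance between midpoints is a simplification.
    """
    return min(
        street_segments,
        key=lambda s: (s[2][0] - segment_midpoint[0]) ** 2
        + (s[2][1] - segment_midpoint[1]) ** 2,
    )

def lookup_image(street_name, number, index):
    """Retrieve a composite image id by street name and street number.

    index maps (name, lo, hi) keys to image ids, mirroring the abstract's
    association of each composite image with a street-number range.
    """
    for (name, lo, hi), image_id in index.items():
        if name == street_name and lo <= number <= hi:
            return image_id
    return None
```

With such an index, a first user input such as an address (claims 25, 47, 59) identifies one street segment, and the first image is retrieved from that segment.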
Patents cited by this patent (28)
DeLorme, David M.; Gray, Keith A., Computer aided routing and positioning system.
Gorr, Russell E.; Hancock, Thomas R.; Judd, J. Stephen; Lin, Long-Ji; Novak, Carol L.; Rickard, Scott T., Jr., Method and apparatus for automatically tracking the location of vehicles.
Lachinski, Theodore M.; Ptacek, Louis S.; Blais, Paul M.; Boggs, Stephen; Longfellow, John W.; Setterholm, Jeffrey M., Method and apparatus for collecting and processing visual and spatial position information from a moving platform.
Cheng, Keh-shin Fu; Kumar, Keeranoor G.; Lipscomb, James Sargent; Menon, Jai Prakash; Willebeek-LeMair, Marc Hubert, Method and apparatus for displaying panoramas with streaming video.
Babcock, Jeffrey A.; Sadovnikov, Alexei, Method for creating the high voltage complementary BJT with lateral collector on bulk substrate with resurf effect.
Wysocki, David A.; Hooper, Paul S., Real time three dimensional geo-referenced digital orthophotograph-based positioning, navigation, collision avoidance an.
Israni, Vijaya S.; Ashby, Richard A.; Bouzide, Paul M.; Jasper, John C.; Fernekes, Robert P.; Nyczak, Gregory M.; Smith, Nicholas E.; Lampert, David S.; Meek, James A.; Crane, Aaron I., System and method for use and storage of geographic data on physical media.
Israni, Vijaya S.; Ashby, Richard A.; Nyczak, Gregory M.; Smith, Nicholas E., System and method for use and storage of geographic data on physical media.
Honey, Stanley K.; Zavoli, Walter B.; Milnes, Kenneth A.; Phillips, Alan C.; White, Marvin S., Jr.; Loughmiller, George E., Jr., Vehicle navigational system and method.
Uhlmann, Eugenie V.; O'Farrell, Desmond J.; Schofield, Kenneth; Lynam, Niall R., Vehicle-based navigation system with smart map filtering, portable unit home-base registration and multiple navigation system preferential use.
Flynn, John; Buddemeier, Ulrich; Stewenius, Henrik C.; Neven, Hartmut; Brucher, Fernando; Adam, Hartwig, Matching an approximately located query image against a reference image set.
Flynn, John; Buddemeier, Ulrich; Stewenius, Henrik; Neven, Hartmut; Brucher, Fernando; Adam, Hartwig, Matching an approximately located query image against a reference image set.
Anguelov, Dragomir; Flynn, John; McClendon, Brian, Matching an approximately located query image against a reference image set using cellular base station and wireless access point information.
Arfvidsson, Joakim; Thrun, Sebastian, System and process for projecting location-referenced panoramic images into a 3-D environment model and rendering panoramic images from arbitrary viewpoints within the 3-D environment model.