Aspects of the disclosure relate to generating navigation paths between images. A first image taken from a first location and a second image taken from a second location may be selected. A position of the first location in relation to the second location may be determined. First and second frames for the first and second images may be selected based on the position. First and second sets of visual features for each of the first and second image frames may be identified. Matching visual features between the first set of visual features and the second set of visual features may be determined. A confidence level for a line-of-sight between the first and second images may be determined by evaluating one or more positions of the matching visual features. Based on at least the confidence level, a navigation path from the first image to the second image is generated.
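The abstract describes deriving a line-of-sight confidence level from matched visual features. As a minimal illustrative sketch of one such signal (the fraction of identified features that find a match, one of the factors listed in the claims below), with all names and data being hypothetical assumptions rather than the patented method:

```python
# Illustrative sketch (not the patented method): use the fraction of
# identified visual features that found a match as a crude line-of-sight
# confidence between two image frames. All names are assumptions.

def line_of_sight_confidence(features_a, features_b, matches):
    """Fraction of identified features (in the smaller set) that matched."""
    if not features_a or not features_b:
        return 0.0
    return len(matches) / min(len(features_a), len(features_b))

# Hypothetical features detected inside the two frames and their match pairs.
features_a = ["corner1", "edge1", "blob1", "corner2"]
features_b = ["corner1", "edge1", "blob2"]
matches = [(0, 0), (1, 1)]  # index pairs into features_a / features_b
print(round(line_of_sight_confidence(features_a, features_b, matches), 3))  # → 0.667
```

In practice such a fraction would be only one input; the claims also weight matches by reprojection error, angular distance, and visual similarity.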
Representative Claims
1. A method comprising: selecting, by one or more processors, a first image taken from a first location and a second image taken from a second location; determining, by the one or more processors, a position of the first location in relation to the second location; selecting, by the one or more processors, a first frame on the first image and a second frame on the second image based on the position; identifying, by the one or more processors, a first set of visual features of the first image in the first frame and a second set of visual features of the second image in the second frame; determining, by the one or more processors, a number of matching visual features between the first set of visual features and the second set of visual features; determining, by the one or more processors, a confidence level for a line-of-sight between the first image and the second image by evaluating one or more positions of the matching visual features; and generating, by the one or more processors, based on at least the confidence level, a navigation path from the first image to the second image.

2. The method of claim 1, wherein the first frame is in a direction from the first location and the second frame is in the direction from the second location, and the first and second frames are centered around a straight-line path between the first location and the second location.

3. The method of claim 1, wherein determining the position further comprises determining pose information of the first image and the second image, the pose information including orientation information of the first image and the second image with respect to cardinal directions.

4.
The method of claim 1, further comprising, for a given pair of matching first and second visual features, evaluating positions of the first and second matching visual features by: casting a first ray from the first location to the first matching visual feature in the first panoramic image; casting a second ray from the second location to the second matching visual feature in the second panoramic image; and determining whether the first ray and the second ray come closest to each other in an area between the first panoramic image and the second panoramic image, wherein the confidence level is determined further based on the determination of whether the first ray and the second ray come closest to each other in the area between the first panoramic image and the second panoramic image.

5. The method of claim 1, wherein determining the confidence level further comprises assigning a weight to each pair of matching visual features, the weight corresponding to at least one of: (1) reprojection error of the given matching visual features; (2) angular distances of each of the given matching visual features from a straight-line path between the first location and the second location; and (3) visual similarities between the given matching visual features.

6. The method of claim 5, wherein the confidence level is determined according to at least one of: (1) a percentage of identified visual features that are matching visual features; (2) the weight assigned to each pair of matching visual features; (3) a distance between the first image and the second image; and (4) a residual error of the estimated locations of the matching visual features.

7.
The method of claim 5, wherein the estimated locations of the matching visual features are determined by: casting a first ray from the first location to a first feature in the first set of visual features; casting a second ray from the second location to a second feature in the second set of visual features, the first feature and the second feature being a pair of matching visual features; and when the first ray and the second ray are within a predetermined distance of one another, setting a point closest to where the first ray and second ray come closest to one another as the estimated location of the first feature and the second feature.

8. The method of claim 7, further comprising, when the first ray and the second ray are not within the predetermined distance or diverge, removing the pair of matching visual features from the number of matching visual features.

9. The method of claim 1, wherein the navigation path is further generated according to one or more constraints.

10. The method of claim 9, wherein the one or more constraints comprise at least one of minimum spanning tree constraints, Delaunay Triangulation constraints, setting a number of edges per vertex, setting a maximum distance for edges, permitting only one layer of redundancy, and minimizing a distance between a pair of vertices.

11.
A method comprising: selecting, by one or more processors, a plurality of images; for every pair of images in the plurality of images, determining, by the one or more processors, a confidence level for connectivity between each pair of images by: determining, by the one or more processors, a position of a first image of the pair taken at a first location in relation to a second image of the pair taken at a second location; projecting, by the one or more processors, a frame from the first location along a straight-line path between the first location and the second location and onto the first image and the second image; identifying, by the one or more processors, a first set of visual features of the first image within the projection of the frame on the first image; identifying, by the one or more processors, a second set of visual features of the second image within the projection of the frame on the second image; and determining, by the one or more processors, matching visual features between the first set of visual features and the second set of visual features; determining a confidence level for a line-of-sight between the first image and the second image based on at least the matching visual features; and generating, by the one or more processors, navigation paths between one or more pairs of images according to the confidence level for each pair of images.

12. The method of claim 11, further comprising generating, by the one or more processors, a connection graph, wherein each image is a vertex in the connection graph and each navigation path is an edge in the connection graph.

13. The method of claim 12, wherein generating the connection graph further comprises removing at least one edge by applying one or more constraints.

14.
The method of claim 13, wherein the one or more constraints comprise at least one of minimum spanning tree constraints, Delaunay Triangulation constraints, setting a number of edges per vertex, setting a maximum distance for edges, permitting only one layer of redundancy, and minimizing a distance between a pair of vertices.

15. A system comprising: memory storing a first image taken from a first location and a second image taken from a second location; and one or more computing devices having one or more processors configured to: determine a position of the first location in relation to the second location; select a first frame on the first image and a second frame on the second image based on the position; identify a first set of visual features of the first image in the first frame and a second set of visual features of the second image in the second frame; determine a number of matching visual features between the first set of visual features and the second set of visual features; determine a confidence level for a line-of-sight between the first image and the second image by evaluating one or more positions of the matching visual features; and generate, based on at least the confidence level, a navigation path from the first image to the second image.

16. The system of claim 15, wherein the first frame is in a direction from the first location and the second frame is in the direction from the second location, and the first and second frames are centered around a straight-line path between the first location and the second location.

17. The system of claim 15, wherein the one or more processors are further configured to determine the position by further determining pose information of the first image and the second image, the pose information including orientation information of the first image and the second image with respect to cardinal directions.

18.
The system of claim 15, wherein the one or more processors are further configured to, for a given pair of matching first and second visual features, evaluate positions of the first and second matching visual features by: casting a first ray from the first location to the first matching visual feature in the first panoramic image; casting a second ray from the second location to the second matching visual feature in the second panoramic image; determining whether the first ray and the second ray come closest to each other in an area between the first panoramic image and the second panoramic image; and determining the confidence level further based on the determination of whether the first ray and the second ray come closest to each other in the area between the first panoramic image and the second panoramic image.

19. The system of claim 15, wherein the one or more processors are further configured to determine the confidence level by further assigning a weight to each pair of matching visual features, the weight corresponding to at least one of: (1) reprojection error of the given matching visual features; (2) angular distances of each of the given matching visual features from a straight-line path between the first location and the second location; and (3) visual similarities between the given matching visual features.

20.
The system of claim 19, wherein the one or more processors are further configured to determine the estimated locations of the matching visual features by: casting a first ray from the first location to a first feature in the first set of visual features; casting a second ray from the second location to a second feature in the second set of visual features, the first feature and the second feature being a pair of matching visual features; and when the first ray and the second ray are within a predetermined distance of one another, setting a point closest to where the first ray and second ray come closest to one another as the estimated location of the first feature and the second feature.
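Claims 7, 8, 18, and 20 describe estimating a matched feature's location from the point where two rays, cast from the two capture locations, come closest to one another, and discarding pairs whose rays diverge or miss by more than a predetermined distance. A minimal pure-Python sketch of that geometry, with the threshold value and all function names being illustrative assumptions:

```python
# Sketch of ray-based feature-location estimation: cast a ray from each
# capture location toward a matched feature, solve for the closest approach
# of the two rays, and take the midpoint as the estimated location.
# Thresholds and names are assumptions, not taken from the patent.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def closest_approach(p1, d1, p2, d2):
    """Return (point1, point2, gap) for rays p1 + t*d1 and p2 + s*d2, t, s >= 0,
    or None if the rays are parallel or diverge (closest approach behind a camera)."""
    w0 = tuple(a - b for a, b in zip(p1, p2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b
    if denom == 0.0:  # parallel rays never converge
        return None
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    if t < 0 or s < 0:  # rays diverge
        return None
    q1 = tuple(p + t * di for p, di in zip(p1, d1))
    q2 = tuple(p + s * di for p, di in zip(p2, d2))
    gap = sum((x - y) ** 2 for x, y in zip(q1, q2)) ** 0.5
    return q1, q2, gap

def estimate_feature_location(p1, d1, p2, d2, max_gap=0.5):
    """Midpoint of closest approach if the rays pass within max_gap, else None
    (cf. claim 8: remove pairs whose rays diverge or miss by too much)."""
    result = closest_approach(p1, d1, p2, d2)
    if result is None:
        return None
    q1, q2, gap = result
    if gap > max_gap:
        return None
    return tuple((x + y) / 2 for x, y in zip(q1, q2))

# Two cameras 10 units apart, both sighting a feature near (5, 5, 0).
loc = estimate_feature_location((0, 0, 0), (1, 1, 0), (10, 0, 0), (-1, 1, 0))
print(loc)  # → (5.0, 5.0, 0.0)
```

Here the two rays intersect exactly, so the gap is zero and the midpoint is the intersection; with noisy feature bearings the rays would pass near each other, and the gap feeds the residual-error term mentioned in claim 6.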
Patents cited in this patent (2)
Smyth Christopher C. (Fallston MD), Apparatus for measuring eye gaze and fixation duration, and method therefor.