Object path identification for navigating objects in scene-aware device environments
IPC Classification
Country / Type
United States (US) Patent, Granted
International Patent Classification (IPC, 7th edition)
G06K-009/00
G06T-019/00
G02B-027/01
G06T-007/20
G06T-007/60
Application number
US-0936480 (2015-11-09)
Registration number
US-9928648 (2018-03-27)
Inventors / Address
Ambrus, Anthony James
Kohler, Jeffrey
Applicant / Address
Microsoft Technology Licensing, LLC
Agent / Address
Shook, Hardy & Bacon, L.L.P.
Citation information
Times cited: 0
Cited patents: 4
Abstract
In various embodiments, computerized methods and systems for identifying object paths to navigate objects in scene-aware device environments are provided. An object path identification mechanism supports identifying object paths. In operation, a guide path for navigating an object from a start point to an end point in a scene-aware device environment is identified. A guide path can be predefined or recorded in real time. A visibility check, such as a look-ahead operation, is performed based on the guide path. Based on the visibility check, a path segment that advances the object from the start point toward the end point is determined. The path segment can optionally be modified or refined based on several factors. The object is caused to advance along the path segment. Iteratively performing visibility checks and traversal actions moves the object from the start point to the end point. The path segments together define the object path.
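The iterative procedure the abstract describes — look ahead along the guide path for the farthest un-occluded point, advance the object to it, and repeat until the end point is reached — can be sketched roughly as below. This is a minimal illustration, not the patented implementation: the guide-path representation (an ordered list of points ending at the end point) and the `is_visible` predicate are assumptions introduced for the example.

```python
def identify_object_path(guide_path, start, end, is_visible):
    """Walk an object from start to end by repeatedly looking ahead
    along the guide path for the farthest un-occluded point."""
    position = start
    object_path = [start]
    while position != end:
        # Visibility check: find the farthest guide-path point that is
        # un-occluded (directly reachable) from the current position.
        target = farthest_visible_point(guide_path, position, is_visible)
        if target == position:
            raise RuntimeError("guide path blocked: no visible point ahead")
        # The pair (position, target) defines one path segment.
        object_path.append(target)
        position = target  # advance the object along the segment
    return object_path


def farthest_visible_point(guide_path, position, is_visible):
    """Linear scan; later guide-path points overwrite earlier ones,
    so the last visible point in path order wins."""
    best = position
    for point in guide_path:
        if is_visible(position, point):
            best = point
    return best
```

For instance, with scalar positions and a visibility range of 2 units, an object navigating the guide path `[0, 1, 2, 3, 4, 5]` would hop `0 -> 2 -> 4 -> 5`, and those hops are the path segments that define the object path.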
Representative Claims
1. A computer-implemented method for identifying object paths to navigate objects in scene-aware device environments, the method comprising: identifying a guide path for navigating an object from a start point to an end point in a scene-aware device environment comprising a virtual representation of a real-world environment, wherein the guide path is a first path captured by a scene-aware device as a guide for determining an object path, wherein the object path is a second path for navigating the object through the scene-aware device environment; determining a path segment of the object path based on the guide path to advance the object from the start point towards the end point, wherein the path segment comprises a first point and a second point, and wherein the second point is identified on the guide path by iteratively executing a visibility check operation; and causing the object to advance along the path segment in the scene-aware device environment.

2. The method of claim 1, wherein identifying the guide path is captured by tracking in real-time a movement of the scene-aware device to define a real time guide path from the start point to the end point.

3. The method of claim 2, wherein the real time guide path comprises an anterior path portion, a central path portion, and posterior path portion, wherein the anterior path portion and the posterior path portion are portions of the real time guide path that are not explicitly traversed by the movement of the scene-aware device.

4. The method of claim 1, wherein the identifying of the guide path is based on receiving a user indicated guide path for the scene-aware environment from the start point to the end point.

5. The method of claim 1, wherein the identifying of the guide path is based on receiving the start point and the end point and selecting the guide path is based on comparing a set of coordinates of the guide path to a set of coordinates of the start point and a set of coordinates of the end point in a scene-aware device environment.

6. The method of claim 1, wherein the executing of the visibility check comprises: determining that a look-ahead point on the guide path is an un-occluded point from a current position of the object, wherein the look-ahead point is proximate to an occluded point on the guide path; and selecting the current position as the first point of the path segment and the look-ahead point as the second point of the path segment.

7. The method of claim 1, wherein the iteratively executing the visibility check operation comprising: determining a lower bound that represents an un-occluded point on the guide path from a current position of the object; determining an upper bound that represents a subsequent occluded point on the guide path from the current position of the object; and identifying a selected point between the lower bound and the upper bound, wherein when the selected point is un-occluded the selected point is designated as a new lower bound and when the selected point is occluded the selected point is designated as a new upper bound.

8. The method of claim 1, further comprising relaxing the path segment based on executing a relaxation operation comprising: selecting an adjustment pivot point on the guide path; determining a height of a previous point of the guide path; determining a height of the next point of the guide path; calculating an adjusted height based on an average of the height of the previous point and the height of the next point; determining that the adjustment pivot point at the adjusted height, the previous point, and the next point are each un-occluded; and designating the adjusted height as a new height for the next point.

9. The method of claim 1, further comprising motion adapting the path segment based on executing a motion adaptation operation comprising: determining a motion feature corresponding to movement associated with the object; and applying the motion feature, based on attributes of the motion feature, to the path segment such that the object path is adapted to incorporate the motion feature.

10. The method of claim 1, further comprising: detecting an occlusion for the path segment previously identified as un-occluded; and communicating an indication that the path segment is occluded to facilitate redefining the path segment as a new path segment that is un-occluded.

11. One or more computer storage media having computer-executable instructions embodied thereon that, when executed, by one or more processors, causes the one or more processors to execute operations for identifying object paths to navigate objects in a scene-aware device environment, the operations comprising: identifying a guide path for navigating an object from a start point to an end point in a scene-aware device environment comprising a virtual representation of a real-world environment, wherein the guide path is a first path selected based on a set of coordinates of the start point and a set of coordinates of the end point received by a scene-aware device; determining a path segment of a second path to advance the object from the start point towards the end point, wherein the path segment comprises a first point and a second point, and wherein the second point is identified on the guide path by executing the visibility check operation that determines that a look-ahead point on the guide path is an un-occluded point; causing the object to advance along the path segment in the scene-aware device environment; while causing the object to advance along the path segment towards the second point, determining a next second point by executing the visibility check operation based on a current location of the object and adjusting the path segment based on the next second point; and causing the object to advance along the adjusted path segment in the scene-aware device environment.

12. The media of claim 11, wherein the executing of the visibility check operation comprises casting based on attributes of an anticipated object utilizing the guide path.

13. The media of claim 11, wherein the executing of the visibility check comprises: determining that the look-ahead point on the guide path is un-occluded from a current position of the object, the visibility check operation comprising casting in the scene-aware environment based on attributes of the object traversing a path segment of the second path; and selecting the look-ahead point as the second point of the path segment.

14. The media of claim 11, further comprising iteratively executing both a smoothing operation to smoothen the path segment and a relaxation operation to relax a height of the path segment.

15. The media of claim 11, wherein causing the object to advance along the path segment further comprises: casting in the scene-aware environment based on attributes of the object corresponding to movement associated with the object; applying the motion feature, based on attributes of the motion feature, to the path segment such that the object path is adapted to incorporate the motion feature; and causing the object to traverse along an adapted path segment based on applying the motion feature.

16. A system for identifying object paths to navigate objects in scene-aware device environments, the system comprising: a processor and a memory configured for providing computer program instructions to the processor; a path-navigation component configured to: identify, with a scene aware device, a guide path for navigating an object from a start point to an end point in a scene-aware device environment comprising a virtual representation of a real-world environment, wherein the guide path is a first path; determine a path segment on a second path based on the guide path to advance the object from the start point towards the end point, wherein the path segment comprises a first point and a second point, wherein the path-navigation component is configured to identify the second point on the guide path by iteratively executing the visibility check operation; and cause the object to advance along the path segment in the scene-aware device environment.

17. The system of claim 16, further comprising a capture component of the scene aware device configured to capture a real world environment, and capture the guide path by tracking in real-time a movement of the capture component, wherein the tracking is configured to define a real time guide path from the start point to the end point, wherein the real time guide path comprises an anterior path portion, a central path portion, and posterior path portion, wherein the anterior path portion and the posterior path portion are portions of the real time guide path that are not explicitly traversed by the movement of the capture component.

18. The system of claim 16, wherein the path-navigation component is configured to identify the guide path based on a selection among a plurality of predefined guide paths for the scene-aware device environment, wherein the plurality of guide paths include one or more guide paths having been manually defined.

19. The system of claim 16, wherein the path-navigation component is configured to execute supplementary operations, wherein the supplementary operations comprise at least one of: a smoothing operation comprising: determining a lower bound that represents an un-occluded point on the guide path from a current position of the object; determining an upper bound that represents a subsequent occluded point on the guide path from the current position of the object; and identifying a selected point between the lower bound and the upper bound, wherein when the selected point is un-occluded the selected point is designated as a new lower bound and when the selected point is occluded the selected point is designated as a new upper bound; or a relaxation operation comprising: selecting an adjustment pivot point on the guide path; determining a height of a previous point of the guide path; determining a height of the next point of the guide path; calculating an adjusted height based on an average of the height of the previous point and the height of the next point; determining that the adjustment pivot at the adjusted height, the previous point, and the next point are each un-occluded; and designating the adjusted height as a new height for the adjustment pivot point.

20. The system of claim 16, wherein the path-navigation component is configured to: detect an occlusion for the path segment previously identified as un-occluded; and communicate an indication that the path segment is occluded to facilitate redefining the path segment as a new path segment that is un-occluded.
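The two supplementary operations of claim 19 — the bisection-style visibility check (also claim 7) and the height relaxation (also claim 8) — can be sketched roughly as below. This is a speculative illustration under simplifying assumptions the patent does not specify: visibility along the guide path is treated as monotonic so bisection applies, the point at the current lower-bound index is assumed un-occluded, points are `(x, y, height)` tuples, and `is_visible` / `is_occluded` are caller-supplied predicates.

```python
def look_ahead(guide_path, current, is_visible):
    """Bisect between a known un-occluded lower bound and a known
    occluded upper bound to find the farthest visible guide-path index."""
    lo, hi = 0, len(guide_path) - 1   # assumption: index 0 is un-occluded
    if is_visible(current, guide_path[hi]):
        return hi                     # whole remaining guide path is visible
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_visible(current, guide_path[mid]):
            lo = mid                  # un-occluded: designate new lower bound
        else:
            hi = mid                  # occluded: designate new upper bound
    return lo


def relax_heights(path, is_occluded):
    """Pull each interior point's height toward the average of its
    neighbours' heights, accepting the adjustment only when the pivot
    at the adjusted height and both neighbours are all un-occluded."""
    relaxed = list(path)
    for i in range(1, len(relaxed) - 1):
        prev_pt, pivot, next_pt = relaxed[i - 1], relaxed[i], relaxed[i + 1]
        adj_h = (prev_pt[2] + next_pt[2]) / 2.0   # average neighbour height
        candidate = (pivot[0], pivot[1], adj_h)
        if not any(is_occluded(p) for p in (prev_pt, candidate, next_pt)):
            relaxed[i] = candidate
    return relaxed
```

The bisection halves the search interval each iteration, so the look-ahead point is found in O(log n) visibility queries rather than the O(n) of a linear scan, which is presumably why the claims frame the check as iterative bound refinement.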
Patents cited by this patent (4)
Flaks, Jason; Bar-Zeev, Avi; Margolis, Jeffrey Neil; Miles, Chris; Kipman, Alex Aben-Athar; Fuller, Andrew John; Crocco, Jr., Bob, Fusing virtual content into real content.