Method for representing virtual information in a real environment
IPC classification
Country / Type
United States (US) Patent
Granted
International Patent Classification (IPC, 7th edition)
G06G-005/00
G06F-003/01
G06T-015/20
G06T-019/00
Application No.
US-0391589
(2010-08-13)
Registration No.
US-8896629
(2014-11-25)
Priority information
DE-10 2009 037 835 (2009-08-18)
International application No.
PCT/EP2010/061841
(2010-08-13)
§371/§102 date
2012-03-22
International publication No.
WO2011/020793
(2011-02-24)
Inventors / Address
Meier, Peter
Angermann, Frank
Applicant / Address
Metaio GmbH
Citation information
Cited by: 2
Cited patents: 26
Abstract
The invention relates to a method for ergonomically representing virtual information in a real environment, comprising the following steps: providing at least one view of a real environment and of a system setup for blending in virtual information for superimposing with the real environment in at least part of the view, the system setup comprising at least one display device, ascertaining a position and orientation of at least one part of the system setup relative to at least one component of the real environment, subdividing at least part of the view of the real environment into a plurality of regions comprising a first region and a second region, with objects of the real environment within the first region being placed closer to the system setup than objects of the real environment within the second region, and blending in at least one item of virtual information on the display device in at least part of the view of the real environment, considering the position and orientation of said at least one part of the system setup, wherein the virtual information is shown differently in the first region than in the second region with respect to the type of blending in in the view of the real environment.
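The core of the abstract — subdividing the view into a near (first) and a far (second) region and blending virtual information differently in each — can be illustrated with a minimal sketch. All names, the boundary distance, and the simple pinhole scaling rule below are illustrative assumptions, not the patented implementation.

```python
from dataclasses import dataclass

NEAR_FAR_BOUNDARY_M = 50.0   # assumed boundary between first and second region
FOCAL_LENGTH_PX = 800.0      # assumed camera focal length in pixels
LABEL_BASE_SIZE_M = 2.0      # assumed real-world size of a near-region label
UNIFORM_LABEL_PX = 24.0      # fixed on-screen size for far-region labels

@dataclass
class VirtualItem:
    name: str
    distance_m: float        # distance from the system setup to the item's anchor

def blend_size_px(item: VirtualItem) -> float:
    """On-screen label size: perspective-correct near, uniform far."""
    if item.distance_m < NEAR_FAR_BOUNDARY_M:
        # first (near) region: perspective-correct size ~ f * S / d
        return FOCAL_LENGTH_PX * LABEL_BASE_SIZE_M / item.distance_m
    # second (far) region: uniform size regardless of distance
    return UNIFORM_LABEL_PX

items = [VirtualItem("cafe", 20.0), VirtualItem("museum", 120.0)]
print({i.name: blend_size_px(i) for i in items})
```

The near item's label shrinks as its anchor recedes toward the boundary, while every far item is drawn at the same fixed size — matching the distinction claims 2 and 3 draw between the first and second region.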
Representative claims
1. A method for representing virtual information in a real environment, comprising the following steps: providing at least one view of a real environment and a system setup for blending in virtual information for superimposing with the real environment in at least part of the view, the system setup comprising at least one display device; determining a position and orientation of at least one part of the system setup relative to at least one component of the real environment; subdividing at least part of the view of the real environment into a plurality of regions comprising a first region and a second region, with objects of the real environment within the first region being placed closer to the system setup than objects of the real environment within the second region; and blending in at least one item of virtual information on the display device in at least part of the view of the real environment, considering the position and orientation of said at least one part of the system setup; wherein the virtual information is shown differently in the first region than in the second region with respect to the type of blending in the view of the real environment; and wherein the at least one item of virtual information is transferable from one of said regions to another one of said regions by the user selecting the virtual information and transferring the same by a transfer action.

2. The method of claim 1, wherein the at least one item of virtual information is blended in the view of the real environment in the first region with the correct perspective corresponding to the position and orientation of the display device relative to the real environment, in particular in various sizes in accordance with the perspective positioning of the virtual information in the view.

3. The method of claim 1, wherein the at least one virtual information is blended in the view of the real environment in the second region in a uniform size.

4. 
The method of claim 1, wherein the plurality of regions into which said at least part of the view of the real environment is subdivided, in addition to said first region and said second region, comprise a third region within which objects of the real environment are placed closer to the system setup than objects of the real environment within said first region, and wherein the at least one item of virtual information is shown differently in the third region than in the first region with respect to the type of blending in the view of the real environment.

5. The method of claim 4, wherein the at least one item of virtual information is blended in the view of the real environment in the location region in non-movable manner.

6. The method of claim 1, wherein several items of different virtual information are combined to form a virtual object and the virtual object is displayed in the view instead of the several items of virtual information.

7. The method of claim 1, wherein the first region is separated from the second region by a boundary, said boundary being calculated dynamically.

8. The method of claim 7, wherein the boundary is altered when virtual information is transferred by the user from one of said regions to another one of said regions.

9. The method of claim 7, wherein the boundary is calculated in accordance with the number of items of virtual information within a particular sector of the view.

10. The method of claim 7, wherein the boundary is calculated in accordance with the two-dimensional density of several items of virtual information within a particular sector of the view.

11. The method of claim 7, wherein the boundary is calculated in accordance with several items of virtual information which together constitute a cluster.

12. The method of claim 1, wherein the display device used is a video display device in which the view of the real environment is augmented or replaced by an edge image for enhanced contrast.

13. 
The method of claim 1, wherein, by determining the position and orientation of said at least one part of the system setup relative to said at least one component of the real environment, depth information with respect to at least one real object contained in the view is calculated or loaded, said depth information being used for calculating a boundary between first region and second region.

14. The method of claim 1, wherein said at least one item of virtual information is adapted to be displayed in at least three stages, a first stage comprising a body merely as a local hint to the virtual information, a second stage comprising a hint to the virtual information in the form of a label with inscription, and a third stage comprising an extract-like preview of the virtual information.

15. The method of claim 14, wherein in a first part of the second region, virtual information is displayed in the first stage only, in a second part of the second region having real objects placed closer to the display device than in the first part of the second region and in a first part of the first region, virtual information is displayed in the second stage, and in a second part of the first region having real objects placed closer to the display device than in the first part of the first region, the virtual information is displayed in the third stage.

16. The method of claim 1, wherein dynamic weather information with respect to the real environment is considered in blending in the at least one item of virtual information in the view of the real environment.

17. The method of claim 1, wherein a boundary of the third region increases with increasing measurement uncertainty of position detection.

18. 
The method of claim 1, wherein a boundary determining which virtual information is to be displayed at all, is dependent upon the current speed or the average speed of the user or the distance that may be covered by the user within a specific period of time, using public transport, a vehicle or bicycle or the like.

19. A method for representing virtual information in a real environment, comprising the following steps: providing at least one view of a real environment and a system setup for blending in virtual information for superimposing with the real environment in at least part of the view, the system setup comprising at least one display device; determining a position and orientation of at least one part of the system setup relative to at least one component of the real environment; subdividing at least part of the view of the real environment into a plurality of regions comprising a first region and a second region, with objects of the real environment within the first region being placed closer to the system setup than objects of the real environment within the second region; and blending in at least one item of virtual information on the display device in at least part of the view of the real environment, considering the position and orientation of said at least one part of the system setup; wherein the virtual information is shown differently in the first region than in the second region with respect to the type of blending in the view of the real environment; and wherein the display device displays an edge indicating a coverage of the first region, with a boundary of the first region being adapted to be altered by user action, in particular by dragging the boundary.

20. 
The method of claim 19, wherein the at least one item of virtual information is blended in the view of the real environment in the first region with the correct perspective corresponding to the position and orientation of the display device relative to the real environment, in particular in various sizes in accordance with the perspective positioning of the virtual information in the view.

21. The method of claim 19, wherein the at least one virtual information is blended in the view of the real environment in the second region in a uniform size.

22. The method of claim 19, wherein the plurality of regions into which said at least part of the view of the real environment is subdivided, in addition to said first region and said second region, comprise a third region within which objects of the real environment are placed closer to the system setup (20, 30) than objects of the real environment within said first region, and wherein the at least one item of virtual information is shown differently in the third region than in the first region with respect to the type of blending in the view of the real environment.

23. The method of claim 22, wherein the at least one item of virtual information is blended in the view of the real environment in the location region in non-movable manner.

24. 
A method for representing virtual information in a real environment, comprising the following steps: providing at least one view of a real environment and a system setup for blending in virtual information for superimposing with the real environment in at least part of the view, the system setup comprising at least one display device; determining a position and orientation of at least one part of the system setup relative to at least one component of the real environment; subdividing at least part of the view of the real environment into a plurality of regions comprising a first region and a second region, with objects of the real environment within the first region being placed closer to the system setup than objects of the real environment within the second region; and blending in at least one item of virtual information on the display device in at least part of the view of the real environment, considering the position and orientation of said at least one part of the system setup; wherein the virtual information is shown differently in the first region than in the second region with respect to the type of blending in the view of the real environment; and wherein a boundary between first region and second region is calculated in accordance with a size of the blended in virtual information, a resolution of the display device and/or a resolution of a camera used for generating the view.

25. The method of claim 24, wherein the at least one item of virtual information is blended in the view of the real environment in the first region with the correct perspective corresponding to the position and orientation of the display device relative to the real environment, in particular in various sizes in accordance with the perspective positioning of the virtual information in the view.

26. The method of claim 24, wherein the at least one virtual information is blended in the view of the real environment in the second region in a uniform size.

27. 
The method of claim 24, wherein the plurality of regions into which said at least part of the view of the real environment is subdivided, in addition to said first region and said second region, comprise a third region within which objects of the real environment are placed closer to the system setup (20, 30) than objects of the real environment within said first region, and wherein the at least one item of virtual information is shown differently in the third region than in the first region with respect to the type of blending in the view of the real environment.

28. The method of claim 27, wherein the at least one item of virtual information is blended in the view of the real environment in the location region in non-movable manner.
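Claims 7 and 9 describe a dynamically calculated boundary between the first and second region, driven by the number of items within a sector of the view. A hedged sketch of one way such a rule could work — the linear shrink rule and all constants are illustrative assumptions, not taken from the patent:

```python
# Sketch: pull the near/far boundary of a view sector closer when the
# sector holds many items, so the perspective-correct near region does
# not become overcrowded. Constants are assumed for illustration.

DEFAULT_BOUNDARY_M = 100.0   # boundary distance for a sector with one item
MIN_BOUNDARY_M = 15.0        # boundary never shrinks below this distance
SHRINK_PER_ITEM_M = 10.0     # pull-in per item beyond the first

def sector_boundary(num_items_in_sector: int) -> float:
    """Distance of the near/far boundary for one sector of the view."""
    shrink = SHRINK_PER_ITEM_M * max(0, num_items_in_sector - 1)
    return max(MIN_BOUNDARY_M, DEFAULT_BOUNDARY_M - shrink)
```

A sector with a single item keeps the full default boundary; a crowded sector's boundary clamps at the minimum, which would push most of its items into the uniformly sized far region.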
Patents cited by this patent (26)
Buermann,Dale H.; Gonzalez Banos,Hector H.; Mandella,Michael J.; Carl,Stewart R., Apparatus and method for determining an inclination of an elongate object contacting a plane surface.
Zhang,Guanghua G.; Buermann,Dale H.; Mandella,Michael J.; Gonzalez Banos,Hector H.; Carl,Stewart R., Apparatus and method for determining orientation parameters of an elongate object.
Buermann,Dale H.; Mandella,Michael J.; Carl,Stewart R.; Zhang,Guanghua G.; Gonzalez Banos,Hector H., Method and apparatus for determining absolute position of a tip of an elongate object on a plane surface with invariant features.
Dicke, Ronald Anthony; Johnson, Eric, Methods and apparatus for retrieving and displaying map-related data for visually displayed maps of mobile communication devices.
Laumeyer, Robert A.; Retterath, Jamie E., Network-based navigation system having virtual drive-thru advertisements integrated with actual imagery from along a physical route.
Klassen, Gerhard Dietrich; Kalougina, Tatiana; Wisebourt, Shaul; Devenyi, Peter John; Boudreau, Jesse Joseph; Johnson, Eric, User interface methods and apparatus for controlling the visual display of maps having selectable map elements in mobile communication devices.