[US Patent]
Presenting a view within a three dimensional scene
IPC Classification
Country / Type
United States (US) Patent
Granted
International Patent Classification (IPC, 7th ed.)
G06T-015/20
G06T-015/00
G09G-003/00
Application No.
US-0268613
(2014-05-02)
Registration No.
US-9202306
(2015-12-01)
Inventors / Address
Vesely, Michael A.
Gray, Alan S.
Applicant / Address
zSpace, Inc.
Agent / Address
Meyertons Hood Kivlin Kowert & Goetzel, P.C.
Citation Information
Cited-by count: 3
Cited patents: 132
Abstract
Presenting a view based on a virtual viewpoint in a three dimensional (3D) scene. The 3D scene may be presented by at least one display, which includes displaying at least one stereoscopic image of the 3D scene by the display(s). The 3D scene may be presented according to a first viewpoint. A virtual viewpoint may be determined within the 3D scene that is different from the first viewpoint. The view of the 3D scene may be presented on the display(s) according to the virtual viewpoint and/or the first viewpoint. The presentation of the view of the 3D scene is performed concurrently with presenting the 3D scene.
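The abstract describes rendering one frame from two viewpoints at once: the tracked first viewpoint (a stereo pair) and a separately determined virtual viewpoint. The following is a minimal illustrative sketch, not the patent's implementation; all function names (`view_matrix`, `render_frame`) and the simplified yaw-only view transform are assumptions made for illustration.

```python
import math

def view_matrix(eye, yaw_deg):
    """Simplified view transform: yaw rotation about Y followed by
    translation to the eye position (row-major 4x4)."""
    c, s = math.cos(math.radians(yaw_deg)), math.sin(math.radians(yaw_deg))
    return [
        [c, 0.0, s, -(c * eye[0] + s * eye[2])],
        [0.0, 1.0, 0.0, -eye[1]],
        [-s, 0.0, c, -(-s * eye[0] + c * eye[2])],
        [0.0, 0.0, 0.0, 1.0],
    ]

def render_frame(head_pose, hand_pose, eye_sep=0.06):
    """One display frame: a stereo pair from the tracked head (first)
    viewpoint plus, concurrently, a view from the virtual viewpoint."""
    hx, hy, hz, hyaw = head_pose
    left = view_matrix((hx - eye_sep / 2, hy, hz), hyaw)
    right = view_matrix((hx + eye_sep / 2, hy, hz), hyaw)
    vx, vy, vz, vyaw = hand_pose
    inset = view_matrix((vx, vy, vz), vyaw)
    return {"stereo": (left, right), "virtual_view": inset}

frame = render_frame(head_pose=(0.0, 0.3, 0.6, 0.0),
                     hand_pose=(0.1, 0.2, 0.3, 45.0))
```

In a real system each matrix would feed a separate render pass of the same scene graph within the same frame, which is what makes the two presentations concurrent.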
Representative Claims
1. A non-transitory computer readable memory medium storing program instructions for presenting a view based on a virtual viewpoint in a three dimensional (3D) scene, wherein the program instructions are executable by a processor to: track a user position via a position input device, wherein the user position comprises position and orientation in physical space; determine a user viewpoint based on said tracking; determine a user perspective relative to at least one display surface, wherein the user perspective comprises a mapping between angle and orientation of the at least one display surface and a render plane to the user viewpoint, wherein the mapping is based on the tracking of the position of the user, and wherein to determine the user perspective, the program instructions are executable by a processor to: determine a first user eyepoint based on user position; and correlate user position to a position of the at least one display surface, wherein the correlation is relative to an angle and orientation of the at least one display surface; render and display the 3D scene within a virtual space by at least one display, wherein the at least one display comprises the at least one display surface, where said rendering and displaying is based on a projection in virtual space to the render plane, wherein the render plane has a correlation to the position and orientation of the at least one display, and wherein the correlation is based on the user perspective; determine a first virtual viewpoint, wherein the first virtual viewpoint is controlled by a first position, angle and orientation of at least a portion of a hand of the user in the physical space without use of hand-held tools and corresponds to a first position, angle and orientation in the virtual space; establish a first field of view and first view volume of the 3D scene, wherein the first field of view and first view volume are based on the first virtual viewpoint; store the first field of view and first view volume; determine a second virtual viewpoint, wherein the second virtual viewpoint is controlled by a second position, angle and orientation of the at least a portion of the hand of the user in the physical space without use of hand-held tools and corresponds to a second position, angle and orientation in the virtual space; establish a second field of view and second view volume of the 3D scene, wherein the second field of view and second view volume are based on the second virtual viewpoint; and store the second field of view and second view volume.
2. The non-transitory computer readable memory medium of claim 1, wherein the first position, angle, and orientation of the at least a portion of the hand of the user is in open space.
3. The non-transitory computer readable memory medium of claim 1, wherein the second position, angle, and orientation of the at least a portion of the hand of the user is in open space.
4. The non-transitory computer readable memory medium of claim 1, wherein the user position comprises at least one of: a user head position; a user eye position; and a user eye pair position.
5. The non-transitory computer readable memory medium of claim 1, wherein the second virtual viewpoint is different than the first virtual viewpoint.
6. The non-transitory computer readable memory medium of claim 1, wherein the user perspective further comprises an oblique viewpoint, wherein the render plane corresponds to the oblique viewpoint, wherein the render plane has a first oblique render plane angle, and wherein an additional render plane corresponds to the oblique viewpoint, wherein the additional render plane has a second oblique render plane angle.
7. The non-transitory computer readable memory medium of claim 6, wherein the 3D scene renders in stereo and the display surface is a stereo 3D display surface.
8. The non-transitory computer readable memory medium of claim 1, wherein the program instructions are further executable by a processor to: present, via the at least one display, the 3D scene according to one of the stored first field of view and first view volume and the stored second field of view and second view volume; wherein said presenting the 3D scene according to one of the stored first field of view and first view volume and the stored second field of view and second view volume is performed concurrently with said rendering and displaying.
9. The non-transitory computer readable memory medium of claim 1, wherein the 3D scene renders in stereo and the display surface is a stereo 3D display surface.
10. The non-transitory computer readable memory medium of claim 1, wherein the first virtual viewpoint is in open space.
11. The non-transitory computer readable memory medium of claim 1, wherein the first virtual viewpoint is in inner space.
12. A non-transitory computer readable memory medium storing program instructions for presenting a view based on a virtual viewpoint in a three dimensional (3D) scene, wherein the program instructions are executable by a processor to: determine a first user viewpoint by tracking a first user position via a position input device in the physical space; determine a first user perspective to a display surface of at least one display, wherein said determining comprises assessing a first user eyepoint based on the first user position; render and display the 3D scene on the at least one display based on a first projection in virtual space to a first render plane in physical space according to the first user perspective and the first position, angle and orientation of the display surface, wherein the first user perspective correlates to a corresponding first frustum and the first render plane of the 3D scene; determine a virtual viewpoint in a virtual space, wherein the virtual viewpoint corresponds to a first virtual position, virtual angle, and virtual orientation in the virtual space and is controlled by a first position, angle, and orientation of at least a portion of a hand of the user input device in physical space without use of hand-held tools; establish a first field of view and first view volume of the 3D scene, wherein the first field of view and first view volume are based on the virtual viewpoint; store the first field of view and first view volume; determine a second user viewpoint by tracking a second user position via the position input device in the physical space; determine a second user perspective to the display surface, wherein said determining comprises assessing a second user eyepoint based on the second user position; and determine that the first position, angle, and orientation of the at least a portion of the hand of the user in the physical space has not changed.
13. The non-transitory computer readable memory medium of claim 12, wherein the first position, angle, and orientation of the at least a portion of the hand of the user is in open space.
14. The non-transitory computer readable memory medium of claim 12, wherein the user position comprises at least one of: a user head position; a user eye position; and a user eye pair position.
15. The non-transitory computer readable memory medium of claim 12, wherein the first user perspective comprises a first perpendicular and first oblique viewpoint, wherein the first perpendicular viewpoint corresponds to the first render plane with a first render plane angle, and wherein the first oblique viewpoint corresponds to a second render plane with a second render plane angle; and wherein the second user perspective comprises a second perpendicular and second oblique viewpoint, wherein the second perpendicular viewpoint corresponds to a third render plane with a third render plane angle, and wherein the second oblique viewpoint corresponds to a fourth render plane with a fourth render plane angle.
16. The non-transitory computer readable memory medium of claim 15, wherein the 3D scene renders in stereo and the display surface is a stereo 3D display surface.
17. The non-transitory computer readable memory medium of claim 12, wherein the program instructions are further executable by a processor to: present, via the at least one display, the 3D scene according to the stored first field of view and first view volume; wherein said presenting the 3D scene according to the stored first field of view and first view volume is performed concurrently with said rendering and displaying.
18. The non-transitory computer readable memory medium of claim 12, wherein the 3D scene comprises a stereoscopic 3D scene.
19. The non-transitory computer readable memory medium of claim 12, wherein to track the first user position the program instructions are further executable by a processor to assess first X, Y, and Z coordinates, angle, and orientation of the first user position in physical space; wherein to track the second user position the program instructions are further executable by a processor to assess second X, Y, and Z coordinates, angle, and orientation of the second user position in physical space; and wherein to determine the virtual viewpoint in the virtual space, the program instructions are further executable by a processor to assess third X, Y, and Z coordinates, angle, and orientation of the at least a portion of the hand of the user in physical space.
20. The non-transitory computer readable memory medium of claim 19, wherein the virtual viewpoint in virtual space corresponds to the third X, Y, and Z coordinates, angle, and orientation of the at least a portion of the hand of the user in physical space.
21. The non-transitory computer readable memory medium of claim 12, wherein the first user perspective correlates to the display surface oriented at a display surface angle, wherein the first perspective comprises a first mapping between angle and orientation of the display surface and the first render plane to the first user viewpoint, wherein the first mapping is based on the tracking of the first user position.
22. The non-transitory computer readable memory medium of claim 21, wherein the second user perspective correlates to the display surface oriented at the display surface angle, wherein the second perspective additionally correlates to a corresponding second frustum and second render plane at a second render plane angle of the 3D scene, wherein the second perspective comprises a second mapping between angle and orientation of the display and the second render plane to the second user viewpoint, and wherein the second mapping is based on the tracking of the second user position.
23. The non-transitory computer readable memory medium of claim 12, wherein the program instructions are further executable by a processor to: retain the first field of view and first view volume in response to determining that the first position, angle, and orientation of the user input device has not changed.
24. The non-transitory computer readable memory medium of claim 23, wherein the retained first field of view and first view volume may be of the 3D scene.
25. The non-transitory computer readable memory medium of claim 23, wherein the retained first field of view and first view volume may be of a change of the 3D scene.
26. A method for presenting a view based on a virtual viewpoint in a three dimensional (3D) scene, comprising: tracking a user position via a position input device, wherein the user position comprises position and orientation in physical space; determining a user viewpoint based on said tracking; determining a user perspective relative to at least one display surface, wherein the user perspective comprises a mapping between angle and orientation of the at least one display surface and a render plane to the user viewpoint, wherein the mapping is based on the tracking of the position of the user, and wherein said determining the user perspective comprises: determining a first user eyepoint based on user position; and correlating user position to a position of the at least one display surface, wherein the correlation is relative to an angle and orientation of the at least one display surface; rendering and displaying the 3D scene within a virtual space by at least one display, wherein the at least one display comprises the at least one display surface, and where said rendering and displaying is based on a projection in virtual space to the render plane, wherein the render plane has a correlation to the position and orientation of the at least one display, and wherein the correlation is based on the user perspective; determining a first virtual viewpoint, wherein the first virtual viewpoint is controlled by a first position, angle and orientation of at least a portion of a hand of the user in the physical space without use of hand-held tools and corresponds to a first position, angle and orientation in the virtual space; establishing a first field of view and first view volume of the 3D scene, wherein the first field of view and first view volume are based on the first virtual viewpoint; storing the first field of view and first view volume; determining a second virtual viewpoint, wherein the second virtual viewpoint is controlled by a second position, angle and orientation of the at least a portion of the hand of the user in the physical space without use of hand-held tools and corresponds to a second position, angle and orientation in the virtual space; establishing a second field of view and second view volume of the 3D scene, wherein the second field of view and second view volume are based on the second virtual viewpoint; and storing the second field of view and second view volume.
27. A method for presenting a view based on a virtual viewpoint in a three dimensional (3D) scene, comprising: determining a first user viewpoint by tracking a first user position via a position input device in the physical space; determining a first user perspective to a display surface of at least one display, wherein said determining comprises assessing a first user eyepoint based on the first user position; rendering and displaying the 3D scene on the at least one display based on a first projection in virtual space to a first render plane in physical space according to the first user perspective and the first position, angle and orientation of the display surface, wherein the first user perspective correlates to a corresponding first frustum and the first render plane of the 3D scene; determining a virtual viewpoint in a virtual space, wherein the virtual viewpoint corresponds to a first virtual position, virtual angle, and virtual orientation in the virtual space and is controlled by a first position, angle, and orientation of at least a portion of a hand of the user input device in physical space without use of hand-held tools; establishing a first field of view and first view volume of the 3D scene, wherein the first field of view and first view volume are based on the virtual viewpoint; storing the first field of view and first view volume; determining a second user viewpoint by tracking a second user position via the position input device in the physical space; determining a second user perspective to the display surface, wherein said determining comprises assessing a second user eyepoint based on the second user position; and determining that the first position, angle, and orientation of the at least a portion of the hand of the user in the physical space has not changed.
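The claims repeatedly establish a field of view and view volume from a hand-controlled virtual viewpoint, store them, and (claims 12 and 23) retain the stored values when the hand pose has not changed. The following is a minimal illustrative sketch of that store-and-retain pattern, not the patent's implementation; the names (`Pose`, `view_volume`, `ViewpointStore`) and the frustum parameterization are assumptions made for illustration.

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Pose:
    """Tracked position, angle, and orientation of a hand in physical space."""
    x: float
    y: float
    z: float
    yaw: float
    pitch: float

def view_volume(pose, fov_deg=60.0, near=0.1, far=100.0):
    """Field of view plus near/far planes define the view volume (frustum)
    anchored at the hand-controlled virtual viewpoint."""
    half = math.radians(fov_deg) / 2
    return {
        "origin": (pose.x, pose.y, pose.z),
        "yaw": pose.yaw,
        "pitch": pose.pitch,
        "fov_deg": fov_deg,
        "near": near,
        "far": far,
        "half_width_at_far": far * math.tan(half),
    }

class ViewpointStore:
    """Establish a view volume when the hand pose changes; retain the
    stored one when the pose has not changed (cf. claims 12 and 23)."""
    def __init__(self):
        self._last_pose = None
        self._volume = None

    def update(self, pose, **frustum_args):
        if pose == self._last_pose:  # pose unchanged: retain stored volume
            return self._volume
        self._last_pose = pose
        self._volume = view_volume(pose, **frustum_args)
        return self._volume

store = ViewpointStore()
v1 = store.update(Pose(0.1, 0.2, 0.3, 30.0, 0.0))
v2 = store.update(Pose(0.1, 0.2, 0.3, 30.0, 0.0))  # unchanged: retained
v3 = store.update(Pose(0.2, 0.2, 0.3, 45.0, 0.0))  # changed: re-established
```

Comparing full poses (rather than a dirty flag) mirrors the claim language of determining that position, angle, and orientation "has not changed."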
Duluk, Jr., Jerome F. (Mountain View, CA); Kasle, David B. (Mountain View, CA), Bounding box and projections detection of hidden polygons in three-dimensional spatial databases.
Trent, Jr., Raymond A.; Shaw, Scott J.; Gillespie, David W.; Heiny, Christopher; Huie, Mark A., Closed-loop sensor on a solid-state object position detector.
Roberts, Andrew F. (Charlestown, MA); Sachs, Emanuel M. (Somerville, MA); Stoops, David R. (Cambridge, MA); Ulrich, Karl T. (Belmont, MA); Siler, Todd L. (Cambridge, MA); Gossard, David C. (Andover, MA); Celniker, Geor, Computer aided drawing in three dimensions.
Chan, Ellery Y.; Faulkner, Timothy B., Computer-resident mechanism for manipulating, navigating through and mensurating displayed image of three-dimensional geometric model.
Horvitz, Eric J.; Sonntag, Martin L.; Markley, Michael E., Display system and method for displaying windows of an operating system to provide a three-dimensional workspace for a c.
Faris, Sadeg M. (Pleasantville, NY), Electro-optical display system for visually displaying polarized spatially multiplexed images of 3-D objects for use in.
Aughey, John H.; Rohr, Michael V.; Swaine, Steven D.; Vorst, Carl J., Gaze tracking system, eye-tracking assembly and an associated method of calibration.
Van Hook, Timothy J.; Cheng, Howard H.; DeLaurier, Anthony P.; Gossett, Carroll P.; Moore, Robert J.; Shepard, Stephen J.; Anderson, Harold S.; Princen, John; Doughty, Jeffrey C.; Pooley, Nathan F.; , High performance low cost video game system with coprocessor providing high speed efficient 3D graphics and digital audio signal processing.
Redmann, William G. (Simi Valley, CA); Watson, Scott F. (Glendale, CA), Method and apparatus for providing animation in a three-dimensional computer generated virtual world using a succession.
Rosenberg, Louis B.; Schena, Bruce M.; Jackson, Bernard G., Method and apparatus for tracking the position and orientation of a stylus and for digitizing a 3-D object.
Berkley, Jeffrey J.; Kim, Seahak; Hong, Sungkwan, Method, apparatus, and article for force feedback based on tension control and tracking through cables.
Ranta, John F.; Aviles, Walter A.; Donoff, R. Bruce; Nelson, Linda P., Methods and apparatus for simulating dental procedures and for training dental students.
Mackinlay, Jock (Palo Alto, CA); Robertson, George G. (Palo Alto, CA); Card, Stuart K. (Los Altos Hills, CA), Moving viewpoint with respect to a target in a three-dimensional workspace.
Sato, Seiji; Sekizawa, Hidehiko, Multiple-screen simultaneous displaying apparatus, multi-screen simultaneous displaying method, video signal generating device, and recorded medium.
Berlin, Jr., Edwin P. (Berkeley, CA); Gardner, Geoffrey Y. (Centerport, NY); Gelman, Robert M. (Great Neck, NY); Gershowitz, Michael N. (Plainview, NY), Non-edge computer image generation system.
Rosenberg, Louis B.; Schena, Bruce M.; Jackson, Bernard G., Probe apparatus and method for tracking the position and orientation of a stylus and controlling a cursor.
Pretzer, John D.; Sexton, Robert D.; Still, Reford R.; Ellenberger, John C., System and method for tracking objects and obscuring fields of view under video surveillance.
Snyder, John Michael; Whitted, John Turner; Blank, William Thomas; Olynyk, Kirk, Systems and methods for providing image rendering using variable rate source sampling.
Shih, Loren; Aviles, Walter A.; Massie, Thomas H.; Shannon, III, Walter C., Systems and methods for sculpting virtual objects in a haptic virtual reality environment.
Petitto, Tony (150 W. 51st St., Suite 2007, New York, NY 10019); Loth, Stanislaw (44 Normandy Village - 14, Nanuet, NY 10954), Technique for depth of field viewing of images with improved clarity and contrast.
Miyamoto, Shigeru (JP); Nishida, Yasunari (JP); Kawagoe, Takumi (JP); Koizumi, Yoshiaki (JP), Three-dimensional image processing apparatus with enhanced automatic and user point of view control.
De La Riviere, Jean-Baptiste; Chartier, Christophe; Hachet, Martin; Bossavit, Benoit; Casiez, Gery, System for colocating a touch screen and a virtual object, and device for manipulating virtual objects implementing such a system.