An application running on a computing platform that employs three-dimensional (3D) modeling is extended using a virtual viewport into which 3D holograms are rendered by a mixed-reality head mounted display (HMD) device. The HMD device user can position the viewport to be rendered next to a real world 2D monitor and use it as a natural extension of the 3D modeling application. For example, the user can interact with modeled objects in mixed-reality and move objects between the monitor and the viewport. The 3D modeling application and HMD device are configured to exchange scene data for modeled objects (such as geometry, lighting, rotation, scale) and user interface parameters (such as mouse and keyboard inputs). The HMD device implements head tracking to determine where the user is looking so that user inputs are appropriately directed to the monitor or viewport.
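The scene/user-interface exchange described in the abstract can be illustrated with a minimal sketch. This is an illustrative assumption, not the patented protocol: the `build_extensibility_payload` function, its field names, and the JSON serialization are all hypothetical choices made here for clarity.

```python
import json

# Hedged sketch of the "extensibility data" idea above: scene data for
# modeled objects (geometry, lighting, rotation, scale) bundled with user
# interface data (mouse and keyboard inputs). All field names are
# illustrative assumptions, not the patent's wire format.
def build_extensibility_payload(model, ui_events):
    """Serialize scene and UI state for transmission between the 3D
    modeling application and the HMD device."""
    payload = {
        "scene": {
            "geometry": model["geometry"],   # e.g. a mesh identifier
            "lighting": model["lighting"],
            "rotation": model["rotation"],   # e.g. Euler angles in degrees
            "scale": model["scale"],
        },
        "ui": {
            "mouse": ui_events.get("mouse", []),        # mouse messages
            "keyboard": ui_events.get("keyboard", []),  # keyboard messages
        },
    }
    return json.dumps(payload)

# Both renderers consume the same payload: the 2D desktop and the 3D
# viewport each read it and render the model in their own mode.
model = {"geometry": "cube", "lighting": "ambient",
         "rotation": [0.0, 45.0, 0.0], "scale": 1.0}
events = {"mouse": [{"type": "move", "x": 10, "y": 20}]}
data = json.loads(build_extensibility_payload(model, events))
print(data["scene"]["scale"])  # -> 1.0
```

The single shared payload reflects the claims' point that the desktop and the viewport each utilize the same exchanged extensibility data for their respective 2D and 3D rendering modes.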
Representative Claims
1. A head mounted display (HMD) device operable by a user in a physical environment, comprising: one or more processors; a sensor package configured to dynamically provide sensor data used to determine a pose of the user's head in the physical environment; a network connection configured to support an exchange of extensibility data between the HMD device and a remote computing device, in which the remote computing device executes an application that renders a three-dimensional (3D) model on a two-dimensional (2D) desktop; a see-through display configured for rendering a mixed-reality environment to the user, a view position of the user for the rendered mixed-reality environment being variable depending at least in part on the determined user head pose; and one or more memory devices storing computer-readable instructions which, when executed by the one or more processors, cause the HMD device to: determine user head pose in the physical environment using the sensor data, implement a 3D virtual viewport on the see-through display, exchange extensibility data between the HMD device and the remote computing device, the extensibility data including scene data for the 3D model and user interface data for the 3D model, and wherein the extensibility data is configured to support multiple different rendering modes including 2D rendering on the desktop and 3D rendering on the viewport, in which the desktop and viewport each utilize the exchanged extensibility data to facilitate rendering of the 3D model in the different respective 2D rendering and 3D rendering supported by the desktop and the viewport, respectively, selectively utilize one or more of the scene data or interface data to render the 3D model as a hologram in the viewport based on the view position, enable user interaction with the hologram in the viewport, and operate the see-through display based on the view position to enable user interaction with the 3D model on the desktop.

2. The HMD device of claim 1 further including providing a control to the user to control viewport characteristics including at least one of viewport location in the mixed-reality environment, viewport size, or viewport shape.

3. The HMD device of claim 1 further including clipping the 3D model to constrain the 3D model to an extent of the viewport.

4. The HMD device of claim 1 further including a rendering engine configured to render the hologram in the viewport, and in which the executed instructions further cause the HMD device to utilize the exchanged extensibility data in the rendering engine.

5. The HMD device of claim 1 in which the executed instructions further cause the HMD device to consume user inputs to the remote computing device based on the view position.

6. The HMD device of claim 1 in which the executed instructions further cause the HMD device to update the extensibility data based on user interactions with the hologram in the viewport, in which the updated extensibility data is utilized by the remote computing device to render the 3D model on the desktop.

7. The HMD device of claim 1 further including obtaining sensor data from the sensor package, the sensor data associated with a physical environment adjoining a user of the HMD device; using the sensor data, reconstructing a geometry of the physical environment including any real world object located therein; and using the reconstructed geometry to determine a location of a monitor that is coupled to the remote computing device within the physical environment, wherein the monitor is configured to render the 2D desktop.

8. The HMD device of claim 7 further including tracking the user's head in the physical environment using the reconstructed geometry to determine the view position, rendering a mouse cursor in the viewport when user input causes the mouse cursor to move off the 2D desktop supported by the monitor coupled to the remote computing device, and consuming keyboard inputs when a ray projected from the view position intersects with the viewport.

9. The HMD device of claim 8 further including discontinuing the rendering of the mouse cursor in the viewport when the projected ray indicates that the mouse cursor has transitioned to the desktop supported by the monitor.

10. The HMD device of claim 1 further including enabling the 3D model to be transferred between the 2D desktop and the viewport using a mouse or keyboard that is coupled to the remote computing device.

11. The HMD device of claim 1 in which the sensor data includes depth data and further including generating the sensor data using a depth sensor, and wherein the executed instructions further cause the HMD device to apply surface reconstruction techniques to determine the user head pose.

12. A method performed by a head mounted display (HMD) device supporting a mixed-reality environment including virtual objects and real objects, the method comprising: implementing a three-dimensional (3D) virtual viewport on a display of the HMD device; receiving extensibility data from a remote computing device over a network connection, the extensibility data including scene data describing a 3D model supported by an application executing on the computing device, the computing device being associated with a monitor that supports a desktop, and further including user interface (UI) data describing user inputs to the computing device, and wherein the extensibility data is configured to support multiple different rendering modes including two-dimensional (2D) rendering on the desktop and 3D rendering on the viewport, in which the desktop and viewport each utilize the exchanged extensibility data to facilitate rendering of the 3D model in the different respective 2D rendering and 3D rendering supported by the desktop and the viewport, respectively; and dynamically rendering the 3D model in the viewport using the received extensibility data.

13. The method of claim 12 further including rendering a mouse cursor in the 3D virtual viewport based on mouse messages included in the UI data.

14. The method of claim 12 further including controlling rendering of the 3D model in the viewport using keyboard messages included in the UI data.

15. The method of claim 14 further including utilizing sensor data to determine a view position of a user of the HMD device and transitioning a cursor back to the desktop supported by a monitor coupled to the computing device when a ray projected from the view position intersects the monitor.

16. The method of claim 15 further including modeling a physical environment in which the HMD device is located using a surface reconstruction data pipeline that implements a volumetric method creating multiple overlapping surfaces that are integrated, and using the modeled physical environment at least in part to determine the view position or to determine a location of the monitor that is coupled to the computing device within the physical environment.

17. The method of claim 12 in which the scene data includes at least one of 3D model data, environmental data, or camera parameters.

18. A computing device, comprising: one or more processors; an interface to a monitor, the monitor displaying a two-dimensional (2D) desktop; a mouse interface for connecting to a mouse and receiving signals from the mouse indicating mouse movement and inputs to mouse controls from a user of the computing device; a keyboard interface for connecting to a keyboard and receiving signals from the keyboard indicating keyboard inputs from the user; a network interface for communicating with a remote head mounted display (HMD) device over a network connection, the HMD device being configured to support a three-dimensional (3D) virtual viewport; and one or more memory devices storing computer-readable instructions which, when executed by the one or more processors, implement a three-dimensional (3D) modeling application and a user interface (UI) server configured for: tracking mouse messages that describe the mouse movements and inputs, tracking keyboard messages that describe the keyboard inputs, when a mouse movement indicates that a cursor associated with the mouse is moving beyond an edge of the monitor, taking control of the mouse messages and preventing propagation of the mouse messages to systems operating on the computing device, sending the mouse messages to the HMD device over the network connection, sending the keyboard messages to the HMD device over the network connection, and sending extensibility data to the HMD device over the network, the extensibility data including scene data describing a 3D model supported by an application executing on the computing device, and wherein the extensibility data is configured to support multiple different rendering modes including 2D rendering on the desktop and 3D rendering on the viewport, in which the desktop and viewport each utilize the exchanged extensibility data to facilitate rendering of the 3D model in the different respective 2D rendering and 3D rendering supported by the desktop and the viewport, respectively.

19. The computing device of claim 18 further including tracking the mouse messages and keyboard messages by interacting with an operating system executing on the computing device.

20. The computing device of claim 18 further including receiving a message from the HMD device that the mouse cursor has transitioned to the desktop and calculating an initial cursor position on the desktop using a last reported position of the mouse cursor in the viewport.
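The input-routing idea running through claims 8, 9, and 15 — a ray projected from the view position deciding whether inputs are consumed by the virtual viewport or by the monitor's desktop — can be sketched as follows. This is a minimal illustration under stated assumptions: the slab-method ray/box test, the axis-aligned bounding boxes, and the function names are choices made here, not the patented method.

```python
# Hedged sketch: route user input to whichever surface a gaze ray hits.
# Surfaces are modeled as axis-aligned bounding boxes (an assumption made
# for simplicity; the patent does not specify an intersection technique).

def ray_hits_box(origin, direction, box_min, box_max):
    """Slab-method ray/AABB intersection; returns True on a hit."""
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-9:
            if o < lo or o > hi:  # ray parallel to and outside this slab
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_far >= max(t_near, 0.0)

def route_input(view_pos, gaze_dir, viewport_box, monitor_box):
    """Direct input to the viewport or the desktop based on the gaze ray."""
    if ray_hits_box(view_pos, gaze_dir, *viewport_box):
        return "viewport"   # e.g. consume keyboard inputs here (claim 8)
    if ray_hits_box(view_pos, gaze_dir, *monitor_box):
        return "desktop"    # e.g. transition the cursor back (claim 15)
    return "none"

# The viewport floats to the user's left; the monitor sits straight ahead.
viewport = ((-2.0, 0.0, 1.0), (-1.0, 1.0, 2.0))   # (min, max) corners
monitor = ((-0.5, 0.0, 2.0), (0.5, 0.5, 2.1))
print(route_input((0, 0.5, 0), (-1.0, 0.0, 1.0), viewport, monitor))  # -> viewport
```

Looking left hits the viewport; looking straight ahead would hit the monitor and return `"desktop"`, matching the claims' behavior of transitioning the cursor back when the projected ray intersects the monitor.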
Cited References

Yanagawa, Shingo; Yamauchi, Yasunobu; Taira, Kazuki; Fukushima, Rieko; Hirayama, Yuzo, Apparatus for and method of generating image, and computer program product.
Clanton, Charles H.; Ventrella, Jeffrey J.; Paiz, Fernando J., Cinematic techniques in avatar-centric communication during a multi-user online simulation.
Baudisch, Patrick M.; Cutrell, Edward B.; Hinckley, Kenneth P.; Gruen, Robert W., Displaying visually correct pointer movements on a multi-monitor display system.
Glover, Robert Michael; Ray, Arris Eugene; Cassel, DJ Jonathan; Madsen, Nels Howard; McLaughlin, Thomas Michael, Navigating through a virtual environment having a real-world elevation characteristics using motion capture.
Horvitz, Eric J.; Markley, Michael E.; Sonntag, Martin L., System and method for resizing an input position indicator for a user interface of a computer system.
Cardoso Lopes, Gonçalo; Gomes da Silva Frazão, João Pedro; Rui Soares Pereira de Almeida, André; Sequeira Cardoso, Nuno Ricardo; de Almeida Soares Franco, Ivan; Moura e Silva Cruces, Nuno, Various methods and apparatuses for achieving augmented reality.