A method, including presenting, by a computer, multiple interactive items on a display coupled to the computer, and receiving an input indicating a direction of a gaze of a user of the computer. In response to the gaze direction, one of the multiple interactive items is selected, and subsequent to the one of the interactive items being selected, a sequence of three-dimensional (3D) maps is received containing at least a hand of the user. The 3D maps are analyzed to detect a gesture performed by the user, and an operation is performed on the selected interactive item in response to the gesture.
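The abstract describes a two-stage pipeline: gaze direction selects an interactive item, and a subsequent hand gesture, detected from a sequence of 3D maps, operates on it. Below is a minimal Python sketch of the selection stage, assuming the gaze direction has already been intersected with the display plane to yield a 2D on-screen point; the Item type and the hit-test are illustrative names, not taken from the patent.

```python
from dataclasses import dataclass
from typing import Optional, Sequence

@dataclass
class Item:
    """An interactive item with axis-aligned screen-space bounds (pixels)."""
    name: str
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

def select_by_gaze(items: Sequence[Item],
                   gaze_xy: tuple[float, float]) -> Optional[Item]:
    """Return the item whose bounds contain the on-screen gaze point, if any."""
    x, y = gaze_xy
    for item in items:
        if item.contains(x, y):
            return item
    return None

# Usage: a gaze point at (500, 300) falls inside the "photos" tile.
items = [Item("mail", 0, 0, 400, 400), Item("photos", 400, 0, 800, 400)]
assert select_by_gaze(items, (500, 300)).name == "photos"
```

Once an item is selected this way, the method hands control to the gesture stage: the 3D map sequence is analyzed, and the detected gesture determines the operation applied to the selected item.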
Representative Claims
1. A method, comprising: presenting, by a computer, multiple interactive items on a display coupled to the computer; projecting a light toward a scene that includes a user of the computer; capturing and processing the projected light returned from the scene so as to reconstruct an initial three-dimensional (3D) map containing at least a head of the user of the computer; capturing and processing a two-dimensional (2D) image containing reflections of the projected light from a fundus and a cornea of an eye of the user; extracting, from the initial 3D map, 3D coordinates of the head; identifying, based on the 3D coordinates of the head and the reflections of the projected light from the fundus and the cornea of the eye, a direction of a gaze of the user; selecting, in response to the gaze direction, one of the multiple interactive items; subsequent to selecting the one of the interactive items, receiving a sequence of three-dimensional (3D) maps containing at least a hand of the user; analyzing the 3D maps to detect a gesture performed by the user; and performing an operation on the selected interactive item in response to the gesture.

2. The method according to claim 1, wherein the detected gesture comprises a two-dimensional gesture performed by the hand while the hand is in contact with a physical surface.

3. The method according to claim 2, wherein the physical surface is configured as a virtual surface selected from a list comprising a touchpad, a touchscreen, a keyboard and a mouse.

4. The method according to claim 2, and further comprising projecting an image on the surface in response to the two-dimensional gesture.

5. The method according to claim 1, and further comprising presenting, on a display, context information for the selected interactive item in response to the detected gesture comprising a Press and Hold gesture.

6. The method according to claim 1, and further comprising performing an operation associated with the selected interactive item, in response to the detected gesture comprising a Tap gesture.

7. The method according to claim 1, and further comprising scrolling, on a display, the selected interactive item in response to the detected gesture comprising a Slide to Drag gesture.

8. The method according to claim 1, and further comprising task switching to an application associated with the selected interactive item in response to the detected gesture being selected from a list comprising a Swipe gesture and a Select gesture.

9. The method according to claim 1, and further comprising changing, on a display, a size of the selected interactive item in response to the detected gesture being selected from a list comprising a Pinch gesture and a Grab gesture.

10. The method according to claim 1, and further comprising switching between executing applications in response to the detected gesture comprising a Swipe From Edge gesture.

11. The method according to claim 1, and further comprising presenting, on a display, a hidden menu in response to the detected gesture comprising a Swipe From Edge gesture.

12. The method according to claim 1, and further comprising presenting, on a display, a rotation of the selected interactive item in response to the detected gesture comprising a Rotate gesture.

13. The method according to claim 1, and further comprising identifying a color of an object held by the hand of the user, and using the color for presenting content on a display.
14. The method according to claim 1, wherein the sequence of three-dimensional maps contains at least a physical surface, one or more physical objects positioned on the physical surface, and the hand positioned in proximity to the physical surface, and comprising projecting, onto the physical surface, an animation in response to the gesture, and incorporating the one or more physical objects into the animation.

15. The method according to claim 14, and further comprising projecting a respective contour image encompassing each of the one or more physical objects, and incorporating the respective contour image into the animation.

16. An apparatus, comprising: a sensing device, comprising: an illumination subassembly, which is configured to project a light toward a scene that includes a user of a computer; an imaging subassembly, which is configured to capture the projected light returned from the scene, including reflections of the projected light from a fundus and a cornea of an eye of the user; and a processor, which is configured to generate, based on the captured light, three-dimensional (3D) maps containing at least a head and a hand of a user, and a two-dimensional (2D) image containing reflections of the projected light from a fundus and a cornea of an eye of the user, to extract 3D coordinates of the head from the initial 3D map, and to identify, based on the 3D coordinates of the head and the reflections of the projected light from the fundus and the cornea of the eye, a direction of a gaze of the user; a display; and the computer, which is coupled to the sensing device and the display, and configured to present, on the display, multiple interactive items, to receive an input indicating a direction of a gaze performed by a user of the computer, to select, in response to the gaze direction, one of the multiple interactive items, to receive, subsequent to selecting the one of the interactive items, a sequence of the three-dimensional (3D) maps containing at least the hand of the user, to analyze the 3D maps to detect a gesture performed by the user, and to perform an operation on the selected one of the interactive items in response to the gesture.

17. The apparatus according to claim 16, wherein the detected gesture comprises a two-dimensional gesture performed by the hand while the hand is in contact with a physical surface.

18. The apparatus according to claim 17, wherein the physical surface is configured as a virtual surface selected from a list comprising a touchpad, a touchscreen, a keyboard and a mouse.

19. The apparatus according to claim 17, and further comprising a projector coupled to the computer and configured to project an image on the surface in response to the two-dimensional gesture.

20. The apparatus according to claim 16, wherein the computer is configured to present, on the display, context information for the selected interactive item in response to the detected gesture comprising a Press and Hold gesture.

21. The apparatus according to claim 16, wherein the computer is configured to perform an operation associated with the selected interactive item, in response to the detected gesture comprising a Tap gesture.

22. The apparatus according to claim 16, wherein the computer is configured to scroll, on the display, the selected interactive item in response to the detected gesture comprising a Slide to Drag gesture.
23. The apparatus according to claim 16, wherein the computer is configured to task switch to an application associated with the selected interactive item in response to the detected gesture being selected from a list comprising a Swipe gesture and a Select gesture.

24. The apparatus according to claim 16, wherein the computer is configured to change a size of the selected interactive item on the display in response to the detected gesture being selected from a list comprising a Pinch gesture and a Grab gesture.

25. The apparatus according to claim 16, wherein the computer is configured to switch between executing applications in response to the detected gesture comprising a Swipe From Edge gesture.

26. The apparatus according to claim 16, wherein the computer is configured to present, on the display, a hidden menu in response to the detected gesture comprising a Swipe From Edge gesture.

27. The apparatus according to claim 16, wherein the computer is configured to present, on the display, a rotation of the selected interactive item in response to the detected gesture comprising a Rotate gesture.

28. The apparatus according to claim 16, wherein the computer is configured to identify a color of an object held by the hand of the user, and to use the color for presenting content on a display.

29. The apparatus according to claim 16, and further comprising a projector coupled to the computer, and wherein the sequence of three-dimensional maps contains at least a physical surface, one or more physical objects positioned on the physical surface, and the hand positioned in proximity to the physical surface, and wherein the computer is configured to project, using the projector, an animation onto the physical surface in response to the gesture, and to incorporate the one or more physical objects into the animation.

30. The apparatus according to claim 29, wherein the computer is configured to project, using the projector, a respective contour image encompassing each of the one or more physical objects on the physical surface, and to incorporate the respective contour image into the animation.
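Claims 5 through 12 read as a fixed dispatch table from detected gestures to operations on the selected item (Tap activates it, Press and Hold shows context information, and so on). Below is a minimal sketch of that mapping in Python, assuming gestures arrive as string labels from an upstream classifier; the label identifiers and the dispatch function are hypothetical, since the claims name the gestures but define no particular encoding or API.

```python
# Gesture -> operation mapping mirroring claims 5-12. The labels are
# illustrative stand-ins for the named gestures; the patent defines the
# gestures ("Tap", "Press and Hold", ...) but not any software interface.
GESTURE_OPERATIONS = {
    "press_and_hold":  "present context information for the selected item",  # claim 5
    "tap":             "perform the operation associated with the item",     # claim 6
    "slide_to_drag":   "scroll the selected item on the display",            # claim 7
    "swipe":           "task-switch to the item's application",              # claim 8
    "select":          "task-switch to the item's application",              # claim 8
    "pinch":           "change the size of the selected item",               # claim 9
    "grab":            "change the size of the selected item",               # claim 9
    "swipe_from_edge": "switch applications or present a hidden menu",       # claims 10-11
    "rotate":          "present a rotation of the selected item",            # claim 12
}

def dispatch(gesture: str) -> str:
    """Return the operation the claims associate with a detected gesture."""
    return GESTURE_OPERATIONS.get(gesture, "no operation")

# Usage: a detected Tap activates the selected item.
assert dispatch("tap") == "perform the operation associated with the item"
```

A flat lookup like this reflects the structure of the dependent claims: each gesture is bound to exactly one operation on the already-selected item, with selection itself handled entirely by the gaze stage of claim 1.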
Patents Cited by This Patent (91)
Li Man Li (HK); Derek Louie (HK); Chi Hong Chan (HK), 3D sculpturing input device.
Wee, Susie J.; Baker, Henry Harlyn; Bhatti, Nina T.; Covell, Michele; Harville, Michael, Communication and collaboration system using rich media environments.
Albertson, Jacob C.; Arnold, Kenneth C.; Goldman, Steven D.; Paolini, Michael A.; Sessa, Anthony J., Controlling resource access based on user gesturing in a 3D captured image stream of the user.
Cohen, Charles J.; Beach, Glenn; Cavell, Brook; Foulk, Gene; Jacobus, Charles J.; Obermark, Jay; Paul, George, Gesture-controlled interfaces for self-service machines and other applications.
Honda, Tadashi, Handwriting information processing apparatus, handwriting information processing method, and storage medium having program stored therein for handwriting information processing.
Rushmeier, Holly E.; Bernardini, Fausto, Method and apparatus for acquiring a set of consistent image maps to represent the color of the surface of an object.
Lanier, Jaron Z. (Palo Alto, CA); Grimaud, Jean-Jacques G. (Portola Valley, CA); Harvill, Young L. (San Mateo, CA); Lasko-Harvill, Ann (San Mateo, CA); Blanchard, Chuck L. (Palo Alto, CA); Oberman, Mark L. (Mountain View, CA), Method and system for generating objects for a multi-person virtual world using data flow networks.
Rafii, Abbas; Bamji, Cyrus; Sze, Cheng-Feng; Torunoglu, Ilhami, Methods for enhancing performance and data acquired from three-dimensional image systems.
Bang, Won chul; Kim, Dong yoon; Chang, Wook; Kang, Kyoung ho; Choi, Eun seok, Spatial motion recognition system and method using a virtual handwriting plane.
Segawa, Hiroyuki; Hiraki, Norikazu; Shioya, Hiroyuki; Abe, Yuichi, Three-dimensional model processing device, three-dimensional model processing method, program providing medium.
Albertson, Jacob C.; Arnold, Kenneth C.; Goldman, Steven D.; Paolini, Michael A.; Sessa, Anthony J., Tracking a range of body movement based on 3D captured image streams of a user.
Ellenby, John; Ellenby, Thomas; Ellenby, Peter, Vision system computer modeling apparatus including interaction with real scenes with respect to perspective and spatial relationship as measured in real-time.