Three-dimensional user interface session control
Country / Type: United States (US) Patent (Granted)
IPC (7th edition): G06F-003/01; G06F-003/00; G06F-003/03
Application No.: US-0055997 (2013-10-17)
Registration No.: US-9035876 (2015-05-19)
Inventors: Galor, Micha; Pokrass, Jonathan; Hoffnung, Amir
Applicant: Apple Inc.
Agent: D. Kligler I.P. Services Ltd.
Citation information: cited by 1 patent; cites 99 patents
Abstract
A method, including receiving, by a computer executing a non-tactile three dimensional (3D) user interface, a set of multiple 3D coordinates representing a gesture by a hand positioned within a field of view of a sensing device coupled to the computer, the gesture including a first motion in a first direction along a selected axis in space, followed by a second motion in a second direction, opposite to the first direction, along the selected axis. Upon detecting completion of the gesture, the non-tactile 3D user interface is transitioned from a first state to a second state.
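The mechanism in the abstract amounts to a small detector running over a stream of timestamped hand coordinates. The sketch below shows one way such a detector could work; it is a minimal illustration with hypothetical names and API, not Apple's implementation. The 10 cm distance and 10 cm/s speed thresholds are taken from claims 9 and 10; everything else is an assumption.

```python
# Hypothetical sketch of the "focus gesture" detector from the abstract:
# a first motion along a selected axis, followed by a second motion back
# along the same axis. Thresholds per claims 9-10; names are illustrative.

from dataclasses import dataclass
from typing import List, Optional, Tuple

FOCUS_GESTURE_DISTANCE = 0.10    # meters (10 cm, per claim 10)
MIN_FOCUS_GESTURE_SPEED = 0.10   # meters per second (10 cm/s, per claim 10)


@dataclass
class Sample:
    t: float                          # timestamp, seconds
    xyz: Tuple[float, float, float]   # hand position from the sensing device, meters


def _leg_end(samples: List[Sample], axis: int, start: int, sign: int) -> Optional[int]:
    """Index where a motion of direction `sign` along `axis` has covered
    FOCUS_GESTURE_DISTANCE at MIN_FOCUS_GESTURE_SPEED or better, or None."""
    origin = samples[start]
    for i in range(start + 1, len(samples)):
        dist = sign * (samples[i].xyz[axis] - origin.xyz[axis])
        if dist < 0:
            origin = samples[i]       # hand backtracked; restart the leg here
            continue
        dt = samples[i].t - origin.t
        if dist >= FOCUS_GESTURE_DISTANCE and dt > 0 and dist / dt >= MIN_FOCUS_GESTURE_SPEED:
            return i
    return None


def detect_focus_gesture(samples: List[Sample], axis: int = 2) -> bool:
    """True if the coordinate stream contains a qualifying motion along
    `axis` (0 = x, 1 = y, 2 = depth) followed by a qualifying opposite motion."""
    for first_sign in (+1, -1):       # push-then-pull or pull-then-push
        mid = _leg_end(samples, axis, 0, first_sign)
        if mid is not None and _leg_end(samples, axis, mid, -first_sign) is not None:
            return True
    return False


if __name__ == "__main__":
    # Synthetic push-pull along the depth axis: 12 cm out over 0.5 s,
    # then 12 cm back over 0.5 s -- both legs clear both thresholds.
    push = [Sample(i * 0.05, (0.0, 0.0, 0.012 * i)) for i in range(11)]
    pull = [Sample(0.5 + i * 0.05, (0.0, 0.0, 0.12 - 0.012 * i)) for i in range(1, 11)]
    print(detect_focus_gesture(push + pull))  # True
```

On detecting the completed gesture, the interface would then transition from the first state to the second (for example, from not tracked to tracked).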
Representative Claims
1. A method, comprising: receiving, by a computer executing a non-tactile three dimensional (3D) user interface, a set of multiple 3D coordinates representing a gesture by a hand positioned within a field of view of a sensing device coupled to the computer, the gesture comprising a first motion in a first direction along a selected axis in space, followed by a second motion in a second direction, opposite to the first direction, along the selected axis; and transitioning the non-tactile 3D user interface from a first state to a second state upon detecting completion of the gesture.
2. The method according to claim 1, wherein the selected axis is selected from a list consisting of a depth axis and a horizontal axis.
3. The method according to claim 1, wherein the first state comprises a not tracked state and the second state comprises a tracked state.
4. The method according to claim 1, wherein the first state comprises a locked state and the second state comprises an unlocked state.
5. The method according to claim 1, wherein the first state comprises an inactive state and the second state comprises an active state.
6. The method according to claim 1, and comprising conveying a first visual feedback to the user prior to the first motion, conveying a second visual feedback subsequent to the first motion, and conveying a third visual feedback subsequent to the second motion.
7. The method according to claim 6, wherein the first visual feedback, the second visual feedback and the third visual feedback comprise illuminating or darkening one or more visual feedback devices coupled to the computer.
8. The method according to claim 6, wherein the first visual feedback, the second visual feedback and the third visual feedback comprise altering a feedback item presented on a display coupled to the computer.
9. The method according to claim 1, wherein the hand performs each of the first and the second motions for at least a focus gesture distance at a minimum focus gesture speed.
10. The method according to claim 9, wherein the focus gesture distance comprises 10 centimeters, and the minimum focus gesture speed comprises 10 centimeters per second.
11. A method, comprising: associating, in a computer executing a non-tactile three dimensional (3D) user interface, multiple regions, comprising at least first and second regions, within a field of view of a sensing device coupled to the computer with respective states of the non-tactile 3D user interface, comprising at least first and second states associated respectively with the first and second regions; conveying visual feedback to a user of the computer on a display having a vertical orientation; receiving a set of multiple 3D coordinates representing a vertical hand movement from the first region to the second region; and responsively to the vertical hand movement, transitioning the non-tactile 3D user interface from the first state to the second state.
12. The method according to claim 11, wherein the states of the non-tactile 3D user interface are selected from a list consisting of a tracked and active state, a tracked and inactive state, and a not tracked and inactive state.
13. The method according to claim 11, wherein the visual feedback to the user indicates a current state of the non-tactile 3D user interface.
14. The method according to claim 11, and comprising adjusting boundaries of the multiple regions responsively to recent movements of the hand.
15. An apparatus, comprising: a three dimensional (3D) optical sensor having a field of view and coupled to a computer executing a non-tactile three dimensional (3D) user interface; and an illumination element that, when illuminated, is configured to be visible to a user when the user is positioned within the field of view of the 3D optical sensor so as to convey visual feedback to the user indicating the user's position relative to the field of view.
16. The apparatus according to claim 15, wherein the field of view comprises multiple regions and the illumination element is configured to present a current state of the non-tactile 3D user interface to the user positioned in one of the multiple regions.
17. The apparatus according to claim 16, wherein the computer is configured to select the state of the non-tactile 3D user interface from a list consisting of tracked, not tracked, locked, not locked, active and inactive.
18. The apparatus according to claim 17, wherein each of the states is associated with a specific color and the illumination element is configured to present the current state by illuminating in the specific color associated with the current state of the non-tactile 3D user interface.
19. The apparatus according to claim 15, and comprising a conical shaft, wherein the illumination element is positioned in proximity to an apex of the conical shaft.
20. An apparatus, comprising: a sensing device; and a computer executing a non-tactile three dimensional (3D) user interface and configured to receive, from the sensing device, a set of multiple 3D coordinates representing a gesture by a hand positioned within a field of view of the sensing device, the gesture comprising a first motion in a first direction along a selected axis in space, followed by a second motion in a second direction, opposite to the first direction, along the selected axis, and to transition the non-tactile 3D user interface from a first state to a second state upon detecting completion of the gesture.
21. The apparatus according to claim 20, wherein the computer is configured to select the axis from a list consisting of a depth axis and a horizontal axis.
22. The apparatus according to claim 20, wherein the first state comprises a not tracked state and the second state comprises a tracked state.
23. The apparatus according to claim 20, wherein the first state comprises a locked state and the second state comprises an unlocked state.
24. The apparatus according to claim 20, wherein the first state comprises an inactive state and the second state comprises an active state.
25. The apparatus according to claim 20, wherein the computer is configured to convey a first visual feedback to the user prior to the first motion, to convey a second visual feedback subsequent to the first motion, and to convey a third visual feedback subsequent to the second motion.
26. The apparatus according to claim 25, and comprising one or more visual feedback devices, wherein the computer is configured to convey the first visual feedback, the second visual feedback and the third visual feedback by illuminating or darkening the one or more visual feedback devices.
27. The apparatus according to claim 25, and comprising a display, wherein the computer is configured to convey the first visual feedback, the second visual feedback and the third visual feedback by altering a feedback item presented on the display.
28. The apparatus according to claim 20, wherein the hand performs each of the first and the second motions for at least a focus gesture distance at a minimum focus gesture speed.
29. The apparatus according to claim 28, wherein the focus gesture distance comprises 10 centimeters, and the minimum focus gesture speed comprises 10 centimeters per second.
30. An apparatus, comprising: a sensing device; a display having a vertical orientation; and a computer coupled to drive the display to convey visual feedback to a user of the computer while executing a non-tactile three dimensional (3D) user interface and configured to associate multiple regions, comprising at least first and second regions, within a field of view of the sensing device with respective states of the non-tactile 3D user interface, comprising at least first and second states associated respectively with the first and second regions, to receive a set of multiple 3D coordinates representing a vertical hand movement from the first region to the second region, and responsively to the vertical hand movement, to transition the non-tactile 3D user interface from the first state to the second state.
31. The apparatus according to claim 30, wherein the computer is configured to select the states of the non-tactile 3D user interface from a list consisting of a tracked and active state, a tracked and inactive state, and a not tracked and inactive state.
32. The apparatus according to claim 30, wherein the visual feedback to the user indicates a current state of the non-tactile 3D user interface.
33. The apparatus according to claim 30, wherein the computer is configured to adjust boundaries of the multiple regions responsively to recent movements of the hand.
34. A computer software product comprising a non-transitory computer-readable medium, in which program instructions are stored, which instructions, when read by a computer executing a non-tactile user interface, cause the computer to receive, from a sensing device, a set of multiple 3D coordinates representing a gesture by a hand positioned within a field of view of the sensing device, the gesture comprising a first motion in a first direction along a selected axis in space, followed by a second motion in a second direction, opposite to the first direction, along the selected axis, and to transition the non-tactile 3D user interface from a first state to a second state upon detecting completion of the gesture.
35. A computer software product comprising a non-transitory computer-readable medium, in which program instructions are stored, which instructions, when read by a computer coupled to drive a display having a vertical orientation to convey visual feedback to a user of the computer and executing a non-tactile user interface, cause the computer to associate multiple regions, comprising at least first and second regions, within a field of view of a sensing device with respective states of the non-tactile 3D user interface, comprising at least first and second states associated respectively with the first and second regions, to receive a set of multiple 3D coordinates representing a vertical hand movement from the first region to the second region, and responsively to the vertical hand movement, to transition the non-tactile 3D user interface from the first state to the second state.
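Claims 11-14 and 30-33 describe a second session-control mechanism: vertical bands of the sensor's field of view map to interface states, and a vertical hand movement between bands transitions the state, with claim 18 adding a per-state color on the illumination element. Below is a minimal sketch assuming hypothetical region boundaries, state names, and colors; the claims specify none of these concrete values.

```python
# Illustrative sketch of region-based session states (claims 11-14) and
# per-state feedback colors (claim 18). All boundary values, state names,
# and colors are assumptions, not values from the patent.

from enum import Enum


class SessionState(Enum):
    NOT_TRACKED_INACTIVE = "not tracked and inactive"
    TRACKED_INACTIVE = "tracked and inactive"
    TRACKED_ACTIVE = "tracked and active"


# Lower edge of each vertical region (meters above the sensor origin).
# Claim 14 allows these boundaries to be adjusted from recent hand movements.
REGIONS = [
    (0.0, SessionState.NOT_TRACKED_INACTIVE),
    (0.3, SessionState.TRACKED_INACTIVE),
    (0.6, SessionState.TRACKED_ACTIVE),
]

# One color per state, shown on the illumination element (claim 18 only
# requires "a specific color"; these choices are illustrative).
STATE_COLORS = {
    SessionState.NOT_TRACKED_INACTIVE: "red",
    SessionState.TRACKED_INACTIVE: "yellow",
    SessionState.TRACKED_ACTIVE: "green",
}


def state_for_height(y: float) -> SessionState:
    """Map the hand's height to the state of the region containing it."""
    state = REGIONS[0][1]
    for lower_edge, region_state in REGIONS:
        if y >= lower_edge:
            state = region_state
    return state


def on_hand_moved(y: float, current: SessionState) -> SessionState:
    """Transition the interface when a vertical movement crosses regions,
    reporting the feedback color for the new state."""
    new_state = state_for_height(y)
    if new_state is not current:
        print(f"transition: {current.value} -> {new_state.value} "
              f"(feedback color: {STATE_COLORS[new_state]})")
    return new_state


if __name__ == "__main__":
    state = SessionState.NOT_TRACKED_INACTIVE
    for y in (0.1, 0.4, 0.7):    # hand rising through the three regions
        state = on_hand_moved(y, state)
```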
Cited Patents
Kazama, Hisashi; Onoguchi, Kazunori; Yuasa, Mayumi; Fukui, Kazuhiro, Apparatus and method for controlling an electronic device with user action.
Wee, Susie J.; Baker, Henry Harlyn; Bhatti, Nina T.; Covell, Michele; Harville, Michael, Communication and collaboration system using rich media environments.
Albertson, Jacob C.; Arnold, Kenneth C.; Goldman, Steven D.; Paolini, Michael A.; Sessa, Anthony J., Controlling resource access based on user gesturing in a 3D captured image stream of the user.
Cohen, Charles J.; Beach, Glenn; Cavell, Brook; Foulk, Gene; Jacobus, Charles J.; Obermark, Jay; Paul, George, Gesture-controlled interfaces for self-service machines and other applications.
Honda, Tadashi, Handwriting information processing apparatus, handwriting information processing method, and storage medium having program stored therein for handwriting information processing.
Rushmeier, Holly E.; Bernardini, Fausto, Method and apparatus for acquiring a set of consistent image maps to represent the color of the surface of an object.
Lanier, Jaron Z.; Grimaud, Jean-Jacques G.; Harvill, Young L.; Lasko-Harvill, Ann; Blanchard, Chuck L.; Oberman, Mark L., Method and system for generating objects for a multi-person virtual world using data flow networks.
Latypov, Nurakhmed Nurislamovich; Latypov, Nurulla Nurislamovich, Method for tracking and displaying user's spatial position and orientation, a method for representing virtual reality for a user, and systems of embodiment of such methods.
Rafii, Abbas; Bamji, Cyrus; Sze, Cheng-Feng; Torunoglu, Iihami, Methods for enhancing performance and data acquired from three-dimensional image systems.
Bang, Won chul; Kim, Dong yoon; Chang, Wook; Kang, Kyoung ho; Choi, Eun seok, Spatial motion recognition system and method using a virtual handwriting plane.
Murray, Paul; Troy, James J.; Erignac, Charles A.; Wojcik, Richard H.; Finton, David J.; Margineantu, Dragos D., System and method for controlling swarm of remote unmanned vehicles through human gestures.
Albertson, Jacob C.; Arnold, Kenneth C.; Goldman, Steven D.; Paolini, Michael A.; Sessa, Anthony J., Tracking a range of body movement based on 3D captured image streams of a user.
Backlund, Erik Johan Vendel; Bengtsson, Henrik; Heringslack, Henrik; Sassi, Jari; Thörn, Ola Karl; Åberg, Peter, User interface with three dimensional user input.
Ellenby, John; Ellenby, Thomas; Ellenby, Peter, Vision system computer modeling apparatus including interaction with real scenes with respect to perspective and spatial relationship as measured in real-time.
Chirakan, Jason; Hanthorn, Douglas; Herring, Dean F.; Singh, Ankit, Systems and methods for implementing retail processes based on machine-readable images and user gestures.