Three dimensional user interface session control
IPC Classification Information
Country / Type
United States (US) Patent
Granted
International Patent Classification (IPC, 7th edition)
G09G-005/00
G06F-003/00
G06F-003/01
G06F-003/03
Application Number
US-0314210
(2011-12-08)
Registration Number
US-8933876
(2015-01-13)
Inventors / Address
Galor, Micha
Pokrass, Jonathan
Hoffnung, Amir
Applicant / Address
Apple Inc.
Agent / Address
D Kligler I.P. Services Ltd
Citation Information
Cited by: 29
Cited patents: 103
Abstract
A method, including receiving, by a computer executing a non-tactile three dimensional (3D) user interface, a set of multiple 3D coordinates representing a gesture by a hand positioned within a field of view of a sensing device coupled to the computer, the gesture including a first motion in a first direction along a selected axis in space, followed by a second motion in a second direction, opposite to the first direction, along the selected axis. Upon detecting completion of the gesture, the non-tactile 3D user interface is transitioned from a first state to a second state.
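The abstract describes a gesture made of a motion in one direction along a selected axis, followed by a motion in the opposite direction along the same axis, which triggers a state transition. As an illustrative sketch only (not the patent's implementation; the axis index and the `min_travel` threshold are hypothetical values, not taken from the patent), such a pattern could be detected in a stream of 3D coordinates like this:

```python
def detect_back_and_forth(coords, axis=2, min_travel=0.1):
    """Return True if the trajectory contains a motion in one direction
    along `axis`, followed by a motion in the opposite direction.

    coords: time-ordered list of (x, y, z) tuples.
    min_travel: minimum displacement per leg (hypothetical threshold).
    """
    values = [c[axis] for c in coords]
    # Try every interior sample as the turning point splitting the
    # trajectory into two legs.
    for i in range(1, len(values) - 1):
        first_leg = values[i] - values[0]
        second_leg = values[-1] - values[i]
        # Opposite signs mean opposite directions; both legs must
        # travel far enough to count as deliberate motion.
        if (first_leg * second_leg < 0
                and abs(first_leg) >= min_travel
                and abs(second_leg) >= min_travel):
            return True
    return False
```

A push of the hand away from the sensor and back again (values along the chosen axis rising then falling) would satisfy the check; a monotone drift would not.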
Representative Claims
1. A method, comprising: receiving, by a computer executing a non-tactile three dimensional (3D) user interface, a set of multiple 3D coordinates representing a gesture by a hand positioned within a field of view of a sensing device coupled to the computer, the gesture comprising a rising motion along a vertical axis in space wherein the hand performs the rising motion for at least an unlock gesture distance at a minimum unlock gesture speed; and transitioning the non-tactile 3D user interface from a locked state to an unlocked state upon detecting completion of the gesture.

2. The method according to claim 1, and comprising conveying a first visual feedback to the user prior to the gesture, and conveying a second visual feedback subsequent to the gesture.

3. The method according to claim 2, wherein the first and the second visual feedbacks comprise illuminating or darkening one or more visual feedback devices coupled to the computer.

4. The method according to claim 2, wherein the first and the second visual feedbacks comprise altering a feedback item presented on a display coupled to the computer.

5. The method according to claim 1, wherein the unlock gesture distance comprises 20 centimeters, and the minimum unlock gesture speed comprises four centimeters per second.

6. An apparatus, comprising: a sensing device; and a computer executing a non-tactile three dimensional (3D) user interface and configured to receive, from the sensing device, a set of multiple 3D coordinates representing a gesture by a hand positioned within a field of view of the sensing device, the gesture comprising a rising motion along a vertical axis in space wherein the hand performs the rising motion for at least an unlock gesture distance at a minimum unlock gesture speed, and to transition the non-tactile 3D user interface from a locked state to an unlocked state upon detecting completion of the gesture.

7. The apparatus according to claim 6, wherein the computer is configured to convey a first visual feedback to the user prior to the gesture, and to convey a second visual feedback subsequent to the gesture.

8. The apparatus according to claim 7, and comprising one or more visual feedback devices, wherein the computer is configured to convey the first and the second visual feedbacks by illuminating or darkening the one or more visual feedback devices.

9. The apparatus according to claim 7, and comprising a display, wherein the computer is configured to convey the first and the second visual feedbacks by altering a feedback item presented on the display.

10. The apparatus according to claim 6, wherein the unlock gesture distance comprises 20 centimeters, and the minimum unlock gesture speed comprises four centimeters per second.

11. A computer software product comprising a non-transitory computer-readable medium, in which program instructions are stored, which instructions, when read by a computer executing a non-tactile user interface, cause the computer to receive, from a sensing device, a set of multiple 3D coordinates representing a gesture by a hand positioned within a field of view of the sensing device, the gesture comprising a rising motion along a vertical axis in space wherein the hand performs the rising motion for at least an unlock gesture distance at a minimum unlock gesture speed, and to transition the non-tactile 3D user interface from a locked state to an unlocked state upon detecting completion of the gesture.

12. A method, comprising: receiving, by a computer executing a non-tactile three dimensional (3D) user interface, a set of multiple 3D coordinates representing a gesture by a hand positioned within a field of view of a sensing device coupled to the computer, the gesture comprising a rising motion along a vertical axis in space; determining whether the gesture of the hand included a rising of the hand by at least 20 centimeters; and transitioning the non-tactile 3D user interface from a locked state to an unlocked state upon detecting completion of the gesture, wherein the transitioning of the user interface from a locked state to an unlocked state is performed only if the upward gesture included a rise of the hand by at least 20 centimeters.
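Claims 1, 5, and 12 pin the unlock gesture to concrete thresholds: a rise of at least 20 centimeters at no less than four centimeters per second. A minimal sketch of that check follows; the endpoint-based segmentation and the `(timestamp, height)` sample format are assumptions for illustration, since the claims do not describe how the sensing pipeline segments the motion:

```python
# Thresholds stated in the claims.
UNLOCK_DISTANCE_CM = 20.0
MIN_UNLOCK_SPEED_CM_S = 4.0

def is_unlock_gesture(samples):
    """Return True if the hand rose by at least the unlock gesture
    distance at no less than the minimum unlock gesture speed.

    samples: time-ordered list of (timestamp_s, height_cm) pairs for
    the hand along the vertical axis.
    """
    if len(samples) < 2:
        return False
    t0, y0 = samples[0]
    t1, y1 = samples[-1]
    rise = y1 - y0          # total upward travel in centimeters
    elapsed = t1 - t0       # duration of the motion in seconds
    if rise < UNLOCK_DISTANCE_CM or elapsed <= 0:
        return False
    # Average speed over the motion must meet the minimum.
    return rise / elapsed >= MIN_UNLOCK_SPEED_CM_S
```

A 22 cm rise over four seconds (5.5 cm/s) would unlock; the same rise over ten seconds, or a 10 cm rise at any speed, would not.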
Kazama, Hisashi (JPX); Onoguchi, Kazunori (JPX); Yuasa, Mayumi (JPX); Fukui, Kazuhiro (JPX), Apparatus and method for controlling an electronic device with user action.
Wee, Susie J.; Baker, Henry Harlyn; Bhatti, Nina T.; Covell, Michele; Harville, Michael, Communication and collaboration system using rich media environments.
Albertson, Jacob C.; Arnold, Kenneth C.; Goldman, Steven D.; Paolini, Michael A.; Sessa, Anthony J., Controlling resource access based on user gesturing in a 3D captured image stream of the user.
Cohen, Charles J.; Beach, Glenn; Cavell, Brook; Foulk, Gene; Jacobus, Charles J.; Obermark, Jay; Paul, George, Gesture-controlled interfaces for self-service machines and other applications.
Honda, Tadashi, Handwriting information processing apparatus, handwriting information processing method, and storage medium having program stored therein for handwriting information processing.
Rushmeier, Holly E.; Bernardini, Fausto, Method and apparatus for acquiring a set of consistent image maps to represent the color of the surface of an object.
Lanier, Jaron Z. (Palo Alto CA); Grimaud, Jean-Jacques G. (Portola Valley CA); Harvill, Young L. (San Mateo CA); Lasko-Harvill, Ann (San Mateo CA); Blanchard, Chuck L. (Palo Alto CA); Oberman, Mark L. (Mountain, Method and system for generating objects for a multi-person virtual world using data flow networks.
Latypov, Nurakhmed Nurislamovich (SUX); Latypov, Nurulla Nurislamovich (SUX), Method for tracking and displaying user's spatial position and orientation, a method for representing virtual reality for a user, and systems of embodiment of such methods.
Rafii, Abbas; Bamji, Cyrus; Sze, Cheng-Feng; Torunoglu, Iihami, Methods for enhancing performance and data acquired from three-dimensional image systems.
Bang, Won chul; Kim, Dong yoon; Chang, Wook; Kang, Kyoung ho; Choi, Eun seok, Spatial motion recognition system and method using a virtual handwriting plane.
Murray, Paul; Troy, James J.; Erignac, Charles A.; Wojcik, Richard H.; Finton, David J.; Margineantu, Dragos D., System and method for controlling swarm of remote unmanned vehicles through human gestures.
Segawa, Hiroyuki; Hiraki, Norikazu; Shioya, Hiroyuki; Abe, Yuichi, Three-dimensional model processing device, three-dimensional model processing method, program providing medium.
Albertson, Jacob C.; Arnold, Kenneth C.; Goldman, Steven D.; Paolini, Michael A.; Sessa, Anthony J., Tracking a range of body movement based on 3D captured image streams of a user.
Backlund, Erik Johan Vendel; Bengtsson, Henrik; Heringslack, Henrik; Sassi, Jari; Thörn, Ola Karl; Åberg, Peter, User interface with three dimensional user input.
Ellenby, John; Ellenby, Thomas; Ellenby, Peter, Vision system computer modeling apparatus including interaction with real scenes with respect to perspective and spatial relationship as measured in real-time.
Gribetz, Meron; Mann, W. Steve G., Extramissive spatial imaging digital eye glass apparatuses, methods and systems for virtual or augmediated vision, manipulation, creation, or interaction with objects, materials, or other entities.
Zhou, Dong; Weber, Jason Robert; Bell, Matthew Paul; Polansky, Stephen Michael; Strutt, Guenael Thomas; Noble, Isaac Scott, Interface selection approaches for multi-dimensional input.