Gesture-based interface with enhanced features
IPC Classification
Country / Type
United States (US) Patent
Granted
International Patent Classification (IPC, 7th edition)
G06F-003/048
G06F-003/0481
G06F-003/01
G06F-003/0482
G06F-003/0488
G06F-003/0484
Application number
US-0904052 (2013-05-29)
Patent (registration) number
US-9459758 (2016-10-04)
Inventors / Address
Berenson, Adi
Galor, Micha
Pokrass, Jonathan
Shani, Ran
Shein, Daniel
Weissenstern, Eran
Frey, Martin
Hoffnung, Amir
Metuki, Nili
Applicant / Address
APPLE INC.
Agent / Address
D. Kligler IP Services Ltd.
Citation information
Cited by: 2
Cited patents: 131
Abstract
A method includes presenting, on a display coupled to a computer, an image of a keyboard comprising multiple keys, and receiving a sequence of three-dimensional (3D) maps including a hand of a user positioned in proximity to the display. An initial portion of the sequence of 3D maps is processed to detect a transverse gesture performed by the hand, and a cursor is presented on the display at a position indicated by the transverse gesture. While the cursor is presented in proximity to one of the multiple keys, that key is selected upon detecting a grab gesture followed by a pull gesture followed by a release gesture in a subsequent portion of the sequence of 3D maps.
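Purely as an illustrative sketch (not the patented implementation), the selection logic described in the abstract — track the cursor through transverse motion, then fire on a grab → pull → release sequence — can be modeled as a small state machine over per-frame gesture labels. The `Gesture`, `Frame`, and `key_at` names here are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Gesture(Enum):
    TRANSVERSE = auto()  # hand moving parallel to the display plane
    GRAB = auto()
    PULL = auto()
    RELEASE = auto()

@dataclass
class Frame:
    """One entry derived from the sequence of 3D maps: a recognized
    gesture label and the on-screen position it indicates."""
    gesture: Gesture
    cursor: tuple  # (x, y) cursor position indicated by the hand

def select_key(frames, key_at):
    """Return the key chosen by a grab -> pull -> release sequence,
    after the cursor has been positioned by transverse motion."""
    cursor = None
    seen = []  # gestures observed since the cursor last moved
    for frame in frames:
        if frame.gesture is Gesture.TRANSVERSE:
            cursor = frame.cursor  # cursor tracks the transverse gesture
            seen.clear()
        else:
            seen.append(frame.gesture)
            if seen == [Gesture.GRAB, Gesture.PULL, Gesture.RELEASE]:
                return key_at(cursor)  # key under the cursor is selected
    return None  # sequence never completed; nothing selected
```

For example, with `key_at=lambda pos: "Q" if pos == (3, 1) else None`, a transverse frame at (3, 1) followed by grab, pull, and release frames returns "Q"; an incomplete sequence returns None.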
Representative claims
1. A method, comprising: presenting, on a display coupled to a computer, an image of a keyboard comprising multiple keys; receiving a sequence of three-dimensional (3D) maps including a hand of a user positioned in proximity to the display; processing an initial portion of the sequence of 3D maps to detect a transverse gesture performed by the hand of the user positioned in proximity to the display, the transverse gesture including a movement of the hand in a direction that is parallel to a plane of the display; presenting, on the display, a cursor at a position indicated by the transverse gesture; and selecting, while presenting the cursor in proximity to one of the multiple keys, the one of the multiple keys upon detecting a grab gesture followed by a pull gesture followed by a release gesture in a subsequent portion of the sequence of 3D maps.

2. The method according to claim 1, and comprising presenting the selected key in a text input area on the display.

3. The method according to claim 2, and comprising performing a search based on the selected key, and presenting, in a results area on the display, a result of the search.

4. The method according to claim 1, wherein a given one of the multiple keys comprises a space key and multiple alphanumeric keys, and comprising presenting the space key within a row of the multiple keys.

5. The method according to claim 1, and comprising selecting the one of the multiple keys upon detecting the grab gesture in the subsequent portion of the sequence of 3D maps.

6.
An apparatus, comprising: a sensing device; a display; and a computer coupled to the sensing device and the display, and configured to present, on the display, an image of a keyboard comprising multiple keys, to receive a sequence of three-dimensional (3D) maps including a hand of a user positioned in proximity to the display coupled to the computer, to process an initial portion of the sequence of 3D maps to detect a transverse gesture performed by the hand of the user positioned in proximity to the display, the transverse gesture including a movement of the hand in a direction that is parallel to a plane of the display, to present, on the display, a cursor at a position indicated by the transverse gesture, and to select, while presenting the cursor in proximity to one of the multiple keys, the one of the multiple keys upon detecting a grab gesture followed by a pull gesture followed by a release gesture in a subsequent portion of the sequence of 3D maps.

7. The apparatus according to claim 6, wherein the computer is configured to present the selected key in a text input area on the display.

8. The apparatus according to claim 7, wherein the computer is configured to perform a search based on the selected key, and to present, in a results area on the display, a result of the search.

9. The apparatus according to claim 6, wherein a given one of the multiple keys comprises a space key and multiple alphanumeric keys, and wherein the computer is configured to present the space key within a row of the multiple keys.

10. The apparatus according to claim 6, wherein the computer is configured to select the one of the multiple keys upon detecting the grab gesture in the subsequent portion of the sequence of 3D maps.

11.
A computer software product comprising a non-transitory computer-readable medium, in which program instructions are stored, which instructions, when read by a computer executing a user interface, cause the computer to present, on a display coupled to the computer, an image of a keyboard comprising multiple keys, to receive a sequence of three-dimensional (3D) maps including a hand of a user positioned in proximity to the display, to process an initial portion of the sequence of 3D maps to detect a transverse gesture performed by the hand of the user positioned in proximity to the display, the transverse gesture including a movement of the hand in a direction that is parallel to a plane of the display, to present, on the display, a cursor at a position indicated by the transverse gesture, and to select, while presenting the cursor in proximity to one of the multiple keys, the one of the multiple keys upon detecting a grab gesture followed by a pull gesture followed by a release gesture in a subsequent portion of the sequence of 3D maps.

12. A method, comprising: receiving, by a computer, a sequence of three-dimensional (3D) maps containing at least a hand of a user positioned in proximity to a display coupled to the computer; detecting, in the 3D maps, a pointing gesture directed toward a region external to the display and adjacent to an edge of the display, the pointing gesture including a pointing of a finger of the hand; and presenting, in response to the pointing gesture, one or more interactive objects on the display.

13. The method according to claim 12, wherein presenting the one or more interactive objects comprises presenting the one or more interactive objects along the edge of the display.

14.
An apparatus, comprising: a sensing device; a display; and a computer coupled to the sensing device and the display, and configured to receive a sequence of three-dimensional (3D) maps containing at least a hand of a user positioned in proximity to the display, to detect, in the 3D maps, a pointing gesture directed toward a region external to the display and adjacent to an edge of the display, the pointing gesture including a pointing of a finger of the hand, and to present, in response to the pointing gesture, one or more interactive objects on the display.

15. The apparatus according to claim 14, wherein the computer is configured to present the one or more interactive objects by presenting the one or more interactive objects along the edge of the display.

16. A computer software product comprising a non-transitory computer-readable medium, in which program instructions are stored, which instructions, when read by a computer executing a user interface, cause the computer to receive a sequence of three-dimensional (3D) maps containing at least a hand of a user positioned in proximity to a display coupled to the computer, to detect, in the 3D maps, a pointing gesture directed toward a region external to the display and adjacent to an edge of the display, the pointing gesture including a pointing of a finger of the hand, and to present, in response to the pointing gesture, one or more interactive objects on the display.

17.
A method, comprising: detecting, by a computer, at least two hands of at least one user of the computer; assigning, based on a respective position of each of the hands, a respective ranking value to each of the hands that indicates an intention to use the hand to interact with the computer; selecting a hand from among the at least two hands responsively to the respective ranking values; receiving a sequence of three-dimensional (3D) maps containing at least the selected hand positioned in proximity to a display coupled to the computer; and analyzing the 3D maps to detect a gesture performed by the selected hand.

18. The method according to claim 17, wherein detecting the at least two hands comprises receiving a two-dimensional (2D) image, and identifying the at least two hands in the 2D image.

19. The method according to claim 17, wherein detecting the at least two hands comprises receiving, prior to receiving the sequence of 3D maps, an initial set of 3D maps, and detecting the at least two hands in the initial set of 3D maps.

20. An apparatus, comprising: a sensing device; a display; and a computer coupled to the sensing device and the display, and configured to detect at least two hands of at least one user of the computer, to assign, based on a respective position of each of the hands, a respective ranking value to each of the hands that indicates an intention to use the hand to interact with the computer, to select a hand from among the at least two hands responsively to the respective ranking values, to receive a sequence of three-dimensional (3D) maps containing at least the selected hand positioned in proximity to a display coupled to the computer, and to analyze the 3D maps to detect a gesture performed by the selected hand.

21. The apparatus according to claim 20, wherein the computer is configured to detect the at least two hands by receiving a two-dimensional (2D) image, and to identify the at least two hands in the 2D image.

22.
The apparatus according to claim 20, wherein the computer is configured to detect the at least two hands by receiving, prior to receiving the sequence of 3D maps, an initial set of 3D maps, and to detect the at least two hands in the initial set of 3D maps.

23. A computer software product comprising a non-transitory computer-readable medium, in which program instructions are stored, which instructions, when read by a computer executing a user interface, cause the computer to detect at least two hands of at least one user of the computer, to assign, based on a respective position of each of the hands, a respective ranking value to each of the hands that indicates an intention to use the hand to interact with the computer, to select a hand from among the at least two hands responsively to the respective ranking values, to receive a sequence of three-dimensional (3D) maps containing at least the selected hand positioned in proximity to a display coupled to the computer, and to analyze the 3D maps to detect a gesture performed by the selected hand.
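The later claim groups describe two further techniques: revealing interactive objects when the user points at a region just outside an edge of the display (claims 12-16), and ranking multiple detected hands to pick the one the user intends to interact with (claims 17-23). A minimal sketch of both, using a hypothetical coordinate convention and a made-up ranking heuristic (the patent does not specify either):

```python
def edge_region(point, width, height, margin):
    """Classify a pointed-at location (display-plane coordinates, pixels)
    as an external region adjacent to one edge of the display, or None
    when the point is inside the display or beyond the margin."""
    x, y = point
    if 0 <= y <= height:
        if -margin <= x < 0:
            return "left"
        if width < x <= width + margin:
            return "right"
    if 0 <= x <= width:
        if -margin <= y < 0:
            return "top"
        if height < y <= height + margin:
            return "bottom"
    return None

def select_hand(hands):
    """Pick the hand most likely intended for interaction.
    The ranking here is a stand-in heuristic: hands closer to the sensor
    (smaller z) and nearer the display's centre line (smaller |x|) score
    higher; the patent only requires some position-based ranking value."""
    def rank(pos):
        x, y, z = pos
        return -(abs(x) + z)
    return max(hands, key=rank)
```

For instance, `edge_region((-20, 500), 1920, 1080, 50)` classifies a point 20 px left of a 1920x1080 display as the "left" external region, and `select_hand` on two hand positions returns the one with the higher ranking value.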
Kazama Hisashi, JPX; Onoguchi Kazunori, JPX; Yuasa Mayumi, JPX; Fukui Kazuhiro, JPX, Apparatus and method for controlling an electronic device with user action.
Zaman, Nazia; Garside, Adrian J.; Bush, Christopher T.; Barcheck, Lindsey R.; Leonard, Chantal M.; Satterfield, Jesse Clay, Application reporting in an application-selectable user interface.
Wee, Susie J.; Baker, Henry Harlyn; Bhatti, Nina T.; Covell, Michele; Harville, Michael, Communication and collaboration system using rich media environments.
Albertson, Jacob C.; Arnold, Kenneth C.; Goldman, Steven D.; Paolini, Michael A.; Sessa, Anthony J., Controlling resource access based on user gesturing in a 3D captured image stream of the user.
Cohen, Charles J.; Beach, Glenn; Cavell, Brook; Foulk, Gene; Jacobus, Charles J.; Obermark, Jay; Paul, George, Gesture-controlled interfaces for self-service machines and other applications.
Honda, Tadashi, Handwriting information processing apparatus, handwriting information processing method, and storage medium having program stored therein for handwriting information processing.
Wong, Tsz Yan; Satterfield, Jesse Clay; Sundelin, Nils A.; Anderson, Bret P.; Miner, Patrice L.; Sareen, Chaitanya Dev; Jarrett, Robert J.; Nan, Jennifer; Worley, Matthew I., Managing an immersive interface in a multi-application immersive environment.
Rushmeier, Holly E.; Bernardini, Fausto, Method and apparatus for acquiring a set of consistent image maps to represent the color of the surface of an object.
Lanier Jaron Z. (Palo Alto CA) Grimaud Jean-Jacques G. (Portola Valley CA) Harvill Young L. (San Mateo CA) Lasko-Harvill Ann (San Mateo CA) Blanchard Chuck L. (Palo Alto CA) Oberman Mark L. (Mountain, Method and system for generating objects for a multi-person virtual world using data flow networks.
Latypov Nurakhmed Nurislamovich, SUX; Latypov Nurulla Nurislamovich, SUX, Method for tracking and displaying user's spatial position and orientation, a method for representing virtual reality for a user, and systems of embodiment of such methods.
Rafii, Abbas; Bamji, Cyrus; Sze, Cheng-Feng; Torunoglu, Iihami, Methods for enhancing performance and data acquired from three-dimensional image systems.
Zaman, Nazia; Flynn, Sean L.; Deutsch, Rebecca; Leonard, Chantal M.; Satterfield, Jesse Clay; Machaj, David A., Presenting an application change through a tile.
Bang, Won chul; Kim, Dong yoon; Chang, Wook; Kang, Kyoung ho; Choi, Eun seok, Spatial motion recognition system and method using a virtual handwriting plane.
Murray, Paul; Troy, James J.; Erignac, Charles A.; Wojcik, Richard H.; Finton, David J.; Margineantu, Dragos D., System and method for controlling swarm of remote unmanned vehicles through human gestures.
Bensoussan, Pierre; De Jaegere, Antoine, System and method for instant consolidation, enrichment, delegation and reporting in a multidimensional database.
MacIntyre, James W.; Scherer, David; Rosenthal, David Alan, System, method, and computer program product for processing and visualization of information.
Segawa, Hiroyuki; Hiraki, Norikazu; Shioya, Hiroyuki; Abe, Yuichi, Three-dimensional model processing device, three-dimensional model processing method, program providing medium.
Albertson, Jacob C.; Arnold, Kenneth C.; Goldman, Steven D.; Paolini, Michael A.; Sessa, Anthony J., Tracking a range of body movement based on 3D captured image streams of a user.
Backlund, Erik Johan Vendel; Bengtsson, Henrik; Heringslack, Henrik; Sassi, Jari; Thörn, Ola Karl; Åberg, Peter, User interface with three dimensional user input.
Ellenby, John; Ellenby, Thomas; Ellenby, Peter, Vision system computer modeling apparatus including interaction with real scenes with respect to perspective and spatial relationship as measured in real-time.
Chirakan, Jason; Hanthorn, Douglas; Herring, Dean F.; Singh, Ankit, Systems and methods for implementing retail processes based on machine-readable images and user gestures.