A user interface method includes defining an interaction surface containing an interaction region in space. A sequence of depth maps is captured over time of at least a part of a body of a human subject. The depth maps are processed in order to detect a direction and speed of movement of the part of the body as the part of the body passes through the interaction surface. A computer application is controlled responsively to the detected direction and speed.
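As a rough illustration of the processing step described in the abstract, the sketch below estimates the direction and speed of a tracked body part from two consecutive depth maps and flags when the part passes through the interaction surface. The camera intrinsics (FX, FY, CX, CY), the planar surface depth Z_SURFACE, and the nearest-point stand-in for body-part detection are illustrative assumptions, not the patented implementation.

import numpy as np

FX, FY, CX, CY = 570.0, 570.0, 320.0, 240.0   # hypothetical depth-camera intrinsics
Z_SURFACE = 0.60                               # hypothetical interaction-surface depth (meters)

def to_xyz(u, v, z):
    """Back-project a pixel (u, v) with depth z (meters) to camera coordinates."""
    return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])

def nearest_point(depth):
    """Crude stand-in for body-part detection: the closest valid depth pixel."""
    masked = np.where(depth > 0, depth, np.inf)
    v, u = np.unravel_index(np.argmin(masked), masked.shape)
    return to_xyz(u, v, masked[v, u])

def motion_between(depth_prev, depth_curr, dt):
    """Direction (unit vector), speed (m/s), and surface-crossing flag between two frames."""
    p0, p1 = nearest_point(depth_prev), nearest_point(depth_curr)
    delta = p1 - p0
    speed = np.linalg.norm(delta) / dt
    direction = delta / np.linalg.norm(delta) if speed > 0 else np.zeros(3)
    crossed = (p0[2] > Z_SURFACE) and (p1[2] <= Z_SURFACE)   # part passed through the surface
    return direction, speed, crossed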
Representative Claims
1. A user interface method, comprising: displaying an object on a display screen; defining an interaction surface containing an interaction region in space, and mapping the interaction surface to the display screen; capturing a sequence of depth maps over time of at least a part of a body of a human subject; processing the depth maps in order to detect a direction and speed of movement of the part of the body and to predict a touch point of the part of the body, responsively to the movement, wherein the touch point indicates a location in the interaction surface where the part of the body penetrates the interaction surface; controlling a computer application so as to change the displayed object on the screen responsively to the mapping and to the predicted touch point; and wherein processing the depth maps comprises identifying, responsively to the detected movement, a collision induced by the movement with a predefined three-dimensional shape in space.

2. The method according to claim 1, and comprising: defining a visualization surface containing a visualization region in the space, such that the interaction surface is within the visualization region; and processing the depth maps in order to identify the part of the body that is located within the visualization region, wherein displaying the object comprises presenting on the display screen a representation of the part of the body that is located within the visualization region.

3. The method according to claim 1, wherein defining the interaction surface comprises specifying dimensions of the interaction surface, wherein the interaction surface is mapped to the display screen responsively to the specified dimensions.

4. The method according to claim 1, wherein processing the depth maps comprises applying a three-dimensional connected component analysis to the depth maps in order to identify the part of the body.

5. The method according to claim 1, wherein processing the depth maps comprises identifying, responsively to the detected movement, a gesture made by the human subject.

6. The method according to claim 5, wherein identifying the gesture comprises learning the gesture during a training phase, and thereafter detecting the learned gesture in order to control the computer application.

7. The method according to claim 1, wherein processing the depth maps comprises identifying a posture of at least the part of the body, and controlling the computer application responsively to the posture.

8. User interface apparatus, comprising: a display screen, which is configured to display an object; a sensing device, which is configured to capture a sequence of depth maps over time of at least a part of a body of a human subject; and a processor, which is configured to define an interaction surface, which contains an interaction region in space, to map the interaction surface to the display screen, to process the depth maps in order to detect a direction and speed of movement of the part of the body and to predict a touch point of the part of the body, responsively to the movement, wherein the touch point indicates a location in the interaction surface where the part of the body penetrates the interaction surface, and to control a computer application so as to change the displayed object on the screen responsively to the mapping and to the predicted touch point; and wherein processing the depth maps comprises identifying, responsively to the detected movement, a collision induced by the movement with a predefined three-dimensional shape in space.

9. The apparatus according to claim 8, wherein the processor is configured to process the depth maps in order to identify the part of the body that is located within a visualization region contained within a predefined visualization surface, such that the interaction surface is within the visualization region, and to present on the display screen a representation of the part of the body that is located within the visualization region.

10. The apparatus according to claim 8, wherein the processor is configured to accept a specification of dimensions of the interaction surface, and to map the interaction surface to the display screen responsively to the dimensions.

11. The apparatus according to claim 8, wherein the processor is configured to apply a three-dimensional connected component analysis to the depth maps in order to identify the part of the body.

12. The apparatus according to claim 8, wherein the processor is configured to identify, responsively to the detected movement, a gesture made by the human subject.

13. The apparatus according to claim 12, wherein the processor is configured to learn the gesture during a training phase, and thereafter to detect the learned gesture in order to control the computer application.

14. A computer software product, comprising a computer-readable medium in which program instructions are stored, which instructions, when read by a computer, cause the computer to display an object on a display screen, to define an interaction surface, which contains an interaction region in space, and to map the interaction surface to the display screen, to process a sequence of depth maps created over time of at least a part of a body of a human subject in order to detect a direction and speed of movement of the part of the body and to predict a touch point of the part of the body, responsively to the movement, wherein the touch point indicates a location in the interaction surface where the part of the body penetrates the interaction surface, and to control a computer application so as to change the displayed object on the screen responsively to the mapping and to the predicted touch point, wherein processing the depth maps comprises identifying, responsively to the detected movement, a collision induced by the movement with a predefined three-dimensional shape in space.

15. The method according to claim 1, wherein defining the interaction surface comprises receiving an input from a user of the computer application, and defining the interaction surface responsively to the input.

16. A user interface method, comprising: displaying an object on a display screen; defining, responsively to an input received from a user of a computer application, an interaction surface containing an interaction region in space for the computer application while specifying, based on the input received from the user, dimensions in space of the interaction region that correspond to an area of the display screen; capturing a sequence of depth maps over time of at least a part of a body of a human subject; processing the depth maps in order to detect a movement of the part of the body as the part of the body passes through the interaction surface; controlling the computer application so as to change the displayed object on the screen responsively to the movement of the part of the body within the specified dimensions of the interaction region; and wherein processing the depth maps comprises identifying, responsively to the detected movement, a collision induced by the movement with a predefined three-dimensional shape in space.

17. The method according to claim 16, wherein the input received from the user specifies a depth dimension of the interaction surface.

18. The method according to claim 17, wherein the input received from the user also specifies transverse dimensions of the interaction surface.

19. The method according to claim 16, wherein specifying the dimensions in space comprises defining a zoom factor that maps transverse dimensions of the interaction surface to corresponding dimensions of the computer display screen.
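Claims 1 and 16-19 recite mapping the interaction surface to the display screen and predicting a touch point responsively to the detected movement. The sketch below (continuing the hypothetical setup of the earlier sketch, including Z_SURFACE) linearly extrapolates the tracked point along its velocity to the interaction-surface plane and converts the intersection to screen pixels with a zoom factor derived from the surface's transverse dimensions; the function names and parameters are illustrative assumptions, not the claimed method.

def predict_touch_point(p, velocity):
    """Extrapolate the tracked point along its velocity to the interaction-surface plane."""
    vz = velocity[2]
    if vz >= 0:                       # not moving toward the surface
        return None
    t = (Z_SURFACE - p[2]) / vz       # time until the plane z = Z_SURFACE is reached
    return p + velocity * t

def to_screen(touch, surface_origin, surface_width, surface_height,
              screen_width_px, screen_height_px):
    """Map a point on the interaction surface to display-screen pixels via a zoom factor."""
    zoom_x = screen_width_px / surface_width
    zoom_y = screen_height_px / surface_height
    x = (touch[0] - surface_origin[0]) * zoom_x
    y = (touch[1] - surface_origin[1]) * zoom_y
    return int(round(x)), int(round(y))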
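Claims 4 and 11 recite a three-dimensional connected component analysis for identifying the part of the body. One common way to realize such an analysis, shown here as an assumption rather than the patented algorithm, is to voxelize the depth points into a boolean occupancy grid and label 6-connected components with scipy.ndimage, keeping the largest component as a proxy for the tracked body part.

import numpy as np
from scipy import ndimage

def largest_component(occupancy):
    """Label 6-connected voxels in a boolean 3-D occupancy grid and return a mask
    of the largest connected component (a crude proxy for the tracked body part)."""
    structure = ndimage.generate_binary_structure(3, 1)   # 6-connectivity
    labels, count = ndimage.label(occupancy, structure=structure)
    if count == 0:
        return np.zeros_like(occupancy, dtype=bool)
    sizes = ndimage.sum(occupancy, labels, index=range(1, count + 1))
    return labels == (int(np.argmax(sizes)) + 1)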
Cited References
Cohen, Charles J.; Beach, Glenn; Cavell, Brook; Foulk, Gene; Jacobus, Charles J.; Obermark, Jay; Paul, George, Gesture-controlled interfaces for self-service machines and other applications.
Honda, Tadashi, Handwriting information processing apparatus, handwriting information processing method, and storage medium having program stored therein for handwriting information processing.
Rushmeier, Holly E.; Bernardini, Fausto, Method and apparatus for acquiring a set of consistent image maps to represent the color of the surface of an object.
Bang, Won chul; Kim, Dong yoon; Chang, Wook; Kang, Kyoung ho; Choi, Eun seok, Spatial motion recognition system and method using a virtual handwriting plane.
Albertson, Jacob C.; Arnold, Kenneth C.; Goldman, Steven D.; Paolini, Michael A.; Sessa, Anthony J., Tracking a range of body movement based on 3D captured image streams of a user.
Ellenby, John; Ellenby, Thomas; Ellenby, Peter, Vision system computer modeling apparatus including interaction with real scenes with respect to perspective and spatial relationship as measured in real-time.
Galor, Micha; Pokrass, Jonathan; Hoffnung, Amir; Or, Ofir, Combining explicit select gestures and timeclick in a non-tactile three dimensional user interface.
Scott, Steven J.; Nguyen, Thong T.; Vasko, David A.; Gasperi, Michael; Brandt, David D., Recognition-based industrial automation control with confidence-based decision support.
Scott, Steven J.; Nguyen, Thong T.; Brandt, David D.; Gibart, Tony; Dotson, Gary D., Recognition-based industrial automation control with person and object discrimination.
Scott, Steven J.; Nguyen, Thong T.; Vasko, David A.; Wishart, Marco; Tidwell, Travis, Recognition-based industrial automation control with position and derivative decision reference.
Scott, Steven J.; Nguyen, Thong T.; Nair, Suresh; Roback, Timothy; Brandt, David D., Recognition-based industrial automation control with redundant system input support.
Osterhout, Ralph F.; Lohse, Robert; Nortrup, Edward H.; Border, John N.; Haddick, John D.; Shams, Nima L.; Sanchez, Manuel Antonio, Spatial location presentation in head worn computing.