Gesture-controlled interfaces for self-service machines and other applications
IPC Classification Information
Country / Type
United States (US) Patent
Granted
International Patent Classification (IPC, 7th edition)
G06K-009/00
G06F-003/033
Application Number
UP-0326540
(2008-12-02)
Registration Number
US-7668340
(2010-04-09)
Inventors / Address
Cohen, Charles J.
Beach, Glenn J.
Cavell, Brook
Foulk, Eugene
Jacobus, Charles J.
Obermark, Jay
Paul, George V.
Applicant / Address
Cybernet Systems Corporation
Agent / Address
Gifford, Krass, Sprinkle, Anderson & Citkowski, P.C.
Citation Information
Cited by: 335
Patents cited: 56
Abstract
A gesture recognition interface for use in controlling self-service machines and other devices is disclosed. A gesture is defined as motions and kinematic poses generated by humans, animals, or machines. Specific body features are tracked, and static and motion gestures are interpreted. Motion gestures are defined as a family of parametrically delimited oscillatory motions, modeled as a linear-in-parameters dynamic system with added geometric constraints to allow for real-time recognition using a small amount of memory and processing time. A linear least squares method is preferably used to determine the parameters which represent each gesture. Feature position measurement is used in conjunction with a bank of predictor bins seeded with the gesture parameters, and the system determines which bin best fits the observed motion. Recognizing static pose gestures is preferably performed by localizing the body/object from the rest of the image, describing that object, and identifying that description. The disclosure details methods for gesture recognition, as well as the overall architecture for using gesture recognition to control devices, including self-service machines.
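The predictor-bin scheme in the abstract can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: each dynamic gesture is assumed to follow a linear-in-parameters oscillator model (here x'' = a·x + b·x'), its parameters are fit offline by linear least squares, and at runtime the bin whose model best explains the observed motion (lowest residual) is selected. The model form, function names, and frequencies are illustrative assumptions.

```python
import numpy as np

def fit_gesture_params(x, dt):
    """Least-squares fit of x'' = a*x + b*x' from a sampled 1-D trajectory."""
    v = np.gradient(x, dt)          # numerical velocity estimate
    acc = np.gradient(v, dt)        # numerical acceleration estimate
    A = np.column_stack([x, v])     # regressors: linear in the parameters (a, b)
    theta, *_ = np.linalg.lstsq(A, acc, rcond=None)
    return theta                    # fitted (a, b) for this gesture's bin

def best_bin(trajectory, dt, bins):
    """Return the name of the gesture whose predictor bin has lowest residual."""
    v = np.gradient(trajectory, dt)
    acc = np.gradient(v, dt)
    A = np.column_stack([trajectory, v])
    residuals = {name: np.sum((A @ theta - acc) ** 2)
                 for name, theta in bins.items()}
    return min(residuals, key=residuals.get)

# Seed two bins with parameters fit to two oscillatory "gestures".
dt = 0.01
t = np.arange(0, 2, dt)
slow = np.cos(2 * np.pi * 1.0 * t)   # 1 Hz oscillation
fast = np.cos(2 * np.pi * 3.0 * t)   # 3 Hz oscillation
bins = {"slow": fit_gesture_params(slow, dt),
        "fast": fit_gesture_params(fast, dt)}

# An observed 2.9 Hz motion should fall in the "fast" bin.
print(best_bin(np.cos(2 * np.pi * 2.9 * t), dt, bins))  # → fast
```

Because the model is linear in its parameters, the fit is a single closed-form least-squares solve and the runtime comparison is a handful of multiply-adds per bin, consistent with the abstract's claim of small memory and processing requirements.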
Representative Claims
We claim:
1. A control method, comprising the steps of: storing, in a memory, information relating to a plurality of gestures, the stored information including a plurality of geometric templates associated with static gestures and a plurality of dynamic models associated with dynamic gestures; receiving information from an image sensor about the position, orientation or x-y movement of a gesture-making target; and providing at least one processor to perform the following operations: a) identify the gesture as a static gesture or no gesture if the x-y movement is below a threshold amount, or identify the gesture as a dynamic gesture if the x-y movement is above the threshold amount; b) compare the position or orientation information of the gesture-making target to the geometric templates to determine if a particular static gesture is being made or, if the gesture is identified as a dynamic gesture, compare the position, orientation or x-y movement information to the stored dynamic models to determine if a particular dynamic gesture is being made; and c) output a control signal if a particular static or dynamic gesture is determined to control a computer or machine.
2. The method of claim 1, wherein the target is a human hand.
3. The method of claim 1, further including the step of generating a bounding box around the target.
4. The method of claim 1, wherein the geometric template information relating to a plurality of static gestures includes edge information.
5. The method of claim 1, wherein at least a portion of the information received about the position, orientation or x-y movement of the gesture-making target is derived by imaging the target.
6. The method of claim 1, wherein at least a portion of the information received about the position, orientation or x-y movement of the gesture-making target is derived through an accelerometer.
7. The method of claim 1, wherein at least a portion of the information received about the position, orientation or x-y movement of the gesture-making target is derived through an inertial system.
8. The method of claim 1, wherein at least a portion of the information received about the position, orientation or x-y movement of the gesture-making target is derived through a radio frequency communication.
9. The method of claim 1, wherein at least a portion of the information received about the position, orientation or x-y movement of the gesture-making target is derived through acoustic tracking.
10. The method of claim 1, wherein at least a portion of the information received about the position, orientation or x-y movement of the gesture-making target is derived through a mechanical linkage.
11. The method of claim 1, including the step of: providing a plurality of predictor bins, each containing a dynamic system model with parameters preset to a specific dynamic gesture.
12. The method of claim 11, including separate predictor bins for movements along the x and y axes.
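The control flow of representative claim 1 can be sketched as follows: threshold the target's x-y movement to decide static vs. dynamic, match against the stored geometric templates or dynamic models, and emit a control signal on a match. The threshold value, distance-based template matcher, and function names are illustrative assumptions, not the patent's specification.

```python
import math

MOVEMENT_THRESHOLD = 5.0  # total x-y movement over the window (assumed units)

def classify_and_control(positions, static_templates, dynamic_models):
    """positions: list of (x, y) samples of the gesture-making target.
    static_templates: name -> stored (x, y) pose template.
    dynamic_models: name -> predicate over the trajectory."""
    # Accumulate x-y movement over the observation window.
    movement = sum(math.dist(p, q) for p, q in zip(positions, positions[1:]))
    if movement < MOVEMENT_THRESHOLD:
        # Step (a): static gesture or no gesture. Step (b): template comparison.
        pose = positions[-1]
        for name, template in static_templates.items():
            if math.dist(pose, template) < 1.0:   # assumed match tolerance
                return f"control:{name}"          # step (c): output control signal
        return None                               # below threshold, no match
    # Step (a): dynamic gesture. Step (b): compare against dynamic models.
    for name, model in dynamic_models.items():
        if model(positions):
            return f"control:{name}"              # step (c): output control signal
    return None

# Example: a nearly stationary hand close to the stored "stop" pose.
templates = {"stop": (10.0, 10.0)}
print(classify_and_control([(10.1, 10.0)] * 20, templates, {}))  # → control:stop
```

The dynamic branch is where claim 11's predictor bins would plug in: each `dynamic_models` entry would wrap one bin's dynamic system model, with separate bins per axis as in claim 12.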
Patents cited by this patent (56)
Cipolla Roberto (Cambridge GBX) Okamoto Yasukazu (Chiba-ken JPX) Kuno Yoshinori (Osaka-fu JPX), 3D human interface apparatus using motion recognition based on dynamic image processing.
Kramer James P. (Stanford CA) Lindener Peter (E. Palo Alto CA) George William R. (Palo Alto CA), Communication system for deaf, deaf-blind, or non-vocal individuals using instrumented glove.
Steffens Johannes Bernhard ; Elagin Egor Valerievich ; Nocera Luciano Pasquale Agostino ; Maurer Thomas ; Neven Hartmut, Face recognition from video images.
Beernink Ernest H. (San Carlos CA) Foster Gregg S. (Woodside CA) Capps Stephen P. (San Carlos CA), Gesture sensitive buttons for graphical user interfaces.
Tannenbaum Alan R. (Washington Grove MD) Zetts John M. (Falls Church VA) An Yu L. (Vienna VA) Arbeitman Gordon W. (Gaithersburg MD) Greanias Evon C. (Boca Raton FL) Verrier Guy F. (Reston VA), Graphical user interface with gesture recognition in a multiapplication environment.
Foxlin Eric M. (Cambridge MA), Inertial orientation tracker apparatus having automatic drift compensation for tracking human head and other similarly s.
Kuzunuki Soshiro,JPX ; Arai Toshifumi,JPX ; Kitamura Tadaaki,JPX ; Shojima Hiroshi,JPX, Interactive information processing system responsive to user manipulation of physical objects and displayed images.
Kuzunuki Soshiro,JPX ; Arai Toshifumi,JPX ; Kitamura Tadaaki,JPX ; Shojima Hiroshi,JPX, Interactive information processing system responsive to user manipulation of physical objects and displayed images.
Favot Jean-Jacques (Martignas en Jalles FRX) Perbet Jean-Noel (Eysines FRX) Barbier Bruno (Bouscat FRX) Lach Patrick (Bordeaux FRX), Management method for a man-machine interaction system.
Wilcox Lynn D. ; Chiu Patrick ; Golovchinsky Gene ; Schilit William N. ; Sullivan Joseph W., Method and apparatus for dynamically grouping a plurality of graphic objects.
Numazaki Shunichi,JPX ; Doi Miwako,JPX ; Morishita Akira,JPX ; Umeki Naoko,JPX ; Miura Hiroki,JPX, Method and apparatus for generating information input using reflected light image of target object.
Redmann William G. (Simi Valley CA) Watson Scott F. (Glendale CA), Method and apparatus for providing animation in a three-dimensional computer generated virtual world using a succession.
Maes Pattie E. (Somerville MA) Blumberg Bruce M. (Pepperell MA) Darrell Trevor J. (Cambridge MA) Starner Thad E. (Somerville MA) Johnson Michael P. (Cambridge MA) Russell Kenneth B. (Boston MA) Pentl, Method and system for facilitating wireless, full-body, real-time user interaction with a digitally represented visual e.
Rosenberg Louis B. ; Brave Scott B., Method for providing force feedback to a user of an interface device based on interactions of a controlled cursor with graphical elements in a graphical user interface.
Paul,George V.; Beach,Glenn J.; Cohen,Charles J.; Jacobus,Charles J., Tracking and gesture recognition system particularly suited to vehicular control applications.
Small Ian S. ; Chen Michael ; Zarakov Eric L. ; Mander Richard L. ; Vertelney Laurie J. ; Mander Amanda R. ; Arent Michael A. ; Faris James P. ; Tycz Jeffrey E. ; Knapp Lewis C., User interface system having programmable user interface elements.
Balan, Alexandru; Siddiqui, Matheen; Geiss, Ryan M.; Kipman, Alex Aben-Athar; Williams, Oliver Michael Christian; Shotton, Jamie, Classification of posture states.
Shotton, Jamie Daniel Joseph; Fitzgibbon, Andrew William; Taylor, Jonathan James; Cook, Matthew Darius, Computing pose and/or shape of modifiable entities.
Hulten, Geoffrey J; Mendhro, Umaimah A.; Krum, Kyle J.; Conrad, Michael J.; Remington, Darren B., Controlling a media program based on a media reaction.
Willoughby, Christopher Harley; Evertt, Jeffrey Jesus; Clark, Justin Avram; Hindle, Ben John; Sarrett, Peter Glenn; Deaguero, Joel, Controlling objects in a virtual environment.
Conrad, Michael J.; Hulten, Geoffrey J.; Krum, Kyle J.; Mendhro, Umaimah A.; Remington, Darren B., Determining a future portion of a currently presented media program.
Conrad, Michael J.; Hulten, Geoffrey J; Krum, Kyle J.; Mendhro, Umaimah A.; Remington, Darren B., Determining a future portion of a currently presented media program.
Conrad, Michael J.; Hulten, Geoffrey J.; Krum, Kyle J.; Mendhro, Umaimah A.; Remington, Darren B., Determining audience state or interest using passive sensor data.
Conrad, Michael J.; Hulten, Geoffrey J; Krum, Kyle J.; Mendhro, Umaimah A.; Remington, Darren B., Determining audience state or interest using passive sensor data.
Polzin, R. Stephen; Kipman, Alex A.; Finocchio, Mark J.; Geiss, Ryan Michael; Perez, Kathryn Stone; Tsunoda, Kudo; Bennett, Darren Alexander, Device for identifying and tracking multiple humans over time.
Polzin, R. Stephen; Kipman, Alex A.; Finocchio, Mark J.; Geiss, Ryan Michael; Perez, Kathryn Stone; Tsunoda, Kudo; Bennett, Darren Alexander, Device for identifying and tracking multiple humans over time.
Polzin, R. Stephen; Kipman, Alex A.; Finocchio, Mark J.; Geiss, Ryan Michael; Perez, Kathryn Stone; Tsunoda, Kudo; Bennett, Darren Alexander, Device for identifying and tracking multiple humans over time.
Lim, Stephen; Sisler, Christine J.; Badimon, Mathieu R.; D'Amore, Massimo F.; Durham, Mikel; Flohr, Camiel; Vail, John W., Dispensing system and user interface.
Lim, Stephen; Sisler, Christine J.; Badimon, Mathieu R.; d'Amore, Massimo F.; Durham, Mikel; Flohr, Camiel; Vail, John W., Dispensing system and user interface.
Wäller, Christoph; Bachfischer, Katharina; Bendewald, Lennart; Wengelnik, Heino; Heimermann, Matthias; Dalchow, Jan-Lars, Display and control device for a motor vehicle and method for operating the same.
Latta, Stephen; Bennett, Darren; Geisner, Kevin; Markovic, Relja; Tsunoda, Kudo; Snook, Greg; Willoughby, Christopher H.; Sarrett, Peter; Osborn, Daniel Lee, First person shooter control with virtual skeleton.
Evertt, Jeffrey Jesus; Clark, Justin Avram; Middleton, Zachary Tyler; Puls, Matthew J; Mihelich, Mark Thomas; Osborn, Dan; Campbell, Andrew R; Martin, Charles Everett; Hill, David M, Generation of avatar reflecting player appearance.
Kim, David; Hilliges, Otmar D.; Izadi, Shahram; Olivier, Patrick L.; Shotton, Jamie Daniel Joseph; Kohli, Pushmeet; Molyneaux, David G.; Hodges, Stephen E.; Fitzgibbon, Andrew W., Gesture recognition techniques.
Kim, David; Hilliges, Otmar D.; Izadi, Shahram; Olivier, Patrick L.; Shotton, Jamie Daniel Joseph; Kohli, Pushmeet; Molyneaux, David G.; Hodges, Stephen E.; Fitzgibbon, Andrew W., Gesture recognition techniques.
Shotton, Jamie Daniel Joseph; Izadi, Shahram; Hilliges, Otmar; Kim, David; Molyneaux, David Geoffrey; Cook, Matthew Darius; Kohli, Pushmeet; Criminisi, Antonio; Girshick, Ross Brook; Fitzgibbon, Andrew William, Human body pose estimation.
Geisner, Kevin; Markovic, Relja; Latta, Stephen G.; Mihelich, Mark T.; Willoughby, Christopher; Steed, Jonathan T.; Bennett, Darren; Wright, Shawn C.; Coohill, Matt, Interacting with a computer based application.
Markovic, Relja; Snook, Gregory N.; Latta, Stephen; Geisner, Kevin; Lee, Johnny; Langridge, Adam Jethro, Method to control perspective for a camera-controlled computer.
Markovic, Relja; Snook, Gregory N.; Latta, Stephen; Geisner, Kevin; Lee, Johnny; Langridge, Adam Jethro, Method to control perspective for a camera-controlled computer.
Markovic, Relja; Snook, Gregory N.; Latta, Stephen; Geisner, Kevin; Lee, Johnny; Langridge, Adam Jethro, Method to control perspective for a camera-controlled computer.
Lee, Johnny Chung; Leyvand, Tommer; Stachniak, Szymon Piotr; Peeper, Craig; Liu, Shao, Methods and systems for determining and tracking extremities of a target.
Lee, Johnny Chung; Leyvand, Tommer; Stachniak, Szymon Piotr; Peeper, Craig; Liu, Shao, Methods and systems for determining and tracking extremities of a target.
Newcombe, Richard; Izadi, Shahram; Molyneaux, David; Hilliges, Otmar; Kim, David; Shotton, Jamie Daniel Joseph; Kohli, Pushmeet; Fitzgibbon, Andrew; Hodges, Stephen Edward; Butler, David Alexander, Real-time camera tracking using depth maps.
Newcombe, Richard; Izadi, Shahram; Molyneaux, David; Hilliges, Otmar; Kim, David; Shotton, Jamie Daniel Joseph; Kohli, Pushmeet; Fitzgibbon, Andrew; Hodges, Stephen Edward; Butler, David Alexander, Real-time camera tracking using depth maps.
Markovic, Relja; Latta, Stephen G; Geisner, Kevin A; Steed, Jonathan T; Bennett, Darren A; Vance, Amos D, Recognizing user intent in motion capture system.
Bamji, Cyrus; Mehta, Swati, System architecture design for time-of-flight system having reduced differential pixel size, and time-of-flight systems so designed.
Bamji, Cyrus; Mehta, Swati, System architecture design for time-of-flight system having reduced differential pixel size, and time-of-flight systems so designed.
Bamji, Cyrus; Mehta, Swati, System architecture design for time-of-flight system having reduced differential pixel size, and time-of-flight systems so designed.
Kipman, Alex A.; Perez, Kathryn Stone; Finocchio, Mark J.; Geiss, Ryan Michael; Tsunoda, Kudo, Systems and methods for estimating a non-visible or occluded body part.
Mathe, Zsolt; Marais, Charles Claudius; Peeper, Craig; Bertolami, Joe; Geiss, Ryan Michael, Systems and methods for processing an image for target tracking.
Reid, C. Shane; Ashmore, Chad; Gotschall, Robert H.; Bures, Martin, Systems and methods for three-dimensional interaction monitoring in an EMS environment.
Markovic, Relja; Latta, Stephen G.; Geisner, Kevin A.; Hill, David; Bennett, Darren A.; Haley, Jr., David C.; Murphy, Brian S.; Wright, Shawn C., Tracking groups of users in motion capture system.
Santos, Oscar Omar Garza; Haigh, Matthew; Vuchetich, Christopher; Hindle, Ben; Bennett, Darren A., Translating user motion into multiple object responses.
Krum, Kyle James; Conrad, Michael John; Hulten, Geoffrey John; Mendhro, Umaimah A., User interface presenting an animated avatar performing a media reaction.
Pulsipher, Jon D.; Mohadjer, Parham; ElDirghami, Nazeeh Amin; Liu, Shao; Cook, Patrick Orville; Foster, James Chadon; Forbes, Jr., Ronald Omega; Stachniak, Szymon P.; Leyvand, Tommer; Bertolami, Joseph; Janney, Michael Taylor; Huynh, Kien Toan; Marais, Charles Claudius; Perreault, Spencer Dean; Fitzgerald, Robert John; Bisson, Wayne Richard; Peeper, Craig Carroll, Validation analysis of human target.