| Country / Type | United States (US) Patent (Granted) |
|---|---|
| IPC (7th edition) | |
| Application No. | US-0371460 (1999-08-10) |
| Inventor / Address | |
| Applicant / Address | |
| Agent / Address | |
| Citation Info | Cited by: 584 · Cites: 44 patents |
A gesture recognition interface for use in controlling self-service machines and other devices is disclosed. A gesture is defined as motions and kinematic poses generated by humans, animals, or machines. Specific body features are tracked, and static and motion gestures are interpreted. Motion gestures are defined as a family of parametrically delimited oscillatory motions, modeled as a linear-in-parameters dynamic system with added geometric constraints to allow for real-time recognition using a small amount of memory and processing time. A linear least squares method is preferably used to determine the parameters which represent each gesture. Feature position measure is used in conjunction with a bank of predictor bins seeded with the gesture parameters, and the system determines which bin best fits the observed motion. Recognizing static pose gestures is preferably performed by localizing the body/object from the rest of the image, describing that object, and identifying that description. The disclosure details methods for gesture recognition, as well as the overall architecture for using gesture recognition to control devices, including self-service machines.
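The predictor-bin scheme in the abstract can be illustrated with a minimal sketch. Here the assumed model is the simplest linear-in-parameters oscillator, ẍ = θ·x (θ = −ω²); each bin holds the θ fitted offline by linear least squares for one gesture, and the bin whose one-step prediction best matches the observed motion wins. The function names, the Euler predictor, and the two-gesture lexicon are illustrative assumptions, not the patent's actual implementation.

```python
import numpy as np

def fit_theta(positions, dt):
    """Linear least-squares fit of theta in x_ddot = theta * x from a position trace."""
    vel = np.gradient(positions, dt)
    acc = np.gradient(vel, dt)
    # Minimize ||acc - theta * positions||^2 (linear in the parameter theta)
    theta, *_ = np.linalg.lstsq(positions.reshape(-1, 1), acc, rcond=None)
    return theta[0]

def recognize(positions, dt, bins):
    """Return the name of the predictor bin that best fits the observed motion."""
    vel = np.gradient(positions, dt)
    residuals = {}
    for name, theta in bins.items():
        # One-step Euler prediction of velocity under this bin's dynamics
        pred_vel = vel[:-1] + theta * positions[:-1] * dt
        residuals[name] = np.sum((vel[1:] - pred_vel) ** 2)
    return min(residuals, key=residuals.get)

# Two synthetic gestures distinguished only by oscillation frequency
dt = 0.01
t = np.arange(0.0, 2.0, dt)
slow = np.sin(2 * np.pi * 1.0 * t)   # 1 Hz gesture
fast = np.sin(2 * np.pi * 3.0 * t)   # 3 Hz gesture

bins = {"slow_gesture": fit_theta(slow, dt), "fast_gesture": fit_theta(fast, dt)}
query = np.sin(2 * np.pi * 1.0 * t + 0.3)  # phase-shifted 1 Hz motion
print(recognize(query, dt, bins))
```

Because the model is linear in θ, seeding each bin needs only one least-squares solve, which is consistent with the abstract's claim of small memory and processing cost.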
1. A method of dynamic gesture recognition, comprising the steps of: storing a dynamic motion model composed of a set of differential equations, each differential equation describing a particular dynamic gesture to be recognized, of the form ẋ = f(x, θ), where x is a vector describing position and velocity components and θ is a tunable parameter; capturing the motion to be recognized along with the tunable parameters associated with a gesture-making target; extracting the position and velocity components of the captured motion; and identifying the dynamic gesture by determining which differential equation is solved using the extracted components and the tunable parameters.
2. The method of claim 1, wherein the target is a human hand, human head, full body, any body part, or any object in the motion capturing device's field of view.
3. The method of claim 2, further including the step of generating a bounding box around the object.
4. The method of claim 1, further including the step of using an operator to find the edges of the target.
5. The method of claim 1, further including the step of treating a dynamic gesture as one or more one- or multi-dimensional oscillations.
6. The method of claim 5, further including the step of creating a circular motion as a combination of repeating motions in one, two, or three dimensions having the same magnitude and frequency of oscillation.
7. The method of claim 5, further including the step of deriving complex dynamic gestures by varying phase and magnitude relationships.
8. The method of claim 5, further including the step of deriving a multi-gesture lexicon based upon clockwise and counter-clockwise large and small circles and one-dimensional lines.
9. The method of claim 5, further including the step of comparing the next position and velocity of each gesture to one or more predictor bins to determine a gesture's future position and velocity.
10. The method of claim 9, further including the use of a velocity damping model to discriminate among non-circular dynamic gestures.
11. The method of claim 5, further including the use of a dynamic system representation to discriminate among dynamic motion gestures.
12. A gesture-controlled interface for self-service machines and other applications, comprising: a sensor module for capturing and analyzing a gesture made by a human or machine, and outputting gesture descriptive data including position and velocity information associated with the gesture; an identification module operative to identify the gesture based upon sensor data output by the sensor module; and a transformation module operative to generate a command based upon the gesture identified by the identification module.
13. The interface of claim 12, further including a system response module operative to apply the command from the transformation module to the device or software program to be controlled.
14. The interface of claim 13, wherein the device is a virtual-reality simulator or game.
15. The interface of claim 13, wherein the device is a self-service machine.
16. The interface of claim 13, wherein the device forms part of a robot.
17. The interface of claim 13, wherein the device forms part of a commercial appliance.
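Claims 6 and 7 describe building a circular gesture from oscillations of equal magnitude and frequency, with phase and magnitude variations yielding the rest of the lexicon. A minimal sketch of that composition (the helper name and parameters are illustrative, not from the patent): two 1-D oscillations 90° out of phase trace a circle, which can be checked by confirming the radius is constant.

```python
import numpy as np

def oscillation(amplitude, freq_hz, phase, t):
    """A single one-dimensional oscillatory motion component."""
    return amplitude * np.sin(2 * np.pi * freq_hz * t + phase)

t = np.linspace(0.0, 1.0, 500)
x = oscillation(1.0, 2.0, 0.0, t)        # 2 Hz oscillation along x
y = oscillation(1.0, 2.0, np.pi / 2, t)  # same magnitude/frequency, 90 deg ahead

# For a true circle the radius is constant; changing the phase offset or the
# two amplitudes (claim 7) produces ellipses and, at 0 deg phase, a line.
radius = np.hypot(x, y)
print(round(float(radius.min()), 6), round(float(radius.max()), 6))
```

Varying the phase from π/2 toward 0 degenerates the circle into an ellipse and then a one-dimensional line, which is how the small/large circle and line lexicon of claim 8 can be parameterized.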
Copyright KISTI. All Rights Reserved.