Overcoming motion effects in gesture recognition
Country / Type: United States (US) patent, granted
IPC (7th edition): G06F-003/041; G06F-003/048
Application number: US-0198008 (filed 2011-08-04)
Registration number: US-10088924 (granted 2018-10-02)
Inventor: Ivanchenko, Volodymyr V.
Applicant: Amazon Technologies, Inc.
Agent: Polsinelli LLP
Citations: cited by 0 patents; cites 30 patents
Abstract
A user can provide input to an electronic device by performing a specific motion or gesture that can be detected by the device. At least one imaging or detection element captures information including the motion or gesture, such that one or more dwell points can be determined in two or three dimensions of space. The dwell points can correspond to any point where the motion pauses for at least a minimum amount of time, such as at an endpoint or a point where the motion significantly changes or reverses direction. The set of dwell points, and the order in which those dwell points occur, can be compared against a set of gestures to attempt to match a gesture associated with a particular input. Such an approach is useful for devices with image capture elements or other components that are not able to accurately capture motion or determine movements, etc.
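The dwell-point pipeline the abstract describes can be sketched in code. This is a minimal illustration, not the patented implementation: the function names, the 2D coordinate model, and the thresholds (`min_dwell`, `radius`, `tolerance`) are all assumptions made for the sketch, and a real device would work on tracked feature positions extracted from camera frames.

```python
import math

def extract_dwell_points(samples, min_dwell=0.3, radius=0.05):
    """Find dwell points: positions where the tracked feature stays
    within `radius` of an anchor point for at least `min_dwell` seconds.
    `samples` is a time-ordered list of (t, x, y) tuples."""
    dwells = []
    i = 0
    while i < len(samples):
        t0, x0, y0 = samples[i]
        j = i
        # Extend the window while the feature stays near the anchor point.
        while j + 1 < len(samples):
            _, x, y = samples[j + 1]
            if math.hypot(x - x0, y - y0) > radius:
                break
            j += 1
        if samples[j][0] - t0 >= min_dwell:
            dwells.append((x0, y0))
            i = j + 1
        else:
            i += 1
    return dwells

def match_gesture(dwells, stored_gestures, tolerance=0.1):
    """Compare the ordered dwell points against each stored gesture
    (a dict of name -> ordered point list); return the first gesture
    whose points all match in order within `tolerance`, else None."""
    for name, points in stored_gestures.items():
        if len(points) != len(dwells):
            continue
        if all(math.hypot(dx - px, dy - py) <= tolerance
               for (dx, dy), (px, py) in zip(dwells, points)):
            return name
    return None
```

Because only the dwell points and their order are compared, the path taken between them does not need to be tracked accurately, which is the point the abstract makes about devices that cannot capture motion precisely.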
Representative Claims
1. A computer-implemented method of providing input to a computing device, comprising: detecting motion of a feature of a user within a period of time corresponding to the feature being free from physical contact with the computing device; capturing image information within the period of time using an image capture element of the computing device; analyzing a first portion of the image information to identify a first position of the feature in which the feature is substantially at the first position for at least a minimum amount of time; analyzing a second portion of the image information to identify a second position of the feature in which the feature moved from the first position to the second position in less than the minimum amount of time and in which the feature is substantially at the second position for at least the minimum amount of time; analyzing a third portion of the image information to identify a third position of the feature in which the feature moved from the second position to the third position in less than the minimum amount of time and in which the feature is substantially at the third position for at least the minimum amount of time; determining that the first position, the second position, and the third position match, within a minimum level of certainty, a first stored position, a second stored position, and a third stored position associated with first stored gesture information stored on the computing device; and performing an action on the computing device associated with the first stored gesture information.

2. The computer-implemented method of claim 1, further comprising: analyzing a fourth portion of the image information to identify a fourth position of the feature in which the feature moved from the third position to the fourth position in less than the minimum amount of time and in which the feature is substantially at the fourth position for a second minimum amount of time; and determining an end of the period of time.

3.
The computer-implemented method of claim 1, further comprising: prompting the user to perform a gesture to be associated with the action; capturing second image information within a second period of time corresponding to the user performing the gesture; analyzing a first portion of the second image information to identify the first stored position in which the feature is substantially at the first stored position for at least the minimum amount of time; analyzing a second portion of the second image information to identify the second stored position in which the feature moved from the first stored position to the second stored position in less than the minimum amount of time and in which the feature is substantially at the second stored position for at least the minimum amount of time; analyzing a third portion of the second image information to identify the third stored position in which the feature moved from the second stored position to the third stored position in less than the minimum amount of time and in which the feature is substantially at the third stored position for at least the minimum amount of time; and determining that the first stored position, the second stored position, and the third stored position do not match a fourth stored position, a fifth stored position, and a sixth stored position associated with previously stored gesture information; storing the first stored position, the second stored position, and the third stored position as the first stored gesture information; and storing an association of the first stored gesture information and the action.

4.
The computer-implemented method of claim 1, further comprising: prompting the user to perform a gesture to be associated with a second action; capturing second image information within a second period of time corresponding to the user performing the gesture; analyzing a first portion of the second image information to identify a fourth stored position in which the feature is substantially at the fourth stored position for at least the minimum amount of time; analyzing a second portion of the second image information to identify a fifth stored position in which the feature moved from the fourth stored position to the fifth stored position in less than the minimum amount of time and in which the feature is substantially at the fifth stored position for at least the minimum amount of time; analyzing a third portion of the second image information to identify a sixth stored position in which the feature moved from the fifth stored position to the sixth stored position in less than the minimum amount of time and in which the feature is substantially at the sixth stored position for at least the minimum amount of time; determining that the fourth stored position, the fifth stored position, and the sixth stored position match the first stored position, the second stored position, and the third stored position; and prompting the user to perform a different gesture to be associated with the second action.

5. The computer-implemented method of claim 1, wherein the first position corresponds to at least one of a start point, an endpoint, a transition point, or a point of reversal.

6.
A computer-implemented method, comprising: obtaining image information captured using at least one image capture element of a computing device; determining, from a first portion of the image information, that at least one object is substantially at a first position for at least a minimum period of time; determining, from a second portion of the image information, a second position of the at least one object, in which the at least one object moved from the first position to the second position in less than the minimum period of time and in which the at least one object is substantially at the second position for at least the minimum period of time; determining, from a third portion of the image information, a third position of the at least one object, in which the at least one object moved from the second position to the third position in less than the minimum period of time and in which the at least one object is substantially at the third position for at least the minimum period of time; and based at least in part on determining that the first position, the second position, and the third position correspond to a first stored position, a second stored position, and a third stored position associated with first stored gesture information, performing an action associated with the first stored gesture information.

7. The computer-implemented method of claim 6, further comprising: determining that a first ordering of the first position, the second position, and the third position corresponds to a second ordering of the first stored position, the second stored position, and the third stored position, wherein the action is performed further based at least in part on the first ordering corresponding to the second ordering.

8. The computer-implemented method of claim 6, further comprising: capturing the image information, wherein the image information includes ambient light image information and reflected infrared image information.

9.
The computer-implemented method of claim 8, further comprising: subtracting a weighted amount of the ambient light image information from the reflected infrared image information in order to substantially remove background information from the reflected infrared image information.

10. The computer-implemented method of claim 6, further comprising: performing at least one of image recognition, proximity detection, or intensity analysis using the first portion of the image information.

11. The computer-implemented method of claim 6, further comprising: storing the first stored position, the second stored position, the third stored position, and a fourth stored position in which the fourth stored position is substantially different relative to the first stored position, the second stored position, and the third stored position as second stored gesture information; or storing a different ordering of the first stored position, the second stored position, and the third stored position as the second stored gesture information.

12. The computer-implemented method of claim 11, further comprising: receiving action information for a specified action to be associated with the second stored gesture information; and associating the second stored gesture information with the specified action.

13. The computer-implemented method of claim 6, further comprising: determining the first position in two dimensions or three dimensions.

14.
The computer-implemented method of claim 6, further comprising: obtaining second image information captured using the at least one image capture element; determining, from a first portion of the second image information, that the at least one object is substantially at a fourth position for at least the minimum period of time; determining, from a second portion of the second image information, that the at least one object moved from the fourth position to a fifth position in less than the minimum period of time and that the at least one object is substantially at the fifth position for at least the minimum period of time; determining, from a third portion of the second image information, that the at least one object moved from the fifth position to a sixth position in less than the minimum period of time and that the at least one object is substantially at the sixth position for at least the minimum period of time; and based at least in part on determining that the fourth position, the fifth position, and the sixth position do not correspond to the first stored position, the second stored position, and the third stored position, prompting for a repeat of a gesture.

15. The computer-implemented method of claim 6, wherein the at least one object includes at least one of a hand, a finger, an eye, an elbow, an arm, or a held object.

16. The computer-implemented method of claim 6, further comprising: activating at least one illumination element at a time of capture of the image information by the at least one image capture element.

17. The computer-implemented method of claim 6, further comprising: deactivating a gesture input mode if no gesture is detected within a specified period of inactivity.

18.
A computing device, comprising: a processor; at least one image capture element; and a memory device including instructions that, when executed by the processor, cause the computing device to: obtain image information captured using the at least one image capture element; determine, from a first portion of the image information, that at least one object is substantially at a first position for at least a minimum period of time; determine, from a second portion of the image information, a second position of the at least one object, in which the at least one object moved from the first position to the second position in less than the minimum period of time and in which the at least one object is substantially at the second position for at least the minimum period of time; determine, from a third portion of the image information, a third position of the at least one object, in which the at least one object moved from the second position to the third position in less than the minimum period of time and in which the at least one object is substantially at the third position for at least the minimum period of time; and based at least in part on a determination that the first position, the second position, and the third position correspond to a first stored position, a second stored position, and a third stored position associated with first stored gesture information, perform an action associated with the first stored gesture information.

19. The computing device of claim 18, further comprising: at least one source of illumination, wherein the instructions when executed further cause the computing device to provide, using the at least one source of illumination, at least one of white light or infrared radiation within a period of time when the image information is captured.

20.
The computing device of claim 18, further comprising: a rolling data buffer, wherein the instructions when executed further cause the computing device to: store to the rolling data buffer the first portion of the image information; and overwrite the first portion of the image information in the rolling data buffer with the second portion of the image information.

21. The computing device of claim 18, wherein the instructions when executed further cause the computing device to: determine that a first ordering of the first position, the second position, and the third position corresponds to a second ordering of the first stored position, the second stored position, and the third stored position, wherein the action is performed further based at least in part on the first ordering corresponding to the second ordering.

22. A non-transitory computer-readable storage medium storing instructions that, when executed by a processor of a computing device, cause the computing device to: obtain image information captured using at least one image capture element of the computing device; determine, from a first portion of the image information, that at least one object is substantially at a first position for at least a minimum period of time; determine, from a second portion of the image information, a second position of the at least one object, in which the at least one object moved from the first position to the second position in less than the minimum period of time and in which the at least one object is substantially at the second position for at least the minimum period of time; determine, from a third portion of the image information, a third position of the at least one object, in which the at least one object moved from the second position to the third position in less than the minimum period of time and in which the at least one object is substantially at the third position for at least the minimum period of time; and based at least in part on a determination that the first position, the second position, and the third position correspond to a first stored position, a second stored position, and a third stored position associated with first stored gesture information, perform an action associated with the first stored gesture information.

23. The non-transitory computer-readable storage medium of claim 22, wherein the instructions when executed further cause the computing device to: determine that a first ordering of the first position, the second position, and the third position corresponds to a second ordering of the first stored position, the second stored position, and the third stored position, wherein the action is performed further based at least in part on the first ordering corresponding to the second ordering.

24. The non-transitory computer-readable storage medium of claim 22, wherein the instructions when executed further cause the processor to: subtract a weighted amount of ambient light image information included in the image information from reflected infrared image information included in the image information in order to substantially remove background information from the reflected infrared image information.

25. The non-transitory computer-readable storage medium of claim 22, wherein the instructions when executed further cause the computing device to: determine the first position in two dimensions or three dimensions.
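Claims 9 and 24 describe a concrete per-pixel operation: subtracting a weighted amount of an ambient-light frame from a reflected-infrared frame so that only IR-lit foreground features (such as the user's hand) remain. A minimal sketch of that arithmetic, with the function name, list-of-lists frame representation, and default weight all assumed for illustration:

```python
def remove_background(ir_frame, ambient_frame, weight=0.8):
    """Per-pixel weighted subtraction: ir - weight * ambient, floored at 0.
    Both frames are equal-sized 2D lists of intensity values; pixels that
    appear in ambient light (background) are suppressed, while pixels lit
    mainly by the device's IR emitter (foreground) survive."""
    return [[max(ir - weight * amb, 0.0)
             for ir, amb in zip(ir_row, amb_row)]
            for ir_row, amb_row in zip(ir_frame, ambient_frame)]
```

The weight controls how aggressively the background is suppressed; a production system would pick it based on the relative exposure of the two captures rather than using a fixed constant.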
Patents cited by this patent (30)
Pryor,Timothy R., Camera based man machine interfaces.
Maes Pattie E. (Somerville MA) Blumberg Bruce M. (Pepperell MA) Darrell Trevor J. (Cambridge MA) Starner Thad E. (Somerville MA) Johnson Michael P. (Cambridge MA) Russell Kenneth B. (Boston MA) Pentl, Method and system for facilitating wireless, full-body, real-time user interaction with a digitally represented visual e.
Okuda, Nobuya; Kobayashi, Tatsuya; Fujimoto, Hirofumi; Matsuyama, Shigenobu, Method for controlling movement of viewing point of simulated camera in 3D video game, and 3D video game machine.
McCloud Seth R. (16201 Jordan Rd. Arlington WA 98223), Method of communication using pointing vector gestures and mnemonic devices to assist in learning point vector gestures.
Dehlin, Joel P.; Chen, Christina Summer; Wilson, Andrew D.; Robbins, Daniel C.; Horvitz, Eric J.; Hinckley, Kenneth P.; Wobbrock, Jacob O., Recognizing gestures and using gestures for interacting with software applications.