Country / Type | United States (US) Patent, Granted
---|---
IPC (7th edition) |
Application No. | UP-0485790 (2006-07-13)
Registration No. | US-7701439 (2010-05-20)
Inventor / Address |
Applicant / Address |
Agent / Address |
Citation info | Times cited: 346; Patents cited: 11
A gesture recognition simulation system and method are provided. In one embodiment, a gesture recognition simulation system includes a three-dimensional display system that displays a three-dimensional image of at least one simulated object having at least one functional component. A gesture recognition interface system is configured to receive an input gesture associated with a sensorless input object from a user. The gesture recognition simulation system further comprises a simulation application controller configured to match a given input gesture with a predefined action associated with the at least one functional component. The simulation application controller could invoke the three-dimensional display system to display a simulated action on at least a portion of the at least one simulated object associated with an input gesture and predefined action match.
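The abstract's core loop (match a recognized gesture against a library of predefined actions, then invoke a simulated action on the matching functional component) can be sketched as follows. This is an illustrative outline only; the patent does not specify an implementation, and every class, field, and gesture name here is hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Optional, Tuple

# Hypothetical sketch of the simulation application controller described in
# the abstract: a gesture library maps recognized gesture names to a
# (functional component, predefined action) pair; on a match, the controller
# invokes the display of the corresponding simulated action.

@dataclass
class FunctionalComponent:
    name: str
    # predefined action name -> callable that renders the simulated action
    actions: Dict[str, Callable[[], str]] = field(default_factory=dict)

@dataclass
class SimulationController:
    # gesture library: gesture name -> (component name, predefined action name)
    gesture_library: Dict[str, Tuple[str, str]]
    components: Dict[str, FunctionalComponent]

    def handle_gesture(self, gesture: str) -> Optional[str]:
        """Match an input gesture; on success, display the simulated action."""
        match = self.gesture_library.get(gesture)
        if match is None:
            return None                          # unrecognized gesture: no action
        component_name, action_name = match
        component = self.components[component_name]
        return component.actions[action_name]()  # "invoke the display system"

# Usage: a simulated object with one removable panel as a functional component.
panel = FunctionalComponent(
    name="access_panel",
    actions={"remove": lambda: "animating removal of access_panel"},
)
controller = SimulationController(
    gesture_library={"grab_and_pull": ("access_panel", "remove")},
    components={"access_panel": panel},
)
print(controller.handle_gesture("grab_and_pull"))  # → animating removal of access_panel
```

A real controller would receive gesture classifications from the interface system's camera pipeline rather than string labels, but the lookup-and-dispatch structure would be the same.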
What is claimed is:

1. A gesture recognition simulation system comprising: a three-dimensional display system that displays a three-dimensional image of at least one simulated object, the three-dimensional image appearing to occupy three-dimensional space to a user and having at least one functional component that is a portion of the three-dimensional image of the at least one simulated object which is reactive to interaction by the user independent of remaining portions of the three-dimensional image of the at least one simulated object; a gesture recognition interface system configured to receive an input gesture associated with a sensorless input object from the user, the input gesture being determined by changes in at least one of a three-dimensional shape and a physical location of the sensorless input object relative to a physical location of the at least one functional component in three-dimensional space; and a simulation application controller configured to match a given input gesture with a predefined action associated with the at least one functional component, and invoke the three-dimensional display system to display a simulated action associated with the predefined action on at least a portion of the at least one simulated object associated with the at least one functional component.

2. The system of claim 1, wherein the gesture recognition interface system comprises: a plurality of light sources positioned to illuminate a background surface; and at least one camera configured to receive a first plurality of images based on a first reflected light contrast difference between the background surface and the sensorless input object caused by a first of the plurality of light sources and a second plurality of images based on a second reflected light contrast difference between the background surface and the sensorless input object caused by a second of the plurality of light sources, such that the three-dimensional shape and the physical location of the sensorless input object are determined based on a comparison of shape and location of corresponding images of the first plurality of images and the second plurality of images.

3. The system of claim 2, wherein the plurality of light sources are infrared (IR) light sources, such that the at least one camera comprises an IR filter.

4. The system of claim 1, further comprising a gesture library that stores a plurality of predefined gestures, the simulation application controller employing the gesture library to determine if a given input gesture matches a predefined action.

5. The system of claim 1, further comprising an object library, accessible by the simulation application controller, that stores data associated with a plurality of simulated objects including three-dimensional image information, information associated with at least one functional component of the respective simulated object, and at least one predefined action associated with the at least one functional component.

6. The system of claim 5, wherein the object library is further configured to store at least one predefined gesture for a given predefined action.

7. The system of claim 1, wherein the three-dimensional display system is further configured to display a three-dimensional image of at least one simulated tool that is reactive to a user's hand, such that the sensorless input object comprises the user's hand and the at least one simulated tool.

8. The system of claim 1, wherein the at least one sensorless input object comprises at least one of both hands of a user, at least one hand from multiple users, and at least one tool held by one or more users.

9. The system of claim 1, wherein the three-dimensional display system is a holograph projector, such that the three-dimensional image of the at least one simulated device is a holographic image.

10. The system of claim 1, wherein the input gesture comprises concurrent gestures associated with a plurality of users.

11. The system of claim 1, wherein the simulated action comprises one of removing, moving and assembling at least a portion of the at least one simulated object associated with the at least one functional component.

12. The system of claim 1, further comprising an output system configured to provide an additional output in response to the simulated action, the output being at least one of an audio signal, a video signal, and a control signal.

13. The system of claim 1, wherein the input gesture comprises pointing with a finger to define a one-dimensional ray in three-dimensional space, such that multiple pointed fingers define a plurality of one-dimensional rays in three-dimensional space that further define an input gesture based on a geometric shape in three-dimensional space.

14. A method of interacting with a simulated device, the method comprising: generating a three-dimensional image of at least one simulated object, the three-dimensional image appearing to occupy three-dimensional space to a user and having at least one functional component that is a portion of the three-dimensional image of the at least one simulated object which is reactive to interaction by a user independent of remaining portions of the three-dimensional image of the at least one simulated object; illuminating a background surface with a plurality of light sources; generating a first plurality of images associated with a sensorless input object based on a reflected light contrast between the sensorless input object and the illuminated background surface caused by one of the plurality of light sources; generating a second plurality of images associated with the sensorless input object based on a reflected light contrast between the sensorless input object and the illuminated background surface caused by another one of the plurality of light sources; determining changes in at least one of a three-dimensional shape and a physical location of the sensorless input object based on a comparison of corresponding images of the first and second plurality of images; determining an input gesture associated with the sensorless input object based on the changes in at least one of a three-dimensional shape and a physical location of the sensorless input object relative to a physical location of the at least one functional component in three-dimensional space; determining if the input gesture matches a predefined action associated with the at least one functional component; and displaying a simulated action associated with a matched predefined action on at least a portion of the at least one simulated object associated with the at least one functional component.

15. The method of claim 14, wherein illuminating the background surface comprises illuminating the background surface with a plurality of infrared (IR) light sources.

16. The method of claim 14, further comprising comparing the determined input gesture with a plurality of predefined gestures stored in a predefined gesture library, each predefined action of the at least one functional component having at least one associated gesture of the plurality of predefined gestures that initiates a simulated action on at least a portion of the at least one simulated object.

17. The method of claim 14, further comprising accessing an object library configured to store data associated with a plurality of simulated objects, the data for each of the plurality of simulated objects including three-dimensional image information, information associated with at least one functional component of the respective simulated object, and at least one predefined action associated with the at least one functional component.

18. The method of claim 14, further comprising generating a three-dimensional image of at least one simulated tool, the three-dimensional image of the at least one simulated tool being reactive to a user's hand, such that the simulated action is in response to gestures that are performed both with the user's hand and with the at least one simulated tool.

19. The method of claim 14, wherein determining the input gesture associated with the sensorless input object comprises determining an input gesture associated with at least one of both hands of a user, at least one hand from multiple users, and at least one tool held by one or more users.

20. The method of claim 14, wherein generating the three-dimensional image of the at least one simulated object comprises projecting a three-dimensional holographic image of the at least one simulated object.

21. The method of claim 14, wherein determining the input gesture associated with the sensorless input object comprises determining concurrent input gestures associated with a plurality of users.

22. The method of claim 14, further comprising activating at least one additional output in response to the simulated action, the at least one additional output being at least one of an audio signal, a video signal, and a control signal.

23. The method of claim 14, wherein the simulated action comprises one of removing, moving and assembling at least a portion of the at least one simulated object.

24. A gesture recognition simulation system comprising: means for displaying a three-dimensional image of at least one simulated object, the three-dimensional image appearing to occupy three-dimensional space to a user and having at least one functional component that is a portion of the three-dimensional image of the at least one simulated object which is reactive to interaction by a user independent of remaining portions of the three-dimensional image of the at least one simulated object; means for generating a first plurality of images associated with a sensorless input object based on a reflected light contrast between the sensorless input object and an illuminated background surface caused by a first light source; means for generating a second plurality of images associated with the sensorless input object based on a reflected light contrast between the sensorless input object and the illuminated background surface caused by a second light source; means for determining changes in at least one of a three-dimensional shape and a physical location of the sensorless input object based on a comparison of corresponding images of the first and second plurality of images; means for determining an input gesture associated with the sensorless input object based on the determined changes; means for matching the input gesture to a predefined action associated with the at least one functional component and a physical location of the input gesture relative to a physical location of the at least one functional component in three-dimensional space; and means for displaying a simulated action on at least a portion of the at least one simulated object, the simulated action being associated with the matching of a predefined action to an associated input gesture.

25. The system of claim 24, further comprising means for storing a plurality of predefined gestures, the means for matching employing the means for storing to match the input gesture to the predefined action.

26. The system of claim 24, further comprising means for storing data associated with a plurality of simulated objects, the data for each of the plurality of simulated objects including three-dimensional image information, information associated with at least one functional component of the respective simulated object, and at least one predefined action associated with each of the at least one functional component.

27. The system of claim 24, wherein the sensorless input object comprises at least one of both hands of a user, at least one hand from multiple users, and at least one tool held by one or more users.
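The imaging steps recited in claims 2 and 14 (segment the sensorless input object by its reflected-light contrast against the illuminated background, then compare the corresponding silhouettes produced under the two light sources to recover location) can be illustrated numerically. This is a simplified sketch under an assumed geometry: treating the offset between the two silhouettes as proportional to the object's height above the surface is my simplifying assumption, not a detail from the patent, and all function and parameter names are hypothetical.

```python
import numpy as np

def segment(image: np.ndarray, background: np.ndarray,
            threshold: float = 0.2) -> np.ndarray:
    """Binary mask of pixels whose brightness differs from the
    illuminated background by more than the contrast threshold."""
    return np.abs(image - background) > threshold

def centroid(mask: np.ndarray) -> np.ndarray:
    """Mean (row, col) of the mask's foreground pixels."""
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean(), xs.mean()])

def estimate_location(img_a: np.ndarray, img_b: np.ndarray,
                      background: np.ndarray,
                      baseline_px_per_unit: float = 10.0):
    """Planar position as the midpoint of the two silhouette centroids;
    height (in arbitrary units) from the shift between the silhouettes
    cast under the two light sources, scaled by an assumed baseline."""
    c_a = centroid(segment(img_a, background))
    c_b = centroid(segment(img_b, background))
    height = np.linalg.norm(c_a - c_b) / baseline_px_per_unit
    return (c_a + c_b) / 2.0, height

# Usage: a synthetic bright square against a dark background, shifted
# 5 pixels between the two lighting conditions.
background = np.zeros((20, 20))
img_a = background.copy(); img_a[5:9, 5:9] = 1.0
img_b = background.copy(); img_b[5:9, 10:14] = 1.0
position, height = estimate_location(img_a, img_b, background)
print(position, height)  # → [6.5 9. ] 0.5
```

A production pipeline would track these quantities frame-to-frame to detect the shape and location *changes* that claim 1 uses to define an input gesture; the per-frame contrast comparison above is the building block.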
Copyright KISTI. All Rights Reserved.