IPC Classification Information
Country / Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.):
Application No.: US-0106091 (filed 2008-04-18)
Registration No.: US-8456419 (granted 2013-06-04)
Inventor / Address:
Applicant / Address:
Agent / Address:
Citation information: times cited: 9; patents cited: 299
Abstract
The present invention is directed toward a system and process that controls a group of networked electronic components using a multimodal integration scheme in which inputs from a speech recognition subsystem, gesture recognition subsystem employing a wireless pointing device and pointing analysis subsystem also employing the pointing device, are combined to determine what component a user wants to control and what control action is desired. In this multimodal integration scheme, the desired action concerning an electronic component is decomposed into a command and a referent pair. The referent can be identified using the pointing device to identify the component by pointing at the component or an object associated with it, by using speech recognition, or both. The command may be specified by pressing a button on the pointing device, by a gesture performed with the pointing device, by a speech recognition event, or by any combination of these inputs.
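The abstract's central idea, decomposing a desired action into a (referent, command) pair that any combination of modalities may supply, can be sketched as follows. This is an illustrative reading, not the patent's implementation; all names (`Interpretation`, `fuse`) are hypothetical.

```python
# Hypothetical sketch of the command/referent fusion described in the
# abstract: each recognizer (pointing, speech, gesture) contributes a
# partial interpretation, and an action fires only once both halves of
# the (referent, command) pair have been resolved.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Interpretation:
    referent: Optional[str] = None   # which component, e.g. "lamp"
    command: Optional[str] = None    # what to do, e.g. "turn_on"

def fuse(pointing: Interpretation, speech: Interpretation,
         gesture: Interpretation) -> Optional[Tuple[str, str]]:
    """Combine modalities; any modality may supply either half of the pair."""
    referent = pointing.referent or speech.referent
    command = gesture.command or speech.command or pointing.command
    if referent and command:
        return (referent, command)
    return None  # pair incomplete: wait for further input

# Pointing identifies the referent while a gesture supplies the command:
print(fuse(Interpretation(referent="lamp"),
           Interpretation(),
           Interpretation(command="turn_on")))  # → ('lamp', 'turn_on')
```

As the abstract notes, either modality alone can also complete the pair, e.g. speech supplying both the referent and the command.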
Representative Claims
1. A system for determining a location of a pointing device in three-dimensional space and a location to where the pointing device is directed, the system comprising: one or more infrared light emitting diodes (LEDs); a control system coupled to the one or more infrared LEDs, the control system capable of causing the one or more infrared LEDs to emit infrared light; one or more cameras, capable of capturing one or more frames at substantially the same time as the one or more infrared LEDs are caused to emit the infrared light, at least one of the one or more frames recording an existence of an emission of the infrared light; a processing system, capable of: determining the location of the pointing device in three-dimensional space; determining the location to where the pointing device is directed; computing a similarity between an input sequence of sensor values output by the pointing device and at least one stored prototype sequence, each stored prototype sequence representing a sequence of sensor values that is generated if the pointing device is used to perform a unique gesture; and determining if the computed similarity between the input sequence and any stored prototype sequence exceeds a prescribed similarity threshold; and a communication interface, capable of transmitting a message across a transmission medium to a receiving unit, the message including data that can be used to determine the location of the pointing device in three-dimensional space and the location to where the pointing device is directed, and data relating to gesture determination, which the receiving unit can use to manipulate an object displayed on a display screen.

2. The system of claim 1, wherein a portion of the system is coupled with the pointing device.

3. The system of claim 2, wherein the portion of the system is located at a front end of the pointing device.

4. The system of claim 1, wherein the processing system determines the location of the pointing device in three-dimensional space by identifying a high intensity area in one or more of the frames.

5. The system of claim 4, wherein the high intensity area comprises an image coordinate of a pixel having a highest intensity.

6. The system of claim 1, wherein a portion of the system is coupled with the pointing device and a portion of the system is coupled with the receiving unit.

7. The system of claim 1, wherein the processing system determines the location of the pointing device in three-dimensional space by applying a stereo imaging technique.

8. The system of claim 1, wherein the processing system determines the location to where the pointing device is directed using the location of the pointing device in three-dimensional space and an orientation of the pointing device.

9. The system of claim 1, wherein the transmission medium is a wireless transmission medium.

10. A method for determining a location of a pointing device in three-dimensional space, the method comprising: emitting light from a light emitting diode (LED) in response to a control signal; recognizing the emission of the light from the LED with an imaging device having a lens; capturing a frame with the imaging device, wherein the frame includes a representation of the light emitted from the LED; determining a location of the representation in the frame; determining the location of the pointing device in three-dimensional space based upon the location of the representation in the frame; computing a similarity between an input sequence of sensor values output by the pointing device and at least one stored prototype sequence, each stored prototype sequence representing a sequence of sensor values that is generated if the pointing device is used to perform a unique gesture; and determining if the computed similarity between the input sequence and any stored prototype sequence exceeds a prescribed similarity threshold.

11. The method of claim 10, further comprising utilizing infrared filters to filter the frame.

12. The method of claim 10, wherein the step of determining the location of the representation in the frame further comprises eliminating at least a portion of background infrared light in the frame.

13. The method of claim 10, wherein the step of determining the location of the representation in the frame further comprises: determining a high intensity area in the frame; and determining a coordinate position of the high intensity area.

14. The method of claim 10, further comprising determining an area to which the pointing device is directed based at least on the location of the pointing device in three-dimensional space.

15. The method of claim 14, further comprising implementing a training procedure associated with the area to which the pointing device is directed.

16. The method of claim 14, further comprising providing data to a receiving unit, the data being associated with the area to which the pointing device is directed, the data being for manipulating a screen object at the area.

17. The method of claim 10, further comprising performing a stereo imaging technique.

18. A system for determining a location of a pointing device in three-dimensional space, comprising: a light emitting diode (LED) in a first position, the LED capable of emitting infrared light; a controller coupled to the LED capable of causing the LED to emit the infrared light for a pre-defined time period; a camera capable of detecting the emitted infrared light; and a processor capable of: determining a first position of the emitted infrared light, wherein the first position corresponds with a first area of high intensity in an image, wherein the location of the pointing device in three-dimensional space is based at least in part on the first position; determining an orientation of the pointing device; computing a similarity between an input sequence of sensor values output by the pointing device and at least one stored prototype sequence, each stored prototype sequence representing a sequence of sensor values that is generated if the pointing device is used to perform a unique gesture; and determining if the computed similarity between the input sequence and any stored prototype sequence exceeds a prescribed similarity threshold.

19. The system of claim 18, wherein the camera captures the emitted infrared light contemporaneously with additional data.

20. The system of claim 18, wherein the processor is further capable of determining a display screen region to which the pointing device is directed using the location of the pointing device in three-dimensional space in conjunction with additional orientation related data.

21. The system of claim 18, wherein the processor utilizes a pre-defined coordinate system in conjunction with the image to establish a first coordinate position for the first area of high intensity.

22. The system of claim 21, wherein the processor utilizes triangulation calculations associated with at least the first coordinate position to determine the location of the pointing device in three-dimensional space.

23. The system of claim 21, wherein the processor utilizes stereo imaging techniques associated with at least the first coordinate position to determine the location of the pointing device in three-dimensional space.

24. The system of claim 18, wherein a portion of the system is connected to the pointing device.
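Claims 1, 10, and 18 all recite the same gesture-matching element: computing a similarity between an input sequence of sensor values and stored prototype sequences, and acting only when that similarity exceeds a prescribed threshold. The claims do not fix a particular similarity measure; the sketch below assumes dynamic time warping (DTW), a common choice for comparing variable-speed sensor traces, and all names (`dtw_distance`, `match_gesture`) are hypothetical.

```python
# Minimal sketch of prototype-based gesture matching, assuming DTW as
# the similarity measure (an assumption; the claims leave this open).

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic-time-warping distance between
    two 1-D sequences of sensor values."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

def match_gesture(input_seq, prototypes, threshold):
    """Compare the input sequence against every stored prototype and
    return the best-matching gesture name, or None if no similarity
    exceeds the prescribed threshold (as recited in the claims)."""
    best_name, best_sim = None, 0.0
    for name, proto in prototypes.items():
        sim = 1.0 / (1.0 + dtw_distance(input_seq, proto))
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim > threshold else None

prototypes = {"flick": [0, 1, 2, 1, 0], "circle": [0, 2, 0, -2, 0]}
print(match_gesture([0, 1, 2, 1, 0], prototypes, threshold=0.5))  # → flick
```

A per-prototype threshold, or a distance normalized by sequence length, would be a natural refinement; the claims only require that some computed similarity be tested against a prescribed threshold.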