IPC Classification Information
Country / Type | United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.) |
Application No. | US-0903140 (2010-10-12)
Registration No. | US-8303411 (2012-11-06)
Inventors / Address |
- Marks, Richard L.
- Deshpande, Hrishikesh R.
Applicant / Address |
- Sony Computer Entertainment Inc.
Agent / Address |
Citation Info | Cited by: 7 / Cited patents: 219
Abstract
Detecting pointing direction when interfacing with a computer program is described. Two or more stereo images presented in front of two or more corresponding image capture devices can be captured, each image capture device having a capture location in a coordinate space. The image capture devices can be synchronized with a strobe signal that is visible to each image capture device. When a person is captured in the image, first and second body parts of the person in the image can be identified and assigned first and second locations in the coordinate space. A relative position that includes a dimension of depth can be identified in coordinate space between the first location and the second location when viewed from the capture location.
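The depth dimension described above (and detailed in claim 3) comes from measuring an object's offset in two images taken by spaced-apart cameras. A minimal sketch of that stereo-disparity relation, assuming rectified cameras and purely illustrative focal length, baseline, and pixel values (none of these numbers come from the patent):

```python
# Hedged sketch of depth-from-disparity for a rectified stereo pair.
# Assumes pinhole cameras with a shared focal length (in pixels) and a
# known baseline (in metres); all names and values are illustrative.

def depth_from_disparity(x_left: float, x_right: float,
                         focal_px: float, baseline_m: float) -> float:
    """Estimate depth (metres) of a point seen at horizontal pixel
    coordinate x_left in the left image and x_right in the right image."""
    disparity = x_left - x_right  # pixel offset between the two views
    if disparity <= 0:
        raise ValueError("point must appear further left in the left image")
    # Classic pinhole relation: Z = f * B / d
    return focal_px * baseline_m / disparity

# Example: cameras 6 cm apart, 700 px focal length, 10 px disparity
z = depth_from_disparity(310.0, 300.0, focal_px=700.0, baseline_m=0.06)
# z ≈ 4.2 m (f * B / d = 700 * 0.06 / 10)
```

Smaller disparities map to larger depths, which is why a wider baseline between the two capture devices improves depth resolution for distant objects.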
Representative Claims
1. A method for detecting depth and direction when interfacing with a computer program, comprising: (a) capturing two or more stereo images presented in front of two or more corresponding image capture devices, each image capture device having a capture location in a coordinate space; (b) synchronizing the image capture devices with a strobe signal that is visible to each image capture device; when a person is captured in the image, (c) identifying a first body part of the person in the image and assigning the first body part a first location in the coordinate space; (d) identifying a second body part of the person in the image and assigning the second body part a second location in coordinate space; and (e) identifying a relative position in coordinate space between the first location and the second location when viewed from the capture location, wherein the relative position includes a dimension of depth.

2. The method of claim 1, wherein the relative position defines a pointing direction of the object when viewed by the image capture device.

3. The method of claim 1, wherein the dimension of depth is determined by taking first and second images with first and second image capture devices located at spaced-apart positions and measuring distances of an object in each image relative to a reference in each image.

4. The method of claim 1, wherein the capture location is at a proximate location of a display screen and the display screen is capable of rendering interactive graphics.

5. The method of claim 4, wherein the pointing direction is toward the display screen.

6. The method of claim 5, wherein the interactivity can include one or more of selection of a graphic, shooting of a graphic, touching a graphic, moving of a graphic, activation of a graphic, triggering of a graphic, and acting upon or with a graphic.

7. The method of claim 4, further comprising: repeating (a)-(e) continually to update the pointing direction; and displaying the continually updated pointing direction on the display screen.

8. The method of claim 7, further comprising: enabling selection of particular interactive graphics using the displayed pointing direction.

9. The method of claim 8, wherein the selection is in response to a detected trigger event.

10. The method of claim 9, wherein the detected trigger event is identified in the image, the identification comprising: identifying a first characteristic of the object held by the person at a first point in time; and identifying a second characteristic of the object held by the person at a second point in time, wherein the trigger event is activated when a degree of difference is determined to have existed between the first characteristic and the second characteristic when examined between the first point in time and the second point in time.

11. The method of claim 10, wherein the trigger event being activated is indicative of interactivity with the interactive graphics.

12. The method of claim 11, wherein the interactivity can include one or more of selection of a graphic, shooting of a graphic, touching a graphic, moving of a graphic, activation of a graphic, triggering of a graphic, and acting upon or with a graphic.

13. The method of claim 1, wherein the relative position defines a pointing direction of the second body part when viewed by the image capture device at the capture location that is proximate to the display screen.

14. The method of claim 1, wherein the first body part is a human head and the second body part is a human hand.

15. The method of claim 1, wherein (a)-(d) are repeated continually during execution of the computer program, and a shape of the human hand is examined during the repeating of (a)-(d) to determine particular shape changes.

16. The method of claim 1, wherein particular shape changes trigger interactivity with interactive graphics of the computer program.

17. The method of claim 1, wherein the second body part is identified by way of an object held by the human hand.

18. The method of claim 1, wherein the object includes color.

19. The method of claim 18, wherein the color can switch between on/off states to trigger interactivity with interactive graphics of the computer program.

20. The method of claim 18, wherein the color is capable of switching between states to trigger interactivity with interactive graphics of the computer program.

21. The method of claim 20, wherein additional colors are present on the object, the colors capable of being switched to trigger interactivity with interactive graphics of the computer program.

22. The method of claim 1, wherein the computer program is a video game.

23. The method of claim 1, wherein the first body part is a human head and the relative position is identified by computing an azimuth angle and an altitude angle between a location of the head and the object location in relation to the capture location.

24. The method of claim 1, wherein identifying the human head is processed using template matching in combination with face detection code.

25. The method of claim 1, wherein identifying the object held by the person is facilitated by color tracking of a portion of the object.

26. The method of claim 25, wherein color tracking includes one or a combination of identifying differences in colors and identifying on/off states of colors.

27. The method of claim 25, wherein identifying the object held by the person is facilitated by identification of changes in positions of the object when repeating (a)-(e).

28. The method of claim 1, wherein the computer program is a video game.

29. The method of claim 1, wherein the strobe signal is generated by an array of strobe signal generators that flash in a known sequence.

30. The method of claim 29, wherein (b) includes synchronizing between the two or more image capture devices based on which strobe signal generator is lit in the two or more images.

31. A system for detecting pointing direction of an object directed toward a display screen that can render graphics of a computer program, comprising: a processor; a memory coupled to the processor, the memory having embodied therein one or more computer executable instructions configured to implement, upon execution, a method for detecting depth and direction when interfacing with a computer program, the method comprising: (a) capturing two or more stereo images presented in front of two or more corresponding image capture devices, each image capture device having a capture location in a coordinate space; (b) synchronizing the image capture devices with a strobe signal that is visible to each image capture device; when a person is captured in the image, (c) identifying a first body part of the person in the image and assigning the first body part a first location in the coordinate space; (d) identifying a second body part of the person in the image and assigning the second body part a second location in coordinate space; and (e) identifying a relative position in coordinate space between the first location and the second location when viewed from the capture location, wherein the relative position includes a dimension of depth.

32. A non-transitory computer-readable storage medium having embodied therein one or more computer executable instructions configured to implement, upon execution, a method for detecting depth and direction when interfacing with a computer program, the method comprising: (a) capturing two or more stereo images presented in front of two or more corresponding image capture devices, each image capture device having a capture location in a coordinate space; (b) synchronizing the image capture devices with a strobe signal that is visible to each image capture device; when a person is captured in the image, (c) identifying a first body part of the person in the image and assigning the first body part a first location in the coordinate space; (d) identifying a second body part of the person in the image and assigning the second body part a second location in coordinate space; and (e) identifying a relative position in coordinate space between the first location and the second location when viewed from the capture location, wherein the relative position includes a dimension of depth.
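Claim 23 characterizes the relative position between the head and the held object as an azimuth angle and an altitude angle. A hedged sketch of how such angles could be computed from two 3-D locations, assuming a camera-centered coordinate frame (x right, y up, z away from the camera) that the patent itself does not specify:

```python
import math

# Illustrative computation of the azimuth/altitude angles named in claim 23,
# from a head location to a hand/object location. The axis convention
# (x right, y up, z toward the scene) is an assumption for this sketch.

def pointing_angles(head, hand):
    """Return (azimuth, altitude) in degrees for the head -> hand vector."""
    dx = hand[0] - head[0]
    dy = hand[1] - head[1]
    dz = hand[2] - head[2]
    azimuth = math.degrees(math.atan2(dx, dz))                  # left/right swing
    altitude = math.degrees(math.atan2(dy, math.hypot(dx, dz))) # up/down tilt
    return azimuth, altitude

# Head at the origin; hand 0.3 m to the right, 0.1 m down, 0.4 m forward
az, alt = pointing_angles((0.0, 0.0, 0.0), (0.3, -0.1, 0.4))
# az ≈ 36.9°, alt ≈ -11.3°
```

Repeating this per captured frame, as claims 7 and 27 describe for steps (a)-(e), yields a continually updated pointing direction that can drive an on-screen cursor.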