| Country / Status | United States (US) patent, granted |
| --- | --- |
| IPC (7th edition) | |
| Application No. | UP-0507976 (2006-08-21) |
| Patent No. | US-7834846 (2011-01-16) |
| Inventor / Address | |
| Agent / Address | |
| Citation info | Cited by: 418 / Patents cited: 95 |
A device allows easy and unencumbered interaction between a person and a computer display system, using the person's (or another object's) movement and position as input to the computer. In some configurations, the display can be projected around the user so that the person's actions are displayed around them. The video camera and projector operate on different wavelengths so that they do not interfere with each other. Uses for such a device include, but are not limited to, interactive lighting effects for people at clubs or events and interactive advertising displays. Computer-generated characters and virtual objects can be made to react to the movements of passers-by, generate interactive ambient lighting for social spaces such as restaurants, lobbies, and parks, drive video game systems, and create interactive information spaces and art installations. Patterned illumination, together with brightness and gradient processing, can be used to improve the ability to detect an object against the background of the video images.
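The claims below formalize this interaction mechanism as an "influence image": rings of decreasing brightness grown successively farther from the detected object, whose overlap with a virtual object's footprint yields an interaction strength. The following NumPy sketch illustrates that idea only; all function names, the cross-shaped dilation, and the ring/brightness parameters are invented for illustration and are not taken from the patent.

```python
import numpy as np

def grow(mask, steps):
    """Binary dilation with a cross-shaped structuring element,
    applied `steps` times (pure NumPy, no SciPy dependency)."""
    out = mask.copy()
    for _ in range(steps):
        p = np.pad(out, 1)
        out = (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
               | p[1:-1, :-2] | p[1:-1, 2:])
    return out

def influence_image(obj_mask, num_rings=3, ring_width=2):
    """Brightness 1.0 on the object itself; each successive outline
    ring farther from the object gets a lower brightness value."""
    influence = np.zeros(obj_mask.shape, dtype=float)
    current = obj_mask.astype(bool)
    influence[current] = 1.0
    for ring in range(num_rings):
        grown = grow(current, ring_width)
        influence[grown & ~current] = 1.0 - (ring + 1) / (num_rings + 1)
        current = grown
    return influence

def interaction_strength(influence, virtual_mask):
    """Total influence under the virtual object's footprint; a larger
    value means a stronger touch/push on the virtual object."""
    return float(influence[virtual_mask].sum())
```

With these illustrative parameters, a virtual object two pixels from the tracked object falls in the first ring (influence 0.75), while one far outside all rings sees zero influence and therefore no interaction.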
What is claimed is:

1. A method of tracking movement of an object, the method configured for execution by a computing system comprising one or more computing devices, the method comprising: receiving at a computing system a plurality of images from an image acquisition device; identifying an object of interest in at least one of the plurality of images; generating an influence image comprising one or more outline areas that each at least partially surround the identified object, wherein each of the outline areas are positioned successively further away from the identified object; mapping the at least one of the images, including the influence image, onto a video image depicting a virtual object; and based at least on an overlap between one or more of the outline areas of the influence image and the virtual object, determining an interaction between the object and the virtual object.

2. The method of claim 1, wherein the object of interest comprises at least a portion of a human.

3. The method of claim 1, wherein the interaction is selected from the group comprising pushing the virtual object, touching the virtual object, deforming the virtual object, or manipulating the virtual object.

4. The method of claim 1, further comprising: generating a combined video image comprising a representation of the object of interest and the virtual object; and projecting the combined video image onto a surface.

5. The method of claim 4, wherein the image acquisition device comprises a video recording device that is sensitive to light that is substantially not visible to humans.

6.
A tangible computer readable storage medium having computer-executable instructions stored thereon, the computer-executable instructions readable by a computing system comprising one or more computing devices, wherein the computer-executable instructions are executable on the computing system in order to cause the computing system to perform a method of tracking movement of an object by a computing system comprising: receiving at a computing system a plurality of images from an image acquisition device; identifying an object of interest in at least one of the plurality of images; generating an influence image comprising one or more outline areas that each at least partially surround the identified object, wherein each of the outline areas are positioned successively further away from the identified object; mapping the at least one of the images, including the influence image, onto a video image depicting a virtual object; based at least on an overlap between one or more of the outline areas of the influence image and the virtual object, determining an interaction between the object and the virtual object.

7. The tangible computer readable storage medium of claim 6, wherein the method further comprises: generating a combined video image comprising a representation of the object of interest and the virtual object; and projecting the combined video image onto a surface.

8.
A system comprising: a camera system operable to provide images of an object against a background; a display system operable to render video images onto a surface, wherein the video images comprise at least one virtual object; and a computing device configured to control rendering of the video images; and generate an influence image comprising two or more outline areas that each at least partially surround the object, wherein each of the outline areas are positioned successively further away from the object, wherein the outline areas are usable to estimate an interaction with at least one virtual object.

9. The system of claim 8, wherein the computing device is further configured to receive images from the camera system, automatically adapt to changes in the background by repeatedly analyzing changes over time in the images from the camera system, control rendering of the video images in response to adaptations to changes in the background, and to adapt to changes in the background by generating an adaptive model of the background by analyzing changes over time in images from the camera system.

10. The system of claim 8, wherein the computing device is further configured to assign a brightness value to each of the outline areas so that the brightness values of the outline areas decrease as the outline areas are positioned farther away from the object.

11. The system of claim 10, wherein the computing device is further configured to determine an interaction between the object and the at least one virtual object based on one or more of the brightness values associated with the outline areas.

12. The system of claim 11, wherein the interaction includes the object pushing the at least one virtual object.

13. The system of claim 11, wherein the interaction includes the object touching the at least one virtual object.

14. The system of claim 11, wherein the interaction includes the object deforming the at least one virtual object.

15.
The system of claim 11 wherein the interaction includes the object manipulating the at least one virtual object.

16. The system of claim 8, wherein the camera system includes a strobed light source.

17. The system of claim 16, wherein the computing device is further configured to process the images from the camera system based on at least the strobed light source.

18. The system of claim 17, wherein the computing device is further configured to suppress a lighting phenomenon due to sources of light other than the camera system light source.

19. The system of claim 17, wherein the computing device is further configured to adapt to shadows due to sources of light other than the camera system light source.

20. The system of claim 9, wherein the computing device is further configured to generate the adaptive model of the background by identifying pixels in the image having a substantially constant brightness over time.

21. The system of claim 9, wherein the computing device is further configured to generate the adaptive model of the background by computing median values over time for the pixels.

22. The system of claim 9, wherein the computing device is further configured to generate the adaptive model of the background by computing median values over time for pixels in the images.

23. The system of claim 9, wherein the computing device is further configured to generate the adaptive model of the background by incorporating the background changes in the background into the adaptive model over time.

24. The system of claim 9, wherein the computing device is further configured to generate the adaptive model of the background by incorporating the background changes in the background that occur due to changes in lighting into the adaptive model.

25. The system of claim 9, wherein the computing device is further configured to classify pixels of the images as one of foreground pixels and background pixels by comparing the images to the adaptive model.

26.
The system of claim 9, wherein the computing device is further configured to generate the adaptive model of the background by incorporating the background information relating to at least a portion of the images that has remained substantially stationary for a period of time into the adaptive model.

27. The system of claim 9, wherein the computing device is further configured to generate the adaptive model of the background by computing a weighted average of a current image of the image from the camera system with the adaptive model of the background.

28. The system of claim 27, wherein the computing device is further configured to tune the weighted average to change a rate at which the model of the background adapts to changes in the images from the camera system.

29. The system of claim 9, wherein the computing device is further configured to generate the display item by distinguishing between foreground that corresponds to the object and the background.

30. The system of claim 29, wherein the distinguishing comprises comparing a current image of the images from the camera system with the adaptive model of the background.

31. The system of claim 30, wherein the distinguishing further comprises determining if differences between corresponding pixels in the current image of the images from the camera system and the adaptive model of the background are greater than a threshold to determine a location of the object.

32. The system of claim 8, wherein the camera system comprises two cameras to provide a stereo image, and wherein the computing device is further configured to compute depth data based on the stereo image and to use the depth data to generate the model of the background.
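Claims 20-31 above describe the adaptive background model in several variants: a per-pixel temporal median, a weighted average of the current frame with the model (whose weight tunes the adaptation rate), and a per-pixel difference threshold that classifies pixels as foreground or background. A minimal NumPy sketch of those three ideas follows; the function names, the 0.05 rate, and the 30.0 threshold are illustrative assumptions, not values from the patent.

```python
import numpy as np

def median_background(frames):
    """Per-pixel temporal median over a stack of frames (claims 21-22):
    pixels a moving object covers only briefly keep their background value."""
    return np.median(np.stack(frames), axis=0)

def update_background(model, frame, rate=0.05):
    """Weighted average of the current frame with the model (claim 27);
    a higher `rate` adapts faster to gradual lighting changes (claim 28)."""
    return (1.0 - rate) * model + rate * frame

def foreground_mask(model, frame, threshold=30.0):
    """Classify pixels whose difference from the model exceeds a
    threshold as foreground, i.e. the tracked object (claims 25, 31)."""
    return np.abs(frame.astype(float) - model) > threshold
```

For example, an object that appears at a pixel in only one of five frames does not disturb the median there, while a pixel the object currently occupies differs from the model by more than the threshold and is classified as foreground.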
Copyright KISTI. All Rights Reserved.