An apparatus for processing data includes a projector, which is configured to project content onto at least a part of a scene. A processor is configured to detect a location of an eye of a person in the scene and to control the projector so as to reduce an intensity of the projected content in an area of the eye.
Representative Claims
1. A method for augmented interaction with a data processing system, comprising the steps of:
receiving a depth map of a scene containing 3-dimensional objects meeting first criteria for projection of images thereon;
receiving a 2-dimensional image of the scene;
using a digital processor, processing the depth map to locate the 3-dimensional objects;
executing a computer application to analyze the 2-dimensional image to identify a most suitable one of the 3-dimensional objects according to second criteria;
projecting a user interface to the computer application as a content-containing image onto the most suitable one of the 3-dimensional objects;
using the digital processor, recognizing a gesture;
interpreting the gesture as an interaction with the user interface; and
controlling the computer application responsively to the interaction.

2. The method according to claim 1, wherein processing the depth map comprises identifying a position of the most suitable one of the 3-dimensional objects with six degrees of freedom with respect to a reference system of coordinates, and projecting a user interface comprises compensating for scale, pitch, yaw and angular rotation thereof.

3. The method according to claim 1, wherein processing the depth map comprises referencing a database of 3-dimensional object definitions and comparing the 3-dimensional objects with the definitions in the database.

4. The method according to claim 1, wherein one of the 3-dimensional objects in the depth map is a portion of a humanoid form.
5. A method for augmented interaction with a data processing system, comprising the steps of:
receiving a depth map of a scene containing 3-dimensional objects meeting first criteria for projection of images thereon;
using a digital processor, processing the depth map to locate the 3-dimensional objects;
executing a computer application to identify a most suitable one of the 3-dimensional objects according to second criteria;
projecting an image of the most suitable one of the 3-dimensional objects onto a virtual surface;
using the digital processor, recognizing a gesture;
interpreting the gesture as an interaction with the image of the one object; and
controlling the computer application responsively to the interaction.

6. The method according to claim 5, wherein processing the depth map comprises identifying a position of the most suitable one of the 3-dimensional objects with six degrees of freedom with respect to a reference system of coordinates, and projecting the image of the one object comprises compensating for scale, pitch, yaw and angular rotation of the one object.

7. The method according to claim 5, wherein processing the depth map comprises referencing a database of 3-dimensional object definitions and comparing the 3-dimensional objects in the depth map with the definitions in the database.

8. The method according to claim 5, wherein the image of the most suitable one of the 3-dimensional objects further comprises a user interface for control of the computer application.
9. A method for augmented interaction with a data processing system, comprising the steps of:
receiving a depth map of a scene containing 3-dimensional objects meeting first criteria for projection of images thereon;
using a digital processor, processing the depth map to locate the 3-dimensional objects;
executing a computer application to identify a most suitable one of the 3-dimensional objects according to second criteria;
projecting an image of the one object onto a wearable monitor;
using the digital processor, recognizing a gesture of a user;
interpreting the gesture as an interaction with the image of the most suitable one of the 3-dimensional objects; and
controlling the computer application responsively to the interaction.

10. The method according to claim 9, wherein processing the depth map comprises identifying a position of the one object with six degrees of freedom with respect to a reference system of coordinates, and projecting the image of the most suitable one of the 3-dimensional objects comprises compensating for scale, pitch, yaw and angular rotation of the one object.

11. The method according to claim 9, wherein processing the depth map comprises referencing a database of 3-dimensional object definitions and comparing the 3-dimensional objects with the definitions in the database.

12. The method according to claim 9, wherein the image of the most suitable one of the 3-dimensional objects further comprises a user interface for control of the computer application.
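The method claims above share a common pipeline: segment a depth map into candidate 3-dimensional objects (first criteria), select the most suitable one (second criteria), project content onto or for it, and map recognized gestures to application control. A minimal sketch of that control flow follows. The claims do not specify data formats or criteria, so everything here is hypothetical: the `Object3D` fields, the use of an area threshold for the first criteria, and a flatness score for the second criteria are illustrative stand-ins, not details from the patent.

```python
from dataclasses import dataclass

# Hypothetical object model; the claims leave the representation open.
@dataclass
class Object3D:
    name: str
    area: float      # surface area in m^2 (stand-in for "first criteria")
    flatness: float  # 0..1 suitability score (stand-in for "second criteria")

def locate_objects(depth_map):
    """Stand-in for depth-map segmentation; returns candidate surfaces."""
    # A real implementation would segment the depth map; hard-coded here.
    return [Object3D("wall", 4.0, 0.95),
            Object3D("book", 0.05, 0.80),
            Object3D("hand", 0.02, 0.30)]

def most_suitable(objects, min_area=0.04):
    """Apply the first criteria (area), then rank by the second (flatness)."""
    candidates = [o for o in objects if o.area >= min_area]
    return max(candidates, key=lambda o: o.flatness)

def interpret_gesture(gesture, target):
    """Map a recognized gesture to an interaction with the projected UI."""
    if gesture == "tap":
        return f"activate control projected on {target.name}"
    return "ignore"

target = most_suitable(locate_objects(depth_map=None))
print(interpret_gesture("tap", target))  # → activate control projected on wall
```

The dependent claims (2, 6, 10) additionally compensate the projected image for the target's six-degree-of-freedom pose; that warp step is omitted from this sketch.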
13. An apparatus for processing data, comprising:
a projector, which comprises:
a first radiation source, which emits a beam of infrared radiation;
a second radiation source, which emits a visible light beam, which is modulated to form content for projection onto at least a part of a scene; and
scanning optics configured to project both the infrared beam and the visible light beam onto the scene simultaneously;
a sensing device, which is configured to capture the infrared radiation returned from the scene and output a signal in response to the captured radiation; and
a processor, which is configured to process the signal in order to generate a 3-dimensional map of the scene and to process the 3-dimensional map in order to detect a location of an eye of a person in the scene and to control the projector so as to reduce an intensity of the projected content in an area of the eye.

14. The apparatus according to claim 13, wherein the first radiation source is controlled to create a pattern of spots on the scene, and wherein the processor is configured to derive a 3-dimensional map of the scene from the pattern of the spots and to process the 3-dimensional map in order to identify the area of the eye.

15. The apparatus according to claim 13, wherein the sensing device has a field of view that is scanned so as to coincide with projection of the infrared beam and the visible light beam onto the scene.
16. A method for processing data, comprising:
projecting content onto at least a part of a scene by scanning a visible light beam over the scene, while modulating the visible light beam to form the content that is projected onto at least the part of the scene;
scanning a beam of infrared radiation over the scene simultaneously with the visible light beam, capturing the infrared radiation returned from the scene, processing the captured radiation to generate a 3-dimensional map of the scene, and processing the 3-dimensional map in order to detect a location of an eye of a person in the scene; and
controlling projection of the content so as to reduce an intensity of the projected content in an area of the eye.

17. The method according to claim 16, wherein scanning the beam of the infrared radiation comprises controlling the beam of the infrared radiation to create a pattern of spots on the scene, and wherein processing the captured radiation comprises deriving a 3-dimensional map of the scene from the pattern of the spots, and processing the 3-dimensional map in order to identify the area of the eye.

18. The method according to claim 16, wherein capturing the infrared radiation comprises scanning a field of view of a sensing device, which senses the infrared radiation, so as to coincide with projection of the infrared beam and the visible light beam onto the scene.
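The eye-safety behavior claimed in claims 13 and 16 — once the eye location is recovered from the 3-dimensional map, attenuate the projected content in that region — can be sketched as a per-frame masking step. The sketch below assumes the projector frame is a 2-D intensity array and models the protected area as a circle around the detected eye; the function name, radius, and attenuation parameters are all illustrative, not taken from the patent.

```python
import numpy as np

def dim_eye_region(frame, eye_rc, radius=20, attenuation=0.0):
    """Reduce projected intensity inside a circular region around the eye.

    frame:       2-D array of per-pixel projector intensities in 0..1.
    eye_rc:      (row, col) of the eye location recovered from the 3-D map.
    radius:      protected radius in pixels (hypothetical choice).
    attenuation: intensity multiplier inside the region; 0.0 blanks it.
    """
    rows, cols = np.ogrid[:frame.shape[0], :frame.shape[1]]
    mask = (rows - eye_rc[0]) ** 2 + (cols - eye_rc[1]) ** 2 <= radius ** 2
    out = frame.copy()  # leave the source frame untouched
    out[mask] *= attenuation
    return out

frame = np.ones((100, 100))                       # full-intensity content
safe = dim_eye_region(frame, eye_rc=(50, 50), radius=10)
print(safe[50, 50], safe[0, 0])                   # → 0.0 1.0
```

In the claimed apparatus this masking would be applied in the projector's scan coordinates, synchronized with the visible-light beam, rather than on a stored frame as shown here.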
Wee, Susie J.; Baker, Henry Harlyn; Bhatti, Nina T.; Covell, Michele; Harville, Michael, Communication and collaboration system using rich media environments.
Albertson, Jacob C.; Arnold, Kenneth C.; Goldman, Steven D.; Paolini, Michael A.; Sessa, Anthony J., Controlling resource access based on user gesturing in a 3D captured image stream of the user.
Cohen, Charles J.; Beach, Glenn; Cavell, Brook; Foulk, Gene; Jacobus, Charles J.; Obermark, Jay; Paul, George, Gesture-controlled interfaces for self-service machines and other applications.
Honda, Tadashi, Handwriting information processing apparatus, handwriting information processing method, and storage medium having program stored therein for handwriting information processing.
Rushmeier, Holly E.; Bernardini, Fausto, Method and apparatus for acquiring a set of consistent image maps to represent the color of the surface of an object.
Lanier, Jaron Z.; Grimaud, Jean-Jacques G.; Harvill, Young L.; Lasko-Harvill, Ann; Blanchard, Chuck L.; Oberman, Mark L., Method and system for generating objects for a multi-person virtual world using data flow networks.
Rafii, Abbas; Bamji, Cyrus; Sze, Cheng-Feng; Torunoglu, Iihami, Methods for enhancing performance and data acquired from three-dimensional image systems.
Bang, Won chul; Kim, Dong yoon; Chang, Wook; Kang, Kyoung ho; Choi, Eun seok, Spatial motion recognition system and method using a virtual handwriting plane.
Segawa, Hiroyuki; Hiraki, Norikazu; Shioya, Hiroyuki; Abe, Yuichi, Three-dimensional model processing device, three-dimensional model processing method, program providing medium.
Albertson, Jacob C.; Arnold, Kenneth C.; Goldman, Steven D.; Paolini, Michael A.; Sessa, Anthony J., Tracking a range of body movement based on 3D captured image streams of a user.
Ellenby, John; Ellenby, Thomas; Ellenby, Peter, Vision system computer modeling apparatus including interaction with real scenes with respect to perspective and spatial relationship as measured in real-time.