Interactive reality augmentation for natural interaction
IPC Classification Information
Country / Type
United States (US) Patent
Granted
International Patent Classification (IPC, 7th edition)
H04N-013/00
H04N-013/02
H04N-001/04
G06F-003/01
G06T-019/00
Application Number
US-0726128
(2012-12-23)
Patent (Registration) Number
US-9158375
(2015-10-13)
Inventors / Address
Maizels, Aviad
Shpunt, Alexander
Berliner, Tamir
Applicant / Address
APPLE INC.
Agent / Address
D. Kligler I.P. Services Ltd.
Citation Information
Cited by: 8
Cited patents: 84
Abstract
Embodiments of the invention provide apparatus and methods for interactive reality augmentation, including a 2-dimensional camera (36) and a 3-dimensional camera (38), associated depth projector and content projector (48), and a processor (40) linked to the 3-dimensional camera and the 2-dimensional camera. A depth map of the scene is produced using an output of the 3-dimensional camera, and coordinated with a 2-dimensional image captured by the 2-dimensional camera to identify a 3-dimensional object in the scene that meets predetermined criteria for projection of images thereon. The content projector projects a content image onto the 3-dimensional object responsively to instructions of the processor, which can be mediated by automatic recognition of user gestures.
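The abstract describes a pipeline in which a depth map is analyzed to find a 3-dimensional object that "meets predetermined criteria for projection of images thereon." The sketch below illustrates one such criterion check in minimal form: selecting a contiguous-depth region of sufficient size as a projection target. The depth map, thresholds, and function name are illustrative assumptions for this sketch, not values or code from the patent.

```python
# Illustrative sketch (not from the patent): scan a depth map for pixels
# whose depth lies within a target range, and accept the region as a
# projection target only if it covers a minimum area.

def find_projection_target(depth_map, min_depth, max_depth, min_area):
    """Return the (row, col) pixels whose depth lies in [min_depth, max_depth],
    or None if the candidate region is smaller than min_area pixels."""
    region = [
        (r, c)
        for r, row in enumerate(depth_map)
        for c, d in enumerate(row)
        if min_depth <= d <= max_depth
    ]
    return region if len(region) >= min_area else None

# Synthetic 4x4 depth map (meters): an object at ~0.5 m on a ~2.0 m background.
depth = [
    [2.0, 2.0, 2.0, 2.0],
    [2.0, 0.5, 0.5, 2.0],
    [2.0, 0.5, 0.5, 2.0],
    [2.0, 2.0, 2.0, 2.0],
]
target = find_projection_target(depth, 0.3, 0.8, min_area=4)
```

In the claimed apparatus this step would be coordinated with the 2-dimensional image and a database of object definitions; the sketch only shows the depth-range criterion in isolation.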
Representative Claims
1. An apparatus for processing data, comprising: a sensing element comprising a 3-dimensional camera for acquiring a scene; a 2-dimensional camera for acquiring a 2-dimensional image of the scene; a processor linked to the 3-dimensional camera and the 2-dimensional camera and programmed to produce a depth map of the scene using an output of the 3-dimensional camera, and making a scene analysis to identify a 3-dimensional object in the scene that meets predetermined criteria for projection of images thereon; and a content projector for projecting an image onto the identified 3-dimensional object responsively to instructions of the processor, wherein the processor is operative for recognizing information relating to the 3-dimensional object in the 2-dimensional image, and is further operative for instructing the content projector to use the information relating to the 3-dimensional object in forming the projected image responsively to the recognized information.
2. The apparatus according to claim 1, wherein forming the image comprises projecting the image onto the 3-dimensional object.
3. The apparatus according to claim 1, wherein forming the image comprises projecting the image onto a retina of a user.
4. The apparatus according to claim 1, wherein forming the image comprises projecting the image onto see-through eyeglasses.
5. The apparatus according to claim 1, wherein forming the image comprises projecting the image into a 3-dimensional virtual space.
6. The apparatus according to claim 1, wherein the instructions of the processor are responsive to the scene analysis, and wherein the processor is cooperative with the content projector for varying at least one of projection parameters and content of the image responsively to the scene analysis.
7. The apparatus according to claim 6, wherein the at least one of the projection parameters comprises an intensity of light in the image, which is varied responsively to the scene analysis.
8. The apparatus according to claim 7, wherein the processor is configured to identify an area of the scene that contains an eye of a person in the scene, and to decrease the intensity in the area responsively to identifying the area.
9. The apparatus according to claim 1, wherein the processor is cooperative with the content projector for varying characteristics of the image responsively to an interaction between a user and the scene.
10. The apparatus according to claim 9, wherein the interaction comprises a variation in a gaze vector of the user toward the 3-dimensional object.
11. The apparatus according to claim 9, wherein the interaction comprises a gesture of the user relating to the 3-dimensional object.
12. The apparatus according to claim 9, wherein varying characteristics of the image comprises varying at least one of a scale and a compensation for distortion.
13. The apparatus according to claim 1, wherein identifying the 3-dimensional object comprises identifying a position of the 3-dimensional object with six degrees of freedom with respect to a reference system of coordinates, wherein the content projector is operative to compensate for scale, pitch, yaw and angular rotation of the 3-dimensional object.
14. The apparatus according to claim 1, wherein identifying the 3-dimensional object comprises referencing a database of 3-dimensional object definitions and comparing the 3-dimensional object with the definitions in the database.
15. The apparatus according to claim 1, further comprising a wearable monitor, wherein the content projector is operative to establish the image as a virtual image in the wearable monitor.
16. The apparatus according to claim 15, wherein the sensing element, the processor and the content projector are incorporated in the wearable monitor.
17. The apparatus according to claim 15, wherein the wearable monitor comprises see-through eyeglasses.
18. The apparatus according to claim 1, wherein the content projector is operative to establish the image onto a virtual surface for user interaction therewith.
19. The apparatus according to claim 1, wherein the processor is operative for controlling a computer application responsively to a gesture and wherein the image comprises a user interface for control of the computer application.
20. The apparatus according to claim 1, wherein the image comprises written content.
21. The apparatus according to claim 1, wherein the content projector comprises: a first radiation source, which emits an infrared beam, which is modulated to create a pattern of spots, which is acquired by the 3-dimensional camera; a second radiation source, which emits a visible light beam, which is modulated to form the image on at least a part of the scene; and scanning optics configured to project both the infrared beam and the visible light beam onto the scene simultaneously.
22. A method for augmented interaction with a data processing system, comprising the steps of: capturing a 3-dimensional image of a scene; capturing a 2-dimensional image of the scene in registration with the 3-dimensional image; using a digital processor, processing the 3-dimensional image to locate a 3-dimensional object therein, and to determine that the 3-dimensional object satisfies predefined criteria; and forming a content-containing image on the 3-dimensional object responsively to a location of the 3-dimensional object; recognizing information relating to the 3-dimensional object in the 2-dimensional image; and varying the content-containing image responsively to the recognized information.
23. The method according to claim 22, wherein forming the content-containing image comprises projecting the content-containing image onto one of the 3-dimensional objects.
24. The method according to claim 22, wherein forming the content-containing image comprises projecting the content-containing image onto a retina of the user.
25. The method according to claim 22, wherein forming the content-containing image comprises projecting the content-containing image onto see-through eyeglasses.
26. The method according to claim 22, wherein forming the content-containing image comprises projecting the content-containing image into a 3-dimensional virtual space.
27. The method according to claim 22, wherein forming the content-containing image comprises varying at least one of projection parameters and content of the image responsively to processing the 3-dimensional image.
28. The method according to claim 27, wherein the at least one of the projection parameters comprises an intensity of light in the image, which is varied responsively to the content of the image.
29. The method according to claim 28, wherein processing the 3-dimensional image comprises identifying an area of the scene that contains an eye of a person in the scene, and decreasing the intensity in the area responsively to identifying the area.
30. The method according to claim 22, further comprising the steps of varying characteristics of the content-containing image responsively to an interaction between the user and the scene.
31. The method according to claim 30, wherein the interaction comprises a variation in a gaze vector of the user toward one of the 3-dimensional objects.
32. The method according to claim 30, wherein the interaction comprises a gesture of the user relating to one of the 3-dimensional objects.
33. The method according to claim 30, wherein varying characteristics of the content-containing image comprises varying at least one of a scale and a compensation for distortion.
34. The method according to claim 22, further comprising the steps of: recognizing a gesture relating to the content-containing image; and responsively to the gesture controlling a computer application.
35. The method according to claim 34, wherein the content-containing image comprises a user interface for control of the computer application.
36. The method according to claim 22, wherein processing the 3-dimensional image comprises identifying a position of one of the 3-dimensional objects with six degrees of freedom with respect to a reference system of coordinates, and forming a content-containing image comprises compensating for scale, pitch, yaw and angular rotation of the one 3-dimensional object.
37. The method according to claim 22, wherein processing the 3-dimensional image comprises referencing a database of 3-dimensional object definitions and comparing the 3-dimensional objects with the definitions in the database.
38. The method according to claim 22, wherein one of the 3-dimensional objects is a portion of a humanoid form.
39. The method according to claim 22, wherein forming the content-containing image comprises scanning an infrared beam, which is modulated to create a pattern of spots, which is captured in the 3-dimensional image of the scene, and scanning a visible light beam together with the infrared beam, while modulating the visible light beam so as to project the content-containing image onto at least a part of the scene.
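Claims 13 and 36 describe compensating the projected image for the scale, pitch, yaw and angular rotation of the identified object. The sketch below shows the idea for the simplest case only: pre-warping a single 2D image coordinate by counter-rotating against the object's roll angle and rescaling for distance. The function name, its signature, and the example values are assumptions made for illustration; the patent does not disclose this code, and a real projector would apply a full six-degree-of-freedom homography rather than a planar rotation.

```python
import math

def compensate_point(x, y, roll_rad, scale):
    """Pre-warp one image coordinate so the projected content appears
    upright and correctly sized on a rolled, nearer-or-farther surface:
    rotate by -roll_rad, then multiply by the distance-derived scale."""
    c, s = math.cos(-roll_rad), math.sin(-roll_rad)
    return (scale * (c * x - s * y), scale * (s * x + c * y))

# Example: object rolled 90 degrees, at half the reference distance
# (so the content must be pre-scaled by 2.0 to land at the right size).
px, py = compensate_point(1.0, 0.0, math.pi / 2, 2.0)
```

With zero roll and unit scale the function is the identity, which makes it easy to sanity-check before composing it with pitch/yaw correction.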
Wee, Susie J.; Baker, Henry Harlyn; Bhatti, Nina T.; Covell, Michele; Harville, Michael, Communication and collaboration system using rich media environments.
Albertson, Jacob C.; Arnold, Kenneth C.; Goldman, Steven D.; Paolini, Michael A.; Sessa, Anthony J., Controlling resource access based on user gesturing in a 3D captured image stream of the user.
Cohen, Charles J.; Beach, Glenn; Cavell, Brook; Foulk, Gene; Jacobus, Charles J.; Obermark, Jay; Paul, George, Gesture-controlled interfaces for self-service machines and other applications.
Honda, Tadashi, Handwriting information processing apparatus, handwriting information processing method, and storage medium having program stored therein for handwriting information processing.
Rushmeier, Holly E.; Bernardini, Fausto, Method and apparatus for acquiring a set of consistent image maps to represent the color of the surface of an object.
Lanier, Jaron Z.; Grimaud, Jean-Jacques G.; Harvill, Young L.; Lasko-Harvill, Ann; Blanchard, Chuck L.; Oberman, Mark L., Method and system for generating objects for a multi-person virtual world using data flow networks.
Rafii, Abbas; Bamji, Cyrus; Sze, Cheng-Feng; Torunoglu, Iihami, Methods for enhancing performance and data acquired from three-dimensional image systems.
Bang, Won chul; Kim, Dong yoon; Chang, Wook; Kang, Kyoung ho; Choi, Eun seok, Spatial motion recognition system and method using a virtual handwriting plane.
Segawa, Hiroyuki; Hiraki, Norikazu; Shioya, Hiroyuki; Abe, Yuichi, Three-dimensional model processing device, three-dimensional model processing method, program providing medium.
Albertson, Jacob C.; Arnold, Kenneth C.; Goldman, Steven D.; Paolini, Michael A.; Sessa, Anthony J., Tracking a range of body movement based on 3D captured image streams of a user.
Ellenby, John; Ellenby, Thomas; Ellenby, Peter, Vision system computer modeling apparatus including interaction with real scenes with respect to perspective and spatial relationship as measured in real-time.
Hazlewood, William R.; Blackburn, Jenny Ann; Galore, Janet Ellen; Ong, Timothy Andrew; Ramos, Gonzalo Alberto, Forming a representation of an item with light.