DEPTH PROJECTOR SYSTEM WITH INTEGRATED VCSEL ARRAY
IPC Classification Information
Country/Type: United States (US) Patent, Published
International Patent Classification (IPC 7th edition): G06K-009/00; H04N-013/02
Application Number: US-0643114 (2009-12-21)
Publication Number: US-0051588 (2012-03-01)
Inventor / Address: McEldowney, Scott
Applicant / Address: MICROSOFT CORPORATION
Citation Information: cited by 0 patents; cites 0 patents
Abstract
A projector is disclosed for use in a 3-D imaging device. The projector includes a light source formed of a vertical-cavity surface-emitting laser, or VCSEL, array. The VCSEL array provides a light source for illuminating a capture area. Light from the VCSEL array is reflected off of objects in the capture area and received within a sensing device such as a 3-D camera. The projector may further include a collimating lens array for focusing the light emitted from each VCSEL in the array, as well as a diffractive optical element (DOE) for patterning the light from the collimating lens array to enable the sensing device to generate a 3-D image of the objects in the capture area.
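In a structured-light system like the one described, depth is typically recovered by triangulation: a patterned dot projected by the DOE appears shifted (disparity) in the camera image, and depth follows from Z = f·B/d. The sketch below illustrates that relationship under an assumed pinhole model; the focal length, baseline, and disparity values are illustrative and not taken from the patent.

```python
# Minimal sketch of structured-light depth recovery by triangulation.
# Assumptions (not from the patent): pinhole camera model, a known
# projector-camera baseline, and a disparity measured in pixels for
# one projected pattern dot against its reference position.

def depth_from_disparity(focal_length_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Triangulated depth Z = f * B / d for a single pattern dot."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Example: with a 600 px focal length and a 75 mm baseline, a dot
# shifted 30 px from its reference position lies at 1.5 m depth.
z = depth_from_disparity(600.0, 0.075, 30.0)
print(round(z, 3))  # 1.5
```

Note the inverse relationship: halving the observed disparity doubles the computed depth, which is why structured-light precision degrades with range.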
Representative Claims
1. A depth capturing device for providing a depth image of a capture area, comprising: a projector having a VCSEL array for providing a light source illuminating the capture area; an imaging device for receiving light reflected back from one or more objects in the capture area; and a processor for processing light received by the imaging device into a 3-D image of the one or more objects in the capture area.

2. The depth capturing device of claim 1, the projector further comprising: a lens array, one lens for each VCSEL in the VCSEL array; and a diffractive optical element for patterning the light received from the lens array.

3. The depth capturing device of claim 2, the diffractive optical element patterning the light received from the lens array into a pattern enabling 3-D imaging of the one or more objects by a structured light technique.

4. A system for recognizing, analyzing and tracking at least one user in a capture area, comprising: a projector having a VCSEL array for providing a light source illuminating the capture area; at least one depth camera providing a depth image of the capture area; at least one RGB camera providing an RGB image of the capture area; and at least one processor that receives the depth image and the RGB image and that processes the depth image and the RGB image to recognize the at least one user and to track movement of the at least one user over time.

5. The system of claim 4, the projector further comprising: a lens array, one lens for each VCSEL in the VCSEL array; and a diffractive optical element for patterning the light received from the lens array.

6. The system of claim 5, the at least one processor processing the depth image and the RGB image using one of a structured light process and a time-of-flight process.

7. The system of claim 4, the system further comprising a library for storing predefined gestures, the system capable of matching user movement recognized by the system to a predefined gesture stored in the library.

8. The system of claim 4, further comprising a housing that houses the at least one depth camera, the at least one RGB camera, the at least one microphone and the at least one processor.

9. The system of claim 4, wherein the at least one processor generates a skeletal model of the user based at least in part on the depth image.

10. The system of claim 9, wherein the at least one processor uses the skeletal model derived from the depth image to track movement of the user over time.

11. The system of claim 10, wherein when the at least one processor is unable to track movement of the user from the depth image, the processor uses the RGB image to supplement the depth image.

12. The system of claim 4, wherein the movement of the user is tracked over time based at least in part on known mechanics of the human muscular-skeletal system.

13. The system of claim 12, wherein the at least one processor generates a motion capture file of the movements of the user in real-time based on the tracked model.

14. The system of claim 13, wherein the at least one processor applies the motion capture file to an avatar.

15. The system of claim 4, further comprising at least one directional microphone, wherein the sound information provided by the at least one microphone is used by the at least one processor to distinguish between a plurality of users in the capture area based on recognition of each user's voice.

16. A method for tracking a human user in a capture area, comprising: illuminating the capture area with light emitted from a VCSEL array and patterned by a diffractive optical element; receiving from at least one depth camera a depth image of the capture area illuminated by the VCSEL array; receiving from at least one RGB camera an RGB image of the capture area; and recognizing and tracking the movement of the user in the capture area over time based on the depth image and the RGB image.

17. The method of claim 16, wherein the method is performed by at least one processor disposed within a housing together with the at least one VCSEL array, depth camera, the at least one microphone, and the at least one RGB camera.

18. The method of claim 16, further comprising the step of receiving from at least one microphone information about sound emanating from the capture area, and wherein said recognizing and tracking comprises recognizing and tracking the movement of the user in the capture area over time based on a combination of at least two of the depth image, the RGB image and the sound information.

19. The method of claim 16, further comprising the step of generating a skeletal model of the user from the depth image and using the skeletal model to track movement of the user over time based at least in part on known mechanics of the human muscular-skeletal system.

20. The method of claim 19, further comprising the step of recognizing a predefined gesture of the user based on the tracked movement of the user and a library of stored, predefined gestures.
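Claims 7 and 20 describe matching tracked user movement against a library of stored, predefined gestures. One simple way to realize such matching is to compare a sequence of tracked 3-D joint samples against each stored template and accept the nearest template within a distance threshold. The sketch below is only illustrative: the joint coordinates, gesture names, and threshold are assumptions, not details from the patent.

```python
import math

# Illustrative sketch: matching a tracked motion sequence to a gesture
# library, as in claims 7 and 20. All data values are hypothetical.

def gesture_distance(track, template):
    """Mean Euclidean distance between corresponding 3-D joint samples."""
    total = sum(math.dist(a, b) for a, b in zip(track, template))
    return total / len(template)

def match_gesture(track, library, threshold=0.1):
    """Return the name of the closest stored gesture within threshold,
    or None when no stored gesture is close enough."""
    best_name, best_d = None, threshold
    for name, template in library.items():
        d = gesture_distance(track, template)
        if d < best_d:
            best_name, best_d = name, d
    return best_name

# Hypothetical library with one two-sample "wave" gesture.
library = {"wave": [(0.0, 1.0, 2.0), (0.1, 1.1, 2.0)]}
track = [(0.0, 1.0, 2.0), (0.1, 1.1, 2.0)]
print(match_gesture(track, library))  # wave
```

A production recognizer would normalize for user position and scale and use time-warped comparison rather than sample-by-sample distance, but the thresholded nearest-template lookup captures the library-matching idea in the claims.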