Lens arrays for pattern projection and imaging
IPC Classification Information
Country/Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.): H04N-005/232; H04N-005/225
Application Number: US-0311584 (2011-12-06)
Registration Number: US-9131136 (2015-09-08)
Inventors / Address: Shpunt, Alexander; Pesach, Benny
Applicant / Address: APPLE INC.
Agent / Address: D. Kligler I.P. Services Ltd.
Citation Information: cited by 5 patents; cites 90 patents
Abstract
A method for imaging includes focusing optical radiation so as to form respective first and second optical images of a scene on different, respective first and second regions of an array of detector elements. The focused optical radiation is filtered with different, respective first and second passbands for the first and second regions. A difference is taken between respective first and second input signals provided by the detector elements in the first and second regions so as to generate an output signal indicative of the difference.
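The differencing scheme the abstract describes can be sketched in a few lines: one sensor region sees the scene through the first passband (pattern plus ambient light), the other through the second passband (ambient only), and subtracting the two isolates the pattern. The sketch below is illustrative only; the function name, region layout, and the use of NumPy are assumptions, not part of the patent.

```python
import numpy as np

def subtract_ambient(frame, first_region, second_region):
    """Subtract the ambient-background estimate (second region, out-of-band)
    from the pattern image (first region, in-band), as in the abstract.

    frame         -- 2-D array of raw detector values
    first_region  -- (row_slice, col_slice) covering the in-band image
    second_region -- (row_slice, col_slice) covering the ambient-only image
    """
    pattern = frame[first_region].astype(np.int32)
    ambient = frame[second_region].astype(np.int32)
    # The second region may be narrower than the first (fewer columns);
    # tile the ambient estimate to match the pattern shape before differencing.
    ambient = np.resize(ambient, pattern.shape)
    return np.clip(pattern - ambient, 0, None)

# Toy example: 4x6 sensor; left 4 columns in-band, right 2 columns ambient-only.
frame = np.full((4, 6), 10)
frame[:, :4] += 5  # projected pattern adds signal only in the first passband
out = subtract_ambient(frame, (slice(None), slice(0, 4)),
                       (slice(None), slice(4, 6)))
print(out)  # pattern component isolated: a 4x4 array of 5s
```

A hardware subtracter (claim 5) would do the same differencing in analog on-chip rather than on digital pixel values.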
Representative Claims
1. An imaging apparatus, comprising: an image sensor, comprising an array of detector elements arranged in rows and columns; objective optics, which are configured to focus optical radiation and are positioned so as to form respective first and second optical images of a scene on different, respective first and second regions of the array comprising respective first and second numbers of columns of the detector elements, such that the first number is greater than the second number, while the first and second regions have equal numbers of rows, so that the first region is wider than and has a different shape from the second region; first and second optical filters, having different respective first and second passbands, which are positioned so as to filter the optical radiation focused by the first and second lenses onto the first and second regions, respectively; and a subtracter, which is coupled to take a difference between respective first and second input signals provided by the detector elements in the first and second regions and to generate an output signal indicative of the difference.

2. The apparatus according to claim 1, wherein the objective optics are arranged so that the first and second optical images contain a common field of view.

3. The apparatus according to claim 1, wherein the objective optics comprise first and second lenses, which are configured to form the first and second optical images, respectively.

4. The apparatus according to claim 1, wherein the subtracter is configured to take the difference by subtracting digital pixel values from the first and second regions.

5. The apparatus according to claim 1, wherein the image sensor comprises an integrated circuit chip, and wherein the subtracter comprises an analog component on the chip.

6. The apparatus according to claim 1, and comprising a projection module, which is configured to project a pattern onto the scene at a wavelength in the first passband, while the optical radiation focused by the objective optics comprises ambient background radiation in both the first and second passbands, whereby the second input signal provides an indication of a level of the ambient background radiation for subtraction from the first input signal.

7. The apparatus according to claim 6, and comprising a processor, which is configured to process the output signal so as to generate a depth map of the scene responsively to the pattern appearing in the first optical image.

8. An imaging apparatus, comprising: an image sensor, comprising an array of detector elements; a plurality of lenses, which are configured to form respective optical images of different, respective portions of a scene on different, respective regions of the array along respective optical axes, wherein the lenses comprise at least: a single first lens, which is configured to form a first optical image on a first region of the array of the detector elements along a first optical axis; and a single second lens, which is configured to form a second optical image on a second region of the array of the detector elements along a second optical axis, which is different from the first optical axis; and diverting elements, which are fixed to respective surfaces of at least two of the lenses and are configured to deflect the respective optical axes of the at least two of the lenses angularly outward in different, respective directions relative to a center of the image sensor, wherein the diverting elements comprise: a first diverting element, which is fixed to the single first lens and is configured to deflect the first optical axis angularly outward in a first direction relative to a center of the image sensor so as to image a first field of view onto the first region; and a second diverting element, which is fixed to the single second lens and is configured to deflect the second optical axis angularly outward in a different, second direction relative to the center of the image sensor so as to image a second field of view, different from the first field of view, onto the second region.

9. The apparatus according to claim 8, wherein the diverting elements comprise diffractive patterns that are fabricated on the respective surfaces of the at least two of the lenses.

10. The apparatus according to claim 9, wherein the diffractive patterns define Fresnel prisms.

11. The apparatus according to claim 8, wherein the apparatus comprises a processor, which is configured to process an output of the image sensor so as to generate an electronic image having a combined field of view encompassing the different, respective portions of the scene whose optical images are formed by the lenses.

12. The apparatus according to claim 11, and comprising a projection module, which is configured to project a pattern onto the scene, wherein the processor is configured to process the electronic image so as to generate a depth map of the scene responsively to the pattern appearing in the optical images of the respective portions of the scene.

13. The apparatus according to claim 8, wherein the lenses comprise a third lens, which is configured to form a third optical image along a third optical axis on a third region of the array of detector elements, such that the third region is located between the first and second regions and the third optical axis is located between the first and second optical axes, and wherein the third optical axis is undeflected, whereby the third lens images a third field of view, intermediate the first and second fields of view, onto the third region.

14. A method for imaging, comprising: focusing optical radiation so as to form respective first and second optical images of a scene on different, respective first and second regions of an array of detector elements arranged in rows and columns, wherein the first and second regions comprise respective first and second numbers of columns of the detector elements, such that the first number is greater than the second number, while the first and second regions have equal numbers of rows, so that the first region is wider than and has a different shape from the second region; filtering the focused optical radiation with different, respective first and second passbands for the first and second regions; and taking a difference between respective first and second input signals provided by the detector elements in the first and second regions so as to generate an output signal indicative of the difference.

15. The method according to claim 14, wherein focusing the optical radiation comprises forming the first and second optical images so as to contain a common field of view.

16. The method according to claim 14, wherein focusing the optical radiation comprises positioning first and second lenses to form the first and second optical images, respectively.

17. The method according to claim 14, wherein taking the difference comprises subtracting digital pixel values from the first and second regions.

18. The method according to claim 14, wherein the array of the detector elements is formed on an integrated circuit chip, and wherein taking the difference comprises subtracting the input signals using an analog component on the chip.

19. The method according to claim 14, and comprising projecting a pattern onto the scene at a wavelength in the first passband, while the optical radiation focused by the objective optics comprises ambient background radiation in both the first and second passbands, whereby the second input signal provides an indication of a level of the ambient background radiation for subtraction from the first input signal.

20. The method according to claim 19, and comprising processing the output signal so as to generate a depth map of the scene responsively to the pattern appearing in the first optical image.

21. A method for imaging, comprising: positioning a plurality of lenses to form respective optical images of different, respective portions of a scene on different, respective regions of an array of detector elements along respective optical axes of the lenses, wherein the lenses comprise at least: a single first lens, which is configured to form a first optical image on a first region of the array of the detector elements along a first optical axis; and a single second lens, which is configured to form a second optical image on a second region of the array of the detector elements along a second optical axis, which is different from the first optical axis; and fixing diverting elements to respective surfaces of at least two of the lenses so as to deflect the respective optical axes of the at least two of the lenses angularly outward in different, respective directions relative to a center of the array, wherein the diverting elements comprise: a first diverting element, which is fixed to the single first lens and is configured to deflect the first optical axis angularly outward in a first direction relative to a center of the image sensor so as to image a first field of view onto the first region; and a second diverting element, which is fixed to the single second lens and is configured to deflect the second optical axis angularly outward in a different, second direction relative to the center of the image sensor so as to image a second field of view, different from the first field of view, onto the second region.

22. The method according to claim 21, wherein fixing the diverting elements comprises fabricating diffractive patterns on the respective surfaces of the at least two of the lenses.

23. The method according to claim 22, wherein the diffractive patterns define Fresnel prisms.

24. The method according to claim 21, wherein the method comprises processing an output of the image sensor so as to generate an electronic image having a combined field of view encompassing the different, respective portions of the scene whose optical images are formed by the lenses.

25. The method according to claim 24, and comprising projecting a pattern onto the scene, and processing the electronic image so as to generate a depth map of the scene responsively to the pattern appearing in the optical images of the respective portions of the scene.
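Claims 8-13 describe lenses whose diverting elements steer their optical axes outward so that each lens images a different slice of the scene onto its own sensor region, and a processor stitches the regions into one electronic image with a combined field of view (claim 11). The sketch below illustrates only that stitching step; the function name, the equal-rows assumption, and the no-overlap simplification are illustrative assumptions, not the patent's method.

```python
import numpy as np

def combine_fields(regions):
    """Concatenate sub-images captured through laterally deflected lenses
    into one electronic image with a combined field of view.

    regions -- list of 2-D arrays, ordered left-to-right by the direction in
               which each lens's diverting element steers its optical axis.
    Assumes equal row counts and negligible overlap between adjacent fields;
    a real system would calibrate the deflection angles and blend the seams.
    """
    rows = {r.shape[0] for r in regions}
    if len(rows) != 1:
        raise ValueError("all regions must share the same number of rows")
    return np.hstack(regions)

# Three 4x8 sensor regions (left-deflected, undeflected center as in claim 13,
# right-deflected) combine into one 4x24 wide-field image.
left, center, right = (np.full((4, 8), v) for v in (1, 2, 3))
wide = combine_fields([left, center, right])
print(wide.shape)  # (4, 24)
```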
Patents cited by this patent (90)
Matsumoto, Yukinori; Fujimura, Kota; Sugimoto, Kazuhide; Oue, Yasuhiro; Kitamura, Toru; Ota, Osamu, 3-D model providing device.
Binns, Bruce W.; Price, Rodney M.; Palm, Charles S.; Weaver, Suzanne E., Apparatus for interactive image correlation for three dimensional image production.
Brown, Daniel M.; Erbach, Peter Scott; Pezzaniti, Joseph Larry; Minamitani, Takahisa, Autostereoscopic display with rotated microlens and method of displaying multidimensional images, especially color images.
Rosakis, Ares J.; Singh, Ramen P.; Kolawa, Elizabeth; Moore, Nicholas R., Jr., Coherent gradient sensing method and system for measuring surface curvature.
Knighton, Mark S.; Agabra, David S.; McKinley, William D.; Zheng, John Z.; Drobnis, David D.; Logan, J. Douglas; Bahhour, Basel F.; Haynie, Jill E.; Vuong, Kevin H.; Tandon, Amit; Sidney, Kent E.; Diaconescu, Peter L., Digitizer using plural capture methods to image features of 3-D objects.
Jin, Young-gu; Cha, Dae-kil; Lee, Seung-hoon; Park, Yoon-dong, Distance measuring sensor including double transfer gate and three dimensional color image sensor including the distance measuring sensor.
Quarendon, Peter (GB), Image processing system and method for generating data representing a number of points in a three-dimensional space from a plurality of two-dimensional images of the space.
Harada, Toshiaki (JP); Iwaki, Tetsuo (JP); Yamada, Eiji (JP); Okuda, Tohru (JP), Imaging apparatus having a spatial filter and image shifting mechanism controller based on an image mode.
Link, Hans-Jorg (DE), Measuring unit for determining dimensions of test pieces, preferably of hollow bodies, in particular, of bores of workpieces, and method for measuring such dimensions.
Greivenkamp, John E., Jr. (Rochester, NY); Palum, Russell J. (Rochester, NY); Sullivan, Kevin G. (Ft. Myers, FL), Method and apparatus for absolute Moire distance measurements using a grating printed on or attached to a surface.
Rushmeier, Holly E.; Bernardini, Fausto, Method and apparatus for acquiring a set of consistent image maps to represent the color of the surface of an object.
Goncharov, Alexander F.; Struzhkin, Viktor V., Optical devices having a wavelength-tunable dispersion assembly that has a volume dispersive diffraction grating.
Michniewicz, Mark A.; Frazer, Matthew P., Optical method and system for measuring three-dimensional surface topography of an object having a surface contour.
Berenz, John J.; McIver, George W.; Niesen, Joseph W.; Dunbridge, Barry; Shreve, Gregory A., Optimized human presence detection through elimination of background interference.
Krueger, Myron W. (Vernon, CT); Hinrichsen, Katrin (Storrs, CT); Gionfriddo, Thomas S. (Storrs, CT), Real time perception of and response to the actions of an unencumbered participant/user.
Bass, Leland J.; Quick, Jr., Roy F.; Shah, Ashwin V.; Wickwire, Ralph O., System for electronically displaying portions of several different images on a CRT screen through respective prioritized viewports.
Bieman, Leonard H. (Farmington Hills, MI); Michniewicz, Mark A. (Milford, MI), System for optically measuring the surface contour of a part using moire fringe techniques.
Albertson, Jacob C.; Arnold, Kenneth C.; Goldman, Steven D.; Paolini, Michael A.; Sessa, Anthony J., Tracking a range of body movement based on 3D captured image streams of a user.
Rakuljic, George Anthony (Santa Monica, CA); Yariv, Amnon (San Marino, CA); Leyva, Victor (Los Angeles, CA); Sayano, Koichi (Montebello, CA); Tyler, Charles E. (Sunnyvale, CA), Wavelength stabilized laser sources using feedback from volume holograms.
Miller, John Michael; Wills, Gonzalo; Tian, Lu; O'Leary, Michael, Thin film total internal reflection diffraction grating for single polarization or dual polarization.
Zhu, Zhaoming; Trail, Nicholas Daniel; De Nardi, Renzo; Newcombe, Richard Andrew, Tileable structured light projection for wide field-of-view depth sensing.