Array cameras incorporating optics with modulation transfer functions greater than sensor Nyquist frequency for capture of images used in super-resolution processing
IPC Classification
Country/Type
United States (US) Patent
Status: Granted
International Patent Classification (IPC, 7th edition)
G02B-027/10
H04N-005/232
G02B-013/00
G06T-007/00
H04N-005/225
H04N-013/02
G02B-003/00
Application Number
US-0705919 (2015-05-06)
Registration Number
US-9578237 (2017-02-21)
Inventors / Address
Duparre, Jacques
Lelescu, Dan
Venkataraman, Kartik
Applicant / Address
FotoNation Cayman Limited
Agent / Address
KPPB LLP
Citation Information
Cited by: 28 patents / Cites: 173 patents
Abstract
A variety of optical arrangements, and methods of modifying or enhancing their optical characteristics and functionality, are provided. The optical arrangements are specifically designed to operate with camera arrays that incorporate an imaging device formed of a plurality of imagers, each of which includes a plurality of pixels. The plurality of imagers include a first imager having first imaging characteristics and a second imager having second imaging characteristics. The images generated by the plurality of imagers are processed to obtain an image enhanced relative to the images captured by the individual imagers. In many optical arrangements, the MTF characteristics of the optics provide contrast at spatial frequencies that are at least as great as the desired resolution of the high resolution images synthesized by the array camera and significantly greater than the Nyquist frequency defined by the pixel pitch of the pixels on the focal plane; in some cases this may be 1.5, 2, or 3 times the Nyquist frequency.
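The sensor Nyquist frequency the abstract refers to follows directly from the pixel pitch, f_Ny = 1/(2·pitch). A minimal sketch of that relationship and of the 1.5×/2×/3× optical-MTF margins described above; the 1.4 µm pitch is an illustrative value, not taken from the patent:

```python
# Nyquist frequency of a sensor is set by its pixel pitch: f_Ny = 1 / (2 * pitch).
# The pitch below (1.4 um) is illustrative only, not a value from the patent.

def nyquist_freq_cycles_per_mm(pixel_pitch_um: float) -> float:
    """Nyquist frequency in cycles/mm for a given pixel pitch in micrometers."""
    pitch_mm = pixel_pitch_um / 1000.0
    return 1.0 / (2.0 * pitch_mm)

pitch_um = 1.4  # illustrative pixel pitch
f_ny = nyquist_freq_cycles_per_mm(pitch_um)
print(f"Nyquist frequency: {f_ny:.1f} cycles/mm")

# The abstract calls for optics whose MTF retains contrast well beyond f_Ny,
# in some cases at 1.5x, 2x, or 3x the Nyquist frequency:
for factor in (1.5, 2.0, 3.0):
    print(f"{factor}x Ny: {factor * f_ny:.1f} cycles/mm")
```

For a 1.4 µm pitch this gives roughly 357 cycles/mm at Nyquist, so the 3× requirement would demand optical contrast past 1000 cycles/mm.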
Representative Claims
1. An array camera, comprising: a plurality of cameras, where each camera includes separate optics and a plurality of light sensing elements; a processor; wherein the optics of each of the plurality of cameras are formed so that each camera has a field of view that is shifted with respect to the fields of view of the other cameras and so that each shift includes a sub-pixel shifted view of the scene; wherein the light sensing elements of a given camera in the plurality of cameras have a pixel pitch defining a camera Nyquist frequency, and where the optics of the given camera have a modulation transfer function (MTF) such that the optics optically resolve contrast at spatial frequencies higher than the camera Nyquist frequency (Ny); wherein the optics of each camera in the plurality of cameras comprise a five-surface optical arrangement comprising: a first lens element having a first convex proximal surface and a first concave distal surface, wherein the diameter of the first convex surface is larger than the diameter of the first concave surface; a second lens element having a second concave proximal surface and a second convex distal surface, wherein the diameter of the second concave proximal surface is smaller than the diameter of the second convex surface; and a third lens element having a third concave proximal surface and a third planar distal surface, wherein the diameter of the third concave proximal surface is larger than the diameters of any of the surfaces of the first and second lens elements; wherein the first, second, and third lens elements are arranged sequentially in optical alignment with an imager positioned at the distal end thereof; and wherein software directs the processor to: obtain a set of low resolution images from the plurality of cameras, where each of the low resolution images includes aliasing patterns; determine disparity between pixels in the set of low resolution images to generate a depth map from a reference viewpoint, where the depth map indicates distances to surfaces of scene objects from the reference viewpoint; and synthesize a high resolution image using the set of images and the depth map, where the spatial frequency at which the high resolution image displays contrast is greater than the camera Nyquist frequencies (Ny) of the plurality of cameras and less than the spatial frequencies at which the optics of each camera optically resolve contrast.

2. The array camera of claim 1, wherein the aliasing patterns in each image in the set of low resolution images include differences due to the different sub-pixel shifted views of the scene provided by the optics of the plurality of cameras.

3. The array camera of claim 1, wherein the software further directs the processor to synthesize a high resolution image by: determining scene dependent geometric corrections to apply to the pixels from each of the images within the set of low resolution images to eliminate disparity; and fusing the set of low resolution images using the scene dependent geometric corrections.

4. The array camera of claim 3, wherein the software further directs the processor to perform super resolution processing to reconstruct the high resolution image using the fused image, the scene dependent geometric corrections, and the set of low resolution images.

5. The array camera of claim 1, wherein the MTF of the optics of a given camera in the plurality of cameras is such that the optics optically resolve contrast at spatial frequencies at least 1.5 times the camera Nyquist frequency Ny.

6. The array camera of claim 1, wherein the MTF of the optics of a given camera in the plurality of cameras is such that the optics optically resolve contrast at spatial frequencies at least 2 times the camera Nyquist frequency Ny.

7. The array camera of claim 1, wherein the MTF of the optics of a given camera in the plurality of cameras is such that the optics optically resolve contrast at spatial frequencies at least 3 times the camera Nyquist frequency Ny.

8. The array camera of claim 1, wherein the MTF of the optics of a given camera in the plurality of cameras is such that the optics optically resolve contrast at spatial frequencies at least 10% greater than the camera Nyquist frequency Ny multiplied by the ratio of the resolution of the high resolution image to the resolution of the images in the set of low resolution images.

9. The array camera of claim 1, wherein the MTF of the optics of a given camera in the plurality of cameras is such that the optics optically resolve contrast at spatial frequencies at least 20% greater than the camera Nyquist frequency Ny multiplied by the ratio of the resolution of the high resolution image to the resolution of the images in the set of low resolution images.

10. The array camera of claim 1, wherein the MTF of the optics of a given camera in the plurality of cameras is such that the optics optically resolve contrast at spatial frequencies at least 30% greater than the camera Nyquist frequency Ny multiplied by the ratio of the resolution of the high resolution image to the resolution of the images in the set of low resolution images.

11. The array camera of claim 1, wherein the camera array is a monolithic integrated module comprising a single semiconductor substrate on which all of the sensor elements are formed, and optics including a plurality of lens elements, where each lens element forms part of the separate optics for one of the cameras.

12. The array camera of claim 1, wherein each of the cameras includes one of a plurality of different types of filter.

13. The array camera of claim 12, wherein cameras having the same type of filter are uniformly distributed about the geometric center of the camera array.

14. The array camera of claim 1, wherein cameras that include different types of filter operate with different operating parameters.

15. An array camera, comprising: a plurality of cameras, where each camera includes separate optics and a plurality of light sensing elements; a processor; wherein the optics of each of the plurality of cameras are formed so that each camera has a field of view that is shifted with respect to the fields of view of the other cameras and so that each shift includes a sub-pixel shifted view of the scene; wherein the light sensing elements of a given camera in the plurality of cameras have a pixel pitch defining a camera Nyquist frequency, and where the optics of the given camera have a modulation transfer function (MTF) such that the optics optically resolve contrast at spatial frequencies higher than the camera Nyquist frequency (Ny); wherein the optics of each camera in the plurality of cameras comprise a five-surface optical arrangement comprising: a first lens element having a first convex proximal surface and a first concave distal surface, wherein the diameter of the first convex surface is larger than the diameter of the first concave surface; a second lens element having a second concave proximal surface and a second convex distal surface, wherein the diameter of the second concave proximal surface is smaller than the diameter of the second convex surface; and a third lens element having a third concave proximal surface and a third planar distal surface, wherein the diameter of the third concave proximal surface is larger than the diameters of any of the surfaces of the first and second lens elements; wherein the first, second, and third lens elements are arranged sequentially in optical alignment with an imager positioned at the distal end thereof; and wherein software directs the processor to: obtain a set of low resolution images from the plurality of cameras, where each of the low resolution images includes different aliasing patterns due to the different sub-pixel shifted views of the scene provided by the optics of the plurality of cameras; determine disparity between pixels in the set of low resolution images to generate a depth map from a reference viewpoint, where the depth map indicates distances to surfaces of scene objects from the reference viewpoint; and synthesize a high resolution image using the set of images and the depth map by: determining scene dependent geometric corrections to apply to the pixels from each of the images within the set of low resolution images to eliminate disparity; fusing the set of low resolution images using the scene dependent geometric corrections; and performing super resolution processing to reconstruct the high resolution image using the fused image, the scene dependent geometric corrections, and the set of low resolution images; wherein the spatial frequency at which the high resolution image displays contrast is greater than the camera Nyquist frequencies (Ny) of the plurality of cameras and less than the spatial frequencies at which the optics of each camera optically resolve contrast.

16. An array camera, comprising: a plurality of cameras, where each camera includes separate optics and a plurality of light sensing elements; a processor; wherein the optics of each of the plurality of cameras are formed so that each camera has a field of view that is shifted with respect to the fields of view of the other cameras and so that each shift includes a sub-pixel shifted view of the scene; wherein the light sensing elements of a given camera in the plurality of cameras have a pixel pitch defining a camera Nyquist frequency, and where the optics of the given camera have a modulation transfer function (MTF) such that the optics optically resolve contrast at spatial frequencies higher than the camera Nyquist frequency (Ny); wherein the optics of each camera in the plurality of cameras comprise a three-element monolithic lens optical arrangement comprising: a first lens element having a first convex proximal surface and a first plano distal surface; a second lens element having a second concave proximal surface and a second convex distal surface; a third meniscus lens element having a third concave proximal surface and a third convex distal surface; and at least one aperture disposed on the first plano distal surface; wherein the first, second, and third lens elements are arranged sequentially in optical alignment with the aperture stop and an imager; and wherein software directs the processor to: obtain a set of low resolution images from the plurality of cameras, where each of the low resolution images includes different aliasing patterns due to the different sub-pixel shifted views of the scene provided by the optics of the plurality of cameras; determine disparity between pixels in the set of low resolution images to generate a depth map from a reference viewpoint, where the depth map indicates distances to surfaces of scene objects from the reference viewpoint; and synthesize a high resolution image using the set of images and the depth map by: determining scene dependent geometric corrections to apply to the pixels from each of the images within the set of low resolution images to eliminate disparity; fusing the set of low resolution images using the scene dependent geometric corrections; and performing super resolution processing to reconstruct the high resolution image using the fused image, the scene dependent geometric corrections, and the set of low resolution images; wherein the spatial frequency at which the high resolution image displays contrast is greater than the camera Nyquist frequencies (Ny) of the plurality of cameras and less than the spatial frequencies at which the optics of each camera optically resolve contrast.

17. An array camera, comprising: a plurality of cameras, where each camera includes separate optics and a plurality of light sensing elements; a processor; wherein the optics of each of the plurality of cameras are formed so that each camera has a field of view that is shifted with respect to the fields of view of the other cameras and so that each shift includes a sub-pixel shifted view of the scene; wherein the light sensing elements of a given camera in the plurality of cameras have a pixel pitch defining a camera Nyquist frequency, and where the optics of the given camera have a modulation transfer function (MTF) such that the optics optically resolve contrast at spatial frequencies higher than the camera Nyquist frequency (Ny); wherein the optics of each camera in the plurality of cameras comprise a three-element monolithic lens optical arrangement comprising: a first lens element having a first convex proximal surface and a first plano distal surface; a second lens element having a second concave proximal surface and a second convex distal surface; a third meniscus lens element having a third concave proximal surface and a third convex distal surface; and at least one aperture disposed on the first plano distal surface; wherein the first, second, and third lens elements are arranged sequentially in optical alignment with the aperture stop and an imager; and wherein software directs the processor to: obtain a set of low resolution images from the plurality of cameras, where each of the low resolution images includes aliasing patterns; determine disparity between pixels in the set of low resolution images to generate a depth map from a reference viewpoint, where the depth map indicates distances to surfaces of scene objects from the reference viewpoint; and synthesize a high resolution image using the set of images and the depth map, where the spatial frequency at which the high resolution image displays contrast is greater than the camera Nyquist frequencies (Ny) of the plurality of cameras and less than the spatial frequencies at which the optics of each camera optically resolve contrast.
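The claimed processing pipeline (sub-pixel-shifted low-resolution captures, disparity-driven geometric correction, fusion, super-resolution reconstruction) can be illustrated with a toy shift-and-add fusion. This is a simplified stand-in, not the patent's algorithm: the half-pixel shifts are assumed known exactly here, whereas the claims estimate them via disparity and a depth map.

```python
import numpy as np

# Toy shift-and-add fusion: four low-resolution views of a scene, each offset
# by a known half-pixel shift, are interleaved onto a 2x-resolution grid.
# Stand-in for the claimed pipeline (disparity -> depth map -> geometric
# correction -> fusion -> super resolution); NOT the patent's algorithm,
# since real array cameras must estimate the shifts from parallax.

def fuse_shift_and_add(views: dict) -> np.ndarray:
    """Interleave four (dy, dx)-shifted low-res views into one 2x image."""
    h, w = views[(0, 0)].shape
    high = np.zeros((2 * h, 2 * w))
    for (dy, dx), img in views.items():  # dy, dx in {0, 1}: half-pixel offsets
        high[dy::2, dx::2] = img
    return high

# Synthetic scene: sample a "ground truth" high-res image at four offsets,
# mimicking four cameras with sub-pixel shifted fields of view.
scene = np.arange(64, dtype=float).reshape(8, 8)
views = {(dy, dx): scene[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)}
restored = fuse_shift_and_add(views)
print(np.array_equal(restored, scene))  # exact recovery under ideal sampling
```

With ideal sampling and exactly known shifts the high-resolution grid is recovered perfectly; the claims' disparity estimation and super-resolution reconstruction exist precisely because real shifts are scene dependent and must be recovered per pixel.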
Patents cited by this patent (173)
Wilburn, Bennett; Joshi, Neel; Levoy, Marc C.; Horowitz, Mark, Apparatus and method for capturing a scene using staggered triggering of dense camera arrays.
Iwase, Toshihiro; Kanekura, Hiroshi, Apparatus for and method of converting a sampling frequency according to a data driven type processing.
Boisvert, David Michael; McMahon, Andrew Kenneth John, CCD output processing stage that amplifies signals from colored pixels based on the conversion efficiency of the colored pixels.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Capturing and processing of images using monolithic camera array with heterogeneous imagers.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images using monolithic camera array with heterogeneous imagers.
Yamashita, Syugo; Murata, Haruhiko; Iinuma, Toshiya; Nakashima, Mitsuo; Mori, Takayuki, Device and method for converting two-dimensional video to three-dimensional video.
Ward, Gregory John; Seetzen, Helge; Heidrich, Wolfgang, Electronic camera having multiple sensors for capturing high dynamic range images and related methods.
Abell, Gurdon R.; Cook, Francis J.; Howes, Peter D., Method and apparatus for arraying image sensor modules.
Sawhney, Harpreet Singh; Tao, Hai; Kumar, Rakesh; Hanna, Keith, Method and apparatus for synthesizing new video and/or still imagery from a collection of real video and/or still imagery.
Burt, Peter J.; van der Wal, Gooitzen S.; Kolczynski, Raymond J.; Hingorani, Rajesh, Method for fusing images and apparatus therefor.
Burt, Peter J.; van der Wal, Gooitzen S.; Kolczynski, Raymond J.; Hingorani, Rajesh, Method for fusing images and apparatus therefor.
Han, Hee-chul; Choi, Yang-lim; Cho, Seung-ki, Method of generating image data by an image device including a plurality of lenses and apparatus for generating image data.
Alexander, David H.; Hershman, George H.; Jack, Michael D.; Koda, N. John; Lloyd, Randahl B., Monolithic imager for near-IR.
Doering, Hans-Joachim; Heinitz, Joachim, Multi-beam modulator for a particle beam and use of the multi-beam modulator for the maskless structuring of a substrate.
Hornbaker, Cecil V., III; Driggers, Thomas C.; Bindon, Edward W., Scanning apparatus using multiple CCD arrays and related method.
Lelescu, Dan; Molina, Gabriel; Venkataraman, Kartik, Systems and methods for dynamic refocusing of high resolution images generated using images captured by a plurality of imagers.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for parallax detection and correction in images captured using array cameras that contain occlusions using subsets of images to perform depth estimation.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for parallax measurement using camera arrays incorporating 3 x 3 camera configurations.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for performing depth estimation using image data from multiple spectral channels.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for performing post capture refocus using images captured by camera arrays.
Ludwig, Lester F., Vignetted optoelectronic array for use in synthetic image formation via signal processing, lensless cameras, and integrated camera-displays.
Rieger, Albert; Barclay, David; Chapman, Steven; Kellner, Heinz-Andreas; Reibl, Michael; Rydelek, James G.; Schweizer, Andreas, Watertight body for accommodating a photographic camera.
Venkataraman, Kartik; Gallagher, Paul; Jain, Ankit K.; Nisenzon, Semyon; Lelescu, Dan; Ciurea, Florian; Molina, Gabriel, Autofocus system for a conventional camera that uses depth information from an array camera.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images including occlusions focused on an image sensor by a lens stack array.
Srikanth, Manohar; Ramamoorthi, Ravi; Venkataraman, Kartik; Chatterjee, Priyam, System and methods for depth regularization and semiautomatic interactive matting using RGB-D images.
Nayar, Shree; Venkataraman, Kartik; Pain, Bedabrata; Lelescu, Dan, Systems and methods for controlling aliasing in images captured by an array camera for use in super resolution processing using pixel apertures.
Duparre, Jacques; McMahon, Andrew Kenneth John; Lelescu, Dan; Venkataraman, Kartik; Molina, Gabriel, Systems and methods for detecting defective camera arrays and optic arrays.
Venkataraman, Kartik; Lelescu, Dan; Molina, Gabriel, Systems and methods for generating compressed light field representation data using captured light fields, array geometry, and parallax information.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for generating depth maps using a camera arrays incorporating monochrome and color cameras.
Duparre, Jacques; McMahon, Andrew Kenneth John; Lelescu, Dan, Systems and methods for manufacturing camera modules using active alignment of lens stack arrays and sensors.