Array cameras, and array camera modules incorporating independently aligned lens stacks are disclosed. Processes for manufacturing array camera modules including independently aligned lens stacks can include: forming at least one hole in at least one carrier; mounting the at least one carrier relative to at least one sensor so that light passing through the at least one hole in the at least one carrier is incident on a plurality of focal planes formed by arrays of pixels on the at least one sensor; and independently mounting a plurality of lens barrels to the at least one carrier, so that a lens stack in each lens barrel directs light through the at least one hole in the at least one carrier and focuses the light onto one of the plurality of focal planes.
Representative Claims
1. An array camera, comprising: a processor; memory containing an image capture application; an array camera module, comprising: at least one carrier in which at least one window is formed; at least one sensor mounted relative to the at least one carrier so that light passing through the at least one window in the at least one carrier is incident on a plurality of focal planes formed by at least one array of pixels on the at least one sensor; a plurality of lens barrels mounted to the at least one carrier, so that a lens stack in each lens barrel directs light through the at least one window in the at least one carrier and focuses the light onto one of the plurality of focal planes; and a module cap mounted over the lens barrels, where the module cap includes at least one opening that admits light into the lens stacks contained within the plurality of lens barrels; wherein the image capture application directs the processor to: trigger the capture of image data by the array camera module; obtain and store image data captured by the array camera module, where the image data forms a set of images captured from different viewpoints; select a reference viewpoint relative to the viewpoints of the set of images captured from different viewpoints; normalize the set of images to increase a similarity of corresponding pixels within the set of images; determine depth estimates for pixel locations in an image from the reference viewpoint using at least a subset of the set of images, wherein generating a depth estimate for a given pixel location in the image from the reference viewpoint comprises: identifying pixels in the at least a subset of the set of images that correspond to the given pixel location in the image from the reference viewpoint based upon expected disparity at a plurality of depths; comparing the similarity of the corresponding pixels identified at each of the plurality of depths; and selecting the depth from the plurality of depths at which the identified corresponding pixels have the highest degree of similarity as a depth estimate for the given pixel location in the image from the reference viewpoint.

2. The array camera of claim 1, wherein the at least one carrier is a single carrier.

3. The array camera of claim 2, wherein: each of the plurality of sensors is mounted to a first side of the single carrier; each of the plurality of lens barrels is mounted to a second opposite side of the single carrier; and the plurality of sensors comprises a separate sensor for each of the plurality of lens barrels.

4. The array camera of claim 2, wherein: the plurality of sensors is mounted to a substrate and the single carrier is mounted in a fixed location relative to the substrate; and the plurality of sensors is positioned proximate a first side of the single carrier and each of the plurality of lens barrels is mounted to a second opposite side of the single carrier.

5. The array camera of claim 2, wherein the at least one sensor is a single sensor.

6. The array camera of claim 5, wherein: the single sensor is mounted to a first side of the single carrier; and each of the plurality of lens barrels is mounted to a second opposite side of the single carrier.

7. The array camera of claim 5, wherein: the single sensor is mounted to a substrate and the single carrier is mounted in a fixed location relative to the substrate; and the single sensor is positioned proximate a first side of the single carrier and each of the plurality of lens barrels is mounted to a second opposite side of the single carrier.

8. The array camera of claim 1, wherein: the at least one sensor is mounted to a substrate and each of a plurality of carriers is mounted in a fixed location relative to the substrate; and each of the plurality of lens barrels is mounted to a separate carrier.

9. The array camera of claim 1, wherein: each lens barrel and corresponding focal plane forms a camera; different cameras within the array camera module image different parts of the electromagnetic spectrum; and the lens stacks contained within the lens barrels differ depending upon the portion of the electromagnetic spectrum imaged by the camera to which the lens barrel belongs.

10. The array camera of claim 1, wherein each lens stack in the lens barrels has a field of view that focuses light so that the plurality of arrays of pixels that form the focal planes sample the same object space within a scene.

11. The array camera of claim 10, wherein: the pixel arrays of the focal planes define spatial resolutions for each pixel array; the lens stacks focus light onto the focal planes so that the plurality of arrays of pixels that form the focal planes sample the same object space within a scene with sub-pixel offsets that provide sampling diversity; and the lens stacks have modulation transfer functions that enable contrast to be resolved at a spatial frequency corresponding to a higher resolution than the spatial resolutions of the pixel arrays.

12. The array camera of claim 11, wherein the image capture application further directs the processor to fuse pixels from the set of images using the depth estimates to create a fused image having a resolution that is greater than the resolutions of the images in the set of images by: determining the visibility of the pixels in the set of images from the reference viewpoint by: identifying corresponding pixels in the set of images using the depth estimates; and determining that a pixel in a given image is not visible in the reference viewpoint when the pixel fails a photometric similarity criterion determined based upon a comparison of corresponding pixels; and applying scene dependent geometric shifts to the pixels from the set of images that are visible in an image from the reference viewpoint to shift the pixels into the reference viewpoint, where the scene dependent geometric shifts are determined using the current depth estimates; and fusing the shifted pixels from the set of images to create a fused image from the reference viewpoint having a resolution that is greater than the resolutions of the images in the set of images.

13. The array camera of claim 12, wherein the image capture application further directs the processor to synthesize an image from the reference viewpoint by performing a super-resolution process based upon the fused image from the reference viewpoint, the set of images captured from different viewpoints, the current depth estimates, and visibility information.

14. The array camera of claim 1, wherein at least one spectral filter is mounted within at least one window in the at least one carrier.

15. The array camera of claim 14, wherein the at least one spectral filter is selected from the group consisting of a color filter and an IR-cut filter.

16. The array camera of claim 1, wherein at least one spectral filter is applied to an array of pixels forming a focal plane on at least one of the sensors.

17. The array camera of claim 1, wherein at least one lens stack includes at least one spectral filter.

18. The array camera of claim 1, wherein: the plurality of images comprises image data in multiple color channels; and the image capture application directs the processor to compare the similarity of pixels that are identified as corresponding at each of the plurality of depths by comparing the similarity of the pixels that are identified as corresponding in each of a plurality of color channels at each of the plurality of depths.

19. The array camera of claim 1, wherein the plurality of lens barrels and the plurality of focal planes form an M×N array of cameras.

20. The array camera of claim 19, wherein the plurality of lens barrels and the plurality of focal planes form a 3×3 array of cameras.

21. The array camera of claim 19, wherein the M×N array of cameras comprises a 3×3 group of cameras comprising: a central reference camera; four cameras that capture image data in a first color channel located in the four corners of the 3×3 group of cameras; a pair of cameras that capture image data in a second color channel located on either side of the central reference camera; and a pair of cameras that capture image data in a third color channel located on either side of the central reference camera.

22. The array camera of claim 21, wherein the reference camera is selected from the group consisting of: a camera including a Bayer filter; and a camera that captures image data in the first color channel.

23. The array camera of claim 1, wherein the array camera module further comprises an interface device in communication with the at least one sensor, where the interface device multiplexes data received from the sensors and provides an interface via which the processor reads multiplexed data and via which the processor controls the imaging parameters of the focal planes formed by the plurality of pixel arrays.

24. The array camera of claim 23, wherein: the interface device is mounted to the carrier and the carrier includes circuit traces that carry signals between the interface device and the at least one sensor; and a common clock signal coordinates the capture of image data by the at least one sensor and readout of the image data from the at least one sensor via the interface device.

25. The array camera of claim 23, wherein: the at least one sensor and the interface device are mounted to a substrate, which includes circuit traces that carry signals between the interface device and the at least one sensor; the at least one carrier is mounted in a fixed location relative to the at least one sensor; and a common clock signal coordinates the capture of image data by the at least one sensor and readout of the image data from the at least one sensor via the interface device.

26. The array camera of claim 1, wherein the module cap is mounted to the at least one carrier so that a small air gap exists between the module cap and the top of the lens barrels and a small bead of adhesive seals the air gaps between the module cap and the lens barrels.

27. The array camera of claim 1, wherein the carrier is constructed from a material selected from the group consisting of ceramic and glass.

28. An array camera, comprising: a processor; memory containing an image capture application; an array camera module, comprising: a carrier in which a plurality of windows are formed; a plurality of sensors each including an array of pixels, where the plurality of sensors are mounted relative to the carrier so that light passing through the plurality of windows is incident on a plurality of focal planes formed by the arrays of pixels; a plurality of lens barrels mounted to the at least one carrier so that a lens stack in each lens barrel directs light through the at least one window in the at least one carrier and focuses the light onto one of the plurality of focal planes; and a module cap mounted over the lens barrels, where the module cap includes at least one opening that admits light into the lens stacks contained within the plurality of lens barrels; wherein the image capture application directs the processor to: trigger the capture of image data by the array camera module; obtain and store image data captured by the array camera module, where the image data forms a set of images captured from different viewpoints; select a reference viewpoint relative to the viewpoints of the set of images captured from different viewpoints; normalize the set of images to increase a similarity of corresponding pixels within the set of images; determine depth estimates for pixel locations in an image from the reference viewpoint using at least a subset of the set of images, wherein generating a depth estimate for a given pixel location in the image from the reference viewpoint comprises: identifying pixels in the at least a subset of the set of images that correspond to the given pixel location in the image from the reference viewpoint based upon expected disparity at a plurality of depths; comparing the similarity of the corresponding pixels identified at each of the plurality of depths; and selecting the depth from the plurality of depths at which the identified corresponding pixels have the highest degree of similarity as a depth estimate for the given pixel location in the image from the reference viewpoint.

29. The array camera of claim 28, wherein: the pixel arrays of the focal planes define spatial resolutions for each pixel array; the lens stacks focus light onto the focal planes so that the plurality of arrays of pixels that form the focal planes sample the same object space within a scene with sub-pixel offsets that provide sampling diversity; and the lens stacks have modulation transfer functions that enable contrast to be resolved at a spatial frequency corresponding to a higher resolution than the spatial resolutions of the pixel arrays; and the image capture application further directs the processor to fuse pixels from the set of images using the depth estimates to create a fused image having a resolution that is greater than the resolutions of the images in the set of images by: determining the visibility of the pixels in the set of images from the reference viewpoint by: identifying corresponding pixels in the set of images using the depth estimates; and determining that a pixel in a given image is not visible in the reference viewpoint when the pixel fails a photometric similarity criterion determined based upon a comparison of corresponding pixels; and applying scene dependent geometric shifts to the pixels from the set of images that are visible in an image from the reference viewpoint to shift the pixels into the reference viewpoint, where the scene dependent geometric shifts are determined using the current depth estimates; and fusing the shifted pixels from the set of images to create a fused image from the reference viewpoint having a resolution that is greater than the resolutions of the images in the set of images.
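The depth-estimation steps recited in claims 1 and 28 (identify corresponding pixels via expected disparity at each candidate depth, compare their similarity, keep the most similar depth) amount to a plane-sweep search. The sketch below is an illustrative reconstruction, not the patented implementation: the sum-of-absolute-differences cost, the pinhole disparity model `disparity = focal_px * baseline / depth`, the integer pixel shifts, and all function and parameter names are assumptions, since the claims do not specify a particular similarity measure or camera model.

```python
import numpy as np

def estimate_depth(ref_img, alt_imgs, baselines, depths, focal_px):
    """Plane-sweep depth estimation over a set of candidate depths.

    For each candidate depth, shift every alternate-view image by the
    disparity expected at that depth, accumulate a similarity cost
    against the reference image, and keep, per pixel, the depth with
    the lowest cost (highest similarity).
    """
    h, w = ref_img.shape
    best_cost = np.full((h, w), np.inf)
    depth_map = np.zeros((h, w))
    for d in depths:
        cost = np.zeros((h, w))
        for img, (bx, by) in zip(alt_imgs, baselines):
            # Expected disparity (in pixels) for a camera offset by
            # (bx, by) from the reference viewpoint, at depth d.
            dx, dy = focal_px * bx / d, focal_px * by / d
            shifted = np.roll(np.roll(img, round(dy), axis=0),
                              round(dx), axis=1)
            # Sum-of-absolute-differences similarity cost (assumed metric).
            cost += np.abs(ref_img - shifted)
        better = cost < best_cost
        best_cost[better] = cost[better]
        depth_map[better] = d
    return depth_map
```

The same per-depth disparity shifts, evaluated at the selected depths, would serve as the "scene dependent geometric shifts" that claim 12 applies before fusing the shifted pixels into a higher-resolution image.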
Patents cited by this patent (107)
Wilburn, Bennett; Joshi, Neel; Levoy, Marc C.; Horowitz, Mark, Apparatus and method for capturing a scene using staggered triggering of dense camera arrays.
Iwase, Toshihiro (Nara, JPX); Kanekura, Hiroshi (Yamatokouriyama, JPX), Apparatus for and method of converting a sampling frequency according to a data driven type processing.
Boisvert, David Michael; McMahon, Andrew Kenneth John, CCD output processing stage that amplifies signals from colored pixels based on the conversion efficiency of the colored pixels.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Capturing and processing of images using monolithic camera array with heterogeneous imagers.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images using monolithic camera array with heterogeneous imagers.
Ward, Gregory John; Seetzen, Helge; Heidrich, Wolfgang, Electronic camera having multiple sensors for capturing high dynamic range images and related methods.
Abell, Gurdon R. (West Woodstock, CT); Cook, Francis J. (Topsfield, MA); Howes, Peter D. (Sudbury, MA), Method and apparatus for arraying image sensor modules.
Sawhney, Harpreet Singh; Tao, Hai; Kumar, Rakesh; Hanna, Keith, Method and apparatus for synthesizing new video and/or still imagery from a collection of real video and/or still imagery.
Alexander, David H. (Santa Monica, CA); Hershman, George H. (Carlsbad, CA); Jack, Michael D. (Carlsbad, CA); Koda, N. John (Vista, CA); Lloyd, Randahl B. (San Marcos, CA), Monolithic imager for near-IR.
Hornbaker, Cecil V., III (New Carrolton, MD); Driggers, Thomas C. (Falls Church, VA); Bindon, Edward W. (Fairfax, VA), Scanning apparatus using multiple CCD arrays and related method.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for parallax detection and correction in images captured using array cameras that contain occlusions using subsets of images to perform depth estimation.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for parallax measurement using camera arrays incorporating 3 x 3 camera configurations.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for performing depth estimation using image data from multiple spectral channels.
Rieger, Albert (DEX); Barclay, David; Chapman, Steven; Kellner, Heinz-Andreas (DEX); Reibl, Michael (DEX); Rydelek, James G.; Schweizer, Andreas (DEX), Watertight body for accommodating a photographic camera.
Venkataraman, Kartik; Gallagher, Paul; Jain, Ankit K.; Nisenzon, Semyon; Lelescu, Dan; Ciurea, Florian; Molina, Gabriel, Autofocus system for a conventional camera that uses depth information from an array camera.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images including occlusions focused on an image sensor by a lens stack array.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images using camera array incorporating Bayer cameras having different fields of view.
Duparre, Jacques, Passive alignment of array camera modules constructed from lens stack arrays and sensors based upon alignment information obtained during manufacture of array camera modules using an active alignment process.
Srikanth, Manohar; Ramamoorthi, Ravi; Venkataraman, Kartik; Chatterjee, Priyam, System and methods for depth regularization and semiautomatic interactive matting using RGB-D images.
Nayar, Shree; Venkataraman, Kartik; Pain, Bedabrata; Lelescu, Dan, Systems and methods for controlling aliasing in images captured by an array camera for use in super resolution processing using pixel apertures.
Lelescu, Dan; Venkataraman, Kartik, Systems and methods for controlling aliasing in images captured by an array camera for use in super-resolution processing.
Duparre, Jacques; McMahon, Andrew Kenneth John; Lelescu, Dan; Venkataraman, Kartik; Molina, Gabriel, Systems and methods for detecting defective camera arrays and optic arrays.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for estimating depth and visibility from a reference viewpoint for pixels in a set of images captured from different viewpoints.
Venkataraman, Kartik; Lelescu, Dan; Molina, Gabriel, Systems and methods for generating compressed light field representation data using captured light fields, array geometry, and parallax information.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for generating depth maps using a camera arrays incorporating monochrome and color cameras.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for generating depth maps using images captured by camera arrays incorporating cameras having different fields of view.
Duparre, Jacques; McMahon, Andrew Kenneth John; Lelescu, Dan, Systems and methods for manufacturing camera modules using active alignment of lens stack arrays and sensors.
Venkataraman, Kartik; Huang, Yusong; Jain, Ankit K.; Chatterjee, Priyam, Systems and methods for performing high speed video capture and depth estimation using array cameras.
Lelescu, Dan; Duong, Thang, Systems and methods for synthesizing high resolution images using image deconvolution based on motion and depth information.
Venkataraman, Kartik; Nisenzon, Semyon; Chatterjee, Priyam; Molina, Gabriel, Systems and methods for synthesizing images from image data captured by an array camera using restricted depth of field depth maps in which depth estimation precision varies.