Systems and methods for manufacturing camera modules using active alignment of lens stack arrays and sensors
IPC Classification
Country / Type
United States(US) Patent
Granted
International Patent Classification (IPC, 7th edition)
G03B-013/00
H04N-017/02
G02B-003/00
H04N-005/225
H04N-017/00
G02B-007/00
H04N-005/232
Application Number
US-0004759
(2016-01-22)
Registration Number
US-9766380
(2017-09-19)
Inventors / Address
Duparre, Jacques
McMahon, Andrew Kenneth John
Lelescu, Dan
Applicant / Address
FotoNation Cayman Limited
Agent / Address
KPPB LLP
Citation Information
Cited by: 13
Cited patents: 202
Abstract
Systems and methods in accordance with embodiments of the invention actively align a lens stack array with an array of focal planes to construct an array camera module. In one embodiment, a method for actively aligning a lens stack array with a sensor that has a focal plane array includes: aligning the lens stack array relative to the sensor in an initial position; varying the spatial relationship between the lens stack array and the sensor; capturing images of a known target that has a region of interest using a plurality of active focal planes at different spatial relationships; scoring the images based on the extent to which the region of interest is focused in the images; selecting a spatial relationship between the lens stack array and the sensor based on a comparison of the scores; and forming an array camera subassembly based on the selected spatial relationship.
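The abstract describes a sweep-score-select loop: vary the lens-to-sensor spacing, capture images of a known target through each active focal plane, score how well a region of interest is focused, and pick the spacing with the best scores. The following is a minimal sketch of that loop, not the patented implementation: `capture(z)` is a hypothetical stage/camera hook, and a variance-of-Laplacian sharpness metric stands in for the MTF scoring described in the claims.

```python
import numpy as np

def focus_score(image: np.ndarray) -> float:
    """Variance-of-Laplacian sharpness metric (a common stand-in for MTF).

    Higher variance of the Laplacian response means more high-frequency
    content, i.e. a sharper (better focused) image of the target ROI.
    """
    lap = (-4.0 * image
           + np.roll(image, 1, axis=0) + np.roll(image, -1, axis=0)
           + np.roll(image, 1, axis=1) + np.roll(image, -1, axis=1))
    return float(lap.var())

def select_spacing(capture, z_positions):
    """Sweep the lens-to-sensor spacing and pick the best one.

    `capture(z)` is a hypothetical hook that returns a list of ROI images,
    one per active focal plane, captured at spacing `z`. The spacing whose
    mean focus score across all active focal planes is highest is selected.
    """
    best_z, best_mean = None, -np.inf
    for z in z_positions:
        scores = [focus_score(roi) for roi in capture(z)]
        mean_score = float(np.mean(scores))
        if mean_score > best_mean:
            best_z, best_mean = z, mean_score
    return best_z, best_mean
```

In practice the claims score central and peripheral regions of interest separately and compare per-focal-plane score curves rather than a single mean, but the sweep-and-score structure is the same.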
Representative Claims
1. A method for actively aligning a lens stack array with a sensor that includes a plurality of focal planes, where each focal plane comprises a plurality of rows of pixels that also form a plurality of columns of pixels and each focal plane is contained within a region of the imager array that does not contain pixels from another focal plane, the method comprising: aligning the lens stack array relative to the sensor in an initial position, where the lens stack array comprises a plurality of lens stacks and the plurality of lens stacks forms separate optical channels for each focal plane in the sensor; varying the spatial relationship between the lens stack array and the sensor; capturing images of a known target using a plurality of active focal planes at different spatial relationships between the lens stack array and the sensor, where the known target comprises a central region of interest and at least one peripheral region of interest; scoring the images captured by the plurality of active focal planes, where the resulting scores provide a direct comparison of the extent to which at least one region of interest is focused in the images, wherein the comparison of scores comprises computing: a first best-fit plane that defines a spatial relationship between the lens stack array and the sensor based on each active focal plane's ability to focus on the central region of interest according to a first predetermined criterion; a second best-fit plane that defines a spatial relationship between the lens stack array and the sensor based on each active focal plane's ability to focus on the at least one peripheral region of interest according to a second predetermined criterion; and a plurality of incrementally spaced planes that lie between the first and second best-fit planes; selecting a spatial relationship between the lens stack array and the sensor based upon a comparison of the scores of images captured by a plurality of the active focal planes; and forming an array camera subassembly in which the lens stack array and the sensor are fixed in the selected spatial relationship.

2. The method of claim 1, wherein scoring the images captured by the plurality of active focal planes comprises computing modulation transfer function (MTF) scores for the images.

3. The method of claim 1, wherein the comparison of the scores of images captured by a plurality of the active focal planes is based upon: a comparison of the scores of the images captured by a plurality of the active focal planes at the selected spatial relationship to the scores of images captured by the same active focal planes at different spatial relationships; and the variation between the scores of the images captured by the active focal planes at the selected spatial relationship.

4. The method of claim 1, wherein the comparison of scores comprises omitting from consideration an image captured by an active focal plane when the score of the image captured by the active focal plane fails to satisfy at least one predetermined criterion.

5. The method of claim 4, wherein the at least one predetermined criterion includes the score of the image captured by the active focal plane being within a predetermined range.

6. The method of claim 4, further comprising deactivating an active focal plane when the image captured by the active focal plane is omitted from consideration.

7. The method of claim 1, wherein the comparison of scores comprises determining a mathematical relationship for each of a plurality of active focal planes that characterizes the relationship between the scores for the images captured by the respective active focal planes and the spatial relationship between the lens stack array and the sensor.

8. The method of claim 7, wherein the comparison of scores further comprises computing a best-fit plane using the determined mathematical relationships, where the best-fit plane defines a desirable spatial relationship in accordance with a predetermined criterion.

9. The method of claim 8, wherein the predetermined criterion includes maximizing scores while minimizing the variance of the scores.

10. The method of claim 1, wherein the comparison of scores further comprises determining mathematical relationships for each of a plurality of active focal planes that characterize the relationships between: the scores of the extent to which the central region of interest is focused in the images captured by the respective active focal plane and the spatial relationship between the lens stack array and the sensor; and the scores of the extent to which the at least one peripheral region of interest is focused in the images captured by the respective active focal plane and the spatial relationship between the lens stack array and the sensor.

11. The method of claim 10, wherein selecting a spatial relationship between the lens stack array and the sensor comprises using at least one predetermined criterion to select one of: a spatial relationship defined by the first best-fit plane, a spatial relationship defined by the second best-fit plane, and a spatial relationship defined by one of the plurality of planes.

12. The method of claim 11, wherein the at least one predetermined criterion is based upon: at each spatial relationship defined by the computed planes, averaging the scores indicative of the extent to which the central region of interest is focused, the scores being averaged across all active focal planes at the respective spatial relationship; at each spatial relationship defined by the computed planes, averaging the scores indicative of the extent to which the at least one peripheral region of interest is focused, the scores being averaged across all active focal planes at the respective spatial relationship; and assessing the variation in the determined average scores between the spatial relationships.

13. The method of claim 1, wherein aligning the lens stack array relative to the sensor in an initial position further comprises: performing an initial sweep of the lens stack array relative to the sensor; capturing an initial set of images of a known target including a central region of interest, at varied spatial relationships along the initial sweep, using a plurality of active focal planes; determining focus scores for the central region of interest in a plurality of the captured images; determining an initial set of mathematical relationships for each of the plurality of active focal planes used to capture the initial set of images, where the mathematical relationships characterize the relationship between the focus scores and the spatial relationship between the lens stack array and the sensor; computing an initial best-fit plane using the initial set of mathematical relationships; and aligning the lens stack array with the computed initial best-fit plane.

14. The method of claim 1, wherein varying the spatial relationship between the lens stack array and the sensor involves sweeping the lens stack array relative to the sensor.

15. The method of claim 14, wherein the lens stack array is swept in a direction substantially normal to the surface of the sensor.

16. A method for actively aligning a lens stack array with a sensor that includes a plurality of focal planes, where each focal plane comprises a plurality of rows of pixels that also form a plurality of columns of pixels and each focal plane is contained within a region of the imager array that does not contain pixels from another focal plane, the method comprising: aligning the lens stack array relative to the sensor in an initial position, where the lens stack array comprises a plurality of lens stacks and the plurality of lens stacks forms separate optical channels for each focal plane in the sensor; varying the spatial relationship between the lens stack array and the sensor; capturing images of a known target using a plurality of active focal planes at different spatial relationships between the lens stack array and the sensor, where the known target includes a central region of interest and at least one peripheral region of interest; scoring the images captured by the plurality of active focal planes, where the resulting scores provide a direct comparison of the extent to which at least one region of interest is focused in the images, wherein the images are scored such that a score is provided for each region of interest visible in each image, the score being indicative of the extent to which the respective region of interest is focused in the image; selecting a spatial relationship between the lens stack array and the sensor based upon a comparison of the scores of images captured by a plurality of the active focal planes, wherein the comparison of scores comprises: determining mathematical relationships for each of a plurality of active focal planes that characterize the relationships between: the scores of the extent to which the central region of interest is focused in the images captured by the respective active focal plane and the spatial relationship between the lens stack array and the sensor; and the scores of the extent to which the at least one peripheral region of interest is focused in the images captured by the respective active focal plane and the spatial relationship between the lens stack array and the sensor; and computing, using the determined mathematical relationships: a first best-fit plane that defines a spatial relationship between the lens stack array and the sensor based on each active focal plane's ability to focus on a central region of interest according to a predetermined criterion; a second best-fit plane that defines a spatial relationship between the lens stack array and the sensor based on each active focal plane's ability to focus on the at least one peripheral region of interest according to a predetermined criterion; and a plurality of incrementally spaced planes that lie between the first and second best-fit planes; and forming an array camera subassembly in which the lens stack array and the sensor are fixed in the selected spatial relationship.

17. The method of claim 16, wherein selecting a spatial relationship between the lens stack array and the sensor comprises using at least one predetermined criterion to select one of: a spatial relationship defined by the first best-fit plane, a spatial relationship defined by the second best-fit plane, and a spatial relationship defined by one of the plurality of planes.

18. The method of claim 17, wherein the at least one predetermined criterion is based upon: at each spatial relationship defined by the computed planes, averaging the scores indicative of the extent to which the central region of interest is focused, the scores being averaged across all active focal planes at the respective spatial relationship; at each spatial relationship defined by the computed planes, averaging the scores indicative of the extent to which the at least one peripheral region of interest is focused, the scores being averaged across all active focal planes at the respective spatial relationship; and assessing the variation in the determined average scores between the spatial relationships.

19. A method for actively aligning a lens stack array with a sensor that includes a plurality of focal planes, where each focal plane comprises a plurality of rows of pixels that also form a plurality of columns of pixels and each focal plane is contained within a region of the imager array that does not contain pixels from another focal plane, the method comprising: aligning the lens stack array relative to the sensor in an initial position, where the lens stack array comprises a plurality of lens stacks and the plurality of lens stacks forms separate optical channels for each focal plane in the sensor; varying the spatial relationship between the lens stack array and the sensor; capturing images of a known target using a plurality of active focal planes at different spatial relationships between the lens stack array and the sensor, where the known target includes at least one region of interest; scoring the images captured by the plurality of active focal planes, where the resulting scores provide a direct comparison of the extent to which at least one region of interest is focused in the images, wherein scoring the images captured by the plurality of active focal planes comprises: determining preliminary scores for the captured images in accordance with a first criterion; determining scores for a related set of captured images in accordance with a second criterion; and extrapolating the preliminary scores as a function of the spatial relationship between the lens stack array and the sensor based on the scores determined for the related set of captured images; selecting a spatial relationship between the lens stack array and the sensor based upon a comparison of the scores of images captured by a plurality of the active focal planes; and forming an array camera subassembly in which the lens stack array and the sensor are fixed in the selected spatial relationship.
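Claims 1, 7, 8, and 16 turn per-focal-plane focus data into candidate mounting orientations by fitting best-fit planes and generating incrementally spaced planes between them. The sketch below illustrates that geometry under stated assumptions: it is not the patent's implementation, and the function names, the (x, y) focal-plane coordinates, and the per-plane best-focus spacings `z_best` are illustrative inputs, not terms from the claims.

```python
import numpy as np

def best_fit_plane(xy: np.ndarray, z_best: np.ndarray) -> np.ndarray:
    """Least-squares plane z = a*x + b*y + c through per-focal-plane data.

    `xy` is an (n, 2) array of focal-plane centre coordinates on the sensor;
    `z_best` holds each focal plane's best-focus spacing (e.g. where its
    focus-score curve for a given region of interest peaks).
    Returns the coefficients (a, b, c).
    """
    A = np.column_stack([xy[:, 0], xy[:, 1], np.ones(len(xy))])
    coeffs, *_ = np.linalg.lstsq(A, z_best, rcond=None)
    return coeffs

def intermediate_planes(p_center: np.ndarray, p_periph: np.ndarray, steps: int):
    """Incrementally spaced planes between two best-fit planes.

    Linearly interpolates between the centre-ROI plane and the
    peripheral-ROI plane (cf. the 'plurality of planes incrementally
    spaced' recited in claims 1 and 16), excluding the endpoints.
    """
    return [p_center + t * (p_periph - p_center)
            for t in np.linspace(0.0, 1.0, steps + 2)[1:-1]]
```

Each candidate plane defines one spatial relationship between the lens stack array and the sensor; the selection step of the claims then compares averaged centre and peripheral focus scores at each candidate to choose the plane in which the subassembly is fixed.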
Patents cited by this patent (202)
Wilburn, Bennett; Joshi, Neel; Levoy, Marc C.; Horowitz, Mark, Apparatus and method for capturing a scene using staggered triggering of dense camera arrays.
Iwase Toshihiro (Nara JPX) Kanekura Hiroshi (Yamatokouriyama JPX), Apparatus for and method of converting a sampling frequency according to a data driven type processing.
Boisvert, David Michael; McMahon, Andrew Kenneth John, CCD output processing stage that amplifies signals from colored pixels based on the conversion efficiency of the colored pixels.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images including occlusions captured by arrays of luma and chroma cameras.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images including occlusions captured by camera arrays.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images including occlusions focused on an image sensor by a lens stack array.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Capturing and processing of images using monolithic camera array with heterogeneous imagers.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images using monolithic camera array with heterogeneous imagers.
Yamashita,Syugo; Murata,Haruhiko; Iinuma,Toshiya; Nakashima,Mitsuo; Mori,Takayuki, Device and method for converting two-dimensional video to three-dimensional video.
Pertsel, Shimon; Meitav, Ohad; Pozniansky, Eli; Galil, Erez, Digital camera with selectively increased dynamic range by control of parameters during image acquisition.
Ward, Gregory John; Seetzen, Helge; Heidrich, Wolfgang, Electronic camera having multiple sensors for capturing high dynamic range images and related methods.
Hornback,Bert; Harwood,Doug; Boyd,W. Eric; Carlson,Randy, Imaging device with multiple fields of view incorporating memory-based temperature compensation of an uncooled focal plane array.
Abell Gurdon R. (West Woodstock CT) Cook Francis J. (Topsfield MA) Howes Peter D. (Sudbury MA), Method and apparatus for arraying image sensor modules.
Sawhney,Harpreet Singh; Tao,Hai; Kumar,Rakesh; Hanna,Keith, Method and apparatus for synthesizing new video and/or still imagery from a collection of real video and/or still imagery.
Burt Peter J. (Mercer County NJ) van der Wal Gooitzen S. (Mercer NJ) Kolczynski Raymond J. (Mercer NJ) Hingorani Rajesh (Mercer NJ), Method for fusing images and apparatus therefor.
Burt Peter J. (Princeton NJ) van der Wal Gooitzen S. (Hopewell Borough ; Mercer County NJ) Kolczynski Raymond J. (Hamilton Township ; Mercer County NJ) Hingorani Rajesh (West Windsor Township ; Merce, Method for fusing images and apparatus therefor.
Han, Hee-chul; Choi, Yang-lim; Cho, Seung-ki, Method of generating image data by an image device including a plurality of lenses and apparatus for generating image data.
Alexander David H. (Santa Monica CA) Hershman George H. (Carlsbad CA) Jack Michael D. (Carlsbad CA) Koda N. John (Vista CA) Lloyd Randahl B. (San Marcos CA), Monolithic imager for near-IR.
Doering, Hans-Joachim; Heinitz, Joachim, Multi-beam modulator for a particle beam and use of the multi-beam modulator for the maskless structuring of a substrate.
Hornbaker ; III Cecil V. (New Carrolton MD) Driggers Thomas C. (Falls Church VA) Bindon Edward W. (Fairfax VA), Scanning apparatus using multiple CCD arrays and related method.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, System and methods for measuring depth using an array camera employing a bayer filter.
Lelescu, Dan; Molina, Gabriel; Venkataraman, Kartik, Systems and methods for dynamic refocusing of high resolution images generated using images captured by a plurality of imagers.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for estimating depth and visibility from a reference viewpoint for pixels in a set of images captured from different viewpoints.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for generating depth maps and corresponding confidence maps indicating depth estimation reliability.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Systems and methods for generating depth maps using a set of images containing a baseline image.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for generating depth maps using light focused on an image sensor by a lens element array.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for measuring depth in the presence of occlusions using a subset of images.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for measuring depth using an array of independently controllable cameras.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for parallax detection and correction in images captured using array cameras that contain occlusions using subsets of images to perform depth estimation.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for parallax measurement using camera arrays incorporating 3 x 3 camera configurations.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for performing depth estimation using image data from multiple spectral channels.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for performing post capture refocus using images captured by camera arrays.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Systems and methods for synthesizing higher resolution images using a set of images containing a baseline image.
Ludwig, Lester F., Vignetted optoelectronic array for use in synthetic image formation via signal processing, lensless cameras, and integrated camera-displays.
Rieger Albert,DEX ; Barclay David ; Chapman Steven ; Kellner Heinz-Andreas,DEX ; Reibl Michael,DEX ; Rydelek James G. ; Schweizer Andreas,DEX, Watertight body for accommodating a photographic camera.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images including occlusions focused on an image sensor by a lens stack array.
Srikanth, Manohar; Ramamoorthi, Ravi; Venkataraman, Kartik; Chatterjee, Priyam, System and methods for depth regularization and semiautomatic interactive matting using RGB-D images.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for generating depth maps using a camera arrays incorporating monochrome and color cameras.
Duparre, Jacques; McMahon, Andrew Kenneth John; Lelescu, Dan, Systems and methods for manufacturing camera modules using active alignment of lens stack arrays and sensors.