Systems and methods for dynamic refocusing of high resolution images generated using images captured by a plurality of imagers
IPC Classification Information
Country / Type
United States (US) Patent
Granted
International Patent Classification (IPC, 7th edition)
H04N-005/225
G06T-003/40
H04N-013/00
G06T-005/00
H04N-005/232
Application Number
US-0520269
(2014-10-21)
Registration Number
US-9041824
(2015-05-26)
Inventors / Address
Lelescu, Dan
Molina, Gabriel
Venkataraman, Kartik
Applicant / Address
Pelican Imaging Corporation
Attorney / Agent
KPPB LLP
Citation Information
Cited by: 67
Cited patents: 118
Abstract
Systems and methods in accordance with embodiments of the invention are disclosed that use super-resolution (SR) processes to combine information from a plurality of low resolution (LR) images captured by an array camera to produce a synthesized higher resolution image. One embodiment includes obtaining input images, determining an initial estimate of at least a portion of a high resolution image using a plurality of pixels from the input images, and determining a high resolution image that when mapped through the forward imaging transformation matches the input images to within at least one predetermined criterion using the initial estimate of at least a portion of the high resolution image. In addition, each forward imaging transformation corresponds to the manner in which each imager generates the input images, and the high resolution image has a resolution that is greater than any of the input images.
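The loop described in the abstract — estimate a high resolution image, map it through each imager's forward transformation, and refine until the synthesized views match the captured LR images — can be sketched as iterative back-projection. The simple forward model below (sub-pixel shift, Gaussian blur, decimation) and all function names are illustrative assumptions for this sketch, not the patented transformation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def forward_transform(hr, dx, dy, blur_sigma=1.0, factor=2):
    """Map an HR estimate to one imager's LR view: geometric shift,
    optical blur, then decimation (assumed forward model)."""
    warped = shift(hr, (dy, dx), order=1, mode="nearest")
    blurred = gaussian_filter(warped, blur_sigma)
    return blurred[::factor, ::factor]

def super_resolve(lr_images, shifts, factor=2, iterations=10, step=1.0):
    """Iterative back-projection: refine an HR estimate until its
    forward-transformed views match the captured LR images."""
    # Initial estimate: replicate the reference LR image onto an HR grid.
    hr = np.kron(lr_images[0], np.ones((factor, factor)))
    for _ in range(iterations):
        correction = np.zeros_like(hr)
        for lr, (dx, dy) in zip(lr_images, shifts):
            simulated = forward_transform(hr, dx, dy, factor=factor)
            error = lr - simulated                          # residual in LR space
            up = np.kron(error, np.ones((factor, factor)))  # crude backward transform
            correction += shift(up, (-dy, -dx), order=1, mode="nearest")
        hr += step * correction / len(lr_images)
    return hr
```

In the patent, the forward imaging transformation also incorporates per-imager geometric calibration data and scene-dependent parallax; the fixed `(dx, dy)` shifts here stand in for that total shift.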
Representative Claims
1. A method of generating a high resolution image of a scene using an imager array including a plurality of imagers that each capture an image of the scene, and a forward imaging transformation for each imager, the method comprising: obtaining input images captured by a plurality of imagers using a processor configured by image processing pipeline software; determining an initial estimate of at least a portion of a high resolution image from a plurality of pixels from the input images using the processor configured by the image processing pipeline software; determining a high resolution image that when mapped through a forward imaging transformation matches the input images to within at least one predetermined criterion using the processor configured using the image processing pipeline software based upon the initial estimate of at least a portion of the high resolution image, where: each forward imaging transformation corresponds to the manner in which each imager in the plurality of imagers captures the input images; and the high resolution image has a resolution that is greater than any of the input images; and generating a depth map for the high resolution image using a processor configured by image processing pipeline software; determining a focus map for the high resolution image using the depth map using a processor configured by image processing pipeline software; and performing dynamic refocus of the high-resolution image using a processor configured by image processing pipeline software by rendering the high resolution image using the focus map.

2. The method of claim 1, wherein generating the depth map further comprises: determining depth information for pixels in the high resolution image based upon the input images, parallax information, and the characteristics of the imager array; and interpolating the depth information to obtain depth information for every pixel in the high resolution image.

3. The method of claim 1, wherein the focus map identifies pixels having depths in the depth map that are within a specified depth of a defined focal plane.

4. The method of claim 1, wherein the input images capture a scene in which depths of points in the imaged scene vary and each of the input images differs from the other input images due to: scene independent geometric distortions inherent to the optics and manufacturing processes used to fabricate each of the plurality of imagers; and scene dependent geometric displacements due to parallax based upon the depths of the points in the imaged scene.

5. The method of claim 4, further comprising: determining scene dependent parallax information with respect to the input images based upon disparity relative to a reference point of view resulting from the depths of points in the imaged scene using the processor configured by the image processing pipeline software, where the scene dependent parallax information comprises scene dependent geometric transformations; and determining a total shift for each of a plurality of pixels relative to the reference point of view, where the total shift of a given pixel location is the combination of a scene independent geometric correction determined for the given pixel location using geometric calibration data and the scene dependent geometric transformation determined for the given pixel location; wherein determining an initial estimate of at least a portion of a high resolution image further comprises determining an initial estimate of at least a portion of a high resolution image from a plurality of pixels from the input images based upon a total shift for each of the plurality of pixels relative to the reference point of view; wherein each forward imaging transformation comprises applying geometric transformations based upon the total shift at a given pixel location, which is the combination of a scene independent geometric correction determined for the given pixel location using geometric calibration data and the scene dependent geometric transformation determined for the given pixel location.

6. The method of claim 5, wherein generating the depth map further comprises: determining depth information for pixels in the high resolution image based upon the input images, scene dependent parallax information with respect to the input images based upon disparity relative to the reference viewpoint resulting from the depths of points in the imaged scene, and the characteristics of the imager array; and interpolating the depth information to obtain depth information for every pixel in the high resolution image.

7. The method of claim 6, wherein the focus map identifies pixels having depths in the depth map that are within a specified depth of a defined focal plane.

8. The method of claim 5, wherein the forward imaging transformation further comprises applying scene independent geometric transformations related to the different geometries of each of the imagers in the plurality of imagers.

9. The method of claim 5, wherein the forward imaging transformation further comprises applying photometric transformations related to the different photometric characteristics of each of the imagers in the plurality of imagers.

10. The method of claim 5, wherein: determining the initial estimate of at least a portion of the high resolution image comprises using an imaging prior including the geometric calibration data and applying scene independent geometric corrections to the input images using the geometric calibration data to obtain geometrically registered input images; and determining the high resolution image that when mapped through the forward imaging transformation matches the input images to at least one predetermined criterion comprises determining the high resolution image that when mapped through the forward imaging transformation matches the geometrically registered input images to within at least one predetermined criterion.

11. The method of claim 5, wherein: the depths of points in the imaged scene vary due to the presence of foreground and background objects; each of the input images also differs from the other input images due to occlusion zones surrounding foreground objects; and the scene dependent parallax information also includes occlusion maps.

12. The method of claim 5, wherein determining an initial estimate of at least a portion of a high resolution image from a plurality of pixels from the captured images further comprises fusing at least portions of the input images to form the initial estimate of at least one portion of the high resolution image.

13. The method of claim 12, wherein fusing at least portions of the input images to form the initial estimate of at least one portion of the high resolution image comprises: populating a high resolution grid corresponding to the pixel locations of the at least a portion of the initial estimate of the high resolution image with pixels from the input images using the total shift for the pixels; and interpolating the high resolution grid to obtain filtered pixel values for each pixel in the initial estimate of the high resolution image.

14. The method of claim 5, wherein determining a high resolution image that when mapped through the forward imaging transformation matches the input images to within at least one predetermined criterion using the initial estimate of at least a portion of the high resolution image comprises: transforming the initial estimate of at least a portion of the high resolution image using at least one forward imaging transformation; comparing the transformed initial estimate of at least a portion of the high resolution image to at least a portion of at least one input image; and refining the estimate of the high resolution image based upon the comparison.

15. The method of claim 14, wherein determining a high resolution image that when mapped through the forward imaging transformation matches the input images to within at least one predetermined criterion using the initial estimate of at least a portion of the high resolution image further comprises transforming, comparing and refining estimates until the at least one predetermined criterion is satisfied.

16. The method of claim 14, wherein transforming an estimate of at least a portion of the high resolution image using at least one forward imaging transformation comprises: applying geometric transformations, including geometric transformations related to parallax observed in the imaged scene, to the pixels of the estimate of at least a portion of the high resolution image; applying a blur function to the pixels of the estimate of at least a portion of the high resolution image; and decimating the warped and blurred pixels of the estimate of at least a portion of the high resolution image.

17. The method of claim 14, wherein refining the estimate of the high resolution image based upon the comparison comprises: mapping the comparison of the transformed initial estimate of at least a portion of the high resolution image and the at least a portion of at least one input image through a backward imaging transformation, which is the reverse of the forward imaging transformation; and updating the estimate using at least the transformed comparison.

18. The method of claim 5, wherein determining a high resolution image that when mapped through the forward imaging transformation matches the input images to within at least one predetermined criterion using the initial estimate of at least a portion of the high resolution image comprises: identifying pixels in the initial estimate of at least a portion of the high resolution image corresponding to pixels in at least one input image using at least one forward imaging transformation; comparing the corresponding pixels; and refining the estimate of the high resolution image based upon the comparison.

19. The method of claim 18, wherein determining a high resolution image that when mapped through the forward imaging transformation matches the input images to within at least one predetermined criterion using the initial estimate of at least a portion of the high resolution image further comprises applying a blur function to pixels in the initial estimate of at least a portion of the high resolution image.

20. The method of claim 19, wherein identifying pixels in the initial estimate of at least a portion of the high resolution image corresponding to pixels in at least one input image using at least one forward imaging transformation comprises: selecting a pixel position in the initial estimate of at least a portion of the high resolution image; and using geometric transformations, including geometric transformations related to parallax observed due to the depths of points in the imaged scene, to identify pixels in at least a portion of at least one input image.

21. The method of claim 20, wherein the geometric transformations further comprise scene independent geometric transformations.

22. The method of claim 20, wherein using geometric transformations to identify pixels in at least a portion of at least one input image comprises: identifying at least one pixel in the input image specified by the geometric transformation for at least the selected pixel from the initial estimate of at least a portion of the high resolution image, when a geometric transformation exists for the pixel in the initial estimate of at least a portion of the high resolution image; and identifying at least one pixel in at least one input image based upon the geometric transformations of pixels in the neighborhood of the selected pixel from the initial estimate of at least a portion of the high resolution image, when a geometric transformation does not exist for the pixel in the initial estimate of at least a portion of the high resolution image.

23. The method of claim 18, wherein comparing corresponding pixels comprises determining differences between pixels in the initial estimate of at least a portion of the high resolution image and the identified corresponding pixels in at least one input image.

24. The method of claim 1, wherein determining a high resolution image that when mapped through the forward imaging transformation matches the input images to within at least one predetermined criterion using the initial estimate of at least a portion of the high resolution image comprises: generating an estimate of at least a portion of the high resolution image; and applying an intra-channel prior filter to the estimate of at least a portion of the high resolution image, where the intra-channel prior filter is configured to preserve edges while removing noise.

25. The method of claim 1, wherein: the imager array captures images in multiple color channels; and the initial estimate of at least a portion of a high resolution image is an initial estimate of at least a portion of a high resolution image in a first color channel.

26. The method of claim 25, further comprising: placing a plurality of pixels from input images in a second color channel on a high resolution grid; and determining at least a portion of a high resolution image in the second color channel using at least the pixels in the second color channel placed on the high resolution grid and at least a portion of a high resolution image in another color channel.

27. The method of claim 26, wherein determining at least a portion of a high resolution image in the second color channel using at least the pixels in the second color channel placed on the high resolution grid and at least a portion of a high resolution image in another color channel comprises: interpolating the pixels on the high resolution grid based upon their correlation with the pixels in the at least a portion of the high resolution image in the other color channel and the correlation between pixels in the high resolution image in the other color channel.

28. The method of claim 27, wherein the high resolution image that is determined using the initial estimate of at least a portion of the high resolution image in a first color channel that when mapped through the forward imaging transformation matches the input images to within at least one predetermined criterion is a high resolution image in the first color channel.

29. The method of claim 27, wherein the high resolution image that is determined using the initial estimate of at least a portion of the high resolution image in a first color channel that when mapped through the forward imaging transformation matches the input images to within at least one predetermined criterion is a high resolution image in multiple color channels.

30. A method of generating a high resolution image of a scene using an imager array including a plurality of imagers that each capture an image of the scene, and a forward imaging transformation for each imager, the method comprising: obtaining input images captured by a plurality of imagers using a processor configured by image processing pipeline software, where the input images capture a scene in which depths of points in the imaged scene vary and each of the input images differs from the other input images due to: scene independent geometric distortions inherent to the optics and manufacturing processes used to fabricate each of the plurality of imagers; and scene dependent geometric displacements due to parallax experienced by each of the plurality of imagers based upon the different depths of the points in the imaged scene; and determining scene dependent parallax information with respect to the input images based upon disparity relative to a reference point of view resulting from the different depths of points in the imaged scene using the processor configured by the image processing pipeline software, where the scene dependent parallax information comprises scene dependent geometric transformations; determining a total shift for each of a plurality of pixels relative to the reference point of view using the processor configured by the image processing pipeline software, where the total shift of a given pixel location is the combination of a scene independent geometric correction determined for the given pixel location using geometric calibration data and the scene dependent geometric transformation determined for the given pixel location; performing a super-resolution process utilizing at least a portion of the plurality of input images and the total shift for each of the plurality of pixels relative to the reference point of view as inputs, where the super-resolution process comprises: determining an initial estimate of at least a portion of a high resolution image from a plurality of pixels from the input images using the processor configured by the image processing pipeline software based upon a total shift for each of the plurality of pixels relative to the reference point of view; determining a high resolution image that when mapped through a forward imaging transformation matches the input images to within at least one predetermined criterion using the processor configured using the image processing pipeline software based upon the initial estimate of at least a portion of the high resolution image, where: each forward imaging transformation corresponds to the manner in which each imager in the plurality of imagers captures the input images and comprises applying geometric transformations based upon the total shift at a given pixel location, which is the combination of a scene independent geometric correction determined for the given pixel location using geometric calibration data and the scene dependent geometric transformation determined for the given pixel location; and the high resolution image has a resolution that is greater than any of the input images; and generating a depth map for the high resolution image using a processor configured by image processing pipeline software by determining depth information for pixels in the high resolution image based upon the input images, parallax information, and the characteristics of the imager array; determining a focus map for the high resolution image using the depth map using a processor configured by image processing pipeline software, where the focus map identifies pixels having depths in the depth map that are within a specified depth of a defined focal plane; and performing dynamic refocus of the high-resolution image using a processor configured by image processing pipeline software by rendering the high resolution image using the focus map.
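The final steps of claim 1 — generate a depth map, derive a focus map identifying pixels within a specified depth of a defined focal plane, and render the refocused image using that map — can be illustrated with a minimal depth-dependent blur. The thresholding rule and the blend-based rendering below are assumptions made for this sketch, not the claimed rendering method:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def focus_map(depth_map, focal_plane, depth_of_field):
    """Binary focus map: pixels whose depth lies within the specified
    range of the defined focal plane are rendered in focus."""
    return np.abs(depth_map - focal_plane) <= depth_of_field

def dynamic_refocus(hr_image, depth_map, focal_plane,
                    depth_of_field=0.1, max_blur=3.0):
    """Render with depth-dependent blur: in-focus pixels stay sharp,
    others are blended toward a blurred copy in proportion to their
    distance from the focal plane (illustrative rendering)."""
    in_focus = focus_map(depth_map, focal_plane, depth_of_field)
    # Blur weight grows with distance beyond the in-focus depth range.
    distance = np.clip(np.abs(depth_map - focal_plane) - depth_of_field, 0, None)
    alpha = distance / (distance.max() + 1e-9)
    blurred = gaussian_filter(hr_image, max_blur)
    out = (1 - alpha) * hr_image + alpha * blurred
    out[in_focus] = hr_image[in_focus]  # preserve the focal plane exactly
    return out
```

A production renderer would vary the blur kernel per pixel rather than blend against a single blurred copy; the single-kernel blend keeps the sketch short while still showing how the focus map drives the rendering.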
Patents cited by this patent (118)
Wilburn, Bennett; Joshi, Neel; Levoy, Marc C.; Horowitz, Mark, Apparatus and method for capturing a scene using staggered triggering of dense camera arrays.
Iwase Toshihiro (Nara JPX) Kanekura Hiroshi (Yamatokouriyama JPX), Apparatus for and method of converting a sampling frequency according to a data driven type processing.
Boisvert, David Michael; McMahon, Andrew Kenneth John, CCD output processing stage that amplifies signals from colored pixels based on the conversion efficiency of the colored pixels.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Capturing and processing of images using monolithic camera array with heterogeneous imagers.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images using monolithic camera array with heterogeneous imagers.
Yamashita,Syugo; Murata,Haruhiko; Iinuma,Toshiya; Nakashima,Mitsuo; Mori,Takayuki, Device and method for converting two-dimensional video to three-dimensional video.
Ward, Gregory John; Seetzen, Helge; Heidrich, Wolfgang, Electronic camera having multiple sensors for capturing high dynamic range images and related methods.
Abell Gurdon R. (West Woodstock CT) Cook Francis J. (Topsfield MA) Howes Peter D. (Sudbury MA), Method and apparatus for arraying image sensor modules.
Sawhney,Harpreet Singh; Tao,Hai; Kumar,Rakesh; Hanna,Keith, Method and apparatus for synthesizing new video and/or still imagery from a collection of real video and/or still imagery.
Alexander David H. (Santa Monica CA) Hershman George H. (Carlsbad CA) Jack Michael D. (Carlsbad CA) Koda N. John (Vista CA) Lloyd Randahl B. (San Marcos CA), Monolithic imager for near-IR.
Hornbaker ; III Cecil V. (New Carrolton MD) Driggers Thomas C. (Falls Church VA) Bindon Edward W. (Fairfax VA), Scanning apparatus using multiple CCD arrays and related method.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for parallax detection and correction in images captured using array cameras that contain occlusions using subsets of images to perform depth estimation.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for parallax measurement using camera arrays incorporating 3 x 3 camera configurations.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for performing depth estimation using image data from multiple spectral channels.
Ludwig, Lester F., Vignetted optoelectronic array for use in synthetic image formation via signal processing, lensless cameras, and integrated camera-displays.
Rieger Albert,DEX ; Barclay David ; Chapman Steven ; Kellner Heinz-Andreas,DEX ; Reibl Michael,DEX ; Rydelek James G. ; Schweizer Andreas,DEX, Watertight body for accommodating a photographic camera.
Duparre, Jacques; Lelescu, Dan; Venkataraman, Kartik, Array cameras incorporating monolithic array camera modules with high MTF lens stacks for capture of images used in super-resolution processing.
Duparre, Jacques; Lelescu, Dan; Venkataraman, Kartik, Array cameras incorporating optics with modulation transfer functions greater than sensor Nyquist frequency for capture of images used in super-resolution processing.
Venkataraman, Kartik; Gallagher, Paul; Jain, Ankit K.; Nisenzon, Semyon; Lelescu, Dan; Ciurea, Florian; Molina, Gabriel, Autofocus system for a conventional camera that uses depth information from an array camera.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images including occlusions focused on an image sensor by a lens stack array.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images including occlusions focused on an image sensor by a lens stack array.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images using camera array incorperating Bayer cameras having different fields of view.
Srikanth, Manohar; Ramamoorthi, Ravi; Venkataraman, Kartik; Chatterjee, Priyam, System and methods for depth regularization and semiautomatic interactive matting using RGB-D images.
Nayar, Shree; Venkataraman, Kartik; Pain, Bedabrata; Lelescu, Dan, Systems and methods for controlling aliasing in images captured by an array camera for use in super resolution processing using pixel apertures.
Lelescu, Dan; Venkataraman, Kartik, Systems and methods for controlling aliasing in images captured by an array camera for use in super-resolution processing.
Duparre, Jacques; McMahon, Andrew Kenneth John; Lelescu, Dan; Venkataraman, Kartik; Molina, Gabriel, Systems and methods for detecting defective camera arrays and optic arrays.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for estimating depth and visibility from a reference viewpoint for pixels in a set of images captured from different viewpoints.
Venkataraman, Kartik; Lelescu, Dan; Molina, Gabriel, Systems and methods for generating compressed light field representation data using captured light fields, array geometry, and parallax information.
Venkataraman, Kartik; Lelescu, Dan; Molina, Gabriel, Systems and methods for generating compressed light field representation data using captured light fields, array geometry, and parallax information.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for generating depth maps using a camera arrays incorporating monochrome and color cameras.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for generating depth maps using a camera arrays incorporating monochrome and color cameras.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for generating depth maps using images captured by camera arrays incorporating cameras having different fields of view.
Duparre, Jacques; McMahon, Andrew Kenneth John; Lelescu, Dan, Systems and methods for manufacturing camera modules using active alignment of lens stack arrays and sensors.
Duparre, Jacques; McMahon, Andrew Kenneth John; Lelescu, Dan, Systems and methods for manufacturing camera modules using active alignment of lens stack arrays and sensors.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for measuring depth using images captured by a camera array including cameras surrounding a central camera.
Venkataraman, Kartik; Huang, Yusong; Jain, Ankit K.; Chatterjee, Priyam, Systems and methods for performing high speed video capture and depth estimation using array cameras.
Lelescu, Dan; Duong, Thang, Systems and methods for synthesizing high resolution images using image deconvolution based on motion and depth information.
Lelescu, Dan; Molina, Gabriel; Venkataraman, Kartik, Systems and methods for synthesizing high resolution images using images captured by an array of independently controllable imagers.
Venkataraman, Kartik; Nisenzon, Semyon; Chatterjee, Priyam; Molina, Gabriel, Systems and methods for synthesizing images from image data captured by an array camera using restricted depth of field depth maps in which depth estimation precision varies.
Venkataraman, Kartik; Nisenzon, Semyon; Chatterjee, Priyam; Molina, Gabriel, Systems and methods for synthesizing images from image data captured by an array camera using restricted depth of field depth maps in which depth estimation precision varies.