Extended color processing on pelican array cameras
IPC Classification
Country / Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.): H04N-005/33; H04N-005/265; H04N-009/093; H04N-009/09
Application number: US-0145734 (2013-12-31)
Registration number: US-9497429 (2016-11-15)
Inventors / Address: Mullis, Robert; Lelescu, Dan; Venkataraman, Kartik
Applicant / Address: Pelican Imaging Corporation
Agent / Address: KPPB LLP
Citation information: times cited: 35; patents cited: 204
Abstract
Systems and methods for extended color processing on Pelican array cameras in accordance with embodiments of the invention are disclosed. In one embodiment, a method of generating a high resolution image includes obtaining input images, where a first set of images includes information in a first band of visible wavelengths and a second set of images includes information in a second band of visible wavelengths and non-visible wavelengths, determining an initial estimate by combining the first set of images into a first fused image, combining the second set of images into a second fused image, spatially registering the fused images, denoising the fused images using bilateral filters, normalizing the second fused image in the photometric reference space of the first fused image, combining the fused images, determining a high resolution image that when mapped through a forward imaging transformation matches the input images within at least one predetermined criterion.
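The processing chain described in the abstract can be sketched end-to-end on toy data. Everything below is a simplified, hypothetical stand-in rather than the patent's actual method: fusion is a plain per-pixel mean of single-channel images assumed to be registered already, registration and bilateral denoising are omitted, normalization is a single global least-squares gain/offset fit, and all function names are invented for illustration.

```python
# Minimal sketch of the abstract's pipeline on toy 1-D "images".
# Simplified stand-ins only: fusion is a per-pixel mean of images assumed
# to be registered already; normalization is one global gain/offset
# least-squares fit; registration and bilateral denoising are omitted.

def fuse(images):
    # Per-pixel mean across the set of input images.
    return [sum(px) / len(px) for px in zip(*images)]

def normalize_to(e, g):
    # Fit g ~ a*e + b by least squares, then map e into g's
    # photometric reference space.
    n = len(e)
    e_bar, g_bar = sum(e) / n, sum(g) / n
    a = (sum(gi * ei for gi, ei in zip(g, e)) - n * g_bar * e_bar) / (
        sum(ei * ei for ei in e) - n * e_bar * e_bar)
    b = g_bar - a * e_bar
    return [a * ei + b for ei in e]

def initial_estimate(first_set, second_set):
    fused_vis = fuse(first_set)    # first band: visible wavelengths
    fused_ext = fuse(second_set)   # second band: visible + non-visible
    fused_ext = normalize_to(fused_ext, fused_vis)
    # Combine the photometrically aligned images (here: simple average).
    return [(u + v) / 2 for u, v in zip(fused_vis, fused_ext)]
```

With two identical visible images [1, 2, 3] and two extended-band images [3, 5, 7], the fit recovers gain 0.5 and offset -0.5, so the normalized extended band matches the visible band exactly.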
Representative Claims
1. A method of generating a high resolution image of a scene using an imager array including a plurality of imagers that each capture an image of the scene, and a forward imaging transformation for each imager, the method comprising: obtaining input images captured by a plurality of imagers, where a first set of input images includes image information captured in a first band of visible wavelengths and a second set of input images includes image information captured in a second band of visible wavelengths and non-visible wavelengths; determining an initial estimate of at least a portion of a high resolution image using a processor configured by software to: combine image information from the first set of input images into a first fused image; combine image information from the second set of input images into a second fused image, wherein the first fused image and the second fused image have the same resolution and the resolution is higher than the resolution of any of the input images; spatially register the first fused image and the second fused image; denoise the first fused image using a first bilateral filter; denoise the second fused image using a second bilateral filter; normalize the second fused image in the photometric reference space of the first fused image; and combine the first fused image and the second fused image into an initial estimate of at least a portion of the high resolution image; and determining a high resolution image that when mapped through the forward imaging transformation matches the input images to within at least one predetermined criterion using the initial estimate of at least a portion of the high resolution image using the processor configured by software; wherein each forward imaging transformation corresponds to the manner in which each imager in the imaging array generated the input images; and wherein the high resolution image has a resolution that is greater than any of the input images.

2. The method of claim 1, wherein the first band of visible wavelengths and the second band of visible and non-visible wavelengths have some degree of overlap.

3. The method of claim 1, wherein the second band of visible and non-visible wavelengths includes green, red, and near-infrared light.

4. The method of claim 1, wherein: the first set of input images are captured by a first set of imagers from the plurality of imagers and the first set of imagers are sensitive to light in the first band of visible wavelengths; and the second set of input images are captured by a second set of imagers from the plurality of imagers and the second set of imagers are sensitive to light in the second band of visible and non-visible wavelengths.

5. The method of claim 4, wherein the processor being configured to combine image information from the first set of input images into a first fused image utilizes analog gain and noise information from the first set of imagers and the processor being configured to combine image information from the second set of input images into a second fused image utilizes analog gain and noise information from the second set of imagers.

6. The method of claim 1, wherein the first bilateral filter and the second bilateral filter utilize weights that are a function of both the photometric and geometric distance between a pixel and pixels in the neighborhood of the pixel.

7. The method of claim 1, wherein the first bilateral filter is a cross-channel bilateral filter utilizing weights determined for the second fused image.

8. The method of claim 1, wherein the first set of input images are captured by a first set of imagers from the plurality of imagers and the first bilateral filter is a cross-channel bilateral filter utilizing weights determined for the second fused image when an analog gain value of the first set of imagers is above a predetermined threshold.

9.
The method of claim 1, wherein normalizing the second fused image in the photometric reference space of the first fused image comprises applying gains and offsets to pixels of the second fused image.

10. The method of claim 9, wherein the gain for each pixel of the second fused image is determined by the equation: $\hat{a} = \frac{\left[\sum_{r}\sum_{c} g(r,c)\,e(r,c)\right] - N_r N_c\,\bar{g}\,\bar{e}}{\left[\sum_{r}\sum_{c} e^{2}(r,c)\right] - N_r N_c\,\bar{e}^{2}}$, and the bias for each pixel of the second fused image is determined by the equation: $\hat{b} = \bar{g} - \hat{a}\,\bar{e}$, where $\bar{e} = \frac{1}{N_r N_c}\sum_{r}\sum_{c} e(r,c)$, $\bar{g} = \frac{1}{N_r N_c}\sum_{r}\sum_{c} g(r,c)$, $e$ is the second fused image, $g$ is the first fused image, $N_r$ and $N_c$ are the number of pixels horizontally and vertically of the neighborhood of pixels around the pixel, and $r$ and $c$ are row and column indices into the images within the bounds defined by $N_r$ and $N_c$.

11. The method of claim 1, wherein determining an initial estimate of at least a portion of a high resolution image using a processor configured by software further comprises the processor being configured to cross-channel normalize the first fused image in the photometric reference space of the second fused image.

12. The method of claim 11, wherein the processor being configured to cross-channel normalize the first fused image in the photometric reference space of the second fused image comprises the processor being configured to apply gains and offsets to pixels of the first fused image.

13. The method of claim 12, wherein the gain for each pixel of the first fused image is determined by the equation: $\hat{a} = \frac{\left[\sum_{r}\sum_{c} g(r,c)\,e(r,c)\right] - N_r N_c\,\bar{g}\,\bar{e}}{\left[\sum_{r}\sum_{c} e^{2}(r,c)\right] - N_r N_c\,\bar{e}^{2}}$, and the bias for each pixel of the first fused image is determined by the equation: $\hat{b} = \bar{g} - \hat{a}\,\bar{e}$, where $\bar{e} = \frac{1}{N_r N_c}\sum_{r}\sum_{c} e(r,c)$, $\bar{g} = \frac{1}{N_r N_c}\sum_{r}\sum_{c} g(r,c)$, $e$ is the first fused image, $g$ is the second fused image, $N_r$ and $N_c$ are the number of pixels horizontally and vertically of the neighborhood of pixels around the pixel, and $r$ and $c$ are row and column indices into the images within the bounds defined by $N_r$ and $N_c$.

14. The method of claim 1, wherein the processor being configured to normalize the second fused image in the photometric reference space of the first fused image comprises the processor being configured to: select a first pixel of interest in the second fused image and a first collection of similar pixels in the neighborhood of the first pixel of interest; select a second pixel of interest in the first fused image corresponding to the first pixel of interest and a second collection of similar pixels in the neighborhood of the second pixel of interest; determine the intersection of the first collection of similar pixels and the second collection of similar pixels; calculate gain and offset values using the intersection of the two collections; and apply the gain and offset values to the appropriate pixels in the second fused image.

15. The method of claim 14, where the intersection of the first collection of similar pixels and the second collection of similar pixels is the set of pixels in the first and second collections having the same corresponding locations in each of the first and second fused images.

16. The method of claim 14, wherein the gain for each pixel in the intersection of the two collections within the second fused image is determined by the equation: $\hat{a} = \frac{\left[\sum_{r}\sum_{c} g(r,c)\,e(r,c)\right] - N_r N_c\,\bar{g}\,\bar{e}}{\left[\sum_{r}\sum_{c} e^{2}(r,c)\right] - N_r N_c\,\bar{e}^{2}}$, and the bias for each pixel in the intersection of the two collections within the second fused image is determined by the equation: $\hat{b} = \bar{g} - \hat{a}\,\bar{e}$, where $\bar{e} = \frac{1}{N_r N_c}\sum_{r}\sum_{c} e(r,c)$, $\bar{g} = \frac{1}{N_r N_c}\sum_{r}\sum_{c} g(r,c)$, $e$ is the second fused image, $g$ is the first fused image, $N_r$ and $N_c$ are the number of pixels horizontally and vertically of the neighborhood of pixels around the pixel, and $r$ and $c$ are row and column indices into the images within the bounds defined by $N_r$ and $N_c$.

17.
An array camera configured to generate a high resolution image of a scene using an imager array including a plurality of imagers that each capture an image of the scene, and a forward imaging transformation for each imager, the array camera comprising: an imager array including a plurality of imagers; and a processor configured by software to: obtain input images captured by the plurality of imagers, where a first set of input images includes image information captured in a first band of visible wavelengths and a second set of input images includes image information captured in a second band of visible wavelengths and non-visible wavelengths; determine an initial estimate of at least a portion of a high resolution image by: combining image information from the first set of input images into a first fused image; combining image information from the second set of input images into a second fused image, wherein the first fused image and the second fused image have the same resolution and the resolution is higher than the resolution of any of the input images; spatially registering the first fused image and the second fused image; denoising the first fused image using a first bilateral filter; denoising the second fused image using a second bilateral filter; normalizing the second fused image in the photometric reference space of the first fused image; combining the first fused image and the second fused image into an initial estimate of at least a portion of the high resolution image; and determine a high resolution image that when mapped through the forward imaging transformation matches the input images to within at least one predetermined criterion using the initial estimate of at least a portion of the high resolution image using the processor configured by software; wherein each forward imaging transformation corresponds to the manner in which each imager in the imaging array generated the input images; and wherein the high resolution image has a resolution that is greater than any of the input images.

18. The array camera of claim 17, wherein the first band of visible wavelengths and the second band of visible and non-visible wavelengths have some degree of overlap.

19. The array camera of claim 17, wherein the second band of visible and non-visible wavelengths includes green, red, and near-infrared light.

20. The array camera of claim 17, wherein: the first set of input images are captured by a first set of imagers from the plurality of imagers and the first set of imagers are sensitive to light in the first band of visible wavelengths; and the second set of input images are captured by a second set of imagers from the plurality of imagers and the second set of imagers are sensitive to light in the second band of visible and non-visible wavelengths.

21. The array camera of claim 20, wherein combining image information from the first set of input images into a first fused image utilizes analog gain and noise information from the first set of imagers and combining image information from the second set of input images into a second fused image utilizes analog gain and noise information from the second set of imagers.

22. The array camera of claim 17, wherein the first bilateral filter and the second bilateral filter utilize weights that are a function of both the photometric and geometric distance between a pixel and pixels in the neighborhood of the pixel.

23. The array camera of claim 17, wherein the first bilateral filter is a cross-channel bilateral filter utilizing weights determined for the second fused image.

24. The array camera of claim 17, wherein the first set of input images are captured by a first set of imagers from the plurality of imagers and the first bilateral filter is a cross-channel bilateral filter utilizing weights determined for the second fused image when an analog gain value of the first set of imagers is above a predetermined threshold.

25.
The array camera of claim 17, wherein normalizing the second fused image in the photometric reference space of the first fused image comprises applying gains and offsets to pixels of the second fused image.

26. The array camera of claim 25, wherein the gain for each pixel of the second fused image is determined by the equation: $\hat{a} = \frac{\left[\sum_{r}\sum_{c} g(r,c)\,e(r,c)\right] - N_r N_c\,\bar{g}\,\bar{e}}{\left[\sum_{r}\sum_{c} e^{2}(r,c)\right] - N_r N_c\,\bar{e}^{2}}$, and the bias for each pixel of the second fused image is determined by the equation: $\hat{b} = \bar{g} - \hat{a}\,\bar{e}$, where $\bar{e} = \frac{1}{N_r N_c}\sum_{r}\sum_{c} e(r,c)$, $\bar{g} = \frac{1}{N_r N_c}\sum_{r}\sum_{c} g(r,c)$, $e$ is the second fused image, $g$ is the first fused image, $N_r$ and $N_c$ are the number of pixels horizontally and vertically of the neighborhood of pixels around the pixel, and $r$ and $c$ are row and column indices into the images within the bounds defined by $N_r$ and $N_c$.

27. The array camera of claim 17, wherein the processor is further configured to cross-channel normalize the first fused image in the photometric reference space of the second fused image.

28. The array camera of claim 27, wherein the processor being configured to cross-channel normalize the first fused image in the photometric reference space of the second fused image comprises the processor being configured to apply gains and offsets to pixels of the first fused image.

29. The array camera of claim 28, wherein the gain for each pixel of the first fused image is determined by the equation: $\hat{a} = \frac{\left[\sum_{r}\sum_{c} g(r,c)\,e(r,c)\right] - N_r N_c\,\bar{g}\,\bar{e}}{\left[\sum_{r}\sum_{c} e^{2}(r,c)\right] - N_r N_c\,\bar{e}^{2}}$, and the bias for each pixel of the first fused image is determined by the equation: $\hat{b} = \bar{g} - \hat{a}\,\bar{e}$, where $\bar{e} = \frac{1}{N_r N_c}\sum_{r}\sum_{c} e(r,c)$, $\bar{g} = \frac{1}{N_r N_c}\sum_{r}\sum_{c} g(r,c)$, $e$ is the first fused image, $g$ is the second fused image, $N_r$ and $N_c$ are the number of pixels horizontally and vertically of the neighborhood of pixels around the pixel, and $r$ and $c$ are row and column indices into the images within the bounds defined by $N_r$ and $N_c$.

30. The array camera of claim 17, wherein normalizing the second fused image in the photometric reference space of the first fused image comprises: selecting a first pixel of interest in the second fused image and a first collection of similar pixels in the neighborhood of the first pixel of interest; selecting a second pixel of interest in the first fused image corresponding to the first pixel of interest and a second collection of similar pixels in the neighborhood of the second pixel of interest; determining the intersection of the first collection of similar pixels and the second collection of similar pixels; calculating gain and offset values using the intersection of the two collections; applying the gain and offset values to the appropriate pixels in the second fused image.

31. The array camera of claim 30, where the intersection of the first collection of similar pixels and the second collection of similar pixels is the set of pixels in the first and second collections having the same corresponding locations in each of the first and second fused images.

32. The array camera of claim 30, wherein the gain for each pixel in the intersection of the two collections within the second fused image is determined by the equation: $\hat{a} = \frac{\left[\sum_{r}\sum_{c} g(r,c)\,e(r,c)\right] - N_r N_c\,\bar{g}\,\bar{e}}{\left[\sum_{r}\sum_{c} e^{2}(r,c)\right] - N_r N_c\,\bar{e}^{2}}$, and the bias for each pixel in the intersection of the two collections within the second fused image is determined by the equation: $\hat{b} = \bar{g} - \hat{a}\,\bar{e}$, where $\bar{e} = \frac{1}{N_r N_c}\sum_{r}\sum_{c} e(r,c)$, $\bar{g} = \frac{1}{N_r N_c}\sum_{r}\sum_{c} g(r,c)$, $e$ is the second fused image, $g$ is the first fused image, $N_r$ and $N_c$ are the number of pixels horizontally and vertically of the neighborhood of pixels around the pixel, and $r$ and $c$ are row and column indices into the images within the bounds defined by $N_r$ and $N_c$.
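Two operations that recur throughout the claims can be illustrated directly: the bilateral-filter weight of claims 6 and 22, which is a function of both photometric and geometric distance, and the least-squares gain/offset fit of claims 10, 13, 16, 26, 29, and 32. The sketch below is a hedged pure-Python illustration; the Gaussian form of the weights and all parameter names are conventional assumptions, not details taken from the patent.

```python
import math

def bilateral_weight(p, q, dy, dx, sigma_r=25.0, sigma_s=1.5):
    # Claims 6/22: the weight depends on both the photometric distance
    # (value difference p - q) and the geometric distance (offset dy, dx).
    # Gaussian falloffs are an assumed, conventional choice.
    photometric = math.exp(-((p - q) ** 2) / (2.0 * sigma_r ** 2))
    geometric = math.exp(-(dy * dy + dx * dx) / (2.0 * sigma_s ** 2))
    return photometric * geometric

def gain_and_bias(e_vals, g_vals):
    # Claims 10/26: least-squares fit of g ~ a*e + b over the flattened
    # neighborhood samples, returning (a_hat, b_hat) as in the equations.
    n = len(e_vals)
    e_bar = sum(e_vals) / n
    g_bar = sum(g_vals) / n
    num = sum(g * e for g, e in zip(g_vals, e_vals)) - n * g_bar * e_bar
    den = sum(e * e for e in e_vals) - n * e_bar * e_bar
    a_hat = num / den
    b_hat = g_bar - a_hat * e_bar
    return a_hat, b_hat
```

As a sanity check, identical pixel values at zero spatial offset give a weight of exactly 1, and a perfectly linear relationship g = 2e + 1 over the neighborhood is recovered as gain 2 and bias 1.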
Patents Cited by This Patent (204)
Hines, Stephen P, 3-D motion-parallax portable display software application.
Wilburn, Bennett; Joshi, Neel; Levoy, Marc C.; Horowitz, Mark, Apparatus and method for capturing a scene using staggered triggering of dense camera arrays.
Iwase Toshihiro (Nara JPX) Kanekura Hiroshi (Yamatokouriyama JPX), Apparatus for and method of converting a sampling frequency according to a data driven type processing.
Boisvert, David Michael; McMahon, Andrew Kenneth John, CCD output processing stage that amplifies signals from colored pixels based on the conversion efficiency of the colored pixels.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images including occlusions captured by arrays of luma and chroma cameras.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images including occlusions captured by camera arrays.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images including occlusions focused on an image sensor by a lens stack array.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Capturing and processing of images using monolithic camera array with heterogeneous imagers.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images using monolithic camera array with heterogeneous imagers.
Yamashita,Syugo; Murata,Haruhiko; Iinuma,Toshiya; Nakashima,Mitsuo; Mori,Takayuki, Device and method for converting two-dimensional video to three-dimensional video.
Pertsel, Shimon; Meitav, Ohad; Pozniansky, Eli; Galil, Erez, Digital camera with selectively increased dynamic range by control of parameters during image acquisition.
Ward, Gregory John; Seetzen, Helge; Heidrich, Wolfgang, Electronic camera having multiple sensors for capturing high dynamic range images and related methods.
Hornback,Bert; Harwood,Doug; Boyd,W. Eric; Carlson,Randy, Imaging device with multiple fields of view incorporating memory-based temperature compensation of an uncooled focal plane array.
Abell Gurdon R. (West Woodstock CT) Cook Francis J. (Topsfield MA) Howes Peter D. (Sudbury MA), Method and apparatus for arraying image sensor modules.
Sawhney,Harpreet Singh; Tao,Hai; Kumar,Rakesh; Hanna,Keith, Method and apparatus for synthesizing new video and/or still imagery from a collection of real video and/or still imagery.
Burt Peter J. (Mercer County NJ) van der Wal Gooitzen S. (Mercer NJ) Kolczynski Raymond J. (Mercer NJ) Hingorani Rajesh (Mercer NJ), Method for fusing images and apparatus therefor.
Burt Peter J. (Princeton NJ) van der Wal Gooitzen S. (Hopewell Borough ; Mercer County NJ) Kolczynski Raymond J. (Hamilton Township ; Mercer County NJ) Hingorani Rajesh (West Windsor Township ; Merce, Method for fusing images and apparatus therefor.
Han, Hee-chul; Choi, Yang-lim; Cho, Seung-ki, Method of generating image data by an image device including a plurality of lenses and apparatus for generating image data.
Alexander David H. (Santa Monica CA) Hershman George H. (Carlsbad CA) Jack Michael D. (Carlsbad CA) Koda N. John (Vista CA) Lloyd Randahl B. (San Marcos CA), Monolithic imager for near-IR.
Hornbaker ; III Cecil V. (New Carrolton MD) Driggers Thomas C. (Falls Church VA) Bindon Edward W. (Fairfax VA), Scanning apparatus using multiple CCD arrays and related method.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, System and methods for measuring depth using an array camera employing a bayer filter.
Lelescu, Dan; Molina, Gabriel; Venkataraman, Kartik, Systems and methods for dynamic refocusing of high resolution images generated using images captured by a plurality of imagers.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for estimating depth and visibility from a reference viewpoint for pixels in a set of images captured from different viewpoints.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for generating depth maps and corresponding confidence maps indicating depth estimation reliability.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Systems and methods for generating depth maps using a set of images containing a baseline image.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for generating depth maps using light focused on an image sensor by a lens element array.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for measuring depth in the presence of occlusions using a subset of images.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for measuring depth using an array of independently controllable cameras.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for parallax detection and correction in images captured using array cameras that contain occlusions using subsets of images to perform depth estimation.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for parallax measurement using camera arrays incorporating 3 x 3 camera configurations.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for performing depth estimation using image data from multiple spectral channels.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for performing post capture refocus using images captured by camera arrays.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Systems and methods for synthesizing higher resolution images using a set of images containing a baseline image.
Ludwig, Lester F., Vignetted optoelectronic array for use in synthetic image formation via signal processing, lensless cameras, and integrated camera-displays.
Rieger Albert,DEX ; Barclay David ; Chapman Steven ; Kellner Heinz-Andreas,DEX ; Reibl Michael,DEX ; Rydelek James G. ; Schweizer Andreas,DEX, Watertight body for accommodating a photographic camera.
Venkataraman, Kartik; Gallagher, Paul; Jain, Ankit K.; Nisenzon, Semyon; Lelescu, Dan; Ciurea, Florian; Molina, Gabriel, Autofocus system for a conventional camera that uses depth information from an array camera.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images including occlusions focused on an image sensor by a lens stack array.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images using camera array incorperating Bayer cameras having different fields of view.
Srikanth, Manohar; Ramamoorthi, Ravi; Venkataraman, Kartik; Chatterjee, Priyam, System and methods for depth regularization and semiautomatic interactive matting using RGB-D images.
Nayar, Shree; Venkataraman, Kartik; Pain, Bedabrata; Lelescu, Dan, Systems and methods for controlling aliasing in images captured by an array camera for use in super resolution processing using pixel apertures.
Duparre, Jacques; McMahon, Andrew Kenneth John; Lelescu, Dan; Venkataraman, Kartik; Molina, Gabriel, Systems and methods for detecting defective camera arrays and optic arrays.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for estimating depth and visibility from a reference viewpoint for pixels in a set of images captured from different viewpoints.
Venkataraman, Kartik; Lelescu, Dan; Molina, Gabriel, Systems and methods for generating compressed light field representation data using captured light fields, array geometry, and parallax information.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for generating depth maps using a camera arrays incorporating monochrome and color cameras.
Duparre, Jacques; McMahon, Andrew Kenneth John; Lelescu, Dan, Systems and methods for manufacturing camera modules using active alignment of lens stack arrays and sensors.
Venkataraman, Kartik; Huang, Yusong; Jain, Ankit K.; Chatterjee, Priyam, Systems and methods for performing high speed video capture and depth estimation using array cameras.
Lelescu, Dan; Duong, Thang, Systems and methods for synthesizing high resolution images using image deconvolution based on motion and depth information.
Venkataraman, Kartik; Nisenzon, Semyon; Chatterjee, Priyam; Molina, Gabriel, Systems and methods for synthesizing images from image data captured by an array camera using restricted depth of field depth maps in which depth estimation precision varies.