Methods and apparatus for reducing plenoptic camera artifacts
Classification and Bibliographic Information

Country / Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th edition):
  H04N-005/225
  H04N-005/232
  G02B-027/10
  G03B-013/00
Application number: US-0690871 (2010-01-20)
Registration number: US-8189089 (2012-05-29)
Inventors: Georgiev, Todor G.; Lumsdaine, Andrew
Applicant: Adobe Systems Incorporated
Agent: Kowert, Robert C.
Citation information: cited by 172 patents; cites 33 patents
Abstract
Methods and apparatus for reducing plenoptic camera artifacts. A first method is based on careful design of the optical system of the focused plenoptic camera to reduce artifacts that result in differences in depth in the microimages. A second method is computational; a focused plenoptic camera rendering algorithm is provided that corrects for artifacts resulting from differences in depth in the microimages. While both the artifact-reducing focused plenoptic camera design and the artifact-reducing rendering algorithm work by themselves to reduce artifacts, the two approaches may be combined.
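The optical design condition stated in the claims (the microlens-to-focal-plane distance a must exceed F²/A, with the microlens magnification M = a/b kept at magnitude 10 or less) can be checked numerically. The sketch below uses hypothetical example values; the function names are illustrative and not from the patent.

```python
def min_microlens_offset(F, A):
    """Minimum distance a from the microlenses to the objective's focal
    plane required by the design condition a > F^2 / A."""
    return F * F / A

def microlens_magnification(a, b):
    """Magnification of a microlens focused on the main image plane,
    where a is its distance to the focal plane and b its distance to
    the photosensor."""
    return a / b

# Hypothetical example: 80 mm objective lens, subject 2 m away.
F, A = 80.0, 2000.0                   # millimetres
a_min = min_microlens_offset(F, A)    # F^2 / A = 3.2 mm
a, b = 4.0, 0.5                       # chosen so that a > a_min
M = microlens_magnification(a, b)     # 8.0, within the claimed 5..10 range
```

The example merely illustrates the inequality; real parameter values would come from the camera's actual optical layout.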
Representative Claims
1. A camera, comprising:
a photosensor configured to capture light projected onto the photosensor;
an objective lens, wherein the objective lens is configured to refract light from a scene located in front of the camera to form an image of the scene at a focal plane of the objective lens, wherein the focal plane is located at focal length F of the objective lens, and wherein the scene is at distance A from the objective lens;
a microlens array positioned between the objective lens and the photosensor, wherein the microlens array comprises a plurality of microlenses, wherein the plurality of microlenses are focused on the focal plane and not on the objective lens, and wherein a is the distance from the microlenses to the focal plane;
wherein each microlens of the microlens array is configured to project a separate portion of the image of the scene formed at the focal plane on which the microlens is focused onto a separate location on the photosensor; and
wherein the camera is configured so that the value of a is greater than the focal length F squared divided by the distance A.

2. The camera as recited in claim 1, wherein b is the distance from the microlenses to the photosensor, wherein magnification M of the microlenses is defined by M = a/b, where the magnitude of M is 10 or less.

3. The camera as recited in claim 2, where the magnitude of M is between 5 and 10, inclusive.

4. The camera as recited in claim 1, wherein the photosensor is configured to capture a flat comprising the separate portions of the image of the scene projected onto the photosensor by the microlens array, wherein each of the separate portions is in a separate region of the flat.

5. The camera as recited in claim 4, wherein the camera is configured to store the captured flat to a memory device.

6. The camera as recited in claim 4, wherein the captured flat is configured to be processed according to an artifact-reducing rendering module to generate a final image, wherein the module is configured to:
for each of the plurality of separate portions in the captured flat, determine a magnification value Mi via registration with one or more neighbor portions;
for each of the plurality of separate portions: magnify the portion according to its respective magnification value Mi to generate a magnified portion; and extract a crop from the magnified portion; and
appropriately assemble the crops extracted from the magnified portions to produce the final image.

7. A method, comprising:
obtaining a flat comprising a plurality of separate portions of an image of a scene, wherein each of the plurality of portions is in a separate region of the flat;
determining a magnification value Mi for each of the plurality of separate portions;
for each of the plurality of separate portions: magnifying the portion according to its respective magnification value Mi to generate a magnified portion; and extracting a crop from the magnified portion; and
appropriately assembling the crops extracted from the magnified portions to produce a final high-resolution image.

8. The method as recited in claim 7, wherein said determining a magnification value Mi for each of the plurality of separate portions comprises:
determining a value ki for each of the plurality of separate portions, where ki is a measure of depth of the particular portion; and
determining the magnification value Mi for each of the plurality of separate portions according to the determined value of ki for the respective portion.

9. The method as recited in claim 8, wherein said determining a value ki for each of the plurality of separate portions comprises performing registration on each of the plurality of separate portions to determine the value ki for the respective portion.

10. The method as recited in claim 9, wherein said performing registration on a given portion comprises registering the given portion with two or more adjacent portions by overlaying the given portion on each of the adjacent portions and shifting the given portion to align the given portion with the respective adjacent portion.

11. The method as recited in claim 10, wherein the value ki is determined according to amounts of shift needed to align the given portion with each of the adjacent portions.

12. The method as recited in claim 7, further comprising capturing the flat with a camera, wherein said capturing comprises:
receiving light from the scene at an objective lens of the camera, wherein the scene is at distance A from the objective lens;
refracting light from the objective lens to form an image of the scene at a focal plane of the objective lens, wherein the focal plane is located at focal length F of the objective lens;
receiving light from the focal plane at a microlens array positioned between the objective lens and a photosensor of the camera, wherein the microlens array comprises a plurality of microlenses, wherein the plurality of microlenses are focused on the focal plane and not on the objective lens, and wherein a is the distance from the microlenses to the focal plane; and
receiving light from the microlens array at the photosensor, wherein the photosensor receives a separate portion of the image of the scene formed at the focal plane from each microlens of the microlens array at a separate location on the photosensor;
wherein the camera is configured so that the value of a is greater than the focal length F squared divided by the distance A.

13. The method as recited in claim 12, wherein b is the distance from the microlenses to the photosensor, wherein magnification M of the microlenses is defined by M = a/b, where the magnitude of M is 10 or less.

14. A non-transitory computer-readable storage medium storing program instructions, wherein the program instructions are computer-executable to implement:
obtaining a flat comprising a plurality of separate portions of an image of a scene, wherein each of the plurality of portions is in a separate region of the flat;
determining a magnification value Mi for each of the plurality of separate portions;
for each of the plurality of separate portions: magnifying the portion according to its respective magnification value Mi to generate a magnified portion; and extracting a crop from the magnified portion; and
appropriately assembling the crops extracted from the magnified portions to produce a final high-resolution image.

15. The computer-readable storage medium as recited in claim 14, wherein, in said determining a magnification value Mi for each of the plurality of separate portions, the program instructions are computer-executable to implement:
determining a value ki for each of the plurality of separate portions, where ki is a measure of depth of the particular portion; and
determining the magnification value Mi for each of the plurality of separate portions according to the determined value of ki for the respective portion.

16. The computer-readable storage medium as recited in claim 15, wherein, in said determining a value ki for each of the plurality of separate portions, the program instructions are computer-executable to implement performing registration on each of the plurality of separate portions to determine the value ki for the respective portion.

17. The computer-readable storage medium as recited in claim 16, wherein, in said performing registration on a given portion, the program instructions are computer-executable to implement registering the given portion with two or more adjacent portions by overlaying the given portion on each of the adjacent portions and shifting the given portion to align the given portion with the respective adjacent portion.

18. The computer-readable storage medium as recited in claim 17, wherein the value ki is determined according to amounts of shift needed to align the given portion with each of the adjacent portions.

19. The computer-readable storage medium as recited in claim 14, wherein the flat is captured with a camera, wherein said capturing comprises:
receiving light from the scene at an objective lens of the camera, wherein the scene is at distance A from the objective lens;
refracting light from the objective lens to form an image of the scene at a focal plane of the objective lens, wherein the focal plane is located at focal length F of the objective lens;
receiving light from the focal plane at a microlens array positioned between the objective lens and a photosensor of the camera, wherein the microlens array comprises a plurality of microlenses, wherein the plurality of microlenses are focused on the focal plane and not on the objective lens, and wherein a is the distance from the microlenses to the focal plane; and
receiving light from the microlens array at the photosensor, wherein the photosensor receives a separate portion of the image of the scene formed at the focal plane from each microlens of the microlens array at a separate location on the photosensor;
wherein the camera is configured so that the value of a is greater than the focal length F squared divided by the distance A.

20. The computer-readable storage medium as recited in claim 19, wherein b is the distance from the microlenses to the photosensor, wherein magnification M of the microlenses is defined by M = a/b, where the magnitude of M is 10 or less.
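The rendering method of claims 7–11 (determine a per-microimage magnification Mi via registration with neighboring microimages, magnify each microimage, extract a crop, and assemble the crops) can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: registration is reduced to a brute-force horizontal shift search over overlapping edges, and the mapping from the measured shift ki to Mi is an assumed placeholder.

```python
import numpy as np

def best_shift(patch, neighbor, max_shift=8):
    """Register two horizontally adjacent microimages by brute-force
    search for the overlap shift minimizing squared difference."""
    h, w = patch.shape
    best, best_err = 1, np.inf
    for s in range(1, min(max_shift, w - 1) + 1):
        err = np.mean((patch[:, w - s:] - neighbor[:, :s]) ** 2)
        if err < best_err:
            best, best_err = s, err
    return best

def magnify_and_crop(patch, M, crop):
    """Magnify a microimage by factor M (nearest-neighbor resampling)
    and extract a central crop of size crop x crop."""
    h, w = patch.shape
    ys = (np.arange(int(h * M)) / M).astype(int)
    xs = (np.arange(int(w * M)) / M).astype(int)
    big = patch[np.ix_(ys, xs)]
    cy, cx = big.shape[0] // 2, big.shape[1] // 2
    half = crop // 2
    return big[cy - half:cy + half, cx - half:cx + half]

def render(flat, n, patch_size, crop):
    """Assemble a final image from an n x n grid of microimages in a
    captured flat: estimate each microimage's magnification from
    registration with its right-hand neighbor, magnify, crop, tile."""
    out = np.zeros((n * crop, n * crop))
    for i in range(n):
        for j in range(n):
            p = flat[i*patch_size:(i+1)*patch_size, j*patch_size:(j+1)*patch_size]
            if j + 1 < n:
                q = flat[i*patch_size:(i+1)*patch_size, (j+1)*patch_size:(j+2)*patch_size]
                k = best_shift(p, q)            # depth proxy k_i from the shift
                M = patch_size / max(k, 1)      # assumed mapping k_i -> M_i
            else:
                M = 1.0                         # last column: no right neighbor
            M = min(max(M, 1.0), 10.0)          # keep the magnitude bounded
            out[i*crop:(i+1)*crop, j*crop:(j+1)*crop] = magnify_and_crop(p, M, crop)
    return out
```

In a real renderer the registration would use all adjacent microimages (claim 10) and a calibrated relation between shift and microlens magnification; the grid layout, nearest-neighbor resampling, and clipping bounds here are simplifications for illustration.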
Patents cited by this patent (33)
Georgiev Todor, 3D graphics based on images and morphing.
Yamagata, Michihiro; Okayama, Hiroaki; Boku, Kazutake; Tanaka, Yasuhiro; Hayashi, Kenichi; Fushimi, Yoshimasa; Murata, Shigeki; Hayashi, Takayuki, Imaging device including a plurality of lens elements and a imaging sensor.
de Montebello Roger L. (New York NY) Globus Ronald P. (New York NY) Buck Howard S. (New York NY), Integral photography apparatus and method of forming same.
Maguire, Jr., Francis J., Apparatus for image display with multi-focal length progressive lens or multiple discrete lenses each having different fixed focal lengths or a variable focal length.
Venkataraman, Kartik; Gallagher, Paul; Jain, Ankit; Nisenzon, Semyon; Lelescu, Dan; Ciurea, Florian; Molina, Gabriel, Array cameras including an array camera module augmented with a separate camera.
Duparre, Jacques; Lelescu, Dan; Venkataraman, Kartik, Array cameras incorporating monolithic array camera modules with high MTF lens stacks for capture of images used in super-resolution processing.
Duparre, Jacques; Lelescu, Dan; Venkataraman, Kartik, Array cameras incorporating optics with modulation transfer functions greater than sensor Nyquist frequency for capture of images used in super-resolution processing.
Venkataraman, Kartik; Gallagher, Paul; Jain, Ankit K.; Nisenzon, Semyon; Lelescu, Dan; Ciurea, Florian; Molina, Gabriel, Autofocus system for a conventional camera that uses depth information from an array camera.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Capturing and processing of images captured by camera arrays including cameras dedicated to sampling luma and cameras dedicated to sampling chroma.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Capturing and processing of images captured by camera arrays including heterogeneous optics.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images including occlusions captured by arrays of luma and chroma cameras.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images including occlusions captured by camera arrays.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images including occlusions focused on an image sensor by a lens stack array.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images using camera array incorperating Bayer cameras having different fields of view.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Capturing and processing of images using monolithic camera array with heterogeneous imagers.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images using monolithic camera array with heterogeneous imagers.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Capturing and processing of near-IR images including occlusions using camera arrays incorporating near-IR light sources.
Sabater, Neus; Drazic, Valter; Sandri, Gustavo Luiz, Method and device for estimating disparity associated with views of a scene acquired with a plenoptic camera.
Georgiev, Todor G.; Chunev, Georgi N., Methods and apparatus for rendering output images with simulated artistic effects from focused plenoptic camera data.
Georgiev, Todor G.; Lumsdaine, Andrew, Methods, apparatus, and computer-readable storage media for depth-based rendering of focused plenoptic camera data.
Knight, Timothy; Pitts, Colvin; Akeley, Kurt; Romanenko, Yuriy; Craddock, Carl (Warren), Optimization of optical systems for improved light field capture and manipulation.
Duparre, Jacques, Passive alignment of array camera modules constructed from lens stack arrays and sensors based upon alignment information obtained during manufacture of array camera modules using an active alignment process.
Srikanth, Manohar; Ramamoorthi, Ravi; Venkataraman, Kartik; Chatterjee, Priyam, System and methods for depth regularization and semiautomatic interactive matting using RGB-D images.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, System and methods for measuring depth using an array camera employing a bayer filter.
Nayar, Shree; Venkataraman, Kartik; Pain, Bedabrata; Lelescu, Dan, Systems and methods for controlling aliasing in images captured by an array camera for use in super resolution processing using pixel apertures.
Lelescu, Dan; Venkataraman, Kartik, Systems and methods for controlling aliasing in images captured by an array camera for use in super-resolution processing.
Duparré, Jacques, Systems and methods for correcting for warpage of a sensor array in an array camera module by introducing warpage into a focal plane of a lens stack array.
Duparre, Jacques; McMahon, Andrew Kenneth John; Lelescu, Dan; Venkataraman, Kartik; Molina, Gabriel, Systems and methods for detecting defective camera arrays and optic arrays.
Lelescu, Dan; Molina, Gabriel; Venkataraman, Kartik, Systems and methods for dynamic refocusing of high resolution images generated using images captured by a plurality of imagers.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for estimating depth and visibility from a reference viewpoint for pixels in a set of images captured from different viewpoints.
Venkataraman, Kartik; Lelescu, Dan; Molina, Gabriel, Systems and methods for generating compressed light field representation data using captured light fields, array geometry, and parallax information.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for generating depth maps and corresponding confidence maps indicating depth estimation reliability.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for generating depth maps using a camera arrays incorporating monochrome and color cameras.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Systems and methods for generating depth maps using a set of images containing a baseline image.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for generating depth maps using images captured by camera arrays incorporating cameras having different fields of view.
Duparre, Jacques; McMahon, Andrew Kenneth John; Lelescu, Dan, Systems and methods for manufacturing camera modules using active alignment of lens stack arrays and sensors.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for measuring depth in the presence of occlusions using a subset of images.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for measuring depth using an array of independently controllable cameras.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for measuring depth using images captured by a camera array including cameras surrounding a central camera.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for measuring depth using images captured by monolithic camera arrays including at least one bayer camera.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Systems and methods for normalizing image data captured by camera arrays.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for parallax measurement using camera arrays incorporating 3 x 3 camera configurations.
Venkataraman, Kartik; Huang, Yusong; Jain, Ankit K.; Chatterjee, Priyam, Systems and methods for performing high speed video capture and depth estimation using array cameras.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for performing post capture refocus using images captured by camera arrays.
Lelescu, Dan; Molina, Gabriel; Venkataraman, Kartik, Systems and methods for synthesizing high resolution images using a set of geometrically registered images.
Lelescu, Dan; Duong, Thang, Systems and methods for synthesizing high resolution images using image deconvolution based on motion and depth information.
Lelescu, Dan; Molina, Gabriel; Venkataraman, Kartik, Systems and methods for synthesizing high resolution images using images captured by an array of independently controllable imagers.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Systems and methods for synthesizing higher resolution images using a set of images containing a baseline image.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for synthesizing higher resolution images using images captured by camera arrays.
Venkataraman, Kartik; Nisenzon, Semyon; Chatterjee, Priyam; Molina, Gabriel, Systems and methods for synthesizing images from image data captured by an array camera using restricted depth of field depth maps in which depth estimation precision varies.