Methods and apparatus for reducing plenoptic camera artifacts
Country/Type: United States (US) Patent, Granted
IPC (7th edition): H04N-005/262; G02B-027/10; H04N-005/225
Application number: US-0466904 (2012-05-08)
Registration number: US-9316840 (2016-04-19)
Inventors: Georgiev, Todor G.; Lumsdaine, Andrew
Applicant: Adobe Systems Incorporated
Agent: Wolfe-SBMC
Citation information: cited by 1 patent; cites 92 patents
Abstract
Methods and apparatus for reducing plenoptic camera artifacts. A first method is based on careful design of the optical system of the focused plenoptic camera to reduce artifacts that result in differences in depth in the microimages. A second method is computational; a focused plenoptic camera rendering algorithm is provided that corrects for artifacts resulting from differences in depth in the microimages. While both the artifact-reducing focused plenoptic camera design and the artifact-reducing rendering algorithm work by themselves to reduce artifacts, the two approaches may be combined.
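The optical-design approach can be illustrated numerically. Claim 4 defines the microlens magnification as M = a / b, where a is the microlens-to-focal-plane distance and b is the microlens-to-photosensor distance; depth variation in the scene spreads a and hence produces a magnification difference ΔM, which the design keeps small relative to M (e.g. ΔM ≤ M/2 per claim 2). A minimal sketch with hypothetical distances (the specific millimeter values below are illustrative assumptions, not taken from the patent):

```python
def magnification(a_mm: float, b_mm: float) -> float:
    """Microlens magnification M = a / b (claim 4): distance to the focal
    plane divided by distance to the photosensor."""
    return a_mm / b_mm

def magnification_spread(a_min_mm: float, a_max_mm: float, b_mm: float) -> float:
    """Difference in magnification dM over a depth range [a_min, a_max]."""
    return magnification(a_max_mm, b_mm) - magnification(a_min_mm, b_mm)

# Hypothetical design numbers for illustration only:
b = 0.5                    # microlens-to-sensor distance (mm)
a_nominal = 4.0            # nominal microlens-to-focal-plane distance (mm)
a_min, a_max = 3.5, 4.5    # spread of a caused by depth variation in the scene

M = magnification(a_nominal, b)               # 8.0
dM = magnification_spread(a_min, a_max, b)    # 2.0

# The claim-2 criterion dM <= M/2 holds for this configuration.
assert dM <= 0.5 * M
```

Note that with b fixed, ΔM scales directly with the depth spread (a_max − a_min) / b, so shrinking b relative to the usable depth range tightens the criterion.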
Representative claims
1. A camera, comprising: a photosensor configured to capture light projected onto the photosensor; an objective lens, wherein the objective lens is configured to refract light from a scene located in front of the camera to form an image of the scene at a focal plane of the objective lens; a microlens array positioned between the objective lens and the photosensor, wherein the microlens array comprises a plurality of microlenses, wherein the plurality of microlenses are focused on the focal plane and not on the objective lens; wherein each microlens of the microlens array is configured to project a separate portion of the image of the scene formed at the focal plane on which the microlens is focused onto a separate location on the photosensor; and wherein the camera is configured so that difference in magnification ΔM of the microlenses at different depths in the image of the scene is less than magnification M of the microlenses.

2. The camera as recited in claim 1, wherein ΔM is less than or equal to one half of M.

3. The camera as recited in claim 1, wherein ΔM is less than or equal to one tenth of M.

4. The camera as recited in claim 1, wherein magnification M of the microlenses is distance from the microlenses to the focal plane divided by distance from the microlenses to the photosensor.

5. The camera as recited in claim 4, where M is 10 or less.

6. The camera as recited in claim 4, where M is between 5 and 10, inclusive.

7. The camera as recited in claim 1, wherein the photosensor is configured to capture a flat comprising the separate portions of the image of the scene projected onto the photosensor by the microlens array, wherein each of the separate portions is in a separate region of the flat.

8. The camera as recited in claim 7, wherein the camera further comprises a rendering module configured to: for each of the plurality of separate portions in the captured flat, determine a magnification value Mi for the respective portion via registration with one or more neighbor portions; render a final image of the scene from the flat according to the determined magnification values Mi of the plurality of separate portions.

9. A method, comprising: performing, by one or more computing devices: obtaining a flat comprising a plurality of separate portions of an image of a scene, wherein each of the plurality of portions is in a separate region of the flat; for each of the plurality of separate portions in the captured flat, determining a magnification value Mi for the respective portion; and rendering a final image of the scene from the flat according to the determined magnification values Mi of the plurality of separate portions.

10. The method as recited in claim 9, wherein said rendering a final image of the scene from the flat according to the determined magnification values Mi of the plurality of separate portions comprises: magnifying each of the plurality of separate portions according to its respective magnification value Mi to generate a plurality of magnified portions; extracting a crop from each magnified portion; and assembling the crops extracted from the magnified portions to produce the final image.

11. The method as recited in claim 9, wherein said determining a magnification value Mi for the respective portion comprises: performing registration of the respective portion with one or more neighbor portions to determine depth of the respective portion; and determining the magnification value Mi for the respective portion according to the determined depth for the respective portion.

12. The method as recited in claim 11, wherein the depth is determined according to an amount of shift needed to align the respective portion with each of the one or more neighbor portions during said registration.

13. The method as recited in claim 9, further comprising capturing the flat with a camera, wherein said capturing comprises: receiving light from the scene at an objective lens of the camera; refracting light from the objective lens to form an image of the scene at a focal plane of the objective lens; receiving light from the focal plane at a microlens array positioned between the objective lens and a photosensor of the camera, wherein the microlens array comprises a plurality of microlenses, wherein the plurality of microlenses are focused on the focal plane and not on the objective lens, and wherein a is distance from the microlenses to the focal plane; and receiving light from the microlens array at the photosensor, wherein the photosensor receives a separate portion of the image of the scene formed at the focal plane from each microlens of the microlens array at a separate location on the photosensor; wherein the camera is configured so that difference in magnification ΔM of the microlenses at different depths in the image of the scene is less than or equal to one half of magnification M of the microlenses.

14. The method as recited in claim 13, wherein magnification M of the microlenses is distance from the microlenses to the focal plane divided by distance from the microlenses to the photosensor, where M is 10 or less.

15. A non-transitory computer-readable storage medium storing program instructions, wherein the program instructions are computer-executable to implement: obtaining a flat comprising a plurality of separate portions of an image of a scene, wherein each of the plurality of portions is in a separate region of the flat; for each of the plurality of separate portions in the captured flat, determining a magnification value Mi for the respective portion; and rendering a final image of the scene from the flat according to the determined magnification values Mi of the plurality of separate portions.

16. The non-transitory computer-readable storage medium as recited in claim 15, wherein said rendering a final image of the scene from the flat according to the determined magnification values Mi of the plurality of separate portions comprises: magnifying each of the plurality of separate portions according to its respective magnification value Mi to generate a plurality of magnified portions; extracting a crop from each magnified portion; and assembling the crops extracted from the magnified portions to produce the final image.

17. The non-transitory computer-readable storage medium as recited in claim 15, wherein said determining a magnification value Mi for the respective portion comprises: performing registration of the respective portion with one or more neighbor portions to determine depth of the respective portion; and determining the magnification value Mi for the respective portion according to the determined depth for the respective portion.

18. The non-transitory computer-readable storage medium as recited in claim 17, wherein the depth is determined according to an amount of shift needed to align the respective portion with each of the one or more neighbor portions during said registration.

19. The non-transitory computer-readable storage medium as recited in claim 15, wherein the flat is captured with a camera, wherein said capturing comprises: receiving light from the scene at an objective lens of the camera; refracting light from the objective lens to form an image of the scene at a focal plane of the objective lens; receiving light from the focal plane at a microlens array positioned between the objective lens and a photosensor of the camera, wherein the microlens array comprises a plurality of microlenses, wherein the plurality of microlenses are focused on the focal plane and not on the objective lens, and wherein a is distance from the microlenses to the focal plane; and receiving light from the microlens array at the photosensor, wherein the photosensor receives a separate portion of the image of the scene formed at the focal plane from each microlens of the microlens array at a separate location on the photosensor; wherein the camera is configured so that difference in magnification ΔM of the microlenses at different depths in the image of the scene is less than or equal to one half of magnification M of the microlenses.

20. The non-transitory computer-readable storage medium as recited in claim 19, wherein magnification M of the microlenses is distance from the microlenses to the focal plane divided by distance from the microlenses to the photosensor, where M is 10 or less.
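Claims 9 through 12 describe the computational correction: estimate each microimage's depth from the shift needed to register it against a neighbor, derive a per-microimage magnification value Mi from that depth, then magnify, crop, and assemble. The sketch below is illustrative only, not the patent's implementation: it assumes square microimages on a regular grid, uses a simple sum-of-squared-differences registration, takes the registration shift itself as the usable patch size, and folds claim 10's magnify-then-crop into a crop-then-resize for brevity.

```python
import numpy as np

def registration_shift(micro, neighbor, max_shift):
    """Horizontal shift (pixels) that best aligns `micro` with its
    right-hand neighbor, found by minimizing mean squared difference
    over candidate shifts 1..max_shift (a proxy for depth, claim 12)."""
    best_s, best_err = 1, float("inf")
    for s in range(1, max_shift + 1):
        err = np.mean((micro[:, s:] - neighbor[:, :-s]) ** 2)
        if err < best_err:
            best_s, best_err = s, err
    return best_s

def render_flat(flat, tile):
    """Render a final image from a flat of `tile` x `tile` microimages.

    Per microimage: estimate its registration shift with the right
    neighbor, crop the central shift x shift patch, resize it to a
    common output size (the per-microimage magnification step), and
    tile the results into the final image (claims 9-11)."""
    rows, cols = flat.shape[0] // tile, flat.shape[1] // tile
    out_patch = tile // 2                  # common output patch size
    out = np.zeros((rows * out_patch, cols * out_patch))
    for r in range(rows):
        for c in range(cols):
            micro = flat[r*tile:(r+1)*tile, c*tile:(c+1)*tile]
            if c + 1 < cols:
                nb = flat[r*tile:(r+1)*tile, (c+1)*tile:(c+2)*tile]
                shift = registration_shift(micro, nb, tile - 1)
            else:
                shift = out_patch          # no right neighbor: use default
            # Crop the central shift x shift patch of the microimage.
            lo = (tile - shift) // 2
            patch = micro[lo:lo+shift, lo:lo+shift]
            # Magnify to the common patch size via nearest-neighbor resampling.
            idx = np.arange(out_patch) * shift // out_patch
            patch = patch[np.ix_(idx, idx)]
            out[r*out_patch:(r+1)*out_patch,
                c*out_patch:(c+1)*out_patch] = patch
    return out
```

Because each microimage gets its own shift, and hence its own effective magnification, portions at different depths are rescaled to a common size before assembly, which is what suppresses the depth-dependent artifacts the claims target.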
Patents cited by this patent (92)
Georgiev Todor, 3D graphics based on images and morphing.
Loce, Robert P. (Rochester NY); Cianciosi, Michael S. (Rochester NY); Kingsley, Jeffrey D. (Williamson NY), Image resolution conversion method that employs statistically generated multiple morphological filters.
Yamagata, Michihiro; Okayama, Hiroaki; Boku, Kazutake; Tanaka, Yasuhiro; Hayashi, Kenichi; Fushimi, Yoshimasa; Murata, Shigeki; Hayashi, Takayuki, Imaging device including a plurality of lens elements and a imaging sensor.
de Montebello, Roger L. (New York NY); Globus, Ronald P. (New York NY); Buck, Howard S. (New York NY), Integral photography apparatus and method of forming same.
Mindler, Robert F.; Calkins, Guy T., Method and apparatus for thermal printing of longer length images by the use of multiple dye color patch triads or quads.
Vetro, Anthony; Yea, Sehoon; Matusik, Wojciech; Pfister, Hanspeter; Zwicker, Matthias, Method and system for acquiring, encoding, decoding and displaying 3D light fields.
Georgiev, Todor G.; Chunev, Georgi N., Methods and apparatus for rendering output images with simulated artistic effects from focused plenoptic camera data.
Georgiev, Todor G.; Lumsdaine, Andrew, Methods, apparatus, and computer-readable storage media for depth-based rendering of focused plenoptic camera data.
Corle, Timothy R. (Santa Clara County CA); Kino, Gordon S. (Santa Clara County CA); Mansfield, Scott M. (San Mateo County CA), Optical recording system employing a solid immersion lens.
Patton, David L.; Spoonhower, John P.; Bohan, Anne E.; Paz-Pujalt, Gustavo R., Solid immersion lens array and methods for producing a solid immersion lens array.