IPC Classification Information
Country / Type | United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.) |
Application Number | US-0425306 (2012-03-20)
Registration Number | US-8380060 (2013-02-19)
Inventors / Address | - Georgiev, Todor G.
| - Lumsdaine, Andrew
Applicant / Address | - Adobe Systems Incorporated
Citation Info | Times cited: 15 / Patents cited: 34
Abstract
Method and apparatus for full-resolution light-field capture and rendering. A radiance camera is described in which the microlenses in a microlens array are focused on the image plane of the main lens instead of on the main lens, as in conventional plenoptic cameras. The microlens array may be located at distances greater than f from the photosensor, where f is the focal length of the microlenses. Radiance cameras in which the distance of the microlens array from the photosensor is adjustable, and in which other characteristics of the camera are adjustable, are described. Digital and film embodiments of the radiance camera are described. A full-resolution light-field rendering method may be applied to light-fields captured by a radiance camera to render higher-resolution output images than are possible with conventional plenoptic cameras and rendering methods.
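The abstract's configuration, with the microlenses focused on the main lens's image plane and placed at a distance b from the photosensor other than f, can be illustrated with the thin-lens equation. This is a sketch, not text from the patent: the function name and the assumption that the microlenses behave as ideal thin lenses (1/a + 1/b = 1/f, with a the distance from the main-lens image plane to the microlens array) are illustrative.

```python
# Hedged sketch: thin-lens relationship between microlens focal length f,
# the distance a from the main-lens image plane to the microlens array,
# and the resulting microlens-to-photosensor distance b.

def microlens_sensor_distance(f: float, a: float) -> float:
    """Distance b at which microlenses of focal length f form a sharp
    image of a plane located a distance a in front of the array
    (ideal thin-lens assumption: 1/a + 1/b = 1/f)."""
    if a <= f:
        raise ValueError("a real image requires a > f")
    return 1.0 / (1.0 / f - 1.0 / a)

# With the image plane a few focal lengths in front of the array,
# b comes out slightly greater than f, matching the abstract's
# "distances greater than f" configuration (units illustrative).
b = microlens_sensor_distance(f=0.5, a=2.0)
print(round(b, 4))  # → 0.6667, i.e. b > f = 0.5
```

As a decreases toward f, b grows without bound; as a grows large, b approaches f, which is the conventional plenoptic configuration the abstract contrasts against.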
Representative Claims
1. A camera, comprising: a photosensor configured to capture light projected onto the photosensor; an objective lens, wherein the objective lens is configured to refract light from a scene located in front of the camera to form an image of the scene at an image plane of the objective lens; and a microlens array positioned between the objective lens and the photosensor, wherein the microlens array comprises a plurality of microlenses, wherein the plurality of microlenses are focused on the image plane and not on the objective lens; wherein each microlens of the microlens array is configured to project a separate portion of the image of the scene formed at the image plane by the objective lens onto a separate location on the photosensor; wherein the microlenses in the microlens array are at distance b from the photosensor, wherein the camera is configured such that the distance b is adjustable; and a processing module configured to render one or more output images of the scene from a light-field image captured by the photosensor, wherein the light-field image includes each of the separate portions of the image of the scene projected onto the photosensor in a separate region of the light-field image.

2. The camera as recited in claim 1, wherein f is the focal length of the microlenses, and wherein b is adjustable to distances greater than f.

3. The camera as recited in claim 2, wherein b is adjustable to distances up to 1.5f.

4. The camera as recited in claim 1, wherein f is the focal length of the microlenses, and wherein b is adjustable to distances within the range of 0.5f to 1.5f.

5. The camera as recited in claim 1, wherein the camera is configured such that one or more spacers have been inserted between the microlens array and the photosensor to determine the distance b.

6. The camera as recited in claim 1, wherein f is the focal length of the microlenses, and wherein b is adjustable to distances less than f.

7. The camera as recited in claim 6, wherein b is adjustable to distances down to 0.5f.

8. The camera as recited in claim 1, wherein the camera comprises an adjuster configured to adjust the distance b.

9. The camera as recited in claim 8, wherein the adjuster is configured to adjust the distance b in response to user input to the camera.

10. The camera as recited in claim 8, wherein the adjuster is configured to automatically adjust the distance b in response to detection of a change in an optical characteristic of the camera.

11. The camera as recited in claim 1, wherein, to render an output image of the scene from the captured light-field image, the processing module is configured to: crop each of the plurality of separate portions to a subregion of the respective portion to generate a plurality of subregions from the plurality of separate portions, where each subregion includes a plurality of pixels from the respective separate portion; and assemble the plurality of subregions to produce a single output image of the scene.

12. The camera as recited in claim 1, wherein the camera further comprises a memory, and wherein the camera is configured to store the captured light-field image and the one or more output images to the memory.

13. The camera as recited in claim 12, wherein the memory comprises a removable storage device.

14. The camera as recited in claim 11, wherein, to assemble the plurality of subregions, the processing module is configured to move the subregions together so that features of the image of the scene from any given subregion substantially match with features of adjacent subregions.

15. The camera as recited in claim 11, wherein, to assemble the plurality of subregions, the processing module is configured to enlarge each of the subregions so that features of the image of the scene from any given subregion substantially match with features of adjacent subregions.

16. The camera as recited in claim 1, wherein the processing module is further configured to, prior to said render: examine each of two or more of the plurality of separate portions to determine a direction of movement of edges within the two or more separate portions, wherein said examining is performed in a direction, and wherein an edge is a feature of the image of the scene that appears in one or more of the portions; detect that the direction of movement of edges in the two or more separate portions is the same as the direction in which said examining is performed; and invert at least the two or more separate portions relative to their respective centers in response to said detecting.

17. The camera as recited in claim 1, wherein the processing module is further configured to: render a first output image of the scene from the light-field image at a focus; and render a second output image of the scene from the light-field image at a different focus.

18. A method for capturing and processing light-field images, comprising: performing, by a camera: receiving light from a scene at an objective lens of the camera; refracting light from the objective lens to form an image of the scene at an image plane of the objective lens; receiving light from the image plane at a microlens array located between the objective lens and a photosensor of the camera, wherein the microlens array comprises a plurality of microlenses, wherein the plurality of microlenses are focused on the image plane and not on the objective lens, and wherein the microlenses in the microlens array are at a distance b from the photosensor; adjusting the distance b between the microlenses and the photosensor; receiving light from the microlens array at the photosensor, wherein the photosensor receives a separate portion of the image of the scene formed at the image plane by the objective lens from each microlens of the microlens array at a separate location on the photosensor; capturing a light-field image of the scene at the photosensor, wherein the light-field image includes each of the separate portions of the image of the scene in a separate region of the light-field image; and rendering one or more output images of the scene from a light-field image captured by the photosensor.

19. The method as recited in claim 18, wherein f is the focal length of the microlenses, and wherein b is adjustable to distances within the range of 0.5f to 1.5f.

20. The method as recited in claim 18, wherein said rendering an output image of the scene from the captured light-field image comprises: cropping each of the plurality of separate portions to a subregion of the respective portion to generate a plurality of subregions from the plurality of separate portions, where each subregion includes a plurality of pixels from the respective separate portion; and assembling the plurality of subregions to produce a single output image of the scene.
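The rendering step recited in claims 11 and 20 (crop each separate portion of the light-field image to a subregion, then assemble the subregions into a single output image) can be sketched as a simple crop-and-tile pass. This is an illustrative sketch only: the function name, the assumption of a uniform square grid of microimages, and the centered-crop choice are not from the patent.

```python
import numpy as np

# Hedged sketch of the claim 11 / claim 20 rendering step: crop the
# central `patch` x `patch` pixels of each `micro` x `micro` microimage
# in the light-field image and tile the crops into one output image.

def render_full_resolution(lf: np.ndarray, micro: int, patch: int) -> np.ndarray:
    """Assemble an output image from a light-field image `lf` assumed to
    be a uniform grid of square microimages of side `micro` pixels."""
    h, w = lf.shape[:2]
    ny, nx = h // micro, w // micro          # microimages per column / row
    off = (micro - patch) // 2               # offset of the centered crop
    out = np.empty((ny * patch, nx * patch) + lf.shape[2:], lf.dtype)
    for j in range(ny):
        for i in range(nx):
            # Crop the subregion of this separate portion (claim 11)...
            sub = lf[j*micro+off : j*micro+off+patch,
                     i*micro+off : i*micro+off+patch]
            # ...and place it in the assembled output image.
            out[j*patch:(j+1)*patch, i*patch:(i+1)*patch] = sub
    return out

# Toy example: a 6x6 "light-field image" seen as a 2x2 grid of 3x3
# microimages, cropped to 1x1 central patches.
lf = np.arange(36, dtype=np.uint8).reshape(6, 6)
out = render_full_resolution(lf, micro=3, patch=1)
print(out.shape)  # → (2, 2)
```

Claims 14 and 15 refine the assembly step (shifting or enlarging subregions until features match across neighbors); the fixed centered crop above stands in for that matching logic.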