Methods and apparatus for rendering focused plenoptic camera data using super-resolved demosaicing
IPC Classification Information
Country / Type
United States (US) Patent
Status: Granted
International Patent Classification (IPC, 7th edition)
G02B-013/16
H04N-005/225
G03B-013/00
H04N-005/232
Application number
US-0957312 (2010-11-30)
Registration number
US-8749694 (2014-06-10)
Inventors / Address
Georgiev, Todor G.
Chunev, Georgi N.
Applicant / Address
Adobe Systems Incorporated
Agent / Address
Wolfe-SBMC
Citation Information
Cited by: 19
Patents cited: 87
Abstract
A super-resolved demosaicing technique for rendering focused plenoptic camera data performs simultaneous super-resolution and demosaicing. The technique renders a high-resolution output image from a plurality of separate microimages in an input image at a specified depth of focus. For each point on an image plane of the output image, the technique determines a line of projection through the microimages in optical phase space according to the current point and angle of projection determined from the depth of focus. For each microimage, the technique applies a kernel centered at a position on the current microimage intersected by the line of projection to accumulate, from pixels at each microimage covered by the kernel at the respective position, values for each color channel weighted according to the kernel. A value for a pixel at the current point in the output image is computed from the accumulated values for the color channels.
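The abstract describes, for each output point, a line-of-projection lookup into every microimage followed by kernel-weighted accumulation computed separately per color channel, with the pixel value then derived from the accumulated channel sums. The following is a minimal Python sketch of that accumulation step, not the patented implementation: the linear mapping from the projection angle to per-microimage shifts, the function names, and the parameters are illustrative assumptions.

```python
import numpy as np

def gaussian_weight(dy, dx, sigma):
    # Kernel weight falls off with distance from the point where the
    # line of projection intersects the microimage (cf. claims 2 and 5).
    return np.exp(-(dy**2 + dx**2) / (2.0 * sigma**2))

def render_pixel(raw, bayer, centers, point, angle, sigma=0.8, radius=2):
    """Accumulate kernel-weighted per-channel values across microimages.

    raw:     (H, W) raw sensor image containing all microimages
    bayer:   (H, W) channel index per pixel (0=R, 1=G, 2=B)
    centers: list of (y, x) microimage centers on the sensor
    point:   (y, x) output-plane point, in microimage-relative coordinates
    angle:   projection angle in phase space, set by the depth of focus
             (here assumed to act as a linear per-microimage shift)
    """
    acc = np.zeros(3)   # weighted sum per color channel
    norm = np.zeros(3)  # kernel normalization metric per channel (claim 4)
    for k, (cy, cx) in enumerate(centers):
        # Line of projection: the output point maps into microimage k
        # with a depth-dependent, generally non-integral shift.
        py = cy + point[0] - angle * k
        px = cx + point[1] - angle * k
        iy, ix = int(round(py)), int(round(px))
        for y in range(iy - radius, iy + radius + 1):
            for x in range(ix - radius, ix + radius + 1):
                if 0 <= y < raw.shape[0] and 0 <= x < raw.shape[1]:
                    w = gaussian_weight(y - py, x - px, sigma)
                    c = bayer[y, x]     # each raw pixel carries one channel
                    acc[c] += w * raw[y, x]
                    norm[c] += w
    # Normalize each channel by its accumulated kernel weight.
    return acc / np.maximum(norm, 1e-12)
```

Because each Bayer pixel contributes only to its own channel and normalization happens per channel, demosaicing and the sub-pixel (super-resolving) blending across overlapping microimages occur in the same accumulation pass, which is the point of the technique.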
Representative Claims
1. A method, comprising: rendering an output image of a scene from a plurality of separate microimages of an image of the scene at a depth of focus that defines an angle of projection in an optical phase space, each microimage of the plurality of separate microimages corresponding to a microlens in a microlens array, distance between the microimages being a nonintegral pixel value, at least two microimages overlapping at least in part, the rendering comprising, for a point on an image plane of the output image: determining a line of projection through the microimages in optical phase space according to the point and the angle of projection determined from the depth of focus; for each microimage, applying a kernel centered at a position on the microimage intersected by the line of projection to accumulate, from pixels at each of the plurality of microimages covered by the kernel at the respective position, values for each color channel accumulated from the plurality of microimages and calculated separately, one color channel from another, according to the kernel; and computing a value for a pixel at the point in the output image from the values for the color channels that were accumulated and calculated separately.

2. The method as recited in claim 1, wherein the values for each color channel at each microimage are weighted according to distance of the pixels covered by the kernel from the position on the microimage intersected by the line of projection.

3. The method as recited in claim 1, wherein said computing a value for a pixel at the point in the output image from the accumulated values for the color channels comprises computing a value for each color channel of the pixel in the output image from the accumulated value for a corresponding color channel.

4. The method as recited in claim 3, wherein said computing a value for each color channel of the pixel in the output image from the accumulated value for a corresponding color channel comprises normalizing the accumulated values according to a normalization metric for the kernel.

5. The method as recited in claim 1, wherein the kernel is a Gaussian kernel.

6. The method as recited in claim 1, wherein the input image is captured according to a photosensor technology that captures a pattern of pixels in multiple separate color channels.

7. The method as recited in claim 6, wherein the photosensor is a Bayer array photosensor.

8. The method as recited in claim 1, wherein the pixels in the input image are RGB technology pixels.

9. The method as recited in claim 1, further comprising applying a deconvolution technique to the output image.

10. The method as recited in claim 1, further comprising: obtaining a different depth of focus, wherein the different depth of focus determines a different angle of projection in optical phase space; and repeating said rendering at the different depth of focus to generate a second output image at the different depth of focus.

11. One or more computer-readable storage memory comprising program instructions stored thereon, the program instructions are computer-executable to cause operations to be performed comprising: rendering an output image of a scene from a plurality of separate microimages of an image of the scene at a depth of focus that defines an angle of projection in an optical phase space, each microimage of the plurality of separate microimages corresponding to a microlens in a microlens array, distance between the microimages being a nonintegral pixel value, at least two microimages overlapping at least in part, the rendering comprising, for a point on an image plane of the output image comprising: determining a line of projection through the microimages in optical phase space according to the point and the angle of projection determined from the depth of focus; for each microimage, applying a kernel centered at a position on the microimage intersected by the line of projection to accumulate, from pixels at each of the plurality of microimages covered by the kernel at the respective position, values for each color channel accumulated from the plurality of microimages and calculated separately, one color channel from another, according to the kernel; and computing a value for a pixel at the point in the output image from the values for the color channels that were accumulated and calculated separately.

12. The one or more computer-readable storage memory as recited in claim 11, wherein the values for each color channel at each microimage are weighted according to distance of the pixels covered by the kernel from the position on the microimage intersected by the line of projection.

13. The one or more computer-readable storage memory as recited in claim 11, wherein, in said computing a value for a pixel at the point in the output image from the accumulated values for the color channels, the program instructions are computer-executable to implement computing a value for each color channel of the pixel in the output image from the accumulated value for a corresponding color channel.

14. The one or more computer-readable storage memory as recited in claim 11, wherein the input image is captured according to a photosensor technology that captures a pattern of pixels in multiple separate color channels.

15. The one or more computer-readable storage memory as recited in claim 11, wherein the program instructions are computer-executable to implement: obtaining a different depth of focus, wherein the different depth of focus determines a different angle of projection in optical phase space; and repeating said rendering at the different depth of focus to generate a second high resolution output image at the different depth of focus.

16. A system, comprising: at least one processor; and a memory comprising program instructions that are executable by the at least one processor to: obtain an input image comprising a plurality of separate microimages of an image of a scene, each microimage of the plurality of separate microimages corresponding to a microlens in a microlens array, the input image includes pixels in multiple separate color channels; obtain a depth of focus for an output image to be rendered from the input image, the depth of focus defining an angle of projection in optical phase space; and render an output image of the scene from the plurality of separate microimages at the depth of focus, distance between the microimages being a nonintegral pixel value, at least two microimages overlapping at least in part, the rendering comprising, for a point on an image plane of the output image: determine a line of projection through the microimages in optical phase space according to the point and the angle of projection determined from the depth of focus; for each microimage, apply a kernel centered at a position on the microimage intersected by the line of projection to accumulate, from pixels at each of the plurality of microimages covered by the kernel at the respective position, values for each color channel accumulated from the plurality of microimages and calculated separately, one color channel from another, according to the kernel; and compute a value for a pixel at the point in the output image from the values for the color channels that were accumulated and calculated separately.

17. The system as recited in claim 16, wherein the values for each color channel at each microimage are weighted according to distance of the pixels covered by the kernel from the position on the microimage intersected by the line of projection.

18. The system as recited in claim 16, wherein, to compute a value for a pixel at the current point in the output image from the accumulated values for the color channels, the program instructions are executable by the at least one processor to compute a value for each color channel of the pixel in the output image from the accumulated value for a corresponding color channel.

19. The system as recited in claim 16, wherein the program instructions are executable by the at least one processor to: obtain a different depth of focus, wherein the different depth of focus determines a different angle of projection in optical phase space; and repeat said rendering at the different depth of focus to generate a second output image at the different depth of focus.

20. The system as recited in claim 16, wherein the at least one processor includes at least one graphics processing unit (GPU).
Patents cited by this patent (87)
Georgiev Todor, 3D graphics based on images and morphing.
Loce Robert P. (Rochester NY) Cianciosi Michael S. (Rochester NY) Kingsley Jeffrey D. (Williamson NY), Image resolution conversion method that employs statistically generated multiple morphological filters.
Yamagata, Michihiro; Okayama, Hiroaki; Boku, Kazutake; Tanaka, Yasuhiro; Hayashi, Kenichi; Fushimi, Yoshimasa; Murata, Shigeki; Hayashi, Takayuki, Imaging device including a plurality of lens elements and a imaging sensor.
de Montebello Roger L. (New York NY) Globus Ronald P. (New York NY) Buck Howard S. (New York NY), Integral photography apparatus and method of forming same.
Mindler, Robert F.; Calkins, Guy T., Method and apparatus for thermal printing of longer length images by the use of multiple dye color patch triads or quads.
Vetro, Anthony; Yea, Sehoon; Matusik, Wojciech; Pfister, Hanspeter; Zwicker, Matthias, Method and system for acquiring, encoding, decoding and displaying 3D light fields.
Georgiev, Todor G.; Chunev, Georgi N., Methods and apparatus for rendering output images with simulated artistic effects from focused plenoptic camera data.
Corle Timothy R. (Santa Clara County CA) Kino Gordon S. (Santa Clara County CA) Mansfield Scott M. (San Mateo County CA), Optical recording system employing a solid immersion lens.
Patton, David L.; Spoonhower, John P.; Bohan, Anne E.; Paz-Pujalt, Gustavo R., Solid immersion lens array and methods for producing a solid immersion lens array.
Hiwada, Kazuhiro; Kimura, Katsuyuki; Mori, Tatsuya, Image processing device and image processing system where de-mosaic image is generated based on shift amount estimation of pixels of captured images.