Systems and methods for decoding light field image files using a depth map
IPC Classification Information
Country/Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.): G06K-009/36; H04N-013/00; G06T-009/00; G06T-009/20; H04N-019/597; H04N-013/02; H04N-019/625; G06K-009/46
Application Number: US-0504687 (2014-10-02)
Registration Number: US-9042667 (2015-05-26)
Inventors: Venkataraman, Kartik; Nisenzon, Semyon; Lelescu, Dan
Applicant: PELICAN IMAGING CORPORATION
Attorney/Agent: KPPB LLP
Citation Information: cited by 71 patents; cites 123 patents
Abstract
Systems and methods in accordance with embodiments of the invention are configured to render images using light field image files containing an image synthesized from light field image data and metadata describing the image that includes a depth map. One embodiment of the invention includes a processor and memory containing a rendering application and a light field image file including an encoded image and metadata describing the encoded image, where the metadata comprises a depth map that specifies depths from the reference viewpoint for pixels in the encoded image. In addition, the rendering application configures the processor to: locate the encoded image within the light field image file; decode the encoded image; locate the metadata within the light field image file; and post process the decoded image by modifying the pixels based on the depths indicated within the depth map to create a rendered image.
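The decoding steps described above (locate the encoded image via a start of image marker, then locate the metadata in application marker segments) follow the ordinary JPEG/Exif container layout. As a rough, hypothetical sketch of that container walk — not the patent's implementation — a minimal marker-segment parser could look like:

```python
import struct

def parse_app_segments(data: bytes):
    """Walk JPEG marker segments and return (marker, payload) pairs for
    APP0-APP15 segments (0xFFE0-0xFFEF), stopping at start-of-scan."""
    if data[:2] != b"\xff\xd8":  # SOI (start of image) marker
        raise ValueError("not a JPEG/Exif stream: missing SOI marker")
    segments = []
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            raise ValueError(f"expected marker at offset {i}")
        marker = data[i + 1]
        if marker == 0xDA:  # SOS: entropy-coded image data follows
            break
        # segment length is big-endian and includes its own two bytes
        (length,) = struct.unpack(">H", data[i + 2 : i + 4])
        payload = data[i + 4 : i + 2 + length]
        if 0xE0 <= marker <= 0xEF:  # application marker segments
            segments.append((marker, payload))
        i += 2 + length
    return segments
```

Real Exif readers additionally check for the `Exif\0\0` identifier at the start of the APP1 payload before interpreting the embedded data, which is how an APP1 segment carrying Exif metadata is distinguished from other uses of that marker.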
Representative Claims
1. A system for rendering an image using a light field image file, the system comprising:
a processor; and
memory containing a rendering application and a light field image file;
wherein the light field image file is structured using the Exchangeable image file (Exif) format and comprises:
an encoded image, where the encoded image is generated based on image data captured from a reference viewpoint using a first imager; and
metadata describing the encoded image stored within an application marker segment within the light field image file;
wherein the metadata comprises a depth map that specifies depths from a reference viewpoint for pixels in the encoded image, where the depth map is generated using at least the image data captured from the reference viewpoint by the first imager and image data captured from a viewpoint different from the reference viewpoint using a second imager; and
wherein the first imager and the second imager are contained in a camera array comprising a collection of imagers designed to function as a unitary component, where the collection of imagers comprises at least two heterogeneous imagers; and
wherein the rendering application directs the processor to render an image by applying post processing to the encoded image using metadata contained within the light field image file by:
locating the encoded image within the light field image file by locating a start of image marker within the light field image file;
decoding the encoded image;
locating the metadata within the light field image file; and
processing the decoded image by modifying the pixels of the decoded image based on the depths of the pixels indicated within the depth map to create a rendered image by applying a depth based effect to the pixels of the decoded image.

2.
The system of claim 1, wherein the depth based effect comprises at least one effect selected from the group consisting of: modifying the focal plane of the decoded image; modifying the depth of field of the decoded image; modifying the blur in out-of-focus regions of the decoded image; locally varying the depth of field of the decoded image; creating multiple focus areas at different depths within the decoded image; and applying a depth related blur.

3. The system of claim 1, wherein the encoded image is an image of a scene synthesized from a reference viewpoint using a plurality of images that capture the scene from different viewpoints.

4. The system of claim 1, wherein the depth map is generated based on a plurality of images that capture the scene from different viewpoints.

5. The system of claim 1, wherein: the metadata in the light field image file further comprises pixels from the plurality of images that are occluded in the reference viewpoint; and the rendering application configuring the processor to process the decoded image by modifying the pixels based on the depths indicated within the depth map to create the rendered image comprises rendering an image from a different viewpoint using the depth map and the pixels from the plurality of images that are occluded in the reference viewpoint.

6.
The system of claim 5, wherein: the metadata in the light field image file includes descriptions of the pixels from the plurality of images that are occluded in the reference viewpoint, including the color, location, and depth of the occluded pixels; and rendering an image from a different viewpoint using the depth map and the pixels that are occluded in the reference viewpoint from the plurality of images further comprises: shifting pixels from the decoded image and the occluded pixels in the metadata to the different viewpoint based upon the depths of the pixels; determining pixel occlusions; and generating an image from the different viewpoint using the shifted pixels that are not occluded and by interpolating to fill in missing pixels using adjacent pixels that are not occluded.

7. The system of claim 5, wherein the image rendered from the different viewpoint is part of a stereo pair of images.

8. The system of claim 1, wherein the metadata in the light field image file further comprises a confidence map for the depth map, where the confidence map indicates the reliability of the depth values provided for pixels by the depth map.

9. The system of claim 1, wherein: the metadata in the light field image file further comprises an edge map that indicates pixels in the decoded image that lie on a discontinuity; and rendering an image from a different viewpoint using the depth map and the pixels from the plurality of images that are occluded in the reference viewpoint further comprises applying at least one filter based upon the edge map.

10. The system of claim 9, wherein the edge map identifies whether a pixel lies on an intensity discontinuity.

11. The system of claim 9, wherein the edge map identifies whether a pixel lies on an intensity and depth discontinuity.

12. The system of claim 9, wherein the edge map is losslessly encoded.

13.
The system of claim 12, wherein the rendering application directs the processor to: locate at least one Application marker segment containing the metadata comprising the edge map; and decode the edge map using the JPEG decoder.

14. The system of claim 9, wherein the edge map is encoded using lossy compression.

15. The system of claim 1, wherein: the metadata in the light field image file further comprises a missing pixel map that indicates pixels in the decoded image that do not correspond to a pixel from the plurality of images of the scene and that are generated by interpolating pixel values from adjacent pixels in the synthesized image; and rendering an image from a different viewpoint using the depth map and the pixels from the plurality of images that are occluded in the reference viewpoint further comprises ignoring pixels based upon the missing pixel map.

16. The system of claim 15, wherein the missing pixel map is losslessly encoded.

17. The system of claim 16, wherein the rendering application directs the processor to: locate at least one Application marker segment containing the metadata comprising the missing pixel map; and decode the missing pixel map using the JPEG decoder.

18. The system of claim 15, wherein the missing pixel map is encoded using lossy compression.

19. The system of claim 1, wherein the depth map is losslessly encoded.

20. The system of claim 19, wherein the rendering application directs the processor to decode the depth map using a JPEG decoder.

21. The system of claim 1, wherein the depth map is encoded using lossy compression.

22. The system of claim 1, wherein: the memory further comprises a JPEG decoder application; the light field image file is encoded according to the JPEG standard; and the rendering application further directs the processor to decode the encoded image using the JPEG decoder.

23. The system of claim 1, wherein the application marker segment is identified using an APP1 marker that is used to identify the Exif data.

24.
The system of claim 1, wherein each imager in the at least two heterogeneous imagers comprises at least two sensor elements and has different imaging characteristics.

25. The system of claim 1, wherein the collection of imagers is fabricated on a single chip.

26. The system of claim 1, wherein the collection of imagers comprises a one dimensional array of imagers.

27. A method for decoding a light field image file, where the light field image file is structured using the Exchangeable image file (Exif) format and comprises an encoded image and metadata describing the encoded image stored within an application marker segment within the light field image file, the method comprising:
locating the encoded image within the light field image file by locating a start of image marker within the light field image file using an image rendering system, where the encoded image is generated based on image data captured using a first imager;
decoding the encoded image using the image rendering system;
locating the metadata within the light field image file using the image rendering system, where the metadata comprises a depth map that specifies depths from the reference viewpoint for pixels in the encoded image, where the depth map is generated based on data captured using a second imager, where the first imager and the second imager are contained in a camera array comprising a collection of imagers designed to function as a unitary component, where the collection of imagers comprises at least two heterogeneous imagers; and
processing the decoded image by modifying the pixels of the decoded image based on the depths of the pixels indicated within the depth map to create a rendered image by applying a depth based effect to the pixels of the decoded image using the image rendering system.

28.
A non-transitory machine readable medium containing processor instructions, where execution of the instructions by a processor causes the processor to perform a process comprising:
locating an encoded image within a light field image file by locating a start of image marker within the light field image file, where the light field image file is structured using the Exchangeable image file (Exif) format and comprises the encoded image and metadata describing the encoded image stored within an application marker segment within the light field image file, where the encoded image is generated based on image data captured using a first imager;
decoding the encoded image;
locating the metadata within the light field image file, where the metadata comprises a depth map that specifies depths from the reference viewpoint for pixels in the encoded image, where the depth map is generated based on data captured using a second imager, where the first imager and the second imager are contained in a camera array comprising a collection of imagers designed to function as a unitary component, where the collection of imagers comprises at least two heterogeneous imagers; and
processing the decoded image by modifying the pixels of the decoded image based on the depths of the pixels indicated within the depth map to create a rendered image by applying a depth based effect to the pixels of the decoded image.
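Claim 2 lists depth based effects such as modifying the depth of field and applying a depth related blur. Purely as an illustrative sketch under assumed inputs (a grayscale image array and a per-pixel depth map; the function name and parameters are hypothetical, not from the patent), a naive depth-gated blur might look like:

```python
import numpy as np

def apply_depth_blur(image, depth, focal_depth, tolerance=0.1, radius=2):
    """Toy depth-based effect (not the patented method): pixels whose depth
    is within `tolerance` of the focal plane are kept sharp; all other
    pixels are replaced by a local box-filtered average of the input."""
    h, w = depth.shape
    # replicate edge pixels so the averaging window is defined at borders
    padded = np.pad(image, radius, mode="edge")
    out = image.astype(float).copy()
    for y in range(h):
        for x in range(w):
            if abs(depth[y, x] - focal_depth) > tolerance:
                window = padded[y : y + 2 * radius + 1, x : x + 2 * radius + 1]
                out[y, x] = window.mean()
    return out
```

A production renderer would vary the blur kernel size with the depth offset (to approximate a thin-lens circle of confusion) and vectorize the per-pixel loop; the explicit loop here is only for clarity.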
Patents cited by this patent (123)
Wilburn, Bennett; Joshi, Neel; Levoy, Marc C.; Horowitz, Mark, Apparatus and method for capturing a scene using staggered triggering of dense camera arrays.
Iwase, Toshihiro; Kanekura, Hiroshi, Apparatus for and method of converting a sampling frequency according to a data driven type processing.
Boisvert, David Michael; McMahon, Andrew Kenneth John, CCD output processing stage that amplifies signals from colored pixels based on the conversion efficiency of the colored pixels.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Capturing and processing of images using monolithic camera array with heterogeneous imagers.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images using monolithic camera array with heterogeneous imagers.
Yamashita, Syugo; Murata, Haruhiko; Iinuma, Toshiya; Nakashima, Mitsuo; Mori, Takayuki, Device and method for converting two-dimensional video to three-dimensional video.
Ward, Gregory John; Seetzen, Helge; Heidrich, Wolfgang, Electronic camera having multiple sensors for capturing high dynamic range images and related methods.
Abell, Gurdon R.; Cook, Francis J.; Howes, Peter D., Method and apparatus for arraying image sensor modules.
Sawhney, Harpreet Singh; Tao, Hai; Kumar, Rakesh; Hanna, Keith, Method and apparatus for synthesizing new video and/or still imagery from a collection of real video and/or still imagery.
Alexander, David H.; Hershman, George H.; Jack, Michael D.; Koda, N. John; Lloyd, Randahl B., Monolithic imager for near-IR.
Hornbaker, Cecil V., III; Driggers, Thomas C.; Bindon, Edward W., Scanning apparatus using multiple CCD arrays and related method.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for parallax detection and correction in images captured using array cameras that contain occlusions using subsets of images to perform depth estimation.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for parallax measurement using camera arrays incorporating 3 x 3 camera configurations.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for performing depth estimation using image data from multiple spectral channels.
Ludwig, Lester F., Vignetted optoelectronic array for use in synthetic image formation via signal processing, lensless cameras, and integrated camera-displays.
Rieger, Albert; Barclay, David; Chapman, Steven; Kellner, Heinz-Andreas; Reibl, Michael; Rydelek, James G.; Schweizer, Andreas, Watertight body for accommodating a photographic camera.
Duparre, Jacques; Lelescu, Dan; Venkataraman, Kartik, Array cameras incorporating monolithic array camera modules with high MTF lens stacks for capture of images used in super-resolution processing.
Duparre, Jacques; Lelescu, Dan; Venkataraman, Kartik, Array cameras incorporating optics with modulation transfer functions greater than sensor Nyquist frequency for capture of images used in super-resolution processing.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images including occlusions focused on an image sensor by a lens stack array.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images including occlusions focused on an image sensor by a lens stack array.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images using camera array incorperating Bayer cameras having different fields of view.
Srikanth, Manohar; Ramamoorthi, Ravi; Venkataraman, Kartik; Chatterjee, Priyam, System and methods for depth regularization and semiautomatic interactive matting using RGB-D images.
Nayar, Shree; Venkataraman, Kartik; Pain, Bedabrata; Lelescu, Dan, Systems and methods for controlling aliasing in images captured by an array camera for use in super resolution processing using pixel apertures.
Lelescu, Dan; Venkataraman, Kartik, Systems and methods for controlling aliasing in images captured by an array camera for use in super-resolution processing.
Duparre, Jacques; McMahon, Andrew Kenneth John; Lelescu, Dan; Venkataraman, Kartik; Molina, Gabriel, Systems and methods for detecting defective camera arrays and optic arrays.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for estimating depth and visibility from a reference viewpoint for pixels in a set of images captured from different viewpoints.
Venkataraman, Kartik; Lelescu, Dan; Molina, Gabriel, Systems and methods for generating compressed light field representation data using captured light fields, array geometry, and parallax information.
Venkataraman, Kartik; Lelescu, Dan; Molina, Gabriel, Systems and methods for generating compressed light field representation data using captured light fields, array geometry, and parallax information.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for generating depth maps using a camera arrays incorporating monochrome and color cameras.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for generating depth maps using a camera arrays incorporating monochrome and color cameras.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for generating depth maps using images captured by camera arrays incorporating cameras having different fields of view.
Duparre, Jacques; McMahon, Andrew Kenneth John; Lelescu, Dan, Systems and methods for manufacturing camera modules using active alignment of lens stack arrays and sensors.
Duparre, Jacques; McMahon, Andrew Kenneth John; Lelescu, Dan, Systems and methods for manufacturing camera modules using active alignment of lens stack arrays and sensors.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for measuring depth using images captured by a camera array including cameras surrounding a central camera.
Venkataraman, Kartik; Huang, Yusong; Jain, Ankit K.; Chatterjee, Priyam, Systems and methods for performing high speed video capture and depth estimation using array cameras.
Lelescu, Dan; Duong, Thang, Systems and methods for synthesizing high resolution images using image deconvolution based on motion and depth information.
Lelescu, Dan; Molina, Gabriel; Venkataraman, Kartik, Systems and methods for synthesizing high resolution images using images captured by an array of independently controllable imagers.
Venkataraman, Kartik; Nisenzon, Semyon; Chatterjee, Priyam; Molina, Gabriel, Systems and methods for synthesizing images from image data captured by an array camera using restricted depth of field depth maps in which depth estimation precision varies.
Venkataraman, Kartik; Nisenzon, Semyon; Chatterjee, Priyam; Molina, Gabriel, Systems and methods for synthesizing images from image data captured by an array camera using restricted depth of field depth maps in which depth estimation precision varies.