Systems and methods for encoding image files containing depth maps stored as metadata
IPC Classification
Country/Type: United States (US) Patent (Granted)
International Patent Classification (IPC, 7th edition):
G06K-009/36
G06T-009/00
G06T-009/20
H04N-019/597
H04N-013/00
H04N-013/02
H04N-019/625
H04N-019/136
G06T-003/40
H04N-019/85
G06T-015/08
G06T-007/50
Application Number: US-0667503 (filed 2015-03-24)
Registration Number: US-9779319 (granted 2017-10-03)
Inventors: Venkataraman, Kartik; Nisenzon, Semyon; Lelescu, Dan
Applicant: FotoNation Cayman Limited
Agent: KPPB LLP
Citation Information
Times cited: 0
Patents cited: 176
Abstract
Systems and methods for storing images synthesized from light field image data and metadata describing the images in electronic files in accordance with embodiments of the invention are disclosed. One embodiment includes a processor and memory containing an encoding application and light field image data, where the light field image data comprises a plurality of low resolution images of a scene captured from different viewpoints. In addition, the encoding application configures the processor to synthesize a higher resolution image of the scene from a reference viewpoint using the low resolution images, where synthesizing the higher resolution image involves creating a depth map that specifies depths from the reference viewpoint for pixels in the higher resolution image; encode the higher resolution image; and create a light field image file including the encoded image, the low resolution images, and metadata including the depth map.
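The abstract's depth map assigns each pixel of the synthesized image a depth from the reference viewpoint. In multi-camera systems such depths are typically recovered from the parallax between viewpoints; the following is a minimal sketch of the standard pinhole-stereo relation only, not of the patent's specific super-resolution method, and the function name is illustrative:

```python
def depth_from_disparity(baseline_m: float, focal_px: float, disparity_px: float) -> float:
    """Standard stereo relation: depth = baseline * focal_length / disparity.

    baseline_m   -- distance between the two viewpoints, in metres
    focal_px     -- focal length expressed in pixels
    disparity_px -- pixel offset of the same scene point between the two views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (zero would mean infinite depth)")
    return baseline_m * focal_px / disparity_px
```

For example, a 1 cm baseline, a 1000 px focal length, and a 10 px disparity correspond to a depth of 1 m.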
Representative Claims
1. An image processing system, comprising: a processor; and memory containing an encoding application; wherein the encoding application configures the processor to: obtain image data, where the image data comprises a plurality of images of a scene captured from different viewpoints; create a depth map that specifies depths for pixels in a reference image using at least a portion of the image data; and store an image file including the reference image, and the depth map stored as metadata within the image file, in the memory.

2. The system of claim 1, wherein the encoding application configures the processor to encode the depth map.

3. The system of claim 1, wherein the encoding application configures the processor to: identify pixels in the plurality of images of the scene that are occluded in the reference image; and the metadata includes descriptions of the occluded pixels.

4. The system of claim 3, wherein the descriptions of the occluded pixels include colors, locations, and depths of the occluded pixels.

5. The system of claim 1, wherein the encoding application configures the processor to: create a confidence map for the depth map, where the confidence map indicates the reliability of the depth value for a pixel in the depth map; and the metadata further includes the confidence map.

6. The system of claim 5, wherein the encoding application configures the processor to encode the confidence map.

7. The system of claim 1, wherein: the encoding application configures the processor to generate an edge map that indicates pixels in the reference image that lie on a discontinuity; and the metadata further includes the edge map.

8. The system of claim 7, wherein the edge map identifies whether a pixel lies on an intensity discontinuity.

9. The system of claim 7, wherein the edge map identifies whether a pixel lies on an intensity and depth discontinuity.

10. The system of claim 7, wherein the encoding application configures the processor to encode the edge map.

11. The system of claim 1, wherein: the encoding application configures the processor to generate a missing pixel map that indicates pixels in the reference image that do not correspond to a pixel from the plurality of low resolution images of the scene; and the metadata further includes the missing pixel map.

12. The system of claim 11, wherein the encoding application configures the processor to encode the missing pixel map.

13. The system of claim 1, wherein at least one of the plurality of low resolution images is captured from a viewpoint that is separate and distinct from a viewpoint of the reference image.

14. The system of claim 1, wherein the image file conforms to the JPEG File Interchange Format (JFIF) standard.

15. The system of claim 14, wherein the reference image is encoded in accordance with the JPEG standard.

16. The system of claim 15, wherein the metadata is located within an application marker segment within the image file.

17. The system of claim 16, wherein the application marker segment is identified using an APP9 marker.

18. The system of claim 16, wherein the encoding application configures the processor to encode the depth map in accordance with the JPEG standard using lossless compression, and the encoded depth map is stored within the application marker segment containing the metadata.

19. A method for encoding image data as an image file, comprising: synthesizing a higher resolution image of a scene from a reference viewpoint and a depth map that describes depths of pixels in the synthesized image using an encoding device and image data, where the image data comprises a plurality of low resolution images of a scene captured from different viewpoints and synthesizing the higher resolution image includes creating a depth map that specifies depths from the reference viewpoint for pixels in the higher resolution image; encoding the higher resolution image using the encoding device; and creating an image file including the encoded image, and metadata describing the encoded image, using the encoding device, where the metadata includes the depth map.

20. A non-transitory machine readable medium containing processor instructions, where execution of the instructions by a processor causes the processor to perform a process comprising: synthesizing a higher resolution image of a scene from a reference viewpoint using light field image data, where the image data comprises a plurality of low resolution images of a scene captured from different viewpoints and synthesizing the higher resolution image includes creating a depth map that specifies depths from the reference viewpoint for pixels in the higher resolution image; encoding the higher resolution image; and creating an image file including the encoded image, and metadata describing the encoded image, where the metadata includes the depth map.
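Claims 14 through 18 place the depth map metadata in a JFIF application marker segment identified by an APP9 marker. In the JPEG/JFIF syntax, a marker segment is the two-byte marker followed by a big-endian 16-bit length that counts itself plus the payload, so a single segment holds at most 65,533 payload bytes. The sketch below illustrates that container mechanics only; the helper name is hypothetical, and a real encoder would also chunk larger payloads and apply the lossless JPEG coding of claim 18:

```python
import struct

APP9 = b"\xff\xe9"  # JPEG application marker APP9 (0xFFE0 + 9), per claim 17


def embed_app9_metadata(jpeg_bytes: bytes, payload: bytes) -> bytes:
    """Insert an APP9 marker segment carrying `payload` (e.g. an encoded
    depth map) immediately after the SOI marker of a JPEG/JFIF stream.

    The segment length field counts its own two bytes, so the payload of
    one segment is limited to 65,533 bytes.
    """
    if jpeg_bytes[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG/JFIF stream (missing SOI marker)")
    if len(payload) > 65533:
        raise ValueError("payload exceeds one marker segment; chunking needed")
    segment = APP9 + struct.pack(">H", len(payload) + 2) + payload
    return jpeg_bytes[:2] + segment + jpeg_bytes[2:]
```

Because APPn segments are ignored by decoders that do not understand them, a file written this way remains a valid JPEG for legacy viewers while depth-aware software can locate and parse the APP9 payload.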
Patents Cited by This Patent (176)
Wilburn, Bennett; Joshi, Neel; Levoy, Marc C.; Horowitz, Mark, Apparatus and method for capturing a scene using staggered triggering of dense camera arrays.
Iwase, Toshihiro; Kanekura, Hiroshi, Apparatus for and method of converting a sampling frequency according to a data driven type processing.
Duparre, Jacques; Lelescu, Dan; Venkataraman, Kartik, Array cameras incorporating monolithic array camera modules with high MTF lens stacks for capture of images used in super-resolution processing.
Duparre, Jacques; Lelescu, Dan; Venkataraman, Kartik, Array cameras incorporating optics with modulation transfer functions greater than sensor Nyquist frequency for capture of images used in super-resolution processing.
Boisvert, David Michael; McMahon, Andrew Kenneth John, CCD output processing stage that amplifies signals from colored pixels based on the conversion efficiency of the colored pixels.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Capturing and processing of images using monolithic camera array with heterogeneous imagers.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images using monolithic camera array with heterogeneous imagers.
Yamashita, Syugo; Murata, Haruhiko; Iinuma, Toshiya; Nakashima, Mitsuo; Mori, Takayuki, Device and method for converting two-dimensional video to three-dimensional video.
Ward, Gregory John; Seetzen, Helge; Heidrich, Wolfgang, Electronic camera having multiple sensors for capturing high dynamic range images and related methods.
Hornback, Bert; Harwood, Doug; Boyd, W. Eric; Carlson, Randy, Imaging device with multiple fields of view incorporating memory-based temperature compensation of an uncooled focal plane array.
Abell, Gurdon R.; Cook, Francis J.; Howes, Peter D., Method and apparatus for arraying image sensor modules.
Kim, Yong-tae; Park, Ha-joong; Lee, Gun-ill; Min, Houng-sog; Hong, Sung-bin; Choi, Kwang-cheol, Method and apparatus for providing a multi-view still image service, and method and apparatus for receiving a multi-view still image service.
Sawhney, Harpreet Singh; Tao, Hai; Kumar, Rakesh; Hanna, Keith, Method and apparatus for synthesizing new video and/or still imagery from a collection of real video and/or still imagery.
Han, Hee-chul; Choi, Yang-lim; Cho, Seung-ki, Method of generating image data by an image device including a plurality of lenses and apparatus for generating image data.
Alexander, David H.; Hershman, George H.; Jack, Michael D.; Koda, N. John; Lloyd, Randahl B., Monolithic imager for near-IR.
Hornbaker, Cecil V., III; Driggers, Thomas C.; Bindon, Edward W., Scanning apparatus using multiple CCD arrays and related method.
Lelescu, Dan; Molina, Gabriel; Venkataraman, Kartik, Systems and methods for dynamic refocusing of high resolution images generated using images captured by a plurality of imagers.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for parallax detection and correction in images captured using array cameras that contain occlusions using subsets of images to perform depth estimation.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for parallax measurement using camera arrays incorporating 3 x 3 camera configurations.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for performing depth estimation using image data from multiple spectral channels.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for performing post capture refocus using images captured by camera arrays.
Venkataraman, Kartik; Nisenzon, Semyon; Chatterjee, Priyam; Molina, Gabriel, Systems and methods for synthesizing images from image data captured by an array camera using restricted depth of field depth maps in which depth estimation precision varies.
Ludwig, Lester F., Vignetted optoelectronic array for use in synthetic image formation via signal processing, lensless cameras, and integrated camera-displays.
Rieger, Albert; Barclay, David; Chapman, Steven; Kellner, Heinz-Andreas; Reibl, Michael; Rydelek, James G.; Schweizer, Andreas, Watertight body for accommodating a photographic camera.