Systems and methods for the manipulation of captured light field image data
IPC Classification Information
Country / Type
United States (US) Patent
Granted
International Patent Classification (IPC, 7th Edition)
G06T-019/20
G06F-003/041
G06F-003/01
G06F-003/0488
G06T-019/00
Application Number
US-0773284
(2013-02-21)
Registration Number
US-9412206
(2016-08-09)
Inventors / Address
McMahon, Andrew Kenneth John
Venkataraman, Kartik
Mullis, Robert
Applicant / Address
Pelican Imaging Corporation
Agent / Address
KPPB LLP
Citation Information
Cited by: 41
Patents cited: 192
Abstract
Systems and methods for the manipulation of captured light fields and captured light field image data in accordance with embodiments of the invention are disclosed. In one embodiment of the invention, a system for manipulating captured light field image data includes a processor, a display, a user input device, and a memory, wherein a depth map includes depth information for one or more pixels in the image data, and wherein an image manipulation application configures the processor to display a first synthesized image, receive user input data identifying a region within the first synthesized image, determine boundary data for the identified region using the depth map, receive user input data identifying at least one action, and perform the received action using the boundary data and the captured light field image data.
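The "captured light field image data" described in the abstract bundles image data, pixel position data, and a per-pixel depth map. The patent does not specify an implementation; purely as an illustration, such a record could be organized as follows in Python, with all field names invented for this sketch:

```python
# A minimal sketch of the captured light field image data described in the
# abstract, assuming NumPy arrays. Field names are illustrative, not taken
# from the patent.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class CapturedLightFieldImageData:
    image: np.ndarray            # H x W x 3 color image data
    depth_map: np.ndarray        # H x W depth values, one per pixel
    pixel_positions: np.ndarray  # positions of alternative-view pixels
    metadata: dict = field(default_factory=dict)  # per-pixel/object metadata
```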
Representative Claims
1. A system for manipulating captured light field image data, comprising: a processor; a display connected to the processor and capable of displaying images; a user input device connected to the processor and capable of generating user input data in response to user input; and a memory connected to the processor and storing captured light field image data and an image manipulation application; wherein the captured light field image data comprises image data, pixel position data, and a depth map; wherein the depth map comprises depth information for one or more pixels in the image data; and wherein the image manipulation application directs the processor to: display a first synthesized image based on the image data using the display; receive user input data identifying at least one pixel identifying a region within the first synthesized image using the user input device; determine boundary data for the identified region using the depth map, wherein: boundary data is determined by utilizing the depth information along with color and intensity data to determine at least one boundary at a particular depth within the first synthesized image; and the boundary data describes edges of at least one object within the identified region based on the depth of the pixels in the image data corresponding to the boundary data; receive user input data identifying at least one action to be performed using the user input device, where the action to be performed comprises an image processing operation utilizing the depth information; and perform the received action to the identified region using the boundary data and the captured light field image data.

2. The system of claim 1, wherein the image data in the captured light field image data is the first synthesized image.

3. The system of claim 1, wherein: the image data in the captured light field image data is a low resolution image; the pixel position data describes pixel positions for alternative view image pixels corresponding to specific pixels within the image data; and the image manipulation application directs the processor to synthesize the first image using the image data, the pixel position data, and the depth map.

4. The system of claim 1, wherein: the image manipulation application directs the processor to detect an object in the first synthesized image using the boundary data and the depth map by utilizing the depth information along with color and intensity data to determine the boundaries of at least one solid object within the first synthesized image; and a detected object comprises a set of adjacent pixels in the first synthesized image related based on corresponding depth information in the depth map and defined by the determined boundaries.

5. The system of claim 4, wherein the image manipulation application further directs the processor to: obtain object data based on the detected object; generate captured light field image metadata using the requested search data; and associate the captured light field image metadata with the pixels corresponding to the identified object in the image data.

6. The system of claim 5, wherein the object data is received from a third party information server system separate and remote from the image manipulation device.
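Claim 1 determines boundary data by combining depth information with color and intensity data to find a boundary at a particular depth. The patent does not prescribe a specific algorithm; the following is one possible sketch, growing a region outward from the user-selected pixel with an assumed depth tolerance and 4-connectivity:

```python
# Illustrative depth-guided region growing, loosely following claim 1's idea
# of using the depth map to find object boundaries. The tolerance value and
# 4-connectivity are assumptions, not details from the patent.
from collections import deque
import numpy as np

def boundary_from_seed(depth_map: np.ndarray, seed: tuple[int, int],
                       depth_tol: float = 0.05) -> set[tuple[int, int]]:
    h, w = depth_map.shape
    seed_depth = depth_map[seed]
    region, boundary = {seed}, set()
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if not (0 <= ny < h and 0 <= nx < w) or (ny, nx) in region:
                continue
            if abs(depth_map[ny, nx] - seed_depth) <= depth_tol:
                region.add((ny, nx))
                queue.append((ny, nx))
            else:
                boundary.add((y, x))  # (y, x) borders a depth discontinuity
    return boundary
```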
7. The system of claim 1, wherein: the received action is a refocus action; and the image manipulation application further directs the processor to perform the received action by synthesizing a second image using a synthetic aperture in the captured light field image data having a focal plane placed at the depth corresponding to the depth map of the pixels within the determined boundary data.

8. The system of claim 7, wherein: the input device is a gaze tracking device capable of generating input data identifying at least one pixel identifying a region within the first synthesized image based on the detection of a gaze input; and the focal plane of the first synthesized image is placed at a depth corresponding to the generated input data.

9. The system of claim 7, wherein: the input device is a touchscreen device capable of generating input data identifying at least one pixel identifying a region within the first synthesized image based on received touch input data; and the focal plane is placed at a depth corresponding to the depth of the region in the first synthesized image corresponding to the generated input data.

10. The system of claim 1, wherein: the received action is a bokeh modification action comprising blur modification data; and the image manipulation program further directs the processor to perform the received action by: identifying the focal plane of the first synthesized image using the boundary data; and synthesizing a second image using the identified focal plane, the blur modification data, and the captured light field image data.

11. The system of claim 1, wherein: the captured light field image data further comprises captured light field metadata associated with at least one pixel in the captured light field image data; the received action is a metadata retrieval action; and the image manipulation application further directs the processor to perform the received action by: determining at least one pixel in the image data corresponding to the boundary data in the first synthesized image; retrieving the captured light field metadata associated with the determined at least one pixel; and displaying the retrieved metadata using the display.

12. The system of claim 1, wherein the received action is selected from the group consisting of a cut action, a copy action, a paste action, and a recoloring action, where the received action is performed as a function of the depth map associated with the captured light field image data.
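Claims 7-10 synthesize a refocused (or bokeh-modified) second image with the focal plane placed at a depth taken from the depth map. True synthetic-aperture refocusing combines shifted alternative views of the light field; the sketch below is only a single-image approximation that blurs each pixel in proportion to its distance from an assumed focal plane, with the kernel sizing and box filter chosen for brevity:

```python
# Simplified stand-in for the refocus action of claims 7-10: pixels far from
# the chosen focal plane are blurred more strongly. Not the patent's method,
# just a depth-dependent blur approximation.
import numpy as np
from scipy.ndimage import uniform_filter

def approximate_refocus(image: np.ndarray, depth_map: np.ndarray,
                        focal_depth: float, blur_scale: float = 10.0) -> np.ndarray:
    out = image.astype(np.float32).copy()
    # Precompute a few blur levels and pick one per pixel by defocus amount.
    levels = [uniform_filter(out, size=(2 * k + 1, 2 * k + 1, 1)) for k in range(4)]
    defocus = np.abs(depth_map - focal_depth) * blur_scale
    k_idx = np.clip(defocus.astype(int), 0, 3)
    for k in range(4):
        mask = k_idx == k
        out[mask] = levels[k][mask]
    return out.astype(image.dtype)
```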
13. A method for manipulating captured light field image data, comprising: obtaining captured light field image data using an image manipulation device, where the captured light field image data comprises image data, pixel position data, and a depth map, wherein the depth map comprises depth information for one or more pixels in the image data; displaying a first synthesized image based on the image data using the image manipulation device; receiving user input data identifying at least one pixel identifying a region within the first synthesized image using the image manipulation device; determining boundary data for the identified region based on the depth map using the image manipulation device, where the boundary data describes edges of at least one object within the identified region and the depth map comprises depth information for one or more pixels in the image data based on the depth of the pixels in the image data corresponding to the boundary data; receiving user input data identifying at least one action to be performed using the image manipulation device, where the action to be performed comprises an image processing operation utilizing the depth map; and performing the received action to the identified region based on the boundary data and the captured light field image data using the image manipulation device.

14. The method of claim 13, wherein the image data in the captured light field image data is the first synthesized image.

15. The method of claim 13, further comprising synthesizing the first image based on the image data, the pixel position data, and the depth map using the image manipulation device; wherein: the image data in the captured light field image data is a low resolution image; and the pixel position data describes pixel positions for alternative view image pixels corresponding to specific pixels within the image data.

16. The method of claim 13, further comprising detecting an object in the first synthesized image based on the boundary data and the depth map using the image manipulation device, where an object is a set of adjacent pixels in a synthesized image related based on corresponding depth information in the depth map.

17. The method of claim 16, further comprising: obtaining object data based on the detected object using the image manipulation device; generating captured light field image metadata based on the requested search data using the image manipulation device; and associating the captured light field image metadata with the pixels corresponding to the identified object in the image data using the image manipulation device.

18. The method of claim 17, further comprising receiving object data from a third party information server system separate and remote from the image manipulation device using the image manipulation device.

19. The method of claim 13, further comprising: performing the received action by synthesizing a second image using a synthetic aperture in the captured light field image data having a focal plane placed at the depth corresponding to the depth map of the pixels within the determined boundary data using the image manipulation device.
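Claim 13 recites the same display / select / bound / act flow as a method. A schematic Python driver is sketched below, reusing boundary_from_seed from the earlier sketch; the callback functions and the action dispatch table are illustrative assumptions rather than anything named in the patent:

```python
# Schematic driver for the claim 13 method flow. display_fn and get_input_fn
# stand in for the device's display and user input; actions maps an action
# name to a handler. All of these names are assumptions for illustration.
def manipulate(cl_field, display_fn, get_input_fn, actions: dict):
    # Display a first synthesized image (claim 14 allows the stored image
    # data itself to serve as the first synthesized image).
    display_fn(cl_field.image)
    # Receive user input identifying a pixel within a region of interest.
    seed_pixel = get_input_fn("select a region")
    # Determine boundary data for the region using the depth map.
    boundary = boundary_from_seed(cl_field.depth_map, seed_pixel)
    # Receive an action (refocus, cut, copy, paste, recolor, ...) and
    # perform it using the boundary data and the light field image data.
    action_name = get_input_fn("choose an action")
    return actions[action_name](cl_field, boundary)
```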
20. The method of claim 19, further comprising: generating input data using the image manipulation device by identifying at least one pixel identifying a region within the first synthesized image based on the detection of a gaze input received from a gaze tracking device in the image manipulation device; and placing the focal plane of the second synthesized image at a depth corresponding to the generated input data using the image manipulation device.

21. The method of claim 19, further comprising: generating input data using the image manipulation device by identifying at least one pixel identifying a region within the first synthesized image based on the detection of a touch input received via a touchscreen device in the image manipulation device; and placing the focal plane of the second synthesized image at a depth corresponding to the generated input data using the image manipulation device.

22. The method of claim 13, further comprising: identifying the focal plane of the first synthesized image using the boundary data; and synthesizing a second image based on the identified focal plane, the blur modification data, and the captured light field image data using the image manipulation device, where the blur modification data affects the bokeh of the second synthesized image.

23. The method of claim 13, further comprising: determining at least one pixel in the captured light field image data corresponding to the boundary data in the first synthesized image using the image manipulation device; retrieving captured light field metadata associated with the determined at least one pixel in the image data using the image manipulation device; and displaying the retrieved metadata using the image manipulation device.

24. The method of claim 13, wherein: the received action is selected from the group consisting of a cut action, a copy action, a paste action, and a recoloring action; and performing the received action using the image manipulation device is based on the depth map associated with the captured light field image data.
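Claims 20-21 place the focal plane at the depth of the pixel identified by gaze or touch input. Building on the approximate_refocus sketch above, and assuming for simplicity that screen coordinates map directly to image coordinates:

```python
# Refocusing at a touched or gazed pixel (claims 20-21): the focal depth is
# read from the depth map at the identified pixel. Assumes the earlier
# sketches and an identity screen-to-image coordinate mapping.
def refocus_at_touch(cl_field, touch_yx: tuple[int, int]):
    focal_depth = float(cl_field.depth_map[touch_yx])
    return approximate_refocus(cl_field.image, cl_field.depth_map, focal_depth)
```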
25. A system for manipulating captured light field image data, comprising: a processor; a display connected to the processor and capable of displaying images; a user input device connected to the processor and capable of generating user input data in response to user input; and a memory connected to the processor and capable of storing captured light field image data and an image manipulation application; wherein the captured light field image data comprises image data, pixel position data, and a depth map; wherein the depth map comprises depth information for one or more pixels in the image data; and wherein the image manipulation application directs the processor to: display a first synthesized image based on the image data using the display; receive user input data identifying at least one pixel identifying a region within the first synthesized image using the user input device; determine boundary data for the identified region using the depth map, where the boundary data describes edges of at least one object within the identified region based on the depth of the pixels in the image data corresponding to the boundary data; receive user input data identifying at least one action to be performed using the user input device, where the action to be performed comprises a recoloring operation utilizing the depth map; and generate a second synthesized image based on the image data, the boundary data, the depth data, and the user input data, wherein the second synthesized image comprises at least one pixel having a color value differing from the corresponding pixel in the image data and the color of the at least one pixel in the second synthesized image is based on the depth of the at least one pixel.

26. A system for detecting objects in captured light field image data, comprising: a processor; and a memory connected to the processor and storing captured light field image data and an image manipulation application; wherein the captured light field image data comprises image data, pixel position data, and a depth map; wherein the depth map comprises depth information for one or more pixels in the image data; wherein the image data comprises a set of pixels, wherein a pixel comprises a set of color and a set of intensity data; and wherein the image manipulation application directs the processor to: determine boundary data within the captured light field image data using the depth map, where the boundary data describes edges of at least one object within the identified region based on the depth of the pixels in the captured light field image data corresponding to the boundary data; detect an object in the captured light field image data using the boundary data and the depth map by utilizing the depth information along with color and intensity data within the captured light field image data to determine the boundaries of at least one solid object within the first synthesized image, wherein a detected object comprises a set of adjacent pixels in the first synthesized image related based on corresponding depth information in the depth map and defined by the determined boundaries; generate object metadata comprising the locations of the detected objects within the captured light field image data; and store the object metadata in the captured light field image data.
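Claim 25 recolors pixels within the identified region with the new color value based on each pixel's depth. One possible reading, sketched with an assumed linear depth falloff so that nearer pixels are tinted more strongly:

```python
# One reading of claim 25's depth-dependent recoloring: blend a tint color
# into the selected region, weighted by each pixel's depth. The linear
# falloff and the max_depth normalization are assumptions.
import numpy as np

def recolor_by_depth(image: np.ndarray, depth_map: np.ndarray,
                     region: set, tint=(255, 0, 0), max_depth: float = 1.0) -> np.ndarray:
    out = image.astype(np.float32).copy()
    tint = np.asarray(tint, dtype=np.float32)
    for y, x in region:
        weight = 1.0 - min(depth_map[y, x] / max_depth, 1.0)  # nearer => stronger
        out[y, x] = (1.0 - weight) * out[y, x] + weight * tint
    return out.astype(image.dtype)
```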
Patents Cited by This Patent (192)
Wilburn, Bennett; Joshi, Neel; Levoy, Marc C.; Horowitz, Mark, Apparatus and method for capturing a scene using staggered triggering of dense camera arrays.
Iwase, Toshihiro; Kanekura, Hiroshi, Apparatus for and method of converting a sampling frequency according to a data driven type processing.
Boisvert, David Michael; McMahon, Andrew Kenneth John, CCD output processing stage that amplifies signals from colored pixels based on the conversion efficiency of the colored pixels.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images including occlusions captured by arrays of luma and chroma cameras.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images including occlusions captured by camera arrays.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images including occlusions focused on an image sensor by a lens stack array.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Capturing and processing of images using monolithic camera array with heterogeneous imagers.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images using monolithic camera array with heterogeneous imagers.
Yamashita, Syugo; Murata, Haruhiko; Iinuma, Toshiya; Nakashima, Mitsuo; Mori, Takayuki, Device and method for converting two-dimensional video to three-dimensional video.
Pertsel, Shimon; Meitav, Ohad; Pozniansky, Eli; Galil, Erez, Digital camera with selectively increased dynamic range by control of parameters during image acquisition.
Ward, Gregory John; Seetzen, Helge; Heidrich, Wolfgang, Electronic camera having multiple sensors for capturing high dynamic range images and related methods.
Hornback, Bert; Harwood, Doug; Boyd, W. Eric; Carlson, Randy, Imaging device with multiple fields of view incorporating memory-based temperature compensation of an uncooled focal plane array.
Abell, Gurdon R.; Cook, Francis J.; Howes, Peter D., Method and apparatus for arraying image sensor modules.
Sawhney, Harpreet Singh; Tao, Hai; Kumar, Rakesh; Hanna, Keith, Method and apparatus for synthesizing new video and/or still imagery from a collection of real video and/or still imagery.
Han, Hee-chul; Choi, Yang-lim; Cho, Seung-ki, Method of generating image data by an image device including a plurality of lenses and apparatus for generating image data.
Alexander, David H.; Hershman, George H.; Jack, Michael D.; Koda, N. John; Lloyd, Randahl B., Monolithic imager for near-IR.
Hornbaker, Cecil V., III; Driggers, Thomas C.; Bindon, Edward W., Scanning apparatus using multiple CCD arrays and related method.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, System and methods for measuring depth using an array camera employing a Bayer filter.
Lelescu, Dan; Molina, Gabriel; Venkataraman, Kartik, Systems and methods for dynamic refocusing of high resolution images generated using images captured by a plurality of imagers.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for estimating depth and visibility from a reference viewpoint for pixels in a set of images captured from different viewpoints.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for generating depth maps and corresponding confidence maps indicating depth estimation reliability.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Systems and methods for generating depth maps using a set of images containing a baseline image.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for generating depth maps using light focused on an image sensor by a lens element array.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for measuring depth in the presence of occlusions using a subset of images.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for measuring depth using an array of independently controllable cameras.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for parallax detection and correction in images captured using array cameras that contain occlusions using subsets of images to perform depth estimation.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for parallax measurement using camera arrays incorporating 3 x 3 camera configurations.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for performing depth estimation using image data from multiple spectral channels.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for performing post capture refocus using images captured by camera arrays.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Systems and methods for synthesizing higher resolution images using a set of images containing a baseline image.
Ludwig, Lester F., Vignetted optoelectronic array for use in synthetic image formation via signal processing, lensless cameras, and integrated camera-displays.
Rieger, Albert; Barclay, David; Chapman, Steven; Kellner, Heinz-Andreas; Reibl, Michael; Rydelek, James G.; Schweizer, Andreas, Watertight body for accommodating a photographic camera.
Venkataraman, Kartik; Gallagher, Paul; Jain, Ankit K.; Nisenzon, Semyon; Lelescu, Dan; Ciurea, Florian; Molina, Gabriel, Autofocus system for a conventional camera that uses depth information from an array camera.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images including occlusions focused on an image sensor by a lens stack array.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H.; Duparre, Jacques; Hu, Shane Ching-Feng, Capturing and processing of images using camera array incorporating Bayer cameras having different fields of view.
Srikanth, Manohar; Ramamoorthi, Ravi; Venkataraman, Kartik; Chatterjee, Priyam, System and methods for depth regularization and semiautomatic interactive matting using RGB-D images.
Nayar, Shree; Venkataraman, Kartik; Pain, Bedabrata; Lelescu, Dan, Systems and methods for controlling aliasing in images captured by an array camera for use in super resolution processing using pixel apertures.
Lelescu, Dan; Venkataraman, Kartik, Systems and methods for controlling aliasing in images captured by an array camera for use in super-resolution processing.
Duparre, Jacques; McMahon, Andrew Kenneth John; Lelescu, Dan; Venkataraman, Kartik; Molina, Gabriel, Systems and methods for detecting defective camera arrays and optic arrays.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for estimating depth and visibility from a reference viewpoint for pixels in a set of images captured from different viewpoints.
Venkataraman, Kartik; Lelescu, Dan; Molina, Gabriel, Systems and methods for generating compressed light field representation data using captured light fields, array geometry, and parallax information.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for generating depth maps using camera arrays incorporating monochrome and color cameras.
Venkataraman, Kartik; Jabbi, Amandeep S.; Mullis, Robert H., Systems and methods for generating depth maps using images captured by camera arrays incorporating cameras having different fields of view.
Duparre, Jacques; McMahon, Andrew Kenneth John; Lelescu, Dan, Systems and methods for manufacturing camera modules using active alignment of lens stack arrays and sensors.
Venkataraman, Kartik; Huang, Yusong; Jain, Ankit K.; Chatterjee, Priyam, Systems and methods for performing high speed video capture and depth estimation using array cameras.
Lelescu, Dan; Duong, Thang, Systems and methods for synthesizing high resolution images using image deconvolution based on motion and depth information.
Venkataraman, Kartik; Nisenzon, Semyon; Chatterjee, Priyam; Molina, Gabriel, Systems and methods for synthesizing images from image data captured by an array camera using restricted depth of field depth maps in which depth estimation precision varies.