Methods and apparatus for generating a sharp image
IPC Classification
Country/Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.): H04N-013/02; G06T-005/00; H04N-005/222; H04N-005/225; G06T-005/50
Application number: US-0689689 (2015-04-17)
Registration number: US-9824427 (2017-11-21)
Inventors: Pulli, Kari; Shroff, Nitesh; Shroff, Sapna A.; Laroia, Rajiv
Applicant: Light Labs Inc.
Agent: Straub & Straub
Citation information
Cited by: 13 patents
Cites: 41 patents
Abstract
Methods and apparatus for generating a sharp image are described. A camera device includes a plurality of camera modules, e.g., optical chains, where at least some of the camera modules have different depths of field. Multiple images of a scene are captured using the plurality of camera modules. Portions of the multiple images which correspond to the same scene area are identified. Image portion sharpness levels are determined for individual image portions. Image portions with high sharpness levels are selected and included in a composite image.
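The selection step the abstract describes, scoring each image portion for sharpness and keeping the sharpest portion per scene area, can be sketched as a simple focus-stacking routine. This is an illustrative reconstruction, not the patented implementation: the tile-based partitioning, the variance-of-Laplacian sharpness metric, and all function names here are assumptions.

```python
def laplacian_var(tile):
    """Sharpness metric: variance of a discrete Laplacian over the tile.
    Blurred regions have weak high-frequency response, so the value drops."""
    h, w = len(tile), len(tile[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (tile[y - 1][x] + tile[y + 1][x] + tile[y][x - 1]
                   + tile[y][x + 1] - 4.0 * tile[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def composite_sharpest(images, tile=8):
    """For each tile of the scene, copy pixels from whichever aligned
    capture scores sharpest there; the result combines the best-focused
    regions from the different captures."""
    h, w = len(images[0]), len(images[0][0])
    out = [[0.0] * w for _ in range(h)]
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            def crop(im):
                return [row[tx:tx + tile] for row in im[ty:ty + tile]]
            best = max(images, key=lambda im: laplacian_var(crop(im)))
            for y, row in enumerate(crop(best)):
                out[ty + y][tx:tx + len(row)] = row
    return out
```

With two aligned captures of the same scene, one focused near and one far, each tile of the composite would come from whichever capture rendered that area with more high-frequency detail.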
Representative Claims
1. A method of generating an image, the method comprising: operating an image processing device including a processor to receive multiple images of a scene captured using optical chains of a camera, at least some of said optical chains having different depths of field; determining, by the processor, an image portion sharpness level for each of a plurality of portions of said multiple images; generating, by the processor, a composite image from said multiple images based on the determined image portion sharpness levels by combining the sharpest image portions of the scene captured by different optical chains; and operating a memory to store the composite image.

2. The method of claim 1, wherein determining an image portion sharpness level for each of a plurality of portions of said multiple images includes determining, on a per image portion basis, a numerical value which is indicative of the sharpness of the image portion to which the determined numerical value corresponds.

3. The method of claim 1, further comprising: receiving user input identifying an object to focus on; and focusing said optical chains based on the user identified object.

4. The method of claim 3, where a plurality of said optical chains with different depths of field are set to focus to the same distance.

5. The method of claim 1, further comprising: generating a depth map corresponding to said scene; and wherein determining an image portion sharpness level for each of a plurality of portions of said multiple images includes: determining an image portion sharpness level value for each of a plurality of different portions of a first image, at least some of the different portions of the first image having different image portion sharpness level values due to different levels of sharpness of the different portions of the first image.

6. The method of claim 5, wherein determining an image portion sharpness level value for a first image portion of the first image includes: using said generated depth map to determine a depth to which the image portion corresponds; and determining the image portion sharpness level value based on the depth to which said image portion corresponds and the optical chain used to capture the image portion.

7. The method of claim 5, wherein at least some of said optical chains have different optical characteristics and different depths of field.

8. The method of claim 7, wherein determining an image portion sharpness level value for an image portion is based on an optical transfer function of the optical chain which captured the image portion.

9. The method of claim 7, wherein the optical characteristic of the optical chain is a function of at least one of a depth of field setting, the focus distance, the focal length of the optical chain and the distance from said camera to objects in said image portion as indicated based on said depth map.

10. The method of claim 1, further comprising: identifying portions of multiple images which correspond to a same scene area, identified portions of images corresponding to the same scene area being corresponding image portions.

11. The method of claim 10, wherein identifying portions of images which correspond to the same scene area is based on a comparison of objects detected in said multiple images.

12. The method of claim 10, wherein at least a first image portion of a first image and a first image portion of a second image are corresponding image portions that are captured by different optical chains that correspond to a first scene area, the first image portion being of lower resolution than the second image portion, the first image portion of the first image and the first image portion of the second image being in a first set of corresponding image portions corresponding to the first scene area; and wherein generating a composite image includes selecting from the first set of corresponding image portions the image portion having the highest sharpness level.

13. The method of claim 12, wherein the first image portion of the first image corresponding to the first scene area is of lower resolution than the first image portion of the second image but has a higher determined sharpness level than the first image portion of the second image; wherein said composite image includes one image portion corresponding to each area of the composite image; and wherein generating a composite image includes selecting one image portion from each set of corresponding image portions, each selected image portion corresponding to an area of the composite image, said selecting one image portion from each set of corresponding image portions including selecting the first image portion of the first image corresponding to the first scene area for inclusion in the composite image rather than selecting the first image portion of the second image.

14. A camera system comprising: a plurality of optical chains, at least some of said optical chains having different depths of field, said optical chains capturing multiple images of a scene; a processor configured to: determine an image portion sharpness level for each of a plurality of portions of said multiple images; and generate a composite image from said multiple images based on the determined image portion sharpness levels by combining the sharpest image portions of the scene captured by different optical chains; and a memory coupled to said processor for storing said composite image.

15. The camera system of claim 14, wherein at least some of the optical chains: i) have different focal lengths, ii) have the same focal length but different apertures, or iii) have the same focal length, same aperture and different sensor pixel sizes.

16. The camera system of claim 14, further comprising: a user input device configured to receive user input identifying an object to focus on; and a focus control device configured to focus said optical chains based on the user identified object.

17. The camera system of claim 14, wherein said processor is further configured to generate a depth map corresponding to said scene.

18. The camera system of claim 17, wherein the processor, as part of determining an image portion sharpness level for each of a plurality of portions of said multiple images: determines, using said generated depth map, a depth to which the image portion corresponds; and determines, for an individual image portion, an image portion sharpness level based on the depth to which said image portion corresponds and the camera module used to capture the image portion.

19. The camera system of claim 14, wherein the processor is further configured to identify portions of multiple images which correspond to a same scene area, identified portions of images corresponding to the same scene area being corresponding image portions.

20. A non-transitory machine readable medium including processor executable instructions which, when executed by a processor of a camera system, control the camera system to perform the steps of: capturing multiple images of a scene using optical chains, at least some of said optical chains having different depths of field; determining an image portion sharpness level for each of a plurality of portions of said multiple images; and generating a composite image from said multiple images based on the determined image portion sharpness levels by including the sharpest image portions of the scene captured by different optical chains.
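Claims 6 and 18 describe predicting a portion's sharpness from a depth map together with knowledge of which optical chain captured it. A minimal sketch of that idea follows, assuming a hypothetical per-chain model: the `OpticalChain` fields, the 1/(1+gap) fall-off outside the depth of field, and all names are illustrative, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class OpticalChain:
    """Hypothetical per-module optics summary: focus distance and the
    near/far limits of the depth of field, all in meters."""
    focus_m: float
    dof_near_m: float
    dof_far_m: float

def predicted_sharpness(depth_m: float, chain: OpticalChain) -> float:
    """Score in (0, 1]: 1.0 when the portion's depth (from the depth map)
    falls inside the chain's depth of field, decaying as it moves outside."""
    if chain.dof_near_m <= depth_m <= chain.dof_far_m:
        return 1.0
    gap = (chain.dof_near_m - depth_m if depth_m < chain.dof_near_m
           else depth_m - chain.dof_far_m)
    return 1.0 / (1.0 + gap)

def pick_chain(depth_m: float, chains: list) -> int:
    """Index of the chain expected to render a portion at this depth sharpest."""
    return max(range(len(chains)), key=lambda i: predicted_sharpness(depth_m, chains[i]))
```

With one close-focused chain and one far-focused chain set to image the same scene, near portions would be taken from the first module and distant portions from the second, which is the per-portion selection the composite-image claims build on.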
Patents Cited by This Patent (41)
Katayama, Tatsushi (JP); Takiguchi, Hideo (JP); Yano, Kotaro (JP); Hatori, Kenji (JP), Apparatus and method for combining a plurality of images.
Watanabe, Makoto; Magaki, Yosuke; Nakazawa, Sachiko; Onumata, Yuichi, Camera, storage medium having stored therein camera control program, and camera control method.
Georgiev, Todor G.; Chunev, Georgi N., Methods and apparatus for rendering output images with simulated artistic effects from focused plenoptic camera data.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for parallax detection and correction in images captured using array cameras that contain occlusions using subsets of images to perform depth estimation.
Kawamura, Akira (Kanagawa, JP); Togawa, Kazuo (Kanagawa, JP), Visual image display apparatus having a video display for one eye and a controllable shutter for the other eye.
Pulli, Kari; Shroff, Nitesh; Shroff, Sapna A., Methods and apparatus for compensating for motion and/or changing light conditions during image capture.