Methods and apparatus for reducing noise in images
IPC Classification
Country / Type
United States(US) Patent
Granted
International Patent Classification (IPC, 7th edition)
H04N-013/00
G06T-005/00
G06T-005/20
G06T-005/50
H04N-005/225
H04N-005/232
H04N-005/235
H04N-005/247
H04N-005/357
H04N-013/02
Application Number
US-0859103
(2015-09-18)
Registration Number
US-9967535
(2018-05-08)
Inventors / Address
Laroia, Rajiv
Shroff, Nitesh
Shroff, Sapna A
Applicant / Address
Light Labs Inc.
Agent / Address
Straub & Straub
Citation Information
Cited by: 0
Patents cited: 70
Abstract
Various features relating to reducing and/or eliminating noise from images are described. In some embodiments, depth-based denoising is used on images captured by one or more camera modules, based on depth information of a scene area and the optical characteristics of the one or more camera modules used to capture the images. In some embodiments, by taking into consideration the camera module optics and the depth of the object included in an image portion, a maximum expected frequency can be determined, and the image portion is then filtered to reduce or remove frequencies above that maximum expected frequency. In this way, noise can be reduced or eliminated from image portions captured by one or more camera modules. The optical characteristics of different camera modules may differ; in some embodiments, a maximum expected frequency is determined on a per-camera-module and per-depth basis.
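The abstract's core idea — derive a per-depth optical cutoff and low-pass each image portion accordingly — can be sketched as follows. This is a minimal illustration, not the patent's actual math: the thin-lens blur-circle model and the moving-average low-pass filter below are invented assumptions standing in for whatever optics model and filter a real implementation would use.

```python
def max_expected_frequency(depth_m, focus_distance_m, aperture_m, focal_m):
    """Estimate the highest spatial frequency the optics can deliver for an
    object at depth_m (hypothetical thin-lens defocus model): detail finer
    than ~1/blur_diameter cannot come from the scene, so anything above it
    in that image portion is treated as noise."""
    # Defocus blur-circle diameter for a lens focused at focus_distance_m.
    blur = (aperture_m * abs(depth_m - focus_distance_m) / depth_m
            * focal_m / (focus_distance_m - focal_m))
    blur = max(blur, 1e-6)   # clamp: in-focus objects give a near-zero blur
    return 1.0 / blur        # frequencies above this are not real detail

def lowpass_1d(samples, cutoff_norm):
    """Crude low-pass stand-in: a moving average whose window widens as the
    normalized cutoff (fraction of the sampling rate, 0 < cutoff <= 0.5)
    falls. A cutoff of 0.5 keeps everything (window width 1)."""
    width = max(1, int(round(0.5 / cutoff_norm)))
    half = width // 2
    out = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out
```

Used this way, an in-focus portion (depth at the focus distance) gets a very high cutoff and passes through essentially unfiltered, while a defocused portion gets a low cutoff and is smoothed aggressively.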
Representative Claims
1. A method of generating an image, the method comprising: determining a first plurality of maximum expected frequencies, each of the first plurality of maximum expected frequencies corresponding to a first camera module and a different depth, said first plurality of maximum expected frequencies including a first maximum expected frequency corresponding to said first camera module and a first depth and a second maximum expected frequency corresponding to the first camera module and a second depth; receiving portions of a first image captured by said first camera module; and performing first filtering on individual portions of said first image captured by said first camera module, said first filtering including filtering individual portions of said first image corresponding to different depths using different filters, the filter being used on an individual portion being based on a depth to which the individual portion of said first image, to which the filter is applied, corresponds.
2. The method of claim 1, wherein said first filtering includes filtering a first portion of said first image corresponding to the first depth with a filter which removes or reduces frequency content above said first maximum expected frequency corresponding to said first camera module.
3. The method of claim 2, wherein said step of determining a first plurality of maximum expected frequencies is performed prior to said first filtering.
4. The method of claim 2, wherein said first filtering further includes filtering a second portion of said first image corresponding to the second depth with a second filter which removes or reduces frequency content above said second maximum expected frequency corresponding to the first camera module; and wherein said first filtering further includes filtering a third portion of said first image corresponding to a third depth with a third filter which removes or reduces frequency content above a third maximum expected frequency.
5. The method of claim 4, further comprising: prior to performing said first filtering, determining depths to which different portions of said first image correspond; and applying said first filtering to portions of said first image based on the determined depth of the individual image portions.
6. The method of claim 5, wherein determining depths to which different portions of said first image correspond includes: generating a depth map for a scene area corresponding to said first image; and determining the depths to which portions of said first image correspond based on said depth map.
7. The method of claim 4, further comprising: determining a second plurality of maximum expected frequencies, each of the second plurality of maximum expected frequencies corresponding to a second camera module and a different depth, said second plurality of maximum expected frequencies including a first maximum expected frequency corresponding to said second camera module and the first depth and a second maximum expected frequency corresponding to the second camera module and the second depth; receiving portions of a second image captured by said second camera module; and performing second filtering on individual portions of said second image captured by said second camera module based on a depth to which each individual portion being filtered corresponds.
8. The method of claim 7, wherein said second filtering includes filtering a first portion of said second image corresponding to the first depth with a filter which removes or reduces frequency content above said first maximum expected frequency corresponding to said second camera module.
9. The method of claim 8, wherein said second filtering further includes filtering a second portion of said second image corresponding to the second depth with a second filter which removes or reduces frequency content above said second maximum expected frequency corresponding to said second camera module.
10. The method of claim 9, wherein said second filtering further includes filtering a third portion of said second image corresponding to a third depth with a third filter which removes or reduces frequency content above a third maximum expected frequency corresponding to the second camera module.
11. The method of claim 1, further comprising: prior to performing said first filtering, determining depths to which different portions of said first image correspond; and wherein performing first filtering on individual portions of said first image captured by said first camera module includes: applying filters having different cut off frequencies to individual image portions corresponding to different depths.
12. The method of claim 1, wherein performing first filtering on individual portions of said first image captured by said first camera module includes: using filters having different filter cut off frequencies to filter different portions of the first image where said different portions correspond to different depths, which of the plurality of different filters to be used for an individual portion of the first image being based on the depth to which the individual portion of the first image corresponds and a noise threshold determined for the depth.
13. The method of claim 12, wherein the filtering applied by one of said different filters removes or smoothes frequencies above the noise threshold determined for the depth of the individual image portion to which the filter is applied.
14. A camera device, comprising: a first camera module; and a processor configured to: determine a first plurality of maximum expected frequencies, each of the first plurality of maximum expected frequencies corresponding to the first camera module and a different depth, said first plurality of maximum expected frequencies including a first maximum expected frequency corresponding to said first camera module and a first depth and a second maximum expected frequency corresponding to the first camera module and a second depth; receive portions of a first image captured by said first camera module; and perform first filtering on individual portions of said first image captured by said first camera module, said first filtering including filtering individual portions of said first image corresponding to different depths using different filters, the filter being used on an individual portion being based on a depth to which the individual portion of said first image, to which the filter is applied, corresponds.
15. An apparatus, comprising: a maximum expected frequency determination module configured to determine a first plurality of maximum expected frequencies, each of the first plurality of maximum expected frequencies corresponding to a first camera module and a different depth, said first plurality of maximum expected frequencies including a first maximum expected frequency corresponding to said first camera module and a first depth and a second maximum expected frequency corresponding to the first camera module and a second depth; a processing module configured to receive portions of a first image captured by the first camera module; and a filtering module configured to perform first filtering on individual portions of said first image captured by said first camera module, said first filtering including filtering individual portions of said first image corresponding to different depths using different filters, the filter being used on an individual portion being based on a depth to which the individual portion of said first image, to which the filter is applied, corresponds.
16. The apparatus of claim 15, wherein said filtering module is configured to filter a first portion of said first image corresponding to the first depth to remove or reduce frequency content above said first maximum expected frequency corresponding to said first camera module, as part of performing said first filtering.
17. The apparatus of claim 16, wherein said filtering module is configured to filter a second portion of said first image corresponding to the second depth to remove or reduce frequency content above said second maximum expected frequency corresponding to the first camera module, as part of performing said first filtering.
18. The apparatus of claim 17, wherein said filtering module is configured to filter a third portion of said first image corresponding to a third depth to remove or reduce frequency content above a third maximum expected frequency, as part of performing said first filtering.
19. The apparatus of claim 18, further comprising: a processor configured to determine depths to which different portions of said first image correspond; and wherein said filtering module is configured to apply said first filtering to portions of said first image based on the determined depth of the individual image portions.
20. The apparatus of claim 19, wherein said processor is configured to: generate a depth map for a scene area corresponding to said first image; and determine depths to which different portions of said first image correspond based on said depth map.
21. The apparatus of claim 20, wherein said processor is further configured to generate said depth map from multiple images of said scene area captured by different camera modules.
22. The apparatus of claim 18, wherein said maximum expected frequency determination module is further configured to determine a second plurality of maximum expected frequencies, each of the second plurality of maximum expected frequencies corresponding to a second camera module and a different depth, said second plurality of maximum expected frequencies including a first maximum expected frequency corresponding to said second camera module and the first depth and a second maximum expected frequency corresponding to the second camera module and the second depth; wherein said image processing module is further configured to receive portions of a second image captured by said second camera module; and wherein said filtering module is further configured to perform second filtering on individual portions of said second image captured by said second camera module based on a depth to which each individual portion being filtered corresponds.
23. A non-transitory machine readable medium for use in a camera including a first image sensor and a second image sensor, the non-transitory machine readable medium including processor executable instructions which when executed control a processor to: determine a first plurality of maximum expected frequencies, each of the first plurality of maximum expected frequencies corresponding to a first camera module and a different depth, said first plurality of maximum expected frequencies including a first maximum expected frequency corresponding to said first camera module and a first depth and a second maximum expected frequency corresponding to the first camera module and a second depth; receive portions of a first image captured by said first camera module; and perform first filtering on individual portions of said first image captured by said first camera module, said first filtering including filtering individual portions of said first image corresponding to different depths using different filters, the filter being used on an individual portion being based on a depth to which the individual portion of said first image, to which the filter is applied, corresponds.
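The flow recited in claim 1 — precompute a maximum expected frequency per (camera module, depth), determine each image portion's depth, then filter each portion with the filter chosen for that depth — might look like the outline below. The module name, depth buckets, cutoff values, and moving-average filter are all invented for illustration; the patent does not specify them.

```python
# Hypothetical precomputed table: (module id, depth bucket) -> normalized
# cutoff frequency (fraction of the sampling rate). This stands in for
# claim 1's "first plurality of maximum expected frequencies".
CUTOFFS = {
    ("module_A", "near"): 0.50,   # near / in focus: keep all frequencies
    ("module_A", "mid"):  0.25,
    ("module_A", "far"):  0.10,   # far / defocused: aggressive smoothing
}

def smooth(portion, cutoff):
    """Moving-average low-pass whose window widens as the cutoff drops
    (a cutoff of 0.5 gives window width 1, i.e. the identity)."""
    width = max(1, int(round(0.5 / cutoff)))
    half = width // 2
    out = []
    for i in range(len(portion)):
        window = portion[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def filter_image(portions, depths, module_id):
    """Claim 1's 'first filtering': each portion is filtered with the
    filter selected for the depth to which that portion corresponds."""
    return [smooth(p, CUTOFFS[(module_id, d)])
            for p, d in zip(portions, depths)]
```

A portion tagged "near" passes through unchanged, while an otherwise identical portion tagged "far" comes out smoothed, which is exactly the per-depth behavior the claim describes.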
Patents cited by this patent (70)
Katayama, Tatsushi (JP); Takiguchi, Hideo (JP); Yano, Kotaro (JP); Hatori, Kenji (JP), Apparatus and method for combining a plurality of images.
Watanabe, Makoto; Magaki, Yosuke; Nakazawa, Sachiko; Onumata, Yuichi, Camera, storage medium having stored therein camera control program, and camera control method.
Georgiev, Todor G.; Chunev, Georgi N., Methods and apparatus for rendering output images with simulated artistic effects from focused plenoptic camera data.
Ciurea, Florian; Venkataraman, Kartik; Molina, Gabriel; Lelescu, Dan, Systems and methods for parallax detection and correction in images captured using array cameras that contain occlusions using subsets of images to perform depth estimation.
Kawamura, Akira (Kanagawa, JP); Togawa, Kazuo (Kanagawa, JP), Visual image display apparatus having a video display for one eye and a controllable shutter for the other eye.