Methods and systems for suppressing noise in images
Country/Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th edition):
H04N-005/21
G06T-005/00
H04N-005/217
H04N-005/33
Application number: US-0943035 (2013-07-16)
Registration number: US-9635220 (2017-04-25)
Inventors: Foi, Alessandro; Maggioni, Matteo Tiziano
Applicant: FLIR Systems, Inc.
Agent: Haynes and Boone, LLP
Citation information: cited by 0 patents; cites 42 patents
Abstract
Various techniques are disclosed to effectively suppress noise in images (e.g., video or still images). For example, noise in images may be more accurately modeled as having a structured random noise component and a structured fixed pattern noise (FPN) component. Various parameters of noise may be estimated robustly and efficiently in real time and in offline processing. Noise in images may be filtered adaptively, based on various noise parameters and motion parameters. Such filtering techniques may effectively suppress noise even in images that have a prominent FPN component, and may also improve effectiveness of other operations that may be affected by noise.
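The abstract's statement that noise parameters can be estimated robustly in real time corresponds to the MAD-based estimators of claims 7 and 20: the random-noise (RND) standard deviation comes from a temporal high-pass of the frames (where constant FPN cancels), and the FPN standard deviation from a spatial high-pass (which contains both components). The sketch below is a minimal illustration of that idea, not the patent's implementation; the function name and the simple first-difference high-pass filters are assumptions for illustration.

```python
import numpy as np

MAD_TO_STD = 1.4826  # MAD-to-standard-deviation factor for Gaussian data


def estimate_noise_std(frames):
    """Estimate RND and FPN standard deviations from a video stack.

    frames: array of shape (T, H, W). Assumes a mostly static scene,
    zero-mean Gaussian random noise independent across frames, and
    fixed pattern noise constant across frames.
    """
    frames = np.asarray(frames, dtype=np.float64)
    # Temporal high-pass: frame differences cancel both the scene and
    # the FPN, leaving only random noise (scaled by sqrt(2)).
    t_diff = np.diff(frames, axis=0) / np.sqrt(2.0)
    sigma_rnd = MAD_TO_STD * np.median(np.abs(t_diff - np.median(t_diff)))
    # Spatial high-pass: horizontal pixel differences contain both RND
    # and FPN (again scaled by sqrt(2)); MAD is robust to sparse edges.
    s_diff = np.diff(frames, axis=2) / np.sqrt(2.0)
    sigma_total = MAD_TO_STD * np.median(np.abs(s_diff - np.median(s_diff)))
    # FPN variance is whatever spatial variance the RND does not explain.
    sigma_fpn = np.sqrt(max(sigma_total**2 - sigma_rnd**2, 0.0))
    return sigma_rnd, sigma_fpn
```

On synthetic frames with known noise levels, the two estimates recover the RND and FPN standard deviations separately, which is what allows the subsequent filtering to treat the two components differently.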
Representative claims

1. A method comprising: receiving a plurality of video image frames; constructing a plurality of spatiotemporal volumes by stacking together a plurality of image blocks extracted from same or different spatial positions on different video image frames along a trajectory of estimated motion from the video image frames; filtering the spatiotemporal volumes, wherein the filtering models both a random noise (RND) component and a fixed pattern noise (FPN) component in the video image frames to suppress both types of noise, and wherein the filtering is adaptively performed based at least on the estimated motion captured in each spatiotemporal volume to suppress the FPN component; and aggregating the image blocks from the filtered spatiotemporal volumes to generate a plurality of filtered video image frames.

2. The method of claim 1, wherein each of the image blocks is a fixed-size patch extracted from a corresponding one of the video image frames.

3. The method of claim 1, wherein the video image frames are thermal video image frames.

4. The method of claim 1, further comprising determining the trajectory of the estimated motion from a sequence of the video image frames.

5. The method of claim 1, wherein the filtering of the spatiotemporal volumes further comprises: applying a decorrelating transform to the spatiotemporal volumes to generate corresponding three-dimensional (3-D) spectra, wherein each 3-D spectrum comprises a plurality of spectral coefficients for a transform-domain representation of a corresponding one of the spatiotemporal volumes; modifying at least some of the spectral coefficients in each of the 3-D spectra based at least in part on one or more noise parameters that model both the RND and the FPN components; and applying, to the 3-D spectra, an inverse transform of the decorrelating transform to generate the filtered spatiotemporal volumes.

6. The method of claim 5, further comprising: estimating a standard deviation of the RND component and a standard deviation of the FPN component in the video image frames; and approximating coefficient standard deviations using at least the standard deviation of the RND component and the standard deviation of the FPN component, wherein the one or more noise parameters comprise the coefficient standard deviations, and wherein the modifying comprises shrinking the at least some of the coefficients based on the corresponding coefficient standard deviations.

7. The method of claim 6, wherein: the standard deviation of the RND component is estimated using a median absolute deviation (MAD) of a temporal high-pass version of the video image frames; and the standard deviation of the FPN component is estimated using the standard deviation of the RND component and a MAD of a spatial high-pass version of the video image frames.

8. The method of claim 6, further comprising estimating a power spectral density (PSD) of the RND component and a PSD of the FPN component, wherein the coefficient standard deviations are approximated based further on the PSD of the RND component and the PSD of the FPN component.

9. The method of claim 8, wherein the approximating the coefficient standard deviations is based further on a size of the corresponding spatiotemporal volume, a relative spatial alignment of blocks associated with the corresponding spatiotemporal volume, and/or positions of the corresponding spectral coefficients within the corresponding 3-D spectrum so as to adjust the coefficient standard deviations based at least on the estimated motion.

10. The method of claim 5, further comprising: estimating an FPN pattern in the video image frames; and subtracting the estimated FPN pattern from the video image frames, wherein the subtracting is performed prior to, during, or after the filtering of the spatiotemporal volumes.

11. The method of claim 5, wherein the aggregating the image blocks is based at least in part on the coefficient standard deviations.

12. The method of claim 5, further comprising modifying the spectral coefficients of the 3-D spectra to sharpen and/or improve contrast of the video image frames.

13. A system comprising: a video interface configured to receive a plurality of video image frames; a processor in communication with the video interface and configured to: construct a plurality of spatiotemporal volumes by stacking together a plurality of image blocks extracted from same or different spatial positions on different video image frames along a trajectory of estimated motion from the video image frames, filter the spatiotemporal volumes, wherein the filtering models both a random noise (RND) component and a fixed pattern noise (FPN) component in the video image frames to suppress both types of noise, and wherein the filtering is adaptively performed based at least on the estimated motion captured in each spatiotemporal volume to suppress the FPN component, and aggregate the image blocks from the filtered spatiotemporal volumes to generate a plurality of filtered video image frames; and a memory in communication with the processor and configured to store the video image frames.

14. The system of claim 13, wherein each of the image blocks is a fixed-size patch extracted from a corresponding one of the video image frames.

15. The system of claim 13, further comprising an image capture device configured to capture images of a scene, wherein the video image frames are provided by the image capture device.

16. The system of claim 15, wherein the image capture device is an infrared camera configured to capture thermal images of the scene.

17. The system of claim 13, wherein the processor is configured to determine the trajectory of the estimated motion from a sequence of the video image frames.

18. The system of claim 13, wherein the processor is configured to: apply a decorrelating transform to the spatiotemporal volumes to generate corresponding three-dimensional (3-D) spectra, wherein each 3-D spectrum comprises a plurality of spectral coefficients for a transform-domain representation of a corresponding one of the spatiotemporal volumes; modify at least some of the spectral coefficients in each of the 3-D spectra based at least in part on one or more noise parameters that model both the RND and the FPN components; and apply, to the 3-D spectra, an inverse transform of the decorrelating transform to generate the filtered spatiotemporal volumes.

19. The system of claim 18, wherein the processor is further configured to: estimate a standard deviation of the RND component and a standard deviation of the FPN component in the video image frames; and approximate coefficient standard deviations using at least the standard deviation of the RND component and the standard deviation of the FPN component, wherein the one or more noise parameters comprise the coefficient standard deviations, and wherein the modifying comprises shrinking the at least some of the coefficients based on the corresponding coefficient standard deviations.

20. The system of claim 19, wherein the processor is configured to: estimate the standard deviation of the RND component using a median absolute deviation (MAD) of a temporal high-pass version of the video image frames; and estimate the standard deviation of the FPN component using the standard deviation of the RND component and a MAD of a spatial high-pass version of the video image frames.

21. The system of claim 19, wherein: the processor is further configured to estimate a power spectral density (PSD) of the RND component and a PSD of the FPN component; and the coefficient standard deviations are approximated based further on the PSD of the RND component and the PSD of the FPN component.

22. The system of claim 21, wherein the processor is configured to approximate the coefficient standard deviations based further on a size of the corresponding spatiotemporal volume, a relative spatial alignment of blocks associated with the corresponding spatiotemporal volume, and/or positions of the corresponding spectral coefficients within the corresponding 3-D spectrum so as to adjust the coefficient standard deviations based at least on the estimated motion.

23. The system of claim 18, wherein the processor is configured to: estimate an FPN pattern in the video image frames; and subtract the estimated FPN pattern from the video image frames, wherein the estimated FPN component is subtracted prior to, during, or after the spatiotemporal volumes are filtered.

24. The system of claim 18, wherein the processor is configured to aggregate the image blocks based at least in part on the coefficient standard deviations.

25. The system of claim 18, wherein the processor is further configured to modify the spectral coefficients of the 3-D spectra to sharpen and/or improve contrast of the video image frames.
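The transform-domain filtering of claims 5 and 18 can be illustrated with a small sketch. It stands in a 3-D DCT for the decorrelating transform and hard thresholding for the coefficient shrinkage of claims 6 and 19, and uses a simplifying assumption: the blocks in a volume are co-located, so the FPN (identical in every stacked block) concentrates its energy in the temporal-DC plane of the spectrum, while random noise spreads evenly over all coefficients. All names and threshold values here are illustrative assumptions, not taken from the patent.

```python
import numpy as np
from scipy.fft import dctn, idctn


def filter_volume(volume, sigma_rnd, sigma_fpn, lam=2.7):
    """Shrink one spatiotemporal volume in a 3-D DCT domain.

    volume: (T, N, N) stack of spatially co-located image blocks.
    With an orthonormal transform, iid random noise keeps standard
    deviation sigma_rnd in every coefficient, while FPN that repeats
    in all T blocks accumulates into the temporal-DC plane with
    variance T * sigma_fpn**2.
    """
    spec = dctn(volume, axes=(0, 1, 2), norm="ortho")
    sigma = np.full(spec.shape, float(sigma_rnd))
    # Temporal-DC plane: FPN adds coherently across the T stacked blocks.
    sigma[0] = np.sqrt(sigma_rnd**2 + volume.shape[0] * sigma_fpn**2)
    # Hard thresholding as a simple stand-in for the claimed shrinkage.
    shrunk = np.where(np.abs(spec) > lam * sigma, spec, 0.0)
    shrunk[0, 0, 0] = spec[0, 0, 0]  # keep the overall mean intact
    return idctn(shrunk, axes=(0, 1, 2), norm="ortho")
```

Making the temporal-DC threshold depend on the volume (claims 9 and 22 additionally adapt it to block alignment and motion) is what lets the filter suppress FPN aggressively where motion decorrelates it, without over-smoothing static content.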
Patents cited by this patent (42)
Chen, Hai-Wen; Braunreiter, Dennis C.; Schmitt, Harry A., Adaptive non-uniformity compensation using feedforward shunting and wavelet filter.
Lee, Kangeun; Son, Changyong; Lee, Insung; Shin, Jaehyun; Kim, Jonghun; Jung, Kyuhyuk; Ahn, Youngwook, High-band speech coding apparatus and high-band speech decoding apparatus in wide-band speech coding/decoding system and high-band speech coding and decoding method performed by the apparatuses.
Balasubramanian, Thyagarajan (West Lafayette, IN); Choi, King (Rochester, NY); DiVencenzo, Joseph (Rochester, NY), Image rendering system and associated method for minimizing contours in a quantized digital color image.
Bae, Byeong-woo; Lee, Sung-dong; Suk, Hong-seong; Yoo, Jina; Lee, Ki-won, Mobile communication terminal equipped with temperature compensation function for use in bio-information measurement.
Parkulo, Craig M.; Barbee, Wesley McChord; Malin, Jerald Robert; Landis, Jeffrey Lynn; Shannon, Matthew, Personal multimedia communication system and network for emergency services personnel.
Wober, Munib A.; Yang, Yibing; Hajjahmad, Ibrahim; Sunshine, Lon E.; Reisch, Michael L., Structuring a digital image into a DCT pyramid image representation.
Lieberman, Klony; Sharon, Yuval; Naimi, Eyal; Maor, Yaniv; Tsachi, Mattan; Arnon, Boas; Turm, Amichai, Virtual data entry device and method for input of alphanumeric and other data.