IPC Classification Information
Country / Type | United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.) |
Application No. | US-0152148 (2002-05-20)
Registration No. | US-7280696 (2007-10-09)
Inventors | Zakrzewski, Radoslaw Romuald; Sadok, Mokhtar
Applicant | Simmonds Precision Products, Inc.
Agent | Muirhead and Saturnelli, LLC
Citations | Cited by: 31 / Patents cited: 68
Abstract
Detecting video phenomena, such as fire in an aircraft cargo bay, includes receiving a plurality of video images from a plurality of sources, compensating the images to provide enhanced images, extracting features from the enhanced images, and combining the features from the plurality of sources to detect the video phenomena. The plurality of sources may include cameras having a sensitivity of between 400 nm and 1000 nm and/or may include cameras having a sensitivity of between 7 and 14 micrometers. Extracting features may include determining an energy indicator for each of a subset of the plurality of frames. Detecting video phenomena may also include comparing energy indicators for each of the subset of the plurality of frames to a reference frame. The reference frame corresponds to a video frame taken when no fire is present, a video frame immediately preceding each of the subset of the plurality of frames, or a video frame immediately preceding a frame that is immediately preceding each of the subset of the plurality of frames.
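One way to read the energy-indicator step in the abstract is as a per-frame energy statistic compared against a reference frame. The sketch below is illustrative only; the function names and the choice of mean squared intensity as the energy measure are assumptions, not details taken from the patent.

```python
import numpy as np

def frame_energy(frame):
    """Energy indicator for one frame: mean squared pixel intensity
    (one plausible energy measure; the patent does not fix a formula)."""
    f = np.asarray(frame, dtype=np.float64)
    return float(np.mean(f * f))

def energy_deviations(frames, reference):
    """Compare the energy indicator of each frame in a subset against a
    reference frame (e.g. one captured when no fire is present)."""
    ref = frame_energy(reference)
    return [frame_energy(f) - ref for f in frames]
```

A large positive deviation from the no-fire reference would flag the frame subset for further feature extraction and fusion.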
Representative Claims
What is claimed is: 1. A method of detecting video phenomena, comprising: receiving a plurality of video images from a plurality of sources; compensating the images to provide enhanced images; extracting features from the enhanced images; performing local fusion using said features for each set of enhanced images from each of the plurality of sources and producing a local fusion result for said each source related to a video phenomenon, wherein at least one of said features for one of the enhanced images is a numerical value characterizing a plurality of related pixels of said one enhanced image, said local fusion result for said each source being an indicator indicating whether said video phenomenon is present; and combining the local fusion results for each of said plurality of sources to produce a final result indicating whether the video phenomenon is present. 2. A method, according to claim 1, wherein the plurality of sources include cameras having a sensitivity of between 400 nm and 1000 nm. 3. A method, according to claim 1, wherein the plurality of sources include cameras having a sensitivity of between 7 and 14 micrometers. 4. A method, according to claim 1, wherein extracting features includes performing a principal component analysis on a subset of a plurality of the frames. 5. A method, according to claim 4, wherein performing a principal component analysis includes computing eigenvalues and a correlation matrix for the subset of the plurality of frames. 6. A method, according to claim 1, wherein extracting features includes determining wavelet coefficients in connection with multiscale modeling. 7. A method, according to claim 1, wherein at least one of said performing and said combining includes using a neural network. 8. A method, according to claim 1, wherein at least one of said performing and said combining includes using fuzzy logic. 9. 
A method, according to claim 1, wherein at least one of said performing and said combining includes using a hidden Markov model. 10. A method of detecting video phenomena, comprising: receiving a plurality of video images from a plurality of sources; compensating the images to provide enhanced images; extracting features from the enhanced images; and combining the features from the plurality of sources to detect the video phenomena, wherein combining features includes using a multiple model estimator. 11. A method, according to claim 1, wherein the video phenomenon is a fire. 12. A method for detecting fire in an aircraft cargo bay, comprising: providing a plurality of cameras in the cargo bay; obtaining image signals from the cameras; compensating the image signals to provide enhanced image signals, wherein said compensating includes performing processing using at least one input parameter determined in accordance with one or more external input values, at least one of said external input values indicating an environmental condition; extracting features from the enhanced image signals, wherein at least one of the features of at least one enhanced image signal is a numerical value characterizing a plurality of related pixels thereof; and combining the features to detect the presence of fire, said combining including producing a local fusion result for each of said cameras, said local fusion result being an indicator indicating whether fire is present, said combining also including producing a final result in accordance with said local fusion result for each of said cameras. 13. A method, according to claim 1, wherein the plurality of sources include some cameras having a sensitivity of between 400 nm and 1000 nm and other cameras having a sensitivity of between 7 and 14 micrometers. 14. A method, according to claim 1, further comprising enhancing the image signals using edge detection techniques. 15. 
A method, according to claim 14, wherein the edge detection techniques includes at least one of: the Sobel technique, the Prewitt technique, the Roberts technique, and the Canny operator. 16. A method, according to claim 1, wherein image compensation includes compensating for at least one of: camera artifacts, dynamic range unbalance, aircraft vibration, temperature variations, and fog and smoke effects. 17. A method, according to claim 1, wherein the video phenomenon is one of: unexpected motions, intrusion, unauthorized personnel, wing tip clearance during taxiing, runway incursion of foreign objects, pilot alertness, and aircraft body parts. 18. A method for detecting fire in an aircraft cargo bay, comprising: providing a plurality of cameras in the cargo bay; obtaining image signals from the cameras; enhancing the image signals to provide enhanced image signals; extracting features from the enhanced image signals; and combining the features to detect the presence of fire, wherein combining the features includes using information provided by a cargo bay smoke detector. 19. 
A computer program product stored on a computer readable medium that detects video phenomena, comprising: executable code that receives a plurality of video images from a plurality of sources; executable code that compensates the images to provide enhanced images; executable code that extracts features from the enhanced images; executable code that performs local fusion using said features for each set of enhanced images from each of the plurality of sources and produces a local fusion result for said each source related to a video phenomenon, wherein at least one of said features for one of the enhanced images is a numerical value characterizing a plurality of related pixels of said one enhanced image, said local fusion result for said each source being an indicator indicating whether said video phenomenon is present; and executable code that combines the local fusion results for each of said plurality of sources to produce a final result indicating whether the video phenomenon is present. 20. An apparatus that detects video phenomena, comprising: a plurality of cameras; and at least one processor, coupled to said cameras, wherein said processor receives a plurality of video images from a plurality of sources, compensates the images to provide enhanced images, extracts features from the enhanced images, performs local fusion using said features for each set of enhanced images from each of the plurality of sources and produces a local fusion result for said each source related to a video phenomenon, wherein at least one of said features for one of the enhanced images is a numerical value characterizing a plurality of related pixels of said one enhanced image, said local fusion result for said each source being an indicator indicating whether said video phenomenon is present, and combines the local fusion results for each of said plurality of sources to produce a final result indicating whether the video phenomenon is present. 21. 
An apparatus, according to claim 20, wherein at least some of the cameras have a sensitivity of between 400 nm and 1000 nm. 22. An apparatus, according to claim 20, wherein at least some of the cameras have a sensitivity of between 7 and 14 micrometers. 23. The method of claim 1, wherein said compensating includes adjusting a video image for vibration. 24. The method of claim 23, wherein said compensating uses a Wiener filter. 25. The method of claim 23, wherein said compensating includes performing compensation in accordance with a special camera lens used on a camera for obtaining at least one of said plurality of video images. 26. The method of claim 23, wherein said compensating includes image transformation for calibration of a camera used to obtain at least one of said plurality of video images. 27. The method of claim 23, wherein said compensation includes compensating for dynamic range unbalance. 28. A method of detecting video phenomena, comprising: receiving a plurality of video images from a plurality of sources; compensating the images to provide enhanced images; extracting features from the enhanced images; and combining the features from the plurality of sources to detect the video phenomena, wherein said compensating includes adjusting a video image for vibration and wherein said compensation includes performing temperature compensation for at least one of said plurality of video images obtained using an IR camera. 29. A method of detecting video phenomena, comprising: receiving a plurality of video images from a plurality of sources; compensating the images to provide enhanced images; extracting features from the enhanced images; and combining the features from the plurality of sources to detect the video phenomena, wherein said compensating includes adjusting a video image for vibration and wherein said compensation includes performing calibration in accordance with an age of a camera used to obtain at least one of said plurality of video images. 
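Claims 24 and 31 recite filtering vibration-induced image noise with a Wiener filter. Below is a minimal numpy sketch of a local adaptive (Wiener-style) denoiser; the windowed linear-MMSE formulation and the average-local-variance noise estimate are standard textbook choices, not details from the patent (`scipy.signal.wiener` implements a similar filter).

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_wiener(img, k=3, noise_var=None):
    """Adaptive (Wiener-style) denoiser: per-pixel linear MMSE estimate
    from the local mean and variance over a k x k window."""
    img = np.asarray(img, dtype=np.float64)
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    win = sliding_window_view(padded, (k, k))
    mu = win.mean(axis=(-1, -2))           # local mean
    var = win.var(axis=(-1, -2))           # local variance
    if noise_var is None:
        noise_var = float(var.mean())      # crude noise-power estimate
    # Shrink each pixel toward the local mean in proportion to local SNR.
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
    return mu + gain * (img - mu)
```

In flat regions (variance near the noise power) the gain goes to zero and noise is smoothed away; near edges and hotspots the gain approaches one and detail is preserved.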
30. A method of detecting video phenomena, comprising: receiving a plurality of video images from a plurality of sources; compensating the images to provide enhanced images; extracting features from the enhanced images; and combining the features from the plurality of sources to detect the video phenomena, wherein said compensating includes adjusting a video image for vibration and wherein said compensation uses at least one external input value including one of: results from a smoke detection control unit, ambient temperature used in IR camera image compensation, an aircraft altitude signal, and a cargo bay door open signal. 31. The method of claim 23, further comprising: filtering image noise due to a vibration using a Wiener filter. 32. The method of claim 31, wherein said vibration is due to an unstable camera due to vibration. 33. A method of detecting video phenomena, comprising: receiving a plurality of video images from a plurality of sources; compensating the images to provide enhanced images; extracting features from the enhanced images; combining the features from the plurality of sources to detect the video phenomena, wherein said compensating includes adjusting a video image for vibration; and filtering image noise due to a vibration using a Wiener filter, wherein said vibration is due to an unstable camera due to vibration; and processing a video image in the frequency domain using a homomorphic filter to perform simultaneous brightness range compression and contrast enhancement. 34. 
A method of detecting video phenomena, comprising: receiving a plurality of video images from a plurality of sources; compensating the images to provide enhanced images; extracting features from the enhanced images; combining the features from the plurality of sources to detect the video phenomena, wherein said compensating includes adjusting a video image for vibration; and filtering image noise due to a vibration using a Wiener filter, wherein said vibration is due to an unstable camera due to vibration; and applying a logarithmic transformation to a video image to split the illumination and reflection components producing a resulting image which is processed in the frequency domain where functions of brightness range compression and contrast enhancement are performed simultaneously. 35. A method of detecting video phenomena, comprising: receiving a plurality of video images from a plurality of sources; compensating the images to provide enhanced images; extracting features from the enhanced images; combining the features from the plurality of sources to detect the video phenomena, wherein said compensating includes adjusting a video image for vibration; and filtering image noise due to a vibration using a Wiener filter, wherein said vibration is due to an unstable camera due to vibration; and using matrix multiplication on a video image to suppress a camera vibration effect wherein the elements of a matrix used in the matrix multiplication are determined and verified in accordance with at least one vibration pattern observed in an aircraft environment. 36. The method of claim 35, wherein said at least one vibration pattern includes at least one of frequency, magnitude and orientation. 37. The computer program product of claim 19, wherein said executable code that compensates includes executable code that adjusts a video image for vibration. 38. The computer program product of claim 37, wherein said executable code that compensates uses a Wiener filter. 39. 
The computer program product of claim 37, wherein said executable code that compensates includes executable code that performs compensation in accordance with a special camera lens used on a camera for obtaining at least one of said plurality of video images. 40. The computer program product of claim 37, wherein said executable code that compensates includes executable code that performs an image transformation for calibration of a camera used to obtain at least one of said plurality of video images. 41. The computer program product of claim 37, wherein said executable code that compensates includes executable code that compensates for dynamic range unbalance. 42. A computer program product stored on a computer readable medium that detects video phenomena, comprising: executable code that receives a plurality of video images from a plurality of sources; executable code that compensates the images to provide enhanced images; executable code that extracts features from the enhanced images; executable code that combines the features from the plurality of sources to detect the video phenomena, wherein said executable code that compensates includes executable code that adjusts a video image for vibration, and wherein said executable code that compensates includes executable code that performs temperature compensation for at least one of said plurality of video images obtained using an IR camera. 43. 
A computer program product stored on a computer readable medium that detects video phenomena, comprising: executable code that receives a plurality of video images from a plurality of sources; executable code that compensates the images to provide enhanced images; executable code that extracts features from the enhanced images; executable code that combines the features from the plurality of sources to detect the video phenomena, wherein said executable code that compensates includes executable code that adjusts a video image for vibration, and wherein said executable code that compensates includes executable code that performs calibration in accordance with an age of a camera used to obtain at least one of said plurality of video images. 44. A computer program product stored on a computer readable medium that detects video phenomena, comprising: executable code that receives a plurality of video images from a plurality of sources; executable code that compensates the images to provide enhanced images; executable code that extracts features from the enhanced images; executable code that combines the features from the plurality of sources to detect the video phenomena, wherein said executable code that compensates includes executable code that adjusts a video image for vibration, and wherein said executable code that compensates uses at least one external input value including one of: results from a smoke detection control unit, ambient temperature used in IR camera image compensation, an aircraft altitude signal, and a cargo bay door open signal. 45. The computer program product of claim 37, further comprising: executable code that filters image noise due to a vibration using a Wiener filter. 46. The computer program product of claim 45, wherein said vibration is due to an unstable camera due to vibration. 47. 
A computer program product stored on a computer readable medium that detects video phenomena, comprising: executable code that receives a plurality of video images from a plurality of sources; executable code that compensates the images to provide enhanced images; executable code that extracts features from the enhanced images; executable code that combines the features from the plurality of sources to detect the video phenomena, wherein said executable code that compensates includes executable code that adjusts a video image for vibration; executable code that filters image noise due to a vibration using a Wiener filter, wherein said vibration is due to an unstable camera due to vibration; and executable code that processes a video image in the frequency domain using a homomorphic filter to perform simultaneous brightness range compression and contrast enhancement. 48. A computer program product stored on a computer readable medium that detects video phenomena, comprising: executable code that receives a plurality of video images from a plurality of sources; executable code that compensates the images to provide enhanced images; executable code that extracts features from the enhanced images; executable code that combines the features from the plurality of sources to detect the video phenomena, wherein said executable code that compensates includes executable code that adjusts a video image for vibration; executable code that filters image noise due to a vibration using a Wiener filter, wherein said vibration is due to an unstable camera due to vibration; and executable code that applies a logarithmic transformation to a video image to split the illumination and reflection components producing a resulting image which is processed in the frequency domain where functions of brightness range compression and contrast enhancement are performed simultaneously. 49. 
A computer program product stored on a computer readable medium that detects video phenomena, comprising: executable code that receives a plurality of video images from a plurality of sources; executable code that compensates the images to provide enhanced images; executable code that extracts features from the enhanced images; executable code that combines the features from the plurality of sources to detect the video phenomena, wherein said executable code that compensates includes executable code that adjusts a video image for vibration; executable code that filters image noise due to a vibration using a Wiener filter, wherein said vibration is due to an unstable camera due to vibration; and executable code that suppresses a camera vibration effect using matrix multiplication on a video image wherein the elements of a matrix used in the matrix multiplication are determined and verified in accordance with at least one vibration pattern observed in an aircraft environment. 50. The computer program product of claim 49, wherein said at least one vibration pattern includes at least one of frequency, magnitude and orientation. 51. The method of claim 1, further comprising: compensating a first image to account for a camera imperfection wherein at least one camera has a non-uniform brightness in a camera display area. 52. The method of claim 51, wherein said camera imperfection includes a field of view having a center brighter than at least one corner. 53. The method of claim 52, further comprising: enhancing the first image in a space domain using a contrast stretching technique that increases a dynamic range of said first image. 54. The method of claim 1, further comprising: calibrating a dynamic range for at least one camera used to obtain one of said video images in accordance with a type of said at least one camera; and compensating said one video image causing image grayscale distribution to be within a range capability of said at least one camera. 55. 
A method of detecting video phenomena, comprising: receiving a plurality of video images from a plurality of sources; compensating the images to provide enhanced images; extracting features from the enhanced images; combining the features from the plurality of sources to detect the video phenomena; detecting a hotspot in a first video image; enhancing said video image using a gray level slicing technique to highlight a specific range of gray levels associated with a hotspot-related feature. 56. The method of claim 55, wherein said first video image is obtained using an IR camera. 57. The method of claim 1, further comprising: filtering out at least one known hot area for at least one video image. 58. The method of claim 1, further comprising: expanding a dynamic range associated with at least one of said plurality of video images in accordance with a viewing range of a human eye. 59. The computer program product of claim 19, further comprising: compensating a first image to account for a camera imperfection wherein at least one camera has a non-uniform brightness in a camera display area. 60. The computer program product of claim 59, wherein said camera imperfection includes a field of view having a center brighter than at least one corner. 61. The computer program product of claim 60, further comprising: executable code that enhances the first image in a space domain using a contrast stretching technique that increases a dynamic range of said first image. 62. The computer program product of claim 19, further comprising: executable code that calibrates a dynamic range for at least one camera used to obtain one of said video images in accordance with a type of said at least one camera; and executable code that compensates said one video image causing image grayscale distribution to be within a range capability of said at least one camera. 63. 
A computer program product stored on a computer readable medium that detects video phenomena, comprising: executable code that receives a plurality of video images from a plurality of sources; executable code that compensates the images to provide enhanced images; executable code that extracts features from the enhanced images; executable code that combines the features from the plurality of sources to detect the video phenomena; executable code that detects a hotspot in a first video image; and executable code that enhances said video image using a gray level slicing technique to highlight a specific range of gray levels associated with a hotspot-related feature. 64. The computer program product of claim 63, wherein said first video image is obtained using an IR camera. 65. The computer program product of claim 19, further comprising: executable code that filters out at least one known hot area for at least one video image. 66. The computer program product of claim 19, further comprising: executable code that expands a dynamic range associated with at least one of said plurality of video images in accordance with a viewing range of a human eye. 67. 
A method of detecting fire in an aircraft cargo bay comprising: receiving a plurality image frames from a plurality of cameras in the cargo bay; enhancing the plurality of image frames to compensate for a condition associated with at least one of: a camera condition and a cargo bay condition; selecting a portion of each of said plurality of image frames; extracting features from said portions, wherein at least one of said features for each of said portions is a numerical value characterizing a plurality of related pixels included in said each portion; performing local fusion using said features for each set of enhanced image frames from each of the plurality of cameras and producing a local fusion result for said each set of enhanced image frames from each camera, said local fusion result for each camera being an indicator indicating whether fire is present; and combining the local fusion results for each set of enhanced images to produce a final result indicating whether a fire is present. 68. The method of claim 67, wherein said cargo bay condition includes known hot spots within said cargo bay. 69. A method of detecting fire in an aircraft cargo bay comprising: receiving a plurality image frames from a plurality of cameras in the cargo bay; enhancing the plurality of image frames to compensate for a condition associated with at least one of: a camera condition and a cargo bay condition; selecting a portion of each of said plurality of image frames; extracting features from said portions; and using the extracted features to detect the presence of fire, wherein said cargo bay condition includes known hot spots within said cargo bay, and wherein at least one of said known hot spots is caused by at least one of: temperature of cargo in the cargo bay, a mechanical cooler generating a hot spot, and an aircraft being in a warm area. 70. 
The method of claim 67, wherein said camera condition includes at least one of: vibration of a camera, a non-uniform brightness in a camera image, a line in a camera image, a dark spot in a camera image, and a camera artifact. 71. The method of claim 67, wherein said plurality of cameras are mounted in upper corners of a cargo bay. 72. The method of claim 67, further comprising: associating a first set of features extracted with a first region of the image and associating a second set of features extracted with a second region of the image. 73. The method of claim 72, further comprising: extracting the first set of features; and extracting the second set of features. 74. The method of claim 72, further comprising: growing one of said first and said second regions by pixel aggregation and averaging. 75. The method of claim 67, further comprising: identifying at least one feature in accordance with an image distribution map. 76. The method of claim 75, wherein said at least one feature includes at least one of: pixel intensity, pixel grey level, a Fourier descriptor, a wavelet coefficient, a statistical moment. 77. The method of claim 75, further comprising: using at least one of said features to identify one or more regions of interest in an image. 78. The method of claim 77, further comprising: splitting a region into a plurality of regions. 79. The method of claim 77, further comprising: merging a region with another region. 80. The method of claim 77, wherein a region of interest is associated with at least one of: a fire region, a smoke region, a hotspot region. 81. The method of claim 80, wherein a region of interest is defined as a contiguous set of pixels. 82. The method of claim 67, further comprising: downsampling each portion producing downsampled portions for said plurality of image frames; and extracting features from the plurality of downsampled portions. 83. 
The method of claim 82, wherein said downsampling includes performing at least one of: selecting every other pixel of a frame, using a resizing technique on a frame. 84. A computer program product stored on a computer readable medium that detects fire in an aircraft cargo bay comprising: executable code that receives a plurality image frames from a plurality of cameras in the cargo bay; executable code that enhances the plurality of image frames to compensate for a condition associated with at least one of: a camera condition and a cargo bay condition; executable code that selects a portion of each of said plurality of image frames; executable code that extracts features from said portions, wherein at least one of said features for each of said portions is a numerical value characterizing a plurality of related pixels included in said each portion; executable code that performs local fusion using said features for each set of enhanced image frames from each of the plurality of cameras and produces a local fusion result for said each set of enhanced image frames from each camera, said local fusion result for each camera being an indicator indicating whether fire is present; and combining the local fusion results for each set of enhanced images to produce a final result indicating whether the fire is present. 85. The computer program product of claim 84, wherein said cargo bay condition includes known hot spots within said cargo bay. 86. 
A computer program product stored on a computer readable medium that detects fire in an aircraft cargo bay comprising: executable code that receives a plurality image frames from a plurality of cameras in the cargo bay; executable code that enhances the plurality of image frames to compensate for a condition associated with at least one of: a camera condition and a cargo bay condition; executable code that selects a portion of each of said plurality of image frames; executable code that extracts features from said portions, wherein said cargo bay condition includes known hot spots within said cargo bay; and executable code that uses the extracted features to detect the presence of fire, wherein at least one of said known hot spots is caused by at least one of: temperature of cargo in the cargo bay, a mechanical cooler generating a hot spot, and an aircraft being in a warm area. 87. The computer program product of claim 84, wherein said camera condition includes at least one of: vibration of a camera, a non-uniform brightness in a camera image, a line in a camera image, a dark spot in a camera image, and a camera artifact. 88. The computer program product of claim 84, wherein said plurality of cameras are mounted in upper corners of a cargo bay. 89. The computer program product of claim 84, further comprising: executable code that associates a first set of features extracted with a first region of the image and associating a second set of features extracted with a second region of the image. 90. The computer program product of claim 89, further comprising: executable code that extracts the first set of features; and executable code that extracts the second set of features. 91. The computer program product of claim 89, further comprising: executable code that grows one of said first and said second regions by pixel aggregation and averaging. 92. 
The computer program product of claim 84, further comprising: executable code that identifies at least one feature in accordance with an image distribution map. 93. The computer program product of claim 92, wherein said at least one feature includes at least one of: pixel intensity, pixel grey level, a Fourier descriptor, a wavelet coefficient, a statistical moment. 94. The computer program product of claim 92, further comprising: executable code that uses at least one of said features to identify one or more regions of interest in an image. 95. The computer program product of claim 94, further comprising: executable code that splits a region into a plurality of regions. 96. The computer program product of claim 94, further comprising: executable code that merges a region with another region. 97. The computer program product of claim 94, wherein a region of interest is associated with at least one of: a fire region, a smoke region, a hotspot region. 98. The computer program product of claim 97, wherein a region of interest is defined as a contiguous set of pixels. 99. The computer program product of claim 84, further comprising: executable code that downsamples each portion producing downsampled portions for said plurality of image frames; and executable code that extracts features from the plurality of downsampled portions. 100. The computer program product of claim 99, wherein said executable code that downsamples includes executable code that performs at least one of: selecting every other pixel of a frame, and using a resizing technique on a frame. 101. The method of claim 12, wherein said compensating includes performing image compensation processing for at least one image signal obtained from one of said plurality of cameras in accordance with one or more of: a camera specific factor, a camera specific defect, and a type of camera. 102. 
A method for detecting fire in an aircraft cargo bay, comprising: providing a plurality of cameras in the cargo bay; obtaining image signals from the cameras; compensating the image signals to provide enhanced image signals, wherein said compensating is performed in accordance with one or more external input values, at least one of said external input values indicating an environmental condition; extracting features from the enhanced image signals; and combining the features to detect the presence of fire, wherein said external input values includes at least one value indicating a flight profile condition of an aircraft, said flight profile condition of an aircraft being associated with one of a plurality of flight profiles including loading, landing, taking off, and cruising. 103. The method of claim 1, further comprising: providing at least one external input to said combining, said at least one external input used in connection with performing a sensitivity adjustment in accordance with an amount of movement caused by a change other than said video phenomena. 104. The method of claim 1, wherein at least one of the features of an enhanced image is a numerical value characterizing said enhanced image, and said combining includes determining a weighted score using said at least one of the features of each of said local fusion results. 105. The method of claim 1, wherein said performing local fusion produces an indicator indicating whether fire is present, said final result includes at least one of a score and a final result indicator, said score formed using said local fusion results for said each source, and said final result indicator formed by performing a logical operation using said local fusion results for said each source. 106. The method of claim 1, wherein said combining includes using information provided by a smoke detector. 107. 
The method of claim 1, further comprising: suppressing a camera vibration effect in accordance with at least one observed vibration pattern of an aircraft environment. 108. The method of claim 1, wherein said compensating includes performing temperature compensation for at least one of said plurality of video images.
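Claims 4-5 recite principal component analysis via eigenvalues of a correlation matrix over a subset of frames. A hedged sketch follows, assuming each frame is flattened to a vector and correlation is taken frame-by-frame (the patent does not specify this layout; the function name is invented):

```python
import numpy as np

def frame_pca_eigenvalues(frames):
    """PCA over a subset of frames: flatten each frame, compute the
    frame-by-frame correlation matrix, return its eigenvalues in
    descending order (cf. claim 5)."""
    X = np.stack([np.asarray(f, dtype=np.float64).ravel() for f in frames])
    C = np.corrcoef(X)                    # correlation matrix across frames
    return np.linalg.eigvalsh(C)[::-1]    # eigvalsh sorts ascending; reverse
```

A sudden drop in correlation structure between consecutive frames (energy spreading into lower eigenvalues) would indicate rapid scene change such as flame flicker.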
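Claims 33-34 (and 47-48) describe applying a logarithmic transformation to split the illumination and reflection components, then processing in the frequency domain to perform brightness range compression and contrast enhancement simultaneously. This is the classic homomorphic filter; the Gaussian high-frequency-emphasis transfer function and the parameter values below are conventional textbook choices, not values from the patent:

```python
import numpy as np

def homomorphic_filter(img, gamma_l=0.5, gamma_h=1.5, d0=10.0):
    """Homomorphic enhancement: log transform turns the
    illumination * reflectance product into a sum; a filter with gain
    gamma_l < 1 at low frequencies compresses the (slowly varying)
    illumination range, while gamma_h > 1 at high frequencies boosts
    (reflectance-carried) contrast."""
    img = np.asarray(img, dtype=np.float64)
    z = np.log1p(img)                      # split product into a sum
    Z = np.fft.fft2(z)
    rows, cols = img.shape
    u = np.fft.fftfreq(rows)[:, None] * rows
    v = np.fft.fftfreq(cols)[None, :] * cols
    d2 = u ** 2 + v ** 2                   # squared distance from DC
    H = gamma_l + (gamma_h - gamma_l) * (1.0 - np.exp(-d2 / (2.0 * d0 ** 2)))
    out = np.real(np.fft.ifft2(H * Z))
    return np.expm1(out)                   # invert the log transform
```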
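Claim 53 recites enhancing an image in the space domain with a contrast stretching technique that increases its dynamic range. A minimal linear stretch could look like the following; the output mapping endpoints are illustrative defaults, not parameters from the patent:

```python
import numpy as np

def contrast_stretch(img, lo=0.0, hi=255.0):
    """Linear contrast stretching: map the image's observed min..max
    intensity range onto [lo, hi], expanding its dynamic range."""
    img = np.asarray(img, dtype=np.float64)
    mn, mx = img.min(), img.max()
    if mx == mn:
        return np.full_like(img, lo)       # flat image: nothing to stretch
    return lo + (img - mn) * (hi - lo) / (mx - mn)
```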
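Claims 55 and 63 recite a gray level slicing technique that highlights a specific range of gray levels associated with a hotspot-related feature (e.g. in an IR image). A sketch, with the highlight value and intensity range as assumed parameters:

```python
import numpy as np

def gray_level_slice(img, low, high, highlight=255, background=None):
    """Gray level slicing: set pixels whose intensity falls in
    [low, high] to a highlight value; other pixels either keep their
    original value or are flattened to a background level."""
    img = np.asarray(img)
    mask = (img >= low) & (img <= high)
    out = img.copy() if background is None else np.full_like(img, background)
    out[mask] = highlight
    return out
```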
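Claims 104-105 describe combining the per-camera local fusion indicators into a weighted score and a final indicator formed by a logical operation. The sketch below assumes equal default weights, a fixed threshold, and logical OR as the combining operation; all three are assumptions for illustration, not choices stated in the patent:

```python
import numpy as np

def fuse_local_results(indicators, weights=None, threshold=0.5):
    """Global fusion of per-camera local fusion indicators: return a
    weighted score and a final alarm formed by a logical OR over the
    per-camera indicators (cf. claims 104-105)."""
    ind = np.asarray(indicators, dtype=np.float64)
    w = np.ones_like(ind) if weights is None else np.asarray(weights, dtype=np.float64)
    w = w / w.sum()                             # normalize the weights
    score = float(w @ ind)                      # weighted score
    alarm = bool(np.any(ind >= threshold))      # logical OR over cameras
    return score, alarm
```

Weighting lets a more trustworthy source (e.g. an IR camera in a dark cargo bay) dominate the score, while the OR keeps a single strongly alarming camera from being averaged away.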