| Country / Type | United States (US) Patent, Granted |
|---|---|
| IPC (7th edition) | |
| Application No. | US-0287626 (2016-10-06) |
| Registration No. | US-10182216 (2019-01-15) |
| Inventor / Address | |
| Applicant / Address | |
| Agent / Address | |
| Citations | Times cited: 0 / Cited patents: 315 |
Systems and methods for extended color processing on Pelican array cameras in accordance with embodiments of the invention are disclosed. In one embodiment, a method of generating a high resolution image includes obtaining input images, where a first set of images includes information in a first band of visible wavelengths and a second set of images includes information in a second band of visible wavelengths and non-visible wavelengths, determining an initial estimate by combining the first set of images into a first fused image, combining the second set of images into a second fused image, spatially registering the fused images, denoising the fused images using bilateral filters, normalizing the second fused image in the photometric reference space of the first fused image, combining the fused images, determining a high resolution image that when mapped through a forward imaging transformation matches the input images within at least one predetermined criterion.
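The denoising step in the abstract uses bilateral filters whose weights, as claim 6 below puts it, are "a function of both the photometric and geometric distance between a pixel and pixels in the neighborhood of the pixel." A minimal single-channel sketch of that idea (the Gaussian form of the weights and the `sigma_s`/`sigma_r` parameter names are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Smooth a single-channel float image with weights that depend on both
    geometric distance (sigma_s) and photometric distance (sigma_r)."""
    h, w = img.shape
    padded = np.pad(img, radius, mode="edge")
    out = np.empty((h, w), dtype=np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))  # geometric term
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Photometric term: penalise intensity difference to the centre pixel.
            photometric = np.exp(-(patch - img[i, j]) ** 2 / (2 * sigma_r**2))
            weights = spatial * photometric
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out
```

A cross-channel variant in the spirit of claim 7 would compute the photometric weights from the B/W image and apply them when filtering the RGB channels.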
1. A method of generating an image of a scene using a camera array including at least one camera that captures an RGB image of a scene and at least one camera that captures a black and white (B/W) image of the scene, the method comprising: obtaining input images captured by a plurality of cameras that includes a camera that captures an RGB image and a camera that captures a B/W image, where the input images include a first RGB input image that includes image information captured in at least three channels (RGB) of information and a second B/W input image that includes image information captured in a single black and white (B/W) channel of information; generating a fused image using a processor configured by software to: measure parallax using the input images captured by the plurality of cameras to produce a depth map; normalize the second B/W input image in the photometric reference space of the first RGB input image; cross-channel normalize the first RGB input image with respect to the B/W input image by applying gains and offsets to pixels of the first RGB input image; and perform cross-channel fusion using the first RGB input image and the second B/W input image to produce an image.

2. The method of claim 1, wherein the plurality of cameras further comprises a camera that captures a fourth near-IR channel that can also be used during fusion processing to produce a fused image.

3. The method of claim 1, wherein the first RGB input image and the second B/W input image have the same resolution.

4. The method of claim 1, further comprising: capturing a first set of input RGB images that are captured by a first set of cameras from the plurality of cameras; and capturing a second set of input B/W images that are captured by a second set of cameras from the plurality of cameras.

5. The method of claim 4, further comprising: combining image information from the first set of input RGB images into a first fused image using analog gain and noise information from the first set of cameras; and combining image information from the second set of input B/W images into a second fused image using analog gain and noise information from the second set of cameras.

6. The method of claim 1, further comprising denoising the first RGB input image using a first bilateral filter and denoising the second B/W input image using a second bilateral filter, wherein the first bilateral filter and the second bilateral filter utilize weights that are a function of both the photometric and geometric distance between a pixel and pixels in the neighborhood of the pixel.

7. The method of claim 6, wherein the first bilateral filter is a cross-channel bilateral filter utilizing weights determined for the second B/W input image.

8. The method of claim 6, wherein the first RGB input image is captured by a first camera from the plurality of cameras and the first bilateral filter is a cross-channel bilateral filter utilizing weights determined for the second B/W input image when an analog gain value of the first camera is above a predetermined threshold.

9. The method of claim 1, wherein normalizing the second B/W input image in the photometric reference space of the first RGB input image comprises applying gains and offsets to pixels of the second B/W input image.

10. The method of claim 1, wherein the processor being configured to normalize the second B/W input image in the photometric reference space of the first RGB input image comprises the processor being configured to: select a first pixel of interest in the second B/W input image and a first collection of similar pixels in the neighborhood of the first pixel of interest; select a second pixel of interest in the first RGB input image corresponding to the first pixel of interest and a second collection of similar pixels in the neighborhood of the second pixel of interest; determine the intersection of the first collection of similar pixels and the second collection of similar pixels; calculate gain and offset values using the intersection of the two collections; and apply the gain and offset values to the appropriate pixels in the second B/W input image.

11. The method of claim 10, where the intersection of the first collection of similar pixels and the second collection of similar pixels is the set of pixels in the first and second collections having the same corresponding locations in each of the first RGB input image and the second B/W input image.

12. An array camera configured to generate an image of a scene using an array camera including at least one camera that captures an RGB image of a scene and at least one camera that captures a B/W image of the scene, the array camera comprising: an array camera including a plurality of cameras that includes a camera that captures an RGB image and a camera that captures a B/W image; and a processor configured by software to: obtain input images captured by the plurality of cameras that includes a camera that captures an RGB image and a camera that captures a B/W image, where the input images include a first RGB input image that includes image information captured in at least three channels (RGB) of information and a second B/W input image that includes image information captured in a single black and white (B/W) channel of information; generate a fused image by: measuring parallax using the input images captured by the plurality of cameras to produce a depth map; normalizing the second B/W input image in the photometric reference space of the first RGB input image; cross-channel normalizing the first RGB input image with respect to the B/W input image by applying gains and offsets to pixels of the first RGB input image; and performing cross-channel fusion using the first RGB input image and the second B/W input image to produce an image.

13. The array camera of claim 12, wherein the plurality of cameras further comprises a camera that captures a fourth near-IR channel that can also be used during fusion processing to produce a fused image.

14. The array camera of claim 12, wherein the three channels (RGB) of information include green and red light.
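Claims 10 and 11 describe the photometric normalization as selecting pixels similar to a pixel of interest in each image, intersecting the two selections by pixel location, and fitting gain/offset values from the intersection. A toy sketch of that procedure under stated assumptions (the similarity threshold `tol`, the use of a precomputed luma proxy for the RGB image, and a least-squares fit are illustrative choices not specified in the claims):

```python
import numpy as np

def normalize_bw_to_rgb(bw_patch, luma_patch, tol=0.1):
    """Fit a gain/offset mapping the B/W patch into the photometric space of
    the RGB image's luma, using only pixels 'similar' to the patch centre in
    BOTH images -- the location-wise intersection of claims 10-11."""
    centre = (bw_patch.shape[0] // 2, bw_patch.shape[1] // 2)
    similar_bw = np.abs(bw_patch - bw_patch[centre]) < tol        # first collection
    similar_luma = np.abs(luma_patch - luma_patch[centre]) < tol  # second collection
    both = similar_bw & similar_luma  # intersection by pixel location (claim 11)
    # Least-squares fit: luma ~ gain * bw + offset over the intersected pixels.
    gain, offset = np.polyfit(bw_patch[both], luma_patch[both], 1)
    # Apply the fitted gain and offset to the B/W pixels (claim 9).
    return gain * bw_patch + offset, gain, offset
```

In a full pipeline this would run per pixel of interest with overlapping neighborhoods; the sketch shows a single patch to keep the intersection and fitting steps visible.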
Copyright KISTI. All Rights Reserved.