| Country / Status | United States (US) Patent, Granted |
|---|---|
| IPC (7th edition) | |
| Application No. | US-0438542 (2017-02-21) |
| Patent No. | US-10091405 (2018-10-02) |
| Inventors / Address | |
| Applicant / Address | |
| Agent / Address | |
| Citations | Cited by: 2 / Patents cited: 297 |
Systems and methods for reducing motion blur in images or video in ultra low light with array cameras in accordance with embodiments of the invention are disclosed. In one embodiment, a method for synthesizing an image from multiple images captured using an array camera includes capturing image data using active cameras within an array camera, where the active cameras are configured to capture image data and the image data includes pixel brightness values that form alternate view images captured from different viewpoints, determining sets of corresponding pixels in the alternate view images where each pixel in a set of corresponding pixels is chosen from a different alternate view image, summing the pixel brightness values for corresponding pixels to create pixel brightness sums for pixel locations in an output image, and synthesizing an output image from the viewpoint of the output image using the pixel brightness sums.
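The core operation the abstract describes can be sketched in a few lines of NumPy. This is a minimal illustration, not the patented method: it assumes the alternate-view images have already been geometrically aligned to the output viewpoint (the parallax and occlusion handling described in the claims is taken as already done), and the function name is hypothetical.

```python
import numpy as np

def synthesize_low_light(aligned_views):
    """Sum pixel brightness values across corresponding pixels of
    geometrically aligned alternate-view images (hypothetical helper;
    alignment and occlusion handling are assumed to have been done)."""
    stack = np.stack(aligned_views).astype(np.float64)
    # Summing N aligned frames grows the signal N-fold while
    # uncorrelated sensor noise grows only ~sqrt(N), which is why
    # the sums improve SNR in ultra low light.
    return stack.sum(axis=0)

# Toy example: four aligned 2x2 single-channel "views".
views = [np.full((2, 2), v, dtype=np.float64) for v in (10, 12, 11, 9)]
out = synthesize_low_light(views)
```

The float accumulator matters: summing many 8-bit frames in their native dtype would overflow, so real implementations accumulate in a wider type before tone-mapping the result back to display range.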
1. A method for synthesizing an image from multiple images captured using an array camera in low light conditions, the method comprising: capturing image data using at least one active camera within an array camera, where the image data captured by the at least one active camera comprises pixel brightness values that form a reference image and a plurality of alternate images captured at different times; applying geometric shifts to shift the plurality of alternate view images to the viewpoint of the reference image using a processor configured by software for alternate images that are not from the same viewpoint of the reference image; summing the pixel brightness values for pixels in the reference image with pixel brightness values for corresponding pixels in the alternate images to create pixel brightness sums for the pixel locations in the reference image using the processor configured by software; and synthesizing an output image from the viewpoint of the reference image using image data comprising the pixel brightness sums for the pixel locations in the reference image using the processor configured by software.

2. The method of claim 1, wherein applying geometric shifts to shift the plurality of alternate images to the viewpoint of the reference image further comprises applying scene independent geometric shifts to the alternate images to compensate for distortions due to physical characteristics of the plurality of active cameras that captured the alternate images.

3. The method of claim 2, further comprising: performing parallax detection using the processor configured by software to identify scene dependent geometric shifts to apply to the alternate images by comparing the reference image and the alternate images; wherein applying geometric shifts to shift the plurality of alternate images to the viewpoint of the reference image further comprises applying scene dependent geometric shifts to the plurality of alternate images to compensate for parallax.

4. The method of claim 3, further comprising: identifying pixels in the alternate images that are occluded in the reference image using the processor configured by software; and leaving occluded pixels out when summing the pixel brightness values for pixels in the reference image with pixel brightness values for corresponding pixels in the alternate images using the processor configured by software.

5. The method of claim 2, further comprising: performing parallax detection using the processor configured by software to identify scene dependent geometric shifts to apply to at least a portion of the pixels in the alternative images by comparing the reference image and the alternative images; and when parallax detection identifies at least one pixel within a threshold distance of the reference viewpoint, applying scene dependent geometric shifts to the plurality of alternative images to compensate for parallax.

6. The method of claim 2, further comprising: performing parallax detection using the processor configured by software to identify scene dependent geometric shifts to apply to at least a portion of the pixels in the alternative images by comparing the reference image and the alternative images; and when parallax detection determines that a pixel from the reference viewpoint has a depth within a specified depth of field, applying scene dependent geometric shifts to corresponding pixels in the alternative images to compensate for parallax.

7. The method of claim 6, further comprising receiving user input specifying a depth of field via a user interface using the processor configured by software.

8. The method of claim 6, further comprising automatically determining a specified depth of field based upon a depth of an object within a region of interest using the processor configured by software.

9. The method of claim 2, wherein applying geometric shifts to shift the plurality of alternative images to the viewpoint of the reference image further comprises applying a fixed parallax shift to the plurality of alternative images.

10. The method of claim 9, further comprising determining the fixed parallax shift based upon user input specifying a depth received via a user interface using the processor configured by software.

11. The method of claim 9, further comprising automatically determining a fixed parallax shift based upon a depth of an object within a region of interest using the processor configured by software.

12. The method of claim 11, wherein automatically determining a fixed parallax shift based upon a depth of an object within a region of interest further comprises: calculating a depth map for a region of interest; generating a histogram of depths in the region of interest; and determining the depth of an object within the region of interest as the median depth of the region of interest.

13. The method of claim 1, further comprising: capturing a second set of image data using the plurality of active cameras and synthesizing a second output image using the processor configured by software; calculating motion compensation vectors for the second output image using the processor configured by software; applying motion compensation shifts to shift the second output image to the viewpoint of the output image using the processor configured by software; summing the pixel brightness values for pixels in the output image with pixel brightness values for corresponding pixels in the second output image to create pixel brightness sums for the pixel locations in the output image using the processor configured by software; and synthesizing a motion compensated output image from the viewpoint of the reference image using the pixel brightness sums for the pixel locations in the output image using the processor configured by software.

14. The method of claim 1, wherein the plurality of active cameras that capture the reference image and the alternative images form a first subset of cameras and the method further comprises: capturing image data using a second subset of active cameras within the array camera, where the second subset of active cameras are configured to capture image data within the same spectral band as the first subset of cameras and the image data captured by the active cameras comprises pixel brightness values that form a second reference image and a second set of alternative images captured from different viewpoints; applying geometric shifts to shift the second set of alternative images to the viewpoint of the second reference image using the processor configured by software; summing the pixel brightness values for pixels in the second reference image with pixel brightness values for corresponding pixels in the second set of alternative images to create pixel brightness sums for the pixel locations in the second reference image using a processor configured by software; synthesizing an alternate view output image from the viewpoint of the second reference image using the pixel brightness sums for the pixel locations in the second reference image using the processor configured by software; and synthesizing a high resolution image using the processor configured by software to perform a super resolution process based upon the output image and the alternate view output image.

15. The method of claim 1, wherein the array camera comprises cameras that capture image data within different spectral bands.

16. The method of claim 15, wherein the cameras in the array camera capture image data within spectral bands selected from the group consisting of: red light; green light; blue light; and infrared light.

17. The method of claim 16, wherein at least one camera in the array camera is a Bayer camera.

18. The method of claim 1, wherein: the array camera comprises a plurality of red cameras, blue cameras, and green cameras, where the number of green cameras is larger than the number of red cameras and larger than the number of blue cameras; the plurality of cameras are a plurality of green cameras and the output image is a green output image; the method further comprises: capturing a second set of image data using a plurality of active red cameras and synthesizing a red output image using the processor configured by software; capturing a second set of image data using a plurality of active blue cameras and synthesizing a blue output image using the processor configured by software; increasing pixel brightness values of the red output image by a factor Ng/Nr where Ng is the number of green cameras and Nr is the number of red cameras using the processor configured by software; increasing pixel brightness values of the blue output image by a factor Ng/Nb where Nb is the number of blue cameras using the processor configured by software; and combining the red, green, and blue output images into a color image using the processor configured by software.

19. The method of claim 1, wherein the array camera comprises an array camera module comprising: an imager array including an array of focal planes, where each focal plane comprises an array of light sensitive pixels; and an optic array including an array of lens stacks, where each lens stack creates an optical channel that forms an image on the array of light sensitive pixels within a corresponding focal plane; wherein pairings of lens stacks and focal planes form multiple cameras including the plurality of active cameras.

20. The method of claim 19, wherein the lens stacks within the optical channels sample the same object space with sub-pixel offsets to provide sampling diversity.

21. The method of claim 1, wherein the reference image is a virtual image synthesized in a location where none of the active cameras exist.
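Two of the claims lend themselves to short numeric sketches. The first follows claim 12's recipe for a fixed parallax shift: histogram the depths in a region of interest and take the median depth as the object depth. The second follows claim 18's channel normalization: scale the red and blue brightness sums by Ng/Nr and Ng/Nb so they match the green channel's effective exposure before combining into a color image. Both function names and the bin count are illustrative choices, not from the patent.

```python
import numpy as np

def fixed_parallax_depth(depth_map_roi, num_bins=64):
    """Claim 12 sketch: histogram the ROI depth map and report the
    median depth, which drives a single fixed parallax shift."""
    hist, edges = np.histogram(depth_map_roi, bins=num_bins)
    # The median falls in the first bin whose cumulative count
    # reaches half the total number of samples.
    cum = np.cumsum(hist)
    median_bin = int(np.searchsorted(cum, depth_map_roi.size / 2))
    return 0.5 * (edges[median_bin] + edges[median_bin + 1])

def combine_color(red, green, blue, n_r, n_g, n_b):
    """Claim 18 sketch: brightness sums from Nr red, Ng green, and
    Nb blue cameras; scale red by Ng/Nr and blue by Ng/Nb so all
    channels reflect the same number of summed exposures."""
    r = red * (n_g / n_r)
    b = blue * (n_g / n_b)
    return np.stack([r, green, b], axis=-1)

# Toy inputs: a mostly-near ROI, and per-channel 2x2 brightness sums
# from an array with 4 green, 2 red, and 2 blue cameras.
roi_depth = np.array([1.0] * 10 + [9.0] * 3)
obj_depth = fixed_parallax_depth(roi_depth)

rgb = combine_color(np.full((2, 2), 5.0), np.full((2, 2), 8.0),
                    np.full((2, 2), 6.0), n_r=2, n_g=4, n_b=2)
```

The histogram median approximates the true median to within one bin width, which is adequate here since the depth is only used to pick one global shift; the returned value is the center of the bin containing the median sample.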
Copyright KISTI. All Rights Reserved.