Country / Type | United States (US) Patent, Granted |
---|---|
International Patent Classification (IPC, 7th ed.) | |
Application No. | US-0547048 (2014-11-18) |
Registration No. | US-10119808 (2018-11-06) |
Inventor / Address | |
Applicant / Address | |
Agent / Address | |
Citation info | Cited by: 0 / Cites: 305 patents |
Systems and methods in accordance with embodiments of the invention estimate depth from projected texture using camera arrays that include at least two two-dimensional arrays of cameras, each comprising several cameras; an illumination system configured to illuminate a scene with a projected texture; a processor; and memory containing an image processing pipeline application and an illumination system controller application. In addition, the illumination system controller application directs the processor to control the illumination system to illuminate a scene with a projected texture. Furthermore, the image processing pipeline application directs the processor to: utilize the illumination system controller application to control the illumination system to illuminate a scene with a projected texture; capture a set of images of the scene illuminated with the projected texture; and determine depth estimates for pixel locations in an image from a reference viewpoint using at least a subset of the set of images.
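The depth estimation described in the abstract (and elaborated in claim 1) is at heart a disparity search: for each candidate depth, the pixels expected to correspond to a reference pixel are located along each alternative-view camera's epipolar line at the disparity that depth implies, their similarity is compared, and the depth with the most similar correspondences is selected. The sketch below illustrates that idea for a single reference pixel; the function name, the sum-of-absolute-differences cost, and the calling convention are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def estimate_depth(ref, alt_views, baselines, focal, depths, patch=3):
    """Brute-force depth search for the center pixel of `ref`.

    ref       : 2-D grayscale reference image
    alt_views : list of 2-D grayscale alternative-view images
    baselines : list of (bx, by) baseline vectors; each defines the
                epipolar direction for the corresponding camera
    focal     : focal length in pixels
    depths    : iterable of candidate depths to test
    """
    h, w = ref.shape
    cy, cx = h // 2, w // 2
    ref_patch = ref[cy - patch:cy + patch + 1, cx - patch:cx + patch + 1]
    best_depth, best_cost = None, np.inf
    for z in depths:
        cost = 0.0
        for img, (bx, by) in zip(alt_views, baselines):
            # Expected disparity at depth z along this camera's epipolar line.
            sx = int(round(focal * bx / z))
            sy = int(round(focal * by / z))
            y, x = cy + sy, cx + sx
            alt_patch = img[y - patch:y + patch + 1, x - patch:x + patch + 1]
            cost += np.abs(ref_patch - alt_patch).sum()  # SAD similarity cost
        if cost < best_cost:
            best_cost, best_depth = cost, z
    return best_depth
```

With a synthetic scene in which a feature appears shifted by the disparity a known depth implies, the search recovers that depth; a full implementation would repeat this per pixel and handle image borders and occlusions.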
1. A camera array, comprising:
at least one two-dimensional array of cameras comprising a plurality of cameras;
an illumination system configured to illuminate a scene with a projected texture;
a processor;
memory containing an image processing pipeline application and an illumination system controller application;
wherein the at least one two-dimensional array of cameras comprises at least two two-dimensional arrays of cameras located in complementary occlusion zones on opposite sides of the illumination system;
wherein the illumination system controller application directs the processor to control the illumination system to illuminate a scene with a projected texture;
wherein the image processing pipeline application directs the processor to:
utilize the illumination system controller application to control the illumination system to illuminate a scene with a projected texture, wherein a spatial pattern period of the projected texture is different along different epipolar lines;
capture a set of images of the scene illuminated with the projected texture;
determine depth estimates for pixel locations in an image from a reference viewpoint using at least a subset of the set of images that includes at least one image captured by a camera in each of the two-dimensional arrays of cameras, wherein generating a depth estimate for a given pixel location in the image from the reference viewpoint comprises:
identifying pixels in the at least a subset of the set of images that correspond to the given pixel location in the image from the reference viewpoint based upon expected disparity at a plurality of depths along a plurality of epipolar lines aligned at different angles with respect to each other, wherein each epipolar line in the plurality of epipolar lines is between a camera located at the reference viewpoint and an alternative view camera from a plurality of alternative view cameras in the two-dimensional array of cameras and each epipolar line is used to determine the direction of anticipated shifts of corresponding pixels with depth in alternative view images captured by the plurality of alternative view cameras, wherein disparity along a first epipolar line is greater than disparity along a second epipolar line and the projected pattern incorporates a smaller spatial pattern period in a direction corresponding to the second epipolar line;
comparing the similarity of the corresponding pixels identified at each of the plurality of depths; and
selecting the depth from the plurality of depths at which the identified corresponding pixels have the highest degree of similarity as a depth estimate for the given pixel location in the image from the reference viewpoint.

2. The camera array of claim 1, wherein a portion of a scene that is occluded in the field of view of at least one camera in a first of the two-dimensional arrays of cameras is visible in a plurality of cameras in a second of the arrays of cameras, where the first and second arrays of cameras are located in complementary occlusion zones on opposite sides of the illumination system.

3. The camera array of claim 2, wherein the at least two two-dimensional arrays of cameras comprises a pair of two-dimensional arrays of cameras located in complementary occlusion zones on either side of the illumination system.

4. The camera array of claim 3, wherein each array of cameras is a 2×2 array of monochrome cameras.

5. The camera array of claim 3, wherein the projected texture includes a first spatial pattern period in a first direction and a second larger spatial pattern period in a second direction.

6. The camera array of claim 1, wherein the at least one two-dimensional array of cameras comprises one two-dimensional array of cameras including a plurality of lower resolution cameras and at least one higher resolution camera.

7.
The camera array of claim 6, wherein the two-dimensional array of cameras comprises at least one lower resolution camera located above, below, to the left, and to the right of the higher resolution camera.

8. The camera array of claim 7, wherein the higher resolution camera includes a Bayer filter pattern and the lower resolution cameras are monochrome cameras.

9. The camera array of claim 8, wherein the image processing pipeline application configures the higher resolution camera to capture texture information when the illumination system is not illuminating the scene using the projected pattern.

10. The camera array of claim 1, wherein the projected texture includes a first spatial pattern period in a first direction and a second larger spatial pattern period in a second direction.

11. The camera array of claim 1, wherein the illumination system is a static illumination system configured to project a fixed pattern.

12. The camera array of claim 1, wherein: the illumination system is a dynamic illumination system configured to project a controllable pattern; and the illumination system controller application directs the processor to control the pattern projected by the illumination system.

13. The camera array of claim 1, wherein the illumination system includes a spatial light modulator selected from the group consisting of a reflective liquid crystal on silicon microdisplay and a translucent liquid crystal microdisplay.

14.
The camera array of claim 13, wherein the image processing pipeline application directs the processor to:
utilize the illumination system controller application to control the illumination system to illuminate a scene with a first projected texture;
capture a first set of images of the scene illuminated with the first projected texture;
determine initial depth estimates for pixel locations in an image from a reference viewpoint using at least a subset of the first set of images;
utilize the illumination system controller application to control the illumination system to illuminate a scene with a second projected texture selected based upon at least one initial depth estimate for a pixel location in an image from a reference viewpoint;
capture a second set of images of the scene illuminated with the second projected texture; and
determine updated depth estimates for pixel locations in an image from a reference viewpoint using at least a subset of the first set of images.

15. The camera array of claim 14, wherein the spatial pattern period of the second projected texture at the at least one initial depth estimate for a pixel location in an image from a reference viewpoint is higher than the spatial resolution of the plurality of cameras at the at least one initial depth estimate for a pixel location in an image from the reference viewpoint.

16. The camera array of claim 1, wherein the illumination system comprises an array of projectors.

17. The camera array of claim 16, wherein the array of projectors comprises projectors configured to project different patterns.

18. The camera array of claim 17, wherein the different patterns comprise patterns having different spatial pattern periods.

19. The camera array of claim 17, wherein: the projectors are configured to project controllable patterns; and the illumination system controller application directs the processor to control the patterns projected by the illumination system.

20.
The camera array of claim 1, wherein the projected pattern is random.

21. The camera array of claim 1, wherein the projected pattern includes a smaller spatial pattern period in a first direction and a larger spatial pattern period in a second direction perpendicular to the first direction.

22. The camera array of claim 1, wherein the image processing pipeline application directs the processor to:
utilize the illumination system controller application to control the illumination system to illuminate a scene with a projected texture;
capture a first set of images of the scene illuminated with the projected texture;
determine depth estimates for pixel locations in an image from a first reference viewpoint using at least a subset of the first set of images;
utilize the illumination system controller application to control the illumination system to prevent the illumination of the scene with the projected texture;
capture at least one image of the scene in which the natural texture of the scene is visible; and
collocate natural texture and depth information for the scene.

23. The camera array of claim 22, wherein the image processing pipeline application directs the processor to collocate natural texture and depth information for the scene by assuming that the first set of images and the at least one image are captured from the same viewpoint.

24.
The camera array of claim 22, wherein:
at least one image of the scene in which the natural texture of the scene is visible is part of a second set of images of the scene in which the natural texture of the scene is visible;
the image processing pipeline application further directs the processor to determine depth estimates for pixel locations in an image from a second reference viewpoint using at least a subset of the second set of images; and
the image processing pipeline application directs the processor to collocate natural texture and depth information for the scene by:
identifying similar features in depth maps generated using the first and second sets of images;
estimating relative pose using the similar features; and
reprojecting depth estimates obtained using the first set of information into the second reference viewpoint.

25. The camera array of claim 1, wherein the image processing pipeline application directs the processor to composite reprojected depth estimates generated using the first set of images and depth estimates generated using the second set of images based upon information concerning the reliability of the depth estimates.

26.
A camera array, comprising:
at least a pair of two-dimensional arrays of cameras located in complementary occlusion zones on either side of an illumination system, where each two-dimensional array of cameras comprises a plurality of cameras, wherein the pair of arrays of cameras in combination comprise a plurality of lower resolution cameras and at least one higher resolution camera, wherein at least one lower resolution camera is located above, below, to the left, and to the right of the higher resolution camera;
the illumination system configured to illuminate a scene with a projected texture;
a processor;
memory containing an image processing pipeline application and an illumination system controller application;
wherein the illumination system controller application directs the processor to control the illumination system to illuminate a scene with a projected texture;
wherein the image processing pipeline application directs the processor to:
utilize the illumination system controller application to control the illumination system to illuminate a scene with a projected texture, wherein a spatial pattern period of the projected texture is different along different epipolar lines;
capture a set of images of the scene illuminated with the projected texture;
determine depth estimates for pixel locations in an image from a reference viewpoint using at least a subset of the set of images that includes at least one image captured by a camera in each of the two-dimensional arrays of cameras, wherein generating a depth estimate for a given pixel location in the image from the reference viewpoint comprises:
identifying pixels in the at least a subset of the set of images that correspond to the given pixel location in the image from the reference viewpoint based upon expected disparity at a plurality of depths along a plurality of epipolar lines aligned at different angles with respect to each other, wherein each epipolar line in the plurality of epipolar lines is between a camera located at the reference viewpoint and an alternative view camera from a plurality of alternative view cameras in the two-dimensional array of cameras and each epipolar line is used to determine the direction of anticipated shifts of corresponding pixels with depth in alternative view images captured by the plurality of alternative view cameras, wherein disparity along a first epipolar line is greater than disparity along a second epipolar line and the projected pattern incorporates a smaller spatial pattern period in a direction corresponding to the second epipolar line;
comparing the similarity of the corresponding pixels identified at each of the plurality of depths; and
selecting the depth from the plurality of depths at which the identified corresponding pixels have the highest degree of similarity as a depth estimate for the given pixel location in the image from the reference viewpoint.
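Several claims (5, 10, 21) describe a projected texture whose spatial pattern period differs along perpendicular directions, with the smaller period in the direction of larger disparity so that the texture stays unambiguous along each epipolar line. One simple way to realize such an anisotropic texture is to tile a random-dot cell whose side lengths equal the two desired periods; the sketch below illustrates this under that assumption (the function name and the thresholded-random-dot construction are illustrative, not the patent's pattern).

```python
import numpy as np

def projected_texture(h, w, period_x, period_y, seed=0):
    """Binary random-dot texture with different spatial pattern periods
    along two perpendicular directions: the pattern repeats every
    `period_x` pixels horizontally and every `period_y` pixels vertically.
    """
    rng = np.random.default_rng(seed)
    # One cell of random dots at the desired periods.
    tile = (rng.random((period_y, period_x)) > 0.5).astype(np.uint8)
    reps_y = -(-h // period_y)  # ceil division
    reps_x = -(-w // period_x)
    # Tile the cell over the full pattern and crop to (h, w).
    return np.tile(tile, (reps_y, reps_x))[:h, :w]
```

For example, `projected_texture(64, 64, 4, 16)` repeats every 4 pixels horizontally and every 16 pixels vertically, giving the finer period to the horizontal direction, which would correspond to the epipolar lines with larger disparity in the claims' arrangement.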
Copyright KISTI. All Rights Reserved.