System for executing 3D propagation for depth image-based rendering
IPC Classification
Country/Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.): H04N-013/04; H04N-013/02; G06K-009/00; G06T-005/00; G06T-015/20
Application Number: US-0171141 (2014-02-03)
Registration Number: US-9654765 (2017-05-16)
Inventors / Address: Nguyen, Quang H; Do, Minh N; Patel, Sanjay J
Applicant / Address: THE BOARD OF TRUSTEES OF THE UNIVERSITY OF ILLINOIS
Agent / Address: Invention Mine LLC
Citation Information
Cited-by count: 1
Cited patents: 43
Abstract
A system is disclosed for executing depth image-based rendering of a 3D image by a computer having a processor and that is coupled with one or more color cameras and at least one depth camera. The color cameras and the depth camera are positionable at different arbitrary locations relative to a scene to be rendered. In some examples, the depth camera is a low resolution camera and the color cameras are high resolution. The processor is programmed to propagate depth information from the depth camera to an image plane of each color camera to produce a propagated depth image at each respective color camera, to enhance the propagated depth image at each color camera with the color and propagated depth information thereof to produce corresponding enhanced depth images, and to render a complete, viewable image from one or more enhanced depth images from the color cameras. The processor may be a graphics processing unit.
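The abstract's first step, propagating depth from the depth camera to a color camera's image plane, amounts to a 3D warp: back-project each depth pixel to a 3D point, transform it into the color camera's frame, and re-project. The sketch below assumes a standard pinhole model with hypothetical intrinsics `K_d`, `K_c` and extrinsics `R`, `t`; none of these parameter names or conventions are taken from the patent.

```python
import numpy as np

def propagate_depth(depth, K_d, K_c, R, t, color_shape):
    """Warp a depth map from the depth camera into a color camera's image
    plane (pinhole-model sketch: K_d, K_c are intrinsics; [R | t] maps
    depth-camera coordinates into the color camera's frame)."""
    h, w = depth.shape
    # Back-project every depth pixel to a 3D point in the depth camera's frame.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])
    pts = (np.linalg.inv(K_d) @ pix) * depth.ravel()
    # Transform into the color camera's frame and project onto its image plane.
    proj = K_c @ (R @ pts + t.reshape(3, 1))
    z = proj[2]
    ok = z > 0
    x = np.round(proj[0, ok] / z[ok]).astype(int)
    y = np.round(proj[1, ok] / z[ok]).astype(int)
    z = z[ok]
    inb = (x >= 0) & (x < color_shape[1]) & (y >= 0) & (y < color_shape[0])
    x, y, z = x[inb], y[inb], z[inb]
    # Z-buffer: write far points first so the nearest surface wins each pixel.
    order = np.argsort(-z)
    out = np.zeros(color_shape)          # 0 marks holes (unknown depth)
    out[y[order], x[order]] = z[order]
    return out
```

Because the color cameras are higher resolution than the depth camera, the propagated image is sparse; the holes it leaves are exactly what the enhancement stage (DCBF and directional hole filling) addresses.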
Representative Claims
1. A method for executing depth image-based rendering of a 3D image from one or more color cameras and at least one depth camera by at least one computer having a processor and memory, the method comprising: propagating, using the at least one computer, depth information from the at least one depth camera to an image plane of each color camera to produce a propagated depth image for each color camera; enhancing, using the at least one computer, the propagated depth image for each color camera with the color and propagated depth information thereof to produce an enhanced depth image for each color camera, wherein, for the propagated depth image for each color camera, the enhancing comprises (i) depth-color bilateral filtering (DCBF) and (ii) filling holes along a specific direction that is determined based on a line originating from an epipole; and creating, using the at least one computer, a complete image from the one or more enhanced depth images.

2. The method of claim 1, wherein, for the propagated depth image for each color camera, the enhancing further comprises: detection of occluded pixels in the propagated depth images; occlusion removal to replace values of the occluded pixels with newly interpolated values; and depth edge enhancement to sharpen depth edges surrounding objects in the propagated depth images, wherein the DCBF combines color and propagated depth information to calculate values for unknown depth pixels in the propagated depth image while preserving edges of an object in the scene; and wherein the filling of holes along the specific direction comprises filling areas of unknown depth pixels caused by disocclusion in the propagated depth image.

3. The method of claim 2, wherein the complete image is rendered at a virtual view position, and wherein rendering comprises: propagating depth and color information from each enhanced depth image to a second target view defined by a virtual camera to produce the rendered image through 3D warping, wherein the enhanced depth images of the color cameras are merged to the virtual camera in which the color cameras are second reference views; detecting and removing occlusions from the rendered image; and processing the rendered image with a median filter to fill and denoise the rendered image.

4. The method of claim 2, wherein the occlusion removal comprises, for a plurality of known depth pixels in the propagated depth image: selecting a known depth pixel A; partitioning a support window around pixel A into fourths; and replacing the value of pixel A by a newly interpolated value if there are at least three partitions each having at least one pixel B such that the depth difference between pixels A and B is greater than a predetermined threshold value.

5. The method of claim 2, wherein the depth edge enhancement comprises: computing depth gradients in vertical, horizontal, and at least one diagonal direction with Sobel operators; for each pixel P whose depth edge gradients are greater than a predetermined value, searching within a search window among the neighbor pixels for a best color-matched pixel that has a smallest Euclidean color distance with pixel P; and replacing the depth value of the pixel P by the depth value of the best color-matched pixel.

6. The method of claim 1, wherein DCBF is used to fill holes along the specific direction.

7. The method of claim 1, further comprising: enhancing, using the at least one computer, the complete image to produce an enhanced complete image; and rendering, using the at least one computer, the enhanced complete image viewable on a display.

8. A system for executing depth image-based rendering of a 3D image from one or more color cameras and at least one depth camera, the system comprising: one or more color cameras and the at least one depth camera connected to at least one processor and memory and instructions that, when executed, cause the at least one processor to: propagate depth information from the at least one depth camera to an image plane of each color camera to produce a propagated depth image for each color camera; enhance the propagated depth image for each color camera with the color and propagated depth information thereof to produce an enhanced depth image for each color camera, wherein, for the propagated depth image for each color camera, the enhancing comprises (i) depth-color bilateral filtering (DCBF) and (ii) filling holes along a specific direction that is determined based on a line originating from an epipole; and create a complete image from the one or more enhanced depth images.

9. The system of claim 8, wherein, for the propagated depth image for each color camera, the enhancing further comprises: detection of occluded pixels in the propagated depth images; occlusion removal to replace values of the occluded pixels with newly interpolated values; and depth edge enhancement to sharpen depth edges surrounding objects in the propagated depth images, wherein the DCBF combines color and propagated depth information to calculate values for unknown depth pixels in the propagated depth image while preserving edges of an object in the scene; and wherein the filling of holes along the specific direction comprises filling areas of unknown depth pixels caused by disocclusion in the propagated depth image.

10. The system of claim 9, wherein the instructions, when executed, further cause the at least one processor to render the complete image at a virtual view position at least in part by: propagating depth and color information from each enhanced depth image to a second target view defined by a virtual camera to produce the rendered image through 3D warping, wherein the enhanced depth images of the color cameras are merged to the virtual camera in which the color cameras are second reference views; detecting and removing occlusions from the rendered image; and processing the rendered image with a median filter to fill and denoise the rendered image.

11. The system of claim 9, wherein the occlusion removal comprises, for a plurality of known depth pixels in the propagated depth image: selecting a known depth pixel A; partitioning a support window around pixel A into fourths; and replacing the value of pixel A by a newly interpolated value if there are at least three partitions each having at least one pixel B such that the depth difference between pixels A and B is greater than a predetermined threshold value.

12. The system of claim 9, wherein the depth edge enhancement comprises: computing depth gradients in vertical, horizontal, and at least one diagonal direction with Sobel operators; for each pixel P whose depth edge gradients are greater than a predetermined value, searching within a search window among the neighbor pixels for a best color-matched pixel that has a smallest Euclidean color distance with pixel P; and replacing the depth value of the pixel P by the depth value of the best color-matched pixel.

13. The system of claim 8, wherein DCBF is used to fill holes along the specific direction.

14. The system of claim 8, further comprising instructions that, when executed, cause the at least one processor to: enhance the complete image to produce an enhanced complete image; and render the enhanced complete image viewable on a display.

15. A non-transitory computer-readable medium comprising a set of instructions for executing depth image-based rendering of a 3D image from one or more color cameras and at least one depth camera by at least one processor and memory, wherein the instructions, when executed, cause the at least one processor to: propagate depth information from the at least one depth camera to an image plane of each color camera to produce a propagated depth image for each color camera; enhance the propagated depth image for each color camera with the color and propagated depth information thereof to produce an enhanced depth image for each color camera, wherein, for the propagated depth image for each color camera, the enhancing comprises (i) depth-color bilateral filtering (DCBF) and (ii) filling holes along a specific direction that is determined based on a line originating from an epipole; and create a complete image from the one or more enhanced depth images.

16. The non-transitory computer-readable medium of claim 15, wherein DCBF is used to fill holes along the specific direction.

17. The non-transitory computer-readable medium of claim 15, further comprising instructions that, when executed, cause the at least one processor to: enhance the complete image to produce an enhanced complete image; and render the enhanced complete image viewable on a display.
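The depth-color bilateral filtering (DCBF) recited in the independent claims fills an unknown depth pixel with a weighted average of known neighboring depths, where the weights combine spatial proximity and color similarity so that object edges in the color image are preserved. The following is a minimal sketch, assuming the convention that depth value 0 marks unknown pixels; the function name, window radius, and sigma values are illustrative choices, not parameters from the patent.

```python
import numpy as np

def dcbf_fill(depth, color, radius=3, sigma_s=2.0, sigma_c=10.0):
    """Fill unknown depth pixels (value 0) by depth-color bilateral
    filtering: average known neighbor depths, weighted by a spatial
    Gaussian and a color-similarity Gaussian (edge-preserving)."""
    out = depth.astype(float).copy()
    h, w = depth.shape
    for y, x in zip(*np.nonzero(depth == 0)):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        d = depth[y0:y1, x0:x1].astype(float)
        known = d > 0                      # only known depths contribute
        if not known.any():
            continue
        gy, gx = np.mgrid[y0:y1, x0:x1]
        w_s = np.exp(-((gy - y) ** 2 + (gx - x) ** 2) / (2 * sigma_s ** 2))
        dc = color[y0:y1, x0:x1].astype(float) - color[y, x].astype(float)
        w_c = np.exp(-(dc ** 2).sum(axis=-1) / (2 * sigma_c ** 2))
        wgt = w_s * w_c * known
        if wgt.sum() > 0:
            out[y, x] = (wgt * d).sum() / wgt.sum()
    return out
```

In the claimed method this filter is applied to the propagated depth image at each color camera, and per claims 6, 13, and 16 the same filter may also fill the disocclusion holes along the epipolar direction.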
Patents cited by this patent (43)
Billyard Adam M. (London GBX), Apparatus and method for performing lighting calculations for surfaces of three-dimensional objects.
DeMenthon Daniel F. (Columbia MD), Computer vision system for position monitoring in three dimensions using non-coplanar light sources attached to a monito.
Baker Stephen J. (Surrey GBX) Cowdrey Dennis A. (West Sussex GBX) Olive Graham J. (West Sussex GBX) Wood Karl J. (West Sussex GBX), Image generator for generating perspective views from data defining a model having opaque and translucent features.
Iwamoto, Masayuki; Fujimura, Koichi, Image processing apparatus, method for processing and image and computer-readable recording medium for causing a computer to process images.
Guenter Brian ; Grimm Cindy Marie ; Malvar Henrique Sarmento, Method and system for capturing and representing 3D geometry, color and shading of facial expressions and other animated objects.
Evangelisti Carlo J. (Jefferson Valley NY) Lumelsky Leon (Stamford CT) Pavicic Mark J. (Fargo ND), Parallel rendering of smoothly shaped color triangles with anti-aliased edges for a three dimensional color display.
Cook Robert L. (San Anselmo CA) Porter Thomas K. (Fairfax CA) Carpenter Loren C. (Novato CA), Pseudo-random point sampling techniques in computer graphics.
Uyttendaele,Matthew; Winder,Simon; Zitnick, III,Charles; Szeliski,Richard; Kang,Sing Bing, Real-time rendering system and process for interactive viewpoint video.
Lengyel Jerome E. ; Snyder John, Sprite compositor and method for performing lighting and shading operations using a compositor to combine factored image layers.