Real-time rendering of realistic rain is described. In one aspect, image samples of real rain and associated information are automatically modeled in real-time to generate synthetic rain particles in view of respective scene radiances of target video content frames. The synthetic rain particles are rendered in real-time using pre-computed radiance transfer with uniform random distribution across respective frames of the target video content.
Representative Claims
The invention claimed is:

1. A computer-implemented method comprising:
analyzing static image frames of video content depicting real rain to identify various sizes and shapes of real rain;
automatically modeling, in real-time, images of real rain and associated information to generate synthetic rain particles, wherein the images and associated information are automatically modeled for blending with respective ones of multiple frames of image content, and wherein automatically modeling further comprises:
for each frame of the multiple frames and for each sample raindrop represented by at least a subset of the images:
generating a rain mask for the sample raindrop, the rain mask identifying the portion(s) of the frame that will be shaded by the sample raindrop, the rain mask being generated from alpha values defining opacity of the sample raindrop, the alpha values having been mapped to the 3-D coordinate space of the frame;
creating a shaded rain mask from the rain mask to specify color and intensity distribution of the sample raindrop, wherein creating the shaded rain mask further comprises determining the color and intensity distribution based on pre-computed radiance transfer values and a transfer function of a sphere model of the sample raindrop, the sphere model having a refractive index of water; and
rendering, in real-time, the synthetic rain particles across respective frames of video content.

2.
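Claim 1's rain-mask and shaded-rain-mask steps can be illustrated with a minimal NumPy sketch. This is not the patented implementation (which runs on a GPU shader); the function names, the choice of a per-pixel coefficient array for the pre-computed radiance transfer, and the basis-projected environment lighting vector are all assumptions made for illustration:

```python
import numpy as np

def rain_mask(alpha: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    """Pixels the sample raindrop will shade: wherever its opacity is non-zero."""
    return alpha > threshold

def shaded_rain_mask(alpha: np.ndarray, mask: np.ndarray,
                     transfer: np.ndarray, env_radiance: np.ndarray) -> np.ndarray:
    """Assign color/intensity to masked pixels from precomputed transfer values.

    transfer:      hypothetical per-pixel PRT coefficients, shape (H, W, K)
    env_radiance:  environment lighting projected onto the same K-term basis
    """
    intensity = transfer @ env_radiance  # (H, W): dot product over the K basis terms
    shaded = np.zeros_like(alpha)
    shaded[mask] = intensity[mask]       # shade only pixels inside the rain mask
    return shaded
```

A real renderer would evaluate the sphere model's transfer function per particle; here the transfer coefficients are simply given as input.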
The method of claim 1, wherein the images and associated information are automatically modeled for blending with multiple frames of image content, and wherein automatically modeling further comprises:
for each frame of the multiple frames and for each image of at least a subset of the images and associated information:
determining a current velocity of a raindrop represented by the image based on a previous velocity of the raindrop;
computing a shape of the raindrop based on the current velocity and camera exposure time; and
calculating position and orientation attributes of the raindrop based on the 3-D coordinate space of the frame, the current velocity, and uniform random distribution criteria.

3. The method of claim 1, wherein a graphics processing unit (GPU) generates the rain mask, and wherein per-pixel shader logic of the GPU creates the shaded rain mask.

4. The method of claim 1, wherein the transfer function is based on the reversibility of light.

5. The method of claim 1, wherein rendering the synthetic rain particles further comprises blending an alpha matte with the shaded rain mask and a detected background color of the frame to generate a synthetic rain matted frame, the alpha matte having current velocity, shape, position, orientation, and color and intensity distribution attributes, the synthetic rain matted frame being a particular one of the respective frames of video content.

6. The method of claim 1, wherein the method further comprises randomly selecting the images of the real rain from a library of images of real rain and corresponding information.

7.
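The per-frame modeling loop of claim 2 — velocity from the previous velocity, streak shape from velocity and camera exposure time, and a uniformly random position — can be sketched as follows. The gravity-integration step, the parameter names, and the exact shape parameterization are assumptions; the patent only recites the dependencies between these quantities:

```python
import random

G = 9.8  # gravitational acceleration, m/s^2 (assumed physics model)

def update_drop(prev_velocity: float, dt: float, exposure: float,
                frame_bounds, diameter: float):
    """One per-frame update for a sample raindrop (hypothetical parameterization).

    - current velocity derived from the previous velocity (here: plus gravity)
    - streak length = current velocity * camera exposure time (motion blur)
    - position drawn uniformly at random within the frame's 3-D coordinate space
    frame_bounds: iterable of (lo, hi) pairs, one per axis.
    """
    velocity = prev_velocity + G * dt
    length = velocity * exposure
    position = tuple(random.uniform(lo, hi) for lo, hi in frame_bounds)
    return velocity, (length, diameter), position
```

In the claimed system this update runs on the GPU for every particle of every frame; a real implementation would also clamp velocity at the drop's terminal velocity.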
The method of claim 1, wherein the method further comprises extracting attributes associated with identified real rain from respective ones of the static image frames to generate at least the images and the associated information, wherein the attributes comprise at least a rain stroke matte including alpha and radiance values, rain stroke direction, and detected background color, the alpha values defining opacity characteristics.

8. The method of claim 7, wherein analyzing the video content and extracting the information are implemented offline.

9. The method of claim 7, wherein the method further comprises: rotating the rain stroke matte to a vertical orientation; and removing noise from the rain stroke matte.

10. A computer-readable storage medium comprising computer-program instructions executable by a processor for:
creating a set of rain particles from a set of randomly selected rain stroke samples and associated information, the rain stroke samples being randomly selected from a library of extracted rain stroke samples;
rendering, in real time, synthetic rain across multiple frames of video content, the synthetic rain being based on respective ones of the rain particles, the rendering comprising:
for each frame of the frames, and for each particle of at least a subset of the rain particles:
computing, using a graphics processing unit (GPU), velocity, 3-D position, orientation, and shape of an alpha matte associated with the particle;
determining, using the GPU, a set of pixels to shade from the alpha matte;
calculating a shaded rain mask, using per-pixel shader operations of the GPU in view of the pixels to shade, to specify the color and intensity distribution of pixels using pre-computed radiance transfer values and a transfer function of a sphere model associated with each rain particle, the sphere model having a refractive index of water; and
blending, using the GPU, the alpha matte and a background color associated with the frame with the shaded rain mask based on the color and the intensity distribution of the pixels to generate a synthetic rain matted frame for presentation to a user.

11. The computer-readable storage medium of claim 10, wherein the library of extracted rain stroke samples is generated offline, and wherein at least the rendering is performed in real-time.

12. The computer-readable storage medium of claim 10, wherein the 3-D position is uniformly randomly distributed in the space delineated by the frame.

13. The computer-readable storage medium of claim 10, wherein the shape comprises a length and a diameter, the length being interpolated based on a current velocity of the particle and an exposure time of a virtual camera, the diameter corresponding to the width of the alpha matte after being mapped to scene coordinates of the frame.

14. The computer-readable storage medium of claim 10, wherein operations associated with the pre-computed radiance transfer are based on an outgoing light intensity distribution transfer function.

15. The computer-readable storage medium of claim 10, wherein the computer-program instructions further comprise instructions for providing a volumetric appearance of synthetic rain by: identifying rough geometry of a scene associated with the frame; and automatically clipping the alpha matte at a surface of the scene prior to the rendering, the surface being defined using a mesh of a synthetic scene or a depth map.

16. The computer-readable storage medium of claim 10, wherein the computer-program instructions further comprise instructions for: analyzing static image frames of video content depicting real rain to identify various sizes and shapes of the real rain; extracting rain stroke mattes and information associated with the identified various sizes and shapes of the real rain; and wherein the rain stroke mattes and the information represent the library of extracted rain stroke samples.

17.
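The final blending step recited in claims 5 and 10 — compositing the shaded rain mask with the frame's detected background color through the alpha matte — is standard alpha compositing. A minimal sketch, assuming per-pixel arrays (the claimed system performs this blend on the GPU):

```python
import numpy as np

def alpha_blend(alpha, shaded, background):
    """Composite the shaded rain mask over the frame's background color:
    out = alpha * shaded + (1 - alpha) * background,
    yielding the synthetic rain matted frame. Works element-wise on
    scalars or NumPy arrays of matching shape."""
    return alpha * shaded + (1.0 - alpha) * background
```

Where the rain is fully transparent (alpha = 0) the background shows through unchanged; where it is fully opaque (alpha = 1) only the shaded rain color appears.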
A computer comprising: a processor; and a memory coupled to the processor, the memory comprising computer-program instructions executable by the processor for:
randomly selecting static real rain stroke samples from video content;
modeling, in real time using a graphics accelerator, the static real rain stroke samples as synthetic rain particles in view of respective scene radiances, wherein modeling the static real rain stroke samples as synthetic rain particles further comprises:
converting rain stroke mattes into the 3-D coordinate space of respective image frames of video content;
calculating velocity, position, and shape attributes of alpha mattes associated with the rain stroke mattes based on previous values, physical laws, and rain stroke distribution criteria;
identifying respective rain masks from the alpha mattes;
using pre-computed radiance transfer values and a transfer function of a sphere model associated with each rain particle, the sphere model having a refractive index of water, to identify pixel color and intensity distributions from the rain masks;
wherein rendering further comprises alpha blending the alpha mattes with information associated with the identified pixel color and intensity distributions and detected frame background color to create synthetic rain matted scenes for presentation to a user; and
rendering the synthetic rain across scenes with uniform random distribution, controllable velocity, and color determined via pre-computed radiance transfer.
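The sphere model "having a refractive index of water" that the claims use for the transfer function implies refraction at the raindrop surface governed by Snell's law. A small sketch of that optical step, under the assumption of a simple two-medium air/water interface (the patent's actual transfer function, which also exploits the reversibility of light, is not reproduced here):

```python
import math

N_WATER = 1.33  # refractive index of water, as recited in the claims

def refract_angle(incidence_deg: float, n1: float = 1.0, n2: float = N_WATER):
    """Snell's law at the raindrop surface: n1 * sin(i) = n2 * sin(t).
    Returns the refraction angle in degrees, or None on total internal
    reflection (only possible when going from denser to rarer medium)."""
    s = (n1 / n2) * math.sin(math.radians(incidence_deg))
    if abs(s) > 1.0:
        return None  # total internal reflection: no refracted ray
    return math.degrees(math.asin(s))
```

Tracing such refracted rays through the sphere model is what lets the pre-computed radiance transfer map scene radiance to the color and intensity a raindrop contributes on screen.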
Patents Cited by This Patent (15)
Miller Richard L. (2211 Saxon Houston TX 77018), 3-D weather display and weathercast system.
MacInnis, Alexander G.; Tang, Chengfuh Jeffrey; Xie, Xiaodong; Patterson, James T.; Kranawetter, Greg A., Apparatus and method for blending graphics and video surfaces.
Baron, Sr., Robert O.; Wilson, Gregory S.; Phillips, Ronald J.; Thompson, Tom S.; Davis, Brian Patrick, Real-time three-dimensional weather data processing method and system.