Generation of three-dimensional imagery from a two-dimensional image using a depth map
IPC Classification
Country/Type
United States(US) Patent
Granted
International Patent Classification (IPC, 7th Edition)
G06T-017/20
G02B-027/01
H04N-013/00
H04N-013/02
H04N-013/04
G06T-009/00
Application Number
US-0618981 (2015-02-10)
Registration Number
US-9721385 (2017-08-01)
Inventor / Address
Herman, Brad Kenneth
Applicant / Address
DreamWorks Animation LLC
Agent / Address
Morrison & Foerster LLP
Citation Information
Times cited by later patents: 4
Patents cited: 2
Abstract
A method for generating stereoscopic images includes obtaining image data comprising a plurality of sample points. A direction, a color value, and a depth value are associated with each sample point. The directions and depth values are relative to a common origin. A mesh is generated by displacing the sample points from the origin. The sample points are displaced in the associated directions by distances representative of the corresponding depth values. The image data is mapped to the mesh such that the color values associated with the sample points are mapped to the mesh at the corresponding directions. A first image of the mesh is generated from a first perspective, and a second image of the mesh is generated from a second perspective. The first and second images of the mesh may be caused to be displayed to provide an illusion of depth.
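The mesh-generation step described in the abstract can be sketched in code. This is an illustrative reconstruction, not code from the patent: the function names (`displace_sample`, `build_mesh`) and the azimuth/elevation parameterization of the directions are assumptions. Each sample's direction in the spherical coordinate system is converted to a unit vector and scaled by the sample's depth value, displacing the point from the common origin; the color value rides along as a per-vertex attribute, implementing the "color values mapped to the mesh at the corresponding directions" step.

```python
import math

def displace_sample(azimuth, elevation, depth):
    """Convert a direction (spherical angles relative to the common
    origin) and a depth value into a 3D mesh vertex: the point is
    pushed out from the origin along its direction by the depth."""
    x = depth * math.cos(elevation) * math.sin(azimuth)
    y = depth * math.sin(elevation)
    z = depth * math.cos(elevation) * math.cos(azimuth)
    return (x, y, z)

def build_mesh(samples):
    """samples: iterable of (azimuth, elevation, depth, color).
    Returns displaced vertex positions and the per-vertex colors
    mapped to the mesh at the corresponding directions."""
    vertices, colors = [], []
    for azimuth, elevation, depth, color in samples:
        vertices.append(displace_sample(azimuth, elevation, depth))
        colors.append(color)
    return vertices, colors
```

For example, a sample looking straight ahead (azimuth 0, elevation 0) with depth 2 lands at `(0.0, 0.0, 2.0)`: two units in front of the origin.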
Representative Claims
1. A computer-implemented method for generating stereoscopic images, the method comprising: obtaining image data comprising a plurality of sample points, wherein a direction, a color value, and a depth value are associated with each sample point, and wherein the directions and depth values are relative to a common origin of a spherical coordinate system; generating a mesh, wherein the mesh is displaced from the common origin of the spherical coordinate system in the directions associated with the sample points by distances representative of the corresponding depth values; mapping the image data to the mesh, wherein the color values associated with the sample points are mapped to the mesh at the corresponding directions; generating a first image of the mesh from a first perspective; and generating a second image of the mesh from a second perspective.

2. The computer-implemented method of claim 1, further comprising causing a display of at least a portion of the first image and at least a portion of the second image.

3. The computer-implemented method of claim 2, wherein the display of the first and second images creates an illusion of depth.

4. The computer-implemented method of claim 2, wherein the first and second images are displayed on a head-mounted display.

5. The computer-implemented method of claim 1, further comprising tessellating the image data, wherein tessellating the image data creates a plurality of vertices, and wherein the vertices are used as the sample points for generating the mesh, mapping the image data to the mesh, and generating the first and second images of the mesh.

6. The computer-implemented method of claim 5, wherein the density of vertices is greater than a density of pixels of a display to be used to display the first and second images.

7. The computer-implemented method of claim 1, further comprising: determining a portion of an image to be displayed, wherein image data is obtained only for the portion of the image.

8. The computer-implemented method of claim 7, wherein the portion of the image to be displayed is determined at least in part by the position of a head-mounted display.

9. The computer-implemented method of claim 1, wherein the image data represents an image of a scene from the perspective of a vantage point.

10. The computer-implemented method of claim 9, wherein the scene is computer-generated.

11. The computer-implemented method of claim 9, wherein the scene is a live scene.

12. The computer-implemented method of claim 9, wherein the image includes a 360 degree view horizontally around the vantage point and a 180 degree view vertically around the vantage point.

13. A non-transitory computer-readable storage medium for generating stereoscopic images, the non-transitory computer-readable storage medium comprising computer-executable instructions for: obtaining image data comprising a plurality of sample points, wherein a direction, a color value, and a depth value are associated with each sample point, and wherein the directions and depth values are relative to a common origin of a spherical coordinate system; generating a mesh, wherein the mesh is displaced from the common origin of the spherical coordinate system in the directions associated with the sample points by distances representative of the corresponding depth values; mapping the image data to the mesh, wherein the color values associated with the sample points are mapped to the mesh at the corresponding directions; generating a first image of the mesh from a first perspective; and generating a second image of the mesh from a second perspective.

14. The non-transitory computer-readable storage medium of claim 13, further comprising computer-executable instructions for causing a display of the first and second images.

15. The non-transitory computer-readable storage medium of claim 14, wherein the display of the first and second images creates an illusion of depth.

16. The non-transitory computer-readable storage medium of claim 14, wherein the first and second images are displayed on a head-mounted display.

17. The non-transitory computer-readable storage medium of claim 13, further comprising computer-executable instructions for tessellating the image data, wherein tessellating the image data creates a plurality of vertices, and wherein the vertices are used as the sample points for generating the mesh, mapping the image data to the mesh, and generating the first and second images of the mesh.

18. The non-transitory computer-readable storage medium of claim 17, wherein the density of vertices is greater than a density of pixels of a display to be used to display the first and second images.

19. The non-transitory computer-readable storage medium of claim 13, further comprising computer-executable instructions for determining a portion of an image to be displayed, wherein image data is obtained only for the portion of the image.

20. The non-transitory computer-readable storage medium of claim 19, wherein the portion of the image to be displayed is determined at least in part by the position of a head-mounted display.

21. The non-transitory computer-readable storage medium of claim 13, wherein the image data represents an image of a scene from the perspective of a vantage point.

22. The non-transitory computer-readable storage medium of claim 21, wherein the image data represents a computer-generated image of the scene.

23. The non-transitory computer-readable storage medium of claim 21, wherein the scene is a live scene.

24. The non-transitory computer-readable storage medium of claim 21, wherein the image includes a 360 degree view horizontally around the vantage point and a 180 degree view vertically around the vantage point.

25. A system for generating stereoscopic images, the system comprising: a display; and one or more processors coupled to the display and configured to: obtain image data comprising a plurality of sample points, wherein a direction, a color value, and a depth value are associated with each sample point, and wherein the directions and depth values are relative to a common origin of a spherical coordinate system; generate a mesh, wherein the mesh is displaced from the common origin of the spherical coordinate system in the directions associated with the sample points by distances representative of the corresponding depth values; map the image data to the mesh, wherein the color values associated with the sample points are mapped to the mesh at the corresponding directions; generate a first image of the mesh from a first perspective; generate a second image of the mesh from a second perspective; and cause a display of the first and second images using the display.

26. The system of claim 25, wherein the display of the first and second images creates an illusion of depth.

27. The system of claim 25, wherein the one or more processors is further configured to tessellate the image data, wherein tessellating the image data creates a plurality of vertices, and wherein the vertices are used as the sample points for generating the mesh, mapping the image data to the mesh, and generating the first and second images of the mesh.

28. The system of claim 27, wherein the display includes a plurality of pixels, and wherein the density of vertices is greater than a density of pixels of the display.

29. The system of claim 25, wherein the one or more processors is further configured to determine a portion of an image to be displayed, wherein image data is obtained only for the portion of the image.

30. The system of claim 29, wherein the display is a head-mounted display, and wherein the portion of the image to be displayed is determined at least in part by the position of the head-mounted display.

31. The system of claim 25, wherein the image data represents an image of a scene from the perspective of a vantage point.

32. The system of claim 31, wherein the image data represents a computer-generated image of the scene.

33. The system of claim 31, wherein the scene is a live scene.

34. The system of claim 31, wherein the image includes a 360 degree view horizontally around the vantage point and a 180 degree view vertically around the vantage point.
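The independent claims obtain the first and second images by rendering the same displaced mesh from two perspectives. One common way to choose those perspectives, sketched here as an illustration only (the horizontal eye offset, the `ipd` default, and the function name are assumptions, not specified by the claims), is to place two virtual cameras on either side of the common origin, separated by an interpupillary distance:

```python
def stereo_eye_origins(origin, ipd=0.064):
    """Place left and right virtual cameras half the interpupillary
    distance (ipd, in meters) to either side of the given origin
    along the x axis. Rendering the displaced mesh from each camera
    yields the first and second images of the stereo pair."""
    ox, oy, oz = origin
    half = ipd / 2.0
    left = (ox - half, oy, oz)
    right = (ox + half, oy, oz)
    return left, right
```

Because the mesh already encodes depth geometrically, the two renders differ by ordinary parallax, which is what produces the illusion of depth when the images are shown on a head-mounted display.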
Patents Cited by This Patent (2)
Deering Michael (Los Altos CA), Method and apparatus for head tracked display of precomputed stereo images.
Bhat, Kiran; Garg, Akash; Flynn, Michael Daniel; Welch, Will, Systems and methods for generating computer ready animation models of a human head from captured data images.