The digital 3D/360° camera system is an omnidirectional stereoscopic device for capturing image data that may be used to create a 3-dimensional model for presenting a 3D image, a 3D movie, or 3D animation. The device uses multiple digital cameras, arranged with overlapping fields of view, to capture image data covering an entire 360° scene. The data collected by one, or several, digital 3D/360° camera systems can be used to create a 3D model of a 360° scene by using triangulation of the image data within the overlapping fields of view.
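The triangulation described above can be sketched in code. The following is a minimal illustration, not the patented method itself: it assumes each camera provides a reference point and a direction vector toward the same scene point (the names `triangulate`, `p1`, `d1`, etc. are hypothetical), and recovers the point as the least-squares intersection of the two pixel rays.

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Least-squares midpoint of two (possibly skew) pixel rays.

    p1, p2 : 3D reference points of two cameras with overlapping views.
    d1, d2 : direction vectors from each reference point toward the
             same scene point (need not be normalized).
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for ray parameters t1, t2 minimizing |(p1 + t1*d1) - (p2 + t2*d2)|.
    A = np.stack([d1, -d2], axis=1)           # 3x2 system
    b = p2 - p1
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    q1 = p1 + t[0] * d1                       # closest point on ray 1
    q2 = p2 + t[1] * d2                       # closest point on ray 2
    return (q1 + q2) / 2                      # midpoint estimate

# Two cameras 1 m apart, both seeing the point (0.5, 0, 2).
p1 = np.array([0.0, 0.0, 0.0])
p2 = np.array([1.0, 0.0, 0.0])
target = np.array([0.5, 0.0, 2.0])
print(triangulate(p1, target - p1, p2, target - p2))  # ≈ [0.5, 0, 2]
```

With noisy real-world rays the two rays rarely intersect exactly, which is why the sketch returns the midpoint of the closest approach rather than assuming an intersection.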
Representative claims
1. A method for generating a three-hundred-sixty degree digital representation of an area, comprising: capturing an image from each of a plurality of digital cameras having a field of view that overlaps with the field of view of at least one other digital camera among the plurality of digital cameras forming a stereoscopic field of view; generating a composite pixel vector map based on the images associated with the plurality of digital cameras and defining a coordinate system of the plurality of digital cameras in terms of at least: camera position data defining the position of each of the plurality of digital cameras, and a plurality of camera-specific pixel vector maps, each camera-specific pixel vector map associated with a camera of the plurality of digital cameras; and generating a three-hundred-sixty degree digital representation of the area surrounding the plurality of digital cameras based at least on the stored composite pixel vector map.

2. The method of claim 1, further comprising: generating the plurality of camera-specific pixel vector maps using directional vectors defining the path between a reference point of each camera and a plurality of image pixels within the field of view of each camera.

3. The method of claim 2, wherein generating the plurality of camera-specific pixel vector maps comprises: defining a first camera reference point for a first camera of the plurality of digital cameras and a second camera reference point for a second camera of the plurality of digital cameras; and determining the displacement between the first camera reference point and the second camera reference point.

4. The method of claim 3, wherein generating the plurality of camera-specific pixel vector maps further comprises: identifying a first image pixel associated with an end point within the field of view of the first camera; determining a vector between the first camera reference point and the first image pixel; identifying a second image pixel associated with the end point within the field of view of a second camera; and determining a vector between the second camera reference point and the second image pixel.

5. The method of claim 4, further comprising: determining the relative position of the end point within the coordinate system of the plurality of digital cameras based on the camera-specific pixel vector maps and camera position data.

6. The method of claim 2, wherein the defined path between the reference point of each camera and the plurality of image pixels comprises a distinct reference point of each camera for each of the plurality of image pixels within the field of view of each camera.

7. The method of claim 1, wherein: the plurality of digital cameras comprise a first plurality of digital cameras and a second plurality of digital cameras, and the second plurality of digital cameras is positioned in a location different from the first plurality of digital cameras to image the area from a different point of view.

8. The method of claim 1, wherein the generation of the three-hundred-sixty degree digital representation of the area occurs via at least one processor associated with the plurality of digital cameras.

9. A system for generating a three-hundred-sixty degree digital representation of an area, comprising: a plurality of digital cameras having a field of view that overlaps with the field of view of at least one other digital camera among the plurality of digital cameras forming a stereoscopic field of view; a controller which causes each camera of the plurality of digital cameras to capture an image; a processor executing software that generates a composite pixel vector map based on the images associated with the plurality of digital cameras and defining a coordinate system of the plurality of digital cameras in terms of at least: camera position data defining the position of each of the plurality of digital cameras, and a plurality of camera-specific pixel vector maps, each camera-specific pixel vector map associated with a camera of the plurality of digital cameras; and wherein the system generates a three-hundred-sixty degree digital representation of the area surrounding the plurality of digital cameras based at least on the stored composite pixel vector map.

10. The system of claim 9, wherein the system generates the plurality of camera-specific pixel vector maps using directional vectors defining the path between a reference point of each camera and a plurality of image pixels within the field of view of each camera.

11. The system of claim 10, wherein the system generates the plurality of camera-specific pixel vector maps by: defining a first camera reference point for a first camera of the plurality of digital cameras and a second camera reference point for a second camera of the plurality of digital cameras; and determining the displacement between the first camera reference point and the second camera reference point.

12. The system of claim 11, wherein the system generates the plurality of camera-specific pixel vector maps by: identifying a first image pixel associated with an end point within the field of view of the first camera; determining a vector between the first camera reference point and the first image pixel; identifying a second image pixel associated with the end point within the field of view of a second camera; and determining a vector between the second camera reference point and the second image pixel.

13. The system of claim 12, wherein the system determines the relative position of the end point within the coordinate system of the plurality of digital cameras based on the camera-specific pixel vector maps and camera position data.

14. The system of claim 10, wherein the defined path between the reference point of each camera and the plurality of image pixels comprises a distinct reference point of each camera for each of the plurality of image pixels within the field of view of each camera.

15. The system of claim 9, wherein: the plurality of digital cameras comprise a first plurality of digital cameras and a second plurality of digital cameras, and the second plurality of digital cameras is positioned in a location different from the first plurality of digital cameras to image the area from a different point of view.

16. A non-transitory computer-readable medium encoded with processor-executable instructions that, when executed by a processor, cause a digital camera system comprising a plurality of digital cameras to: cause each camera of the plurality of digital cameras to capture an image having a field of view that overlaps with the field of view of at least one other digital camera among the plurality of digital cameras forming a stereoscopic field of view; generate a composite pixel vector map based on the images associated with the plurality of digital cameras and defining a coordinate system of the plurality of digital cameras in terms of at least: camera position data defining the position of each of the plurality of digital cameras, and a plurality of camera-specific pixel vector maps, each camera-specific pixel vector map associated with a camera of the plurality of digital cameras; and generate a three-hundred-sixty degree digital representation of the area surrounding the plurality of digital cameras based at least on the stored pixel vector map.

17. The non-transitory computer-readable medium of claim 16, wherein the computer-executable instructions, when executed, further cause the digital camera system to: generate the plurality of camera-specific pixel vector maps using directional vectors defining the path between a reference point of each camera and a plurality of image pixels within the field of view of each camera.

18. The non-transitory computer-readable medium of claim 17, wherein generating the plurality of camera-specific pixel vector maps comprises: defining a first camera reference point for a first camera of the plurality of digital cameras and a second camera reference point for a second camera of the plurality of digital cameras; and determining the displacement between the first camera reference point and the second camera reference point.

19. The non-transitory computer-readable medium of claim 18, wherein generating the plurality of camera-specific pixel vector maps further comprises: identifying a first image pixel associated with an end point within the field of view of the first camera; determining a vector between the first camera reference point and the first image pixel; identifying a second image pixel associated with the end point within the field of view of a second camera; and determining a vector between the second camera reference point and the second image pixel.

20. The non-transitory computer-readable medium of claim 19, wherein the computer-executable instructions, when executed: determine the relative position of the end point within the coordinate system of the plurality of digital cameras based on the camera-specific pixel vector maps and camera position data.

21. The non-transitory computer-readable medium of claim 17, wherein the defined path between the reference point of each camera and the plurality of image pixels comprises a distinct reference point of each camera for each of the plurality of image pixels within the field of view of each camera.

22. The non-transitory computer-readable medium of claim 16, wherein: the plurality of digital cameras comprise a first plurality of digital cameras and a second plurality of digital cameras, and the second plurality of digital cameras is positioned in a location different from the first plurality of digital cameras to image the area from a different point of view.
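The camera-specific pixel vector maps recited in the claims can be illustrated with a short sketch. This is an assumption-laden example, not the patent's implementation: it models each camera as an idealized pinhole with focal length `f` (in pixels) and the principal point at the image center, and produces one unit direction vector per pixel relative to the camera's reference point. The function name and parameters are hypothetical.

```python
import numpy as np

def pixel_vector_map(width, height, f, R=None):
    """Directional vector for every pixel of one camera.

    Assumes a simple pinhole model: focal length f in pixels,
    principal point at the image center, and an optional rotation
    matrix R placing the camera in the rig's shared coordinate system.
    Returns an array of unit vectors with shape (height, width, 3).
    """
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    cx, cy = (width - 1) / 2, (height - 1) / 2
    # Ray through each pixel in camera coordinates (+Z = optical axis).
    dirs = np.stack([u - cx, v - cy, np.full(u.shape, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)  # normalize
    if R is not None:
        dirs = dirs @ R.T        # rotate into the rig's coordinate system
    return dirs

vmap = pixel_vector_map(640, 480, f=500.0)
print(vmap.shape)       # (480, 640, 3)
print(vmap[240, 320])   # pixel near the image center: almost straight down +Z
```

Combining such a per-camera map with the displacement between camera reference points (the camera position data of claims 3, 11, and 18) supplies exactly the two rays needed for the end-point triangulation of claims 5, 13, and 20.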
Patents cited by this patent (33)
Kulik, John J. (Longwood, FL), "360 Degree closed circuit television system."
Gaylord, William J. (Stone Mountain, GA), "Apparatus and method for segmenting a field of view into contiguous, non-overlapping, vertical and horizontal sub-fields."
Jain, Ramesh; Moezzi, Saied; Katkere, Arun, "Machine synthesis of a virtual video camera/image of a scene from multiple video cameras/images of the scene in accorda."
Glassman, Martin S.; Gorr, Russell E.; Hancock, Thomas R.; Judd, Stephen J.; Novak, Carol L.; Pearlmutter, Barak A.; Rickard, Scott T., Jr., "Omnidirectional visual image detector and processor."
Braun, David A. (Denville, NJ); Nilson, William A. E., III (Bridgewater, NJ); Nelson, Terence J. (New Providence, NJ); Smoot, Lanny S. (Morris Township, Morris County, NJ), "Television system for displaying multiple views of a remote location."