Modeling and video projection for augmented virtual environments
Country / Type: United States (US) Patent, Granted
IPC (7th edition): G09G-005/02, G09G-005/14
Application No.: UP-0676377 (2003-09-30)
Registration No.: US-7583275 (2009-09-16)
Inventors: Neumann, Ulrich; You, Suya
Applicant: University of Southern California
Agent: Fish & Richardson P.C.
Citation info: cited by 135 patents; cites 35 patents
Abstract
Systems and techniques to implement augmented virtual environments. In one implementation, the technique includes: generating a three dimensional (3D) model of an environment from range sensor information representing a height field for the environment, tracking orientation information of image sensors in the environment with respect to the 3D model in real-time, projecting real-time video from the image sensors onto the 3D model based on the tracked orientation information, and visualizing the 3D model with the projected real-time video. Generating the 3D model can involve parametric fitting of geometric primitives to the range sensor information. The technique can also include: identifying in real time a region in motion with respect to a background image in real-time video, the background image being a single distribution background dynamically modeled from a time average of the real-time video, and placing a surface that corresponds to the moving region in the 3D model.
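The moving-region detection described in the abstract (a single-distribution background modeled as a time average of the video, subtracted from incoming frames) can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: the class name, the 1-D grayscale "frames", and the threshold value are all hypothetical simplifications, though the five-frame averaging window follows claim 5.

```python
from collections import deque

class BackgroundSubtractor:
    """Single-distribution background model: the background is the
    temporal pixel average of the most recent frames, and a moving
    region is wherever the current frame differs from that average
    by more than a threshold."""

    def __init__(self, window=5, threshold=30):
        self.frames = deque(maxlen=window)  # buffer of recent frames
        self.threshold = threshold

    def update(self, frame):
        """Add a frame and return a boolean foreground mask."""
        self.frames.append(frame)
        n = len(self.frames)
        # Temporal pixel average over the buffered frames.
        background = [sum(f[i] for f in self.frames) / n
                      for i in range(len(frame))]
        # Background subtraction: pixels far from the average
        # are flagged as foreground.
        return [abs(p - b) > self.threshold
                for p, b in zip(frame, background)]

# A static scene, then a bright object appears in the middle pixels.
sub = BackgroundSubtractor()
for _ in range(5):
    sub.update([10, 10, 10, 10])       # build the background
mask = sub.update([10, 200, 200, 10])  # object enters
# mask -> [False, True, True, False]
```

The claims add two refinements this sketch omits: the thresholded mask is cleaned with a histogram-based threshold and a noise filter, and candidate objects are validated by correlation matching across neighboring frames before being output.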
Representative Claims
What is claimed is:

1. A method comprising: obtaining a three dimensional model of a three dimensional environment, the three dimensional model generated from range sensor information representing a height field for the three dimensional environment; identifying in real time a region in motion with respect to a background image in real-time video imagery information from at least one image sensor having associated position and orientation information with respect to the three dimensional model, the background image comprising a single distribution background dynamically modeled from a time average of the real-time video imagery information; placing a surface that corresponds to the moving region in the three dimensional model, wherein placing the surface comprises casting a ray from an optical center, corresponding to the real-time video imagery information, to a bottom point of the moving region in an image plane in the three dimensional model, and determining a position, an orientation and a size of the surface based on the ray, a ground plane in the three dimensional model, and the moving region; projecting the real-time video imagery information onto the three dimensional model, including the surface, based on the position and orientation information; and visualizing the three dimensional model with the projected real-time video imagery; wherein identifying a region in motion in real time comprises subtracting the background image from the real-time video imagery information, identifying a foreground object in the subtracted real-time video imagery information, validating the foreground object by correlation matching between identified objects in neighboring image frames, and outputting the validated foreground objects; wherein identifying a foreground object comprises identifying the foreground object in the subtracted real-time video imagery information using a histogram-based threshold and a noise filter.

2. The method of claim 1, further comprising tracking the position and orientation information of the at least one image sensor in the environment with respect to the three dimensional model in real-time.

3. The method of claim 2, wherein obtaining a three dimensional model of a three dimensional environment comprises generating the three dimensional model of the three dimensional environment.

4. The method of claim 1, wherein the surface comprises a two dimensional surface.

5. The method of claim 1, wherein identifying a region in motion in real time further comprises estimating the background image by modeling the background image as a temporal pixel average of five recent image frames in the real-time video imagery information.

6. An augmented virtual environment system comprising: an object detection and tracking component that identifies in real time a region in motion with respect to a background image in real-time video imagery information from at least one image sensor having associated position and orientation information with respect to a three dimensional model of a three dimensional environment, the three dimensional model generated from range sensor information representing a height field for the three dimensional environment, the background image comprising a single distribution background dynamically modeled from a time average of the real-time video imagery information, and places a surface that corresponds to the moving region with respect to the three dimensional model, wherein the object detection and tracking component places the surface by performing operations comprising casting a ray from an optical center, corresponding to the real-time video imagery information, to a bottom point of the moving region in an image plane in the three dimensional model, and determining a position, an orientation and a size of the surface based on the ray, a ground plane in the three dimensional model, and the moving region; a dynamic fusion imagery projection component that projects the real-time video imagery information onto the three dimensional model, including the surface, based on the position and orientation information; and a visualization sub-system that visualizes the three dimensional model with the projected real-time video imagery; wherein the object detection and tracking component identifies the moving region by performing operations comprising subtracting the background image from the real-time video imagery information, identifying a foreground object in the subtracted real-time video imagery information, validating the foreground object by correlation matching between identified objects in neighboring image frames, and outputting the validated foreground object; wherein identifying a foreground object comprises identifying the foreground object in the subtracted real-time video imagery information using a histogram-based threshold and a noise filter.

7. The system of claim 6, further comprising a tracking sensor system that integrates visual input, global navigational satellite system receiver input, and inertial orientation sensor input to obtain the position and the orientation information associated with the at least one image sensor in real time in conjunction with the real-time video imagery.

8. The system of claim 7, further comprising a model construction component that generates the three dimensional model of the three dimensional environment.

9. The system of claim 6, wherein the surface comprises a two dimensional surface.

10. The system of claim 6, wherein identifying a region in motion in real time further comprises estimating the background image by modeling the background image as a temporal pixel average of five recent image frames in the real-time video imagery information.

11. A machine-readable storage device embodying information indicative of instructions for causing one or more machines to perform operations comprising: obtaining a three dimensional model of a three dimensional environment, the three dimensional model generated from range sensor information representing a height field for the three dimensional environment; identifying in real time a region in motion with respect to a background image in real-time video imagery information from at least one image sensor having associated position and orientation information with respect to the three dimensional model, the background image comprising a single distribution background dynamically modeled from a time average of the real-time video imagery information; placing a surface that corresponds to the moving region in the three dimensional model, wherein placing the surface comprises casting a ray from an optical center, corresponding to the real-time video imagery information, to a bottom point of the moving region in an image plane in the three dimensional model, and determining a position, an orientation and a size of the surface based on the ray, a ground plane in the three dimensional model, and the moving region; projecting the real-time video imagery information onto the three dimensional model, including the surface, based on the position and orientation information; and visualizing the three dimensional model with the projected real-time video imagery; wherein identifying a region in motion in real time comprises subtracting the background image from the real-time video imagery information, identifying a foreground object in the subtracted real-time video imagery information, validating the foreground object by correlation matching between identified objects in neighboring image frames, and outputting the validated foreground object; wherein identifying a foreground object comprises identifying the foreground object in the subtracted real-time video imagery information using a histogram-based threshold and a noise filter.

12. The machine-readable storage device of claim 11, wherein the surface comprises a two dimensional surface.

13. The machine-readable storage device of claim 11, further comprising tracking the position and orientation information of the at least one image sensor in the environment with respect to the three dimensional model in real-time.

14. The machine-readable storage device of claim 13, wherein obtaining a three dimensional model of a three dimensional environment comprises generating the three dimensional model of the three dimensional environment.

15. The machine-readable storage device of claim 11, wherein identifying a region in motion in real time further comprises estimating the background image by modeling the background image as a temporal pixel average of five recent image frames in the real-time video imagery information.
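The surface-placement step in claims 1, 6, and 11 (cast a ray from the optical center toward the bottom point of the moving region, intersect it with the ground plane, and size the surface from the region and the ray) can be sketched as below. This is a hedged illustration under assumed conventions, not the patent's method: the ground plane is taken as z = 0, the camera is a pinhole with a known focal length in pixels, and the function names and the similar-triangles height formula are hypothetical choices consistent with, but not dictated by, the claim language.

```python
import math

def intersect_ground(optical_center, direction, ground_z=0.0):
    """Intersect the ray origin + t*direction with the plane z = ground_z.
    Returns the 3-D hit point, or None if the ray is parallel to the
    plane or points away from it."""
    ox, oy, oz = optical_center
    dx, dy, dz = direction
    if abs(dz) < 1e-9:
        return None          # ray parallel to the ground plane
    t = (ground_z - oz) / dz
    if t <= 0:
        return None          # plane is behind the optical center
    return (ox + t * dx, oy + t * dy, ground_z)

def place_billboard(optical_center, bottom_ray_dir, region_h_px, focal_px):
    """Place a 2-D surface for a moving region: the ground-plane hit of
    the ray through the region's bottom point fixes the position, and the
    pinhole relation h_world = h_px * distance / focal fixes the height."""
    foot = intersect_ground(optical_center, bottom_ray_dir)
    if foot is None:
        return None
    dist = math.dist(optical_center, foot)
    return {"position": foot,
            "height": region_h_px * dist / focal_px}

# Camera 10 m up, looking forward and down at 45 degrees; a person
# 100 px tall in a camera with a 1000 px focal length.
billboard = place_billboard((0, 0, 10), (0, 1, -1), 100, 1000)
```

The real-time video is then projected onto this surface along with the rest of the model, so the moving object appears upright in the visualization instead of being smeared across the ground.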
Patents cited by this patent (35)
Madden, Paul B.; Moorby, Philip R.; Robotham, John S.; Schott, Jean-Pierre, Adaptive modeling and segmentation of visual image streams.
Hanna, Keith James; Kumar, Rakesh; Bergen, James Russell; Sawhney, Harpreet Singh; Lubin, Jeffrey, Apparatus for enhancing images using flow estimation.
Moezzi, Saied; Katkere, Arun; Jain, Ramesh, Immersive video, including video hypermosaicing to generate from multiple video views of a scene a three-dimensional v.
Rushmeier, Holly E.; Bernardini, Fausto, Method and apparatus for acquiring a set of consistent image maps to represent the color of the surface of an object.
Weiblen, Michael E.; Walters, Charles B.; Brockway, Dan E.; McDonald, Richard M., Method and apparatus for building a real time graphic scene database having increased resolution and improved rendering speed.
Hanna, Keith James; Kumar, Rakesh; Bergen, James Russell; Sawhney, Harpreet Singh; Lubin, Jeffrey, Method and apparatus for enhancing regions of aligned images using flow estimation.
Sawhney, Harpreet Singh; Kumar, Rakesh; Guo, Yanlin; Asmuth, Jane; Hanna, Keith James, Method and apparatus for multi-view three dimensional estimation.
Kumar, Rakesh; Hsu, Stephen Charles; Hanna, Keith; Samarasekera, Supun; Wildes, Richard Patrick; Hirvonen, David James; Klinedinst, Thomas Edward; Lehman, William Brian; Matei, Bodgan; Zhao, Wenyi; L, Method and apparatus for performing geo-spatial registration of imagery.
Hsu, Stephen Charles; Kumar, Rakesh; Sawhney, Harpreet Singh; Bergen, James R.; Dixon, Doug; Paragano, Vince; Gendel, Gary, Method and apparatus for performing local to global multiframe alignment to construct mosaic images.
Hanna, Keith James; Kumar, Rakesh; Bergen, James Russell; Sawhney, Harpreet Singh; Lubin, Jeffrey, Method and apparatus for processing images to compute image flow information.
Zwern, Arthur; Waupotitsch, Roman; Fejes, Sandor; Chen, Jinlong; Callari, Francesco; Mishin, Oleg; Peng, Anrong; Bandari, Esfandiar, Method and system for generating fully-textured 3D.
Kumar, Rakesh; Hanna, Keith James; Bergen, James R.; Anandan, Padmanabhan; Williams, Kevin; Tinker, Mike, Method and system for rendering and combining images to form a synthesized view of a scene containing image information from a second image.
Jain, Ramesh C.; Hicks, Terry Randolph; Bailey, Asquith A.; McKinley, Ryan B.; Kuramura, Don Yamato; Katkere, Arun L., Multi-perspective viewer for content-based interactivity.
Carlin, Bruce; Asami, Satoshi; Porras, Arthur; Porras, Sandra, Network-linked interactive three-dimensional composition and display of saleable objects in situ in viewer-selected scenes for purposes of promotion and procurement.
Lyons, Damian M., System and method for permitting three-dimensional navigation through a virtual reality environment using camera-based gesture inputs.
DeNicola, Jr., Anthony J. (Newark, DE); Giroux, Thomas A. (Bear, DE), Thermoplastic blends containing graft copolymers of polyacrylates as impact modifiers.

Patents citing this patent
Valkenburg, Robert Jan; Penman, David William; Schoonees, Johann August; Alwesh, Nawar Sami; Palmer, George Terry, 3D scene scanner and a position and orientation system.
Kasahara, Shunichi, Apparatus, method, and program for changing augmented-reality display in accordance with changed positional relationship between apparatus and object.
Zhang, Yajun-Edwin; Jin, Zhao Xia; Huang, Jerry Qi; Halford, Andrew D.; Liu, Chengyi; Mitchell, Curtis, Approach for planning, designing and observing building systems.
Aguilera Perez, Jaime; Alonso Blazquez, Fernando; Gómez Fernandez, Juan Bautista, Generating a 3D interactive immersive experience from a 2D static image.
Russell, Matthew Alan, Hybrid winder machine system configured to operate in a first state to wind a continuous web material to produce a final wound product and a second state to emulate production of the wound product.
Rousselle, Adam Robert; Leppanen, Vesa Johannes; Kinnunen, Jari Tapio; DeJong, Alan John; Clymer, Hugh Andrew; Dalmasse, Leighton Edward, Method and system for locating a stem of a target tree.
Ben-David, Amit; Peleg, Omri; Briskin, Gil; Shimonovitch, Eran, Method and system of generating a three-dimensional view of a real scene for military planning and operations.
Leppanen, Vesa Johannes; Rousselle, Adam Robert; Clymer, Hugh Andrew; Dalmasse, Leighton; Beck, Brian; Kinnunen, Jari; Shipilov, Andrey, Method for locating vegetation having a potential to impact a structure.
Khalid, Mohammad Raheel; Jaafar, Ali; Breitenfeld, Denny; Hansen, Xavier; Egeler, Christian; Kamal, Syed; Chandrasiri, Lama Hewage Ravi Prathapa; Smith, Steven L., Methods and systems for creating and providing a real-time volumetric representation of a real-world event.
Lucey, Simon Michael; Gupta, Priyanshu; Johnston, Benjamin Peter; Yu, Tzu-Chin, Methods and systems for providing interface components for respiratory therapy.
Nielsen, Curtis W.; Anderson, Matthew O.; McKay, Mark D.; Wadsworth, Derek C.; Boyce, Jodie R.; Hruska, Ryan C.; Koudelka, John A.; Whetten, Jonathon; Bruemmer, David J., Methods and systems relating to an augmented virtuality environment.
Geisner, Kevin A.; Mount, Brian J.; Latta, Stephen G.; McCulloch, Daniel J.; Lee, Kyungsuk David; Sugden, Ben J.; Margolis, Jeffrey N.; Perez, Kathryn Stone; Small, Sheridan Martin; Finocchio, Mark J.; Crocco, Jr., Robert L., Realistic occlusion for a head mounted augmented reality display.