Method and apparatus for parallel speculative rendering of synthetic images
IPC Classification
Country / Type
United States(US) Patent
Granted
International Patent Classification (IPC, 7th edition)
G06T-013/00
G06T-015/70
G06F-015/16
G09G-005/00
G06F-003/048
G06F-015/80
G06F-015/76
Application Number
UP-0339359
(2003-01-08)
Patent Number
US-7515156
(2009-07-01)
Inventors / Address
Tinker, Peter Allmond
Daily, Mike
Applicant / Address
HRL Laboratories, LLC
Agent / Address
Cary Tope McKay
Citation Information
Cited by: 9
Patents cited: 43
Abstract
A method, apparatus and computer program product for parallel speculative rendering of synthetic images in an image rendering system are presented. The operations include obtaining measurements regarding scene characteristics. The measurements are provided to predictors, each predicting a future state for a measurement. The future states are provided to a renderer that renders graphical entities, each rendering resulting from a different predicted future state. Subsequently, a new set of measurements is obtained regarding the scene characteristics. Then each measurement of the new set of measurements is compared with a corresponding one of the predicted future states produced by the predictors. The predicted future state that most closely matches with the new measurements is then selected. Then, the graphical entities associated with the predicted future state that most closely match with the new measurements are selected. The selected graphical entities displayed on a display device.
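The pipeline described in the abstract — predict several candidate future states in parallel, render one image per candidate, then compare the candidates against fresh measurements and switch to the best match — can be sketched in Python. All names, the scalar state model, and the predictor offsets below are illustrative assumptions, not the patent's implementation.

```python
import random

def measure():
    """Stand-in for a sensor reading of a scene characteristic (hypothetical)."""
    return random.uniform(0.0, 1.0)

def make_predictor(offset):
    """Each predictor guesses a future state from the current measurement.
    Real predictors would be Kalman filters, behavioral models, etc."""
    return lambda m: m + offset

def render(state):
    """Stand-in renderer: maps a predicted state to a 'graphical entity'."""
    return f"frame(state={state:.3f})"

# One cycle of the speculative pipeline.
predictors = [make_predictor(o) for o in (-0.1, 0.0, 0.1)]

m = measure()                             # obtain measurements
futures = [p(m) for p in predictors]      # each predictor predicts a future state
frames = [render(s) for s in futures]     # render one entity per predicted state

m_new = measure()                         # obtain a new set of measurements
# Comparator: find the predicted state closest to the new measurement.
best = min(range(len(futures)), key=lambda i: abs(futures[i] - m_new))
selected_frame = frames[best]             # switch selects the matching entity
```

In the patented scheme the predictions and renderings run in parallel (and possibly asynchronously) so that a finished candidate image is always available when the new measurements arrive.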
Representative Claims
What is claimed is: 1. A method for parallel speculative rendering of synthetic images in an image rendering system comprising steps of: obtaining measurements regarding scene characteristics; providing the measurements to a plurality of predictors; predicting, at each of the predictors, a future state for each of the measurements; providing the future states to a plurality of renderers; rendering, at each renderer, graphical entities such that each rendering results from a different predicted future state; obtaining a new set of measurements regarding the scene characteristics; comparing, at a comparator, each one of the new set of measurements with a corresponding one of the predicted future states produced by the predictors; determining which predicted future state most closely matches with the new measurements; selecting, at a switch, graphical entities associated with the predicted future state that most closely matches with the new measurements; and displaying the selected graphical entities on a display device; wherein providing the measurements to a plurality of predictors, the predictors are arranged in a hierarchical fashion having higher-level predictors and leaf-level predictors, and wherein predicting, the higher-level predictors predict at a faster rate than the leaf-level predictors. 2. A method as set forth in claim 1, wherein a plurality of predicting, providing, and rendering steps are performed, with each sequence of predicting, providing, and rendering steps performed substantially in parallel with respect to the others. 3. A method as set forth in claim 2, wherein the parallel sequences of predicting, providing, and rendering steps are performed asynchronously with respect to each other. 4. A method as set forth in claim 3, wherein the measurements are obtained from at least two sensors with each sensor selected from a group consisting of magnetic sensors, video sensors, position sensors, inertial sensors, databases, and computer networks. 5. 
A method as set forth in claim 4, wherein each step of predicting is performed by predictors having a configuration selected from a group consisting of Kalman filters, sudden-stop and sudden-start models, and behavioral and physical models. 6. A method as set forth in claim 5, wherein the predictors predict values at a common future time. 7. A method as set forth in claim 6, wherein at least two predictors have the same configuration, and wherein the predictors produce output based on differing assumptions. 8. A method as set forth in claim 7, wherein the predictors provide output having the same form as the measurements regarding the scene characteristics. 9. A method as set forth in claim 8, wherein in the step of rendering, for each renderer, there is an average of at least one predictor. 10. A method as set forth in claim 9, wherein the step of comparing is performed by a simple weighted summing of value differences between the predicted future state and the new set of measurements. 11. A method as set forth in claim 10, wherein in the step of predicting each predictor includes multiple prediction models along with a model that models a likelihood of each of the other models being accurate, and where the method further comprises a step of selecting the model having a greatest likelihood of being accurate, and wherein the predicting step is performed using the model having the greatest likelihood of being accurate. 12. A method as set forth in claim 11, wherein in the step of comparing, a comparator is biased based on a combination of at least one biasing parameter selected from a group consisting of user-specified preferences, system-derived preferences, and a belief network in a predictor. 13. A method as set forth in claim 12, wherein the step of selecting is performed by a switch configured to select a plurality of images for display in an environment selected from multi-image display environments and multi-user environments with multiple displays. 14. 
A method as set forth in claim 13, wherein the step of predicting, the step of comparing, the step of rendering, and the step of switching are performed competitively as steps in a self-optimizing process subject to the following constraint: Δtd+Δtp+Δtr+Δtc+Δts=Tp, where: Tp=total prediction time; Δtd=average time between data updates; Δtp=average time to perform one prediction; Δtr=average time to render one scene; Δtc=average time to compare all predicted states with the current state; and Δts=average time to switch between one candidate image and another. 15. A method as set forth in claim 10, wherein the predictors are configured to predict all possible future states and wherein the renderers are configured to render all possible future states. 16. A method as set forth in claim 10, wherein the predictors are configured with a variety of prediction models, having a variety of processing speeds and a variety of qualities, so that a variety of predicted future states is available; whereby different models may be used depending on the speed and quality of prediction necessary. 17. A method as set forth in claim 8, wherein for each predictor, there is an average of more than one renderer. 18. A method as set forth in claim 5, wherein the predictors predict values at different future times. 19. A method as set forth in claim 2, wherein the parallel sequences of predicting, providing, and rendering steps are performed synchronously with respect to each other. 20. A method as set forth in claim 1, wherein the measurements are obtained from at least two sensors with each sensor selected from a group consisting of magnetic sensors, video sensors, position sensors, inertial sensors, databases, and computer networks. 21. A method as set forth in claim 1, wherein each step of predicting is performed by predictors having a configuration selected from a group consisting of Kalman filters, sudden-stop and sudden-start models, and behavioral and physical models. 22. 
A method as set forth in claim 21, wherein the predictors predict values at a common future time. 23. A method as set forth in claim 22, wherein at least two predictors have the same configuration, and wherein the predictors produce output based on differing assumptions. 24. A method as set forth in claim 23, wherein the predictors provide output having the same form as the measurements regarding the scene characteristics. 25. A method as set forth in claim 24, wherein in the step of rendering, for each renderer, there is an average of at least one predictor. 26. A method as set forth in claim 25, wherein for each predictor, there is an average of more than one renderer. 27. A method as set forth in claim 21, wherein the predictors predict values at different future times. 28. A method as set forth in claim 1, wherein the step of comparing is performed by a simple weighted summing of value differences between the predicted future state and the new set of measurements. 29. A method as set forth in claim 28, wherein the predictors are configured with a variety of prediction models, having a variety of processing speeds and a variety of qualities, so that a variety of predicted future states is available; whereby different models may be used depending on the speed and quality of prediction necessary. 30. A method as set forth in claim 28, wherein the predictors are configured to predict all possible future states and wherein the renderers are configured to render all possible future states. 31. A method as set forth in claim 1, wherein in the step of predicting each predictor includes multiple prediction models along with a model that models a likelihood of each of the other models being accurate, and where the method further comprises a step of selecting the model having a greatest likelihood of being accurate, and wherein the predicting step is performed using the model having the greatest likelihood of being accurate. 32. 
A method as set forth in claim 1, wherein in the step of comparing, a comparator is biased based on a combination of at least one biasing parameter selected from a group consisting of user-specified preferences, system-derived preferences, and a belief network in a predictor. 33. A method as set forth in claim 1, wherein the step of selecting is performed by a switch configured to select a plurality of images for display in an environment selected from multi-image display environments and multi-user environments with multiple displays. 34. A method as set forth in claim 1, wherein the step of predicting, the step of comparing, the step of rendering, and the step of switching are performed competitively as steps in a self-optimizing process subject to the following constraint: Δtd+Δtp+Δtr+Δtc+Δts=Tp, where: Tp=total prediction time; Δtd=average time between data updates; Δtp=average time to perform one prediction; Δtr=average time to render one scene; Δtc=average time to compare all predicted states with the current state; and Δts=average time to switch between one candidate image and another. 35. 
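The timing constraint recited in the method claims (Δtd+Δtp+Δtr+Δtc+Δts = Tp) says the predictors must look ahead by exactly the total latency of one data/predict/render/compare/switch cycle. A small sketch with assumed per-stage timings (the numeric values are illustrative, not from the patent):

```python
# Hypothetical per-stage average times in seconds; names mirror the claim.
dt_data    = 0.010   # Δtd: average time between data updates
dt_predict = 0.002   # Δtp: average time to perform one prediction
dt_render  = 0.012   # Δtr: average time to render one scene
dt_compare = 0.001   # Δtc: average time to compare predictions with current state
dt_switch  = 0.0005  # Δts: average time to switch between candidate images

# Total prediction time: how far into the future the predictors must aim
# so a finished candidate image is ready exactly when it is needed.
T_p = dt_data + dt_predict + dt_render + dt_compare + dt_switch
print(f"predictors must target t + {T_p * 1000:.1f} ms")  # t + 25.5 ms
```

In the self-optimizing process of claims 14 and 34, the stages adjust competitively (e.g., trading prediction quality for speed) so this budget stays balanced.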
An apparatus for parallel speculative rendering of synthetic images in an image rendering system, the apparatus comprising at least one computer, the computer comprising an input, a processor connected with the input, a memory connected with the processor, and an output connected with the processor, the computer further comprising means for performing operations of: obtaining measurements regarding scene characteristics; providing the measurements to a plurality of predictors; predicting, at each of the predictors, a future state for each of the measurements; providing the future states to a plurality of renderers; rendering, at each renderer, graphical entities such that each rendering results from a different predicted future state; obtaining a new set of measurements regarding the scene characteristics; comparing, at a comparator, each one of the new set of measurements with a corresponding one of the predicted future states produced by the predictors; determining which predicted future state most closely matches with the new measurements; selecting, at a switch, graphical entities associated with the predicted future state that most closely matches with the new measurements; and outputting the selected graphical entities; wherein providing the measurements to a plurality of predictors, the predictors are arranged in a hierarchical fashion having higher-level predictors and leaf-level predictors, and wherein predicting, the higher-level predictors predict at a faster rate than the leaf-level predictors. 36. An apparatus as set forth in claim 35, wherein a plurality of predicting, providing, and rendering means are provided so that each sequence of predicting, providing, and rendering operations is performed substantially in parallel with respect to the others. 37. An apparatus as set forth in claim 36, wherein the parallel sequences of predicting, providing, and rendering operations are performed asynchronously with respect to each other. 38. 
An apparatus as set forth in claim 37, wherein the measurements are obtained from at least two sensors with each sensor selected from a group consisting of magnetic sensors, video sensors, position sensors, inertial sensors, databases, and computer networks. 39. An apparatus as set forth in claim 38, wherein each operation of predicting is performed by predictors having a configuration selected from a group consisting of Kalman filters, sudden-stop and sudden-start models, and behavioral and physical models. 40. An apparatus as set forth in claim 39, wherein the predictors predict values at a common future time. 41. An apparatus as set forth in claim 40, wherein at least two predictors have the same configuration, and wherein the predictors produce output based on differing assumptions. 42. An apparatus as set forth in claim 41, wherein the predictors provide output having the same form as the measurements regarding the scene characteristics. 43. An apparatus as set forth in claim 42, wherein in the operation of rendering, for each renderer, there is an average of at least one predictor. 44. An apparatus as set forth in claim 43, wherein the operation of comparing is performed by a simple weighted summing of value differences between the predicted future state and the new set of measurements. 45. An apparatus as set forth in claim 44, wherein in the operation of predicting each predictor includes multiple prediction models along with a model that models a likelihood of each of the other models being accurate, and where the apparatus further comprises an operation of selecting the model having a greatest likelihood of being accurate, and wherein the predicting operation is performed using the model having the greatest likelihood of being accurate. 46. 
An apparatus as set forth in claim 45, wherein in the operation of comparing, a comparator is biased based on a combination of at least one biasing parameter selected from a group consisting of user-specified preferences, system-derived preferences, and a belief network in a predictor. 47. An apparatus as set forth in claim 46, wherein the operation of selecting is performed by a switch configured to select a plurality of images for display in an environment selected from multi-image display environments and multi-user environments with multiple displays. 48. An apparatus as set forth in claim 47, wherein the means for predicting, the means for comparing, the means for rendering, and the means for switching are run competitively as operations in a self-optimizing process subject to the following constraint: Δtd+Δtp+Δtr+Δtc+Δts=Tp, where: Tp=total prediction time; Δtd=average time between data updates; Δtp=average time to perform one prediction; Δtr=average time to render one scene; Δtc=average time to compare all predicted states with the current state; and Δts=average time to switch between one candidate image and another. 49. An apparatus as set forth in claim 44, wherein the predictors are configured to predict all possible future states and wherein the renderers are configured to render all possible future states. 50. An apparatus as set forth in claim 44, wherein the predictors are configured with a variety of prediction models, having a variety of processing speeds and a variety of qualities, so that a variety of predicted future states is available; whereby different models may be used depending on the speed and quality of prediction necessary. 51. An apparatus as set forth in claim 42, wherein for each predictor, there is an average of more than one renderer. 52. An apparatus as set forth in claim 39, wherein the predictors predict values at different future times. 53. 
An apparatus as set forth in claim 36, wherein the parallel sequences of predicting, providing, and rendering operations are performed synchronously with respect to each other. 54. An apparatus as set forth in claim 35, wherein the measurements are obtained from at least two sensors with each sensor selected from a group consisting of magnetic sensors, video sensors, position sensors, inertial sensors, databases, and computer networks. 55. An apparatus as set forth in claim 35, wherein each operation of predicting is performed by predictors having a configuration selected from a group consisting of Kalman filters, sudden-stop and sudden-start models, and behavioral and physical models. 56. An apparatus as set forth in claim 55, wherein the predictors predict values at a common future time. 57. An apparatus as set forth in claim 56, wherein at least two predictors have the same configuration, and wherein the predictors produce output based on differing assumptions. 58. An apparatus as set forth in claim 57, wherein the predictors provide output having the same form as the measurements regarding the scene characteristics. 59. An apparatus as set forth in claim 58, wherein in the operation of rendering, for each renderer, there is an average of at least one predictor. 60. An apparatus as set forth in claim 58, wherein for each predictor, there is an average of more than one renderer. 61. An apparatus as set forth in claim 55, wherein the means for comparing operates by performing a simple weighted summing of value differences between the predicted future state and the new set of measurements. 62. An apparatus as set forth in claim 35, wherein the operation of comparing is performed by a simple weighted summing of value differences between the predicted future state and the new set of measurements. 63. 
An apparatus as set forth in claim 62, wherein the predictors are configured with a variety of prediction models, having a variety of processing speeds and a variety of qualities, so that a variety of predicted future states is available; whereby different models may be used depending on the speed and quality of prediction necessary. 64. An apparatus as set forth in claim 62, wherein the predictors are configured to predict all possible future states and wherein the renderers are configured to render all possible future states. 65. An apparatus as set forth in claim 35, wherein in the operation of predicting each predictor includes multiple prediction models along with a model that models a likelihood of each of the other models being accurate, and where the apparatus further comprises a means for selecting the model having a greatest likelihood of being accurate, and wherein the means for predicting uses the model having the greatest likelihood of being accurate. 66. An apparatus as set forth in claim 35, wherein in the operation of comparing, a comparator is biased based on a combination of at least one biasing parameter selected from a group consisting of user-specified preferences, system-derived preferences, and a belief network in a predictor. 67. An apparatus as set forth in claim 35, wherein the operation of selecting is performed by a switch configured to select a plurality of images for display in an environment selected from multi-image display environments and multi-user environments with multiple displays. 68. 
An apparatus as set forth in claim 35, wherein the operation of predicting, the operation of comparing, the operation of rendering, and the operation of switching are performed competitively as operations in a self-optimizing process subject to the following constraint: Δtd+Δtp+Δtr+Δtc+Δts=Tp, where: Tp=total prediction time; Δtd=average time between data updates; Δtp=average time to perform one prediction; Δtr=average time to render one scene; Δtc=average time to compare all predicted states with the current state; and Δts=average time to switch between one candidate image and another. 69. A computer program product for parallel speculative rendering of synthetic images in an image rendering system having stored on a computer-readable medium, means for performing operations of: obtaining measurements regarding scene characteristics; providing the measurements to a plurality of predictors; predicting, at each of the predictors, a future state for each of the measurements; providing the future states to a plurality of renderers; rendering, at each renderer, graphical entities such that each rendering results from a different predicted future state; obtaining a new set of measurements regarding the scene characteristics; comparing, at a comparator, each one of the new set of measurements with a corresponding one of the predicted future states produced by the predictors; determining which predicted future state most closely matches with the new measurements; selecting, at a switch, graphical entities associated with the predicted future state that most closely matches with the new measurements; and displaying the selected graphical entities on a display device; wherein providing the measurements to a plurality of predictors, the predictors are arranged in a hierarchical fashion having higher-level predictors and leaf-level predictors, and wherein predicting, the higher-level predictors predict at a faster rate than the leaf-level predictors. 70. 
A computer program product as set forth in claim 69, wherein a plurality of predicting, providing, and rendering operations are performed by a plurality of means for predicting, providing, and rendering, with each sequence of predicting, providing, and rendering operations performed substantially in parallel with respect to the others. 71. A computer program product as set forth in claim 70, wherein the parallel sequences of predicting, providing, and rendering operations are performed asynchronously with respect to each other. 72. A computer program product as set forth in claim 71, wherein the measurements are obtained from at least two sensors with each sensor selected from a group consisting of magnetic sensors, video sensors, position sensors, inertial sensors, databases, and computer networks. 73. A computer program product as set forth in claim 72, wherein each operation of predicting is performed by predictors having a configuration selected from a group consisting of Kalman filters, sudden-stop and sudden-start models, and behavioral and physical models. 74. A computer program product as set forth in claim 73, wherein the predictors predict values at a common future time. 75. A computer program product as set forth in claim 74, wherein at least two predictors have the same configuration, and wherein the predictors produce output based on differing assumptions. 76. A computer program product as set forth in claim 75, wherein the predictors provide output having the same form as the measurements regarding the scene characteristics. 77. A computer program product as set forth in claim 76, wherein in the operation of rendering, for each renderer, there is an average of at least one predictor. 78. A computer program product as set forth in claim 77, wherein the operation of comparing is performed by a simple weighted summing of value differences between the predicted future state and the new set of measurements. 79. 
A computer program product as set forth in claim 78, wherein in the operation of predicting each predictor includes multiple prediction models along with a model that models a likelihood of each of the other models being accurate, and where the computer program product further comprises an operation of selecting the model having a greatest likelihood of being accurate, and wherein the predicting operation is performed using the model having the greatest likelihood of being accurate. 80. A computer program product as set forth in claim 79, wherein in the operation of comparing, a comparator is biased based on a combination of at least one biasing parameter selected from a group consisting of user-specified preferences, system-derived preferences, and a belief network in a predictor. 81. A computer program product as set forth in claim 80, wherein the operation of selecting is performed by a switch configured to select a plurality of images for display in an environment selected from multi-image display environments and multi-user environments with multiple displays. 82. A computer program product as set forth in claim 81, wherein the operation of predicting, the operation of comparing, the operation of rendering, and the operation of switching are performed competitively as operations in a self-optimizing process subject to the following constraint: Δtd+Δtp+Δtr+Δtc+Δts=Tp, where: Tp=total prediction time; Δtd=average time between data updates; Δtp=average time to perform one prediction; Δtr=average time to render one scene; Δtc=average time to compare all predicted states with the current state; and Δts=average time to switch between one candidate image and another. 83. A computer program product as set forth in claim 78, wherein the predictors are configured to predict all possible future states and wherein the renderers are configured to render all possible future states. 84. 
A computer program product as set forth in claim 78, wherein the predictors are configured with a variety of prediction models, having a variety of processing speeds and a variety of qualities, so that a variety of predicted future states is available; whereby different models may be used depending on the speed and quality of prediction necessary. 85. A computer program product as set forth in claim 76, wherein for each predictor, there is an average of more than one renderer. 86. A computer program product as set forth in claim 73, wherein the predictors predict values at different future times. 87. A computer program product as set forth in claim 70, wherein the parallel sequences of predicting, providing, and rendering operations are performed synchronously with respect to each other. 88. A computer program product as set forth in claim 69, wherein the measurements are obtained from at least two sensors with each sensor selected from a group consisting of magnetic sensors, video sensors, position sensors, inertial sensors, databases, and computer networks. 89. A computer program product as set forth in claim 69, wherein each operation of predicting is performed by predictors having a configuration selected from a group consisting of Kalman filters, sudden-stop and sudden-start models, and behavioral and physical models. 90. A computer program product as set forth in claim 89, wherein the predictors predict values at a common future time. 91. A computer program product as set forth in claim 90, wherein at least two predictors have the same configuration, and wherein the predictors produce output based on differing assumptions. 92. A computer program product as set forth in claim 91, wherein the predictors provide output having the same form as the measurements regarding the scene characteristics. 93. A computer program product as set forth in claim 92, wherein in the operation of rendering, for each renderer, there is an average of at least one predictor. 94. 
A computer program product as set forth in claim 92, wherein for each predictor, there is an average of more than one renderer. 95. A computer program product as set forth in claim 89, wherein the operation of comparing is performed by a simple weighted summing of value differences between the predicted future state and the new set of measurements. 96. A computer program product as set forth in claim 69, wherein the operation of comparing is performed by a simple weighted summing of value differences between the predicted future state and the new set of measurements. 97. A computer program product as set forth in claim 96, wherein the predictors are configured with a variety of prediction models, having a variety of processing speeds and a variety of qualities, so that a variety of predicted future states is available; whereby different models may be used depending on the speed and quality of prediction necessary. 98. A computer program product as set forth in claim 96, wherein the predictors are configured to predict all possible future states and wherein the renderers are configured to render all possible future states. 99. A computer program product as set forth in claim 69, wherein in the operation of predicting each predictor includes multiple prediction models along with a model that models a likelihood of each of the other models being accurate, and where the computer program product further comprises an operation of selecting the model having a greatest likelihood of being accurate, and wherein the predicting operation is performed using the model having the greatest likelihood of being accurate. 100. A computer program product as set forth in claim 69, wherein in the operation of comparing, a comparator is biased based on a combination of at least one biasing parameter selected from a group consisting of user-specified preferences, system-derived preferences, and a belief network in a predictor. 101. 
A computer program product as set forth in claim 69, wherein the operation of selecting is performed by a switch configured to select a plurality of images for display in an environment selected from multi-image display environments and multi-user environments with multiple displays. 102. A computer program product as set forth in claim 69, wherein the operation of predicting, the operation of comparing, the operation of rendering, and the operation of switching are performed competitively as operations in a self-optimizing process subject to the following constraint: Δtd+Δtp+Δtr+Δtc+Δts=Tp, where: Tp=total prediction time; Δtd=average time between data updates; Δtp=average time to perform one prediction; Δtr=average time to render one scene; Δtc=average time to compare all predicted states with the current state; and Δts=average time to switch between one candidate image and another.
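The "simple weighted summing of value differences" comparator that several claims recite (e.g., claims 10, 28, and 44) can be sketched as follows. The state fields, weights, and candidate values are illustrative assumptions; the claims do not fix a particular state vector.

```python
def weighted_difference(predicted, measured, weights):
    """Score a predicted state against new measurements; lower is better.
    A sketch of the claimed 'simple weighted summing of value differences'."""
    return sum(w * abs(p - m) for p, m, w in zip(predicted, measured, weights))

# Two candidate predicted states compared against one new measurement vector.
weights  = [1.0, 0.5, 2.0]            # e.g., weight position over orientation
measured = [0.50, 0.20, 0.80]         # the new set of measurements
candidates = {
    "A": [0.52, 0.25, 0.79],
    "B": [0.40, 0.20, 0.90],
}
scores = {name: weighted_difference(state, measured, weights)
          for name, state in candidates.items()}
best = min(scores, key=scores.get)    # the state whose image the switch selects
```

Claims 12, 32, and the like additionally allow this comparator to be biased by user or system preferences, which would simply adjust the weights or add a bias term to the score.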
Patents cited by this patent (43)
Kramer James F. (Menlo Park CA), Accurate, rapid, reliable position sensing using multiple sensing technologies.
Poulton John W. (Chapel Hill NC) Molnar Steven E. (Chapel Hill NC) Eyles John G. (Chapel Hill NC), Architecture and apparatus for image generation utilizing enhanced memory devices.
Freedman Aaron S. (205 Barrington Rd. Syracuse NY 13214) Neri Mark L. (1731 Rutledge Rd. Longwood FL 32779), Computer network data distribution and selective retrieval system.
Fuchs Henry ; Livingston Mark Alan ; Bishop Thomas Gary ; Welch Gregory Francis, Dynamic generation of imperceptible structured light for tracking and acquisition of three dimensional scene geometry a.
Dong Yang (Cleveland Heights OH) Chizeck Howard J. (Cleveland Heights OH) Khoury James M. (Strongsville OH) Schmidt Robert N. (Cleveland OH), Extended horizon adaptive block predictive controller with an efficient prediction system.
Cohen, Charles J.; Beach, Glenn; Cavell, Brook; Foulk, Gene; Jacobus, Charles J.; Obermark, Jay; Paul, George, Gesture-controlled interfaces for self-service machines and other applications.
Frasier Richard A. (Grass Valley CA) Witek F. Andrew (Nevada City CA) Hoard Charles Q. (Grass Valley CA) Olmstead Neil R. (Nevada City CA) Lange William C. (Grass Valley CA), Graphics path prediction display.
Horton Mike A. (Berkeley CA) Newton A. Richard (Woodside CA), Method and apparatus for determining position and orientation of a moveable object using accelerometers.
Golio John M. (Chandler AZ) Turner Robert C. (Mesa AZ) Miller Monte G. (Phoenix AZ) Halchin David J. (Chandler AZ), Optimization method using parallel processors.
Emma Philip G. (Danbury CT) Knight Joshua W. (Mohegan Lake NY) Pomerene James H. (Chappaqua NY) Puzak Thomas R. (Ridgefield CT), Simultaneous prediction of multiple branches for superscalar processing.
Yang Kebing (Gaithersburg MD) Tzou Kou-Hu (Potomac MD) Lin Tahsin L. (Ellicott City MD) Rao Ashok K. (Germantown MD), Unified motion estimation architecture.
Hamilton, II, Rick Allen; O'Connell, Brian Marshall; Pickover, Clifford Alan; Walker, Keith Raymond, Method and apparatus for moving an avatar in a virtual universe.
Hamilton, II, Rick Allen; O'Connell, Brian Marshall; Pickover, Clifford Alan; Walker, Keith Raymond, Method and apparatus for predicting avatar movement in a virtual universe.
Hamilton, II, Rick Allen; O'Connell, Brian Marshall; Pickover, Clifford Alan; Walker, Keith Raymond, Method and apparatus for spawning projected avatars in a virtual universe.