IPC Classification Information

Country / Type: United States (US) Patent, Registered
International Patent Classification (IPC, 7th ed.): —
Application No.: UP-0830073 (filed 2004-04-23)
Registration No.: US-7583252 (granted 2009-09-16)
Inventors / Address:
- Kurtenbach, Gordon Paul
- Fitzmaurice, George William
- Balakrishnan, Ravin
Applicant / Address: —
Agent / Address: —
Citation Info: cited by 17 patents; cites 18 patents
Abstract
The present invention is a system that allows a number of 3D volumetric display or output configurations, such as dome, cubical and cylindrical volumetric displays, to interact with a number of different input configurations, such as a three-dimensional position sensing system having a volume sensing field, a planar position sensing system having a digitizing tablet, and a non-planar position sensing system having a sensing grid formed on a dome. The user interacts via the input configurations, such as by moving a digitizing stylus on the sensing grid formed on the dome enclosure surface. This interaction affects the content of the volumetric display by mapping positions and corresponding vectors of the stylus to a moving cursor within the 3D display space of the volumetric display that is offset from a tip of the stylus along the vector.
Representative Claims
What is claimed is:

1. A system, comprising: a three-dimensional (3D) volumetric display output configuration having a display content; and an input configuration coupled to the volumetric display output configuration and comprising a passive sensor allowing a user to affect the display content with one hand through the passive sensor by mapping the affect to a 3D position of a cursor within the display, while allowing the user to move about the display.
2. A system as recited in claim 1, wherein the sensor comprises a motion tracking camera.
3. A system as recited in claim 1, wherein the sensor comprises a glove and glove tracking system.
4. A system as recited in claim 1, wherein the sensor comprises a touch sensitive surface.
5. A system as recited in claim 1, wherein the sensor comprises a magnetic field tracking system.
6. A system as recited in claim 1, wherein the output configuration comprises one of a dome, a cylinder, a cubical box and an arbitrary shape.
7. A system as recited in claim 1, wherein the input configuration comprises one of a 3D volumetric input space mapped to the 3D volumetric display, a planar 2D input space mapped to the 3D volumetric display, a planar 2D input space mapped to a planar 2D space within the 3D volumetric display, and a non-planar 2D input space mapped to the 3D volumetric display.
8. A system as recited in claim 7, wherein the user produces inputs comprising one of directly with a hand, with a surface touching device and with an intermediary device.
9. A system as recited in claim 7, wherein the input configuration further comprises one of an input volume adjacent to the display, an input volume surrounding the display, a digitizing surface covering a surface of the display, a digitizing surface offset from the surface of the display, and an intermediary device used with the display.
10. A system as recited in claim 9, wherein the intermediary device comprises one of a stylus, a surface fitting mouse, a parkable mouse, a multi-dimensional mouse, a movable input device positioned on a bottom periphery of the display and a set of identical input devices positioned spaced around a bottom periphery of the display.
11. A system as recited in claim 1, wherein the input configuration comprises a non-planar 2D input space mapped to the 3D volumetric display.
12. A system as recited in claim 1, wherein the input configuration comprises a tracking system tracking a user.
13. A system as recited in claim 1, wherein the input configuration is non-spatial.
14. A system as recited in claim 1, wherein the input configuration comprises a voice recognition system allowing the user to affect the display content using voice commands.
15. A system as recited in claim 14, wherein the input configuration, output configuration and the user define a dynamically updatable spatial correspondence.
16. A system as recited in claim 1, wherein the input configuration and output configuration define a spatial correspondence between an input space and an output space.
17. A system as recited in claim 16, wherein the spatial correspondence comprises one of 3D to 3D, 2D planar to 3D, 2D planar to 2D planar and non-planar 2D to 3D.
18. A method, comprising: interacting, by a user, with a three-dimensional (3D) volumetric display via a passive detecting system; and affecting the 3D content of the display responsive to the interaction from one hand of the user by mapping the interacting to a 3D position of a cursor within the display, while allowing the user to move about the display.
19. A method as recited in claim 18, wherein the display comprises a camera and said interacting comprises tracking movements by the user with the camera.
20. A system, comprising: a three-dimensional (3D) volumetric display output configuration having a display content; and an input configuration coupled to the volumetric display output configuration and allowing a user to affect the display content with one hand by mapping the affect to a 3D position of a cursor within the display, while allowing the user to move about the display, said input configuration comprising a touch sensitive surface overlaid on said display.
21. A system, comprising: a three-dimensional (3D) volumetric display output configuration having a display content; and an input configuration coupled to the volumetric display output configuration and allowing a user to affect the display content with one hand by mapping the affect to a 3D position of a cursor within the display, while allowing the user to move about the display, said input configuration comprising a surface motion system detecting motion on a surface of said display.
22. A system, comprising: a three-dimensional (3D) volumetric display output configuration having a display content; and an input configuration coupled to the volumetric display output configuration and allowing a user to affect the display content with one hand by mapping the affect to a 3D position of a cursor within the display, while allowing the user to move about the display, said input configuration comprising an input device moving in three dimensions on a surface of said display.
23. A system, comprising: a three-dimensional (3D) volumetric display output configuration having a display content; and an input configuration coupled to the volumetric display output configuration and comprising a passive sensor allowing a user to manipulate the display content through the passive sensor with one hand by mapping the affect to a 3D position of a cursor, while allowing the user to move about the display.
24. A system as recited in claim 23, wherein the cursor is superimposed within the volumetric display.
25. A system as recited in claim 22, wherein the surface of said display is a deformable membrane surface.
26. A method, comprising: receiving an input to a three-dimensional volumetric display from a pointer operated by a user relative to an input detector outside of the display, the user located at any position in proximity to the display; and interacting with three-dimensional content inside the display responsive to movement of the pointer by mapping the movement to the content.