IPC Classification Information

Country/Type | United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.) |
Application No. | US-0335640 (1999-06-18)
Registration No. | US-7278115 (2007-10-02)
Inventors / Address |
- Conway, Matthew J.
- Jacquot, Stephen A.
- Proffitt, Dennis R.
- Robertson, George G.
Applicant / Address |
Agent / Address |
Citation Info | Cited by: 53 / Patents cited: 8
Abstract
A graphical user interface in which object thumbnails are rendered in a three-dimensional environment and which exploits spatial memory. The objects may be moved, continuously, with a two-dimensional input device. Pop-up title bars may be rendered over active objects. Intelligent help may be provided to the user, as visual indicators, based on proximity clustering or based on matching algorithms. The simulated location of the object thumbnails in a direction orthogonal to the surface is based on a function, such as a linear, polynomial, or exponential function for example, of one or more object properties, such as number of mouse clicks since selected, age, size, etc.
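The abstract's central mechanism is that each thumbnail's depth (its position along the axis orthogonal to the display surface) is computed as a function of object properties. A minimal sketch of such a mapping, assuming a combined usage parameter and an arbitrary coefficient `k`; the property choice and constants are illustrative, not values from the patent:

```python
import math

def thumbnail_depth(clicks_since_selected, age_days, mode="linear",
                    k=0.1, max_depth=10.0):
    """Map object properties to a simulated depth along the axis
    orthogonal to the display surface, so less-used or older objects
    recede further into the scene.

    The combined parameter (clicks + age) and the coefficient k are
    assumptions for illustration; the patent only requires depth to be
    some function (linear, polynomial, exponential, ...) of one or
    more object parameters such as click history, age, or size.
    """
    x = clicks_since_selected + age_days  # combined usage parameter (assumed)
    if mode == "linear":
        depth = k * x
    elif mode == "polynomial":
        depth = k * x ** 2
    elif mode == "exponential":
        depth = max_depth * (1.0 - math.exp(-k * x))
    else:
        raise ValueError(f"unknown mode: {mode}")
    return min(depth, max_depth)  # clamp so thumbnails stay inside the scene
```

A recently selected object (x near 0) sits at the foreground; its depth grows with disuse until it saturates at `max_depth`.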
Representative Claims
What is claimed is:

1. A man-machine interface method for permitting a user to act on thumbnails, each thumbnail representing an associated object containing information, for use with a machine having a video display device and a user input device, the man-machine interface method comprising: a) generating a three-dimensional environment, having a depth, to be rendered on the video display device; b) determining a two-dimensional location and a depth of each of the thumbnails in the three-dimensional environment, wherein, for each of the thumbnails, the depth is a function of at least one parameter of the object associated with the thumbnail; and c) generating the thumbnails within the three-dimensional environment, at the determined two-dimensional locations and depths, to be rendered on the video display device.

2. The man-machine interface method of claim 1 wherein, for each of the thumbnails, the depth is a linear function of at least one parameter of the object associated with the thumbnail.

3. The man-machine interface method of claim 1 wherein, for each of the thumbnails, the depth is a polynomial function of at least one parameter of the object associated with the thumbnail.

4. The man-machine interface method of claim 1 wherein, for each of the thumbnails, the depth is an exponential function of at least one parameter of the object associated with the thumbnail.

5. The method of claim 1 wherein the at least one parameter includes at least one parameter selected from a group of parameters consisting of (a) click history, (b) age, (c) time since last use, (d) size, (e) file type, (f) associated application, (g) classification, and (h) author.

6. The man-machine interface method of claim 1 further comprising: d) accepting inputs from the user input device; e) determining a two-dimensional cursor location based on the accepted inputs; and f) generating a cursor at the determined two-dimensional cursor location, to be rendered on the video display device.

7.
The man-machine interface method of claim 6 further comprising: g) if the two-dimensional location of the cursor is located on or over one of the thumbnails, defining a state of that thumbnail as active.

8. The man-machine interface method of claim 7 further comprising: h) generating a pop-up information bar located over the active thumbnail, to be rendered on the video display device.

9. A machine readable medium containing data and machine executable instructions which, when executed by a machine, performs the method of claim 8.

10. The man-machine interface method of claim 7 further comprising: h) if the user input provides a selection input and if an active or floated thumbnail exists, then generating a higher resolution visual representation of the object represented by and associated with the active or floated thumbnail, at a preferred viewing location at a foreground of the three dimensional environment, to be rendered on the video display device.

11. The man-machine interface method of claim 10 wherein the act of generating the higher resolution visual representation of the object represented by and associated with the active thumbnail includes: generating an animation which moves the higher resolution visual representation of the object represented by and associated with the active thumbnail from the location of the active thumbnail to the preferred viewing location at the foreground of the three dimensional environment, to be rendered on the video display device.

12. The man-machine interface method of claim 11 further comprising: i) if the user input provides a deselection input and if a selected thumbnail exists, then generating a video output for moving the high resolution visual representation of the object represented by and associated with the active thumbnail to the two-dimensional location of the selected thumbnail, to be rendered on the video display device.

13.
A machine readable medium containing data and machine executable instructions which, when executed by a machine, performs the method of claim 12.

14. A machine readable medium containing data and machine executable instructions which, when executed by a machine, performs the method of claim 11.

15. The man-machine interface method of claim 10 further comprising: i) if the user input provides a sink input and if a floated thumbnail exists, then setting the depth of the floated thumbnail to a previous value and defining a state of the floated thumbnail as active.

16. A machine readable medium containing data and machine executable instructions which, when executed by a machine, performs the method of claim 15.

17. The man-machine interface method of claim 10 further comprising: h) if the user input provides a selection input and if a floated thumbnail exists, then i) invoking an application related to the object represented by and associated with the floated thumbnail, ii) loading the object represented by and associated with the floated thumbnail into the application, and iii) generating a video output of the application with the loaded object represented by and associated with the floated thumbnail at a preferred viewing location, to be rendered on the video display device.

18. A machine readable medium containing data and machine executable instructions which, when executed by a machine, performs the method of claim 17.

19. A machine readable medium containing data and machine executable instructions which, when executed by a machine, performs the method of claim 10.

20. The man-machine interface method of claim 7 further comprising: h) if the user input provides a float input and if an active thumbnail exists, then setting the depth of the active thumbnail to a predetermined value and defining a state of the active thumbnail as floated.

21.
A machine readable medium containing data and machine executable instructions which, when executed by a machine, performs the method of claim 20.

22. The man-machine interface method of claim 7 further comprising: h) if the user input provides a selection input and if an active thumbnail exists, then i) invoking an application related to the object represented by and associated with the active thumbnail, ii) loading the object represented by and associated with the active thumbnail into the application, and iii) generating a video output of the application with the loaded object represented by and associated with the active thumbnail at a preferred viewing location, to be rendered on the video display device.

23. A machine readable medium containing data and machine executable instructions which, when executed by a machine, performs the method of claim 22.

24. The man-machine interface method of claim 7 further comprising: h) if the user input provides a move input and if an active or floated thumbnail exists, then i) updating the two-dimensional location of the active or floated thumbnail based on the move input.

25. The man-machine interface method of claim 24 wherein the move input is a left button mouse drag.

26. A machine readable medium containing data and machine executable instructions which, when executed by a machine, performs the method of claim 7.

27. A machine readable medium containing data and machine executable instructions which, when executed by a machine, performs the method of claim 6.

28.
The man-machine interface method of claim 1 wherein the three-dimensional environment defines a foreground and a background, and wherein the act of generating thumbnails, within the three-dimensional environment, at the determined two-dimensional locations and depths, to be rendered on the video display device, includes: i) using perspective views so that any thumbnails in the foreground defined by the three-dimensional environment appear larger than any thumbnails in the background defined by the three-dimensional surface.

29. The man-machine interface method of claim 28 wherein a thumbnail partially occludes any thumbnails behind it, based on a viewing point.

30. A machine readable medium containing data and machine executable instructions which, when executed by a machine, performs the method of claim 29.

31. A machine readable medium containing data and machine executable instructions which, when executed by a machine, performs the method of claim 28.

32. The man-machine interface method of claim 1 further comprising: d) accepting inputs from the user input device; e) determining a viewing point two-dimensional location, depth and direction based on the accepted inputs; and f) generating only that portion of the three-dimensional environment and only those thumbnails that are in front of the virtual viewing point determined in act (e), to be rendered on the video display device.

33. The method of claim 32 wherein if the depth of the viewing point is below a predetermined depth, further performing a step of: g) gradually decreasing the depth of the viewing point to float the viewing point while no user inputs are received.

34. A machine readable medium containing data and machine executable instructions which, when executed by a machine, performs the method of claim 32.

35. The man-machine interface method of claim 1 wherein the thumbnails are low resolution bit maps.

36.
The man-machine interface method of claim 35 wherein the low resolution bit maps are 64 pixels by 64 pixels and have 24 bit color.

37. The method of claim 1 further comprising, for each of the thumbnails, determining a shade to be applied to the thumbnail based on its depth.

38. The method of claim 37 wherein the shade to be applied to the thumbnail darkens as the depth increases.

39. The method of claim 37 wherein the shade to be applied to the thumbnail darkens as a distance between the depth of the thumbnail and a viewing point increases.

40. The method of claim 1 further comprising, for each of the thumbnails, determining a fade to be applied to the thumbnail based on its depth.

41. The method of claim 40 wherein the fade to be applied to the thumbnail increases as the depth increases.

42. The method of claim 40 wherein the fade to be applied to the thumbnail increases as a distance between the depth of the thumbnail and a viewing point increases.

43. The method of claim 1 further comprising, for each of the thumbnails, determining a tint to be applied to the thumbnail based on its depth.

44. The method of claim 43 wherein the tint to be applied to the thumbnail increases as the depth increases.

45. The method of claim 43 wherein the tint to be applied to the thumbnail increases as a distance between the depth of the thumbnail and a viewing point increases.

46. The method of claim 1 wherein the three dimensional environment includes a floor, the method further comprising a step of generating a shadow, for each of the thumbnails, on the floor.

47. A machine readable medium containing data and machine executable instructions which, when executed by a machine, performs the method of claim 1.

48.
A system which permits a user to interact with thumbnails, each thumbnail representing an associated object containing information, the system comprising: a) an input facility for accepting user inputs; b) a storage facility containing i) a two-dimensional location, a depth and state information for each of the thumbnails, ii) a two-dimensional cursor location, and iii) a three-dimensional environment having a simulated depth; c) a processing unit which i) accepts user inputs from the input facility, ii) updates (a) the two-dimensional location, and state information for each of the thumbnails contained in the storage facility, and (b) the two-dimensional cursor location contained in the storage facility, based on the accepted user inputs, iii) updates depth information for each of the thumbnails contained in the storage facility based on at least one parameter of the object associated with the thumbnail, and iv) generates video outputs based on A) the two-dimensional location, depth and state information for each of the thumbnails, B) the two-dimensional cursor location, and C) the three-dimensional environment, contained in the storage facility; and d) a video display unit for rendering the video outputs generated by the processing unit.

49. The system of claim 48 wherein the state information for each of the thumbnails contained in the storage facility includes an indication of whether or not the thumbnail is active, and wherein the processing unit determines that a thumbnail is active if a cursor is located on or over a thumbnail based on the two-dimensional location of the cursor and the two dimensional location of the thumbnail.

50.
The system of claim 49 wherein, if a thumbnail is active or floated and the input facility accepts a selection input, then i) the processing unit updates the state of the thumbnail, ii) the processing unit gets a second, higher resolution, visual representation of the object represented by and associated with the thumbnail, iii) the processing unit generates a video output based on the higher resolution, visual representation of the object represented by and associated with the thumbnail at a preferred viewing location, and iv) the video display device renders the video output generated by the processing unit.

51. The system of claim 50 further comprising an audio output device, wherein the storage facility further contains a first audio cue, and wherein, when an object is selected, the processing unit provides the first audio cue to the audio output device.

52. The system of claim 50 wherein each thumbnail is a 64 pixel by 64 pixel bit map having 24 bit color and wherein each higher resolution, visual representation of the objects is a 512 pixel by 512 pixel bit map having 24 bit color.

53. The system of claim 50 wherein the processing unit further effects a video output based on an animation of the higher resolution, visual representation of the object represented by and associated with the thumbnail, moving from the location of the thumbnail to a location at the foreground of the three-dimensional environment.

54. The system of claim 50 wherein, if a thumbnail is active or floated and the input facility accepts a move input, then i) the processing unit updates the state and location of the thumbnail, ii) the processing unit generates a video output based on the updated location of the thumbnail, and iii) the video display device renders the video output generated by the processing unit.

55.
The system of claim 49 wherein if the input facility provides a float input and an active thumbnail exists, then the processing unit will set the depth of the active thumbnail to a predetermined value and will define the state of the active thumbnail as floated.

56. The system of claim 55 wherein if the input facility provides a sink input and if a floated thumbnail exists, then the processing unit will set the depth of the floated thumbnail to a previous value and will define a state of the floated thumbnail as active.

57. The system of claim 49 wherein, if a thumbnail is active and the input facility accepts a selection input, then i) the processing unit updates the state of the thumbnail to selected, ii) the processing unit opens an application with which the object, associated with and represented by the selected thumbnail, is associated, iii) the processing unit loads the object into the application, iv) the processing unit generates a video output based on the object loaded onto the opened application and a preferred viewing location, and v) the video display device renders the video output generated by the processing unit.

58. The system of claim 49 wherein if a thumbnail is floated and the input facility accepts a selection input, then i) the processing unit updates the state of the thumbnail to selected, ii) the processing unit opens an application with which the object, associated with and represented by the selected thumbnail, is associated, iii) the processing unit loads the object into the application, iv) the processing unit generates a video output based on the object loaded onto the opened application and a preferred viewing location, and v) the video display device renders the video output generated by the processing unit.

59.
The system of claim 48 wherein the storage facility further contains descriptive textual information for each of the thumbnails, and wherein, if a thumbnail is active, i) the processing unit generates a pop-up bar, based on descriptive textual information, for the active thumbnail, and ii) the video display unit renders the pop-up bar over the rendered thumbnail.

60. The system of claim 48 wherein the storage facility further contains virtual viewing point location information, wherein the input facility includes a mouse, and wherein the processing unit d) accepts inputs from the user input device; e) determines a viewing point location and direction based on the accepted inputs; and f) generates only that portion of the three-dimensional environment and only those thumbnails that are in front of the virtual viewing point determined in step (e), to be rendered on the video display device.

61. A man-machine interface method for permitting a user to act on thumbnails, each thumbnail representing an associated object containing information, for use with a machine having a video display device and a user input device, the man-machine interface method comprising: a) generating a three-dimensional environment, having a depth, to be rendered on the video display device; b) determining a two-dimensional location and a depth of each of the thumbnails in the three-dimensional environment, wherein, for each of the thumbnails, the depth is a function of at least one property of the object associated with the thumbnail; and c) generating the thumbnails within the three-dimensional environment, at the determined two-dimensional locations and depths, to be rendered on the video display device.

62. A machine readable medium containing data and machine executable instructions which, when executed by a machine, performs the method of claim 61.

63.
A system which permits a user to interact with thumbnails, each thumbnail representing an associated object containing information, the system comprising: a) an input facility for accepting user inputs; b) a storage facility containing i) a two-dimensional location, a depth and state information for each of the thumbnails; ii) a two-dimensional cursor location, and iii) a three-dimensional environment having a simulated depth; c) a processing unit which i) accepts user inputs from the input facility, ii) updates (a) the two-dimensional location, and state information for each of the thumbnails contained in the storage facility, and (b) the two-dimensional cursor location contained in the storage facility, based on the accepted user inputs, iii) updates depth information for each of the thumbnails contained in the storage facility based on at least one property of the object associated with the thumbnail, and iv) generates video outputs based on A) the two-dimensional location, depth and state information for each of the thumbnails, B) the two-dimensional cursor location, and C) the three-dimensional environment, contained in the storage facility; and d) a video display unit for rendering the video outputs generated by the processing unit.
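Several dependent claims describe rendering cues tied to depth: claim 28 uses perspective so foreground thumbnails appear larger, and claims 37 through 45 darken, fade, or tint thumbnails as depth (or the distance from a viewing point) increases. A minimal sketch of such cues, with made-up constants (the focal length and the linear rates) standing in for whatever an implementation would choose; the claims only require the cues to increase with depth or distance:

```python
def perspective_scale(depth, focal=5.0):
    """Perspective cue (cf. claim 28): a thumbnail's on-screen size
    shrinks with depth under a simple pinhole model. The focal length
    is an assumed constant, not a value from the patent."""
    return focal / (focal + depth)


def shade_and_fade(depth, viewpoint_depth=0.0,
                   shade_rate=0.08, fade_rate=0.05):
    """Depth cues in the style of claims 37-45: shade darkens and fade
    increases as the distance between the thumbnail's depth and the
    viewing point grows. The linear rates are assumptions.
    Returns (brightness, opacity), each clamped to [0, 1]."""
    distance = abs(depth - viewpoint_depth)
    brightness = max(0.0, 1.0 - shade_rate * distance)  # darker when deeper
    opacity = max(0.0, 1.0 - fade_rate * distance)      # more faded when deeper
    return brightness, opacity
```

A thumbnail at the viewing point renders at full size, brightness, and opacity; one deep in the scene is drawn smaller, darker, and more faded, which reinforces the spatial-memory effect the abstract describes.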