System and method for providing three-dimensional graphical user interface
IPC classification
Country / Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th edition): G06F-003/048; G06F-007/00; G06T-015/00
Application number: UP-0531676 (2006-09-13)
Registration number: US-7735018 (2010-06-29)
Inventor / Address: Bakhash, E. Eddie
Applicant / Address: SpaceTime3D, Inc.
Agent / Address: O'Melveny & Myers LLP
Citation information
Times cited: 108
Patents cited: 7
Abstract
Methods and systems are provided for providing an improved three-dimensional graphical user interface. In one embodiment, the method generally comprises: receiving an input from an end user, and capturing computing output from at least one computer source in response to the received end-user input. The computing output can be presented as two or more objects within a three-dimensional virtual space displayed to the end user. In one embodiment, the method further comprises generating a timeline that includes an icon for each object presented within the virtual space. In another embodiment, the method further comprises providing a database for storing and categorizing data regarding each object presented within the virtual space.
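The abstract's capture-and-present loop can be sketched in miniature: an end-user input triggers capture of computing output from one or more sources, each output is presented as an object in the 3D virtual space, and a timeline gains one icon per presented object. All names below (`SpaceObject`, `VirtualSpace3D`, `handle_input`) are illustrative assumptions, not identifiers from the patent:

```python
from dataclasses import dataclass, field


@dataclass
class SpaceObject:
    """One captured computing output, shown as an object in the 3D space."""
    source: str
    image: bytes  # captured pixels, textured onto the object


@dataclass
class VirtualSpace3D:
    objects: list = field(default_factory=list)
    timeline: list = field(default_factory=list)  # one icon label per object

    def present(self, obj: SpaceObject) -> None:
        self.objects.append(obj)
        self.timeline.append(f"icon:{obj.source}")  # timeline icon per object


def handle_input(space: VirtualSpace3D, query: str, sources: list) -> None:
    # In response to the end-user input (here, a query), capture computing
    # output from each source and present it within the 3D space.
    for src in sources:
        space.present(SpaceObject(source=src, image=b""))  # placeholder capture
```

A database categorizing each presented object, as in the second embodiment, would be a straightforward extension of the same per-object bookkeeping.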
Representative claims
What is claimed is: 1. A method for providing a three-dimensional (3D) graphical user interface, comprising: receiving a search query input from an end user; capturing a computing output from at least one computer source in response to the search query, wherein the computing output is a search result that identifies a plurality of websites related to the search query; displaying at least a portion of the computing output on at least two windows within a 3D space, comprising: rendering a first one of the plurality of websites; capturing a first image of the rendered first one of the plurality of websites; and texturing the first image on a first one of the at least two windows, the first one of the at least two windows being displayed in a foreground of the 3D space and a second one of the at least two windows being displayed in a background of the 3D space; displaying at least one navigation icon, wherein the at least one navigation icon can be interacted with to at least move the second one of the at least two windows to the foreground of the 3D space; and displaying additional information on the first one of the at least two windows, comprising: receiving an interaction by the end user at a particular location on the first image; mapping the location of the interaction to a corresponding location on the rendered first one of the plurality of websites, the location corresponding to one link of a plurality of links embedded in the first one of the plurality of websites, the link corresponding to the additional information; rendering the additional information; capturing a third image of the rendered additional information; texturing the third image on the first one of the at least two windows, the third image thereby replacing the first image on the first one of the at least two windows. 2. 
The method of claim 1, wherein displaying at least a portion of the computing output on at least two objects within the 3D space further comprises displaying the at least two objects within a simulated 3D Cartesian space. 3. The method of claim 1, wherein receiving an input further comprises receiving a search query entered by the end user. 4. The method of claim 3, wherein capturing computing output from at least one computer source comprises receiving a search result list from a network server, wherein said search result list is generated in response to said search query and includes a plurality of search results. 5. The method of claim 4, wherein displaying at least a portion of the computed output on at least two objects within the 3D space further comprises displaying information from a first one of the plurality of search results on a first one of the at least two objects and displaying information from a second one of the plurality of search results on a second one of the at least two objects. 6. The method of claim 4, wherein displaying at least a portion of the computed output on at least two objects within the 3D space further comprises displaying information from the search result list on a first one of the at least two objects and displaying information from a first one of the plurality of search results on a second one of the at least two objects. 7. The method of claim 4, further comprising receiving a second input from the end user that identifies the network server. 8. The method of claim 1, wherein displaying at least a portion of the computing output on at least two objects further comprises: rendering a second portion of the computing output; capturing a second image of the rendered second portion of the computing output; and texturing the second image on the second one of the at least two objects. 9. 
The method of claim 8, wherein capturing a first image of the rendered first portion of the computing output and texturing the first image on the first one of the at least two objects further comprises capturing a bitmap of the rendered first portion of the computing output and texturing the bitmap on the first one of the at least two objects. 10. The method of claim 8, wherein rendering a first portion of the computing output further comprises rendering the first portion of the computing output using a native application, wherein the first portion of the computing output is selected from a list consisting of a web page, an image file, a video file, a PDF file, a flash file, a document file, a PowerPoint file, a 3D file, and a music file. 11. The method of claim 1, wherein displaying at least one navigation icon further comprises displaying at least a forward arrow icon and a backward arrow icon, wherein the forward arrow icon is interacted with to move the second one of the at least two objects from the background of the 3D space to the foreground of the 3D space and the backward arrow icon is interacted with to move the second one of the at least two objects from the foreground of the 3D space to the background of the 3D space. 12. The method of claim 1, further comprising displaying a timeline that includes a first icon corresponding to the first one of the at least two objects in the 3D space. 13. 
The method of claim 12, further comprising: receiving a second input from the end user; capturing a second computing output from one of the at least one computing source and at least one other computing source in response to the second input; displaying at least a portion of the second computing output on at least one object within the 3D space, wherein a first one of the at least one object is displayed in the foreground of the 3D space; and including a second icon in the timeline, wherein the second icon corresponds to the first one of the at least one object in the 3D space. 14. The method of claim 13, further comprising adjusting the viewpoint of the 3D space toward the at least two objects in response to the end user interacting with the first icon in the timeline and adjusting the viewpoint of the 3D space toward the at least one object in response to the end user interacting with the second icon in the timeline. 15. The method of claim 14, wherein adjusting the viewpoint of the 3D space toward the at least two objects in response to the end user interacting with the first icon further comprises substantially centering one of the at least two objects horizontally on a display in response to the end user interacting with the first icon. 16. The method of claim 1, further comprising displaying a database that includes at least information corresponding to the at least two objects within the 3D space. 17. The method of claim 16, further comprising adjusting the viewpoint of the 3D space toward the at least two objects in response to the end user interacting with the information in the database. 18. The method of claim 17, wherein adjusting the viewpoint of the 3D space toward the at least two objects in response to the end user interacting with the information in the database further comprises substantially centering one of the at least two objects horizontally on a display in response to the end user interacting with the information in the database. 19. 
The method of claim 1, further comprising launching a native application in response to the end user interacting with a first one of the at least two objects, wherein the native application is selected based on a type of information displayed on the first one of the at least two objects. 20. The method of claim 1, wherein displaying at least a portion of the computing output on at least two objects within a 3D space further comprises displaying a portion of the computing output on the first one of the at least two objects, wherein the portion of the computing output is selected from a list consisting of a web page, an image file, a video file, a PDF file, a flash file, a document file, a PowerPoint file, a 3D file, and a music file. 21. The method of claim 1, further comprising displaying a web page corresponding to a portion of the computing output displayed on the first one of the at least two objects in a two-dimensional (2D) window in response to the end user interacting with the first one of the at least two objects. 22. The method of claim 1, further comprising displaying a new 3D space in response to the end user interacting with the first one of the at least two objects, wherein the new 3D space includes at least one additional object. 23. The method of claim 1, wherein the at least one computer source is at least one local memory device. 24. The method of claim 1, wherein the at least one computer source is at least one network server. 25. 
The method of claim 1 wherein displaying at least one navigation icon further comprises displaying at least a forward arrow icon and a backward arrow icon, wherein the forward arrow icon is interacted with to move the second one of the at least two objects from the background of the 3D space to the foreground of the 3D space and the first one of the at least two objects from the foreground of the 3D space to (i) the background of the 3D space and (ii) the left of the second one of the at least two objects, and the backward arrow icon is interacted with to move the first one of the at least two objects from the background of the 3D space to the foreground of the 3D space and the second one of the at least two objects from the foreground of the 3D space to (i) the background of the 3D space and (ii) the right of the first one of the at least two objects. 26. The method of claim 25, wherein the forward arrow icon is interacted with again to move the first one of the at least two objects further to the left, and the backward arrow icon is interacted with again to move the second one of the at least two objects further to the right. 27. 
A method for providing a three-dimensional (3D) graphical user interface, comprising: receiving a search query from an end user; receiving a search result from at least one computer source in response to the search query, wherein the search result identifies a plurality of websites related to the search query; displaying the search result on a plurality of windows within a 3D space, comprising: rendering a first one of the plurality of websites using a native application; capturing an image of at least a portion of the rendered first one of the plurality of websites; and texturing the image on a first one of the plurality of windows within the 3D space, the first one of the plurality of windows being displayed in a foreground of the 3D space, a second one of the plurality of windows including information from a second one of the plurality of websites and being displayed in a background of the 3D space, and a third one of the plurality of windows including information from a third one of the plurality of websites and being displayed in the background of the 3D space; displaying at least a forward arrow and a backward arrow, wherein the second one of the plurality of windows is moved from the background of the 3D space to the foreground of the 3D space if the end user interacts with the forward arrow and the second one of the plurality of windows is moved from the foreground of the 3D space to the background of the 3D space if the end user interacts with the backward arrow; and displaying additional information on the first one of the plurality of windows, comprising: receiving an interaction by the end user at a particular location on the image; mapping the location of the interaction to a corresponding location on the rendered first one of the plurality of websites, the location corresponding to one link of a plurality of links embedded in the first one of the plurality of websites, the link corresponding to the additional information; rendering the additional 
information; capturing a second image of at least a portion of the rendered additional information; and texturing the second image on the first one of the plurality of windows, the second image thereby replacing the image on the first one of the plurality of windows. 28. The method of claim 27, wherein the plurality of websites are arranged sequentially in the search result, the first one of the plurality of websites is literally the first sequential website identified in the search result, the second one of the plurality of websites is literally the second sequential website identified in the search result, and the third one of the plurality of websites is literally the third sequential website identified in the search result. 29. The method of claim 27, wherein the first one of the plurality of windows at least partially overlaps at least a portion of at least one of the second and third ones of the plurality of windows. 30. The method of claim 27, wherein displaying the search result on a plurality of windows within a 3D space further comprises displaying the first one of the plurality of windows in the foreground of the 3D space, displaying the third one of the plurality of windows in the background of the 3D space, and displaying the second one of the plurality of windows in the background of the 3D space, wherein the second one of the plurality of windows is in front of the third one of the plurality of windows and behind the first one of the plurality of windows. 31. The method of claim 27, wherein displaying the search result on a plurality of windows within a 3D space further comprises at least capturing a bitmap of at least a portion of a first one of the plurality of websites and texturing the bitmap on the first one of the plurality of windows within the 3D space. 32. The method of claim 27, further comprising displaying a timeline that includes a first icon corresponding to the first one of the plurality of windows in the 3D space. 33. 
The method of claim 27, further comprising displaying a database that includes information corresponding to the first one of the plurality of windows in the 3D space. 34. The method of claim 27, further comprising: receiving a second query from an end user; receiving a second search result from one of the at least one computer source and at least one other computer source in response to the search query, wherein the second search result includes a second search result list, the second search result list includes information on a second plurality of websites related to the second search query; displaying the second search result as a second plurality of windows within the 3D space, wherein a first one of the second plurality of windows includes information on the first one of the second plurality of websites and is displayed in the foreground of the 3D space, a second one of the second plurality of windows includes information from the second one of the second plurality of websites and is displayed in the background of the 3D space, and a third one of the second plurality of windows includes information from the third one of the second plurality of websites and is displayed in the background of the 3D space. 35. The method of claim 34, further comprising: displaying a timeline that includes a first icon corresponding to the plurality of windows and a second icon corresponding to the second plurality of windows; adjusting the viewpoint of the 3D space so that the end user can see at least a portion of the plurality of windows in response to the end user interacting with the first icon; and adjusting the viewpoint of the 3D space so that the end user can see at least a portion of the second plurality of windows in response to the end user interacting with the second icon. 36. 
The method of claim 35, wherein adjusting the viewpoint of the 3D space so that the end user can see at least a portion of the plurality of windows in response to the end user interacting with the first icon further comprises substantially centering one of the plurality of windows on a display in response to the end user interacting with the first icon. 37. The method of claim 34, further comprising: displaying a database that includes a first set of information corresponding to the plurality of windows and a second set of information corresponding to the second plurality of windows; adjusting the viewpoint of the 3D space so that the end user can see at least a portion of the plurality of windows in response to the end user interacting with the first set of information; and adjusting the viewpoint of the 3D space so that the end user can see at least a portion of the second plurality of windows in response to the end user interacting with the second set of information. 38. The method of claim 37, wherein adjusting the viewpoint of the 3D space so that the end user can see at least a portion of the plurality of windows in response to the end user interacting with the first set of information further comprises substantially centering one of the plurality of windows on a display in response to the end user interacting with the first set of information. 39. 
The method of claim 27, further comprising: receiving a uniform resource locator (URL) from the end user; receiving web page information from one of the at least one computer source and at least one other computer source in response to the URL; displaying a second window in the foreground of the 3D space, wherein the second window includes at least a portion of the web page information; displaying a database that includes a first set of information corresponding to the plurality of windows and a second set of information corresponding to the second window; adjusting the viewpoint of the 3D space so that the end user can see at least a portion of the plurality of windows in response to the end user interacting with the first set of information; and adjusting the viewpoint of the 3D space so that the end user can see at least a portion of the second window in response to the end user interacting with the second set of information. 40. The method of claim 39, further comprising: displaying a timeline that includes a first icon corresponding to the plurality of windows and a second icon corresponding to the second window; adjusting the viewpoint of the 3D space so that the end user can see at least a portion of the plurality of windows in response to the end user interacting with the first icon; and adjusting the viewpoint of the 3D space so that the end user can see the second window in response to the end user interacting with the second icon. 41. The method of claim 27, further comprising receiving data identifying the at least one computer source from the end user. 42. The method of claim 27, further comprising presenting a web page corresponding to a first one of the plurality of websites in response to the end user interacting with at least a portion of the first one of the plurality of windows. 43. The method of claim 42, wherein presenting a web page further comprises displaying the web page on a two dimensional (2D) window. 44. 
The method of claim 27, wherein the at least one computer source is one of a local computer source and a network server. 45. The method of claim 27, wherein the information from a first one of the plurality of websites is selected from a list consisting of a web page, an image file, a video file, a PDF file, a flash file, a document file, a PowerPoint file, a 3D file, and a music file. 46. The method of claim 27, wherein the second one of the plurality of windows is moved from the background of the 3D space to the foreground of the 3D space and the first one of the plurality of windows is moved from the foreground of the 3D space to (i) the background of the 3D space and (ii) the left of the second one of the plurality of windows if the end user interacts with the forward arrow, and the first one of the plurality of windows is moved from the background of the 3D space to the foreground of the 3D space and the second one of the plurality of windows is moved from the foreground of the 3D space to (i) the background of the 3D space and (ii) the right of the first one of the plurality of windows if the end user interacts with the backward arrow. 47. The method of claim 46, wherein the first one of the plurality of windows is moved further to the left if the end user interacts again with the forward arrow, and the second one of the plurality of windows is moved further to the right if the end user interacts again with the backward arrow. 48. 
A system for providing a three-dimensional (3D) graphical user interface, comprising: a display screen; an input device for receiving a search query from an end user; a processor module operatively coupled to the display screen and the user input device; and a memory module operatively coupled to the processor module, the memory module comprising executable code for the processor module to: receive via a communication path a search result from at least one computer source in response to the search query, wherein the search result includes at least a plurality of links to a plurality of files related to the search query; display the search result on a plurality of windows within a 3D space, comprising: rendering at least a portion of a first one of the plurality of files using an application; capturing an image of the rendered portion of the first one of the plurality of files, and texturing the image on a first one of the plurality of windows within the 3D space, the first one of the plurality of windows being displayed in the foreground of the 3D space, and a second one of the plurality of windows including information from a second one of the plurality of files and being displayed in the background of the 3D space; and display a navigator that can be interacted with to at least move the second one of the plurality of windows and a third one of the plurality of windows to the foreground of the 3D space; and displaying additional information on the first one of the plurality of windows, comprising: receiving an interaction by the end user at a particular location on the image; mapping the location of the interaction to a corresponding location on the rendered portion of the first one of the plurality of files, the location corresponding to one control of a plurality of controls embedded in the first one of the plurality of files, the control corresponding to the additional information; rendering the additional information using the application; capturing a second 
image of at least a portion of the rendered additional information; and texturing the second image on the first one of the plurality of windows, the second image thereby replacing the image on the first one of the plurality of windows. 49. The system as recited in claim 48, wherein the processor module displays the search result on a plurality of windows within a 3D space by displaying the plurality of windows within a simulated 3D Cartesian space. 50. The system as recited in claim 48, wherein the processor module displays the search results on the plurality of windows by at least (i) rendering at least a portion of the first one of the plurality of files using a native application, (ii) capturing a bitmap of the at least a portion of the first one of the plurality of files, and (iii) texturing the bitmap on the first one of the plurality of windows within the 3D space. 51. The system as recited in claim 48, wherein the processor module further displays a timeline that includes a first icon corresponding to the first one of the plurality of windows in the 3D space. 52. The system as recited in claim 48, wherein the processor module further displays a database that includes information corresponding to the first one of the plurality of windows in the 3D space. 53. 
The system as recited in claim 48, wherein the processor module further: receives a second search result from one of the at least one computer source and at least one other computer source in response to a second search query, wherein the second search result includes a second plurality of links to a second plurality of files related to the second search query; and displays the second search result on a second plurality of windows within the 3D space, wherein a first one of the second plurality of windows includes information from the first one of the second plurality of files and is displayed in the foreground of the 3D space, and a second one of the second plurality of windows includes information from the second one of the second plurality of files and is displayed in the background of the 3D space. 54. The system as recited in claim 53, wherein the processor module further: displays a timeline that includes a first icon corresponding to the plurality of windows and a second icon corresponding to the second plurality of windows; adjusts the viewpoint of the 3D space so that the end user can see at least a portion of the plurality of windows on the display screen in response to the end user interacting with the first icon; and adjusts the viewpoint of the 3D space so that the end user can see at least a portion of the second plurality of windows on the display screen in response to the end user interacting with the second icon. 55. The system as recited in claim 54, wherein the processor module adjusts the viewpoint of the 3D space so that the end user can see at least a portion of the plurality of windows on the display screen by at least one of substantially centering at least one of the plurality of windows horizontally on the display screen and substantially centering at least one of the plurality of windows vertically on the display screen. 56. 
The system as recited in claim 53, wherein the processor module further: displays a database that includes a first set of information corresponding to the plurality of windows and a second set of information corresponding to the second plurality of windows; adjusts the viewpoint of the 3D space so that the end user can see at least a portion of the plurality of windows on the display screen in response to the end user interacting with the first set of information; and adjusts the viewpoint of the 3D space so that the end user can see at least a portion of the second plurality of windows on the display screen in response to the end user interacting with the second set of information. 57. The system as recited in claim 56, wherein the processor module adjusts the viewpoint of the 3D space so that the end user can see at least a portion of the plurality of windows on the display screen by at least one of substantially centering at least one of the plurality of windows horizontally on the display screen and substantially centering at least one of the plurality of windows vertically on the display screen. 58. The system as recited in claim 48, wherein the input device further receives data from the end user, the data being used by the processor module to select the at least one computer source. 59. The system as recited in claim 48, wherein the processor module further displays at least a portion of the first one of the plurality of files on a two dimensional (2D) window on the display screen in response to the end user interacting with at least a portion of the first one of the plurality of windows. 60. The system as recited in claim 48, wherein the communication path is selected from a list consisting of a local bus connected to a local memory device and a network connected to a network server. 61. 
The system as recited in claim 48, wherein the first one of the plurality of files is selected from a list consisting of a web page, an image file, a video file, a PDF file, a flash file, a document file, a PowerPoint file, a 3D file, and a music file. 62. The system as recited in claim 48, wherein the processor module displays a navigator icon by displaying a next icon and a previous icon, wherein the processor module moves the second one of the plurality of windows from the background of the 3D space to the foreground of the 3D space in response to the end user interacting with the next icon, and moves the second one of the plurality of windows from the foreground of the 3D space to the background of the 3D space in response to the end user interacting with the previous icon. 63. The system as recited in claim 48, further comprising displaying a new 3D space in response to the end user interacting with the first one of the plurality of windows, wherein the new 3D space includes at least one additional window. 64. The system as recited in claim 48, wherein the plurality of links are arranged sequentially in the search result, the first one of the plurality of files is retrieved using the first sequential link provided in the search result, and the second one of the plurality of files is retrieved using the second sequential link provided in the search result. 65. The system as recited in claim 48, wherein the first one of the plurality of windows partially overlaps at least a portion of the second one of the plurality of windows in the 3D space. 66. 
The system as recited in claim 48, wherein the processor module displays a navigator icon by displaying a next icon and a previous icon, wherein the processor moves the second one of the plurality of windows from the background of the 3D space to the foreground of the 3D space and the first one of the plurality of windows from the foreground of the 3D space to (i) the background of the 3D space and (ii) the left of the second one of the plurality of windows in response to the end user interacting with the next icon, and moves the first one of the plurality of windows from the background of the 3D space to the foreground of the 3D space and the second one of the plurality of windows from the foreground of the 3D space to (i) the background of the 3D space and (ii) the right of the first one of the plurality of windows in response to the end user interacting with the previous icon. 67. The system as recited in claim 66, wherein the processor moves the first one of the plurality of windows further to the left in response to the end user interacting again with the next icon, and moves the second one of the plurality of windows further to the right in response to the end user interacting again with the previous icon.
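Claim 1's interaction handling, which maps a click on the textured screenshot back to a link embedded in the rendered page, reduces to a proportional coordinate scale plus a hit test against link bounding boxes. A minimal sketch under assumed names and conventions; the patent does not specify these functions:

```python
def map_click_to_page(click_xy, window_size, page_size):
    """Map a click on the textured window image back to page coordinates.

    The window shows a scaled screenshot of the rendered page, so a simple
    proportional mapping recovers the corresponding page-space location.
    """
    cx, cy = click_xy
    ww, wh = window_size
    pw, ph = page_size
    return (cx * pw / ww, cy * ph / wh)


def find_link(page_xy, links):
    """Return the link whose bounding box contains the mapped point, if any.

    `links` maps a URL to an (x, y, width, height) box in page coordinates,
    an assumed representation of the links embedded in the rendered page.
    """
    x, y = page_xy
    for url, (lx, ly, lw, lh) in links.items():
        if lx <= x < lx + lw and ly <= y < ly + lh:
            return url
    return None
```

Once the link is resolved, the claimed flow re-renders the linked content, captures a new image, and textures it onto the same window, replacing the previous image in place.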
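The forward/backward navigation of claims 25-26 and 46-47 behaves like a focus index over an ordered row of windows: a forward step brings the next background window to the foreground while the current one recedes to the background on the left, and a backward step reverses this. A minimal sketch, with all names assumed:

```python
class WindowCarousel:
    """Focus-index sketch of the claimed forward/backward navigation.

    Windows are kept in an ordered list; `focus` indexes the foreground
    window. Windows before `focus` sit in the background to the left,
    windows after it in the background to the right.
    """

    def __init__(self, windows):
        self.windows = list(windows)
        self.focus = 0  # index of the foreground window

    def forward(self):
        # Next background window to the foreground; the current foreground
        # window recedes to the background on the left.
        if self.focus < len(self.windows) - 1:
            self.focus += 1

    def backward(self):
        # Previous window back to the foreground; the current foreground
        # window recedes to the background on the right.
        if self.focus > 0:
            self.focus -= 1

    @property
    def foreground(self):
        return self.windows[self.focus]
```

Repeated interactions with the same arrow push already-visited windows further left (forward) or further right (backward), which is exactly what incrementing or decrementing a single focus index produces.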
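The viewpoint adjustment of claims 15, 36, and 38, which "substantially centers" a chosen window on the display, amounts to aligning the camera's horizontal (and optionally vertical) position with the window's at a fixed viewing distance. A sketch under assumed conventions (the camera looks down the -z axis; `standoff` is an illustrative distance, not a value from the patent):

```python
def center_on_window(window_pos, standoff=5.0):
    """Return a camera position that centers a window on the display.

    Aligning camera x and y with the window's centers it both horizontally
    and vertically; the camera sits `standoff` units in front of the window,
    looking down the -z axis (an assumed convention).
    """
    wx, wy, wz = window_pos
    return (wx, wy, wz + standoff)
```

Interacting with a timeline icon or a database entry, as in the dependent claims, would call this with the corresponding window's position to fly the viewpoint toward that group of windows.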
Patents cited by this patent (7)
Yoshikawa Kouhei (Nara JPX), Apparatus for specifying coordinates of a body in three-dimensional space.
Kahl Daryl J. (Flower Mound TX) King Chen D. (Colleyville TX) Lee Raymond E. (Irving TX) Stanners Sharon (Boca Raton FL) Torres Robert J. (Colleyville TX), Method and apparatus for maintaining a record of set-creating data processing activities and associated data sets.
Kaushal Kurapati; Lira Nikolovska NL; Jacquelyn A. Martino; Alison F. Camplin GB, User interface providing automatic organization and filtering of search criteria.
Patents citing this patent (108)

Osterhout, Ralph F.; Haddick, John D.; Lohse, Robert Michael; Cella, Charles; Nortrup, Robert J.; Nortrup, Edward H., AR glasses with event and sensor triggered AR eyepiece interface to external devices.
Osterhout, Ralph F.; Haddick, John D.; Lohse, Robert Michael; Cella, Charles; Nortrup, Robert J.; Nortrup, Edward H., AR glasses with event and sensor triggered control of AR eyepiece applications.
Osterhout, Ralph F.; Haddick, John D.; Lohse, Robert Michael; Cella, Charles; Nortrup, Robert J.; Nortrup, Edward H., AR glasses with event and user action control of external applications.
Hwang, Danny; Kwon, Yong-Hwan; Kim, Jieun; Kim, Jihong; Kim, Hyeryung; Seo, Jangwon; Jeon, Seran, Display screen or portion thereof with animated graphical user interface.
Hwang, Danny; Kwon, Yong-Hwan; Kim, Jieun; Kim, Jihong; Kim, Hyeryung; Seo, Jangwon; Jeon, Seran, Display screen or portion thereof with animated graphical user interface.
Hwang, Danny; Kwon, Yong-Hwan; Kim, Jieun; Kim, Jihong; Kim, Hyeryung; Seo, Jangwon; Jeon, Seran, Display screen or portion thereof with animated graphical user interface.
Seo, Jang-Won; Kwon, Yong-Hwan; Kim, Ji-Eun; Kim, Ji-Hong; Kim, Hye-Ryung; Jeon, Se-Ran; Hwang, Woo-Seok, Display screen or portion thereof with graphical user interface.
Seo, Jang-Won; Kwon, Yong-Hwan; Kim, Ji-Eun; Kim, Ji-Hong; Kim, Hye-Ryung; Jeon, Se-Ran; Hwang, Woo-Seok, Display screen or portion thereof with graphical user interface.
Osterhout, Ralph F.; Haddick, John D.; Lohse, Robert Michael; Border, John N.; Miller, Gregory D.; Stovall, Ross W., Eyepiece with uniformly illuminated reflective display.
Miller, Gregory D.; Border, John N.; Osterhout, Ralph F., Grating in a light transmissive illumination system for see-through near-eye display glasses.
Liu, Xianghai; Bajpai, Chandra; Menice, Kevin; Goetz, Jarrett, In-process trapping for service substitution in hosted applications executing on mobile devices with multi-operating system environment.
Matthews, David A.; Satterfield, Jesse Clay; Hoefnagels, Stephan; Ebeling, Rolf A.; Sundelin, Nils A.; Anderson, Bret P.; Worley, Matthew I.; DeBacker, Gabriel S.; Jarrett, Robert J., Managing an immersive environment.
Lu, Lu; Chen, Yu; Li, Jun; Li, Xin; Huang, Shuangxi, Methods and apparatuses for window display, and methods and apparatuses for touch-operating an application.
Miller, Gregory D.; Border, John N.; Osterhout, Ralph F., Optical imperfections in a light transmissive illumination system for see-through near-eye display glasses.
Hwang, Danny; Kwon, Yong-Hwan; Kim, Jieun; Kim, Jihong; Kim, Hyeryung; Seo, Jangwon; Jeon, Seran, Portable electronic device with animated graphical user interface.
Hwang, Danny; Kwon, Yong-Hwan; Kim, Jieun; Kim, Jihong; Kim, Hyeryung; Seo, Jangwon; Jeon, Seran, Portable electronic device with animated graphical user interface.
Border, John N.; Bietry, Joseph; Osterhout, Ralph F., See-through near-eye display glasses including a curved polarizing film in the image source, a partially reflective, partially transmitting optical element and an optically flat film.
Border, John N.; Haddick, John D.; Osterhout, Ralph F., See-through near-eye display glasses including a partially reflective, partially transmitting optical element.
Border, John N.; Osterhout, Ralph F., See-through near-eye display glasses including an auto-brightness control for the display brightness based on the brightness in the environment.
Border, John N.; Bietry, Joseph; Osterhout, Ralph F., See-through near-eye display glasses wherein image light is transmitted to and reflected from an optically flat film.
Border, John N.; Osterhout, Ralph F., See-through near-eye display glasses with a fast response photochromic film system for quick transition from dark to clear.
Border, John N.; Haddick, John D.; Osterhout, Ralph F., See-through near-eye display glasses with a light transmissive wedge shaped illumination system.
Border, John N.; Haddick, John D.; Lohse, Robert Michael; Osterhout, Ralph F., See-through near-eye display glasses with the optical assembly including absorptive polarizers or anti-reflective coatings to reduce stray light.
Gutt, Zachary Mark; Ray, Paul Ronald; Edwards, Rodney Coleman; Yamamoto, Darwin Kengo; El Kheir, Hady Moustafa Abou; MacDonald, Brian Whalen; Bain, Jerry Daniel, Visual search and three-dimensional results.