IPC Classification Information

Country / Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.):
Application No.: US-0754101 (2001-01-02)
Inventors / Address:
- Li, Jiang
- Shum, Heung-Yeung
Applicant / Address:
Agent / Address:
Citation Information: cited by 7 patents; cites 6 patents
Abstract
An image-based walkthrough system and process that employs pictures, panoramas, and/or concentric mosaics captured from real scenes to present a photo-realistic environment to a viewer. In general, a walkthrough space is divided into a horizontally sectioned grid. Each cell of the grid is assigned at least one source of image data from which an image of a part or all of the surrounding scene as viewed from that cell can be rendered. Whenever the viewer moves into one of the grid cells, the distance between the viewer's currently selected viewing position in that cell, and each picture viewpoint, panorama center, and nearest concentric mosaic wandering circle point, in the cell and its neighboring cells, is computed. An image depicting the scene as would be viewed from the point associated with the closest image data source is then rendered and displayed to the viewer.
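The selection step described in the abstract — compute the distance from the viewer's current position to every candidate viewpoint in the current cell and its neighbors, then render from the nearest image source — can be sketched as follows. All function and variable names here are illustrative, not from the patent:

```python
import math

def closest_source(viewer_pos, cells, cell_of, neighbors_of):
    """Return the image-data source point nearest to viewer_pos.

    viewer_pos   -- (x, y) viewing position in the walkthrough space
    cells        -- dict mapping a cell id to its list of (x, y) source
                    points (picture viewpoints, panorama centers, nearest
                    concentric-mosaic wandering-circle points)
    cell_of      -- function mapping a position to its grid-cell id
    neighbors_of -- function returning the ids of a cell's neighbors
    """
    cell = cell_of(viewer_pos)
    # Gather candidates from the viewer's cell and its neighboring cells.
    candidates = []
    for c in [cell, *neighbors_of(cell)]:
        candidates.extend(cells.get(c, []))
    # Pick the source whose viewpoint is closest to the viewer.
    return min(candidates, key=lambda p: math.dist(viewer_pos, p))
```

An image rendered from the returned point would then be displayed, as the abstract describes.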
Representative Claim
… to over the element, rendering the element opaque and altering the appearance of the opaque element such that the dimension or presented features is different than the translucent element.

13. The medium of claim 12, wherein upon detecting that the pointer has moved from not over the element to over the element, initially waiting for a predetermined amount of time prior to rendering the element opaque.

14. The medium of claim 13, further comprising allowing mouse clicks to register through to a second graphical user interface element under the element while waiting for the predetermined amount of time prior to rendering the element opaque.

15. The medium of claim 12, further comprising rendering the element translucent upon detecting no pointer movement for a predetermined amount of time.

16. The medium of claim 15, further comprising allowing mouse clicks to register through to a second graphical user interface element under the element after the element has been rendered translucent and the pointer is still positioned over the element.

17. The medium of claim 15, further comprising rendering the element opaque upon detecting pointer movement and such that the pointer is still positioned over the element.

18. The medium of claim 12, the method further comprising rendering the element opaque via voice activation.

19. A graphical user interface for a computerized system comprising: a user-movable pointer; and a graphical user interface element having a first mode in which the element is translucent when the pointer is not positioned thereover, and a second mode in which the element is opaque when the pointer is positioned thereover; wherein the appearance of said graphical user interface element is altered such that the dimension or presented features in said second mode is different than in said first mode.

20. The user interface of claim 19, wherein when the pointer is positioned over the graphical user interface element, the element waits for a predetermined amount of time prior to entering the second mode.

21. The user interface of claim 19, further comprising a second graphical user interface element under the element that receives mouse clicks while the element waits for the predetermined amount of time prior to entering the second mode.

22. The user interface of claim 19, wherein when the pointer has been positioned over the graphical user element for a predetermined amount of time with no movement, the element re-enters the first mode.

23. The user interface of claim 22, further comprising a second graphical user interface element under the element that receives mouse clicks when the element re-enters the first mode.

24. The user interface of claim 22, wherein the element again enters the second mode when the pointer is moved but is still positioned over the element.

25. The user interface of claim 22, wherein the element enters the second mode further via voice activation.
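The hover behavior recited in claims 12 through 25 amounts to a small state machine: translucent by default, opaque after a hover delay, fading back to translucent when the pointer goes idle, and passing clicks through whenever the element is not opaque. A minimal Python sketch, with hypothetical names and timing values:

```python
class TranslucentElement:
    """State-machine sketch of the hover behavior in the claims above.

    States: 'translucent' (pointer not over the element), 'pending'
    (hovering, waiting out the delay of claim 13), 'opaque' (claim 12),
    and 'faded' (re-entered translucency after idle, claim 15).
    All names and timing defaults are illustrative, not from the patent.
    """

    def __init__(self, hover_delay=0.5, idle_timeout=2.0):
        self.hover_delay = hover_delay    # seconds before turning opaque
        self.idle_timeout = idle_timeout  # idle seconds before fading
        self.state = 'translucent'
        self._hover_since = 0.0
        self._last_move = 0.0

    def update(self, now, pointer_over, pointer_moved):
        if not pointer_over:
            self.state = 'translucent'
            return
        if self.state == 'translucent':
            # Pointer just arrived: start the hover-delay timer (claim 13).
            self.state = 'pending'
            self._hover_since = now
            self._last_move = now
        if pointer_moved:
            self._last_move = now
            if self.state == 'faded':
                self.state = 'opaque'  # movement after an idle fade (claim 17)
        if self.state == 'pending' and now - self._hover_since >= self.hover_delay:
            self.state = 'opaque'      # hover delay elapsed (claims 12-13)
        if self.state == 'opaque' and now - self._last_move >= self.idle_timeout:
            self.state = 'faded'       # no movement for the timeout (claim 15)

    def clicks_pass_through(self):
        # While not opaque, clicks register to the element underneath
        # (claims 14 and 16).
        return self.state != 'opaque'
```

A host application would call `update` each frame with the current time and pointer state, and route clicks to the underlying element whenever `clicks_pass_through()` is true.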