Some embodiments provide a device that stores a novel navigation application. The application in some embodiments includes a user interface (UI) that has a display area for displaying a two-dimensional (2D) navigation presentation or a three-dimensional (3D) navigation presentation. The UI includes a selectable 3D control for directing the program to transition between the 2D and 3D presentations.
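The abstract describes a single selectable 3D control whose appearance tracks the navigation state, including a third appearance when 3D data is unavailable (see claims 1, 10, and 11 below). The following is a minimal illustrative sketch of that behavior; all class and method names, and the appearance strings, are assumptions for illustration and do not come from the patent text.

```python
from enum import Enum


class Mode(Enum):
    TWO_D = "2D"
    THREE_D = "3D"


class NavigationUI:
    """Hypothetical model of the patent's selectable 3D control.

    One control toggles between the 2D and 3D presentations; its
    appearance reflects the current state, with a distinct third
    appearance when data for the 3D presentation is unavailable.
    """

    def __init__(self, three_d_data_available=True):
        self.mode = Mode.TWO_D
        self.three_d_data_available = three_d_data_available

    def control_appearance(self):
        # Third appearance: 3D data not available (claim 11).
        if not self.three_d_data_available:
            return "3d-unavailable"
        # First/second appearances track the active presentation (claim 10).
        return "3d-active" if self.mode is Mode.THREE_D else "3d-inactive"

    def select_3d_control(self):
        # Selecting the control toggles the presentation (claim 2).
        if self.three_d_data_available:
            self.mode = Mode.THREE_D if self.mode is Mode.TWO_D else Mode.TWO_D
```

This sketch only captures the state machine; the actual rendering of the 2D and 3D presentations is outside its scope.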
Representative Claims
1. A non-transitory machine readable medium storing a navigation program for execution by at least one processing unit of a device, the device comprising a touch-sensitive screen and a touch input interface, the program comprising sets of instructions for: generating and displaying a two-dimensional (2D) navigation presentation; generating and displaying a three-dimensional (3D) navigation presentation; and displaying a single 3D control for transitioning between the 2D navigation presentation and the 3D navigation presentation, wherein the 3D control has at least three different appearances, each appearance corresponding to a different state of navigation presentation.

2. The non-transitory machine readable medium of claim 1, wherein the program further comprises sets of instructions for: receiving a selection of the 3D control; and in response to the selection, directing the program to switch from the 2D navigation presentation to the 3D navigation presentation when the 3D navigation presentation is displayed, and directing the program to switch from the 3D navigation presentation to the 2D navigation presentation when the 2D navigation presentation is displayed.

3. The non-transitory machine readable medium of claim 1, wherein the program further comprises sets of instructions for: receiving, from the touch input interface, a first touch input for selecting the 3D control; receiving, from the touch input interface, a second different touch input that is not for selecting the 3D control; determining that the second touch input is a multi-touch gestural input for transitioning from the 2D navigation presentation to the 3D navigation presentation; and in response to the first or second touch input, directing the program to switch from the 2D navigation presentation to the 3D navigation presentation.

4. The non-transitory machine readable medium of claim 1, wherein the program further comprises sets of instructions for: receiving, from the touch input interface, a touch input that is not for selecting the 3D control; determining that the touch input is a multi-touch gestural input for transitioning from the 2D navigation presentation to the 3D navigation presentation; and in response to the touch input, directing the program to switch from the 2D navigation presentation to the 3D navigation presentation.

5. The non-transitory machine readable medium of claim 4, wherein the multi-touch gestural input is a multi-touch drag operation in a particular direction, wherein the multi-touch drag operation comprises at least two different touch contacts that are dragged along the touch-sensitive screen in the particular direction or within a threshold of the particular direction.

6. The non-transitory machine readable medium of claim 5, wherein the particular direction is an upward direction along a vertical axis of the touch-sensitive screen, and the multi-touch drag operation is a two finger drag operation.

7. The non-transitory machine readable medium of claim 1, wherein at least one particular navigation presentation displays a tracked current location of the device in a particular location within the presentation, wherein the displayed tracked current location of the device is a map view of an area surrounding the current location of the device, wherein the program further comprises sets of instructions for: receiving a touch input through the touch input interface during the particular navigation presentation; determining that the touch input is a gestural input for changing the map view during the particular navigation presentation; and in response to the touch input, changing the particular navigation presentation by presenting a portion of the map that was previously not presented during the particular navigation presentation.

8. The non-transitory machine readable medium of claim 7, wherein the program further comprises a set of instructions for returning, after a duration of time, the displayed tracked current location of the device in the navigation presentation back to an original position in the presentation before changing the particular navigation presentation in response to the touch input.

9. The non-transitory machine readable medium of claim 7, wherein the touch input is a single-touch drag operation.

10. The non-transitory machine readable medium of claim 1, wherein the 3D control has a first appearance that corresponds to the 2D navigation presentation; and a second appearance that corresponds to the 3D navigation presentation.

11. The non-transitory machine readable medium of claim 10, wherein the 3D control further has a third appearance when data is not available to present the 3D navigation presentation.

12. The non-transitory machine readable medium of claim 1, wherein the set of instructions for generating the 3D navigation presentation comprises a set of instructions for rendering a 3D presentation from a particular perspective view of a 3D map scene of an area in a map that surrounds a current location of the device.

13. The non-transitory machine readable medium of claim 12, wherein the 3D map scene includes constructs in the 3D map, said constructs comprising roads and buildings.

14. The non-transitory machine readable medium of claim 12, wherein the set of instructions for rendering comprises sets of instructions for: defining a virtual camera to represent the particular perspective view; and moving the virtual camera in response to receiving a touch input in order to change the perspective view for rendering the 3D map scene.

15. The non-transitory machine readable medium of claim 1, wherein the program further comprises a set of instructions for tracking a position of the device while providing one of the navigation presentations; wherein the navigation presentation displays a tracked current location of the device in a particular location within the presentation; wherein the navigation presentation shows the tracked current location of the device in a map of an area surrounding the device.

16. The non-transitory machine readable medium of claim 15, wherein the program further comprises sets of instructions for: correlating the tracked current position to navigation directions along a navigated route; generating a new set of navigation directions upon detecting that the device is no longer on the navigated route based on the tracked current position of the device; and providing the new set of navigation directions during the navigation presentation that describes a new navigated route.

17. The non-transitory machine readable medium of claim 16, wherein the set of instructions for tracking the position of the device comprises a set of instructions for using global positioning system (GPS) data generated by a GPS receiver of the device to identify the tracked current location of the device.

18. The non-transitory machine readable medium of claim 15, wherein the program further comprises sets of instructions for: detecting that the device is reaching an intersection while the program is displaying the 3D navigation presentation; and switching from the 3D navigation presentation to the 2D navigation presentation to provide a better view of the intersection along a navigated route.

19. The non-transitory machine readable medium of claim 18, wherein the program further comprises sets of instructions for: detecting that the device has passed the intersection after switching to the 2D navigation presentation; and switching from the 2D navigation presentation to the 3D navigation presentation to resume the 3D navigation presentation.

20. A device comprising at least one processing unit, the device comprising a touch-sensitive screen and a multi-touch interface, the device storing a navigation program for execution by said at least one processing unit, the navigation program comprising a user interface (UI), the UI comprising: a display area for displaying a two-dimensional (2D) navigation presentation or a three-dimensional (3D) navigation presentation; and a selectable 3D control for directing the program to transition between the 2D and 3D navigation presentations, wherein the selectable 3D control has at least three different appearances, each appearance corresponding to a different state of navigation presentation.

21. The device of claim 20, wherein the selectable 3D control has a first appearance that corresponds to the 2D navigation presentation and a second appearance that corresponds to the 3D navigation presentation.

22. The device of claim 20, wherein the multi-touch interface is for receiving multi-touch gestural inputs through the touch-sensitive screen, wherein the UI further comprises a gesture processing module for receiving gestural inputs from the multi-touch interface and translating a first type of gestural input to an instruction that directs the program to switch between the 2D and 3D presentations.

23. The device of claim 22, wherein the first type of gestural input is a drag operation in a particular direction.

24. The device of claim 23, wherein the first type of gestural input comprises at least two different touch contacts that are dragged along the screen in the particular direction or within a threshold of the particular direction.
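Claims 4 through 6 and 24 describe classifying a multi-touch drag as a 2D-to-3D transition when at least two contacts are dragged in a particular direction (upward, in claim 6) or within a threshold of it. The sketch below illustrates one way such a test could work; the function names, coordinate convention, and the 30-degree tolerance are illustrative assumptions, not values taken from the patent.

```python
import math

# Assumed angular tolerance around the upward direction (the patent only
# says "within a threshold of the particular direction").
ANGLE_THRESHOLD_DEG = 30.0


def drag_angle(start, end):
    """Angle of a drag vector, in degrees, measured from the screen's
    upward axis. Screen y grows downward, so an upward drag has dy < 0."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    # Horizontal component against the upward component (-dy).
    return abs(math.degrees(math.atan2(dx, -dy)))


def is_two_finger_upward_drag(contacts):
    """contacts: list of (start_xy, end_xy) pairs, one per touch contact.

    Returns True when exactly two contacts both move upward along the
    vertical axis, within the angular threshold (claims 5, 6, 24)."""
    if len(contacts) != 2:
        return False
    return all(drag_angle(s, e) <= ANGLE_THRESHOLD_DEG for s, e in contacts)


# Two fingers dragged nearly straight up qualify as the transition gesture;
# a sideways two-finger drag does not.
print(is_two_finger_upward_drag([((10, 300), (12, 100)),
                                 ((60, 310), (58, 110))]))   # True
print(is_two_finger_upward_drag([((10, 300), (200, 298)),
                                 ((60, 310), (250, 305))]))  # False
```

In a real gesture-processing module (claim 22) this check would run on touch trajectories delivered by the platform's multi-touch interface, and a positive result would be translated into the instruction that switches between the 2D and 3D presentations.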
Patents Cited by This Patent (29)
Aguilera, Jaime; Alonso, Fernando; Gomez, Juan Bautista, Generating three-dimensional virtual tours from two-dimensional images.
Robertson, George G.; Czerwinski, Mary P.; Hinckley, Kenneth P.; Risden, Kristen C.; Robbins, Daniel C.; van Dantzich, Maarten R., Method and apparatus for supporting two-dimensional windows in a three-dimensional environment.
Matas, Michael; Blumenberg, Chris; Boule, Andre M. J.; Williamson, Richard, Portable multifunction device, method, and graphical user interface for providing maps and directions.
Millington, Jeffrey Alan; Spencer, Larry E., II; Long, Donald J.; Eklund, Richard; Lambie, Michael G., Vehicle navigation system with location-based multi-media annotation.
Bjorke, Kevin Allen; Bell, Matthew Tschudy, Determining and/or generating a navigation path through a captured three-dimensional model rendered on a device.
Amin, Shalin; Radhakrishnan, Mina; Holden, Paul-Phillip; Kalanick, Travis Cordell, Display screen of a computing device with a computer-generated electronic panel for providing information of a service.
Mariet, Robertus Christianus Elisabeth; Cullinane, Brian Douglas; Clement, Manuel Christian; Szybalski, Andrew Timothy; Dolgov, Dmitri A., Display screen or a portion thereof with a graphical user interface.
Mariet, Robertus Christianus Elisabeth; Cullinane, Brian Douglas; Clement, Manuel Christian; Szybalski, Andrew Timothy; Dolgov, Dmitri A., Display screen or a portion thereof with a graphical user interface.
Mariet, Robertus Christianus Elisabeth; Cullinane, Brian Douglas; Clement, Manuel Christian; Szybalski, Andrew Timothy; Dolgov, Dmitri A., Display screen or a portion thereof with a graphical user interface.
Mariet, Robertus Christianus Elisabeth; Cullinane, Brian Douglas; Clement, Manuel Christian; Szybalski, Andrew Timothy; Dolgov, Dmitri A., Display screen or a portion thereof with a graphical user interface.
Mariet, Robertus Christianus Elisabeth; Cullinane, Brian Douglas; Clement, Manuel Christian; Szybalski, Andrew Timothy; Dolgov, Dmitri A., Display screen or a portion thereof with a graphical user interface.
Mariet, Robertus Christianus Elisabeth; Cullinane, Brian Douglas; Clement, Manuel Christian; Szybalski, Andrew Timothy; Dolgov, Dmitri A., Display screen or a portion thereof with a graphical user interface.
Mariet, Robertus Christianus Elisabeth; Cullinane, Brian Douglas; Clement, Manuel Christian; Szybalski, Andrew Timothy; Dolgov, Dmitri A., Display screen or portion thereof with animated graphical user interface.
Mariet, Robertus Christianus Elisabeth; Cullinane, Brian Douglas; Clement, Manuel Christian; Szybalski, Andrew Timothy; Dolgov, Dmitri A., Display screen or portion thereof with animated graphical user interface.
Mariet, Robertus Christianus Elisabeth; Clement, Manuel Christian; Nemec, Philip; Cullinane, Brian Douglas; Szybalski, Andrew Timothy; Dolgov, Dmitri A., Display screen or portion thereof with graphical user interface.
Mariet, Robertus Christianus Elisabeth; Clement, Manuel Christian; Nemec, Philip; Cullinane, Brian Douglas; Szybalski, Andrew Timothy; Dolgov, Dmitri A., Display screen or portion thereof with graphical user interface.
Mariet, Robertus Christianus Elisabeth; Clement, Manuel Christian; Nemec, Philip; Cullinane, Brian Douglas; Szybalski, Andrew Timothy; Dolgov, Dmitri A., Display screen or portion thereof with graphical user interface.
Mariet, Robertus Christianus Elisabeth; Clement, Manuel Christian; Nemec, Philip; Cullinane, Brian Douglas; Szybalski, Andrew Timothy; Dolgov, Dmitri A., Display screen or portion thereof with graphical user interface.
Mariet, Robertus Christianus Elisabeth; Clement, Manuel Christian; Nemec, Philip; Cullinane, Brian Douglas; Szybalski, Andrew Timothy; Dolgov, Dmitri A., Display screen or portion thereof with graphical user interface.
Mariet, Robertus Christianus Elisabeth; Clement, Manuel Christian; Nemec, Philip; Cullinane, Brian Douglas; Szybalski, Andrew Timothy; Dolgov, Dmitri A., Display screen or portion thereof with graphical user interface.
Mariet, Robertus Christianus Elisabeth; Clement, Manuel Christian; Nemec, Philip; Cullinane, Brian Douglas; Szybalski, Andrew Timothy; Dolgov, Dmitri A., Display screen or portion thereof with graphical user interface.
Mariet, Robertus Christianus Elisabeth; Clement, Manuel Christian; Nemec, Philip; Cullinane, Brian Douglas; Szybalski, Andrew Timothy; Dolgov, Dmitri A., Display screen or portion thereof with graphical user interface.
Mariet, Robertus Christianus Elisabeth; Clement, Manuel Christian; Nemec, Philip; Cullinane, Brian; Szybalski, Andrew; Dolgov, Dmitri A., Display screen or portion thereof with graphical user interface.
Mariet, Robertus Christianus Elisabeth; Clement, Manuel Christian; Nemec, Philip; Cullinane, Brian; Szybalski, Andrew; Dolgov, Dmitri A., Display screen or portion thereof with graphical user interface.
Yanagita, Satoshi; Misawa, Atsushi, Stereoscopic display apparatus and stereoscopic shooting apparatus, dominant eye judging method and dominant eye judging program for use therein, and recording medium.
Wang, David X.; Broadfoot, Stephen; Tamp, Fabian; Mariet, Robertus Christianus Elisabeth; Hasaballah, Taylah, Systems and methods for controlling viewport movement in view of user context.
Mariet, Robertus Christianus Elisabeth; Clement, Manuel Christian; Nemec, Philip; Cullinane, Brian, User interface for displaying object-based indications in an autonomous driving system.
Mariet, Robertus Christianus Elisabeth; Clement, Manuel Christian; Nemec, Philip; Cullinane, Brian Douglas, User interface for displaying object-based indications in an autonomous driving system.
Mariet, Robertus Christianus Elisabeth; Clement, Manuel Christian; Nemec, Philip; Cullinane, Brian Douglas, User interface for displaying object-based indications in an autonomous driving system.