| Field | Value |
|---|---|
| Country / Type | United States (US) Patent, Granted |
| IPC (7th edition) | |
| Application No. | US-0753407 (2013-01-29) |
| Registration No. | US-8676587 (2014-03-18) |
| Inventor / Address | |
| Applicant / Address | |
| Agent / Address | |
| Citations | Cited by 5 / Cites 216 patents |
Computerized apparatus and methods for obtaining and displaying information, such as for example directions to a desired entity or organization. In one embodiment, the computerized apparatus is configured to receive user speech input and enable local performance of various tasks, such as obtaining desired information relating to entities, maps or directions, or any number of other topics. The obtained data may also, in various variants, be displayed in various formats and relative to other entities nearby.
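The claimed method (claim 1 below) amounts to an offline lookup pipeline: download data for one area once, store it on the device, then match recognized speech against the stored entities with no further server access. A minimal Python sketch of that flow — the names, sample data, and keyword-matching strategy are illustrative assumptions, not the patent's implementation:

```python
# Illustrative sketch of the claimed flow: download area data once, store it,
# then match recognized speech against it locally (claims 1 and 10).
# All names and the matching strategy are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    floor: int
    x: float
    y: float

def download_area_data(area: str) -> list[Entity]:
    """Stand-in for the one-time server download limited to one area."""
    # In the claims this is fetched over a wireless link and stored on the
    # device's storage; here it is hard-coded sample data.
    return [
        Entity("Acme Dental", 3, 12.0, 4.5),
        Entity("Acme Legal", 3, 15.0, 4.5),
        Entity("Riverside Cafe", 1, 2.0, 8.0),
    ]

def match_entities(recognized_words: list[str], stored: list[Entity]) -> list[Entity]:
    """Match recognized words against locally stored entity names (claim 1)."""
    words = {w.lower() for w in recognized_words}
    return [e for e in stored if words & {t.lower() for t in e.name.split()}]

stored = download_area_data("downtown")     # download + store
matches = match_entities(["acme"], stored)  # speech-recognition output, stubbed
# Two matches -> the device would prompt the user to disambiguate (claim 19).
print([e.name for e in matches])
```

When the match is ambiguous, as here, the dependent claims have the apparatus prompt the user by synthesized speech or an on-screen list before displaying the map.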
1. A method of enabling a user to locate an entity, the method comprising:
downloading data from a server to a computerized information apparatus, the data relating to a particular geographic location;
storing the data on a storage device of the computerized information apparatus;
receiving a user's speech input via a microphone of the computerized apparatus;
digitizing the speech input;
utilizing at least one speech recognition algorithm to evaluate the digitized speech input so as to identify at least one word or phrase therein;
using the identified at least one word or phrase to identify at least one matching entity, the identification of the at least one matching entity comprising accessing at least a portion of the downloaded and stored data;
determining a location of the at least one matching entity within the particular geographic location; and
displaying a graphical representation of the location of the at least one matching entity, including surroundings thereof and one or more other entities proximate thereto.

2. The method of claim 1, wherein the digitizing comprises using an algorithm running locally on the computerized apparatus.

3. The method of claim 2, wherein the using an algorithm running locally on the computerized apparatus comprises using a code-excited linear predictive (CELP)-based algorithm.

4. The method of claim 1, wherein the using of the at least one speech recognition algorithm comprises using an algorithm running locally on the computerized apparatus.

5. The method of claim 4, wherein the using an algorithm running locally on the computerized apparatus comprises using a Hidden Markov Model (HMM)-based algorithm utilizing at least phoneme recognition.

6. The method of claim 5, wherein the digitizing comprises using an algorithm running locally on the computerized apparatus, the algorithm comprising a code-excited linear predictive (CELP)-based algorithm.

7. The method of claim 4, wherein the using an algorithm running locally on the computerized apparatus comprises using a Neural Network (NN)-based algorithm utilizing at least phoneme recognition.

8. The method of claim 7, wherein the digitizing comprises using an algorithm running locally on the computerized apparatus, the algorithm comprising a code-excited linear predictive (CELP)-based algorithm.

9. The method of claim 1, wherein the user's speech input comprises a speech input relating to a name of an organization or entity which the user desires to locate.

10. The method of claim 1, wherein the identification of the at least one matching entity is conducted without further access to the server after the download has occurred.

11. The method of claim 10, wherein the downloading comprises downloading data relating to a current geographic location of the user and their immediate surroundings only.

12. The method of claim 11, wherein the downloading is performed via a wireless link between the computerized information apparatus and the server, the server disposed outside of the current geographical location and the immediate surroundings.

13. The method of claim 1, wherein: the downloading comprises downloading data relating to a current geographic location of the user and their immediate surroundings only; and the downloading is performed via a wireless link between the computerized information apparatus and the server, the server disposed outside of the current geographical location and the immediate surroundings.

14. The method of claim 1, wherein at least a portion of the graphical representation of the location of the at least one matching entity, including surroundings thereof and one or more other entities proximate thereto, comprises a hypertext markup language (HTML)-based representation.

15. The method of claim 1, wherein the downloading comprises downloading data relating to a current geographic location of the user and the immediate surroundings of the current geographic location, and the method further comprises displaying directions from the current geographic location to the location.

16. The method of claim 15, wherein the displaying directions from the current geographic location to the location comprises displaying the graphical representation of the location of the at least one matching entity, including surroundings and the one or more other entities proximate thereto, along with a differentiated arrow or line disposed thereon which shows the user a particular path to follow.

17. The method of claim 16, wherein the differentiated arrow or line disposed thereon which shows the user a particular path to follow comprises an arrow or line that is different in at least one of color and/or luminosity, and which passes by at least a portion of the one or more other entities.

18. The method of claim 1, wherein: the utilizing a speech recognition algorithm comprises using an algorithm running locally on the computerized apparatus, the algorithm comprising a MFCC (Mel Frequency Cepstral Coefficients)-based algorithm that further utilizes at least one of neural networks (NNs) and/or Hidden Markov Modeling (HMM); and the digitizing comprises using an algorithm running locally on the computerized apparatus, the algorithm comprising a code-excited linear predictive (CELP)-based algorithm.

19. The method of claim 1, wherein the using the identified at least one word or phrase to identify at least one matching entity comprises identifying a plurality of possible matching entities, and the method further comprises prompting the user for a subsequent user input to identify which one of the plurality of possible matching entities is the correct one that the user desires.

20. The method of claim 19, wherein the prompting further comprises synthesis of audible signals comprising at least one human-intelligible word or phrase, and the subsequent user input comprises a speech input comprising one or more additional pieces of information relating to the desired entity.

21. The method of claim 19, wherein the prompting further comprises causing display of a listing of the plurality of possible matching entities on a touch-screen input and display device viewable by the user, and the subsequent user input comprises a tactile input via the display device to select one of the plurality.

22. The method of claim 19, further comprising, based at least in part on an identification of which of the plurality of possible matching entities is the one that the user desires, alerting the user that they are presently at an incorrect location.

23. The method of claim 22, wherein the alerting of the user that they are presently at an incorrect location comprises producing an audible alert generated by a speech synthesis apparatus.

24. The method of claim 22, wherein the location comprises a location indoors within a building having a plurality of other entities commonly disposed indoors within the building, and the alerting of the user that they are presently at an incorrect location comprises providing an alert indicating that the user needs to go to another building.

25. The method of claim 1, wherein the location comprises a location within a building, and the one or more other entities proximate to the location are disposed within the same building, the building further comprising a plurality of floors and at least one elevator capable of accessing the plurality of floors, and the location and the one or more other organizations or entities are disposed on at least a common floor.

26. The method of claim 1, further comprising generating one or more hyperlinks relating to topics of interest to the user in their travels and displaying them on a touch-screen input and display device, the one or more hyperlinks configured to allow the user to select them via a tactile input via the touch-screen device, and access a remote server via a wireless interface, and wherein the method further comprises downloading information from the remote server to the computerized information apparatus for display on the touch-screen device.

27. The method of claim 1, further comprising providing a synthesized speech output which is audible to the user and which comprises at least one word recognizable by the user and which aids the user in navigating to the location.

28. The method of claim 27, wherein the providing a synthesized speech output comprises providing a code-excited linear prediction (CELP)-based representation.

29. The method of claim 1, wherein the displaying comprises displaying on a capacitive touch-screen input and display device configured to generate a plurality of soft keys thereon, the soft keys each having at least one function associated therewith, and the method further comprises, based at least in part on the user's selection of at least one of the soft keys, causing selection of advertising content relating at least in part to the function associated with the selected at least one soft key, and causing display of the selected content on the display device.

30. A computerized apparatus configured to enable a user to locate an entity, the apparatus comprising:
a wireless interface;
a capacitive touch-screen input and display device;
a microphone;
speech digitization apparatus in signal communication with the microphone;
a processor;
a data storage device in data communication with the processor; and
computerized logic in data communication with the processor, the computerized logic configured to:
download data from a server to the computerized apparatus, the data relating to a particular geographic location;
store the data on the storage device of the computerized apparatus;
receive a user's speech input via the microphone;
digitize the speech input using at least the speech digitization apparatus;
utilize at least one speech recognition algorithm to evaluate the digitized speech input so as to identify at least one word or phrase therein;
use the identified at least one word or phrase to identify at least one matching entity, the identification of the at least one matching entity comprising access of at least a portion of the downloaded and stored data;
determine a location of the at least one matching entity within the particular geographic location; and
display a graphical representation of the location of the at least one matching entity, including surroundings thereof and one or more other entities proximate thereto, on the display device.

31. The apparatus of claim 30, wherein the digitization comprises use of an algorithm configured to run locally on the computerized apparatus.

32. The apparatus of claim 31, wherein the use of an algorithm configured to run locally on the computerized apparatus comprises use of a code-excited linear predictive (CELP)-based algorithm.

33. The apparatus of claim 30, wherein the use of the at least one speech recognition algorithm comprises use of an algorithm configured to run locally on the computerized apparatus.

34. The apparatus of claim 33, wherein the use of an algorithm configured to run locally on the computerized apparatus comprises use of a Hidden Markov Model (HMM)-based algorithm utilizing at least phoneme recognition.

35. The apparatus of claim 34, wherein the digitization comprises use of an algorithm configured to run locally on the computerized apparatus, the algorithm comprising a code-excited linear predictive (CELP)-based algorithm.

36. The apparatus of claim 31, wherein the use of an algorithm configured to run locally on the computerized apparatus comprises use of a Neural Network (NN)-based algorithm utilizing at least phoneme recognition.

37. The apparatus of claim 36, wherein the digitization comprises use of an algorithm configured to run locally on the computerized apparatus, the algorithm comprising a code-excited linear predictive (CELP)-based algorithm.

38. The apparatus of claim 30, wherein the user's speech input comprises a speech input relating to a name of an organization or entity which the user desires to locate.

39. The apparatus of claim 30, wherein the computerized apparatus is configured to conduct the identification of the at least one matching entity without further access to the server after the download has occurred.

40. The apparatus of claim 39, wherein: the download comprises a download of data relating to a current geographic location of the user and its immediate surroundings only; and the download is performed via a wireless link established via the wireless interface between the computerized apparatus and the server, the server disposed outside of the current geographical location and the immediate surroundings.

41. The apparatus of claim 30, wherein at least a portion of the graphical representation of the location of the at least one matching entity, including surroundings thereof and one or more other entities proximate thereto, comprises a hypertext markup language (HTML)-based representation.

42. The apparatus of claim 30, wherein the download comprises a download of data relating to a current geographic location of the user and its immediate surroundings only.

43. The apparatus of claim 30, wherein the download comprises a download of data relating to a current geographic location of the user and the immediate surroundings of the current location, and the computerized apparatus is further configured to display directions from the current location to the location.

44. The apparatus of claim 43, wherein the display of directions from the current location to the location comprises a display of the graphical representation of the location of the at least one matching entity, including surroundings and one or more other entities proximate thereto, along with a differentiated arrow or line disposed thereon which shows the user a particular path to follow.

45. The apparatus of claim 44, wherein the differentiated arrow or line disposed thereon which shows the user a particular path to follow comprises an arrow or line that is different in at least one of color and/or luminosity, and which passes by at least a portion of the one or more other entities.

46. The apparatus of claim 30, wherein: the utilization of a speech recognition algorithm comprises use of an algorithm configured to run locally on the computerized apparatus, the algorithm comprising a MFCC (Mel Frequency Cepstral Coefficients)-based algorithm that further utilizes at least one of neural networks (NNs) and/or Hidden Markov Modeling (HMM); and the digitization comprises use of an algorithm configured to run locally on the computerized apparatus, the algorithm comprising a code-excited linear predictive (CELP)-based algorithm.

47. The apparatus of claim 30, wherein the use of the identified at least one word or phrase to identify at least one matching entity comprises an identification of a plurality of possible matching entities, and the computerized apparatus is further configured to prompt a user for a subsequent user input to identify which one of the plurality of possible matching entities is the correct one that the user desires.

48. The apparatus of claim 47, wherein the prompt further comprises a synthesis of audible signals comprising at least one human-intelligible word or phrase, and the subsequent user input comprises a second speech input comprising one or more additional pieces of information relating to the desired entity.

49. The apparatus of claim 47, wherein the prompt further comprises causing display of a listing of the plurality of possible matching entities on the touch-screen input and display device viewable by the user, and the subsequent user input comprises a tactile input via the display device to select one of the plurality of possible matching entities.

50. The apparatus of claim 47, wherein the computerized apparatus is further configured to, based at least in part on an identification of which of the plurality of possible matching entities is the one that the user desires, alert the user that they are presently at an incorrect location.

51. The apparatus of claim 50, wherein the alert of the user that they are presently at an incorrect location comprises production of an audible alert generated by a speech synthesis apparatus.

52. The apparatus of claim 50, wherein the location comprises a location indoors within a building having a plurality of other entities commonly disposed indoors within the building, and the alert of the user that they are presently at an incorrect location comprises provision of an alert indicating that the user needs to go to another building.

53. The apparatus of claim 30, wherein the location comprises a location within a building, and the one or more other entities proximate to the location are disposed within the same building, the building further comprising a plurality of floors and at least one elevator capable of accessing the plurality of floors, and the location and the one or more other organizations or entities are disposed on at least a common floor.

54. The apparatus of claim 30, wherein the computerized apparatus is further configured to generate one or more hyperlinks relating to topics of interest to the user in their travels and display the one or more hyperlinks on the touch-screen input and display device, the one or more hyperlinks configured to allow the user to select them via a tactile input via the touch-screen device, and access a remote server via the wireless interface, and wherein the computerized apparatus is further configured to download information from the server to the computerized information apparatus for display on the touch-screen device.

55. The apparatus of claim 30, wherein the computerized apparatus is further configured to provide a synthesized speech output which is audible to the user and which comprises at least one word recognizable by the user and which aids the user in navigating to the location.

56. The apparatus of claim 55, wherein the provision of a synthesized speech output comprises provision of a code-excited linear prediction (CELP)-based representation.

57. The apparatus of claim 30, wherein the display of the representation comprises display on the capacitive touch-screen input and display device, the capacitive touch-screen input and display device configured to generate a plurality of soft function keys thereon, the soft function keys each having at least one function associated therewith, and the computerized apparatus is further configured to, based at least in part on a user's selection of at least one of the soft function keys, cause selection of advertising content relating at least in part to the function associated with the selected at least one soft key, and cause display of the selected content on the display device.

58. The apparatus of claim 30, wherein the computerized apparatus is part of a transport device having a passenger compartment and doors and configured to move from one location to another on a regular basis and carry one or more passengers, the microphone and capacitive touch-screen device each disposed at least partly within the passenger compartment and accessible to the user while operating the transport device.

59. The apparatus of claim 58, further comprising at least one video apparatus including at least one camera, the at least one video apparatus configured to generate video from outside the passenger compartment using the at least one camera, and provide the generated video to the computerized apparatus for display on the touch-screen device such that the user can view the video while operating the transport device and be made aware of one or more hazards that they cannot see directly from the passenger compartment while operating the transport device.

60. The apparatus of claim 30, wherein the computerized logic is further configured to cause provision of a synthesized speech output which is audible to the user and which comprises at least one word recognizable by the user and which aids the user in navigating to the location.

61. The apparatus of claim 30, wherein the use of the digitized input to identify the at least one matching entity comprises evaluation of the digitized input using a speech evaluation algorithm which is configured to identify at least one word or phrase, and access a database of data relating at least to names of organizations or entities that are disposed proximate to a current location of the user.

62. The apparatus of claim 30, wherein the computerized apparatus is further configured to enable establishment of an ad-hoc or temporary communication link with a portable personal electronic device of the user.

63. The apparatus of claim 62, wherein the communication link with a portable personal electronic device of the user comprises a wired link established by the user by placing the portable personal electronic device in communication with the computerized apparatus via at least a connector of the computerized apparatus and a second connector of the portable personal electronic device.

64. The apparatus of claim 63, wherein the communication link comprises a universal serial bus (USB) or other serialized bus protocol link.

65. The apparatus of claim 64, wherein the computerized apparatus is further configured to download user-specific data to the portable device via the communication link.

66. The apparatus of claim 65, wherein the computerized apparatus further comprises a short-range wireless interface configured to communicate data with a corresponding short range integrated circuit radio frequency device of the user, the short range integrated circuit radio frequency device of the user configured to uniquely identify at least one of itself and/or the user so as to enable the computerized apparatus to configure the user-specific data according to one or more data parameters or profiles specific to the user.

67. The apparatus of claim 62, wherein the computerized apparatus is further configured to download user-specific data to the portable device via the communication link.

68. The apparatus of claim 30, wherein the computerized apparatus further comprises a short-range wireless interface configured to communicate data with a corresponding short range integrated circuit radio frequency device of the user, the short range integrated circuit radio frequency device of the user configured to uniquely identify at least one of itself and/or the user so as to enable the computerized apparatus to configure data according to one or more data parameters or profiles specific to the user.

69. A computerized apparatus configured to enable a user to locate an entity, the apparatus comprising:
a wireless interface;
a capacitive touch-screen input and display device;
a microphone;
speech digitization apparatus in signal communication with the microphone;
a processor;
a data storage device in data communication with the processor; and
at least one computer program, the at least one computer program configured to, when executed:
download data from a server to the computerized apparatus, the data relating to and limited to a particular area only, the particular area including the user's current location;
store the data on the storage device of the computerized apparatus;
receive a user's speech input via the microphone;
digitize the speech input using at least the speech digitization apparatus;
cause utilization of at least one speech recognition algorithm to evaluate the digitized speech input so as to identify at least one word or phrase therein;
use the identified at least one word or phrase to identify at least one matching entity, the identification of the at least one matching entity comprising access of at least a portion of the downloaded and stored data;
determine a location of at least two matching entities within the particular area;
advise the user of the at least two matching entities;
thereafter, receive an input from the user, the input causing or enabling identification of one of the at least two entities as the entity which the user desires to locate; and
display a graphical representation of the location of the one entity, including surroundings thereof and one or more other entities proximate thereto, on the display device.

70. The apparatus of claim 69, wherein: the user's speech input comprises a speech input relating to a name of an organization or entity which the user desires to locate; the download is performed via a wireless link established via the wireless interface between the computerized apparatus and the server, the server disposed outside of the particular area; the computerized apparatus is further configured to display directions from the current location to the location, the display of directions from the current location to the location comprising a display of the graphical representation of the location of the at least one matching entity, including surroundings and one or more other entities proximate thereto, along with a differentiated arrow or line disposed thereon which shows the user a particular path to follow, the differentiated arrow or line comprising an arrow or line that is different in at least one of color and/or luminosity, and which passes by at least a portion of the one or more other entities; wherein the advisement to the user of the at least two matching entities comprises at least one of: (i) a synthesis of audible signals comprising at least one human-intelligible word or phrase, and/or (ii) causing display of a listing of the plurality of possible matching entities on the touch-screen input and display device viewable by the user; wherein the thereafter-received user input comprises at least one of: (i) a speech input comprising one or more additional pieces of information relating to the desired entity; and/or (ii) a tactile input via the display device to select one of the plurality; and the computerized apparatus is further configured to generate one or more hyperlinks relating to topics of interest to the user in their travels and display them on the touch-screen input and display device, the one or more hyperlinks configured to allow the user to select them via the tactile input via the touch-screen device, and access a remote server via the wireless interface, and wherein the computerized apparatus is further configured to download information from the remote server to the computerized information apparatus for display on the touch-screen device.

71. The apparatus of claim 69, wherein: the computerized apparatus is further configured to provide a synthesized speech output which is audible to the user and which comprises at least one word recognizable by the user and which aids the user in navigating to the location; the computerized apparatus is part of a transport device having a passenger compartment and doors and configured to move from one location to another on a regular basis and carry one or more passengers, the microphone and capacitive touch-screen device each disposed at least partly within the passenger compartment and accessible to the user while operating the transport device; the computerized apparatus further comprises at least one video apparatus including at least one camera, the at least one video apparatus configured to generate video from outside the passenger compartment using the at least one camera, and provide the generated video to the computerized apparatus for display on the touch-screen device such that the user can view the video while operating the transport device and be made aware of one or more hazards that they cannot see directly from the passenger compartment while operating the transport device; and the computerized apparatus is further configured to enable establishment of an ad-hoc or temporary communication link with a portable personal electronic device of the user.

72. The apparatus of claim 69, wherein: the user's speech input comprises a speech input relating to a name of an organization or entity which the user desires to locate; the download is performed via a wireless link established via the wireless interface between the computerized apparatus and the server; the computerized apparatus is further configured to display directions from the current location to the location, the display of directions from the current location to the location comprising a display of the graphical representation of the location of the at least one matching entity, along with a differentiated arrow or line disposed thereon which shows the user a particular path to follow, the differentiated arrow or line comprising an arrow or line that is different in at least one of color and/or luminosity; wherein the advisement to the user of the at least two matching entities comprises at least one of: (i) a synthesis of audible signals comprising at least one human-intelligible word or phrase, and/or (ii) causing display of a listing of the plurality of possible matching entities on the touch-screen input and display device viewable by the user; wherein the thereafter-received user input comprises at least one of: (i) a second speech input comprising one or more additional pieces of information relating to the desired entity; and/or (ii) a tactile input via the display device to select one of the plurality.

73. The apparatus of claim 69, wherein the computerized apparatus is part of a wheeled terrestrial transport vehicle having a passenger compartment and doors and configured to move from one location to another and carry one or more passengers, the microphone and capacitive touch-screen device each disposed at least partly within the passenger compartment and accessible to the user while operating the transport vehicle.

74. The apparatus of claim 73, further comprising at least one video apparatus including at least one camera, the at least one video apparatus configured to generate video from outside the passenger compartment using the at least one camera, and provide the generated video to the computerized apparatus for display on the touch-screen device such that the user can view the video while operating the transport vehicle and be made aware of one or more hazards that they cannot readily see directly from the passenger compartment while operating the transport vehicle.
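Several dependent claims (5, 18, 34, 46) recite on-device HMM-based recognition over phonemes. The core of such a recognizer is Viterbi decoding of the most likely phoneme sequence from acoustic observations. A minimal pure-Python sketch — the two-phoneme model and all probability values below are toy illustrations, not trained parameters or anything specified by the patent:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence (e.g. phonemes) for an observation
    sequence, via Viterbi decoding in log space."""
    # V[t][s] = log-probability of the best path ending in state s at time t
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prev, lp = max(
                ((p, V[t - 1][p] + math.log(trans_p[p][s])) for p in states),
                key=lambda x: x[1])
            V[t][s] = lp + math.log(emit_p[s][obs[t]])
            back[t][s] = prev
    # Trace back the best path from the most likely final state
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

# Toy 2-phoneme model over coarse acoustic labels (illustrative values only)
states = ["ae", "k"]
start_p = {"ae": 0.6, "k": 0.4}
trans_p = {"ae": {"ae": 0.7, "k": 0.3}, "k": {"ae": 0.4, "k": 0.6}}
emit_p = {"ae": {"low": 0.8, "high": 0.2}, "k": {"low": 0.1, "high": 0.9}}
print(viterbi(["low", "low", "high"], states, start_p, trans_p, emit_p))
```

A production recognizer, as claims 18 and 46 suggest, would feed MFCC feature vectors rather than discrete labels into such a model, and the CELP coding recited for digitization is a separate compression step.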