Country / Type | United States (US) Patent, Granted |
---|---|
IPC (7th edition) | |
Application No. | US-0752222 (2013-01-28) |
Registration No. | US-8719038 (2014-05-06) |
Inventor / Address | |
Applicant / Address | |
Agent / Address | |
Citation info | Cited by: 6 / Patents cited: 214 |
Computerized apparatus for obtaining and displaying information, such as for example directions to a desired entity or organization. In one embodiment, the computerized apparatus is configured to receive user speech input and enable performance of various tasks, such as obtaining desired information relating to indoor entities, maps or directions, or any number of other topics. The obtained data may also, in various variants, be displayed in various formats and relative to other entities nearby.
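The interactive flow in the abstract (and in claim 1 below) — recognize a spoken entity name, find several possible matches, prompt the user for a narrowing input, then resolve the best match and its location — can be sketched in a few lines. This is a hypothetical illustration, not the patent's implementation: the directory, entity names, and the use of string similarity as a stand-in for speech matching are all assumptions.

```python
# Hypothetical sketch of the claimed disambiguation flow. String
# similarity (difflib) stands in for speech recognition scoring;
# the directory contents are illustrative, not from the patent.
import difflib

DIRECTORY = {
    "Acme Dental Clinic": ("Floor 3", "Suite 301"),
    "Acme Dental Lab": ("Floor 3", "Suite 310"),
    "Apex Law Offices": ("Floor 5", "Suite 502"),
}

def find_matches(spoken_name, directory=DIRECTORY):
    """Return candidate entity names ranked by similarity to the input."""
    return difflib.get_close_matches(spoken_name, list(directory), n=5, cutoff=0.4)

def resolve(spoken_name, follow_up=None, directory=DIRECTORY):
    """Resolve a spoken name to (entity, location), using an optional
    follow-up input to break ties among multiple candidates."""
    matches = find_matches(spoken_name, directory)
    if not matches:
        return None
    if len(matches) > 1 and follow_up:
        # The subsequent user input (e.g. "the clinic") narrows the list.
        narrowed = [m for m in matches if follow_up.lower() in m.lower()]
        matches = narrowed or matches
    best = matches[0]
    return best, directory[best]
```

In the claimed apparatus the follow-up could equally be a touch selection from a displayed candidate list (claim 4) rather than a second utterance; only the narrowing step changes.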
1. Computer readable apparatus configured to aid a user in locating an organization or entity, the apparatus comprising a storage medium having a computer program configured to run on a processor, the program configured to, when executed on the processor: obtain a representation of a first speech input from the user, the first speech input relating to a name of a desired organization or entity; cause use of at least a speech recognition algorithm to process the representation to identify at least one word or phrase therein; use at least the identified at least one word or phrase to identify a plurality of possible matches for the name; cause the user to be prompted to enter a subsequent input in order to aid in identification of one of the plurality of possible matches which best correlates to the desired organization or entity; receive data relating to the subsequent user input; based at least in part on the data, determine which of the plurality of possible matches is the one that best correlates; determine a location associated with the one of the possible matches that best correlates; and select and cause presentation of a visual representation of the location, as well as at least an immediate surroundings thereof, on a display viewable by the user, the visual representation further comprising visual representations of one or more other organizations or entities proximate to the location.
2. The apparatus of claim 1, wherein the prompt for the subsequent user input comprises a synthesis of audible signals comprising at least one human-intelligible word or phrase.
3. The apparatus of claim 2, wherein the location comprises a location within a building, the one or more other organizations or entities proximate to the location are disposed within the building, the building further comprising a plurality of floors and at least one elevator capable of accessing the plurality of floors, and the location and the one or more other organizations or entities are disposed on at least a common floor.
4. The apparatus of claim 1, wherein the prompt for the subsequent user input comprises a display of a listing of the plurality of possible matches on a touch-screen input and display device, such that the user can select one of the plurality of possible matches via a touch of the appropriate region of the touch-screen device.
5. The apparatus of claim 4, wherein the location comprises a location within a building, the one or more other organizations or entities proximate to the location are disposed within the building, the building further comprising a plurality of floors and at least one elevator capable of accessing the plurality of floors, and the location and the one or more other organizations or entities are disposed on at least a common floor.
6. The apparatus of claim 1, wherein the computer program is further configured to cause provision of a graphical representation of directions to the location, the graphical representation of directions comprising at least one arrow or line that is highlighted relative to the immediate surroundings so as to aid the user in finding the desired organization or entity.
7. The apparatus of claim 6, wherein the computer program is further configured to cause provision of a synthesized speech output which is audible to the user and which comprises at least one word recognizable by the user and which relates to the location.
8. The apparatus of claim 6, wherein the computer program is further configured to cause provision of a synthesized speech output which is audible to the user and which comprises at least one word recognizable by the user and which aids the user in navigating to the location.
9. The apparatus of claim 8, wherein the synthesized speech output comprises a code-excited linear prediction (CELP)-based representation.
10. The apparatus of claim 8, wherein the synthesized speech output is generated using a code-excited linear prediction (CELP)-based algorithm configured to run on the processor.
11. The apparatus of claim 1, wherein the at least one speech recognition algorithm comprises at least one of a linear predictive coding (LPC)-based spectral analysis algorithm and/or an MFCC (Mel Frequency Cepstral Coefficients)-based algorithm.
12. The apparatus of claim 1, wherein the display comprises a capacitive touch-screen input and display device configured to generate a plurality of soft function keys thereon, the soft function keys each having at least one function associated therewith, and the computer program is further configured to, based at least in part on a user's selection of at least one of the soft function keys, enable selection of advertising content relating at least in part to the function associated with the selected at least one soft function key, and cause display of the selected content on the display device.
13. The apparatus of claim 1, wherein the at least one speech recognition algorithm comprises phoneme/word recognition based at least in part on HMM (hidden Markov modeling).
14. The apparatus of claim 1, wherein the at least one speech recognition algorithm comprises phoneme/word recognition based at least in part on at least one of DTW (Dynamic Time Warping) and/or NNs (Neural Networks).
15. The apparatus of claim 1, wherein the obtainment of a representation and the causation of presentation of a visual representation of the location are each conducted at least in part over a wireless link between a client device accessible to the user and at least one networked server in communication therewith, the client device and the server forming a client-server relationship, and the at least one server disposed geographically remote to the client device.
16. The apparatus of claim 1, wherein the causation of use of at least a speech recognition algorithm, the use of at least the identified at least one word or phrase, the causation of the user to be prompted to enter a subsequent input, the receipt of the data relating to the subsequent user input, the determination of which of the plurality of possible matches is the one that best correlates, the determination of the location, and the selection of the visual representation, are each performed by at least one networked server in wireless communication with the client device, the client device and the at least one server forming a client-server relationship, and the at least one server disposed geographically remote to the client device.
17. The apparatus of claim 1, wherein the representation of the first speech input is generated using a code-excited linear prediction (CELP)-based algorithm configured to run on the processor.
18. The apparatus of claim 1, wherein at least a portion of the visual representation of the location, as well as at least the immediate surroundings thereof, and the visual representations of one or more other organizations or entities proximate to the location, comprises a hypertext markup language (HTML)-based representation.
19. The apparatus of claim 1, wherein the computer program is further configured to, based at least in part on the determination of which of the plurality of possible matches is the one that best correlates, cause an alert to be provided to the user that they are presently at an incorrect location.
20. The apparatus of claim 19, wherein the alert of the user that they are presently at an incorrect location comprises an audible alert generated by a speech synthesis apparatus.
21. The apparatus of claim 19, wherein the location comprises a location indoors within a building having a plurality of other entities commonly disposed indoors within the building, and the alert of the user that they are presently at an incorrect location comprises an alert indicating that the user needs to go to another building.
22. Computerized information apparatus configured to aid a user in locating an organization or entity, the apparatus comprising: a microphone; a capacitive touch-screen input and display device; a processor in data communication with the display device; speech digitization apparatus in signal communication with the microphone; at least one audio speaker; speech synthesis apparatus in signal communication with the at least one audio speaker; and a storage medium comprising at least one computer program configured to run on at least the processor, the at least one program configured to, when executed on the processor: obtain a representation of a first speech input from the user, the first speech input relating to a name of a desired organization or entity; cause use of at least a speech recognition algorithm to process the representation to identify at least one word or phrase therein; prompt the user for a subsequent input in order to further clarify the first speech input and aid in identification of one of a plurality of possible matches which best correlates to the desired organization or entity; receive the subsequent user input; and cause, based at least in part on the subsequent input, (i) determination of which of the plurality of possible matches is the one that best correlates, (ii) identification of a location associated with the one of the possible matches that best correlates, and (iii) selection of a visual representation of the location, as well as at least an immediate surroundings thereof, capable of display on the display device, the visual representation further comprising visual representations of one or more other organizations or entities proximate to the location, and directions to the location.
23. The apparatus of claim 22, wherein the computerized information apparatus is disposed within a land-mobile transport device capable of moving from one location to another and having a multi-passenger compartment and doors by which passengers can ingress and egress from the compartment, the touch-screen input and display device disposed at least partly within the passenger compartment, the touch-screen device being disposed such that at least the user can view the touch-screen device while operating the transport device, and the microphone disposed within the passenger compartment such that the user can speak into the microphone while operating the transport device.
24. The apparatus of claim 23, wherein the transport device further comprises at least one wireless interface in data communication with the computerized information apparatus and configured to wirelessly transmit and receive signals to a distant entity.
25. The apparatus of claim 24, wherein the causation of use of at least the speech recognition algorithm to process the representation to identify at least one word or phrase therein comprises delivery of the representation to a remote server via the at least one wireless interface.
26. The apparatus of claim 24, wherein the transport device further comprises a control system configured to cause the transport device to move while the user is in the passenger compartment, the control system in data communication with the computerized information apparatus such that inputs to the control system can be communicated to the computerized information apparatus for use thereby in at least selection and presentation of information on the touch-screen input and display device.
27. The apparatus of claim 24, wherein the computerized information apparatus further comprises a function whereby when the user selects the function, the computerized information apparatus enters an operation mode whereby user speech can be input and utilized for information queries.
28. The apparatus of claim 23, wherein the representation of the first speech input comprises a code-excited linear prediction (CELP)-based representation.
29. The apparatus of claim 23, wherein the representation of the first speech input is generated using a code-excited linear prediction (CELP)-based algorithm configured to run on the computerized information apparatus.
30. The apparatus of claim 23, wherein the computerized information apparatus is further configured to, based at least in part on the determination of which of the plurality of possible matches is the one that best correlates, alert the user that they are presently at an incorrect location.
31. The apparatus of claim 23, wherein the prompt for the subsequent user input comprises synthesis of audible signals comprising at least one human-intelligible word or phrase, using at least the speech synthesis apparatus and at least one speaker.
32. The apparatus of claim 22, wherein the location comprises a location within a building, the one or more other organizations or entities proximate to the location are disposed within the building, the building further comprising a plurality of floors and at least one elevator capable of accessing the plurality of floors, the location and the one or more other organizations or entities are disposed on at least a common floor, and the directions comprise at least in part directions within the building.
33. The apparatus of claim 23, wherein the prompt for the subsequent user input comprises display of a listing of the plurality of possible matches on the touch-screen input and display device, such that the user can select one of the matches via touching the appropriate region of the touch-screen device.
34. The apparatus of claim 33, wherein the directions comprise a graphical representation of at least one arrow or line that is highlighted so as to aid the user in finding the desired organization or entity.
35. The apparatus of claim 34, wherein the computer program is further configured to cause provision of a synthesized speech output via the at least one speaker which is audible to the user and which comprises at least one word recognizable by the user which relates to the location.
36. The apparatus of claim 35, wherein the at least one word recognizable by the user and which relates to the location comprises at least one word relating to an address of the location.
37. The apparatus of claim 23, wherein at least a portion of the visual representation of the location, as well as at least the immediate surroundings thereof, and the visual representations of one or more other organizations or entities proximate to the location, comprises a hypertext markup language (HTML)-based representation.
38. The apparatus of claim 24, wherein the at least one speech recognition algorithm comprises a linear predictive coding (LPC)-based spectral analysis algorithm.
39. The apparatus of claim 24, wherein the at least one speech recognition algorithm comprises an MFCC (Mel Frequency Cepstral Coefficients)-based algorithm.
40. The apparatus of claim 24, wherein the at least one speech recognition algorithm comprises phoneme/word recognition based at least in part on HMM (hidden Markov modeling).
41. The apparatus of claim 24, wherein the at least one speech recognition algorithm comprises phoneme/word recognition based at least in part on at least one of DTW (Dynamic Time Warping) and/or NNs (Neural Networks).
42. The apparatus of claim 23, wherein the obtainment of a representation and the causation of selection of a visual representation of the location are each conducted at least in part over a wireless link between the transport device and at least one networked server in communication therewith, the computerized information apparatus and the server forming a client-server relationship, and the at least one server disposed geographically remote to the transport device.
43. The apparatus of claim 23, wherein the obtainment of a representation of a first speech input from the user, the causation of use of at least a speech recognition algorithm, the prompting of the user for a subsequent input, and the receipt of the subsequent user input, are each performed by the computerized information apparatus.
44. The apparatus of claim 23, wherein the computerized information apparatus is further configured to, based at least in part on the determination of which of the plurality of possible matches is the one that best correlates, alert the user that they are presently at an incorrect location.
45. The apparatus of claim 44, wherein the alert of the user that they are presently at an incorrect location comprises an audible alert generated by the speech synthesis apparatus.
46. The apparatus of claim 44, wherein the location comprises a location indoors within a building having a plurality of other entities commonly disposed indoors within the building, and the computerized information apparatus is configured to alert the user that they are presently at an incorrect location and that the user needs to go to another building.
47. The apparatus of claim 23, wherein the capacitive touch-screen device comprises at least one protective coating that does not substantially impede capacitive properties of the touch-screen device, thereby protecting a touch-screen of the touch-screen input and display device but enabling capacitive functionality.
48. The apparatus of claim 24, wherein the capacitive touch-screen input and display device is configured to generate a plurality of soft function keys thereon, the soft function keys each having at least one function associated therewith, and the computer program is further configured to, based at least in part on a user's selection of at least one of the soft function keys, cause data relating to the selected soft function key or its associated function to be forwarded via the at least one wireless interface so as to enable selection of advertising content relating at least in part to the function associated with the selected at least one soft function key, and cause display of the selected content on the display device.
49. The apparatus of claim 23, further comprising at least one video apparatus including at least one camera, the at least one video apparatus configured to generate video from outside the passenger compartment using the at least one camera, and provide the generated video to the computerized information apparatus for display on the touch-screen device such that a user can view the video while operating the transport device and be made aware of one or more hazards that they cannot see directly from the passenger compartment while operating the transport device.
50. The apparatus of claim 49, further configured to generate one or more hyperlinks relating to topics of interest to the user in their travels and display the generated one or more hyperlinks on the touch-screen input and display device.
51. The apparatus of claim 50, wherein the one or more hyperlinks relating to topics of interest to the user in their travels are configured to access respective uniform resource locators (URLs) when selected by the user via the touch-screen input and display device.
52. The apparatus of claim 51, wherein the URLs relate to content for directions to local transportation facilities.
53. The apparatus of claim 23, further configured to generate one or more hyperlinks relating to topics of interest to the user in their travels and display them on the touch-screen input and display device, the one or more hyperlinks configured to allow the user to select the one or more hyperlinks via a tactile input via the touch-screen device, and access a remote server via a wireless interface of the computerized information apparatus, and download information from the server to the computerized information apparatus for display on the touch-screen device.
54. Smart computerized apparatus capable of interactive information exchange with a human user, the apparatus comprising: a microphone; one or more processors; a capacitive touch-screen input and display device; speech synthesis apparatus and at least one speaker in signal communication therewith; input apparatus configured to cause the computerized apparatus to enter a mode whereby a user can speak a name of an entity into a microphone in signal communication with the computerized apparatus, the entity being an entity to which the user wishes to navigate; and at least one computer program operative to run on the one or more processors and configured to engage the user in an interactive audible interchange, the interchange comprising: digitization of the user's speech received via the microphone to produce a digital representation thereof; causation of use of the digitized representation to identify a plurality of entities which match at least a portion of the name; causation of generation of an audible communication to the user via the speech synthesis apparatus in order to at least inform the user of the identification of the plurality of matches; receipt of a subsequent speech input, the subsequent speech input comprising at least one additional piece of information; digitization of the subsequent speech input to produce a digital representation thereof; causation of utilization of at least the digital representation of the subsequent input to identify one of the plurality of entities which correlates to the entity to which the user wishes to navigate, and a location associated with the entity; and causation of provision of a graphical representation of the location, including at least the immediate surroundings thereof, and at least one other entity geographically proximate to the entity.
55. The apparatus of claim 54, wherein the smart computerized apparatus is part of a transport device having a passenger compartment and doors and configured to move from one location to another on a regular basis and carry one or more passengers, the microphone and capacitive touch-screen device and at least one speaker each disposed at least partly within the passenger compartment and accessible to the user while operating the transport device.
56. The apparatus of claim 55, further comprising at least one video apparatus including at least one camera, the at least one video apparatus configured to generate video from outside the passenger compartment using the at least one camera, and provide the generated video to the computerized apparatus for display on the touch-screen device such that a user can view the video while operating the transport device and be made aware of one or more hazards that the user cannot see directly from the passenger compartment while operating the transport device.
57. The apparatus of claim 54, wherein the computer program is further configured to cause provision of a synthesized speech output which is audible to the user and which comprises at least one word recognizable by the user and which aids the user in navigating to the location.
58. The apparatus of claim 54, wherein the causation of use of the digitized representation to identify a plurality of entities which match at least a portion of the name comprises evaluation of the digitized representation of the first input using a speech evaluation algorithm which is configured to identify at least one word or phrase in the digitized representation, and access a database of data relating at least to names of organizations or entities that are disposed proximate to a current location of the user.
59. The apparatus of claim 54, wherein the computerized apparatus is further configured to enable establishment of an ad-hoc or temporary communication link with a portable personal electronic device of the user.
60. The apparatus of claim 59, wherein the communication link with a portable personal electronic device of the user comprises a wired link established by the user by placing the portable personal electronic device in communication with the computerized apparatus via at least a connector of the computerized apparatus.
61. The apparatus of claim 60, wherein the communication link comprises a universal serial bus (USB) or other serialized bus protocol link.
62. The apparatus of claim 61, wherein the computerized apparatus is further configured to download user-specific data to the portable device via the communication link.
63. The apparatus of claim 62, wherein the computerized apparatus further comprises a short-range wireless interface configured to communicate data with a corresponding short-range integrated circuit radio frequency device of the user, the short-range integrated circuit radio frequency device of the user configured to uniquely identify at least one of itself and/or the user so as to enable the computerized apparatus to configure the user-specific data according to one or more data parameters or profiles specific to the user.
64. The apparatus of claim 59, wherein the computerized apparatus is further configured to download user-specific data to the portable device via the communication link.
65. The apparatus of claim 54, wherein the computerized apparatus further comprises a short-range wireless interface configured to communicate data with a corresponding short-range integrated circuit radio frequency device of the user, the short-range integrated circuit radio frequency device of the user configured to uniquely identify at least one of itself and/or the user so as to enable the computerized apparatus to configure data according to one or more data parameters or profiles specific to the user.
66. Smart computerized apparatus capable of interactive information exchange with a human user, the apparatus comprising: a microphone; one or more processors; a capacitive touch-screen input and display device; speech synthesis apparatus and at least one speaker in signal communication therewith; input apparatus configured to cause the computerized apparatus to enter a mode whereby a user can speak a name of an entity into a microphone in signal communication with the computerized apparatus, the entity being an entity to which the user wishes to navigate; and at least one computer program operative to run on the one or more processors and configured to engage the user in an interactive audible interchange, the interchange comprising: digitization of the user's speech received via the microphone to produce a digital representation thereof; causation of evaluation of the digitized representation to determine an appropriate subsequent audible communication to be provided to the user via the speech synthesis apparatus in order to at least inform the user of the results; causation of generation of the subsequent audible communication; receipt of a subsequent user input, the subsequent user input comprising at least one additional piece of information useful in identification of the entity; causation of utilization of at least the at least one piece of information of the subsequent input to identify one of a plurality of entities, the one entity which best correlates to the entity to which the user wishes to navigate, and a location associated with the one entity; and causation of provision of a graphical representation of the location, including at least the immediate surroundings thereof, and at least one other entity geographically proximate to the one entity.
67. The apparatus of claim 23, wherein the computer program is further configured to cause provision of a synthesized speech output which is audible to the user and which comprises at least one word recognizable by the user and which aids the user in navigating to the location.
68. The apparatus of claim 67, wherein the synthesized speech output is generated using a code-excited linear prediction (CELP)-based algorithm configured to run on the computerized apparatus.
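Claims 14 and 41 name DTW (Dynamic Time Warping) as one technique for phoneme/word recognition. DTW aligns two feature sequences of different lengths and yields a cumulative alignment cost; the stored template word with the lowest cost against the input is taken as the recognized word. The sketch below is a minimal textbook DTW, not the patent's implementation, and the 1-D feature values are illustrative stand-ins for real acoustic features such as MFCC frames.

```python
# Minimal DTW sketch (illustrative, not the patent's implementation).
# Aligns two sequences of different lengths; lower cost = better match.

def dtw_distance(a, b):
    """Cumulative DTW cost between sequences a and b (absolute-difference metric)."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # advance both
    return cost[n][m]

def recognize(features, templates):
    """Return the template word whose features align most cheaply with the input."""
    return min(templates, key=lambda word: dtw_distance(features, templates[word]))
```

Because DTW absorbs local time stretching, a slowly spoken word still aligns cheaply with its template — which is exactly why the claims list it alongside HMMs and neural networks as a matching option.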
Copyright KISTI. All Rights Reserved.