| Country / Type | United States (US) Patent, Granted |
|---|---|
| International Patent Classification (IPC, 7th ed.) | |
| Application Number | US-0690895 (2012-11-30) |
| Registration Number | US-8849670 (2014-09-30) |
| Inventor / Address | |
| Applicant / Address | |
| Agent / Address | |
| Citation Information | Cited by: 34 / Patents cited: 443 |
Systems and methods are provided for receiving speech and non-speech communications of natural language questions and/or commands, transcribing the speech and non-speech communications to textual messages, and executing the questions and/or commands. The invention applies context, prior information, domain knowledge, and user-specific profile data to achieve a natural environment for one or more users presenting questions or commands across multiple domains. The systems and methods create, store, and use extensive personal profile information for each user, thereby improving the reliability of determining the context of the speech and non-speech communications and presenting the expected results for a particular question or command.
1. A system for facilitating notification of context changes across multiple electronic devices, the system comprising: one or more physical processors programmed to execute one or more computer program instructions which, when executed, cause the system to: register a library associated with a first electronic device to subscribe the first electronic device to one or more context events; receive a context change event from a second electronic device; and inform the first electronic device of the context change event to synchronize a context across the first electronic device and the second electronic device.

2. The system of claim 1, wherein the system is caused to: receive a subsequent context change event from the first electronic device; and inform the second electronic device of the subsequent context change event in response to the second electronic device having a subscription to the subsequent context change event.

3. The system of claim 1, wherein the system is caused to remove the library to unsubscribe the first electronic device from the one or more context events.

4. A method for facilitating notification of context changes across multiple electronic devices, the method being implemented in a computer system that includes one or more physical processors executing one or more computer program instructions which, when executed, perform the method, the method comprising: registering a library associated with a first electronic device to subscribe the first electronic device to one or more context events; receiving a context change event from a second electronic device; and informing the first electronic device of the context change event to synchronize a context across the first electronic device and the second electronic device.

5. The method of claim 4, further comprising: receiving a subsequent context change event from the first electronic device; and informing the second electronic device of the subsequent context change event in response to the second electronic device having a subscription to the subsequent context change event.

6. The method of claim 4, further comprising removing the library to unsubscribe the first electronic device from the one or more context events.

7. A system for facilitating context-based speech recognition, the system comprising: one or more physical processors programmed to execute one or more computer program instructions which, when executed, cause the system to: access one or more contexts that are stored in a context stack associated with a natural language utterance in response to one or more grammars in a context description grammar failing to correspond to information associated with the natural language utterance; compare the information to one or more context matchers to determine a most likely context associated with the natural language utterance; and use one or more grammar expression entries in the context description grammar to generate a command or request associated with the most likely context.

8. The system of claim 7, wherein the system is caused to communicate the generated command or request to an agent configured to process the generated command or request in the most likely context and generate a response to the natural language utterance.

9. The system of claim 7, wherein the system is caused to determine an intent or correct a recognition associated with the natural language utterance based on the most likely context.

10. The system of claim 7, wherein the system is caused to dynamically constrain a vocabulary of words associated with a virtual dictation grammar to include one or more decoy words for out-of-vocabulary words based on one or more prior utterances that were successfully transcribed by the speech recognition engine.

11. The system of claim 7, wherein the system is caused to use a dictation grammar or a virtual dictation grammar to completely or partially transcribe the natural language utterance.

12. A method for facilitating context-based speech recognition, the method being implemented in a computer system that includes one or more physical processors executing one or more computer program instructions which, when executed, perform the method, the method comprising: accessing one or more contexts that are stored in a context stack associated with a natural language utterance in response to one or more grammars in a context description grammar failing to correspond to information associated with the natural language utterance; comparing the information to one or more context matchers to determine a most likely context associated with the natural language utterance; and using one or more grammar expression entries in the context description grammar to generate a command or request associated with the most likely context.

13. The method of claim 12, further comprising communicating the generated command or request to an agent configured to process the generated command or request in the most likely context and generate a response to the natural language utterance.

14. The method of claim 12, further comprising determining an intent or correcting a recognition associated with the natural language utterance based on the most likely context.

15. The method of claim 12, further comprising dynamically constraining a vocabulary of words associated with a virtual dictation grammar to include one or more decoy words for out-of-vocabulary words based on one or more prior utterances that were successfully transcribed by the speech recognition engine.

16. The method of claim 12, further comprising using a dictation grammar or a virtual dictation grammar to completely or partially transcribe the natural language utterance.

17. A system for facilitating speech recognition using dictation and/or virtual dictation grammars, the system comprising: one or more physical processors programmed to execute one or more computer program instructions which, when executed, cause the system to: receive a natural language utterance; and transcribe the natural language utterance using a dictation grammar of a speech recognition module or a virtual dictation grammar of the speech recognition module based on whether a platform associated with the speech recognition module has the dictation grammar available.

18. The system of claim 17, wherein the system is caused to dynamically constrain a vocabulary of words associated with the virtual dictation grammar to include one or more decoy words for out-of-vocabulary words based on one or more prior utterances that the speech recognition module successfully transcribed.

19. The system of claim 17, wherein the system is caused to: identify a context matching a command or request associated with the transcribed natural language utterance; process, at an agent associated with the identified context, the command or request to generate a response to the natural language utterance; and update, at the agent, a context stack with information associated with the identified context, the generated command or request, or the generated response to enable one or more follow-up commands or requests associated with the identified context, the generated command or request, or the generated response.

20. The system of claim 17, wherein transcribing the natural language utterance comprises using the dictation grammar or the virtual dictation grammar to completely or partially transcribe the natural language utterance.

21. A method for facilitating speech recognition using dictation and/or virtual dictation grammars, the method being implemented in a computer system that includes one or more physical processors executing one or more computer program instructions which, when executed, perform the method, the method comprising: receiving a natural language utterance; and transcribing the natural language utterance using a dictation grammar of a speech recognition module or a virtual dictation grammar of the speech recognition module based on whether a platform associated with the speech recognition module has the dictation grammar available.

22. The method of claim 21, further comprising dynamically constraining a vocabulary of words associated with the virtual dictation grammar to include one or more decoy words for out-of-vocabulary words based on one or more prior utterances that the speech recognition module successfully transcribed.

23. The method of claim 21, further comprising: identifying a context matching a command or request associated with the transcribed natural language utterance; processing, at an agent associated with the identified context, the command or request to generate a response to the natural language utterance; and updating, at the agent, a context stack with information associated with the identified context, the generated command or request, or the generated response to enable one or more follow-up commands or requests associated with the identified context, the generated command or request, or the generated response.

24. The method of claim 21, wherein transcribing the natural language utterance comprises using the dictation grammar or the virtual dictation grammar to completely or partially transcribe the natural language utterance.
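The subscribe-and-notify scheme of claims 1-6 (register a library to subscribe a device, propagate a context change from one device to the other subscribers) can be sketched as a minimal in-memory event bus. All names here (`ContextBus`, `report_change`, the device ids) are illustrative assumptions, not terms from the patent, and the patent does not specify a data structure for the registry.

```python
from collections import defaultdict

class ContextBus:
    """Minimal sketch of claims 1-6: registering a 'library' subscribes a
    device to context events; a change reported by one device is pushed
    to every other subscriber to keep contexts synchronized."""

    def __init__(self):
        self._subs = defaultdict(set)   # event type -> subscribed device ids
        self._contexts = {}             # device id -> last known context

    def register_library(self, device_id, event_types):
        # Claims 1/4: registering the library subscribes the device.
        for ev in event_types:
            self._subs[ev].add(device_id)

    def remove_library(self, device_id):
        # Claims 3/6: removing the library unsubscribes the device.
        for subs in self._subs.values():
            subs.discard(device_id)

    def report_change(self, source_device, event_type, new_context):
        # Claims 1-2: inform each other subscriber of the change event
        # so the context stays synchronized across devices.
        self._contexts[source_device] = new_context
        notified = []
        for dev in self._subs.get(event_type, ()):
            if dev != source_device:
                self._contexts[dev] = new_context
                notified.append(dev)
        return notified

bus = ContextBus()
bus.register_library("phone", ["location"])
bus.register_library("car", ["location"])
notified = bus.report_change("car", "location", "highway")
```

Claim 2's symmetric case falls out of the same method: a later `report_change("phone", ...)` notifies the car, because both hold subscriptions.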
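The fallback in claims 7 and 12 — consult the context stack and score the utterance against context matchers only after the context description grammar fails — can be sketched as a scoring loop. The patent does not specify what a "context matcher" is, so keyword sets are used here purely as an illustrative stand-in.

```python
def most_likely_context(utterance_info, context_stack, context_matchers):
    """Sketch of claims 7/12: compare the utterance information against
    each context matcher and return the best-scoring context from the
    stack, or None if nothing matches at all."""
    best, best_score = None, 0
    for ctx in context_stack:                  # most recent context first
        keywords = context_matchers.get(ctx, set())
        score = len(keywords & utterance_info)  # naive overlap score
        if score > best_score:
            best, best_score = ctx, score
    return best

stack = ["navigation", "music"]                # hypothetical context stack
matchers = {
    "navigation": {"route", "turn", "traffic"},
    "music": {"play", "song", "volume"},
}
ctx = most_likely_context({"play", "next", "song"}, stack, matchers)
```

In the claims, the selected context then keys into the context description grammar's expression entries to build the command or request; that generation step is outside this sketch.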
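Claims 10, 15, 18, and 22 describe constraining a virtual dictation grammar's vocabulary from previously transcribed utterances and adding decoy words for out-of-vocabulary speech. A rough sketch of that idea, assuming whitespace tokenization and placeholder decoy tokens (the patent enumerates neither):

```python
def build_virtual_dictation_vocab(prior_transcripts, decoys):
    """Sketch of claims 18/22: the vocabulary is dynamically constrained
    to words from utterances the recognizer already transcribed
    successfully, plus decoy entries that absorb out-of-vocabulary
    speech instead of force-matching it to a real word."""
    vocab = set()
    for sentence in prior_transcripts:
        vocab.update(sentence.lower().split())
    return vocab, set(decoys)

def classify(word, vocab, decoys):
    # A recognized decoy signals an out-of-vocabulary word.
    if word in vocab:
        return "in-vocabulary"
    return "out-of-vocabulary" if word in decoys else "unseen"

vocab, decoys = build_virtual_dictation_vocab(
    ["play the next song", "show traffic on my route"],
    ["<oov1>", "<oov2>"],
)
```

A real recognizer would compile `vocab | decoys` into its grammar; the classification step here only illustrates why the decoys exist.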
Copyright KISTI. All Rights Reserved.