| Country / Type | United States (US) Patent, Granted |
|---|---|
| International Patent Classification (IPC, 7th ed.) | |
| Application No. | US-0019834 (2011-02-02) |
| Registration No. | US-8326634 (2012-12-04) |
| Inventor / Address | |
| Applicant / Address | |
| Attorney / Address | |
| Citation Info | Cited by: 65 / Patents cited: 367 |
Systems and methods are provided for receiving speech and non-speech communications of natural language questions and/or commands, transcribing the speech and non-speech communications to textual messages, and executing the questions and/or commands. The invention applies context, prior information, domain knowledge, and user specific profile data to achieve a natural environment for one or more users presenting questions or commands across multiple domains. The systems and methods create, store, and use extensive personal profile information for each user, thereby improving the reliability of determining the context of the speech and non-speech communications and presenting the expected results for a particular question or command.
1. A system for multi-pass speech recognition, comprising: an input device configured to receive a natural language utterance; and a multi-pass speech recognition module configured to transcribe the natural language utterance, wherein to transcribe the natural language utterance, the multi-pass speech recognition module is further configured to: use a dictation grammar to transcribe the natural language utterance in response to a platform associated with the multi-pass speech recognition module having the dictation grammar available; or use a virtual dictation grammar to transcribe the natural language utterance in response to the platform associated with the multi-pass speech recognition module not having the dictation grammar available.

2. The system of claim 1, wherein the multi-pass speech recognition module is further configured to dynamically constrain a vocabulary of words associated with the virtual dictation grammar to include one or more decoy words for out-of-vocabulary words based on one or more prior utterances that were successfully transcribed.

3. The system of claim 1, further comprising an agent associated with a context matching a command or request associated with the transcribed natural language utterance, wherein the agent is configured to: process the command or request to generate a response to the natural language utterance; and update a context stack with information associated with the matching context, the generated command or request, or the generated response to enable one or more follow-up commands or requests associated with the matching context, the generated command or request, or the generated response.

4. The system of claim 1, wherein the multi-pass speech recognition module is configured to use the dictation grammar or the virtual dictation grammar to completely or partially transcribe the natural language utterance.

5. A system for multi-pass speech recognition, comprising: an input device configured to receive a natural language utterance; and a multi-pass speech recognition module configured to: determine whether a platform associated with the multi-pass speech recognition module has a dictation grammar available or a virtual dictation grammar available; and use the dictation grammar or the virtual dictation grammar to transcribe the natural language utterance based on whether the platform has the dictation grammar available or the virtual dictation grammar available.

6. A method for multi-pass speech recognition, comprising: receiving a natural language utterance at an input device; and transcribing the natural language utterance with a multi-pass speech recognition module, wherein transcribing the natural language utterance with the multi-pass speech recognition module includes: using a dictation grammar to transcribe the natural language utterance in response to determining that a platform associated with the multi-pass speech recognition module has the dictation grammar available; or using a virtual dictation grammar to transcribe the natural language utterance in response to determining that the platform associated with the multi-pass speech recognition module does not have the dictation grammar available.

7. The method of claim 6, further comprising dynamically constraining a vocabulary of words associated with the virtual dictation grammar to include one or more decoy words for out-of-vocabulary words based on one or more prior utterances that the multi-pass speech recognition module successfully transcribed.

8. The method of claim 6, further comprising: identifying a context matching a command or request associated with the transcribed natural language utterance; processing the command or request at an agent associated with the identified context, wherein the agent processes the command or request to generate a response to the natural language utterance; and updating a context stack with information associated with the identified context, the generated command or request, or the generated response, wherein the agent updates the context stack with the information associated with the identified context, the generated command or request, or the generated response to enable one or more follow-up commands or requests associated with the identified context, the generated command or request, or the generated response.

9. The method of claim 6, wherein transcribing the natural language utterance with the multi-pass speech recognition module includes the multi-pass speech recognition module using the dictation grammar or the virtual dictation grammar to completely or partially transcribe the natural language utterance.

10. A method for multi-pass speech recognition, comprising: receiving a natural language utterance at an input device; determining whether a platform associated with a multi-pass speech recognition module has a dictation grammar available or a virtual dictation grammar available; and transcribing the natural language utterance with the multi-pass speech recognition module, wherein the multi-pass speech recognition module uses the dictation grammar or the virtual dictation grammar to transcribe the natural language utterance based on whether the platform has the dictation grammar available or the virtual dictation grammar available.

11. A system for knowledge-enhanced speech recognition, comprising: a context stack configured to store one or more expected contexts associated with a natural language utterance; and a knowledge-enhanced speech recognition engine, wherein the knowledge-enhanced speech recognition engine includes one or more processors configured to: access the one or more expected contexts stored in the context stack in response to one or more active grammars in a context description grammar failing to completely match information associated with the natural language utterance; compare the information associated with the natural language utterance to one or more context specific matchers to determine a most likely context associated with the natural language utterance from the one or more expected contexts stored in the context stack; and use one or more grammar expression entries in the context description grammar to generate a command or request associated with the most likely context.

12. The system of claim 11, wherein the one or more processors are further configured to communicate the generated command or request to an agent configured to process the generated command or request in the most likely context and generate a response to the natural language utterance.

13. The system of claim 11, wherein the one or more processors are further configured to determine an intent or correct a recognition associated with the natural language utterance based on the most likely context.

14. The system of claim 11, wherein the speech recognition engine is configured to dynamically constrain a vocabulary of words associated with a virtual dictation grammar to include one or more decoy words for out-of-vocabulary words based on one or more prior utterances that were successfully transcribed by the speech recognition engine.

15. The system of claim 11, wherein the one or more processors are configured to use a dictation grammar or a virtual dictation grammar to completely or partially transcribe the natural language utterance.

16. A system for knowledge-enhanced speech recognition, comprising: a context stack configured to store one or more expected contexts associated with a natural language utterance; a knowledge-enhanced speech recognition engine, wherein the knowledge-enhanced speech recognition engine includes one or more processors configured to: access the one or more expected contexts stored in the context stack in response to one or more active grammars in a context description grammar failing to completely match information associated with the natural language utterance; compare the information associated with the natural language utterance to one or more context specific matchers to determine a most likely context associated with the natural language utterance from the one or more expected contexts stored in the context stack; and use one or more grammar expression entries in the context description grammar to generate a command or request associated with the most likely context; and an agent configured to: process the generated command or request in the most likely context to generate a response to the natural language utterance; and update an ordered list associated with the one or more expected contexts in the context stack with information associated with one or more of the most likely context, the generated command or request, or the generated response to enable one or more follow-up commands or requests associated with the most likely context, the generated command or request, or the generated response.

17. A system for knowledge-enhanced speech recognition, comprising: a context stack configured to store one or more expected contexts associated with a natural language utterance; and a knowledge-enhanced speech recognition engine, wherein the knowledge-enhanced speech recognition engine includes one or more processors configured to: access the one or more expected contexts stored in the context stack in response to one or more active grammars in a context description grammar failing to completely match information associated with the natural language utterance; compare the information associated with the natural language utterance to one or more context specific matchers to determine a most likely context associated with the natural language utterance from the one or more expected contexts stored in the context stack; and use one or more grammar expression entries in the context description grammar to generate a command or request associated with the most likely context, wherein the information compared to the one or more context specific matchers includes phonetic information associated with the natural language utterance or text combinations from a transcription associated with the natural language utterance.

18. The system of claim 17, wherein the speech recognition engine is configured to dynamically constrain a vocabulary of words associated with a virtual dictation grammar to include one or more decoy words for out-of-vocabulary words based on one or more prior utterances that were successfully transcribed by the speech recognition engine.

19. The system of claim 17, wherein the one or more processors are configured to use a dictation grammar or a virtual dictation grammar to completely or partially transcribe the natural language utterance.

20. A method for knowledge-enhanced speech recognition, comprising: storing one or more expected contexts in a context stack, wherein a knowledge-enhanced speech recognition engine that includes one or more processors accesses the one or more expected contexts in the context stack in response to one or more active grammars in a context description grammar failing to completely match information associated with the natural language utterance; comparing the information associated with the natural language utterance to one or more context specific matchers to determine a most likely context associated with the natural language utterance, wherein the knowledge-enhanced speech recognition engine determines the most likely context from the one or more expected contexts in the context stack; and using one or more grammar expression entries in the context description grammar to generate a command or request associated with the most likely context.

21. The method of claim 20, further comprising communicating the generated command or request to an agent that processes the generated command or request in the most likely context and generates a response to the natural language utterance.

22. The method of claim 20, wherein the knowledge-enhanced speech recognition engine determines an intent or corrects a recognition associated with the natural language utterance based on the most likely context.

23. The method of claim 20, further comprising: dynamically constraining a vocabulary of words associated with a virtual dictation grammar to include one or more decoy words for out-of-vocabulary words based on one or more prior utterances that were successfully transcribed by the speech recognition engine.

24. The method of claim 20, further comprising: using a dictation grammar or a virtual dictation grammar to completely or partially transcribe the natural language utterance.

25. A method for knowledge-enhanced speech recognition, comprising: storing one or more expected contexts in a context stack, wherein a knowledge-enhanced speech recognition engine that includes one or more processors accesses the one or more expected contexts in the context stack in response to one or more active grammars in a context description grammar failing to completely match information associated with the natural language utterance; comparing the information associated with the natural language utterance to one or more context specific matchers to determine a most likely context associated with the natural language utterance, wherein the knowledge-enhanced speech recognition engine determines the most likely context from the one or more expected contexts in the context stack; using one or more grammar expression entries in the context description grammar to generate a command or request associated with the most likely context; processing the generated command or request with an agent associated with the most likely context, wherein the agent processes the generated command or request to generate a response to the natural language utterance; and updating an ordered list associated with the one or more expected contexts in the context stack with information associated with one or more of the most likely context, the generated command or request, or the generated response, wherein the agent updates the ordered list to enable one or more follow-up commands or requests associated with the most likely context, the generated command or request, or the generated response.

26. A method for knowledge-enhanced speech recognition, comprising: storing one or more expected contexts in a context stack, wherein a knowledge-enhanced speech recognition engine that includes one or more processors accesses the one or more expected contexts in the context stack in response to one or more active grammars in a context description grammar failing to completely match information associated with the natural language utterance; comparing the information associated with the natural language utterance to one or more context specific matchers to determine a most likely context associated with the natural language utterance, wherein the knowledge-enhanced speech recognition engine determines the most likely context from the one or more expected contexts in the context stack; and using one or more grammar expression entries in the context description grammar to generate a command or request associated with the most likely context, wherein the information compared to the one or more context specific matchers includes phonetic information associated with the natural language utterance or text combinations from a transcription associated with the natural language utterance.

27. The method of claim 26, further comprising: dynamically constraining a vocabulary of words associated with a virtual dictation grammar to include one or more decoy words for out-of-vocabulary words based on one or more prior utterances that were successfully transcribed by the speech recognition engine.

28. The method of claim 26, further comprising: using a dictation grammar or a virtual dictation grammar to completely or partially transcribe the natural language utterance.

29. A system for synchronizing context across multiple electronic devices, comprising: one or more processors configured to: subscribe a first electronic device to one or more context events; receive a context change event from a second electronic device; and inform the first electronic device of the context change event to synchronize a context across the first electronic device and the second electronic device; and a registration module configured to: register a library specifically associated with the first electronic device to subscribe the first electronic device to the one or more context events; and remove the library specifically associated with the first electronic device to unsubscribe the first electronic device from the one or more context events.

30. The system of claim 29, wherein the one or more processors are further configured to: receive a subsequent context change event from the first electronic device; and inform the second electronic device of the subsequent context change event in response to the second electronic device having a subscription to the subsequent context change event.

31. The system of claim 29, further comprising a context tracking module configured to track the context change event received from the second electronic device and track the context synchronized across the first electronic device and the second electronic device.

32. The system of claim 29, wherein the one or more processors are configured to inform the first electronic device of the context change event in response to the one or more subscribed context events including the context change event.

33. A method for synchronizing context across multiple electronic devices, comprising: subscribing a first electronic device to one or more context events; receiving a context change event from a second electronic device; informing the first electronic device of the context change event to synchronize a context across the first electronic device and the second electronic device; registering a library specifically associated with the first electronic device to subscribe the first electronic device to the one or more context events; and removing the library specifically associated with the first electronic device to unsubscribe the first electronic device from the one or more context events.

34. The method of claim 33, further comprising: receiving a subsequent context change event from the first electronic device; and informing the second electronic device of the subsequent context change event in response to the second electronic device having a subscription to the subsequent context change event.

35. The method of claim 33, further comprising tracking the context change event received from the second electronic device and the context synchronized across the first electronic device and the second electronic device at a context tracking module.

36. The method of claim 33, wherein a context manager informs the first electronic device of the context change event in response to the one or more subscribed context events including the context change event.
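The multi-pass recognition claims (claims 1–10) describe a capability check followed by a grammar choice: use the platform's dictation grammar if one is available, otherwise fall back to a "virtual" dictation grammar whose vocabulary is constrained from prior successful transcriptions plus decoy words (claims 2 and 7). The sketch below illustrates that selection logic only; every class and function name is a hypothetical stand-in, not the patent's implementation or any real recognizer API.

```python
# Hypothetical sketch of the dictation-grammar fallback in claims 1-10.
# The grammar classes stand in for whatever recognizer backends a
# platform actually provides; transcription itself is stubbed out.

class DictationGrammar:
    """Full platform dictation grammar (large general vocabulary)."""

    def transcribe(self, utterance):
        return f"[dictation] {utterance}"


class VirtualDictationGrammar:
    """Constrained grammar built from prior successful transcriptions."""

    def __init__(self):
        self.vocabulary = set()
        self.decoys = set()

    def constrain(self, prior_transcriptions):
        # Claims 2/7: limit the vocabulary to words from prior successful
        # transcriptions, and add decoy entries that absorb
        # out-of-vocabulary words instead of forcing a wrong match.
        for text in prior_transcriptions:
            self.vocabulary.update(text.split())
        self.decoys = {f"<oov-{i}>" for i in range(3)}

    def transcribe(self, utterance):
        return f"[virtual] {utterance}"


def transcribe(utterance, platform_has_dictation, prior_transcriptions=()):
    """Claims 1/6: choose the grammar based on platform capability."""
    if platform_has_dictation:
        return DictationGrammar().transcribe(utterance)
    grammar = VirtualDictationGrammar()
    grammar.constrain(prior_transcriptions)
    return grammar.transcribe(utterance)


print(transcribe("play some jazz", platform_has_dictation=False,
                 prior_transcriptions=["play the news"]))
# -> [virtual] play some jazz
```

The point of the structure is that the same `transcribe` entry point works on platforms with and without a built-in dictation grammar; only the fallback path pays the cost of building a constrained vocabulary.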
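The knowledge-enhanced recognition claims (claims 11–28) describe a fallback for when no active grammar fully matches an utterance: consult a context stack of expected contexts, score the utterance against context-specific matchers, and pick the most likely context. The following is a deliberately minimal sketch of that scoring step, using keyword overlap as the matcher; the real claims allow phonetic information as well (claims 17 and 26), and nothing here reflects the patent's actual matcher design.

```python
# Hypothetical sketch of the context-stack fallback in claims 11-28:
# when active grammars fail to fully match, score the utterance against
# per-context matchers and pick the most likely expected context.

def most_likely_context(utterance_words, context_stack, matchers):
    """Return the best-scoring context from the stack, or None."""
    best, best_score = None, 0
    for context in context_stack:  # ordered list of expected contexts
        keywords = matchers.get(context, set())
        score = len(keywords & set(utterance_words))
        if score > best_score:
            best, best_score = context, score
    return best


# Toy matchers: each expected context maps to a keyword set.
matchers = {
    "music": {"play", "song", "album"},
    "weather": {"forecast", "rain", "temperature"},
}
stack = ["weather", "music"]

print(most_likely_context("play that song again".split(), stack, matchers))
# -> music
```

Claims 16 and 25 then hand the generated command to an agent for the chosen context and push the result back onto the ordered context list, which is what makes follow-up utterances ("play the next one") resolvable without re-stating the context.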
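The final group of claims (claims 29–36) is a subscribe/notify scheme: devices register a device-specific "library" to subscribe to context events, and a manager forwards each context change event to the other subscribed devices so their contexts stay synchronized. A minimal sketch, modeling the registered library as a plain callback (the class and method names are illustrative assumptions, not the patent's API):

```python
# Hypothetical sketch of the subscribe/notify scheme in claims 29-36.
# A context manager registers a per-device "library" (here just a
# callback) and forwards context-change events to every other device.

class ContextManager:
    def __init__(self):
        self.subscribers = {}  # device id -> callback ("library")

    def register(self, device_id, callback):
        """Claims 29/33: register a device-specific library to subscribe."""
        self.subscribers[device_id] = callback

    def unregister(self, device_id):
        """Claims 29/33: remove the library to unsubscribe."""
        self.subscribers.pop(device_id, None)

    def context_changed(self, source_device, event):
        """Inform every other subscriber so context stays synchronized."""
        for device_id, callback in self.subscribers.items():
            if device_id != source_device:
                callback(event)


received = []
manager = ContextManager()
manager.register("phone", received.append)
manager.register("car", lambda event: None)

# The car reports a context change; the phone is informed (claim 29),
# and the source device is not echoed back to itself.
manager.context_changed("car", {"context": "navigation"})
print(received)
# -> [{'context': 'navigation'}]
```

Claims 30 and 34 make the channel symmetric (a later change on the first device is forwarded to the second), and claims 31 and 35 add a tracking module that records the synchronized context; both fit naturally as extensions of the loop above.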
Copyright KISTI. All Rights Reserved.