IPC Classification Information

Country / Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.): (not listed)
Application number: US-0586975 (2012-08-16)
Registration number: US-8924219 (2014-12-30)
Inventors / Address:
- Bringert, Bjorn Erik
- Barra, Hugo
- Cohen, Richard Zarek
Applicant / Address: (not listed)
Agent / Address: McDonnell Boehnen Hulbert & Berghoff LLP
Citation information: cited by 14 patents; cites 25 patents
Abstract
In a first speech detection mode, a computing device listens for speech that corresponds to one of a plurality of activation phrases or “hotwords” that cause the computing device to recognize further speech input in a second speech detection mode. Each activation phrase is associated with a respective application. During the first speech detection mode, the computing device compares detected speech to the activation phrases to identify any potential matches. In response to identifying a matching activation phrase with a sufficiently high confidence, the computing device invokes the application associated with the matching activation phrase and enters the second speech detection mode. In the second speech detection mode, the computing device listens for speech input related to the invoked application.
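The two-mode flow described in the abstract can be sketched as a small state machine. The following is a minimal illustration only, not the patent's implementation: the activation phrases, application names, and the toy "recognizer" (exact string match with a fixed confidence) are invented for the example; a real system would score speech against acoustic and language models.

```python
# Toy sketch of the two-mode speech detection flow described above.
# All phrases, app names, and the fixed-confidence matcher are invented
# for illustration; they are not from the patent.

HOTWORD_THRESHOLD = 0.9          # confidence required in the first mode

# Each activation phrase maps to its associated application.
ACTIVATION_PHRASES = {
    "open maps": "maps_app",
    "take a note": "notes_app",
}

def match_hotword(speech):
    """First mode: compare speech to the activation phrases only."""
    confidence = 1.0 if speech in ACTIVATION_PHRASES else 0.0
    if confidence >= HOTWORD_THRESHOLD:
        return ACTIVATION_PHRASES[speech]    # app to invoke
    return None                              # stay in the first mode

def handle_utterances(utterances):
    """Run the two-mode loop over a stream of recognized utterances."""
    mode, app, delivered = "first", None, []
    for speech in utterances:
        if mode == "first":
            app = match_hotword(speech)
            if app is not None:
                mode = "second"              # invoke app, switch modes
        else:
            delivered.append((app, speech))  # route input to invoked app
    return delivered
```

Note how speech that matches no activation phrase in the first mode is simply ignored, while everything captured after the mode switch is routed to the invoked application.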
Representative Claims
1. A method for a computing device, the method comprising: during a first speech detection mode, the computing device: capturing first audio, detecting first speech in the captured first audio, comparing the detected first speech to a plurality of activation phrases to identify any potential matches based on a first language model, wherein the plurality of activation phrases is associated with a plurality of applications on the computing device such that each application in the plurality of applications is associated with a respective activation phrase in the plurality of activation phrases, and wherein the first language model covers the plurality of activation phrases, and in response to identifying a matching activation phrase within a confidence threshold, invoking the application in the plurality of applications associated with the matching activation phrase and entering a second speech detection mode; and during the second speech detection mode, the computing device: in response to entering the second speech detection mode, reducing the confidence threshold, capturing second audio, detecting second speech in the captured second audio, obtaining a recognition result of the detected second speech based on a second language model, wherein the second language model has a wider coverage than the first language model, determining whether the recognition result is identified within the confidence threshold, and after determining that the recognition result is identified within the confidence threshold, providing the recognition result to the invoked application.

2. The method of claim 1, further comprising: the computing device detecting a trigger; and the computing device entering the first speech detection mode in response to the detected trigger.

3. The method of claim 2, wherein the computing device detecting a trigger comprises the computing device detecting that the computing device is docked.

4. The method of claim 2, wherein the computing device detecting a trigger comprises the computing device detecting that the computing device is being powered by an external source.

5. The method of claim 2, wherein the computing device detecting a trigger comprises the computing device detecting that the computing device has received an asynchronous communication.

6. The method of claim 2, wherein the computing device detecting a trigger comprises the computing device detecting a manual actuation on the computing device.

7. The method of claim 1, further comprising: during the second speech detection mode, the computing device visually displaying a plurality of options based on the invoked application, wherein each option is associated with a respective action.

8. The method of claim 7, further comprising: determining that the recognition result selects one of the displayed options; and the computing device initiating the action associated with the selected option.

9. The method of claim 1, wherein identifying a matching activation phrase within the confidence threshold comprises: determining that the detected first speech matches the matching activation phrase with a confidence that exceeds a first confidence threshold.

10. The method of claim 9, further comprising: determining a confidence of the recognition result; and determining that the confidence of the recognition result exceeds a second confidence threshold.

11. The method of claim 9, wherein the first confidence threshold is higher than the second confidence threshold.

12. The method of claim 1, wherein comparing the detected first speech to a plurality of activation phrases to identify any potential matches based on a first language model comprises using a speech recognizer with the first language model, wherein the speech recognizer is internal to the computing device.

13. The method of claim 12, wherein obtaining a recognition result of the detected second speech based on a second language model comprises obtaining the recognition result from a network speech recognizer that is configured to use the second language model, wherein the network speech recognizer is in communication with the computing device.

14. A non-transitory computer readable medium having stored therein instructions executable by at least one processor to cause a computing device to perform functions, the functions comprising: during a first speech detection mode: capturing first audio, detecting first speech in the captured first audio, comparing the detected first speech to a plurality of activation phrases to identify any potential matches based on a first language model, wherein the plurality of activation phrases is associated with a plurality of applications on the computing device such that each application in the plurality of applications is associated with a respective activation phrase in the plurality of activation phrases, and wherein the first language model covers the plurality of activation phrases, and in response to identifying a matching activation phrase within a confidence threshold, invoking the application in the plurality of applications associated with the matching activation phrase and entering a second speech detection mode; and during the second speech detection mode: in response to entering the second speech detection mode, reducing the confidence threshold, capturing second audio, detecting second speech in the captured second audio, obtaining a recognition result of the detected second speech based on a second language model, wherein the second language model has a wider coverage than the first language model, determining whether the recognition result is identified within the confidence threshold, and after determining that the recognition result is identified within the confidence threshold, providing the recognition result to the invoked application.

15. The non-transitory computer readable medium of claim 14, wherein the functions further comprise: detecting a trigger; and entering the first speech detection mode in response to detecting the trigger.

16. The non-transitory computer readable medium of claim 14, wherein the functions further comprise: during the second speech detection mode, visually displaying a plurality of options based on the invoked application.

17. A computing device, comprising: at least one processor; data storage; instructions stored in the data storage, wherein the instructions are executable by the at least one processor to cause the computing device to perform functions, the functions comprising: capturing first audio; detecting first speech in the captured first audio; comparing the detected first speech to a plurality of activation phrases to identify any potential matches based on a first language model, wherein the plurality of activation phrases is associated with a plurality of applications on the computing device such that each application in the plurality of applications is associated with a respective activation phrase in the plurality of activation phrases, and wherein the first language model covers the plurality of activation phrases; in response to identifying a matching activation phrase within a confidence threshold, invoking the application in the plurality of applications associated with the matching activation phrase and entering a second speech detection mode; in response to entering the second speech detection mode, reducing the confidence threshold; capturing second audio; detecting second speech in the captured second audio; obtaining a recognition result of the detected second speech based on a second language model, wherein the second language model has a wider coverage than the first language model; determining whether the recognition result is identified within the confidence threshold; and after determining that the recognition result is identified within the confidence threshold, providing the recognition result to the invoked application.

18. The computing device of claim 17, further comprising a display, wherein the functions further comprise displaying on the display a plurality of options based on the invoked application, wherein each option is associated with a respective action.

19. The computing device of claim 18, wherein the functions further comprise: determining that the recognition result selects one of the displayed options; and initiating the action associated with the selected option.

20. The computing device of claim 18, wherein the invoked application is configured to populate an input field on the display based on the recognition result.
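Claims 1 and 9 through 11 turn on the confidence-threshold handling: the hotword-matching mode applies a first, higher threshold, and the threshold is reduced after the device enters the second mode. A schematic illustration of that decision, with threshold values invented for the example:

```python
# Illustrative threshold handling per claims 1 and 9-11: the first-mode
# (hotword) threshold is higher than the reduced threshold applied after
# entering the second mode. The specific values are invented.

FIRST_THRESHOLD = 0.9   # strict, to avoid false hotword activations
SECOND_THRESHOLD = 0.6  # reduced once the second mode is entered

def accept(confidence, mode):
    """Return whether a recognition result is identified within the
    confidence threshold that applies in the current mode."""
    threshold = FIRST_THRESHOLD if mode == "first" else SECOND_THRESHOLD
    return confidence >= threshold
```

The asymmetry reflects the design the claims describe: an accidental activation is costly, so the hotword match must be highly confident, while speech captured after an application has already been invoked can be accepted more leniently.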