Context-sensitive dynamic update of voice to text model in a voice-enabled electronic device
IPC Classification
Country / Type
United States (US) Patent
Granted
International Patent Classification (IPC, 7th edition)
G10L-015/00
G10L-015/16
G10L-021/00
G10L-025/00
G10L-015/04
G10L-015/26
H04N-007/14
G10L-015/22
G10L-015/065
G10L-015/18
G10L-015/08
Application Number
US-0723250 (2015-05-27)
Registration Number
US-9966073 (2018-05-08)
Inventors / Address
Gao, Yuli
Sung, Sangsoo
Murugesan, Prathab
Applicant / Address
GOOGLE LLC
Agent / Address
Middleton Reutlinger
Citation Information
Cited by: 0
Cited patents: 26
Abstract
A voice to text model used by a voice-enabled electronic device is updated dynamically, in a context-sensitive manner, to facilitate recognition of entities that potentially may be spoken by a user in a voice input directed to the voice-enabled electronic device. The dynamic update to the voice to text model may be performed, for example, based upon processing of a first portion of a voice input, e.g., based upon detection of a particular type of voice action, and may be targeted to facilitate the recognition of entities that may occur in a later portion of the same voice input, e.g., entities that are particularly relevant to one or more parameters associated with the detected type of voice action.
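The two-phase flow the abstract describes — detect the voice-action type from the first portion of an utterance, then update the local recognition model before the second portion is processed — can be illustrated with a minimal sketch. All names here (`StreamingRecognizer`, `CONTEXT_ENTITIES`, the example entities) are hypothetical, and the model is reduced to a recognizable-phrase set rather than a real acoustic/language model:

```python
# Hypothetical sketch of the context-sensitive dynamic update described in
# the abstract. Names and data are illustrative, not from the patent.

# Action types whose parameters are context sensitive, mapped to entities
# that might plausibly follow in the same utterance (e.g. local media
# titles for "play", local contacts for "call").
CONTEXT_ENTITIES = {
    "play": ["bohemian rhapsody", "let it be"],
    "call": ["alice", "bob"],
}

class StreamingRecognizer:
    def __init__(self, base_vocabulary):
        # The local voice-to-text model, reduced here to a set of
        # recognizable phrases.
        self.vocabulary = set(base_vocabulary)

    def process_first_portion(self, text):
        """Detect the voice-action type from the first portion and, if it
        carries a context-sensitive parameter, update the model before the
        second portion arrives."""
        action = text.strip().split()[0]
        entities = CONTEXT_ENTITIES.get(action)
        if entities is not None:
            # Dynamic update: add entities likely to occur in the
            # remainder of this voice input.
            self.vocabulary.update(entities)
        return action

    def recognize(self, phrase):
        """Stand-in for decoding the second portion: an entity is only
        recognized if the (possibly updated) model contains it."""
        return phrase in self.vocabulary

recognizer = StreamingRecognizer(base_vocabulary=["play", "call", "stop"])
recognizer.process_first_portion("play")          # first portion arrives
print(recognizer.recognize("bohemian rhapsody"))  # True: now recognizable
```

The point of the ordering is latency: the update is triggered by the first portion alone, so the model is already augmented by the time the entity-bearing second portion is decoded.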
Representative Claims
1. A method, comprising: receiving a voice input with a voice-enabled electronic device, the voice input including an original request that includes first and second portions, the second portion including a first context sensitive entity among a plurality of context sensitive entities that are associated with a context sensitive parameter and that potentially may be spoken in the voice input; and in the voice-enabled electronic device, and responsive to receiving the first portion of the voice input: performing local processing of the first portion of the voice input to dynamically build at least a portion of a voice action prior to completely receiving the voice input with the voice-enabled electronic device; determining during the local processing whether the voice action is associated with the context sensitive parameter; and in response to a determination that the voice action is associated with the context sensitive parameter and prior to performing local processing of the second portion of the voice input including the first context sensitive entity, initiating a dynamic update to a local voice to text model used by the voice-enabled electronic device prior to completing the voice action to facilitate recognition of the first context sensitive entity.

2. The method of claim 1, wherein performing the local processing includes: converting a digital audio signal of the voice input to text using a streaming voice to text module of the voice-enabled electronic device, wherein the streaming voice to text module dynamically generates a plurality of text tokens from the digital audio signal; and dynamically building the portion of the voice action from at least a portion of the plurality of text tokens using a streaming semantic processor of the voice-enabled electronic device.

3. The method of claim 2, wherein determining whether the voice action is associated with the context sensitive parameter is performed by the streaming semantic processor, and wherein initiating the dynamic update to the local voice to text model includes communicating data from the streaming semantic processor to the streaming voice to text module to initiate the dynamic update of the local voice to text model.

4. The method of claim 1, wherein the local voice to text model comprises at least one decoding graph, and wherein initiating the dynamic update of the local voice to text model includes adding a decoding path to the at least one decoding graph corresponding to each of the plurality of context sensitive entities.

5. The method of claim 1, further comprising, in response to a determination that the voice action is associated with the context sensitive parameter, prefetching from an online service voice to text model update data associated with the plurality of context sensitive entities, wherein initiating the dynamic update of the local voice to text model includes communicating the prefetched voice to text model update data to dynamically update the local voice to text model.

6. The method of claim 1, wherein determining during the local processing whether the voice action is associated with a context sensitive parameter includes determining whether the voice action is a request to play a media item, wherein the context sensitive parameter includes a media item identifier for use in identifying the media item, and wherein the plurality of context sensitive entities identify a plurality of media items playable by the voice-enabled electronic device.

7. The method of claim 1, wherein determining during the local processing whether the voice action is associated with a context sensitive parameter includes determining whether the voice action is a request to communicate with a contact, wherein the context sensitive parameter includes a contact identifier for use in initiating a communication with the contact, and wherein the plurality of context sensitive entities identify a plurality of contacts accessible by the voice-enabled electronic device.

8. The method of claim 1, wherein the context sensitive parameter is a location-dependent parameter, and wherein the plurality of context sensitive entities identify a plurality of points of interest disposed in proximity to a predetermined location.

9. The method of claim 8, wherein the predetermined location comprises a current location of the voice-enabled electronic device, the method further comprising, in response to a determination that the voice action is associated with the context sensitive parameter, communicating the current location to an online service and prefetching from the online service voice to text model update data associated with the plurality of context sensitive entities.

10. A method, comprising: receiving a voice input including an original request including first and second portions with a voice-enabled electronic device, the voice input associated with a voice action having a context sensitive parameter, the first and second portions being different from one another and the second portion including a first context sensitive entity among a plurality of context sensitive entities that potentially may be spoken in the voice input; performing voice to text conversion locally in the voice-enabled electronic device on the first portion of the voice input using a local voice to text model to generate text for the first portion of the voice input; in response to determining that the voice action is associated with the context sensitive parameter, initiating a dynamic update to the local voice to text model after generating the text for the first portion of the voice input and prior to attempting to generate text for the second portion of the voice input to facilitate recognition of the plurality of context sensitive entities; and performing voice to text conversion locally in the voice-enabled electronic device on the second portion of the voice input using the dynamically updated local voice to text model to generate text for the second portion of the voice input, the generated text including text for the first context sensitive entity.

11. The method of claim 10, wherein performing the voice to text conversion includes converting a digital audio signal of the voice input to text using a streaming voice to text module of the voice-enabled electronic device, wherein the streaming voice to text module dynamically generates a plurality of text tokens from the digital audio signal, the method further comprising dynamically building at least a portion of the voice action prior to completely receiving the voice input with the voice-enabled electronic device from at least a portion of the plurality of text tokens using a streaming semantic processor of the voice-enabled electronic device.

12. The method of claim 11, wherein initiating the dynamic update to the local voice to text model is performed by the streaming semantic processor.

13. An apparatus including memory and one or more processors operable to execute instructions stored in the memory, comprising instructions to: receive a voice input with a voice-enabled electronic device, the voice input including an original request that includes first and second portions, the second portion including a first context sensitive entity among a plurality of context sensitive entities that are associated with a context sensitive parameter and that potentially may be spoken in the voice input; and in the voice-enabled electronic device, and responsive to receiving the first portion of the voice input: perform local processing of the first portion of the voice input to dynamically build at least a portion of a voice action prior to completely receiving the voice input with the voice-enabled electronic device; determine during the local processing whether the voice action is associated with the context sensitive parameter; and in response to a determination that the voice action is associated with the context sensitive parameter and prior to performing local processing of the second portion of the voice input including the first context sensitive entity, initiate a dynamic update to a local voice to text model used by the voice-enabled electronic device prior to completing the voice action to facilitate recognition of the first context sensitive entity.

14. The apparatus of claim 13, wherein the instructions include: first instructions implementing a streaming voice to text module that converts a digital audio signal of the voice input to text, wherein the first instructions dynamically generate a plurality of text tokens from the digital audio signal; and second instructions implementing a streaming semantic processor that dynamically builds the portion of the voice action from at least a portion of the plurality of text tokens.

15. The apparatus of claim 14, wherein the instructions that implement the streaming semantic processor determine whether the voice action is associated with the context sensitive parameter, and wherein the instructions that implement the streaming semantic processor communicate data from the streaming semantic processor to the streaming voice to text module to initiate the dynamic update of the local voice to text model.

16. The apparatus of claim 14, further comprising instructions that, in response to a determination that the voice action is associated with the context sensitive parameter, prefetch from an online service voice to text model update data associated with the plurality of context sensitive entities, wherein the instructions that initiate the dynamic update of the local voice to text model communicate the prefetched voice to text model update data to dynamically update the local voice to text model.

17. The apparatus of claim 14, wherein the instructions that determine during the local processing whether the voice action is associated with a context sensitive parameter determine whether the voice action is a request to play a media item, wherein the context sensitive parameter includes media item data for use in identifying the media item, and wherein the plurality of context sensitive entities includes identifiers for a plurality of media items playable by the voice-enabled electronic device.

18. The apparatus of claim 14, wherein the instructions that determine during the local processing whether the voice action is associated with a context sensitive parameter determine whether the voice action is a request to communicate with a contact, wherein the context sensitive parameter includes contact data for use in initiating a communication with the contact, and wherein the plurality of context sensitive entities includes identifiers for a plurality of contacts accessible by the voice-enabled electronic device.

19. The apparatus of claim 14, wherein the context sensitive parameter is a location-dependent parameter, wherein the plurality of context sensitive entities includes identifiers for a plurality of points of interest disposed in proximity to a predetermined location, the apparatus further comprising instructions that, in response to a determination that the voice action is associated with the context sensitive parameter, communicate the predetermined location to an online service and prefetch from the online service voice to text model update data associated with the identifiers for the plurality of points of interest disposed in proximity to the predetermined location.

20. A non-transitory computer readable storage medium storing computer instructions executable by one or more processors to perform a method comprising: receiving a voice input with a voice-enabled electronic device, the voice input including an original request that includes first and second portions, the second portion including a first context sensitive entity among a plurality of context sensitive entities that are associated with a context sensitive parameter and that potentially may be spoken in the voice input; and in the voice-enabled electronic device, and responsive to receiving the first portion of the voice input: performing local processing of the first portion of the voice input to dynamically build at least a portion of a voice action prior to completely receiving the voice input with the voice-enabled electronic device; determining during the local processing whether the voice action is associated with the context sensitive parameter; and in response to a determination that the voice action is associated with the context sensitive parameter and prior to performing local processing of the second portion of the voice input including the first context sensitive entity, initiating a dynamic update to a local voice to text model used by the voice-enabled electronic device prior to completing the voice action to facilitate recognition of the first context sensitive entity.

21. The method of claim 1, wherein initiating the dynamic update is further performed after receiving the at least a portion of the voice input and prior to performing local processing of any voice input that includes the first context sensitive entity.
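Claims 4 and 5 characterize the local model as a decoding graph that is updated by adding one decoding path per context-sensitive entity, optionally built from update data prefetched from an online service. A minimal trie-based sketch under those assumptions follows; the node layout, the `<end>` marker, and the example entities are illustrative only, not the patent's actual graph representation:

```python
# Illustrative sketch of claims 4-5: a decoding graph updated by adding
# one path per context-sensitive entity. The trie structure and names
# here are assumptions, not taken from the patent.

class DecodingGraph:
    def __init__(self):
        self.root = {}

    def add_path(self, tokens):
        """Add a decoding path for one entity, one node per token."""
        node = self.root
        for tok in tokens:
            node = node.setdefault(tok, {})
        node["<end>"] = True  # terminal marker: a complete entity ends here

    def decode(self, tokens):
        """An entity decodes successfully only if its full path exists."""
        node = self.root
        for tok in tokens:
            if tok not in node:
                return False
            node = node[tok]
        return node.get("<end>", False)

graph = DecodingGraph()
# Stand-in for prefetched voice-to-text model update data tied to the
# detected voice action (e.g. playable media titles).
for entity in ["bohemian rhapsody", "stairway to heaven"]:
    graph.add_path(entity.split())

print(graph.decode("bohemian rhapsody".split()))  # True: path was added
print(graph.decode("yellow submarine".split()))   # False: no such path
```

Adding paths rather than rebuilding the graph keeps the update cheap enough to run between the first and second portions of a single utterance, which is the timing constraint the claims emphasize.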
Patents Cited by This Patent (26)
Hwang, Kwangil, Apparatus, method, and medium for generating grammar network for use in speech recognition and dialogue speech recognition.
Weider, Chris; Kennewick, Richard; Kennewick, Mike; Di Cristo, Philippe; Kennewick, Robert A.; Menaker, Samuel; Armstrong, Lynn Elise, Mobile systems and methods of supporting natural language human-machine interactions.
Harris, Paul Evert; Deckert, David Grant; Murray, Douglas G.; Denny, Thomas W., Providing access to information of multiple types via coordination of distinct information services.
Mitchell, John C. (GB); Heard, Allen James (GB); Corbett, Steven Norman (GB); Daniel, Nicholas John (GB), Speech-to-text dictation system with audio message capability.
Bachran, Michael; Besling, Stefan; Mauelshagen, Martin; Wiegard, Hanno, System and method for integrating runtime usage statistics with developing environment.
Stern, Benjamin J.; Bocchieri, Enrico Luigi; Conkie, Alistair D.; Giulianelli, Danilo, System and method for managing models for embedded speech and language processing.
Dragosh, Pamela Leigh; Roe, David Bjorn; Sharp, Robert Douglas, System and method for providing remote automatic speech recognition and text-to-speech services via a packet network.