Country / Status | United States (US) Patent, Granted |
---|---|
IPC (7th ed.) | |
Application No. | US-0941249 (2015-11-13) |
Registration No. | US-9633660 (2017-04-25) |
Inventors / Address | |
Applicant / Address | |
Agent / Address | |
Citation Info | Cited by: 38 patents; Cites: 2,059 patents |
The present disclosure generally relates to systems and methods for processing received voice inputs for user identification. In an example process, voice input can be processed using a subset of words from a library used to identify the words or phrases of the voice input. The subset can be selected such that voice inputs provided by the user are more likely to include words from the subset. The subset of the library can be selected using any suitable approach, including for example based on the user's interests and words that relate to those interests. For example, the subset can include one or more words related to media items stored by the user on the electronic device, names of the user's contacts, applications or processes used by the user, or any other words relating to the user's interactions with the device.
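The subset-selection idea in the abstract — restricting the recognition vocabulary to words tied to the user's stored content, contacts, and applications — can be sketched as follows. This is a minimal illustration, not the patent assignee's implementation; all function and parameter names here are hypothetical.

```python
def build_user_subset(library, media_metadata, contacts, app_names):
    """Return the words of the full recognition library that the user
    is likely to speak, based on content stored on the device.

    library        -- set of all words the recognizer knows
    media_metadata -- list of dicts of metadata values (artist, title, ...)
    contacts       -- list of contact names
    app_names      -- list of installed application names
    """
    interest_words = set()
    # Words drawn from metadata of media items the user stored.
    for item in media_metadata:
        for value in item.values():
            interest_words.update(value.lower().split())
    # Names of the user's contacts and applications.
    interest_words.update(name.lower() for name in contacts)
    interest_words.update(app.lower() for app in app_names)
    # Keep only words that actually exist in the full library.
    return {w for w in library if w.lower() in interest_words}
```

Processing a voice input against this smaller set reduces the search space, which is the efficiency the abstract implies.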
1. A method for processing a voice input, comprising: receiving a voice input; identifying a user providing the voice input; identifying a subset of library words associated with the identified user; and processing the received voice input using the identified subset.

2. The method of claim 1, further comprising: identifying an electronic device operation corresponding to the processed voice input.

3. The method of claim 2, further comprising: retrieving at least one instruction from the processed voice input; and identifying at least one electronic device operation corresponding to the retrieved at least one instruction.

4. The method of claim 3, wherein: the at least one instruction comprises an operation and an argument qualifying the operation.

5. The method of claim 4, wherein: the operation comprises a media playback operation; and the argument comprises a particular media item.

6. The method of claim 1, further comprising: identifying the user's interests; and selecting a subset of library words that relate to the user's interests.

7. The method of claim 1, wherein processing further comprises: detecting a plurality of words in the received voice input; comparing the identified plurality of words with the identified subset of library words; and identifying a plurality of words from the identified subset that correspond to the detected plurality of words.

8. The method of claim 7, further comprising: extracting an instruction for the identified plurality of words; and identifying an operation corresponding to the extracted instruction.

9. The method of claim 1, wherein identifying the user further comprises: extracting a voice print from the received voice input; comparing the extracted voice print with a library of known voice prints; and identifying the user having a voice print in the library of known voice prints that corresponds to the received voice print.

10. An electronic device controllable by voice inputs, comprising a processor, an input interface, and an output interface, the processor operative to: direct the input interface to receive a voice input from a user; identify the user providing the received voice input; provide the identity of the user to a library of words used to process voice inputs; receive a subset of the library of words, wherein the subset includes words likely to be used by the identified user; process the voice input using the received subset; and direct the output interface to provide an output based on the processed voice input.

11. The electronic device of claim 10, wherein the processor is further operative to: direct the output interface to play back a media item.

12. The electronic device of claim 11, wherein the processor is further operative to: identify a media playback operation from the voice input; and identify a media item qualifying the media playback operation from the voice input.

13. The electronic device of claim 10, wherein the processor is further operative to identify the user from at least one of: the content of the voice input; the time at which the voice input was provided; and the voice signature of the voice print.

14. The electronic device of claim 10, wherein: the subset of media item words includes words corresponding to metadata values of content selected by the user for storage on the electronic device.

15. The electronic device of claim 14, wherein the content selected by the user for storage on the electronic device comprises at least one of: media items; contact information; applications; calendar information; and settings.

16. A method for defining a subset of a library used for processing voice inputs, comprising: providing a library of words from which to process voice inputs; identifying a user's interests; extracting, from the user's interests, words that the user is likely to use to provide a voice input; and defining a subset of the library, wherein the subset comprises at least the words of the library matching the extracted words.

17. The method of claim 16, further comprising: identifying particular media items of interest to the user; and including metadata values for the identified particular media items in the defined subset.

18. The method of claim 17, wherein the metadata values comprise at least one of: artist; title; album; genre; year; play count; rating; and playlist.

19. The method of claim 16, further comprising: comparing the extracted words to the words of the library; identifying words of the library that share at least a common root with at least one extracted word; and including the identified words of the library in the defined subset.

20. A non-transitory computer readable media for processing a voice input, the computer readable media comprising computer program logic recorded thereon for: receiving a voice input; identifying a user providing the voice input; identifying a subset of library words associated with the identified user; and processing the received voice input using the identified subset.

21. The computer readable media of claim 20, further comprising additional computer program logic recorded thereon for: detecting a plurality of words in the received voice input; comparing the identified plurality of words with the identified subset of library words; and identifying a plurality of words from the identified subset that correspond to the detected plurality of words.

22. The computer readable media of claim 21, further comprising additional computer program logic recorded thereon for: extracting an instruction for the identified plurality of words; and identifying an operation corresponding to the extracted instruction.
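The matching-and-dispatch steps of claims 7–8 and the operation/argument structure of claim 4 can be sketched together. This is a hypothetical illustration of the claimed steps, not an implementation from the patent; the operation vocabulary and names below are assumptions.

```python
# Assumed set of words that name device operations (claim 4's
# "operation"); everything else matched from the subset is treated
# as an argument qualifying that operation, e.g. a media title.
OPERATIONS = {"play", "pause", "call"}

def process_voice_input(detected_words, user_subset):
    """Claims 7-8 in miniature: compare detected words against the
    user's subset, then split the matches into an operation and
    its arguments."""
    # Claim 7: identify words from the subset that correspond to
    # the detected words.
    matched = [w for w in detected_words if w in user_subset]
    # Claim 8 / claim 4: extract the operation and its argument(s).
    operation = next((w for w in matched if w in OPERATIONS), None)
    arguments = [w for w in matched if w != operation]
    return operation, arguments
```

For example, with the subset `{"play", "pause", "yesterday"}`, the input words `["please", "play", "yesterday"]` would yield the operation `"play"` qualified by the argument `"yesterday"` — a media playback operation with a particular media item, as in claim 5.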
Copyright KISTI. All Rights Reserved.