| Country / Status | United States (US) Patent, Granted |
|---|---|
| International Patent Classification (IPC, 7th ed.) | |
| Application number | US-0604556 (2012-09-05) |
| Registration number | US-8762469 (2014-06-24) |
| Inventors / address | |
| Applicant / address | |
| Agent / address | |
| Citation information | Cited by: 38; references cited: 519 |
An electronic device may capture a voice command from a user. The electronic device may store contextual information about the state of the electronic device when the voice command is received. The electronic device may transmit the voice command and the contextual information to computing equipment such as a desktop computer or a remote server. The computing equipment may perform a speech recognition operation on the voice command and may process the contextual information. The computing equipment may respond to the voice command. The computing equipment may also transmit information to the electronic device that allows the electronic device to respond to the voice command.
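The abstract describes a split architecture: the device packages the voice command with contextual state, and separate computing equipment performs speech recognition and responds. A minimal sketch of that flow follows; all function and field names (`build_request`, `handle_request`, the JSON layout, the stand-in recognizer) are illustrative assumptions, not taken from the patent.

```python
import json

def build_request(voice_audio: bytes, device_state: dict) -> str:
    """Device side: package a captured voice command together with
    contextual information about the device's state when the command
    was received, for transmission to the computing equipment."""
    return json.dumps({
        "voice_command": voice_audio.hex(),  # audio payload, hex-encoded
        "context": device_state,             # e.g. location, active app
    })

def handle_request(payload: str, recognize) -> dict:
    """Computing-equipment side: run a speech recognition operation on
    the voice command and process the contextual information to form a
    response that is sent back to the device."""
    request = json.loads(payload)
    text = recognize(bytes.fromhex(request["voice_command"]))
    context = request["context"]
    # Use context to refine the response (e.g. scope a search by location).
    return {"query": text, "near": context.get("location")}

# Stand-in recognizer for illustration only.
payload = build_request(b"\x01\x02", {"location": "Cupertino", "app": "phone"})
result = handle_request(payload, recognize=lambda audio: "find coffee shops")
print(result)  # {'query': 'find coffee shops', 'near': 'Cupertino'}
```

The point of the round trip is that recognition and contextual processing happen off-device, so the device only captures, packages, and displays.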
1. A method for operating an automated assistant, comprising: at a server computer system comprising a processor and memory storing instructions for execution by the processor: receiving, from a speech recognition service operated separately from the server computer system, a text string corresponding to a voice command received at a portable electronic device; receiving contextual information from the portable electronic device; processing the text string and the contextual information; and transmitting results associated with processing the text string and the contextual information to the portable electronic device.
2. The method of claim 1, further comprising: prior to receiving the text string from the speech recognition service: receiving the voice command from the portable electronic device; and sending the voice command to the speech recognition service.
3. The method of claim 1, wherein the text string and the contextual information are received by the server computer system substantially simultaneously.
4. The method of claim 1, wherein the contextual information includes information from one or more sensors on the portable electronic device.
5. The method of claim 4, wherein the one or more sensors include a location sensor.
6. The method of claim 1, wherein processing the text string and the contextual information comprises: sending at least one of the text string and the contextual information to an online service operated separately from the server computer system; and receiving, from the online service, the results associated with processing the text string and the contextual information.
7. The method of claim 6, wherein the online service is selected from the group consisting of: a search service; an email service; a media service; a software update service; and an online business service.
8. The method of claim 1, wherein processing the text string and the contextual information comprises: identifying a search query in the text string; identifying a geographical constraint in the text string; and performing a search based at least in part on the search query and the geographical constraint; wherein transmitting the results comprises transmitting results of the search to the portable electronic device.
9. The method of claim 1, wherein the contextual information is a geographical location of the portable electronic device.
10. The method of claim 1, wherein the contextual information is information associated with a current or a previous telephone call.
11. The method of claim 10, wherein the information associated with the current or the previous telephone call is at least one of a telephone number or contact information.
12. The method of claim 1, wherein the contextual information is information from a software application running on the portable electronic device.
13. The method of claim 12, wherein the software application is selected from the group consisting of: a business productivity application; an email application; and a calendar application.
14. The method of claim 1, wherein the contextual information is information related to an operation occurring in the background of the portable electronic device.
15. The method of claim 1, wherein the results associated with processing the text string are displayed at the portable electronic device.
16. The method of claim 1, wherein the server computer system is provided by a first entity, and the speech recognition service is provided by a second entity different from the first entity.
17. The method of claim 1, wherein the speech recognition service comprises a software application executed by a second computer system remote from the server computer system.
18. A server computer system configured to communicate with a portable electronic device over a communications path in order to process a voice command received by the portable electronic device, the server computer system comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: receiving, from a speech recognition service operated separately from the server computer, a text string corresponding to a voice command received at a portable electronic device; receiving contextual information from the portable electronic device; processing the text string and the contextual information; and transmitting results associated with processing the text string and the contextual information to the portable electronic device.
19. A non-transitory computer readable storage medium storing instructions that, when executed by a server computer with one or more processors, cause the processors to perform operations comprising: receiving, from a speech recognition service operated separately from the server computer, a text string corresponding to a voice command received at a portable electronic device; receiving contextual information from the portable electronic device; processing the text string and the contextual information; and transmitting results associated with processing the text string and the contextual information to the portable electronic device.
20. A method for operating an automated assistant, comprising: at a server computer system provided by a first entity, the server computer system comprising a processor and memory storing instructions for execution by the processor: receiving a voice command and contextual information from the portable electronic device; processing the voice command, using a speech recognition service provided by a second entity different from the first entity, to generate a text string from the voice command; processing the text string and the contextual information; and transmitting results associated with processing the text string and the contextual information to the portable electronic device.
21. The method of claim 20, wherein the results associated with processing the text string are displayed at the portable electronic device.
22. The method of claim 20, wherein the speech recognition service is a standalone software component that is executed by the server computer system.
23. The method of claim 20, wherein the text string and the contextual information are received by the server computer system substantially simultaneously.
24. The method of claim 20, wherein the contextual information includes information from one or more sensors on the portable electronic device.
25. The method of claim 24, wherein the one or more sensors include a location sensor.
26. The method of claim 20, wherein processing the text string and the contextual information comprises: sending at least one of the text string and the contextual information to an online service operated separately from the server computer system; and receiving, from the online service, the results associated with processing the text string and the contextual information.
27. The method of claim 26, wherein the online service is selected from the group consisting of: a search service; an email service; a media service; a software update service; and an online business service.
28. The method of claim 20, wherein processing the text string and the contextual information comprises: identifying a search query in the text string; identifying a geographical constraint in the text string; and performing a search based at least in part on the search query and the geographical constraint; wherein transmitting the results comprises transmitting results of the search to the portable electronic device.
29. The method of claim 20, wherein the contextual information is a geographical location of the portable electronic device.
30. The method of claim 20, wherein the contextual information is information associated with a current or a previous telephone call.
31. The method of claim 30, wherein the information associated with the current or the previous telephone call is at least one of a telephone number or contact information.
32. The method of claim 20, wherein the contextual information is information from a software application running on the portable electronic device.
33. The method of claim 32, wherein the software application is selected from the group consisting of: a business productivity application; an email application; and a calendar application.
34. The method of claim 20, wherein the contextual information is information related to an operation occurring in the background of the portable electronic device.
35. The method of claim 20, wherein the server computer system is provided by a first entity, and the speech recognition service is provided by a second entity different from the first entity.
36. A server computer system provided by a first entity and configured to communicate with a portable electronic device over a communications path in order to process a voice command received by the portable electronic device, the server computer system comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: receiving a voice command and contextual information from the portable electronic device; processing the voice command, using a speech recognition service provided by a second entity different from the first entity, to generate a text string from the voice command; processing the text string and the contextual information; and transmitting results associated with processing the text string and the contextual information to the portable electronic device.
37. A non-transitory computer readable storage medium storing instructions that, when executed by a server computer provided by a first entity and having one or more processors, cause the processors to perform operations comprising: receiving a voice command and contextual information from the portable electronic device; processing the voice command, using a speech recognition service provided by a second entity different from the first entity, to generate a text string from the voice command; processing the text string and the contextual information; and transmitting results associated with processing the text string and the contextual information to the portable electronic device.
38. The method of claim 4, wherein the one or more sensors include an orientation sensor.
39. The method of claim 24, wherein the one or more sensors include an orientation sensor.
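Claim 20 rearranges the flow of claim 1: the first-entity server receives the raw voice command plus context, itself invokes a second-entity speech recognition service to get the text string, then processes text and context together (for example, claim 28's search-query-plus-geographical-constraint processing). A hedged sketch of that arrangement, with all names (`automated_assistant`, the `" near "` parsing, the injected services) being illustrative assumptions rather than the patented implementation:

```python
def automated_assistant(voice_command: bytes, context: dict,
                        speech_service, online_search) -> dict:
    """First-entity server: generate a text string from the voice
    command via a second-entity speech recognition service, then
    process the text string and the contextual information."""
    text = speech_service(voice_command)  # second-entity recognition
    # Claim-28-style processing: split the text string into a search
    # query and a geographical constraint.
    query, _, place = text.partition(" near ")
    # Fall back to device-reported context when no constraint is spoken.
    constraint = place or context.get("location")
    return {"results": online_search(query, constraint)}

# Stand-in services for illustration only.
hits = automated_assistant(
    b"audio",
    {"location": "San Jose"},
    speech_service=lambda audio: "pizza near downtown",
    online_search=lambda q, c: [f"{q} @ {c}"],
)
print(hits)  # {'results': ['pizza @ downtown']}
```

The dependency injection of `speech_service` mirrors the claims' point that recognition may be a separate second-entity service (claims 20, 35) or a standalone component run on the same server (claim 22).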
Copyright KISTI. All Rights Reserved.