| Country / Type | United States (US) Patent, Granted |
|---|---|
| International Patent Classification (IPC, 7th ed.) | |
| Application Number | US-0831669 (2013-03-15) |
| Registration Number | US-9280610 (2016-03-08) |
| Inventor / Address | |
| Applicant / Address | |
| Agent / Address | |
| Citation Information | Times cited: 71; Patents cited: 508 |
A user request is received from a mobile client device, where the user request includes at least a speech input and seeks an informational answer or performance of a task. A failure to provide a satisfactory response to the user request is detected. In response to detection of the failure, information relevant to the user request is crowd-sourced by querying one or more crowd sourcing information sources. One or more answers are received from the crowd sourcing information sources, and the response to the user request is generated based on at least one of the one or more answers received from the one or more crowd sourcing information sources.
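The abstract describes, in effect, a fallback pipeline: try the normal answer path, detect failure, query crowd sourcing information sources, then build the response from whichever answers come back. The Python sketch below illustrates that control flow under stated assumptions: every name in it (CrowdSource, answer_locally, generate_response, the length-based ranking) is hypothetical, since the patent claims a method rather than any particular implementation.

```python
# Minimal sketch of the abstract's fallback flow. All names and the
# ranking criterion are hypothetical; the patent specifies no implementation.
from typing import Optional


class CrowdSource:
    """Stand-in for one 'crowd sourcing information source'."""

    def __init__(self, name: str, known_answers: dict[str, str]):
        self.name = name
        self.known_answers = known_answers

    def query(self, question: str) -> Optional[str]:
        return self.known_answers.get(question)


def answer_locally(question: str) -> Optional[str]:
    """The assistant's normal answer path (e.g. a web search).

    Returning None models a 'failure to provide a satisfactory response';
    this stub always fails so the sketch exercises the fallback branch.
    """
    return None


def generate_response(question: str, sources: list[CrowdSource]) -> str:
    answer = answer_locally(question)
    if answer is not None:
        return answer  # satisfactory response; no crowd-sourcing needed

    # Failure detected: crowd-source by querying the information sources.
    answers = [a for s in sources if (a := s.query(question)) is not None]
    if not answers:
        return "No answer is available yet."  # remedial response

    # When several answers arrive, rank them by predetermined criteria;
    # answer length is used here purely as a placeholder criterion.
    return max(answers, key=len)


sources = [CrowdSource("forum", {"capital of France?": "Paris"}),
           CrowdSource("experts", {"capital of France?": "Paris, France"})]
print(generate_response("capital of France?", sources))  # -> Paris, France
```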
1. A method for providing a response to a user request, comprising: at a server computer with one or more processors and memory: receiving a user request from a mobile client device, the user request including at least a speech input and seeking an informational answer or performance of a task; detecting a failure to provide a satisfactory response to the user request; in response to detecting the failure, crowd-sourcing information relevant to the user request by querying one or more crowd sourcing information sources; receiving one or more answers from the crowd sourcing information sources; and generating a response to the user request based on at least one of the one or more answers received from the one or more crowd sourcing information sources.

2. The method of claim 1, wherein crowd-sourcing the information relevant to the user request further comprises: generating one or more queries based on the user request; and sending the one or more queries to the one or more crowd sourcing information sources.

3. The method of claim 1, wherein the crowd-sourcing further comprises identifying, from a set of crowd sourcing information sources, the one or more crowd sourcing information sources to be queried.

4. The method of claim 1, further comprising, prior to the crowd-sourcing: requesting permission from the user to send the information contained in the user request to the one or more crowd sourcing information sources; and receiving permission from the user to send the information contained in the user request to the one or more crowd sourcing information sources.

5. The method of claim 1, further comprising: receiving at least one real-time answer from a real-time answer-lookup database; upon receipt of the at least one real-time answer, sending to the mobile client device the at least one real-time answer; receiving at least one non-real-time answer from a non-real-time expert service after receiving the at least one real-time answer; and upon receipt of the at least one non-real-time answer, sending to the mobile client device the at least one non-real-time answer.

6. The method of claim 1, further comprising: not receiving any answer from at least one of the one or more crowd sourcing information sources before generating the remedial response.

7. The method of claim 1, further comprising: when more than one answer is received from the one or more crowd sourcing information sources, ranking the answers in accordance with predetermined criteria.

8. The method of claim 1, wherein receiving the one or more answers from the crowd sourcing information sources further comprises: receiving at least one of the one or more answers from individual members of the public in non-real-time.

9. A non-transitory computer-readable medium storing instructions which, when executed by one or more processors, cause the processors to perform operations comprising: receiving a user request from a mobile client device, the user request including at least a speech input and seeking an informational answer or performance of a task; detecting a failure to provide a satisfactory response to the user request; in response to detecting the failure, crowd-sourcing information relevant to the user request by querying one or more crowd sourcing information sources; receiving one or more answers from the crowd sourcing information sources; and generating a response to the user request based on at least one of the one or more answers received from the one or more crowd sourcing information sources.
10. The computer-readable medium of claim 9, wherein crowd-sourcing the information relevant to the user request further comprises: generating one or more queries based on the user request; and sending the one or more queries to the one or more crowd sourcing information sources.

11. The computer-readable medium of claim 9, wherein the crowd-sourcing further comprises identifying, from a set of crowd sourcing information sources, the one or more crowd sourcing information sources to be queried.

12. The computer-readable medium of claim 9, wherein the operations further comprise, prior to the crowd-sourcing: requesting permission from the user to send the information contained in the user request to the one or more crowd sourcing information sources; and receiving permission from the user to send the information contained in the user request to the one or more crowd sourcing information sources.

13. The computer-readable medium of claim 9, wherein the operations further comprise: receiving at least one real-time answer from a real-time answer-lookup database; upon receipt of the at least one real-time answer, sending to the mobile client device the at least one real-time answer; receiving at least one non-real-time answer from a non-real-time expert service after receiving the at least one real-time answer; and upon receipt of the at least one non-real-time answer, sending to the mobile client device the at least one non-real-time answer.

14. The computer-readable medium of claim 9, wherein the operations further comprise: not receiving any answer from at least one of the one or more crowd sourcing information sources before generating the remedial response.

15. The computer-readable medium of claim 9, wherein the operations further comprise: when more than one answer is received from the one or more crowd sourcing information sources, ranking the answers in accordance with predetermined criteria.

16. The computer-readable medium of claim 9, wherein receiving the one or more answers from the crowd sourcing information sources further comprises: receiving at least one of the one or more answers from individual members of the public in non-real-time.

17. A system, comprising: one or more processors; and memory storing instructions which, when executed by the one or more processors, cause the processors to perform operations comprising: receiving a user request from a mobile client device, the user request including at least a speech input and seeking an informational answer or performance of a task; detecting a failure to provide a satisfactory response to the user request; in response to detecting the failure, crowd-sourcing information relevant to the user request by querying one or more crowd sourcing information sources; receiving one or more answers from the crowd sourcing information sources; and generating a response to the user request based on at least one of the one or more answers received from the one or more crowd sourcing information sources.

18. The system of claim 17, wherein crowd-sourcing the information relevant to the user request further comprises: generating one or more queries based on the user request; and sending the one or more queries to the one or more crowd sourcing information sources.

19. The system of claim 17, wherein the crowd-sourcing further comprises identifying, from a set of crowd sourcing information sources, the one or more crowd sourcing information sources to be queried.
20. The system of claim 17, wherein the operations further comprise, prior to the crowd-sourcing: requesting permission from the user to send the information contained in the user request to the one or more crowd sourcing information sources; and receiving permission from the user to send the information contained in the user request to the one or more crowd sourcing information sources.

21. The system of claim 17, wherein the operations further comprise: receiving at least one real-time answer from a real-time answer-lookup database; upon receipt of the at least one real-time answer, sending to the mobile client device the at least one real-time answer; receiving at least one non-real-time answer from a non-real-time expert service after receiving the at least one real-time answer; and upon receipt of the at least one non-real-time answer, sending to the mobile client device the at least one non-real-time answer.

22. The system of claim 17, wherein the operations further comprise: not receiving any answer from at least one of the one or more crowd sourcing information sources before generating the remedial response.

23. The system of claim 17, wherein the operations further comprise: when more than one answer is received from the one or more crowd sourcing information sources, ranking the answers in accordance with predetermined criteria.

24. The system of claim 17, wherein receiving the one or more answers from the crowd sourcing information sources further comprises: receiving at least one of the one or more answers from individual members of the public in non-real-time.

25. The method of claim 1, wherein the detecting a failure to provide a satisfactory response to the user request comprises determining that a web-search based on information contained in the user request is unsatisfactory to the user.

26. The method of claim 1, wherein the detecting a failure to provide a satisfactory response to the user request comprises receiving feedback from the user that a previous response provided to the user was unsatisfactory.

27. The method of claim 1, wherein the detecting a failure to provide a satisfactory response to the user request comprises analyzing usage logs associated with the user.
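Claims 5, 13, and 21 add a two-stage delivery: a fast answer from a real-time answer-lookup database is sent to the device first, and a slower answer from a non-real-time expert service follows whenever it arrives. The asyncio sketch below illustrates that sequencing; the service names, delays, and print-based device push are all illustrative assumptions, not anything specified by the patent.

```python
# Hypothetical sketch of the two-stage delivery in claims 5, 13, and 21.
# Service names, delays, and the print-based "push" are illustrative only.
import asyncio


async def realtime_lookup(question: str) -> str:
    await asyncio.sleep(0.1)  # fast answer-lookup database
    return "quick answer from the real-time answer-lookup database"


async def expert_service(question: str) -> str:
    await asyncio.sleep(2.0)  # a human expert replies much later
    return "considered answer from the non-real-time expert service"


async def send_to_device(answer: str) -> None:
    print(f"-> device: {answer}")  # stand-in for a push to the mobile client


async def serve(question: str) -> None:
    # Query both services concurrently and forward each answer as soon as
    # it arrives, so the real-time answer reaches the device first and the
    # non-real-time expert answer follows later, as the claims describe.
    pending = {asyncio.create_task(realtime_lookup(question)),
               asyncio.create_task(expert_service(question))}
    while pending:
        done, pending = await asyncio.wait(
            pending, return_when=asyncio.FIRST_COMPLETED)
        for task in done:
            await send_to_device(task.result())


asyncio.run(serve("why is the sky blue?"))
```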