| Country / Type | United States (US) patent, granted |
|---|---|
| International Patent Classification (IPC, 7th ed.) | |
| Application No. | UP-0096704 (2005-04-01) |
| Registration No. | US-7599580 (2009-10-20) |
| Inventors / Address | |
| Applicant / Address | |
| Agent / Address | |
| Citations | Cited by: 106 / Patents cited: 578 |
A system for processing a text capture operation is described. The system receives text captured from a rendered document in the text capture operation. The system also receives supplemental information distinct from the captured text. The system determines an action to perform in response to the text capture operation based upon both the captured text and the supplemental information.
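The mechanism the abstract describes, combining captured text with supplemental information to choose an action, can be sketched roughly as follows. All names here (`Capture`, `choose_action`, the use of recently touched documents as the supplemental signal) are illustrative assumptions, not details taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Capture:
    text: str                    # text captured from the rendered document
    user_id: str                 # identity associated with the user (supplemental)
    recent_docs: list = field(default_factory=list)  # user's recently touched documents

def choose_action(capture, corpus):
    """Pick one of a predetermined set of actions using both the captured
    text and the supplemental information (here: the user's recent documents)."""
    matches = [doc for doc, body in corpus.items() if capture.text in body]
    if not matches:
        return ("search", capture.text)  # no candidate document: fall back to search
    # Supplemental information disambiguates among multiple matching documents.
    preferred = [d for d in matches if d in capture.recent_docs]
    if len(preferred) == 1 or len(matches) == 1:
        return ("open", (preferred or matches)[0])
    return ("disambiguate", matches)
```

A capture of text that appears in several documents resolves to the one the user has interacted with recently; a capture matching nothing falls back to a plain search.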
We claim:
1. A method in a computing system for processing a distinguished text capture operation, comprising: receiving human-readable text captured by a user via a portable capture device from a distinguished rendered document in the distinguished text capture operation; receiving supplemental information distinct from the captured text, said supplemental information comprising an identity associated with said user; and automatically determining, by the computing system in response to the distinguished text capture operation and based upon both the captured text and the supplemental information, which one of a predetermined plurality of actions is likely optimal for said user.
2. The method of claim 1 wherein the received captured text is captured from at least one of a dynamic display, a voice input and a printed document.
3. The method of claim 1, further comprising: applying optical character recognition techniques to an image captured from the distinguished rendered document to generate the captured text.
4. The method of claim 3, wherein said image captured from the distinguished rendered document depicts fewer than ten words.
5. The method of claim 1, further comprising: applying voice recognition techniques to an audio clip of a person reading aloud from the distinguished rendered document to generate the captured text.
6. The method of claim 1 wherein the received supplemental information includes text captured in previous text capture operations by said user, and wherein the determined action is identifying a single electronic document as corresponding to the distinguished rendered document.
7. The method of claim 1 wherein the received supplemental information includes a document rendering characteristic observed in a previous text capture operation by said user and a corresponding document rendering characteristic observed in the distinguished text capture operation, and wherein the determined action is identifying different electronic documents as corresponding to the rendered documents in which the distinguished text capture operation and the previous text capture operation were performed based upon a difference between the document rendering characteristic observed in the previous text capture operation and the corresponding document rendering characteristic observed in the distinguished text capture operation.
8. The method of claim 7 wherein the document rendering characteristic observed in the previous text capture operation and the corresponding document rendering characteristic observed in the distinguished text capture operation are different fonts.
9. The method of claim 7 wherein the document rendering characteristic observed in the previous text capture operation and the corresponding document rendering characteristic observed in the distinguished text capture operation are different font sizes.
10. The method of claim 7 wherein the document rendering characteristic observed in the previous text capture operation and the corresponding document rendering characteristic observed in the distinguished text capture operation are different paper types.
11. The method of claim 1 wherein the determined action is identifying a position in a distinguished electronic document that is subsequent to a position identified in the distinguished electronic document as the site of a previous text capture operation by the distinguished user as the site of the distinguished text capture operation.
12. The method of claim 1 wherein the received supplemental information indicates a geographic location at which the distinguished capture operation was performed, and wherein the determining comprises: identifying a plurality of electronic documents containing the received captured text; and ranking the likelihood of the identified electronic documents corresponding to the distinguished rendered document based upon the likelihood that a rendered version of each identified electronic document would be available in the indicated geographic location.
13. The method of claim 1, further comprising using the received captured text to identify an electronic document corresponding to the distinguished rendered document, wherein the received supplemental information indicates a geographic location at which rendered copies of the identified electronic document are likely to be available, and wherein the determining comprises determining that the text capture operation was performed at the indicated geographic location.
14. The method of claim 1 wherein the distinguished text capture operation is performed using a text capture device, and wherein the received supplemental information includes information about an ambient environment in which the distinguished text capture operation was performed, and wherein the determining comprises determining a configuration of a distinguished capability of the device.
15. The method of claim 14 wherein the distinguished text capture operation involves capturing an image of the captured text from the distinguished rendered document, and wherein the received supplemental information includes an indication of a frequency of light received in the image capture.
16. The method of claim 14 wherein the distinguished text capture operation involves capturing an audio clip of a person reading aloud from the distinguished rendered document, and wherein the received supplemental information includes an indication of background sounds contained in the audio clip.
17. The method of claim 14 wherein determining a configuration of a distinguished capability of the device comprises determining to disable the distinguished capability.
18. The method of claim 1 wherein the received supplemental information includes an indication of the time of day at which the distinguished text capture operation was performed, and the determined action is identifying an electronic document as corresponding to the distinguished rendered document.
19. The method of claim 1 wherein the received supplemental information includes an indication of the day of week at which the distinguished text capture operation was performed, and wherein the determined action is identifying an electronic document as corresponding to the distinguished rendered document.
20. The method of claim 1 wherein the received supplemental information includes an indication of recent interactions of said user with electronic documents, and wherein the determined action is identifying an electronic document as corresponding to the distinguished rendered document.
21. The method of claim 20 wherein the received supplemental information identifies documents that said user has recently accessed.
22. The method of claim 20 wherein the received supplemental information identifies documents that said user has recently saved.
23. The method of claim 20 wherein the received supplemental information identifies documents that said user has recently emailed.
24. The method of claim 20 wherein the received supplemental information identifies documents that said user has recently received via email.
25. The method of claim 20 wherein the received supplemental information identifies documents that said user has recently sent via instant messaging.
26. The method of claim 20 wherein the received supplemental information identifies documents that said user has recently received via instant messaging.
27. The method of claim 20 wherein the received supplemental information identifies documents that said user has recently printed.
28. The method of claim 20 wherein the received supplemental information identifies documents that said user has recently searched.
29. The method of claim 1 wherein the received supplemental information includes an indication of a time at which the distinguished text capture operation was performed, and wherein the determining is performed at a time distinct from the time at which the distinguished text capture operation was performed, and wherein the determining is performed based on the captured text, the indication of the time at which the distinguished text capture operation was performed, and the time at which the determining is performed.
30. The method of claim 1 wherein the received supplemental information includes information about the conditions under which the text capture operation was performed, and wherein the determined action is associating the information about the conditions under which the text capture operation was performed with said user.
31. The method of claim 1 wherein the received supplemental information includes information about previous text capture operations performed by users other than the distinguished user, and wherein the determined action is identifying a single electronic document as corresponding to the distinguished rendered document.
32. The method of claim 1 wherein the received supplemental information includes information about previous text capture operations performed by users other than said user and about the conditions under which the previous text capture operation was performed, and wherein the determined action is identifying a single electronic document as corresponding to the distinguished rendered document.
33. The method of claim 1 wherein the distinguished text capture operation is performed using a distinguished text capture device, and wherein the determined action is identifying data that is likely to be useful for processing future text capture operations and storing the identified data in at least one of the computing system and the distinguished text capture device.
34. The method of claim 1 wherein the distinguished rendered document contains at least one line of text, and wherein the received text is a proper subset of the text contained in a single line of the distinguished rendered document.
35. The method of claim 1 wherein the distinguished rendered document contains at least one page having text, and wherein the received text is a proper subset of the text contained in a single page of the distinguished rendered document.
36. The method of claim 1 wherein the received text is comprised of words, and wherein the distinguished text capture operation involved specific user interactions with each of the words of the received text.
37. The method of claim 36 wherein said specific user interactions comprise the user speaking each of the words of the received text.
38. The method of claim 36 wherein the user directed an optical sensor at each of the words of the received text.
39. The method of claim 1 wherein the received text is comprised of ordered words, and wherein the distinguished text capture operation involved capturing a physical phenomenon corresponding to each of the words of the received text in the order of the received text.
40. The method of claim 1 wherein the received text is comprised of ordered words, and wherein the distinguished text capture operation involved capturing a physical phenomenon corresponding to each of the words of the received text in the reverse order of the received text.
41. The method of claim 1 wherein the distinguished text capture operation involved manually moving an optical sensor across the distinguished rendered document.
42. The method of claim 1 wherein the distinguished text capture operation involved capturing image data from a non-rectangular region of the distinguished rendered document.
43. The method of claim 1 wherein the received text comprises fewer than ten words.
44. A system for processing a distinguished text capture operation, comprising: a capture component that receives human-readable text captured by a user via a portable capture device from a distinguished rendered document in the distinguished text capture operation; an information component that receives supplemental information distinct from the captured text, said supplemental information comprising an identity associated with said user; and an action component that automatically determines, in response to the distinguished text capture operation and based upon both the captured text and the supplemental information, which one of a predetermined plurality of actions is likely optimal for said user.
45. A method in a computing system for processing a distinguished text capture operation, comprising: receiving human-readable text captured by a user via a portable capture device from a distinguished rendered document in the distinguished text capture operation; receiving supplemental information distinct from the captured text, said supplemental information comprising an identity associated with said user; and identifying an electronic document similar to the rendered document based upon the captured text and the supplemental information.
46. A computer-readable medium whose contents cause a computing system to perform a method for processing a distinguished text capture operation, the method comprising: receiving human-readable text captured by a user via a portable capture device from a distinguished rendered document in the distinguished text capture operation; receiving supplemental information distinct from the captured text, said supplemental information comprising an identity associated with said user; and automatically determining, in response to the distinguished text capture operation and based upon both the captured text and the supplemental information, which one of a predetermined plurality of actions is likely optimal for said user.
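The ranking step of claim 12, ordering candidate electronic documents by how likely a rendered copy of each would be available at the location where the capture was performed, might be sketched as follows. The availability table and function name are hypothetical assumptions for illustration, not part of the patent.

```python
def rank_by_location(candidates, location, availability):
    """Rank candidate documents (all of which contain the captured text)
    by the likelihood that a rendered copy of each would be available at
    the location where the capture occurred.
    `availability` maps (document, location) -> probability estimate."""
    return sorted(candidates,
                  key=lambda doc: availability.get((doc, location), 0.0),
                  reverse=True)
```

With such a table, a short phrase captured in a restaurant ranks a menu above a novel, while the same phrase captured in a library ranks the novel first.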
Copyright KISTI. All Rights Reserved.