Country / Type | United States (US) Patent, Granted |
---|---|
International Patent Classification (IPC, 7th ed.) | |
Application No. | UP-0096704 (2005-04-01) |
Registration No. | US-7599580 (2009-10-20) |
Inventor / Address | |
Applicant / Address | |
Agent / Address | |
Citation info | Cited by: 106 / Patents cited: 578 |
A system for processing a text capture operation is described. The system receives text captured from a rendered document in the text capture operation. The system also receives supplemental information distinct from the captured text. The system determines an action to perform in response to the text capture operation based upon both the captured text and the supplemental information.
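As a rough illustration of the abstract's flow (not the patent's actual implementation), choosing an action from both the captured text and supplemental information might be sketched as follows. All names here (`Capture`, `choose_action`, the specific action strings and supplemental keys) are hypothetical.

```python
# Hypothetical sketch: pick one of a predetermined set of actions for a text
# capture, based on both the captured text and supplemental information.
# Names and the decision rules are illustrative, not from the patent.
from dataclasses import dataclass, field

@dataclass
class Capture:
    text: str                                         # human-readable text from the rendered document
    supplemental: dict = field(default_factory=dict)  # e.g. user identity, history, location

def choose_action(capture: Capture) -> str:
    """Select one of a predetermined plurality of actions using text + context."""
    words = capture.text.split()
    user = capture.supplemental.get("user")
    if user is None:
        return "prompt-login"       # no identity available: cannot personalize
    if len(words) < 10 and "recent_docs" in capture.supplemental:
        return "identify-document"  # short capture + user history: disambiguate the source document
    return "web-search"             # fallback: treat the captured text as a search query

capture = Capture("processing a text capture operation",
                  {"user": "u1", "recent_docs": ["doc-42"]})
print(choose_action(capture))  # identify-document
```

The point of the sketch is only that the chosen action depends on the supplemental information, not just the text: the same capture with no user identity would yield a different action.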
We claim:
1. A method in a computing system for processing a distinguished text capture operation, comprising: receiving human-readable text captured by a user via a portable capture device from a distinguished rendered document in the distinguished text capture operation; receiving supplemental information distinct from the captured text, said supplemental information comprising an identity associated with said user; and automatically determining, by the computing system in response to the distinguished text capture operation and based upon both the captured text and the supplemental information, which one of a predetermined plurality of actions is likely optimal for said user.
2. The method of claim 1 wherein the received captured text is captured from at least one of a dynamic display, a voice input and a printed document.
3. The method of claim 1, further comprising: applying optical character recognition techniques to an image captured from the distinguished rendered document to generate the captured text.
4. The method of claim 3, wherein said image captured from the distinguished rendered document depicts fewer than ten words.
5. The method of claim 1, further comprising: applying voice recognition techniques to an audio clip of a person reading aloud from the distinguished rendered document to generate the captured text.
6. The method of claim 1 wherein the received supplemental information includes text captured in previous text capture operations by said user, and wherein the determined action is identifying a single electronic document as corresponding to the distinguished rendered document.
7. The method of claim 1 wherein the received supplemental information includes a document rendering characteristic observed in a previous text capture operation by said user and a corresponding document rendering characteristic observed in the distinguished text capture operation, and wherein the determined action is identifying different electronic documents as corresponding to the rendered documents in which the distinguished text capture operation and the previous text capture operation were performed based upon a difference between the document rendering characteristic observed in the previous text capture operation and the corresponding document rendering characteristic observed in the distinguished text capture operation.
8. The method of claim 7 wherein the document rendering characteristic observed in the previous text capture operation and the corresponding document rendering characteristic observed in the distinguished text capture operation are different fonts.
9. The method of claim 7 wherein the document rendering characteristic observed in the previous text capture operation and the corresponding document rendering characteristic observed in the distinguished text capture operation are different font sizes.
10. The method of claim 7 wherein the document rendering characteristic observed in the previous text capture operation and the corresponding document rendering characteristic observed in the distinguished text capture operation are different paper types.
11. The method of claim 1 wherein the determined action is identifying a position in a distinguished electronic document that is subsequent to a position identified in the distinguished electronic document as the site of a previous text capture operation by the distinguished user as the site of the distinguished text capture operation.
12. The method of claim 1 wherein the received supplemental information indicates a geographic location at which the distinguished capture operation was performed, and wherein the determining comprises: identifying a plurality of electronic documents containing the received captured text; and ranking the likelihood of the identified electronic documents corresponding to the distinguished rendered document based upon the likelihood that a rendered version of each identified electronic document would be available in the indicated geographic location.
13. The method of claim 1, further comprising using the received captured text to identify an electronic document corresponding to the distinguished rendered document, wherein the received supplemental information indicates a geographic location at which rendered copies of the identified electronic document are likely to be available, and wherein the determining comprises determining that the text capture operation was performed at the indicated geographic location.
14. The method of claim 1 wherein the distinguished text capture operation is performed using a text capture device, and wherein the received supplemental information includes information about an ambient environment in which the distinguished text capture operation was performed, and wherein the determining comprises determining a configuration of a distinguished capability of the device.
15. The method of claim 14 wherein the distinguished text capture operation involves capturing an image of the captured text from the distinguished rendered document, and wherein the received supplemental information includes an indication of a frequency of light received in the image capture.
16. The method of claim 14 wherein the distinguished text capture operation involves capturing an audio clip of a person reading aloud from the distinguished rendered document, and wherein the received supplemental information includes an indication of background sounds contained in the audio clip.
17. The method of claim 14 wherein determining a configuration of a distinguished capability of the device comprises determining to disable the distinguished capability.
18. The method of claim 1 wherein the received supplemental information includes an indication of the time of day at which the distinguished text capture operation was performed, and the determined action is identifying an electronic document as corresponding to the distinguished rendered document.
19. The method of claim 1 wherein the received supplemental information includes an indication of the day of week at which the distinguished text capture operation was performed, and wherein the determined action is identifying an electronic document as corresponding to the distinguished rendered document.
20. The method of claim 1 wherein the received supplemental information includes an indication of recent interactions of said user with electronic documents, and wherein the determined action is identifying an electronic document as corresponding to the distinguished rendered document.
21. The method of claim 20 wherein the received supplemental information identifies documents that said user has recently accessed.
22. The method of claim 20 wherein the received supplemental information identifies documents that said user has recently saved.
23. The method of claim 20 wherein the received supplemental information identifies documents that said user has recently emailed.
24. The method of claim 20 wherein the received supplemental information identifies documents that said user has recently received via email.
25. The method of claim 20 wherein the received supplemental information identifies documents that said user has recently sent via instant messaging.
26. The method of claim 20 wherein the received supplemental information identifies documents that said user has recently received via instant messaging.
27. The method of claim 20 wherein the received supplemental information identifies documents that said user has recently printed.
28. The method of claim 20 wherein the received supplemental information identifies documents that said user has recently searched.
29. The method of claim 1 wherein the received supplemental information includes an indication of a time at which the distinguished text capture operation was performed, and wherein the determining is performed at a time distinct from the time at which the distinguished text capture operation was performed, and wherein the determining is performed based on the captured text, the indication of the time at which the distinguished text capture operation was performed, and the time at which the determining is performed.
30. The method of claim 1 wherein the received supplemental information includes information about the conditions under which the text capture operation was performed, and wherein the determined action is associating the information about the conditions under which the text capture operation was performed with said user.
31. The method of claim 1 wherein the received supplemental information includes information about previous text capture operations performed by users other than the distinguished user, and wherein the determined action is identifying a single electronic document as corresponding to the distinguished rendered document.
32. The method of claim 1 wherein the received supplemental information includes information about previous text capture operations performed by users other than said user and about the conditions under which the previous text capture operation was performed, and wherein the determined action is identifying a single electronic document as corresponding to the distinguished rendered document.
33. The method of claim 1 wherein the distinguished text capture operation is performed using a distinguished text capture device, and wherein the determined action is identifying data that is likely to be useful for processing future text capture operations and storing the identified data in at least one of the computing system and the distinguished text capture device.
34. The method of claim 1 wherein the distinguished rendered document contains at least one line of text, and wherein the received text is a proper subset of the text contained in a single line of the distinguished rendered document.
35. The method of claim 1 wherein the distinguished rendered document contains at least one page having text, and wherein the received text is a proper subset of the text contained in a single page of the distinguished rendered document.
36. The method of claim 1 wherein the received text is comprised of words, and wherein the distinguished text capture operation involved specific user interactions with each of the words of the received text.
37. The method of claim 36 wherein said specific user interactions comprise the user speaking each of the words of the received text.
38. The method of claim 36 wherein the user directed an optical sensor at each of the words of the received text.
39. The method of claim 1 wherein the received text is comprised of ordered words, and wherein the distinguished text capture operation involved capturing a physical phenomenon corresponding to each of the words of the received text in the order of the received text.
40. The method of claim 1 wherein the received text is comprised of ordered words, and wherein the distinguished text capture operation involved capturing a physical phenomenon corresponding to each of the words of the received text in the reverse order of the received text.
41. The method of claim 1 wherein the distinguished text capture operation involved manually moving an optical sensor across the distinguished rendered document.
42. The method of claim 1 wherein the distinguished text capture operation involved capturing image data from a non-rectangular region of the distinguished rendered document.
43. The method of claim 1 wherein the received text comprises fewer than ten words.
44. A system for processing a distinguished text capture operation, comprising: a capture component that receives human-readable text captured by a user via a portable capture device from a distinguished rendered document in the distinguished text capture operation; an information component that receives supplemental information distinct from the captured text, said supplemental information comprising an identity associated with said user; and an action component that automatically determines, in response to the distinguished text capture operation and based upon both the captured text and the supplemental information, which one of a predetermined plurality of actions is likely optimal for said user.
45. A method in a computing system for processing a distinguished text capture operation, comprising: receiving human-readable text captured by a user via a portable capture device from a distinguished rendered document in the distinguished text capture operation; receiving supplemental information distinct from the captured text, said supplemental information comprising an identity associated with said user; and identifying an electronic document similar to the rendered document based upon the captured text and the supplemental information.
46. A computer-readable medium whose contents cause a computing system to perform a method for processing a distinguished text capture operation, the method comprising: receiving human-readable text captured by a user via a portable capture device from a distinguished rendered document in the distinguished text capture operation; receiving supplemental information distinct from the captured text, said supplemental information comprising an identity associated with said user; and automatically determining, in response to the distinguished text capture operation and based upon both the captured text and the supplemental information, which one of a predetermined plurality of actions is likely optimal for said user.
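The geographic ranking of claim 12 can be illustrated with a small sketch: rank the candidate electronic documents that contain the captured text by how likely a rendered copy is to be available at the capture location. The patent claims do not specify a scoring method; the function, the document names, and the availability scores below are all made up for illustration.

```python
# Hypothetical sketch of claim 12: among electronic documents containing the
# captured text, prefer those whose rendered copies are most likely available
# at the geographic location where the capture was performed.

def rank_candidates(candidates, location, availability):
    """Sort candidate document IDs by availability score at `location`, descending.

    `availability` maps (document, location) pairs to illustrative scores in [0, 1];
    unknown pairs default to 0.0.
    """
    return sorted(candidates,
                  key=lambda doc: availability.get((doc, location), 0.0),
                  reverse=True)

# Documents matching the captured text, with made-up availability scores.
candidates = ["local-newspaper", "national-magazine", "rare-journal"]
availability = {
    ("local-newspaper", "Seoul"): 0.9,
    ("national-magazine", "Seoul"): 0.6,
    ("rare-journal", "Seoul"): 0.1,
}
print(rank_candidates(candidates, "Seoul", availability))
# ['local-newspaper', 'national-magazine', 'rare-journal']
```

The same shape covers several other dependent claims (time of day, day of week, recent interactions): each supplemental signal reweights the candidate documents before one is selected.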
IPC | Description |
---|---|
A | Human necessities |
A62 | Life-saving; fire-fighting (ladders E06C) |
A62B | Devices, apparatus or methods for life-saving (valves specially adapted for medical use A61M 39/00; life-saving devices or methods specially adapted for use in the water B63C 9/00; diving equipment B63C 11/00; specially adapted for use with aircraft, e.g. parachutes, ejector seats B64D; rescue devices specially adapted for mines E21F 11/00) |
A62B-1/08 | .. with brake mechanisms for the winches or pulleys |
Copyright KISTI. All Rights Reserved.