| Field | Value |
|---|---|
| Country / Type | United States (US) Patent, Granted |
| International Patent Classification (IPC, 7th ed.) | |
| Application Number | UP-0097828 (2005-04-01) |
| Registration Number | US-7742953 (2010-07-12) |
| Inventor / Address | |
| Applicant / Address | |
| Agent / Address | |
| Citation Information | Cited by: 178; Cited patents: 546 |
An action plan data structure for one or more selected rendered documents is described. The data structure contains information specifying an action to perform automatically in response to a text capture from any of the selected rendered documents.
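The abstract describes an action-plan data structure that ties a set of selected rendered documents to actions triggered automatically by a text capture. Below is a minimal sketch of how such a structure might be organized; the type and property names (`ActionPlan`, `DocumentId`, `CaptureAction`) are illustrative assumptions, not taken from the patent.

```kotlin
// Hypothetical sketch of the action-plan data structure described in the
// abstract above. All names are illustrative, not the patent's own.

/** Identifies one of the selected rendered documents. */
data class DocumentId(val value: String)

/** An action to perform automatically when text is captured from a matching document. */
sealed interface CaptureAction {
    data class OpenUrl(val url: String) : CaptureAction
    data class PlayAudio(val clipName: String) : CaptureAction
    data class ShowText(val message: String) : CaptureAction
}

/**
 * Action plan for one or more selected rendered documents: a text capture
 * from any document in [documents] triggers every action in [actions].
 */
data class ActionPlan(
    val documents: Set<DocumentId>,
    val actions: List<CaptureAction>,
)

fun main() {
    // Example: a plan covering two documents that opens a companion URL on capture.
    val plan = ActionPlan(
        documents = setOf(DocumentId("doc-001"), DocumentId("doc-002")),
        actions = listOf(CaptureAction.OpenUrl("https://example.com/companion")),
    )
    println(plan)
}
```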
We claim:

1. A method in a mobile device for responding to a capture of text from a display of information that includes text, the method comprising: capturing, using an imaging component of the mobile device, an image of a display of information, wherein the image includes text; recognizing, using a text recognition component of the mobile device, at least a portion of the text within the captured image; identifying, via the mobile device, one or more actions associated with the captured text; presenting, via a display of the mobile device, a menu of user-selectable options associated with the identified one or more actions; and performing, at the mobile device, the identified one or more actions via a display and speakers of the mobile device, wherein when the action to be performed includes visual elements, performing the action using the display of the mobile device, and when the action to be performed includes audio elements, using the speakers of the mobile device.

2. The method of claim 1, wherein the identified action includes presenting, via the display of the mobile device, information associated with the captured image.

3. The method of claim 1 further comprising: identifying, at the mobile device, a digital counterpart of the display of information based on the captured text; and wherein identifying one or more actions associated with the captured text includes identifying one or more actions associated with the identified digital counterpart.

4. The method of claim 1, further comprising: determining a location at which the image capture occurred; wherein identifying one or more actions associated with the captured text includes identifying one or more actions associated with the determined location.

5. The method of claim 1, further comprising: determining an identity of a user that captured the image; wherein identifying one or more actions associated with the captured text includes identifying one or more actions associated with the determined identity.

6. A computer-readable medium whose contents cause a mobile device to perform a method of performing an action in response to an image of a display of information, the method comprising: capturing, using an imaging component of the mobile device, an image of a display of information, wherein the image includes text; recognizing, using a text recognition component of the mobile device, at least a portion of the text within the captured image; identifying, via the mobile device, one or more actions associated with the captured text; presenting, via a display of the mobile device, a menu of user-selectable options associated with the identified one or more actions; and performing, at the mobile device, the identified one or more actions via a display and speakers of the mobile device, wherein when the action to be performed includes visual elements, performing the action using the display of the mobile device, and when the action to be performed includes audio elements, using the speakers of the mobile device.
7. A system of one or more components stored in memory of a mobile device and configured to cause a processor of the mobile device to perform an action in response to an image of a display of information, the system comprising: an imaging component, wherein the imaging component is configured to capture an image of a display of information that includes text; a text recognition component, wherein the text recognition component is configured to recognize at least a portion of the text within the captured image; an action identification component, wherein the action identification component is configured to identify one or more performable actions associated with the captured image; a menu component, wherein the menu component is configured to present, via a display of the mobile device, a menu of user-selectable options associated with the identified one or more actions; and an action performance component, wherein the action performance component is configured to perform actions that are associated with options selected by a user using a display and speaker of the mobile device.

8. The system of claim 7, wherein the action identification component is configured to identify at least one of the one or more actions by transmitting the recognized text to a search engine remote from the mobile device and receiving a search result from the remote search engine that includes the identified action.

9. The system of claim 7, wherein the identified one or more performable actions includes an action that retrieves real-time information associated with the captured display of information and wherein the action performance component presents the real-time information to the user.

10. The system of claim 7, wherein the identified one or more performable actions includes an action that retrieves location-based information associated with the captured display of information and wherein the action performance component presents the location information to the user.

11. The system of claim 7, wherein the identified one or more performable actions includes an action that stores the information associated with the captured display of information in memory of the mobile device and wherein the action performance component presents the stored contact information to the user.
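Claim 1 describes a capture → recognize → identify → present menu → perform pipeline, and claim 7 restates it as a set of device components. The sketch below mirrors that component structure using the claims' vocabulary; the interfaces, stub implementations, and wiring are assumptions for illustration only, not the patent's implementation, and the display/speaker branching of claim 1 is omitted for brevity.

```kotlin
// Illustrative sketch of the component structure in claims 1 and 7.
// Interface names follow the claim language; the stubs in main() are
// demonstration-only assumptions.

data class CapturedImage(val pixels: ByteArray)
data class PerformableAction(val label: String, val run: () -> Unit)

interface ImagingComponent { fun capture(): CapturedImage }
interface TextRecognitionComponent { fun recognize(image: CapturedImage): String }
interface ActionIdentificationComponent { fun identify(text: String): List<PerformableAction> }
interface MenuComponent { fun choose(actions: List<PerformableAction>): PerformableAction? }
interface ActionPerformanceComponent { fun perform(action: PerformableAction) }

/** Respond to a capture of text from a display of information, as in claim 1. */
fun respondToCapture(
    imaging: ImagingComponent,
    recognizer: TextRecognitionComponent,
    identifier: ActionIdentificationComponent,
    menu: MenuComponent,
    performer: ActionPerformanceComponent,
) {
    val image = imaging.capture()                  // capture an image that includes text
    val text = recognizer.recognize(image)         // recognize at least a portion of the text
    val actions = identifier.identify(text)        // identify associated actions
    val selected = menu.choose(actions) ?: return  // present user-selectable options
    performer.perform(selected)                    // perform the selected action
}

fun main() {
    // Stub wiring: "recognizes" a fixed string and prints the chosen action.
    respondToCapture(
        imaging = object : ImagingComponent {
            override fun capture() = CapturedImage(ByteArray(0))
        },
        recognizer = object : TextRecognitionComponent {
            override fun recognize(image: CapturedImage) = "call 555-0100"
        },
        identifier = object : ActionIdentificationComponent {
            override fun identify(text: String) =
                listOf(PerformableAction("Dial number found in \"$text\"") { println("dialing...") })
        },
        menu = object : MenuComponent {
            override fun choose(actions: List<PerformableAction>) = actions.firstOrNull()
        },
        performer = object : ActionPerformanceComponent {
            override fun perform(action: PerformableAction) {
                println("Performing: ${action.label}")
                action.run()
            }
        },
    )
}
```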
| IPC | Description |
|---|---|
| A | Human necessities |
| A62 | Life-saving; fire-fighting (ladders E06C) |
| A62B | Devices, apparatus or methods for life-saving (valves specially adapted for medical use A61M 39/00; life-saving devices or methods specially adapted for use in water B63C 9/00; diving equipment B63C 11/00; specially adapted for use with aircraft, e.g. parachutes, ejector seats B64D; rescue devices specially adapted for mines E21F 11/00) |
| A62B-1/08 | .. with brake mechanisms for the winches or pulleys |