Determining actions involving captured information and electronic content associated with rendered documents
Country / Type: United States (US) Patent — Granted
International Patent Classification (IPC, 7th edition): G06F-007/00; G06F-017/00; G06F-017/24; G06F-003/048; G06F-017/30
Application Number: US-0887473 (filed 2010-09-21)
Registration Number: US-8903759 (granted 2014-12-02)
Inventors: King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q.
Applicant: Google Inc.
Agent: Fish & Richardson P.C.
Citation Information: cited by 2 patents; cites 265 patents

Abstract
Information is captured from a rendered document with a handheld document data capture device. Electronic information associated with the rendered document is applied to determine the system's actions and/or behaviors in response to the data capture. In some embodiments, the electronic information is markup data or an action map associated with the rendered document. In some embodiments, an electronic counterpart corresponding to the rendered document is located, and information associated with the electronic counterpart is applied to determine actions and/or behaviors available to a user of the handheld document data capture device.
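The flow the abstract describes — capture text from paper, locate the electronic counterpart, then let the counterpart's markup (action map) drive the system's response — can be sketched roughly as follows. This is a minimal illustrative sketch, not the patented implementation; all class, function, and action names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ElectronicCounterpart:
    """Hypothetical stand-in for the electronic version of a printed document."""
    doc_id: str
    full_text: str
    action_map: dict  # keyword -> action name (a toy "markup data" / action map)

def locate_counterpart(captured_text, corpus):
    """Find an electronic document whose text contains the captured fragment."""
    for doc in corpus:
        if captured_text in doc.full_text:
            return doc
    return None

def determine_actions(captured_text, corpus):
    """Apply the counterpart's action map to decide system behavior."""
    doc = locate_counterpart(captured_text, corpus)
    if doc is None:
        return ["web_search"]  # illustrative fallback when no counterpart is found
    return [action for kw, action in doc.action_map.items() if kw in captured_text]
```

A real system would match OCR'd text approximately rather than with substring containment, but the shape of the pipeline is the same.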
Representative Claims
1. A method in a computing system, comprising: receiving captured text, the captured text being a text transcription of an image of a human-perceptible textual portion of a printed document captured by a handheld information capture device; identifying an electronic counterpart to the printed document from the captured text, wherein the printed document has one or more context points, each of the context points is a visible feature of the printed document, each of the context points is associated in the electronic counterpart with an action, and each of the context points has a location within the printed document; in response to receiving the captured text, determining a plurality of actions that would be appropriate to perform in response to the capture of the captured text based on the captured text or the electronic counterpart to the printed document; determining a location within the printed document from which the captured text was captured; determining distances between the location within the printed document of the captured text and the locations of one or more context points within the printed document; and selecting an action that is associated with one or more of the context points, wherein selecting is based at least in part on the determined distances.

2. The method of claim 1, further comprising receiving information related to the determined location within the printed document of the captured text.

3. The method of claim 2, wherein receiving information related to the determined location further comprises: receiving information related to at least one of a title, heading, caption, footnote, endnote, or citation.

4. The method of claim 1, wherein selecting an action that is associated with one or more of the context points further comprises: determining a likelihood that a user of the handheld information capture device will select the action.

5. The method of claim 1, wherein a determined distance of the determined distances is at least one of a word distance, a sentence distance, a paragraph distance, a page distance, or a physical distance.

6. The method of claim 1, wherein the context points within the printed document are located in at least one or more of a title, a heading, a photo, an illustration, a caption, a footnote, an endnote, a citation, a keyword, a name, a location, a phone number, or an address.

7. The method of claim 1, further comprising: identifying supplemental information in the captured text; and determining the action to be performed based at least in part on the identified information.

8. The method of claim 1, wherein selecting an action that is associated with one or more of the context points further comprises: identifying at least one keyword in the captured text; locating a keyword in information associated with the printed document, wherein the information maps regions of the printed document to regions of the electronic counterpart to the printed document and wherein the information comprises at least one of actions or rules for determining actions; and performing the action associated with the located keyword.

9. The method of claim 1, further comprising: receiving markup data stored in the electronic counterpart to the printed document, wherein the markup data comprises at least one of actions or rules for determining actions, and wherein the markup data is applied to perform one or more actions that involve the captured text.

10. The method of claim 9, wherein the context information is in the form of digital information associated with the printed document comprising multimedia data, reference materials, or a link to an online discussion forum.

11. The method of claim 1, further comprising: determining at least one category to which the printed document belongs; and selecting an action based at least in part on the determined category.

12. The method of claim 11, wherein determining at least one category further comprises: referring to categories previously determined for the printed document.

13. The method of claim 1, wherein receiving captured text from a printed document further comprises: receiving information identifying a region of the printed document from which the captured text was captured; and determining the action based at least in part on the received information related to the region.

14. The method of claim 13, wherein receiving information from a printed document further comprises: receiving information identifying a type of the identified region of the printed document.

15. The method of claim 14, wherein receiving information identifying a type of the identified region of the printed document further comprises: identifying at least one or more of an advertising region, a product or products region, a sports region, a news region, a science region, a technology region, a hobby region, an entertainment region, an opinion region, a finance region, a business region, a classified region, an arts region, a real estate region, or a subject region.

16. The method of claim 13, wherein determining the action based at least in part on the received information related to the region further comprises: combining actions determined from more than one region identified as comprising the captured information.

17. The method of claim 13, wherein determining the action based at least in part on the received information related to the region further comprises: combining actions determined from regions of more than one related document.

18. The method of claim 13, wherein determining the action based at least in part on the received information related to the region further comprises: combining actions determined from regions of multiple documents in a document hierarchy.

19. The method of claim 18, wherein the document hierarchy further comprises: a page within an article within a magazine, a page within a section within a newspaper, or a page within a chapter within a book within an anthology.

20. The method of claim 1, further comprising: displaying to a user an indication of the action to be performed; and upon the user selecting the action to be performed, performing the determined action.

21. A method in a computing system for providing a choice of actions for selection by a user, comprising: receiving captured text, the captured text being a text transcription of an image of a human-perceptible textual portion of a printed document captured by a handheld information capture device; identifying an electronic counterpart to the printed document from the captured text, wherein the printed document has one or more context points, each of the context points is a visible feature of the printed document, each of the context points is associated in the electronic counterpart with an action, and each of the context points has a location within the printed document; determining a location within the printed document from which the captured text was captured; determining distances between the location within the printed document of the captured text and the locations of one or more context points within the printed document; identifying markup data that maps regions of the printed document to regions of the electronic counterpart to the printed document, wherein the markup data comprises at least one of actions or rules for determining actions; determining a plurality of actions to be performed, based at least in part on the determined distances, and based at least in part on markup data associated with the received text; and providing a menu of choices to the user that includes at least a portion of the determined plurality of actions.

22. The method of claim 21, further comprising automatically performing one or more of the determined plurality of actions.

23. A system that performs an action in response to a capture of data from a printed document, comprising: a hand-held device, the hand-held device comprising a data capture component that stores captured text, the captured text being a text transcription of an image of human-perceptible textual data captured from the printed document, and a context component that determines context information that is current at a time of the data capture; and an action component that receives the captured textual data and the determined context information to identify an electronic counterpart to the printed document from the captured text, wherein the printed document has one or more context points, each of the context points is a visible feature of the printed document, each of the context points is associated in the electronic counterpart with an action, and each of the context points has a location within the printed document, and that performs an action based on the received captured data and the received context information by determining a location within the printed document from which the captured text was captured, and by determining distances between the location of the captured text and context points within the printed document; wherein the action is performed based at least in part on the determined distances.

24. The method of claim 21, wherein the plurality of actions include launching an application.

25. The method of claim 21, wherein the plurality of actions include scrolling to the captured text in the electronic counterpart.

26. The method of claim 21, wherein the plurality of actions include highlighting the captured text in the electronic counterpart.

27. The method of claim 21, wherein the plurality of actions include placing a bookmark in the electronic counterpart.

28. The method of claim 21, wherein the plurality of actions comprise providing access to a product sale or promotion, access to a chat room, creation of or access to a discussion thread, or access to a bulletin board.

29. The method of claim 21, wherein the plurality of actions comprise obtaining access to the electronic counterpart.

30. The method of claim 21, wherein the plurality of actions comprise storing a copy of the electronic counterpart or a link to the electronic counterpart in a personal archive.

31. The system of claim 23, wherein the action comprises launching an application.

32. The system of claim 23, wherein the action comprises scrolling to the captured text in the electronic counterpart.

33. The system of claim 23, wherein the action comprises highlighting the captured text in the electronic counterpart.

34. The system of claim 23, wherein the action comprises placing a bookmark in the electronic counterpart.

35. The system of claim 23, wherein the action comprises providing access to a product sale or promotion, access to a chat room, creation of or access to a discussion thread, or access to a bulletin board.

36. The system of claim 23, wherein the action comprises obtaining access to the electronic counterpart.

37. The system of claim 23, wherein the action comprises storing a copy of the electronic counterpart or a link to the electronic counterpart in a personal archive.
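The core selection step of claim 1 — measure the distance from the capture location to each context point and pick the action whose context point is nearest — can be sketched as below. This is an illustrative reduction, assuming word-offset locations as a stand-in for the claim's word, sentence, paragraph, page, or physical distances; the function and action names are hypothetical.

```python
def select_action(capture_location, context_points):
    """Pick the action whose context point lies nearest the capture.

    capture_location: word offset where the text was captured.
    context_points: list of (location, action) pairs, each location a word
    offset of a visible feature (title, caption, footnote, ...) in the
    printed document.
    """
    # Distance from the captured text to each context point.
    distances = [(abs(loc - capture_location), action)
                 for loc, action in context_points]
    # min() on (distance, action) tuples selects the smallest distance.
    _, nearest_action = min(distances)
    return nearest_action
```

Claim 21's menu variant would keep the whole distance-sorted list and present several of the top actions to the user instead of choosing one automatically.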
King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Adding information or functionality to a rendered document via association with an electronic counterpart.
Mandella, Michael J.; Gonzalez-Banos, Hector H.; Alboszta, Marek, Apparatus and method for determining an absolute pose of a manipulated object in a real three-dimensional environment with invariant features.
Buermann, Dale H.; Gonzalez Banos, Hector H.; Mandella, Michael J.; Carl, Stewart R., Apparatus and method for determining an inclination of an elongate object contacting a plane surface.
Zhang, Guanghua G.; Buermann, Dale H.; Mandella, Michael J.; Gonzalez Banos, Hector H.; Carl, Stewart R., Apparatus and method for determining orientation parameters of an elongate object.
Hill, Richard A.; Penn, Richard; Schoemig, Ewald; Summers, Egil K.; Lace, William H.; Pheil, Louis D., Apparatus and method for gathering and utilizing data.
Nelson Douglas J. ; Schone Patrick John ; Bates Richard Michael, Automatically generating a topic description for text and searching and sorting text by topic using the same.
King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford Fraser, James Q., Capturing text from rendered documents using supplemental information.
Ueda Toru (Nara JPX) Ishizuka Yasushi (Yamatokooriyama OR JPX) Togawa Fumio (Hillsboro OR), Character recognition device which divides a single character region into subregions to obtain a character code.
Ueda Toru (Nara JPX) Ishizuka Yasushi (Yamatokooriyama OR JPX) Togawa Fumio (Hillsboro OR) Aramaki Takashi (Hillsboro OR), Character recognition device which divides a single character region into subregions to obtain a character code.
Mandella, Michael J.; Gonzalez-Banos, Hector H.; Alboszta, Marek, Computer interface employing a manipulated object with absolute pose detection component and a display.
Johnson Noel L. (San Jose CA) Huang Jyh-yi T. (Sunnyvale CA) Chang Tao (Saratoga CA), Control of a multi-channel drug infusion pump using a pharmacokinetic model.
Shepard Howard M. (Great River NY) Barkan Edward D. (South Setauket NY) Swartz Jerome (Stonybrook NY), Hand held bar code reader with input and display device and processor.
King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Handheld device for capturing text from both a document printed on paper and a document displayed on a dynamic display device.
Grajski Kamil A. (San Jose CA) Chow Yen-Lu (Saratoga CA) Lee Kai-Fu (Saratoga CA), Handwriting signal processing front-end for handwriting recognizers.
Deng, Zhong John; Gowda, Sudhir Muniswamy; Karidis, John P.; Pearson, Dale Jonathan; Singh, Rama Nand; Wong, Hon Sum Philip; Yang, Jungwook, Image capture system for mobile communications.
Carl, Stewart R.; Alboszta, Marek; Mandella, Michael J.; Gonzalez, Hector H; Hawks, Timothy, Implement for optically inferring information from a jotting surface and environmental landmarks.
Perkowski, Thomas J., Internet-based system for managing and delivering consumer product information at points along the world wide web using consumer product information (CPI) requesting and graphical user interface (GUI) displaying subsystems driven by server-side components and managed by consumer product manufacturers and/or authorized parties.
Kyu-Young Whang KR; Byung-Kwon Park KR; Wook-Shin Han KR; Young-Koo Lee KR, Inverted index storage structure using subindexes and large objects for tight coupling of information retrieval with database management systems.
Lamming, Michael G.; MacLean, Allan; Frayling, Anthony F., Method and apparatus for controlling document service requests using a mobile computing device.
Buermann, Dale H.; Mandella, Michael J.; Carl, Stewart R.; Zhang, Guanghua G.; Gonzalez Banos, Hector H., Method and apparatus for determining absolute position of a tip of an elongate object on a plane surface with invariant features.
Lamming, Michael G.; MacLean, Allan; Frayling, Anthony F., Method and apparatus for processing document service requests originating from a mobile computing device.
Nicholson Dennis G. (Atherton CA) King James C. (San Jose CA), Method and apparatus for producing a hybrid data structure for displaying a raster image.
Withgott M. Margaret ; Newman William,GB2 ; Bagley Steven C. ; Huttenlocher Daniel P. ; Kaplan Ronald M. ; Cass Todd A. ; Halvorsen Per-Kristian ; Brown John Seely ; Kay Martin, Method and apparatus for supplementing significant portions of a document selected without document image decoding wit.
Carro, Fernando Incertis, Method and system for accessing interactive multimedia information or services by touching highlighted items on physical documents.
Leban, Roy; Sellers, Timothy D.; Matlock, Stephen; Parveen, Shaheeda, Method and system for automatic insertion of context information into an application program module.
Charlotte S. Lombardo ; Eleanor F. Stryker ; David A. Brenner ; Jayson W. Dymond ; Dave R. Lemieux, Method and system for automating the communication of business information.
Deaton David W. (Abilene) Gabriel Rodney G. (Abilene TX), Method and system for building a database and performing marketing based upon prior shopping history.
Williams William J. ; Zalubas Eugene J. ; Nickel Robert M. ; Hero ; III Alfred O. ; O'Neill Jeffrey C., Method and system for extracting features in a pattern recognition system.
Dockter, Michael J.; Doerre, Jochen F.; Lynn, Ronald W.; Munoz, Joseph A.; Richardt, Randal J.; Seiffert, Roland, Method and system for improving a text search.
Williams, Peter, Method and system for interactively providing product related information on demand and providing personalized transactional benefits at a point of purchase.
Braun, John F.; Rojas, John W.; Norris, James R.; Coffy, Jean Hiram; Parkos, Arthur; Leung, Alan; Leung, Wendy Chui Fen, Method and system for remote form completion.
Ramkumar, Gurumurthy D; Manmatha, Raghavan; Bhattacharyya, Supratik; Bhargava, Gautam; Ruzon, Mark, Method and system for searching for information on a network in response to an image query sent by a user from a mobile communications device.
Kwatinetz Andrew ; Leblond Antoine ; Peters G. Christopher ; Hirsch Stephen M., Method and system for selecting text with a mouse input device in a computer system.
Fan Zhigang ; Cooperman Robert ; Shuchatowitz Robert ; Hadden Lucy ; Rainero Emil ; Roberts ; Jr. Frederick, Method of estimating at least one run-based font attribute of a group of characters.
Pasqualini, Andrea; Bos, Dennis Erwin, Method of moving a device provided with a camera to a desired position by means of a control system, and such a system.
King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford Fraser, James Q., Methods, systems and computer program products for data gathering in a digital and hard copy document environment.
Glickman David (Frederick MD) Repass James T. (Round Rock TX) Rosenbaum Walter S. (Bethesda MD) Russell Janet G. (Bethesda MD), Office correspondence storage and retrieval system.
Yamaguchi Mikio (Osaka JPX) Maeda Naoki (Osaka JPX), Optical character reader for outputting a character from combinations of possible representations of the character.
Gaborski Roger S. (Pittsford NY) Beato Louis J. (Rochester NY) Barski Lori L. (Pittsford NY) Tan Hin-Leong (Rochester NY) Assad Andrew M. (N. Chili NY) Dutton Dawn L. (Buffalo NY), Optical character recognition neural network system for machine-printed characters.
Roustaei Alexander R. ; Lawrence Roland L. ; Lebaschi Ali ; Bian Long-Xiang ; Fisher Donald, Optical scanner for reading and decoding one- and-two-dimensional symbologies at variable depths of field including mem.
Swift Philip ; Jenkins Ian ; Barkan Edward ; Curry Daniel ; Oppenheim Ellen ; Ryder Michael ; Wild Robert ; Carricato John ; Tu Jean ; Giordano Joseph, Optical scanner with hand-held and hands-free modes of use.
Kumar Rajendra (Akron OH) Ritchie George D. V. (Akron OH), Portable device for handsfree data entry with variably-positionable display/scanner module detachable for handheld use.
King, Martin T.; Kushler, Clifford A.; Stafford-Fraser, James Q.; Grover, Dale L., Processing techniques for visual capture data from a rendered document.
Howard Shepard ; Edward D. Barkan ; Paul Dvorkis ; Boris Metlitsky ; Raj Bridgelall ; Vladimir Gurevich ; Mark Krichever ; Yajun Li ; Joseph Katz ; Vincent Luciano, Retro-reflective scan module for electro-optical readers.
Piersol, Kurt, System and method of managing queues by maintaining metadata files having attributes corresponding to capture of electronic document and using the metadata files to selectively lock the electronic do.
Jain Ramesh ; Horowitz Bradley ; Fuller Charles E. ; Gupta Amarnath ; Bach Jeffrey R. ; Shu Chiao-fe, Similarity engine for content-based retrieval of images.
Lazarus Michael A. ; Caid William R. ; Pugh Richard S. ; Kindig Bradley D. ; Russell Gerald S. ; Brown Kenneth B. ; Dunning Ted E. ; Carleton Joel L., System and method for optimal adaptive matching of users to most relevant entity and information in real-time.
Carro, Fernando Incertis; Barbero, Jose Maria Varona, System and method for selecting, ordering and accessing copyrighted information from physical documents.
Embry Leo J. ; Franke Daniel G., System for coupling a host computer to an image scanner in which high level functions are migrated to the attached host computer.
Meshinsky John ; Hammond James ; Sherman David ; Sweeting Thomas ; Branche Stan ; Tighe Kenneth ; Fleischer Timothy, System for performing multiple processes on images of scanned documents.
Suda Aruna Rohra,JPX ; Ibaraki Shouichi,JPX ; Takayama Masayuki,JPX ; Wakai Masanori,JPX ; Mikame Shuichi,JPX ; Fujii Kenichi,JPX ; Takahashi Satomi,JPX ; Jeyachandran Suresh,JPX, System for transferring jobs between processing units based upon content of job and ability of unit to perform job.
Altman, Gerald, Systems, processes, and products for storage and retrieval of physical paper documents, electro-optically generated electronic documents, and computer generated electronic documents.
King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford Fraser, James Q., Triggering actions in response to optically or acoustically capturing keywords from a rendered document.
King, Martin Towle; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Quentin, Triggering actions in response to optically or acoustically capturing keywords from a rendered document.
Hull, Jonathan J.; Erol, Berna; Graham, Jamey; Hart, Peter E.; Lee, Dar-Shyang; Piersol, Kurt, Triggering applications based on a captured text in a mixed media environment.
Yutaka Nio JP; Naoji Okumura JP; Katsumi Terai JP; Kazuto Tanaka JP; Satoshi Okamoto JP; Masaaki Fujita JP; Minoru Miyata JP, Video display apparatus with scan conversion and reversion and a video display method using scan conversion and reversion.
Arputharaj, Vinothkumar; Duraibabu, Bala Vijay; Sreekumar, Aravind; Prabhat, Saurabh, Methods and systems for capturing, sharing, and printing annotations.