A recognition tool according to various examples of the invention intelligently recognizes natural input before it is passed to a destination or target application. More particularly, the recognition tool according to various examples of the invention provides better formatting for text recognized from natural input, based upon the context in which the text is being inserted into a target application. The recognition tool also provides various tools for correcting inaccurately recognized text. The recognition tool may allow a user to select only a part of inaccurate text, and then identify alternate text candidates based upon the selected portion of the inaccurate text. Further, when the user selects text containing multiple words for correction, the recognition tool provides cross combinations of alternate text candidates for the user's selection. Still further, if the user replaces inaccurate text by submitting a new natural input object, the recognition tool ensures that the text recognized from the new natural input object is different from the inaccurate text being replaced. The recognition tool additionally affects the recognition experience after recognized text has been provided to the target application. The recognition tool provides the target application with the original natural input object for the recognized text, along with the alternate text candidates for that original natural input object. Thus, the target application can use the alternate text candidates to correct inaccurately recognized text. Further, a user can insert the original natural input object for recognized text within the target application.
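To make this behavior concrete, the following is a minimal, illustrative sketch (not code from the patent) of how recognized text could be kept together with its original natural input object and its alternate candidates, and how cross combinations of per-word alternates could be offered when a user selects several words for correction. All class and function names here (RecognitionResult, cross_combinations) are hypothetical.

```python
from dataclasses import dataclass
from itertools import product
from typing import List

@dataclass
class RecognitionResult:
    """Hypothetical container pairing recognized words with the original natural
    input object and the per-word alternate candidates, so a target application
    can correct the text later or retrieve the original input."""
    natural_input: bytes          # the original ink or speech data, opaque here
    recognized: List[str]         # best candidate chosen for each word
    alternates: List[List[str]]   # remaining candidates for each word

def cross_combinations(result: RecognitionResult, start: int, end: int) -> List[str]:
    """List every cross combination of candidates for the selected word range
    [start, end), so a multi-word selection can be corrected in one step."""
    per_word = [[result.recognized[i], *result.alternates[i]] for i in range(start, end)]
    return [" ".join(choice) for choice in product(*per_word)]

# Example: both words were recognized inaccurately and the user selects them together.
result = RecognitionResult(
    natural_input=b"<ink strokes>",
    recognized=["two", "deer"],
    alternates=[["to", "too"], ["dear", "deer's"]],
)
print(cross_combinations(result, 0, 2))
# ['two deer', 'two dear', "two deer's", 'to deer', 'to dear', ...]
```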
Representative Claims
What is claimed is:

1. A method of generating text from a natural input object for insertion into a target application, comprising: receiving a natural input object from which a recognized text is to be recognized; recognizing the recognized text from the natural input object; obtaining from the target application a context for insertion of recognized text recognized from the natural input object into an insertion location of the target application, wherein the context depends upon a length of the insertion location; assigning a category to a field in the target application into which the recognized text will be inserted, wherein the field includes the insertion location; formatting the recognized text by providing spacing for the recognized text according to the category for the field in the target application into which the recognized text will be inserted; and generating recognized text from the natural input object based upon the obtained context.

2. The method of generating text recited in claim 1, further including: determining if context information is provided for the field in the target application into which the recognized text will be inserted; and if context information is not provided for the field in the target application into which the recognized text will be inserted, then categorizing the field into a first category.

3. The method of generating text recited in claim 2, further including: determining if the context information allows for control of insertion of text into the field in the target application into which the recognized text will be inserted; and if the context information allows for control of insertion of text into the field in the target application into which the recognized text will be inserted, then categorizing the field into a second category.

4. The method of generating text recited in claim 1, further including: providing spacing for the recognized text based upon a language of the recognized text.

5. The method of generating text recited in claim 1, further comprising: determining if text before the insertion location forms a word when concatenated with the recognized text; and if text before the insertion location forms a word when concatenated with the recognized text, omitting a space before the recognized text.

6. The method of generating text recited in claim 1, further comprising: determining if text after the insertion location forms a word when concatenated with the recognized text; and if text after the insertion location forms a word when concatenated with the recognized text, omitting a space after the recognized text.

7. The method of generating text recited in claim 1, wherein formatting the recognized text includes determining capitalization for the recognized text.

8. The method of generating text recited in claim 7, wherein formatting the recognized text includes capitalizing every character in the recognized text.

9. The method of generating text recited in claim 7, wherein formatting the recognized text includes retaining a capitalization case, recognized from the natural input object, in the recognized text.

10. The method of generating text recited in claim 1, wherein formatting the recognized text includes determining punctuation for the recognized text.

11. The method of generating text recited in claim 10, wherein formatting the recognized text includes adding punctuation to the recognized text.

12. The method of generating text recited in claim 1, wherein generating recognized text from the natural input object includes: determining if the recognized text is replacing text existing in the target application; and if the recognized text is replacing text existing in the target application, recognizing the recognized text from the natural input object to be different from the existing text being replaced.

13. The method of generating text recited in claim 1, wherein the context is obtained from a correction process for correcting existing text.

14. The method of generating text recited in claim 13, further comprising: determining if the recognized text is replacing existing text; and if the recognized text is replacing text existing in the target application, recognizing the recognized text from the natural input object to be different from the existing text being replaced.

15. A computer-readable storage medium storing computer-executable instructions for implementing a recognition tool for recognizing text from a natural input object, the computer-executable instructions comprising: a recognition context module that, when executed by a computer, determines a context of an insertion location into which a recognized text will be inserted, wherein the context depends upon a length of the insertion location; and a recognition module that, when executed by the computer: recognizes the recognized text from the natural input object; assigns a category to a field in a target application into which the recognized text will be inserted, wherein the field includes the insertion location; formats the recognized text by providing spacing for the recognized text according to the category for the field in the target application into which the recognized text will be inserted; identifies one or more text candidates corresponding to the natural input object; selects one of the one or more text candidates that most closely corresponds to the natural input object; and generates and displays text from the selected text candidate based upon the context determined by the recognition context module.

16. The computer-readable storage medium recited in claim 15, wherein when executed by the computer, the recognition module selects the one of the one or more text candidates based upon the context determined by the recognition context module.

17. The computer-readable storage medium recited in claim 16, wherein when executed by the computer, the recognition module formats the selected text candidate based upon the context determined by the recognition context module.

18. The computer-readable storage medium recited in claim 15, wherein when executed by the computer, the recognition module formats the selected text candidate based upon the context determined by the recognition context module.

19. The computer-readable storage medium recited in claim 18, wherein when executed by the computer, the recognition module provides spacing for the selected text candidate based upon the context determined by the recognition context module.

20. The computer-readable storage medium recited in claim 18, wherein when executed by the computer, the recognition module determines capitalization for the selected text candidate based upon the context determined by the recognition context module.

21. The computer-readable storage medium recited in claim 18, wherein when executed by the computer, the recognition module determines punctuation for the selected text candidate based upon the context determined by the recognition context module.

22. The computer-readable storage medium recited in claim 15, wherein the computer-executable instructions further comprise: an object model module that, for each of a plurality of selected text candidates, stores the natural input object for which the selected text candidate was selected, and further stores the one or more text candidates corresponding to the natural input object for which the selected text candidate was selected.

23. The computer-readable storage medium recited in claim 15, wherein the computer-executable instructions further comprise: a correction module that, when executed by the computer, provides a user interface for correcting text generated by the recognition module.

24. The computer-readable storage medium recited in claim 23, wherein when executed by the computer: if the correction module receives a natural input object to replace existing text, the correction module notifies the recognition context module of the existing text that is being replaced; and the recognition context module prevents the recognition module from generating text identical to the existing text that is being replaced.
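Claims 1 through 6 describe assigning a category to the destination field and spacing the recognized text according to that category and the text around the insertion location. The sketch below is one illustrative reading of those steps, not the patented implementation: the category names, the LEXICON word test, and the helper functions are invented for the example.

```python
from enum import Enum, auto

class FieldCategory(Enum):
    """Categories assigned to a destination field (claims 1-3). The names are
    illustrative; the claims only refer to a 'first' and 'second' category."""
    NO_CONTEXT = auto()            # first category: field exposes no context information
    CONTROLLED_INSERTION = auto()  # second category: field allows controlled insertion

def categorize_field(has_context_info: bool, allows_insertion_control: bool) -> FieldCategory:
    """Categorize a field as in claims 2 and 3."""
    if not has_context_info:
        return FieldCategory.NO_CONTEXT
    if allows_insertion_control:
        return FieldCategory.CONTROLLED_INSERTION
    # Treatment of fields with context but no insertion control is not specified
    # by the claims; defaulting to the first category is an assumption here.
    return FieldCategory.NO_CONTEXT

# Stand-in lexicon for the word-join test of claims 5 and 6.
LEXICON = {"notebook", "keyboard", "football"}

def is_word(text: str) -> bool:
    return text.lower() in LEXICON

def format_for_insertion(recognized: str, before: str, after: str,
                         category: FieldCategory) -> str:
    """Space the recognized text for its insertion location (claims 1, 5, 6):
    omit a space on a side where concatenation with the neighboring text forms
    a word. Spacing is only adjusted for the controlled-insertion category."""
    if category is not FieldCategory.CONTROLLED_INSERTION:
        return recognized  # no usable context: insert the text as recognized
    left = "" if (not before or is_word(before + recognized)) else " "
    right = "" if (not after or is_word(recognized + after)) else " "
    return f"{left}{recognized}{right}"

category = categorize_field(has_context_info=True, allows_insertion_control=True)
# "note" immediately precedes the insertion point, so no leading space is added.
print(repr(format_for_insertion("book", before="note", after="for class",
                                category=category)))
# 'book ' -> inserted between "note" and "for class", this yields "notebook for class"
```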
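Claims 12, 14, and 24 require that text recognized from new input which replaces existing text differ from the text being replaced. A small sketch of applying that constraint to a ranked candidate list follows; the ranking and the pick_replacement helper are assumptions made for illustration only.

```python
from typing import List, Optional

def pick_replacement(candidates: List[str], text_being_replaced: Optional[str]) -> str:
    """Return the best-ranked candidate, skipping any candidate identical to the
    text the user is replacing (claims 12, 14, and 24). Candidates are assumed
    to be ordered from most to least likely."""
    for candidate in candidates:
        if text_being_replaced is None or candidate != text_being_replaced:
            return candidate
    # Every candidate matched the replaced text; fall back to the top candidate.
    return candidates[0]

# The user rewrote "deer"; the recognizer again ranks "deer" first, so the
# next-best alternate is returned instead of repeating the rejected text.
print(pick_replacement(["deer", "dear", "dean"], "deer"))  # dear
```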
Mitchell John C.,GB2 ; Heard Alan James,GB2 ; Corbett Steven Norman,GB2 ; Daniel Nicholas John,GB2, Automated proofreading using interface linking recognized words to their audio data while text is being changed.
Shieber Stuart M. ; Armstrong John ; Baptista Rafael Jose ; Bentz Bryan A. ; Ganong ; III William F. ; Selesky Donald Bryant, Command parsing and rewrite system.
Young Jonathan Hood ; Parmenter David Wilsberg ; Roth Robert ; Dubach Joev ; Gadbois Gregory J. ; Van Even Stijn, Error correction in speech recognition.
Vanbuskirk Ronald E. ; Lewis James R. ; Ortega Kerry A. ; Wang Huifang, Method and apparatus for correcting misinterpreted voice commands in a speech recognition system.
de Hita Carolina Rubio,BEX ; Akker David van den,BEX ; Govaers Erik C. E.,BEX ; Platteau Frank M. J.,BEX ; Deun Kurt Van,BEX ; Macpherson Melissa ; de Bie Peter,BEX ; Laviolette Sophie,BEX, Natural language information retrieval system and method.
Gould, Joel M.; Bamberg, Paul G.; Ingold, Charles E.; Bayse, Kenneth J.; Elkins, Michael L.; Matus, Roger L.; Fieleke, Eric, Performing actions identified in recognized speech.
Gould Joel M. ; McGrath Frank J. ; Squires Steven D. ; Parke Joel W. ; Roberts Jed M., Speech recognition system which creates acoustic models by concatenating acoustic models of individual words.
Gruber, Thomas R.; Sabatelli, Alessandro F.; Aybes, Alexandre A.; Pitschel, Donald W.; Voas, Edward D.; Anzures, Freddy A.; Marcos, Paul D., Actionable reminder entries.
Gruber, Thomas Robert; Sabatelli, Alessandro F.; Aybes, Alexandre A.; Pitschel, Donald W.; Voas, Edward D.; Anzures, Freddy A.; Marcos, Paul D., Active transport based notifications.
Carson, David A.; Keen, Daniel; Dibiase, Evan; Saddler, Harry J.; Iacono, Marco; Lemay, Stephen O.; Pitschel, Donald W.; Gruber, Thomas R., Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant.
Davis, Shawna Julie; Chin, Peter G.; Sengupta, Tirthankar; Singhal, Priyanka; Carter, Benjamin F.; Davis, Peter Gregory, Easy word selection and selection ahead of finger.
Gruber, Thomas Robert; Cheyer, Adam John; Kittlaus, Dag; Guzzoni, Didier Rene; Brigham, Christopher Dean; Giuli, Richard Donald; Bastea-Forte, Marcello; Saddler, Harry Joseph, Intelligent automated assistant.
Os, Marcel Van; Saddler, Harry J.; Napolitano, Lia T.; Russell, Jonathan H.; Lister, Patrick M.; Dasari, Rohit, Intelligent automated assistant for TV user interactions.
Van Os, Marcel; Saddler, Harry J.; Napolitano, Lia T.; Russell, Jonathan H.; Lister, Patrick M.; Dasari, Rohit, Intelligent automated assistant for TV user interactions.
Gruber, Thomas Robert; Saddler, Harry Joseph; Cheyer, Adam John; Kittlaus, Dag; Brigham, Christopher Dean; Giuli, Richard Donald; Guzzoni, Didier Rene; Bastea-Forte, Marcello, Paraphrasing of user requests and results by automated digital assistant.
Nagatomo, Kentarou, Speech recognition system, method, and computer readable medium that display recognition result formatted in accordance with priority.
Naik, Devang K.; Gruber, Thomas R.; Weiner, Liam; Binder, Justin G.; Srisuwananukorn, Charles; Evermann, Gunnar; Williams, Shaun Eric; Chen, Hong; Napolitano, Lia T., System and method for user-specified pronunciation of words for speech synthesis and recognition.
Gruber, Thomas Robert; Brigham, Christopher Dean; Keen, Daniel S.; Novick, Gregory; Phipps, Benjamin S., Using context information to facilitate processing of commands in a virtual assistant.