System and method for issuing commands based on pen motions on a graphical keyboard
IPC classification
Country / Type
United States(US) Patent
Granted
International Patent Classification (IPC, 7th edition)
G06F-003/048
G06F-003/033
G06F-003/041
G06F-003/02
A63F-009/24
G06K-009/00
Application number
US-0121637
(2005-05-04)
Registration number
US-7487461
(2009-02-03)
Inventors / Address
Zhai, Shumin
Kristensson, Per Ola
Applicant / Address
International Business Machines Corporation
Agent / Address
Shimokaji & Associates, P.C.
Citation information
Cited by: 107
Cited patents: 11
Abstract
A command pattern recognition system based on a virtual keyboard layout combines pattern recognition with a virtual, graphical, or on-screen keyboard to provide a command control method with relative ease of use. The system allows the user to conveniently issue commands on pen-based computing or communication devices. The system supports a very large set of commands, including practically all commands needed for any application. By utilizing shortcut definitions, it can work with any existing software without modification. In addition, the system utilizes various techniques to achieve reliable recognition of a very large gesture vocabulary. Further, the system provides feedback and display methods to help the user effectively use and learn command gestures for commands.
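The abstract's core idea, deriving a command's gesture template from a shortcut letter sequence traced over a graphical keyboard and recognizing an input stroke by proximity to those templates, can be sketched as follows. This is an illustrative reconstruction, not the patented implementation: the QWERTY key geometry, uniform arc-length resampling, and mean point-to-point distance are all assumptions.

```python
# Hypothetical sketch: command templates from shortcut letter sequences on a
# graphical keyboard; recognition by nearest template. Not the patented method.
import math

ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
# Approximate key centers: each successive row is offset half a key rightward.
KEY = {c: (x + 0.5 * y, float(y))
       for y, row in enumerate(ROWS) for x, c in enumerate(row)}

def template(shortcut):
    """A command template: the polyline through the shortcut's key centers."""
    return [KEY[c] for c in shortcut]

def resample(points, n=32):
    """Resample a polyline to n points evenly spaced along its arc length."""
    d = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        d.append(d[-1] + math.hypot(x1 - x0, y1 - y0))
    total = d[-1] or 1.0
    out = []
    for i in range(n):
        t = total * i / (n - 1)
        j = max(k for k in range(len(d)) if d[k] <= t)
        if j >= len(points) - 1:
            out.append(points[-1])
        else:
            seg = (d[j + 1] - d[j]) or 1.0
            a = (t - d[j]) / seg
            (x0, y0), (x1, y1) = points[j], points[j + 1]
            out.append((x0 + a * (x1 - x0), y0 + a * (y1 - y0)))
    return out

def distance(stroke, tmpl, n=32):
    """Mean distance between corresponding resampled points."""
    pairs = zip(resample(stroke, n), resample(tmpl, n))
    return sum(math.hypot(px - qx, py - qy) for (px, py), (qx, qy) in pairs) / n

# Templates derived from existing shortcut definitions (names are examples).
COMMANDS = {name: template(name) for name in ("copy", "paste", "undo")}

def recognize(stroke):
    """Return the command whose template lies closest to the stroke."""
    return min(COMMANDS, key=lambda name: distance(stroke, COMMANDS[name]))
```

Because the templates come from shortcut definitions rather than per-application training, a recognizer of this shape can, as the abstract notes, work with existing software unmodified.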
Representative claims
What is claimed is: 1. A method of issuing a command using a graphical keyboard, comprising: inputting a movement on the graphical keyboard; automatically recognizing the inputted movement as an actual command, based on the inputted movement in relation to a layout of the graphical keyboard by: analyzing pattern aspects of the inputted movement by channels, wherein said analyzing utilizes said channels to determine location of said inputted movement and to determine shape of a single said inputted movement; comparing the pattern aspects to a command gesture database; if no match exists between the pattern aspects and the command gesture database, using context clues that are based on one or more previous commands gestured by a user and comparing the context clues with potential commands; executing the actual command; utilizing multiple command template representations for each of said actual commands whether or not said graphical keyboard contains duplicate keys; utilizing shortcut commands of a current application as command template representations of said actual commands; teaching the user with a dynamic morphing process, comprising: projecting said command template representation onto said graphical keyboard, allowing the user to see which parts of a shape least match said actual command, wherein future inputted movements more closely match said command template; and teaching the user by coloring points on a morphed command gesture, based on how closely said points match said command template, wherein: said coloring points comprise outputting colored points to said graphical keyboard, said morphed command gesture comprises inputted movements more closely matching said command template, resulting from the user adjusting to said command template representation projected onto said graphical keyboard with colored points, thereby allowing the user to see which parts of a shape least match said actual command. 2. 
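Claim 1's recognition structure — a shape channel, a location channel, an integrator, and a fallback to context clues from previous commands — can be sketched as below. The normalization, the min-over-channels integrator, and the threshold are invented for illustration and are not the claimed algorithm.

```python
# Illustrative sketch of claim 1's multi-channel recognition (assumptions:
# centroid/scale normalization, best-channel integrator, ad hoc threshold).
import math

def normalize(points):
    """Shape-channel preprocessing: translate to centroid, scale to unit size."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]
    s = max(math.hypot(x, y) for x, y in pts) or 1.0
    return [(x / s, y / s) for x, y in pts]

def channel_distance(stroke, tmpl, shape_only):
    """Location channel compares absolute points; shape channel normalizes first."""
    a = normalize(stroke) if shape_only else stroke
    b = normalize(tmpl) if shape_only else tmpl
    n = min(len(a), len(b))
    return sum(math.hypot(a[i][0] - b[i][0], a[i][1] - b[i][1])
               for i in range(n)) / n

def recognize(stroke, templates, history, threshold=0.5):
    """Integrate channels; fall back to context clues when no match is close."""
    scores = {}
    for name, tmpl in templates.items():
        shape = channel_distance(stroke, tmpl, shape_only=True)
        loc = channel_distance(stroke, tmpl, shape_only=False)
        scores[name] = min(shape, loc)  # integrator: best channel wins
    best = min(scores, key=scores.get)
    if scores[best] <= threshold:
        return best
    # No match in the template comparison: use context clues based on
    # previous commands gestured by the user.
    recent = [c for c in history if c in templates]
    return recent[-1] if recent else best
```

The point of the two channels is that a stroke can match a command either by where it is drawn (over the shortcut's keys) or by its shape alone, drawn anywhere on the keyboard.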
The method of claim 1, wherein recognizing the movement as a command comprises identifying the pattern aspects of a stroke trace in relation to a layout of the graphical keyboard. 3. The method of claim 2, wherein the command gesture database comprises stored command templates. 4. The method of claim 3, further comprising defining the command templates. 5. The method of claim 4, wherein defining the command templates comprises using trained models; and wherein recognizing the movement comprises classifying gestures according to the trained models. 6. The method of claim 2, wherein recognizing the movement comprises analyzing geometric properties of the movement trajectory. 7. The method of claim 1, wherein the input movement starting within the vicinity of a designated key triggers the recognition of the movement as an actual command. 8. The method of claim 1, wherein the graphical keyboard includes any one or more of: a virtual keyboard operated by any one of: a stylus or a digital pen; a virtual keyboard presenting a plurality of keys comprising alphabet-numeric characters; a virtual keyboard presenting a plurality of keys comprising a plurality of commands labeled with any one of: a symbol or a word, and wherein a movement trajectory is recognized as a set of commands; a virtual keyboard on a touch sensing surface sensitive to finger movement; a virtual keyboard projected onto a surface; and a virtual keyboard rendered in three dimensional space in a three dimensional display. 9. The method of claim 1, wherein the movement comprises any one of: a pen stroke, a stylus stroke, or a finger trace. 10. The method of claim 1, wherein the channels comprise any one or more of: a shape channel; a location channel; and a context model channel. 11. The method of claim 1, further comprising sensing the movement by means of a digital pen. 12. The method of claim 1, further comprising sensing the movement by means of a finger motion on a touch sensor. 13. 
The method of claim 1, further comprising sensing the movement by tracking a three-dimensional motion of any one of: a user's hand, finger, pointer, or stylus. 14. The method of claim 1, further comprising sensing the movement by means of a tracked eye-gaze movement. 15. A first computer program product having a plurality of instruction codes stored on a computer-readable medium, for issuing a command using a graphical keyboard, the computer program product comprising: a first set of instruction codes for capturing a gesture from an input movement on the graphical keyboard such that said input movement is capturable from one of an electronic white board, a touch screen monitor, a personal digital assistant, an eye-tracker, a cellular phone, a tablet computer, an electronic pen, a court reporting system, a dictation system, and a retail sales terminal; a second set of instruction codes for automatically recognizing the movement on said graphical keyboard as an actual command, based on: determining whether said movement on said graphical keyboard is short or long; if said movement on said graphical keyboard is short, said second set of instruction codes relates said input movement on the graphical keyboard with a single letter matched to said graphical keyboard at the location of said movement on said graphical keyboard; analyzing the captured gesture using at least one shape channel and at least one location channel, matching an analyzed movement on said graphical keyboard trajectory with at least one command template in a stored command template database; a third set of instruction codes for comparing an ambiguous movement on said graphical keyboard trajectory with a context model channel to provide an actual command if there is no match in the command template database; a fourth set of instruction codes for executing the actual command; a fifth set of instruction codes for adapting said first computer program product to an executing second computer program product such 
that said second computer program product captures said gesture from said input movement on said graphical keyboard; a sixth set of instruction codes for teaching the user to match said analyzed movement trajectory with one of said command templates such that said sixth set of codes outputs to the user how far said command templates differ from said analyzed movement trajectory; a seventh set of instruction codes for teaching said user to match said analyzed movement trajectory with one of said command templates by outputting colored points along said analyzed movement trajectory on said graphical keyboard, wherein said outputting colored points comprises outputting colored points to said graphical keyboard; and an eighth set of instruction codes for gradually changing said analyzed movement trajectory to match one of said command templates. 16. The computer program product of claim 15, wherein the second set of instruction codes analyzes the captured gesture using shape and location channels, matching a stroke trace with a stored command template database. 17. The computer program product of claim 16, wherein the stroke trace starting within the vicinity of a designated key triggers the recognition of the movement as a command. 18. 
An apparatus for issuing a command using a graphical keyboard, the apparatus comprising: a first sensing interface for recording movement on said graphical keyboard; a plurality of channels for automatically recognizing said movement on said graphical keyboard as a command, based on said movement on said graphical keyboard trajectory in relation to a layout of said graphical keyboard, wherein the plurality of channels identifies more than one set of pattern aspects of said movement on said graphical keyboard trajectory in relation to a layout of said graphical keyboard, wherein the plurality of channels analyze pattern aspects of said movement on said graphical keyboard trajectory and compare the pattern aspects to a command gesture database; an integrator connected to the plurality of channels, for analyzing the movement trajectory, wherein the integrator uses context clues from a context model channel, wherein the context clues are based on one or more previous commands gestured by a user and are compared with a potential command provided by the plurality of channels, wherein the integrator then outputs a best-matched command to be executed; wherein the best-matched command comprises a menu action; wherein said first sensing interface adapts to a second sensing interface installed on said apparatus wherein said movement on said graphical keyboard records movement on said second sensing interface; wherein said apparatus uses a plurality of said channels to analyze the shape of said movement on said graphical keyboard and to analyze location of said movement on said graphical keyboard in relation to said graphical keyboard; wherein said integrator teaches the user to match movement on said graphical keyboard to said command gesture by outputting information to said first sensing interface, showing a comparison of information from said command gesture database to said pattern aspects of said movement on said graphical keyboard for teaching the user to see which 
parts of said movement on said graphical keyboard least match said command gesture database; and wherein said apparatus teaches the user to match said movement on said graphical keyboard to said command gesture database by outputting colored points to said first sensing interface where said movement is on said graphical keyboard based on how closely said pattern aspects match said command gesture database, wherein said outputting colored points comprises outputting colored points to said graphical keyboard.
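The teaching behavior recited throughout the claims — gradually morphing the user's stroke toward the matched command template and coloring each point by how closely it matches — can be sketched as follows. The linear morph and the particular color thresholds are illustrative assumptions, not the claimed method.

```python
# Hypothetical sketch of the claims' feedback: morph the stroke toward the
# template and color points by remaining error, so the user can see which
# parts of the shape least match the command. Thresholds/colors are invented.
import math

def morph(stroke, tmpl, t):
    """Linearly interpolate each stroke point toward the template (0 <= t <= 1)."""
    return [(x + t * (tx - x), y + t * (ty - y))
            for (x, y), (tx, ty) in zip(stroke, tmpl)]

def color_points(stroke, tmpl, near=0.2, far=1.0):
    """Color each point by its distance to the corresponding template point."""
    colors = []
    for (x, y), (tx, ty) in zip(stroke, tmpl):
        d = math.hypot(tx - x, ty - y)
        colors.append("green" if d <= near else "yellow" if d <= far else "red")
    return colors
```

Rendering successive `morph(stroke, tmpl, t)` frames for increasing `t`, each overlaid with `color_points`, gives the dynamic morphing display the claims describe: the worst-matching segments stay red longest, showing the user where to adjust.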
Patents cited by this patent (11)
Moran, Thomas P.; Chiu, Patrick; Van Melle, William; Kurtenbach, Gordon, Apparatus and method for implementing visual animation illustrating results of interactive editing operations.
Rhyne, James R. (New Canaan, CT); Anthony, Nicos J. (Purdys, NY); Levy, Stephen E. (Valhalla, NY); Wolf, Catherine G. (Katonah, NY), Stylus-input recognition correction manager computer program product.
Lui, Charlton E.; Thacker, Charles P.; Mathews, James E.; Keely, Leroy B.; Switzer, David; Vong, William H.; Lampson, Butler W., System and method for accepting disparate types of user input.
Osterhout, Ralph F.; Haddick, John D.; Lohse, Robert Michael; Cella, Charles; Nortrup, Robert J.; Nortrup, Edward H., AR glasses with event and sensor triggered AR eyepiece interface to external devices.
Osterhout, Ralph F.; Haddick, John D.; Lohse, Robert Michael; Cella, Charles; Nortrup, Robert J.; Nortrup, Edward H., AR glasses with event and sensor triggered control of AR eyepiece applications.
Osterhout, Ralph F.; Haddick, John D.; Lohse, Robert Michael; Cella, Charles; Nortrup, Robert J.; Nortrup, Edward H., AR glasses with event and user action control of external applications.
Ng, Oliver; Langlois, Michael George; Steele, Joel Paul; Lindsay, Donald James, Display screen of a mobile communication device with graphical user interface.
Osterhout, Ralph F.; Haddick, John D.; Lohse, Robert Michael; Border, John N.; Miller, Gregory D.; Stovall, Ross W., Eyepiece with uniformly illuminated reflective display.
Miller, Gregory D.; Border, John N.; Osterhout, Ralph F., Grating in a light transmissive illumination system for see-through near-eye display glasses.
Miller, Gregory D.; Border, John N.; Osterhout, Ralph F., Optical imperfections in a light transmissive illumination system for see-through near-eye display glasses.
Pasquero, Jerome; Pinheiro, Gil; McKenzie, Donald Somerset McCulloch; Griffin, Jason Tyler; Fyke, Steven Henry; McCarty, Stephanie Elizabeth, Portable electronic device including touch-sensitive display and method of controlling same.
Border, John N.; Bietry, Joseph; Osterhout, Ralph F., See-through near-eye display glasses including a curved polarizing film in the image source, a partially reflective, partially transmitting optical element and an optically flat film.
Border, John N.; Haddick, John D.; Osterhout, Ralph F., See-through near-eye display glasses including a partially reflective, partially transmitting optical element.
Border, John N.; Osterhout, Ralph F., See-through near-eye display glasses including an auto-brightness control for the display brightness based on the brightness in the environment.
Border, John N.; Bietry, Joseph; Osterhout, Ralph F., See-through near-eye display glasses wherein image light is transmitted to and reflected from an optically flat film.
Border, John N.; Osterhout, Ralph F., See-through near-eye display glasses with a fast response photochromic film system for quick transition from dark to clear.
Border, John N.; Haddick, John D.; Osterhout, Ralph F., See-through near-eye display glasses with a light transmissive wedge shaped illumination system.
Border, John N.; Haddick, John D.; Lohse, Robert Michael; Osterhout, Ralph F., See-through near-eye display glasses with the optical assembly including absorptive polarizers or anti-reflective coatings to reduce stray light.
Spencer, Stephen Thomas; Creed, Páidí; Medlock, Benjamin William; Orr, Douglas Alexander Harper, System and method for inputting text into electronic devices.
Medlock, Benjamin William; Reynolds, Jonathan Paul, System and method for inputting text into electronic devices based on text and text category predictions.
Griffin, Jason Tyler; Pinheiro, Gil; McKenzie, Donald Somerset McCulloch; Pasquero, Jerome; Walker, David Ryan, Touchscreen keyboard predictive display and generation of a set of characters.
Griffin, Jason Tyler; Pasquero, Jerome; Mckenzie, Donald Somerset, Touchscreen keyboard providing selection of word predictions in partitions of the touchscreen keyboard.
Pasquero, Jerome; Mckenzie, Donald Somerset; Griffin, Jason Tyler, Touchscreen keyboard providing word predictions in partitions of the touchscreen keyboard in proximate association with candidate letters.
Griffin, Jason Tyler; Hamilton, Alistair Robert; Bocking, Andrew Douglas; Lazaridis, Mihal, Virtual keyboard display having a ticker proximate to the virtual keyboard.