Voice actuation with contextual learning for intelligent machine control
IPC Classification
Country / Type
United States (US) Patent
Granted
International Patent Classification (IPC, 7th ed.)
G10L-021/06
G10L-015/04
G10L-015/14
G05D-001/00
Application No.
US-0796799
(2001-03-02)
Inventor / Address
Sepe, Jr., Raymond
Applicant / Address
Electro Standards Laboratories
Agent / Address
Lacasse &
Citation Information
Cited by: 176
Patents cited: 19
Abstract
An interactive voice actuated control system for a testing machine such as a tensile testing machine is described. Voice commands are passed through a user-command predictor and integrated with a graphical user interface control panel to allow hands-free operation. The user-command predictor learns operator command patterns on-line and predicts the most likely next action. It assists less experienced operators by recommending the next command, and it adds robustness to the voice command interpreter by verbally asking the operator to repeat unlikely commanded actions. The voice actuated control system applies to industrial machines whose normal operation is characterized by a nonrandom series of commands.
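The command-predictor idea in the abstract — learning operator command patterns on-line and flagging unlikely recognized commands — can be sketched as a first-order Markov model over commands with a validation threshold. This is an illustrative sketch, not the patented implementation; the class name, threshold value, and state labels are assumptions.

```python
# Illustrative sketch (not the patented implementation): an on-line
# first-order Markov model over operator commands. Transition counts are
# updated after each accepted command; the most likely next command can be
# suggested, and a recognized command whose estimated probability falls
# below a threshold is rejected so the operator can be asked to repeat it.
from collections import defaultdict

class CommandPredictor:
    def __init__(self, threshold=0.2):
        self.threshold = threshold  # validation threshold (cf. claim 2)
        # counts[state][command] = times `command` followed `state`
        self.counts = defaultdict(lambda: defaultdict(int))
        self.state = "IDLE"         # current machine state / last command

    def probability(self, command):
        """Estimated P(command | current state) from observed transitions."""
        total = sum(self.counts[self.state].values())
        if total == 0:
            return 1.0              # no history yet: accept anything
        return self.counts[self.state][command] / total

    def suggest_next(self):
        """Command with the highest estimated likelihood (the one a GUI
        would visually highlight as the next likely command)."""
        options = self.counts[self.state]
        return max(options, key=options.get) if options else None

    def validate(self, command):
        """Accept a likely command and update the model; reject an
        unlikely one so the caller can request clarification."""
        if self.probability(command) >= self.threshold:
            self.counts[self.state][command] += 1  # on-line learning update
            self.state = command
            return True
        return False  # caller prompts the operator to repeat the command
```

With no history the predictor accepts every command; once a pattern like "start → stop" has been observed repeatedly, an out-of-pattern command such as "load" after "start" falls below the threshold and is rejected for confirmation.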
Representative Claims
1. A voice actuated system with contextual learning for intelligent machine control, said system comprising: speech recognition system receiving voice inputs and identifying one or more voice commands from said received voice input; command predictor identifying a probability of likeliness of occurrence of said identified one or more voice commands via a statistical likelihood estimation, said command predictor validating, based upon said identified probability of likeliness of occurrence, said identified one or more voice commands for execution in said machine; and command processor receiving and executing in said machine said validated one or more voice commands, wherein a command corresponding to the highest probability of likeliness is visually modified to indicate that it is the next likely command.

2. A voice actuated system with contextual learning for intelligent machine control, as per claim 1, wherein said command predictor validates said identified one or more commands for execution in said machine if said probability of likeliness of occurrence is greater than a threshold probability and said command predictor updates said probability of likeliness of occurrence.

3. A voice actuated system with contextual learning for intelligent machine control, as per claim 2, wherein said speech recognition system requests additional voice inputs for clarification if said probability of likeliness of occurrence of said one or more commands is below said threshold value, and said command predictor validates said one or more commands for execution in said machine upon receiving said additional voice inputs, and said command processor updates said probability of likeliness of occurrence.

4. A voice actuated system with contextual learning for intelligent machine control, as per claim 1, wherein said system further comprises a user interface providing intelligent assistance by revealing a list of probabilities defining the likelihood that one or more commands are next to be executed in said machine.

5. A voice actuated system with contextual learning for intelligent machine control, as per claim 4, wherein said user interface includes displaying: said list of probabilities; one or more parameters associated with said machine; one or more graphs illustrating an output associated with said machine; and one or more commands for controlling said machine.

6. A voice actuated system with contextual learning for intelligent machine control, as per claim 1, wherein said command predictor is based on a statistical Markov model.

7. A voice actuated system with contextual learning for intelligent machine control, as per claim 1, wherein said machine is an intelligent vehicle.

8. A voice actuated system with contextual learning for intelligent machine control, as per claim 7, wherein said intelligent vehicle learns a driver's operating patterns and adjusts vehicle handling and performance based on said learned information.

9. A voice actuated system with contextual learning for intelligent machine control, as per claim 1, wherein said machine is any of the following: testing equipment, programmable device, or programmable industrial instruments.

10. A voice actuated system with contextual learning for intelligent machine control, as per claim 1, wherein said machine is a tensile testing machine.

11. A voice actuated system with contextual learning for intelligent machine control, as per claim 1, wherein, in one mode, said command predictor is disabled.

12. A method for voice actuated contextual learning for intelligent machine control, said machine functionally partitioned into one or more discrete states and associating a present condition of said machine with a current state, said method comprising: receiving one or more voice inputs; identifying one or more commands from said received voice inputs; identifying commands, from said one or more commands, that cause transition between said current state and any of said one or more discrete states and that identify probabilities associated with said transitions; displaying a graphical user interface providing intelligent help by displaying a list of probabilities defining the likelihood that said one or more commands are next to be executed in said machine; validating one or more commands, and executing said validated commands in said machine.

13. A method for voice actuated contextual learning for intelligent machine control, as per claim 12, wherein said method further comprises: checking if each of said identified probabilities is greater than a threshold ‘t’, and if so updating corresponding probabilities and validating corresponding commands for execution in said machine, else requesting another voice input for clarification, and upon clarification, validating corresponding commands, updating corresponding probabilities, and executing said validated one or more commands in said machine.

14. A method for voice actuated contextual learning for intelligent machine control, as per claim 12, wherein said probabilities are Markov chain probabilities.

15. A method for voice actuated contextual learning for intelligent machine control, as per claim 12, wherein said method is used in an industrial setting to: reduce operator fatigue, allow freedom of movement, or assist the physically challenged.

16. A method for voice actuated contextual learning for intelligent machine control, as per claim 12, wherein said machine is a tensile testing machine.

17. A graphical user interface for providing intelligent help in a voice actuated system with contextual learning for intelligent machine control, said interface comprising: a graphical user interface panel displaying various parameters associated with said machine, and a probabilities panel displaying Markov state of said machine and probabilities associated with one or more commands, said probabilities defining the likelihood that said one or more commands are next to be executed.

18. A graphical user interface for providing intelligent help in a voice actuated system with contextual learning for intelligent machine control, as per claim 17, wherein a command corresponding to the highest probability is visually modified to indicate that it is the next likely command.

19. An article of manufacture comprising computer usable medium having computer readable code embodied therein which provides a graphical user interface for providing intelligent help in a voice actuated system with contextual learning for intelligent machine control, said computer readable code comprising: computer readable program code providing a graphical user interface panel displaying various testing parameters and graphs associated with said machine, and computer readable program code providing a probabilities panel displaying Markov state of said machine and probabilities associated with one or more commands, said probabilities defining the likelihood that said one or more commands are next to be executed.

20. A voice actuated intelligent machine control system for a tensile testing machine, said system operable in a plurality of modes, said system comprising: in a first mode, a speech recognition system receiving voice inputs and identifying one or more voice commands from said received voice input to intelligently control specific parts of said tensile testing machine, and a command predictor identifying a probability of likeliness of occurrence of said identified one or more voice commands via a statistical likelihood estimation, said command predictor validating said identified one or more voice commands for execution in said machine; in a second mode, a speech recognition system receiving voice inputs and identifying one or more voice commands from said received voice input to intelligently control specific parts of said tensile testing machine, and a command validator validating said identified one or more voice commands for execution in said machine; in a third mode, an input recognition system receiving inputs and identifying one or more commands from said received input to intelligently control specific parts of said tensile testing machine, and a command predictor identifying a probability of likeliness of occurrence of said identified one or more voice commands via a statistical likelihood estimation, said command predictor validating said identified one or more commands for execution in said machine; and a command processor, in said first, second, and third modes, receiving and executing in said machine said validated one or more commands.

21. A voice actuated intelligent machine control system for a tensile testing machine, said system operable in a plurality of modes, as per claim 20, wherein a command corresponding to the highest probability of likeliness of occurrence is visually modified to indicate that it is the next likely command.

22. A voice actuated intelligent machine control system for an intelligent vehicle, said system operable in a plurality of modes, as per claim 20, wherein a command corresponding to the highest probability of likeliness of occurrence is visually modified to indicate that it is the next likely command.

23. A system for intelligent machine control with contextual learning, said system comprising: interface, said interface receiving inputs and identifying one or more commands from said received inputs; command predictor identifying a probability of likeliness of occurrence of said identified one or more commands via a statistical likelihood estimation and visually modifying a command corresponding to the highest probability to indicate that it is the next likely command, said command predictor validating said identified one or more commands for execution in said machine; and command processor receiving and executing in said machine said validated one or more commands.

24. A voice actuated intelligent machine control system for an intelligent vehicle, said system operable in a plurality of modes, said system comprising: in a first mode, a speech recognition system receiving voice inputs and identifying one or more voice commands from said received voice input to intelligently control specific parts of said intelligent vehicle, and a command predictor identifying a probability of likeliness of occurrence of said identified one or more voice commands via a statistical likelihood estimation, said command predictor validating said identified one or more voice commands for execution in said intelligent vehicle; in a second mode, a speech recognition system receiving voice inputs and identifying one or more voice commands from said received voice input to intelligently control specific parts of said intelligent vehicle, and a command validator validating said identified one or more voice commands for execution in said intelligent vehicle; in a third mode, an input recognition system receiving inputs and identifying one or more commands from said received input to intelligently control specific parts of said intelligent vehicle, and a command predictor identifying a probability of likeliness of occurrence of said identified one or more voice commands via a statistical likelihood estimation, said command predictor validating said identified one or more commands for execution in said intelligent vehicle; and a command processor, in said first, second, and third modes, visually modifying a command corresponding to the highest probability of likeliness to indicate that it is the next likely command and said command processor receiving and executing in said intelligent vehicle said validated one or more commands.
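The probabilities panel recited in claims 4, 5, and 17 displays the machine's Markov state and a likelihood-ordered command list, with the top command visually modified (claims 1 and 18). A minimal sketch of producing such display rows, under assumed names and data shapes, might look like:

```python
# Illustrative sketch of the "probabilities panel" data (claims 4, 5, 17):
# given the current Markov state's transition counts, build the sorted
# probability rows a GUI would display, flagging the most likely next
# command for visual highlighting. Function and parameter names are
# assumptions, not taken from the patent.

def probabilities_panel(state, transition_counts):
    """Return (command, probability, is_highlighted) rows for display,
    sorted from most to least likely next command."""
    counts = transition_counts.get(state, {})
    total = sum(counts.values())
    if total == 0:
        return []  # no history for this state: nothing to display
    rows = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    return [(cmd, n / total, i == 0) for i, (cmd, n) in enumerate(rows)]
```

For example, if the machine is in state "start" and the operator has issued "stop" three times and "pause" once from that state, the panel would list "stop" at 0.75 (highlighted) and "pause" at 0.25.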
Patents cited by this patent (19)
Peck John C. ; Rowland Randy ; Wu Duanpei, Apparatus and method for voice controlled apparel manufacture.
Brecher, Virginia H. (West Cornwall CT); Chou, Paul B.-L. (Montvale NJ); Hall, Robert W. (Jericho VT); Parisi, Debra M. (Carmel NY); Rao, Ravishankar (White Plains NY); Riley, Stuart L. (Colchester VT); Sturzen, Automated defect classification system.
Hatano Yasukichi (Yokohama JPX) Itoh Yoshimasa (Yokohama JPX) Hirose Yoshiyuki (Yokohama JPX) Miyamae Kazuyuki (Kawasaki JPX) Ishikawa Yuichi (Kawasaki JPX), Industrial playback robot having a teaching mode in which teaching data are given by speech.
Colson James Campbell ; Graham Stephen Glen, Method for representing automotive device functionality and software services to applications using JavaBeans.
Beattie Valerie L. ; Miller David R. H. ; Edmondson Shawn Eric ; Patel Yogen N. ; Talvola Geoffrey A., Multi-dialect speech recognition method and apparatus.
Rajasekaran Periagaram K. (Richardson TX) Yoshino Toshiaki (Tokyo JPX), Speaker-independent word recognition method and system based upon zero-crossing rate and energy measurement of analog sp.
Prunotto Gianpaolo (Turin ITX) Prada Marco (Turin ITX), System for creating command and control signals for a complete operating cycle of a robot manipulator device of a sheet.
Mitchell, Dennis B.; Lewis, Dennis G.; Head, James V. W., System, apparatus and method for providing a portable customizable maintenance support computer communications system.
Patents citing this patent
Gruber, Thomas R.; Sabatelli, Alessandro F.; Aybes, Alexandre A.; Pitschel, Donald W.; Voas, Edward D.; Anzures, Freddy A.; Marcos, Paul D., Actionable reminder entries.
Gruber, Thomas Robert; Cheyer, Adam John; Guzzoni, Didier Rene; Brigham, Christopher Dean; Giuli, Richard Donald; Bastea-Forte, Marcello; Saddler, Harry Joseph, Active input elicitation by intelligent automated assistant.
Gruber, Thomas Robert; Sabatelli, Alessandro F.; Aybes, Alexandre A.; Pitschel, Donald W.; Voas, Edward D.; Anzures, Freddy A.; Marcos, Paul D., Active transport based notifications.
Rottler, Benjamin A.; Lindahl, Aram M.; Haughay, Jr., Allen Paul; Ellis, Shawn A.; Wood, Jr., Policarpo Bonilla, Adaptive audio feedback system and method.
Carson, David A.; Keen, Daniel; Dibiase, Evan; Saddler, Harry J.; Iacono, Marco; Lemay, Stephen O.; Pitschel, Donald W.; Gruber, Thomas R., Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant.
Guzzoni, Didier Rene; Cheyer, Adam John; Gruber, Thomas Robert; Brigham, Christopher Dean; Saddler, Harry Joseph, Disambiguation based on active input elicitation by intelligent automated assistant.
Gruber, Thomas Robert; Cheyer, Adam John; Kittlaus, Dag; Guzzoni, Didier Rene; Brigham, Christopher Dean; Giuli, Richard Donald; Bastea-Forte, Marcello; Saddler, Harry Joseph, Intelligent automated assistant.
Os, Marcel Van; Saddler, Harry J.; Napolitano, Lia T.; Russell, Jonathan H.; Lister, Patrick M.; Dasari, Rohit, Intelligent automated assistant for TV user interactions.
Van Os, Marcel; Saddler, Harry J.; Napolitano, Lia T.; Russell, Jonathan H.; Lister, Patrick M.; Dasari, Rohit, Intelligent automated assistant for TV user interactions.
Cheyer, Adam John; Guzzoni, Didier Rene; Gruber, Thomas Robert; Brigham, Christopher Dean, Intent deduction based on previous user interactions with voice assistant.
Cheyer, Adam John; Guzzoni, Didier Rene; Gruber, Thomas Robert; Brigham, Christopher Dean; Kittlaus, Dag, Maintaining context information between user interactions with a voice assistant.
Christie, Gregory N.; Westen, Peter T.; Lemay, Stephen O.; Alfke, Jens, Method and apparatus for displaying information during an instant messaging session.
Coffman, Daniel Mark; Kleindienst, Jan; Ramaswamy, Ganesh N., Method and apparatus for dynamic modification of command weights in a natural language understanding system.
Ramerth, Brent D.; Naik, Devang K.; Davidson, Douglas R.; Dolfing, Jannes G. A.; Pu, Jia, Method for disambiguating multiple readings in language conversion.
Gruber, Thomas Robert; Saddler, Harry Joseph; Cheyer, Adam John; Kittlaus, Dag; Brigham, Christopher Dean; Giuli, Richard Donald; Guzzoni, Didier Rene; Bastea-Forte, Marcello, Paraphrasing of user requests and results by automated digital assistant.
Anzures, Freddy Allen; van Os, Marcel; Lemay, Stephen O.; Matas, Michael, Portable multifunction device, method, and graphical user interface for viewing and managing electronic calendars.
Gruber, Thomas Robert; Cheyer, Adam John; Guzzoni, Didier Rene; Brigham, Christopher Dean; Saddler, Harry Joseph, Prioritizing selection criteria by automated assistant.
Moore, Jennifer Lauren; Naik, Devang K.; Bellegarda, Jerome R.; Aitken, Kevin Bartlett; Silverman, Kim E., Semantic search using a single-source semantic model.
Gagnon, Jean; Roy, Philippe; Lagassey, Paul J., Speech interface system and method for control and interaction with applications on a computing system.
Naik, Devang K.; Gruber, Thomas R.; Weiner, Liam; Binder, Justin G.; Srisuwananukorn, Charles; Evermann, Gunnar; Williams, Shaun Eric; Chen, Hong; Napolitano, Lia T., System and method for user-specified pronunciation of words for speech synthesis and recognition.
Vieri, Riccardo, System to generate and set up an advertising campaign based on the insertion of advertising messages within an exchange of messages, and method to operate said system.
Herman, Kenneth; Rogers, Matthew; James, Bryan, Systems and methods for determining the language to use for speech generated by a text to speech engine.
James, Bryan; Herman, Kenneth; Rogers, Matthew L., Systems and methods for determining the language to use for speech generated by a text to speech engine.
Naik, DeVang; Silverman, Kim; Bellegarda, Jerome, Systems and methods for selective rate of speech and speech preferences for text to speech synthesis.
Silverman, Kim; Naik, Devang; Lenzo, Kevin; Henton, Caroline, Systems and methods of detecting language and natural language strings for text to speech synthesis.
Kalb, Aaron S.; Perry, Ryan P.; Alsina, Thomas Matthieu, Translating phrases from one language into another using an order-based set of declarative rules.
Gruber, Thomas Robert; Brigham, Christopher Dean; Keen, Daniel S.; Novick, Gregory; Phipps, Benjamin S., Using context information to facilitate processing of commands in a virtual assistant.