
Voice actuation with contextual learning for intelligent machine control

IPC Classification Information

Country/Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th edition)
  • G10L-021/06
  • G10L-015/04
  • G10L-015/14
  • G05D-001/00
Application Number: US-0796799 (2001-03-02)
Inventor / Address
  • Sepe, Jr., Raymond
Applicant / Address
  • Electro Standards Laboratories
Attorney / Address
    Lacasse &
Citation Information: Cited by 176 patents; Cites 19 patents

Abstract

An interactive voice actuated control system for a testing machine such as a tensile testing machine is described. Voice commands are passed through a user-command predictor and integrated with a graphical user interface control panel to allow hands-free operation. The user-command predictor learns
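
The abstract above is cut off in this record. Purely as an illustrative sketch of the general idea of a contextual user-command predictor (not the patent's actual method), the Python below ranks likely next commands from previously observed command sequences; the class and method names (CommandPredictor, observe, predict) and the bigram counting scheme are assumptions introduced here.

```python
from collections import Counter, defaultdict


class CommandPredictor:
    """Minimal bigram-style predictor: ranks likely next voice commands
    given the most recently executed command (hypothetical sketch)."""

    def __init__(self):
        # transitions[prev][next] = number of times `next` followed `prev`
        self.transitions = defaultdict(Counter)
        self.last_command = None

    def observe(self, command: str) -> None:
        """Record an executed command so future predictions can use it."""
        if self.last_command is not None:
            self.transitions[self.last_command][command] += 1
        self.last_command = command

    def predict(self, top_k: int = 3) -> list[tuple[str, float]]:
        """Return the top_k most probable next commands with probabilities."""
        counts = self.transitions.get(self.last_command, Counter())
        total = sum(counts.values())
        if total == 0:
            return []
        return [(cmd, n / total) for cmd, n in counts.most_common(top_k)]


if __name__ == "__main__":
    predictor = CommandPredictor()
    # Hypothetical tensile-test command sequence used only for illustration.
    for cmd in ["zero load", "start test", "stop test", "zero load", "start test"]:
        predictor.observe(cmd)
    predictor.observe("zero load")
    print(predictor.predict())  # e.g. [('start test', 1.0)]
```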

Representative Claim

1. A voice actuated system with contextual learning for intelligent machine control, said system comprising: speech recognition system receiving voice inputs and identifying one or more voice commands from said received voice input; command predictor identifying a probability of likeliness of occurrence
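
The claim text is also truncated in this record. As a further hedged sketch (hypothetical names, arbitrary weights, and not the claim's actual implementation), combining recognizer confidences with the predictor's contextual probabilities to select the most likely intended command might look like:

```python
def select_command(recognizer_hypotheses: dict, predictor_probs: dict):
    """Pick the command whose combined recognizer confidence and contextual
    prior is highest (illustrative only; the 0.7/0.3 weights are assumptions)."""
    best_cmd, best_score = None, float("-inf")
    for cmd, acoustic_conf in recognizer_hypotheses.items():
        prior = predictor_probs.get(cmd, 0.01)  # small floor for unseen commands
        score = 0.7 * acoustic_conf + 0.3 * prior
        if score > best_score:
            best_cmd, best_score = cmd, score
    return best_cmd, best_score


# Example: two acoustically similar hypotheses; the contextual prior breaks the tie.
hypotheses = {"start test": 0.55, "stop test": 0.53}
context_prior = {"start test": 0.9, "stop test": 0.05}
print(select_command(hypotheses, context_prior))  # ('start test', 0.655)
```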

Patents cited by this patent (19)

  1. Peck John C. ; Rowland Randy ; Wu Duanpei, Apparatus and method for voice controlled apparel manufacture.
  2. Brecher Virginia H. (West Cornwall CT) Chou Paul B.-L ; (Montvale NJ) Hall Robert W. (Jericho VT) Parisi Debra M. (Carmel NY) Rao Ravishankar (White Plains NY) Riley Stuart L. (Colchester VT) Sturzen, Automated defect classification system.
  3. Hoffberg Steven M. ; Hoffberg-Borghesani Linda I., Human factored interface incorporating adaptive pattern recognition based controller apparatus.
  4. Hatano Yasukichi (Yokohama JPX) Itoh Yoshimasa (Yokohama JPX) Hirose Yoshiyuki (Yokohama JPX) Miyamae Kazuyuki (Kawasaki JPX) Ishikawa Yuichi (Kawasaki JPX), Industrial playback robot having a teaching mode in which teaching data are given by speech.
  5. Bellegarda Jerome R. ; Silverman Kim E. A., Method and apparatus for command recognition using data-driven semantic inference.
  6. Gupta Vishwa,CAX, Method and apparatus for generating an a priori advisor for a speech recognition dictionary.
  7. Nakano Fumio (Tokyo JPX), Method and system for controlling an external machine by a voice command.
  8. Armstrong ; III John (Cambridge MA), Method for organizing incremental search dictionary.
  9. Colson James Campbell ; Graham Stephen Glen, Method for representing automotive device functionality and software services to applications using JavaBeans.
  10. Hoffberg Steven M. ; Hoffberg-Borghesani Linda I., Morphological pattern recognition based controller system.
  11. Beattie Valerie L. ; Miller David R. H. ; Edmondson Shawn Eric ; Patel Yogen N. ; Talvola Geoffrey A., Multi-dialect speech recognition method and apparatus.
  12. Chou Wu (Piscataway NJ) Juang Biing-Hwang (Warren NJ), Recognition unit model training based on competing word and word string models.
  13. Rajasekaran Periagaram K. (Richardson TX) Yoshino Toshiaki (Tokyo JPX), Speaker-independent word recognition method and system based upon zero-crossing rate and energy measurement of analog sp.
  14. Dreyfus Jean Albert (5 Avenue de la Grenade Geneva CH), Speech recognition device for controlling a machine.
  15. Prunotto Gianpaolo (Turin ITX) Prada Marco (Turin ITX), System for creating command and control signals for a complete operating cycle of a robot manipulator device of a sheet.
  16. Mitchell, Dennis B.; Lewis, Dennis G.; Head, James V. W., System, apparatus and method for providing a portable customizable maintenance support computer communications system.
  17. Johnstone Richard (Brookfield WI) Kirkham Edward E. (Brookfield WI), Voice actuated machine control.
  18. Hansen Per K. (Burlington VT), Voice control system.
  19. Porter Edward W. (Boston MA), Voice recognition system.

Patents citing this patent (176)

  1. Gruber, Thomas R.; Sabatelli, Alessandro F.; Aybes, Alexandre A.; Pitschel, Donald W.; Voas, Edward D.; Anzures, Freddy A.; Marcos, Paul D., Actionable reminder entries.
  2. Gruber, Thomas Robert; Cheyer, Adam John; Guzzoni, Didier Rene; Brigham, Christopher Dean; Giuli, Richard Donald; Bastea-Forte, Marcello; Saddler, Harry Joseph, Active input elicitation by intelligent automated assistant.
  3. Gruber, Thomas Robert; Sabatelli, Alessandro F.; Aybes, Alexandre A.; Pitschel, Donald W.; Voas, Edward D.; Anzures, Freddy A.; Marcos, Paul D., Active transport based notifications.
  4. Rottler, Benjamin A.; Lindahl, Aram M.; Haughay, Jr., Allen Paul; Ellis, Shawn A.; Wood, Jr., Policarpo Bonilla, Adaptive audio feedback system and method.
  5. Mason, Henry, Analyzing audio input for efficient speech and music recognition.
  6. Huang, Rongqing; Oparin, Ilya, Applying neural network language models to weighted finite state transducers for automatic speech recognition.
  7. Bull, William; Rottler, Ben; Schiller, Jonathan A., Audio user interface.
  8. Rottler, Benjamin; Rogers, Matthew; James, Bryan J.; Wood, Policarpo; Hannon, Timothy, Audio user interface for displayless electronic device.
  9. Huppi, Brian Q.; Fadell, Anthony M.; Barrentine, Derek B.; Freeman, Daniel B., Automated response to and sensing of user activity in portable devices.
  10. Huppi, Brian Q.; Fadell, Anthony M.; Barrentine, Derek B.; Freeman, Daniel B., Automated response to and sensing of user activity in portable devices.
  11. Huppi, Brian; Fadell, Anthony M.; Barrentine, Derek; Freeman, Daniel, Automated response to and sensing of user activity in portable devices.
  12. Huppi, Brian; Fadell, Anthony M.; Barrentine, Derek; Freeman, Daniel, Automated response to and sensing of user activity in portable devices.
  13. Nallasamy, Udhyakumar; Kajarekar, Sachin S.; Paulik, Matthias; Seigel, Matthew, Automatic accent detection using acoustic models.
  14. Davidson, Douglas R.; Ozer, Ali, Automatic language identification for dynamic text processing.
  15. Komer, Joseph L.; Gepner, Joseph E.; Sherwood, Charles Gregory, Automatic speech recognition system and method for aircraft.
  16. Komer,Joseph L.; Gepner,Joseph E.; Sherwood,Charles Gregory, Automatic speech recognition system and method for aircraft.
  17. Winer, Morgan, Automatic supplementation of word correction dictionaries.
  18. Giuli, Richard D.; Treadgold, Nicholas K., Better resolution when referencing to concepts.
  19. Giuli, Richard D.; Treadgold, Nicholas K., Better resolution when referencing to concepts.
  20. Naik, Devang K.; Mohamed, Ali S.; Chen, Hong M., Caching apparatus for serving phonetic pronunciations.
  21. Bellegarda, Jerome R., Combined statistical and rule-based part-of-speech tagging for text-to-speech synthesis.
  22. Newendorp, Brandon J.; Dibiase, Evan S., Competing devices responding to voice triggers.
  23. Bellegarda, Jerome, Context-aware unit selection.
  24. Williams, Shaun E.; Mason, Henry G.; Krishnamoorthy, Mahesh; Paulik, Matthias; Agrawal, Neha; Kajarekar, Sachin S.; Uguroglu, Selen; Mohamed, Ali S., Context-based endpoint detection.
  25. Gruber, Thomas R.; Pitschel, Donald W., Context-sensitive handling of interruptions.
  26. Larson, Anthony L.; Dave, Swapnil R.; Varoglu, Devrim, Context-sensitive handling of interruptions.
  27. van Os, Marcel, Context-sensitive handling of interruptions by intelligent digital assistant.
  28. Gruber, Thomas R.; Cheyer, Adam J.; Pitschel, Donald W., Crowd sourcing information to fulfill user requests.
  29. Gruber, Thomas R.; Cheyer, Adam John; Pitschel, Donald W., Crowd sourcing information to fulfill user requests.
  30. Wadycki, Andrew; Douglas, Jason, Customized search or acquisition of digital media assets.
  31. Rhoten, George; Treadgold, Nicholas K., Determining domain salience ranking from ambiguous words in natural speech.
  32. Cheyer, Adam John; Brigham, Christopher Dean; Guzzoni, Didier Rene, Determining user intent based on ontologies of domains.
  33. Cheyer, Adam J., Device access using voice authentication.
  34. Cheyer, Adam John, Device access using voice authentication.
  35. Gronbach, Hans, Device for operating an automated machine for handling, assembling or machining workpieces.
  36. Piernot, Philippe P.; Binder, Justin G., Device voice control for selecting a displayed affordance.
  37. Carson, David A.; Keen, Daniel; Dibiase, Evan; Saddler, Harry J.; Iacono, Marco; Lemay, Stephen O.; Pitschel, Donald W.; Gruber, Thomas R., Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant.
  38. Fleizach, Christopher Brian; Gruber, Thomas Robert, Device, method, and user interface for voice-activated navigation and browsing of a document.
  39. Lindahl, Aram; Wood, Policarpo, Devices and methods for identifying a prompt corresponding to a voice input in a sequence of prompts.
  40. Lindahl, Aram; Wood, Policarpo, Devices and methods for identifying a prompt corresponding to a voice input in a sequence of prompts.
  41. Raitio, Tuomo J.; Hunt, Melvyn J.; Richards, Hywel B.; Chinthakunta, Madhusudan, Digital assistant providing whispered speech.
  42. Henton, Caroline; Naik, Devang, Disambiguating heteronyms in speech synthesis.
  43. Guzzoni, Didier Rene; Cheyer, Adam John; Gruber, Thomas Robert; Brigham, Christopher Dean; Saddler, Harry Joseph, Disambiguation based on active input elicitation by intelligent automated assistant.
  44. Wagner, Oliver P., Electronic device with text error correction based on voice recognition data.
  45. Wagner, Oliver P., Electronic device with text error correction based on voice recognition data.
  46. Lindahl, Aram M., Electronic devices with voice command and contextual data processing capabilities.
  47. Lindahl, Aram M., Electronic devices with voice command and contextual data processing capabilities.
  48. Lindahl, Aram M., Electronic devices with voice command and contextual data processing capabilities.
  49. Lindahl, Aram M., Electronic devices with voice command and contextual data processing capabilities.
  50. Lindahl, Aram M., Electronic devices with voice command and contextual data processing capabilities.
  51. Bellegarda, Jerome R., Entropy-guided text prediction using combined word and character n-gram language models.
  52. Bellegarda, Jerome, Exemplar-based latent perceptual modeling for automatic speech recognition.
  53. Futrell, Richard L.; Gruber, Thomas R., Exemplar-based natural language processing.
  54. Futrell, Richard L.; Gruber, Thomas R., Exemplar-based natural language processing.
  55. Bellegarda, Jerome R.; Silverman, Kim E. A., Fast, language-independent method for user authentication by voice.
  56. Bellegarda, Jerome R.; Silverman, Kim E. A., Fast, language-independent method for user authentication by voice.
  57. Gruber, Thomas R.; Sabatelli, Alessandro F.; Pitschel, Donald W., Generating and processing task items that represent tasks to perform.
  58. Fleizach, Christopher Brian; Minifie, Darren C., Handling speech synthesis of content for multiple languages.
  59. Foo, Edwin W.; Hughes, Gregory F., Hearing assistance system for providing consistent human speech.
  60. Orr, Ryan M.; Nell, Garett R.; Brumbaugh, Benjamin L., Intelligent assistant for home automation.
  61. Gruber, Thomas Robert; Cheyer, Adam John; Kittlaus, Dag; Guzzoni, Didier Rene; Brigham, Christopher Dean; Giuli, Richard Donald; Bastea-Forte, Marcello; Saddler, Harry Joseph, Intelligent automated assistant.
  62. Gruber, Thomas Robert; Cheyer, Adam John; Kittlaus, Dag; Guzzoni, Didier Rene; Brigham, Christopher Dean; Giuli, Richard Donald; Bastea-Forte, Marcello; Saddler, Harry Joseph, Intelligent automated assistant.
  63. Saddler, Harry J.; Piercy, Aimee T.; Weinberg, Garrett L.; Booker, Susan L., Intelligent automated assistant.
  64. Os, Marcel Van; Saddler, Harry J.; Napolitano, Lia T.; Russell, Jonathan H.; Lister, Patrick M.; Dasari, Rohit, Intelligent automated assistant for TV user interactions.
  65. Van Os, Marcel; Saddler, Harry J.; Napolitano, Lia T.; Russell, Jonathan H.; Lister, Patrick M.; Dasari, Rohit, Intelligent automated assistant for TV user interactions.
  66. Orr, Ryan M.; Bernardo, Matthew P.; Mandel, Daniel J., Intelligent automated assistant for media exploration.
  67. Piersol, Kurt W.; Orr, Ryan M.; Mandel, Daniel J., Intelligent device arbitration and control.
  68. Booker, Susan L.; Krishnan, Murali; Weinberg, Garrett L.; Piercy, Aimee, Intelligent list reading.
  69. Fleizach, Christopher Brian; Hudson, Reginald Dean, Intelligent text-to-speech conversion.
  70. Fleizach, Christopher Brian; Hudson, Reginald Dean, Intelligent text-to-speech conversion.
  71. Fleizach, Christopher Brian; Hudson, Reginald Dean, Intelligent text-to-speech conversion.
  72. Cheyer, Adam John; Guzzoni, Didier Rene; Gruber, Thomas Robert; Brigham, Christopher Dean, Intent deduction based on previous user interactions with voice assistant.
  73. Lemay, Stephen O.; Sabatelli, Alessandro Francesco; Anzures, Freddy Allen; Chaudhri, Imran; Forstall, Scott; Novick, Gregory, Interface for a virtual digital assistant.
  74. Cash, Jesse R.; Dave, Swapnil R.; Varoglu, Devrim, Interpreting and acting upon commands that involve sharing information with remote devices.
  75. Bellegarda, Jerome R.; Barman, Bishal, Language identification from short strings.
  76. Hatori, Jun; Yu, Dominic, Language input correction.
  77. Kida, Yasuo; Kocienda, Ken; Furches, Elizabeth Caroline, Language input interface on a device.
  78. Kida, Yasuo; Kocienda, Kenneth; Cranfill, Elizabeth Caroline Furches, Language input interface on a device.
  79. Cheyer, Adam John; Guzzoni, Didier Rene; Gruber, Thomas Robert; Brigham, Christopher Dean; Kittlaus, Dag, Maintaining context information between user interactions with a voice assistant.
  80. Cheyer, Adam; Guzzoni, Didier, Method and apparatus for building an intelligent automated assistant.
  81. Cheyer, Adam; Guzzoni, Didier, Method and apparatus for building an intelligent automated assistant.
  82. Paulik, Matthias; Evermann, Gunnar; Gillick, Laurence S., Method and apparatus for discovering trending terms in speech requests.
  83. Christie, Gregory N.; Westen, Peter T.; Lemay, Stephen O.; Alfke, Jens, Method and apparatus for displaying information during an instant messaging session.
  84. Coffman, Daniel Mark; Kleindienst, Jan; Ramaswamy, Ganesh N., Method and apparatus for dynamic modification of command weights in a natural language understanding system.
  85. Cheyer, Adam, Method and apparatus for searching using an active ontology.
  86. Freeman, Daniel; Barrentine, Derek B., Method and system for operating a multi-function portable electronic device using voice-activation.
  87. Ramerth, Brent D.; Naik, Devang K.; Davidson, Douglas R.; Dolfing, Jannes G. A.; Pu, Jia, Method for disambiguating multiple readings in language conversion.
  88. Paulik, Matthias; Huang, Rongqing, Method for supporting dynamic grammars in WFST-based ASR.
  89. Lee, Michael M., Methods and apparatus for altering audio output signals.
  90. Bellegarda, Jerome R., Methods and apparatuses for automatic speech recognition.
  91. Mercer, Paul, Methods and apparatuses for display and traversing of links in page character array.
  92. Lee, Michael M.; Gregg, Justin; Seguin, Chad G., Mobile device having human language translation capability with positional feedback.
  93. Lee, Michael M.; Gregg, Justin; Seguin, Chad G., Mobile device having human language translation capability with positional feedback.
  94. Shin, Jong-Ho; Kwak, Jae-Do; Youn, Jong-Keun, Mobile terminal and menu control method thereof.
  95. Shin, Jong-Ho; Kwak, Jae-Do; Youn, Jong-Keun, Mobile terminal and menu control method thereof.
  96. Connor, Robert A., Morphing text by splicing end-compatible segments.
  97. Gruber, Thomas R.; Saddler, Harry J.; Bellegarda, Jerome Rene; Nyeggen, Bryce H.; Sabatelli, Alessandro, Multi-command single utterance input method.
  98. Mason, James Eric; Boettcher, Jesse, Multi-tiered voice feedback in an electronic device.
  99. Mason, James Eric; Boettcher, Jesse, Multi-tiered voice feedback in an electronic device.
  100. Bellegarda, Jerome R.; Davidson, Douglas R., Multilingual word prediction.
  101. Naik, Devang K., Name recognition system.
  102. Naik, Devang K., Name recognition system.
  103. Lindahl, Aram; Williams, Joseph M.; Klimanis, Gints Valdis, Noise profile determination for voice-related feature.
  104. Gruber, Thomas Robert; Saddler, Harry Joseph; Cheyer, Adam John; Kittlaus, Dag; Brigham, Christopher Dean; Giuli, Richard Donald; Guzzoni, Didier Rene; Bastea-Forte, Marcello, Paraphrasing of user requests and results by automated digital assistant.
  105. Bellegarda, Jerome R., Parsimonious continuous-space phrase representations for natural language processing.
  106. Bellegarda, Jerome R.; Yaman, Sibel, Parsimonious handling of word inflection via categorical stem + suffix N-gram language models.
  107. Bellegarda, Jerome, Part-of-speech tagging using latent analogy.
  108. Chen, Lik Harry; Cheyer, Adam John; Guzzoni, Didier Rene; Gruber, Thomas Robert, Personalized vocabulary for digital assistant.
  109. Anzures, Freddy Allen; van Os, Marcel; Lemay, Stephen O.; Matas, Michael, Portable multifunction device, method, and graphical user interface for viewing and managing electronic calendars.
  110. Wang, Xin; Ramerth, Brent D., Predictive conversion of language input.
  111. Dolfing, Jannes; Ramerth, Brent; Davidson, Douglas; Bellegarda, Jerome; Moore, Jennifer; Eminidis, Andreas; Shaffer, Joshua, Predictive text input.
  112. Gruber, Thomas Robert; Cheyer, Adam John; Guzzoni, Didier Rene; Brigham, Christopher Dean; Saddler, Harry Joseph, Prioritizing selection criteria by automated assistant.
  113. Paulik, Matthias; Mason, Henry G.; Seigel, Matthew S., Privacy preserving distributed evaluation framework for embedded personalized systems.
  114. Martel, Mathieu Jean; Deniau, Thomas, Proactive assistance based on dialog communication between devices.
  115. Kim, Yoon, Providing an indication of the suitability of speech recognition.
  116. Yanagihara, Kazuhisa, Providing text input using speech data and non-speech data.
  117. Yanagihara, Kazuhisa, Providing text input using speech data and non-speech data.
  118. Piernot, Philippe P.; Binder, Justin G., Reducing the need for manual start/end-pointing and trigger phrases.
  119. Volkert, Christopher, Search assistant for digital media assets.
  120. Volkert, Christopher, Search assistant for digital media assets.
  121. Volkert, Christopher, Search assistant for digital media assets.
  122. Volkert, Christopher, Search assistant for digital media assets.
  123. Moore, Jennifer Lauren; Naik, Devang K.; Bellegarda, Jerome R.; Aitken, Kevin Bartlett; Silverman, Kim E., Semantic search using a single-source semantic model.
  124. Bellegarda, Jerome, Sentiment prediction from textual data.
  125. Cheyer, Adam John; Guzzoni, Didier Rene; Gruber, Thomas Robert; Brigham, Christopher Dean, Service orchestration for intelligent automated assistant.
  126. Naik, Devang K.; Piernot, Philippe P., Social reminders.
  127. Naik, Devang K.; Piernot, Philippe P., Social reminders.
  128. Kim, Yoon; Kajarekar, Sachin S., Speaker identification and unsupervised speaker adaptation techniques.
  129. Gagnon, Jean; Roy, Philippe; Lagassey, Paul J., Speech interface system and method for control and interaction with applications on a computing system.
  130. Shapiro, Geoffrey A., Speech recognition for avionic systems.
  131. Hunt, Melvyn; Bridle, John, Speech recognition involving a mobile device.
  132. Chen, Lik Harry, Speech recognition repair using contextual information.
  133. Yamamoto, Hiroki; Yamada, Masayuki, State output probability calculating method and apparatus for mixture distribution HMM.
  134. Sumner, Michael R.; Newendorp, Brandon J.; Orr, Ryan M., Structured dictation using intelligent automated assistants.
  135. Sinha, Anoop K., System and method for detecting errors in interactions with a voice-based digital assistant.
  136. Roberts, Andrew J.; Martin, David L.; Saddler, Harry J., System and method for emergency calls initiated by voice command.
  137. Evermann, Gunnar, System and method for inferring user intent from speech inputs.
  138. Naik, Devang K.; Tackin, Onur E., System and method for updating an adaptive speech recognition model.
  139. Naik, Devang K.; Gruber, Thomas R.; Weiner, Liam; Binder, Justin G.; Srisuwananukorn, Charles; Evermann, Gunnar; Williams, Shaun Eric; Chen, Hong; Napolitano, Lia T., System and method for user-specified pronunciation of words for speech synthesis and recognition.
  140. Naik, Devang K.; Gruber, Thomas R.; Weiner, Liam; Binder, Justin G.; Srisuwananukorn, Charles; Evermann, Gunnar; Williams, Shaun Eric; Chen, Hong; Napolitano, Lia T., System and method for user-specified pronunciation of words for speech synthesis and recognition.
  141. Vieri, Riccardo, System to generate and set up an advertising campaign based on the insertion of advertising messages within an exchange of messages, and method to operate said system.
  142. Vieri, Riccardo, System to generate and set up an advertising campaign based on the insertion of advertising messages within an exchange of messages, and method to operate said system.
  143. Rogers, Matthew; Silverman, Kim; Naik, Devang; Rottler, Benjamin, Systems and methods for concatenation of words in text to speech synthesis.
  144. Herman, Kenneth; Rogers, Matthew; James, Bryan, Systems and methods for determining the language to use for speech generated by a text to speech engine.
  145. James, Bryan; Herman, Kenneth; Rogers, Matthew L., Systems and methods for determining the language to use for speech generated by a text to speech engine.
  146. Naik, Devang K., Systems and methods for name pronunciation.
  147. Keen, Daniel, Systems and methods for recognizing textual identifiers within a plurality of words.
  148. Naik, DeVang; Silverman, Kim; Bellegarda, Jerome, Systems and methods for selective rate of speech and speech preferences for text to speech synthesis.
  149. Bellegarda, Jerome; Naik, Devang; Silverman, Kim, Systems and methods for selective text to speech synthesis.
  150. Bellegarda, Jerome R.; Yaman, Sibel, Systems and methods for structured stem and suffix language models.
  151. Silverman, Kim; Naik, Devang; Bellegarda, Jerome; Lenzo, Kevin, Systems and methods for text normalization for text to speech synthesis.
  152. Rogers, Matthew; Silverman, Kim; Naik, Devang; Lenzo, Kevin; Rottler, Benjamin, Systems and methods for text to speech synthesis.
  153. Silverman, Kim; Naik, Devang; Lenzo, Kevin; Henton, Caroline, Systems and methods of detecting language and natural language strings for text to speech synthesis.
  154. Neels, Alice E.; Jong, Nicholas K., Text correction processing.
  155. Willmore, Christopher P.; Jong, Nicholas K.; Hogg, Justin S., Text prediction using combined word N-gram and unigram language models.
  156. Vieri, Riccardo; Vieri, Flavio, Text to speech conversion of text messages from mobile communication devices.
  157. Vieri, Riccardo; Vieri, Flavio, Text to speech conversion of text messages from mobile communication devices.
  158. Pitschel, Donald W.; Cheyer, Adam J.; Brigham, Christopher D.; Gruber, Thomas R., Training an at least partial voice command system.
  159. Kalb, Aaron S.; Perry, Ryan P.; Alsina, Thomas Matthieu, Translating phrases from one language into another using an order-based set of declarative rules.
  160. Bellegarda, Jerome R., Unified ranking with entropy-weighted information for phrase-based semantic auto-completion.
  161. Raitio, Tuomo J.; Prahallad, Kishore Sunkeswari; Conkie, Alistair D.; Golipour, Ladan; Winarsky, David A., Unit-selection text-to-speech synthesis based on predicted concatenation parameters.
  162. Jeon, Woojay, Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks.
  163. Bellegarda, Jerome R., Unsupervised document clustering using latent semantic density analysis.
  164. Haughay, Allen P., User profiling for selecting user specific voice input processing information.
  165. Haughay, Allen P., User profiling for voice input processing.
  166. Haughay, Allen P., User profiling for voice input processing.
  167. Haughay, Allen P., User profiling for voice input processing.
  168. Lindahl, Aram; Paquier, Baptiste Pierre, User-specific noise suppression for voice quality improvements.
  169. Gruber, Thomas Robert; Brigham, Christopher Dean; Keen, Daniel S.; Novick, Gregory; Phipps, Benjamin S., Using context information to facilitate processing of commands in a virtual assistant.
  170. Gruber, Thomas Robert; Cheyer, Adam John; Guzzoni, Didier Rene, Using event alert text as input to an automated assistant.
  171. Lemay, Stephen O.; Newendorp, Brandon J.; Dascola, Jonathan R., Virtual assistant activation.
  172. Fleizach, Christopher B., Voice control to diagnose inadvertent activation of accessibility features.
  173. Binder, Justin; Post, Samuel D.; Tackin, Onur; Gruber, Thomas R., Voice trigger for a digital assistant.
  174. Badaskar, Sameer, Voice-based media searching.
  175. Badaskar, Sameer, Voice-based media searching.
  176. Klein, Christian; Wygonik, Gregg, Voice-command suggestions based on user identity.