

Application of personality models and interaction with synthetic characters in a computing system

IPC Classification

Country/Status: United States (US) Patent, granted
International Patent Classification (IPC, 7th ed.)
  • G06F-015/18
Application number: US-0476982 (1999-12-31)
Inventor
  • Morris, Tonia G.
Applicant
  • Intel Corporation
Attorney/Agent
    Blakely, Sokoloff, Taylor & Zafman LLP
Citation information: cited by 235 patents; cites 9 patents

Abstract

An apparatus includes a video input unit and an audio input unit. The apparatus also includes a multi-sensor fusion/recognition unit coupled to the video input unit and the audio input unit, and a processor coupled to the multi-sensor fusion/recognition unit. The multi-sensor fusion/recognition unit
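The abstract describes an architecture in which a video input unit and an audio input unit both feed a multi-sensor fusion/recognition unit, whose output is consumed by a processor. A minimal sketch of that coupling is below; the class and method names are illustrative assumptions, not terms taken from the patent, and the fusion logic is a placeholder for whatever recognition the patented unit performs.

```python
from dataclasses import dataclass


@dataclass
class VideoFrame:
    pixels: list  # placeholder for raw image data from the video input unit


@dataclass
class AudioChunk:
    samples: list  # placeholder for raw samples from the audio input unit


class MultiSensorFusionUnit:
    """Combines synchronized video and audio observations into one event."""

    def fuse(self, frame: VideoFrame, chunk: AudioChunk) -> dict:
        # A real unit would run recognition models over both modalities;
        # here we only record which modalities contributed.
        return {
            "has_video": len(frame.pixels) > 0,
            "has_audio": len(chunk.samples) > 0,
        }


class Processor:
    """Consumes fused observations, e.g. to drive a synthetic character."""

    def handle(self, fused: dict) -> str:
        if fused["has_video"] and fused["has_audio"]:
            return "audiovisual event"
        if fused["has_video"]:
            return "video-only event"
        if fused["has_audio"]:
            return "audio-only event"
        return "no input"


fusion = MultiSensorFusionUnit()
processor = Processor()
result = processor.handle(fusion.fuse(VideoFrame([1, 2]), AudioChunk([0.5])))
print(result)  # audiovisual event
```

The point of the sketch is only the data flow the abstract names: both input units are coupled to the fusion unit, and the processor is coupled to the fusion unit rather than to the raw inputs.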


Patents cited by this patent (9)

  1. Mizokawa Takashi,JPX, Control system for controlling object using pseudo-emotions and pseudo-personality generated in the object.
  2. Mizokawa Takashi,JPX, Control system for controlling object using pseudo-emotions and pseudo-personality generated in the object.
  3. Takashi Mizokawa JP, Control system for controlling object using pseudo-emotions and pseudo-personality generated in the object.
  4. Horvitz Eric ; Breese John S. ; Heckerman David E. ; Hobson Samuel D. ; Hovel David O. ; Klein Adrian C. ; Rommelse Jacobus A.,NLX ; Shaw Gregory L., Intelligent user assistance facility.
  5. Eves David A.,GB2 ; Allen Richard J.,GB2, Interactive audio entertainment apparatus.
  6. Breese John S. ; Ball John Eugene, Modeling a user's emotion and personality in a computer user interface.
  7. Ball John Eugene ; Breese John S., Modeling and projecting emotion and personality from a computer user interface.
  8. Breese John S. ; Ball John Eugene, Modeling emotion and personality in a computer user interface.
  9. Chen Homer H. (Lincroft NJ), Sound-synchronized video system.

Patents citing this patent (235)

  1. Gruber, Thomas R.; Sabatelli, Alessandro F.; Aybes, Alexandre A.; Pitschel, Donald W.; Voas, Edward D.; Anzures, Freddy A.; Marcos, Paul D., Actionable reminder entries.
  2. Gruber, Thomas Robert; Cheyer, Adam John; Guzzoni, Didier Rene; Brigham, Christopher Dean; Giuli, Richard Donald; Bastea-Forte, Marcello; Saddler, Harry Joseph, Active input elicitation by intelligent automated assistant.
  3. Gruber, Thomas Robert; Sabatelli, Alessandro F.; Aybes, Alexandre A.; Pitschel, Donald W.; Voas, Edward D.; Anzures, Freddy A.; Marcos, Paul D., Active transport based notifications.
  4. Rottler, Benjamin A.; Lindahl, Aram M.; Haughay, Jr., Allen Paul; Ellis, Shawn A.; Wood, Jr., Policarpo Bonilla, Adaptive audio feedback system and method.
  5. Hoffberg, Steven M.; Hoffberg-Borghesani, Linda I., Adaptive pattern recognition based controller apparatus and method and human-interface therefore.
  6. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Adding information or functionality to a rendered document via association with an electronic counterpart.
  7. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Adding value to a rendered document.
  8. Fagundes, Luciano Godoy; Moran, Thomas; Desai, Dhaval; Kohler, Joylee; Michaelis, Paul Roller, Agent matching based on video analysis of customer presentation.
  9. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Aggregate analysis of text captures performed by multiple users from rendered documents.
  10. Mason, Henry, Analyzing audio input for efficient speech and music recognition.
  11. Huang, Rongqing; Oparin, Ilya, Applying neural network language models to weighted finite state transducers for automatic speech recognition.
  12. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Applying scanned information to identify content.
  13. King, Martin; Grover, Dale; Kushler, Clifford; Stafford-Fraser, James; Mannby, Claes-Fredrik, Archive of text captures from rendered documents.
  15. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford Fraser, James Q., Archive of text captures from rendered documents.
  15. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Association of a portable scanner with input/output and storage devices.
  16. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Association of a portable scanner with input/output and storage devices.
  17. Bull, William; Rottler, Ben; Schiller, Jonathan A., Audio user interface.
  18. Rottler, Benjamin; Rogers, Matthew; James, Bryan J.; Wood, Policarpo; Hannon, Timothy, Audio user interface for displayless electronic device.
  19. Huppi, Brian Q.; Fadell, Anthony M.; Barrentine, Derek B.; Freeman, Daniel B., Automated response to and sensing of user activity in portable devices.
  20. Huppi, Brian Q.; Fadell, Anthony M.; Barrentine, Derek B.; Freeman, Daniel B., Automated response to and sensing of user activity in portable devices.
  21. Huppi, Brian; Fadell, Anthony M.; Barrentine, Derek; Freeman, Daniel, Automated response to and sensing of user activity in portable devices.
  22. Huppi, Brian; Fadell, Anthony M.; Barrentine, Derek; Freeman, Daniel, Automated response to and sensing of user activity in portable devices.
  23. Nallasamy, Udhyakumar; Kajarekar, Sachin S.; Paulik, Matthias; Seigel, Matthew, Automatic accent detection using acoustic models.
  24. Davidson, Douglas R.; Ozer, Ali, Automatic language identification for dynamic text processing.
  25. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Automatic modification of web pages.
  26. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Automatic modification of web pages.
  27. King, Martin T.; Kushler, Clifford A.; Stafford-Fraser, James Quentin, Automatic modification of web pages.
  28. Winer, Morgan, Automatic supplementation of word correction dictionaries.
  29. King, Martin T.; Stephens, Redwood; Mannby, Claes-Fredrik; Peterson, Jesse; Sanvitale, Mark; Smith, Michael J., Automatically capturing information, such as capturing information using a document-aware device.
  30. King, Martin T.; Stephens, Redwood; Mannby, Claes-Fredrik; Peterson, Jesse; Sanvitale, Mark; Smith, Michael J.; Daley-Watson, Christopher J., Automatically providing content associated with captured information, such as information captured in real-time.
  31. Giuli, Richard D.; Treadgold, Nicholas K., Better resolution when referencing to concepts.
  32. Giuli, Richard D.; Treadgold, Nicholas K., Better resolution when referencing to concepts.
  33. Naik, Devang K.; Mohamed, Ali S.; Chen, Hong M., Caching apparatus for serving phonetic pronunciations.
  34. King, Martin Towle; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Quentin, Capturing text from rendered documents using supplement information.
  35. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford Fraser, James Q., Capturing text from rendered documents using supplemental information.
  36. King, Martin Towle; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Quentin, Capturing text from rendered documents using supplemental information.
  37. Bellegarda, Jerome R., Combined statistical and rule-based part-of-speech tagging for text-to-speech synthesis.
  38. Newendorp, Brandon J.; Dibiase, Evan S., Competing devices responding to voice triggers.
  39. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford Fraser, James Q., Content access with handheld document data capture devices.
  40. Bellegarda, Jerome, Context-aware unit selection.
  41. Williams, Shaun E.; Mason, Henry G.; Krishnamoorthy, Mahesh; Paulik, Matthias; Agrawal, Neha; Kajarekar, Sachin S.; Uguroglu, Selen; Mohamed, Ali S., Context-based endpoint detection.
  42. Gruber, Thomas R.; Pitschel, Donald W., Context-sensitive handling of interruptions.
  43. Larson, Anthony L.; Dave, Swapnil R.; Varoglu, Devrim, Context-sensitive handling of interruptions.
  44. van Os, Marcel, Context-sensitive handling of interruptions by intelligent digital assistant.
  45. Seshadri, Nambi, Correlating video images of lip movements with audio signals to improve speech recognition.
  46. Gruber, Thomas R.; Cheyer, Adam J.; Pitschel, Donald W., Crowd sourcing information to fulfill user requests.
  47. Gruber, Thomas R.; Cheyer, Adam John; Pitschel, Donald W., Crowd sourcing information to fulfill user requests.
  48. Wadycki, Andrew; Douglas, Jason, Customized search or acquisition of digital media assets.
  49. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford Fraser, James Q., Data capture from rendered documents using handheld device.
  50. King, Martin Towle; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Quentin, Data capture from rendered documents using handheld device.
  51. Rhoten, George; Treadgold, Nicholas K., Determining domain salience ranking from ambiguous words in natural speech.
  52. Cheyer, Adam John; Brigham, Christopher Dean; Guzzoni, Didier Rene, Determining user intent based on ontologies of domains.
  53. Cheyer, Adam J., Device access using voice authentication.
  54. Cheyer, Adam John, Device access using voice authentication.
  55. Kauffmann, Alejandro Jose; Plagemann, Christian, Device interaction with spatially aware gestures.
  56. Kauffmann, Alejandro Jose; Plagemann, Christian, Device interaction with spatially aware gestures.
  57. Kauffmann, Alejandro Jose; Plagemann, Christian, Device interaction with spatially aware gestures.
  58. Piernot, Philippe P.; Binder, Justin G., Device voice control for selecting a displayed affordance.
  59. Carson, David A.; Keen, Daniel; Dibiase, Evan; Saddler, Harry J.; Iacono, Marco; Lemay, Stephen O.; Pitschel, Donald W.; Gruber, Thomas R., Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant.
  60. Fleizach, Christopher Brian; Gruber, Thomas Robert, Device, method, and user interface for voice-activated navigation and browsing of a document.
  61. Lindahl, Aram; Wood, Policarpo, Devices and methods for identifying a prompt corresponding to a voice input in a sequence of prompts.
  62. Lindahl, Aram; Wood, Policarpo, Devices and methods for identifying a prompt corresponding to a voice input in a sequence of prompts.
  63. Raitio, Tuomo J.; Hunt, Melvyn J.; Richards, Hywel B.; Chinthakunta, Madhusudan, Digital assistant providing whispered speech.
  64. Sun, Jian; Wang, Qiang; Zhang, Weiwei; Tang, Xiaoou; Shum, Heung-Yeung, Digital video effects.
  65. Henton, Caroline; Naik, Devang, Disambiguating heteronyms in speech synthesis.
  66. Guzzoni, Didier Rene; Cheyer, Adam John; Gruber, Thomas Robert; Brigham, Christopher Dean; Saddler, Harry Joseph, Disambiguation based on active input elicitation by intelligent automated assistant.
  67. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Document enhancement system and method.
  68. Amer, Mohamed R.; Siddiquie, Behjat; Divakaran, Ajay; Richey, Colleen; Khan, Saad; Sawhney, Hapreet S.; Shields, Timothy J., Dynamic hybrid models for multimodal analysis.
  69. Wagner, Oliver P., Electronic device with text error correction based on voice recognition data.
  70. Wagner, Oliver P., Electronic device with text error correction based on voice recognition data.
  71. Lindahl, Aram M., Electronic devices with voice command and contextual data processing capabilities.
  72. Lindahl, Aram M., Electronic devices with voice command and contextual data processing capabilities.
  73. Lindahl, Aram M., Electronic devices with voice command and contextual data processing capabilities.
  74. Lindahl, Aram M., Electronic devices with voice command and contextual data processing capabilities.
  75. Lindahl, Aram M., Electronic devices with voice command and contextual data processing capabilities.
  76. Nowson, Scott P.; Perez, Julien J., Emotion, mood and personality inference in real-time environments.
  77. Shin, Hyun Soon; Lee, Yong Kwi; Jo, Jun, Emotion-based vehicle service system, emotion cognition processing apparatus, safe driving apparatus, and emotion-based safe driving service method.
  78. Bellegarda, Jerome R., Entropy-guided text prediction using combined word and character n-gram language models.
  79. Bellegarda, Jerome, Exemplar-based latent perceptual modeling for automatic speech recognition.
  80. Futrell, Richard L.; Gruber, Thomas R., Exemplar-based natural language processing.
  81. Futrell, Richard L.; Gruber, Thomas R., Exemplar-based natural language processing.
  82. Bellegarda, Jerome R.; Silverman, Kim E. A., Fast, language-independent method for user authentication by voice.
  83. Bellegarda, Jerome R.; Silverman, Kim E. A., Fast, language-independent method for user authentication by voice.
  84. Gruber, Thomas R.; Sabatelli, Alessandro F.; Pitschel, Donald W., Generating and processing task items that represent tasks to perform.
  85. Higaki, Nobuo; Yoshida, Yuichi; Fujimura, Kikuo, Gesture recognition system.
  86. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Handheld device for capturing text from both a document printed on paper and a document displayed on a dynamic display device.
  87. Fleizach, Christopher Brian; Minifie, Darren C., Handling speech synthesis of content for multiple languages.
  88. Foo, Edwin W.; Hughes, Gregory F., Hearing assistance system for providing consistent human speech.
  89. King, Martin T.; Stephens, Redwood; Mannby, Claes-Fredrik; Peterson, Jesse; Sanvitale, Mark; Smith, Michael J., Identifying a document by performing spectral analysis on the contents of the document.
  90. King, Martin T.; Mannby, Claes-Fredrik; Smith, Michael J., Image search using text-based elements within the contents of images.
  91. King, Martin T.; Stafford Fraser, James Q.; Kushler, Clifford A.; Grover, Dale L., Information gathering system and method.
  92. Orr, Ryan M.; Nell, Garett R.; Brumbaugh, Benjamin L., Intelligent assistant for home automation.
  93. Gruber, Thomas Robert; Cheyer, Adam John; Kittlaus, Dag; Guzzoni, Didier Rene; Brigham, Christopher Dean; Giuli, Richard Donald; Bastea-Forte, Marcello; Saddler, Harry Joseph, Intelligent automated assistant.
  94. Gruber, Thomas Robert; Cheyer, Adam John; Kittlaus, Dag; Guzzoni, Didier Rene; Brigham, Christopher Dean; Giuli, Richard Donald; Bastea-Forte, Marcello; Saddler, Harry Joseph, Intelligent automated assistant.
  95. Saddler, Harry J.; Piercy, Aimee T.; Weinberg, Garrett L.; Booker, Susan L., Intelligent automated assistant.
  96. Os, Marcel Van; Saddler, Harry J.; Napolitano, Lia T.; Russell, Jonathan H.; Lister, Patrick M.; Dasari, Rohit, Intelligent automated assistant for TV user interactions.
  97. Van Os, Marcel; Saddler, Harry J.; Napolitano, Lia T.; Russell, Jonathan H.; Lister, Patrick M.; Dasari, Rohit, Intelligent automated assistant for TV user interactions.
  98. Orr, Ryan M.; Bernardo, Matthew P.; Mandel, Daniel J., Intelligent automated assistant for media exploration.
  99. Piersol, Kurt W.; Orr, Ryan M.; Mandel, Daniel J., Intelligent device arbitration and control.
  100. Booker, Susan L.; Krishnan, Murali; Weinberg, Garrett L.; Piercy, Aimee, Intelligent list reading.
  101. Fleizach, Christopher Brian; Hudson, Reginald Dean, Intelligent text-to-speech conversion.
  102. Fleizach, Christopher Brian; Hudson, Reginald Dean, Intelligent text-to-speech conversion.
  103. Fleizach, Christopher Brian; Hudson, Reginald Dean, Intelligent text-to-speech conversion.
  104. Cheyer, Adam John; Guzzoni, Didier Rene; Gruber, Thomas Robert; Brigham, Christopher Dean, Intent deduction based on previous user interactions with voice assistant.
  105. Lemay, Stephen O.; Sabatelli, Alessandro Francesco; Anzures, Freddy Allen; Chaudhri, Imran; Forstall, Scott; Novick, Gregory, Interface for a virtual digital assistant.
  106. Hoffberg, Steven M.; Hoffberg-Borghesani, Linda I., Internet appliance system and method.
  107. Cash, Jesse R.; Dave, Swapnil R.; Varoglu, Devrim, Interpreting and acting upon commands that involve sharing information with remote devices.
  108. Bellegarda, Jerome R.; Barman, Bishal, Language identification from short strings.
  109. Hatori, Jun; Yu, Dominic, Language input correction.
  110. Kida, Yasuo; Kocienda, Ken; Furches, Elizabeth Caroline, Language input interface on a device.
  111. Kida, Yasuo; Kocienda, Kenneth; Cranfill, Elizabeth Caroline Furches, Language input interface on a device.
  112. Cheyer, Adam John; Guzzoni, Didier Rene; Gruber, Thomas Robert; Brigham, Christopher Dean; Kittlaus, Dag, Maintaining context information between user interactions with a voice assistant.
  113. Shambaugh, Craig; Beck, III, James L., Method and apparatus for agent optimization using speech synthesis and recognition.
  114. Cheyer, Adam; Guzzoni, Didier, Method and apparatus for building an intelligent automated assistant.
  115. Cheyer, Adam; Guzzoni, Didier, Method and apparatus for building an intelligent automated assistant.
  116. Paulik, Matthias; Evermann, Gunnar; Gillick, Laurence S., Method and apparatus for discovering trending terms in speech requests.
  117. Christie, Gregory N.; Westen, Peter T.; Lemay, Stephen O.; Alfke, Jens, Method and apparatus for displaying information during an instant messaging session.
  118. Cheyer, Adam, Method and apparatus for searching using an active ontology.
  119. Seymour, Leslie G., Method and apparatus to encourage development of long term recollections of given episodes.
  120. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Method and system for character recognition.
  121. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Method and system for character recognition.
  122. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Method and system for character recognition.
  123. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Quentin, Method and system for character recognition.
  124. Freeman, Daniel; Barrentine, Derek B., Method and system for operating a multi-function portable electronic device using voice-activation.
  125. Ramerth, Brent D.; Naik, Devang K.; Davidson, Douglas R.; Dolfing, Jannes G. A.; Pu, Jia, Method for disambiguating multiple readings in language conversion.
  126. Paulik, Matthias; Huang, Rongqing, Method for supporting dynamic grammars in WFST-based ASR.
  127. Scholl, Holger, Method of processing a text, gesture, facial expression, and/or behavior description comprising a test of the authorization for using corresponding profiles for synthesis.
  128. Unuma, Munetoshi; Nonaka, Shiro; Oho, Shigeru, Method, apparatus and system for recognizing actions.
  129. Russek, David J., Method, system and software for associating attributes within digital media presentations.
  130. Russek, David J., Method, system and software for digital media narrative personalization.
  131. Lee, Michael M., Methods and apparatus for altering audio output signals.
  132. Bellegarda, Jerome R., Methods and apparatuses for automatic speech recognition.
  133. Mercer, Paul, Methods and apparatuses for display and traversing of links in page character array.
  134. Shaw, Christopher Deane, Methods for spontaneously generating behavior in two and three-dimensional images and mechanical robots, and of linking this behavior to that of human users.
  135. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford Fraser, James Q., Methods, systems and computer program products for data gathering in a digital and hard copy document environment.
  136. Lee, Michael M.; Gregg, Justin; Seguin, Chad G., Mobile device having human language translation capability with positional feedback.
  137. Lee, Michael M.; Gregg, Justin; Seguin, Chad G., Mobile device having human language translation capability with positional feedback.
  138. Gruber, Thomas R.; Saddler, Harry J.; Bellegarda, Jerome Rene; Nyeggen, Bryce H.; Sabatelli, Alessandro, Multi-command single utterance input method.
  139. Mason, James Eric; Boettcher, Jesse, Multi-tiered voice feedback in an electronic device.
  140. Mason, James Eric; Boettcher, Jesse, Multi-tiered voice feedback in an electronic device.
  141. Bellegarda, Jerome R.; Davidson, Douglas R., Multilingual word prediction.
  142. Naik, Devang K., Name recognition system.
  143. Naik, Devang K., Name recognition system.
  144. Lindahl, Aram; Williams, Joseph M.; Klimanis, Gints Valdis, Noise profile determination for voice-related feature.
  145. King, Martin T.; Mannby, Claes-Fredrik; Arends, Thomas C.; Bajorins, David P.; Fox, Daniel C., Optical scanners, such as hand-held optical scanners.
  146. Gruber, Thomas Robert; Saddler, Harry Joseph; Cheyer, Adam John; Kittlaus, Dag; Brigham, Christopher Dean; Giuli, Richard Donald; Guzzoni, Didier Rene; Bastea-Forte, Marcello, Paraphrasing of user requests and results by automated digital assistant.
  147. Bellegarda, Jerome R., Parsimonious continuous-space phrase representations for natural language processing.
  148. Bellegarda, Jerome R.; Yaman, Sibel, Parsimonious handling of word inflection via categorical stem + suffix N-gram language models.
  149. Bellegarda, Jerome, Part-of-speech tagging using latent analogy.
  150. King, Martin T.; Stephens, Redwood; Mannby, Claes-Fredrik; Peterson, Jesse; Sanvitale, Mark; Smith, Michael J., Performing actions based on capturing information from rendered documents, such as documents under copyright.
  151. King, Martin T.; Stephens, Redwood; Mannby, Claes-Fredrik; Peterson, Jesse; Sanvitale, Mark; Smith, Michael J., Performing actions based on capturing information from rendered documents, such as documents under copyright.
  152. Chen, Lik Harry; Cheyer, Adam John; Guzzoni, Didier Rene; Gruber, Thomas Robert, Personalized vocabulary for digital assistant.
  153. Anzures, Freddy Allen; van Os, Marcel; Lemay, Stephen O.; Matas, Michael, Portable multifunction device, method, and graphical user interface for viewing and managing electronic calendars.
  154. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Portable scanning device.
  155. Wang, Xin; Ramerth, Brent D., Predictive conversion of language input.
  156. Dolfing, Jannes; Ramerth, Brent; Davidson, Douglas; Bellegarda, Jerome; Moore, Jennifer; Eminidis, Andreas; Shaffer, Joshua, Predictive text input.
  157. Gruber, Thomas Robert; Cheyer, Adam John; Guzzoni, Didier Rene; Brigham, Christopher Dean; Saddler, Harry Joseph, Prioritizing selection criteria by automated assistant.
  158. Paulik, Matthias; Mason, Henry G.; Seigel, Matthew S., Privacy preserving distributed evaluation framework for embedded personalized systems.
  159. Martel, Mathieu Jean; Deniau, Thomas, Proactive assistance based on dialog communication between devices.
  160. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Processing techniques for text capture from a rendered document.
  161. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Processing techniques for text capture from a rendered document.
  162. King, Martin Towle; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Quentin, Processing techniques for text capture from a rendered document.
  163. King, Martin T.; Kushler, Clifford A.; Stafford-Fraser, James Q.; Grover, Dale L., Processing techniques for visual capture data from a rendered document.
  164. King, Martin T.; Kushler, Clifford A.; Stafford-Fraser, James Q.; Grover, Dale L., Processing techniques for visual capture data from a rendered document.
  165. Kim, Yoon, Providing an indication of the suitability of speech recognition.
  166. Yanagihara, Kazuhisa, Providing text input using speech data and non-speech data.
  167. Yanagihara, Kazuhisa, Providing text input using speech data and non-speech data.
  168. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Publishing techniques for adding value to a rendered document.
  169. Vanderwater, Kathryn R.; Haugen, Frances B.; Unger, Alexander K., Ranking users based on contextual factors.
  170. Piernot, Philippe P.; Binder, Justin G., Reducing the need for manual start/end-pointing and trigger phrases.
  171. Frank, Ari M; Thieberger, Gil, Reducing transmissions of measurements of affective response by identifying actions that imply emotional response.
  172. Volkert, Christopher, Search assistant for digital media assets.
  173. Volkert, Christopher, Search assistant for digital media assets.
  174. Volkert, Christopher, Search assistant for digital media assets.
  175. Volkert, Christopher, Search assistant for digital media assets.
  176. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Search engines and systems with handheld document data capture devices.
  177. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Search engines and systems with handheld document data capture devices.
  178. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Secure data gathering from rendered documents.
  179. Moore, Jennifer Lauren; Naik, Devang K.; Bellegarda, Jerome R.; Aitken, Kevin Bartlett; Silverman, Kim E., Semantic search using a single-source semantic model.
  180. Bellegarda, Jerome, Sentiment prediction from textual data.
  181. Cheyer, Adam John; Guzzoni, Didier Rene; Gruber, Thomas Robert; Brigham, Christopher Dean, Service orchestration for intelligent automated assistant.
  182. Naik, Devang K.; Piernot, Philippe P., Social reminders.
  183. Naik, Devang K.; Piernot, Philippe P., Social reminders.
  184. Kim, Yoon; Kajarekar, Sachin S., Speaker identification and unsupervised speaker adaptation techniques.
  185. Hunt, Melvyn; Bridle, John, Speech recognition involving a mobile device.
  186. Chen, Lik Harry, Speech recognition repair using contextual information.
  187. Sumner, Michael R.; Newendorp, Brandon J.; Orr, Ryan M., Structured dictation using intelligent automated assistants.
  188. Smus, Boris; Kauffmann, Alejandro Jose; Plagemann, Christian, Supplementing speech commands with gestures.
  189. Sinha, Anoop K., System and method for detecting errors in interactions with a voice-based digital assistant.
  190. Roberts, Andrew J.; Martin, David L.; Saddler, Harry J., System and method for emergency calls initiated by voice command.
  191. Evermann, Gunnar, System and method for inferring user intent from speech inputs.
  192. Naik, Devang K.; Tackin, Onur E., System and method for updating an adaptive speech recognition model.
  193. Naik, Devang K.; Gruber, Thomas R.; Weiner, Liam; Binder, Justin G.; Srisuwananukorn, Charles; Evermann, Gunnar; Williams, Shaun Eric; Chen, Hong; Napolitano, Lia T., System and method for user-specified pronunciation of words for speech synthesis and recognition.
  194. Naik, Devang K.; Gruber, Thomas R.; Weiner, Liam; Binder, Justin G.; Srisuwananukorn, Charles; Evermann, Gunnar; Williams, Shaun Eric; Chen, Hong; Napolitano, Lia T., System and method for user-specified pronunciation of words for speech synthesis and recognition.
  195. Vieri, Riccardo, System to generate and set up an advertising campaign based on the insertion of advertising messages within an exchange of messages, and method to operate said system.
  196. Vieri, Riccardo, System to generate and set up an advertising campaign based on the insertion of advertising messages within an exchange of messages, and method to operate said system.
  197. Rogers, Matthew; Silverman, Kim; Naik, Devang; Rottler, Benjamin, Systems and methods for concatenation of words in text to speech synthesis.
  198. Herman, Kenneth; Rogers, Matthew; James, Bryan, Systems and methods for determining the language to use for speech generated by a text to speech engine.
  199. James, Bryan; Herman, Kenneth; Rogers, Matthew L., Systems and methods for determining the language to use for speech generated by a text to speech engine.
  200. Naik, Devang K., Systems and methods for name pronunciation.
  201. Keen, Daniel, Systems and methods for recognizing textual identifiers within a plurality of words.
  202. Naik, DeVang; Silverman, Kim; Bellegarda, Jerome, Systems and methods for selective rate of speech and speech preferences for text to speech synthesis.
  203. Bellegarda, Jerome; Naik, Devang; Silverman, Kim, Systems and methods for selective text to speech synthesis.
  204. Bellegarda, Jerome R.; Yaman, Sibel, Systems and methods for structured stem and suffix language models.
  205. Silverman, Kim; Naik, Devang; Bellegarda, Jerome; Lenzo, Kevin, Systems and methods for text normalization for text to speech synthesis.
  206. Rogers, Matthew; Silverman, Kim; Naik, Devang; Lenzo, Kevin; Rottler, Benjamin, Systems and methods for text to speech synthesis.
  207. Silverman, Kim; Naik, Devang; Lenzo, Kevin; Henton, Caroline, Systems and methods of detecting language and natural language strings for text to speech synthesis.
  208. Neels, Alice E.; Jong, Nicholas K., Text correction processing.
  209. Willmore, Christopher P.; Jong, Nicholas K.; Hogg, Justin S., Text prediction using combined word N-gram and unigram language models.
  210. Vieri, Riccardo; Vieri, Flavio, Text to speech conversion of text messages from mobile communication devices.
  211. Vieri, Riccardo; Vieri, Flavio, Text to speech conversion of text messages from mobile communication devices.
  212. Pitschel, Donald W.; Cheyer, Adam J.; Brigham, Christopher D.; Gruber, Thomas R., Training an at least partial voice command system.
  213. Kalb, Aaron S.; Perry, Ryan P.; Alsina, Thomas Matthieu, Translating phrases from one language into another using an order-based set of declarative rules.
  214. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford Fraser, James Q., Triggering actions in response to optically or acoustically capturing keywords from a rendered document.
  215. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Triggering actions in response to optically or acoustically capturing keywords from a rendered document.
  216. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Triggering actions in response to optically or acoustically capturing keywords from a rendered document.
  217. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Triggering actions in response to optically or acoustically capturing keywords from a rendered document.
  218. King, Martin T.; Grover, Dale L.; Kushler, Clifford A.; Stafford-Fraser, James Q., Triggering actions in response to optically or acoustically capturing keywords from a rendered document.
  219. Bellegarda, Jerome R., Unified ranking with entropy-weighted information for phrase-based semantic auto-completion.
  220. Raitio, Tuomo J.; Prahallad, Kishore Sunkeswari; Conkie, Alistair D.; Golipour, Ladan; Winarsky, David A., Unit-selection text-to-speech synthesis based on predicted concatenation parameters.
  221. Jeon, Woojay, Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks.
  222. Bellegarda, Jerome R., Unsupervised document clustering using latent semantic density analysis.
  223. Haughay, Allen P., User profiling for selecting user specific voice input processing information.
  224. Haughay, Allen P., User profiling for voice input processing.
  225. Haughay, Allen P., User profiling for voice input processing.
  226. Haughay, Allen P., User profiling for voice input processing.
  227. Lindahl, Aram; Paquier, Baptiste Pierre, User-specific noise suppression for voice quality improvements.
  228. Gruber, Thomas Robert; Brigham, Christopher Dean; Keen, Daniel S.; Novick, Gregory; Phipps, Benjamin S., Using context information to facilitate processing of commands in a virtual assistant.
  229. Gruber, Thomas Robert; Cheyer, Adam John; Guzzoni, Didier Rene, Using event alert text as input to an automated assistant.
  230. King, Martin T.; Mannby, Claes-Fredrik; Valenti, William, Using gestalt information to identify locations in printed information.
  231. Lemay, Stephen O.; Newendorp, Brandon J.; Dascola, Jonathan R., Virtual assistant activation.
  232. Fleizach, Christopher B., Voice control to diagnose inadvertent activation of accessibility features.
  233. Binder, Justin; Post, Samuel D.; Tackin, Onur; Gruber, Thomas R., Voice trigger for a digital assistant.
  234. Badaskar, Sameer, Voice-based media searching.
  235. Badaskar, Sameer, Voice-based media searching.