Formant-based speech synthesizer employing demi-syllable concatenation with independent cross fade in the filter parameter and source domains

IPC Classification Information
Country/Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th edition)
  • G06F-015/00
  • G10L-013/00
Application number: US-0200327 (1998-11-25)
Inventors / Address
  • Pearson Steve
  • Kibre Nicholas
  • Niedzielski Nancy
Applicant / Address
  • Matsushita Electric Industrial Co., Ltd., JPX
Agent / Address
  • Harness, Dickey & Pierce, P.L.C.
Citation information: cited by 111 patents; cites 5 patents

Abstract

The concatenative speech synthesizer employs demi-syllable subword units to generate speech. The synthesizer is based on a source-filter model that uses source signals that correspond closely to the human glottal source and that uses filter parameters that correspond closely to the human vocal tract…
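
The title and abstract describe joining demi-syllable units with a cross-fade performed independently in the source-signal domain and in the filter-parameter domain. The Python sketch below is a minimal illustration of that idea, not the patented implementation; the per-unit layout (a source waveform plus a frame-wise filter-parameter track), the function names, and the overlap lengths are assumptions.

    # A minimal sketch, not the patented implementation: two demi-syllable units,
    # each holding a source-domain waveform and a frame-wise filter-parameter
    # track, are joined with cross-fades applied independently in the two domains.
    # The dict layout, function names, and overlap lengths are assumptions.
    import numpy as np

    def crossfade(a: np.ndarray, b: np.ndarray, overlap: int) -> np.ndarray:
        """Linearly cross-fade the tail of `a` into the head of `b` along axis 0."""
        ramp = np.linspace(1.0, 0.0, overlap).reshape(-1, *([1] * (a.ndim - 1)))
        blended = a[-overlap:] * ramp + b[:overlap] * (1.0 - ramp)
        return np.concatenate([a[:-overlap], blended, b[overlap:]], axis=0)

    def join_demisyllables(left: dict, right: dict,
                           source_overlap: int = 160,
                           filter_overlap: int = 4) -> dict:
        """Concatenate two units, cross-fading source and filter tracks separately."""
        return {
            "source": crossfade(left["source"], right["source"], source_overlap),
            "filter": crossfade(left["filter"], right["filter"], filter_overlap),
        }

    # Illustrative data: 1-D source samples, 2-D (frames x coefficients) filter track.
    left = {"source": np.random.randn(2000), "filter": np.random.randn(50, 10)}
    right = {"source": np.random.randn(2400), "filter": np.random.randn(60, 10)}
    joined = join_demisyllables(left, right)

Because the two cross-fades use separate overlap lengths, the blend of the excitation signal and the blend of the vocal-tract parameters need not coincide in time, which is the point of performing them independently.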

Representative Claim

What is claimed is: 1. A concatenative speech synthesizer, comprising: a database containing (a) demi-syllable waveform data associated with a plurality of demi-syllables and (b) filter parameter data associated with said plurality of demi-syllables; a unit selection system for extracting selected…
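
The claim (truncated in this record) recites a database holding demi-syllable waveform data and filter-parameter data, together with a unit selection system that extracts selected units. The Python sketch below is a minimal illustration of those two elements under assumed data structures and names; it is not the patent's implementation.

    # A minimal sketch of the two claimed elements, under assumed data structures:
    # a database keyed by demi-syllable label that stores waveform data and
    # filter-parameter data, and a unit-selection step that extracts the stored
    # units for a requested demi-syllable sequence. Names are illustrative only.
    from dataclasses import dataclass, field
    import numpy as np

    @dataclass
    class DemiSyllableUnit:
        waveform: np.ndarray       # demi-syllable waveform data (source domain)
        filter_params: np.ndarray  # frame-wise filter-parameter data

    @dataclass
    class DemiSyllableDatabase:
        units: dict = field(default_factory=dict)

        def add(self, label: str, unit: DemiSyllableUnit) -> None:
            self.units[label] = unit

        def select(self, labels: list) -> list:
            """Unit selection: extract the stored units for a demi-syllable sequence."""
            return [self.units[label] for label in labels]

    # Illustrative usage: store two demi-syllables, then select them for synthesis.
    db = DemiSyllableDatabase()
    db.add("ka-", DemiSyllableUnit(np.zeros(1600), np.zeros((40, 10))))
    db.add("-at", DemiSyllableUnit(np.zeros(1800), np.zeros((45, 10))))
    selected = db.select(["ka-", "-at"])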

Patents cited by this patent (5)

  1. Sharman, Richard Anthony (GBX), Method and system for synthesizing speech.
  2. Serra, Xavier (Barcelona CA ESX); Williams, Chris (San Rafael CA); Gross, Robert (Raleigh NC); Wold, Erling (El Cerrito CA), Method of and apparatus for analyzing and synthesizing a sound by extracting and controlling a sound parameter.
  3. Miyasaka, Shuji (JPX), Reproducing apparatus.
  4. Holzrichter, John F.; Ng, Lawrence C., Speech coding, reconstruction and recognition using acoustics and electromagnetic waves.
  5. Benbassat, Gerard V. (St. Paul FRX), Speech encoding process combining written and spoken message codes.

Patents citing this patent (111)

  1. Gruber, Thomas R.; Sabatelli, Alessandro F.; Aybes, Alexandre A.; Pitschel, Donald W.; Voas, Edward D.; Anzures, Freddy A.; Marcos, Paul D., Actionable reminder entries.
  2. Gruber, Thomas Robert; Sabatelli, Alessandro F.; Aybes, Alexandre A.; Pitschel, Donald W.; Voas, Edward D.; Anzures, Freddy A.; Marcos, Paul D., Active transport based notifications.
  3. Mason, Henry, Analyzing audio input for efficient speech and music recognition.
  4. Huang, Rongqing; Oparin, Ilya, Applying neural network language models to weighted finite state transducers for automatic speech recognition.
  5. Nallasamy, Udhyakumar; Kajarekar, Sachin S.; Paulik, Matthias; Seigel, Matthew, Automatic accent detection using acoustic models.
  6. Giuli, Richard D.; Treadgold, Nicholas K., Better resolution when referencing to concepts.
  7. Giuli, Richard D.; Treadgold, Nicholas K., Better resolution when referencing to concepts.
  8. Naik, Devang K.; Mohamed, Ali S.; Chen, Hong M., Caching apparatus for serving phonetic pronunciations.
  9. Newendorp, Brandon J.; Dibiase, Evan S., Competing devices responding to voice triggers.
  10. Beutnagel, Mark Charles; Mohri, Mehryar; Riley, Michael Dennis, Concatenation cost in speech synthesis for acoustic unit sequential pair using hash table and default concatenation cost.
  11. Partovi, Hadi; Brathwaite, Roderick Steven; Davis, Angus Macdonald; McCue, Michael S.; Porter, Brandon William; Giannandrea, John; Walther, Eckart; Li, Zhe, Content personalization over an interface with adaptive voice character.
  12. Williams, Shaun E.; Mason, Henry G.; Krishnamoorthy, Mahesh; Paulik, Matthias; Agrawal, Neha; Kajarekar, Sachin S.; Uguroglu, Selen; Mohamed, Ali S., Context-based endpoint detection.
  13. Larson, Anthony L.; Dave, Swapnil R.; Varoglu, Devrim, Context-sensitive handling of interruptions.
  14. van Os, Marcel, Context-sensitive handling of interruptions by intelligent digital assistant.
  15. Gruber, Thomas R.; Cheyer, Adam John; Pitschel, Donald W., Crowd sourcing information to fulfill user requests.
  16. Rhoten, George; Treadgold, Nicholas K., Determining domain salience ranking from ambiguous words in natural speech.
  17. Cheyer, Adam John; Brigham, Christopher Dean; Guzzoni, Didier Rene, Determining user intent based on ontologies of domains.
  18. Cheyer, Adam J., Device access using voice authentication.
  19. Cheyer, Adam John, Device access using voice authentication.
  20. Piernot, Philippe P.; Binder, Justin G., Device voice control for selecting a displayed affordance.
  21. Carson, David A.; Keen, Daniel; Dibiase, Evan; Saddler, Harry J.; Iacono, Marco; Lemay, Stephen O.; Pitschel, Donald W.; Gruber, Thomas R., Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant.
  22. Fleizach, Christopher Brian; Gruber, Thomas Robert, Device, method, and user interface for voice-activated navigation and browsing of a document.
  23. Raitio, Tuomo J.; Hunt, Melvyn J.; Richards, Hywel B.; Chinthakunta, Madhusudan, Digital assistant providing whispered speech.
  24. Henton, Caroline; Naik, Devang, Disambiguating heteronyms in speech synthesis.
  25. Zhang, Ming; Pai, Wan-Chieh, Dynamic range control module, speech processing apparatus, and method for amplitude adjustment for a speech signal.
  26. Bellegarda, Jerome R., Entropy-guided text prediction using combined word and character n-gram language models.
  27. Futrell, Richard L.; Gruber, Thomas R., Exemplar-based natural language processing.
  28. Futrell, Richard L.; Gruber, Thomas R., Exemplar-based natural language processing.
  29. Addison, Edwin R.; Wilson, H. Donald; Marple, Gary; Handal, Anthony H.; Krebs, Nancy, Expressive parsing in computerized conversion of text to speech.
  30. Bellegarda, Jerome R.; Silverman, Kim E. A., Fast, language-independent method for user authentication by voice.
  31. Fleizach, Christopher Brian; Minifie, Darren C., Handling speech synthesis of content for multiple languages.
  32. Orr, Ryan M.; Nell, Garett R.; Brumbaugh, Benjamin L., Intelligent assistant for home automation.
  33. Gruber, Thomas Robert; Cheyer, Adam John; Kittlaus, Dag; Guzzoni, Didier Rene; Brigham, Christopher Dean; Giuli, Richard Donald; Bastea-Forte, Marcello; Saddler, Harry Joseph, Intelligent automated assistant.
  34. Gruber, Thomas Robert; Cheyer, Adam John; Kittlaus, Dag; Guzzoni, Didier Rene; Brigham, Christopher Dean; Giuli, Richard Donald; Bastea-Forte, Marcello; Saddler, Harry Joseph, Intelligent automated assistant.
  35. Saddler, Harry J.; Piercy, Aimee T.; Weinberg, Garrett L.; Booker, Susan L., Intelligent automated assistant.
  36. Os, Marcel Van; Saddler, Harry J.; Napolitano, Lia T.; Russell, Jonathan H.; Lister, Patrick M.; Dasari, Rohit, Intelligent automated assistant for TV user interactions.
  37. Van Os, Marcel; Saddler, Harry J.; Napolitano, Lia T.; Russell, Jonathan H.; Lister, Patrick M.; Dasari, Rohit, Intelligent automated assistant for TV user interactions.
  38. Orr, Ryan M.; Bernardo, Matthew P.; Mandel, Daniel J., Intelligent automated assistant for media exploration.
  39. Piersol, Kurt W.; Orr, Ryan M.; Mandel, Daniel J., Intelligent device arbitration and control.
  40. Booker, Susan L.; Krishnan, Murali; Weinberg, Garrett L.; Piercy, Aimee, Intelligent list reading.
  41. Fleizach, Christopher Brian; Hudson, Reginald Dean, Intelligent text-to-speech conversion.
  42. Fleizach, Christopher Brian; Hudson, Reginald Dean, Intelligent text-to-speech conversion.
  43. Lemay, Stephen O.; Sabatelli, Alessandro Francesco; Anzures, Freddy Allen; Chaudhri, Imran; Forstall, Scott; Novick, Gregory, Interface for a virtual digital assistant.
  44. Cash, Jesse R.; Dave, Swapnil R.; Varoglu, Devrim, Interpreting and acting upon commands that involve sharing information with remote devices.
  45. Bellegarda, Jerome R.; Barman, Bishal, Language identification from short strings.
  46. Hatori, Jun; Yu, Dominic, Language input correction.
  47. Paulik, Matthias; Evermann, Gunnar; Gillick, Laurence S., Method and apparatus for discovering trending terms in speech requests.
  48. Paulik, Matthias; Huang, Rongqing, Method for supporting dynamic grammars in WFST-based ASR.
  49. Wilson, H. Donald; Handal, Anthony H.; Marple, Gary; Lessac, Michael, Method of recognizing spoken language with recognition of language color.
  50. Case, Eliot M., Method of training a digital voice library to associate syllable speech items with literal text syllables.
  51. Lee, Michael M., Methods and apparatus for altering audio output signals.
  52. Beutnagel, Mark Charles; Mohri, Mehryar; Riley, Michael Dennis, Methods and apparatus for rapid acoustic unit selection from a large speech corpus.
  53. Bellegarda, Jerome R., Methods and apparatus related to pruning for concatenative text-to-speech synthesis.
  54. Lee, Michael M.; Gregg, Justin; Seguin, Chad G., Mobile device having human language translation capability with positional feedback.
  55. Lee, Michael M.; Gregg, Justin; Seguin, Chad G., Mobile device having human language translation capability with positional feedback.
  56. Gruber, Thomas R.; Saddler, Harry J.; Bellegarda, Jerome Rene; Nyeggen, Bryce H.; Sabatelli, Alessandro, Multi-command single utterance input method.
  57. Bellegarda, Jerome R.; Davidson, Douglas R., Multilingual word prediction.
  58. Naik, Devang K., Name recognition system.
  59. Gruber, Thomas Robert; Saddler, Harry Joseph; Cheyer, Adam John; Kittlaus, Dag; Brigham, Christopher Dean; Giuli, Richard Donald; Guzzoni, Didier Rene; Bastea-Forte, Marcello, Paraphrasing of user requests and results by automated digital assistant.
  60. Bellegarda, Jerome R., Parsimonious continuous-space phrase representations for natural language processing.
  61. Bellegarda, Jerome R.; Yaman, Sibel, Parsimonious handling of word inflection via categorical stem + suffix N-gram language models.
  62. Chen, Lik Harry; Cheyer, Adam John; Guzzoni, Didier Rene; Gruber, Thomas Robert, Personalized vocabulary for digital assistant.
  63. Wang, Xin; Ramerth, Brent D., Predictive conversion of language input.
  64. Dolfing, Jannes; Ramerth, Brent; Davidson, Douglas; Bellegarda, Jerome; Moore, Jennifer; Eminidis, Andreas; Shaffer, Joshua, Predictive text input.
  65. Paulik, Matthias; Mason, Henry G.; Seigel, Matthew S., Privacy preserving distributed evaluation framework for embedded personalized systems.
  66. Martel, Mathieu Jean; Deniau, Thomas, Proactive assistance based on dialog communication between devices.
  67. Kim, Yoon, Providing an indication of the suitability of speech recognition.
  68. Stifelman, Lisa J.; Partovi, Hadi; Partovi, Haleh; Alpert, David Bryan; Marx, Matthew Talin; Bailey, Scott James; Sims, Kyle D.; Bailey, Darby McDonough; Brathwaite, Roderick Steven; Koh, Eugene; Davis, Angus Macdonald, Providing menu and other services for an information processing system using a telephone or other audio interface.
  69. Stifelman, Lisa Joy; Partovi, Hadi; Partovi, Haleh; Alpert, David Bryan; Marx, Matthew Talin; Bailey, Scott James; Sims, Kyle D.; Bailey, Darby McDonough; Brathwaite, Roderick Steven; Koh, Eugene; Davis, Angus Macdonald, Providing services for an information processing system using an audio interface.
  70. Beutnagel, Mark Charles; Mohri, Mehryar; Riley, Michael Dennis, Recording concatenation costs of most common acoustic unit sequential pairs to a concatenation cost database for speech synthesis.
  71. Piernot, Philippe P.; Binder, Justin G., Reducing the need for manual start/end-pointing and trigger phrases.
  72. Cheyer, Adam John; Guzzoni, Didier Rene; Gruber, Thomas Robert; Brigham, Christopher Dean, Service orchestration for intelligent automated assistant.
  73. Naik, Devang K.; Piernot, Philippe P., Social reminders.
  74. Naik, Devang K.; Piernot, Philippe P., Social reminders.
  75. Kim, Yoon; Kajarekar, Sachin S., Speaker identification and unsupervised speaker adaptation techniques.
  76. Hunt, Melvyn; Bridle, John, Speech recognition involving a mobile device.
  77. Kasai, Osamu; Mizoguchi, Toshiyuki, Speech synthesis for tasks with word and prosody dictionaries.
  78. Beutnagel, Mark Charles; Mohri, Mehryar; Riley, Michael Dennis, Speech synthesis from acoustic units with default values of concatenation cost.
  79. Yamada, Masayuki; Komori, Yasuhiro; Fukada, Toshiaki, Speech synthesis method and apparatus, and dictionary generation method and apparatus.
  80. Chazan, Dan; Hoory, Ron; Kons, Zvi; Shechtman, Slava; Sorin, Alexander, Speech synthesis using complex spectral modeling.
  81. Kasai, Osamu; Mizoguchi, Toshiyuki, Speech synthesis with prosodic model data and accent type.
  82. Yamada, Masayuki; Komori, Yasuhiro, Speech synthesizing method and apparatus using prosody control.
  83. Handal, Anthony H.; Marple, Gary; Wilson, H. Donald; Lessac, Michael, Speech training method with alternative proper pronunciation database.
  84. Sumner, Michael R.; Newendorp, Brandon J.; Orr, Ryan M., Structured dictation using intelligent automated assistants.
  85. Pollet, Vincent; Breen, Andrew, Synthesis by generation and concatenation of multi-form segments.
  86. Case, Eliot M.; Weirauch, Judith L., System and method for converting text-to-voice.
  87. Case, Eliot M.; Phillips, Richard P., System and method for converting text-to-voice.
  88. Case, Eliot M.; Weirauch, Judith L.; Phillips, Richard P., System and method for converting text-to-voice.
  89. Sinha, Anoop K., System and method for detecting errors in interactions with a voice-based digital assistant.
  90. Roberts, Andrew J.; Martin, David L.; Saddler, Harry J., System and method for emergency calls initiated by voice command.
  91. Evermann, Gunnar, System and method for inferring user intent from speech inputs.
  92. Naik, Devang K.; Tackin, Onur E., System and method for updating an adaptive speech recognition model.
  93. Naik, Devang K.; Gruber, Thomas R.; Weiner, Liam; Binder, Justin G.; Srisuwananukorn, Charles; Evermann, Gunnar; Williams, Shaun Eric; Chen, Hong; Napolitano, Lia T., System and method for user-specified pronunciation of words for speech synthesis and recognition.
  94. Naik, Devang K.; Gruber, Thomas R.; Weiner, Liam; Binder, Justin G.; Srisuwananukorn, Charles; Evermann, Gunnar; Williams, Shaun Eric; Chen, Hong; Napolitano, Lia T., System and method for user-specified pronunciation of words for speech synthesis and recognition.
  95. Naik, Devang K., Systems and methods for name pronunciation.
  96. Bellegarda, Jerome R.; Yaman, Sibel, Systems and methods for structured stem and suffix language models.
  97. Neels, Alice E.; Jong, Nicholas K., Text correction processing.
  98. Willmore, Christopher P.; Jong, Nicholas K.; Hogg, Justin S., Text prediction using combined word N-gram and unigram language models.
  99. Addison, Edwin R.; Wilson, H. Donald; Marple, Gary; Handal, Anthony H.; Krebs, Nancy, Text to speech.
  100. Pitschel, Donald W.; Cheyer, Adam J.; Brigham, Christopher D.; Gruber, Thomas R., Training an at least partial voice command system.
  101. Bellegarda, Jerome R., Unified ranking with entropy-weighted information for phrase-based semantic auto-completion.
  102. Raitio, Tuomo J.; Prahallad, Kishore Sunkeswari; Conkie, Alistair D.; Golipour, Ladan; Winarsky, David A., Unit-selection text-to-speech synthesis based on predicted concatenation parameters.
  103. Jeon, Woojay, Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks.
  104. Haughay, Allen P., User profiling for voice input processing.
  105. Haughay, Allen P., User profiling for voice input processing.
  106. Gruber, Thomas Robert; Brigham, Christopher Dean; Keen, Daniel S.; Novick, Gregory; Phipps, Benjamin S., Using context information to facilitate processing of commands in a virtual assistant.
  107. Gruber, Thomas Robert; Cheyer, Adam John; Guzzoni, Didier Rene, Using event alert text as input to an automated assistant.
  108. Lemay, Stephen O.; Newendorp, Brandon J.; Dascola, Jonathan R., Virtual assistant activation.
  109. Stylianou, Ioannis G., Voice quality compensation system for speech synthesis based on unit-selection speech database.
  110. Binder, Justin; Post, Samuel D.; Tackin, Onur; Gruber, Thomas R., Voice trigger for a digital assistant.
  111. Badaskar, Sameer, Voice-based media searching.