This work examines classification of diphthongs, as part of a distinctive feature-based speech recognition system. Acoustic measurements related to the vocal tract and the voice source are examined, and analysis of variance (ANOVA) results show that vowel duration, energy trajectory, and formant variation are significant. A balanced error rate of 17.8% is obtained for 2-way diphthong classification on the TIMIT database, and error rates of 32.9%, 29.9%, and 20.2% are obtained for /aw/, /ay/, and /oy/, for 4-way classification, respectively. Adding the acoustic features to widely used Mel-frequency cepstral coefficients also improves classification.
Problem Definition
Therefore, this study aims to investigate diphthong characteristics, and to use the associated acoustic phonetic parameters for diphthong classification for a distinctive feature-based speech recognition system. It is assumed that vowel detection has been completed, so that diphthong classification is carried out on vowel segments only.
Proposed Method
Experiments are also performed to examine the effect of each acoustic property: the 11 acoustic phonetic parameters are divided into three groups (duration, energy properties, and formant properties) according to their acoustic characteristics.
Table 2. Error rates for monophthongs from 4-way concurrent diphthong classification using acoustic phonetic parameters, MFCCs, and acoustic phonetic parameters with MFCCs.
This work examines acoustic phonetic parameters for classification of diphthongs in English, as part of a distinctive feature-based speech recognition system. Time variation characteristics of acoustic measurements related to the vocal tract and the voice source are examined, along with widely used cepstral coefficient features.
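The exact 11-parameter set is not listed in this summary; the sketch below is only an illustration of the three property groups, computing a hypothetical duration, energy-trajectory, and formant-movement measurement for a single vowel segment (the formant track is assumed to come from an external tracker).

```python
# Illustrative sketch only: hypothetical examples of the three property groups
# (duration, energy, formant) for one vowel segment; not the paper's exact
# 11-parameter set. The formant track is assumed to come from an external tracker.
import numpy as np

def short_time_energy_db(x, frame_len=400, hop=160):
    """Frame-wise log energy (25 ms frames, 10 ms hop at 16 kHz assumed)."""
    frames = [x[i:i + frame_len] for i in range(0, len(x) - frame_len + 1, hop)]
    return np.array([10 * np.log10(np.sum(f ** 2) + 1e-10) for f in frames])

def vowel_parameters(segment, sr, formant_track):
    """segment: vowel waveform, sr: sample rate,
    formant_track: (n_frames, 2) array of F1/F2 in Hz over the vowel."""
    energy = short_time_energy_db(segment)
    third = max(len(energy) // 3, 1)
    return {
        # duration property
        "duration_ms": 1000.0 * len(segment) / sr,
        # energy property: trajectory summarized as offset-minus-onset log energy
        "energy_slope_db": float(energy[-third:].mean() - energy[:third].mean()),
        # formant property: F1/F2 movement from vowel onset to offset
        "delta_f1_hz": float(formant_track[-1, 0] - formant_track[0, 0]),
        "delta_f2_hz": float(formant_track[-1, 1] - formant_track[0, 1]),
    }
```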
Using acoustic phonetic parameters and/or cepstral features, Gaussian Mixture Models (GMMs) with 8 mixtures, which showed optimal performance, are trained for each task on the TIMIT training data. For performance evaluation, the Balanced Error Rate (BER) [9] is computed in addition to overall classification rates.
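A minimal sketch of this setup, assuming scikit-learn's GaussianMixture (diagonal covariances are an assumption; the covariance type is not stated here): one 8-mixture GMM per class, maximum-likelihood classification, and BER taken as the average of the per-class error rates.

```python
# A minimal sketch of the classifier, assuming scikit-learn: one 8-mixture GMM
# per class (diagonal covariances are an assumption), maximum-likelihood
# classification, and BER as the average of per-class error rates.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_gmms(X, y, n_mix=8):
    """Train one GMM per class on feature matrix X (n_samples, n_features)."""
    return {c: GaussianMixture(n_components=n_mix, covariance_type="diag",
                               random_state=0).fit(X[y == c])
            for c in np.unique(y)}

def classify(gmms, X):
    """Assign each sample to the class whose GMM gives the highest log-likelihood."""
    labels = list(gmms)
    scores = np.column_stack([gmms[c].score_samples(X) for c in labels])
    return np.array(labels)[scores.argmax(axis=1)]

def balanced_error_rate(y_true, y_pred):
    """Mean of the per-class error rates, so the majority class cannot mask errors."""
    return float(np.mean([(y_pred[y_true == c] != c).mean()
                          for c in np.unique(y_true)]))
```

For the 2-way task the class labels would be monophthong versus diphthong; for the 4-way task, monophthong, /aw/, /ay/, and /oy/.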
Dataset
13th-order MFCCs are extracted at the start and end positions of each vowel, and delta MFCCs are computed as the difference between the MFCCs at the two positions. In total, 39th-order MFCC features are used in the experiments.
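A rough sketch of this 39-dimensional feature follows; the 25 ms / 10 ms framing and the use of librosa are assumptions, not details from the paper.

```python
# A sketch of the 39-dimensional MFCC feature described above; the 25 ms / 10 ms
# framing and the use of librosa are assumptions, not details from the paper.
import numpy as np
import librosa

def vowel_mfcc_features(segment, sr=16000):
    mfcc = librosa.feature.mfcc(y=segment, sr=sr, n_mfcc=13,
                                n_fft=400, hop_length=160)
    start = mfcc[:, 0]        # 13 MFCCs at the vowel start position
    end = mfcc[:, -1]         # 13 MFCCs at the vowel end position
    delta = end - start       # delta MFCCs: end-minus-start difference
    return np.concatenate([start, end, delta])   # 39-dimensional feature vector
```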
Data Processing
The measurements obtained for diphthong classification in the TIMIT training set are first examined using ANOVA. One-way analysis is performed for each of the acoustic measurements, and the statistically significant features are selected for classification.
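A sketch of this screening step using scipy's one-way ANOVA; the significance threshold alpha is an assumed placeholder.

```python
# A sketch of the one-way ANOVA screening using scipy; the significance
# threshold alpha is an assumed placeholder.
import numpy as np
from scipy.stats import f_oneway

def significant_features(X, y, alpha=0.05):
    """Return indices of columns of X whose one-way ANOVA p-value across the
    classes in y falls below alpha."""
    classes = np.unique(y)
    keep = []
    for j in range(X.shape[1]):
        groups = [X[y == c, j] for c in classes]
        _, p_value = f_oneway(*groups)
        if p_value < alpha:
            keep.append(j)
    return keep
```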
Performance/Effects
First, results of classification of monophthongs versus diphthongs are presented. Using the 11 acoustic phonetic parameters results in a BER of 17.8% and an 82.0% classification rate, which is better than using the 39th-order MFCCs (18.1% and 81.6%, respectively). However, using acoustic phonetic parameters in addition to MFCCs improves performance, to a 14.8% BER and an 84.7% classification rate. This implies that acoustic phonetic parameters and MFCCs provide complementary information in detecting diphthongs.
Tables 3(a) through (c) show confusion-matrix results using acoustic phonetic parameters, MFCCs, and acoustic phonetic parameters combined with MFCCs, respectively. Error rates using acoustic phonetic parameters for /aw/, /ay/, and /oy/ are 32.9%, 29.9%, and 20.2%, respectively, while adding MFCCs to the acoustic phonetic parameters gives a 3 to 6% improvement for all diphthongs. Overall, diphthongs with a /y/ offglide show better performance than diphthongs with a /w/ offglide.
In summary, an overall 17.8% balanced error rate is obtained in the two-class experiments (monophthongs versus diphthongs) using the proposed acoustic phonetic parameters, and error rates of 32.9%, 29.9%, and 20.2% are obtained for /aw/, /ay/, and /oy/ in the four-class experiments (discriminating between monophthongs, /aw/, /ay/, and /oy/). Concurrent 4-way classification is found to be more effective than a tree procedure, where diphthongs are first separated from monophthongs and are then classified into one of the three diphthongs.
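To make the comparison concrete, a minimal sketch of the two decision rules follows; the class names and score inputs are hypothetical, and the scores would be per-class log-likelihoods such as those from the GMMs sketched above.

```python
# A minimal sketch of the two decision procedures compared above, assuming
# per-class log-likelihood scores (e.g. from the GMMs sketched earlier).
def concurrent_4way(scores):
    """scores: dict with keys 'mono', 'aw', 'ay', 'oy' mapped to log-likelihoods."""
    return max(scores, key=scores.get)

def tree_procedure(stage1_scores, stage2_scores):
    """stage1_scores: {'mono': ..., 'diph': ...};
    stage2_scores: {'aw': ..., 'ay': ..., 'oy': ...}."""
    if stage1_scores["mono"] >= stage1_scores["diph"]:
        return "mono"
    return max(stage2_scores, key=stage2_scores.get)
```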
Future Work
Therefore, normalization methods or compensation for adjacent phoneme effects may be necessary. The results of this study are expected to be included in an overall vowel detection module, as part of a distinctive feature-based speech recognition system.
References (11)
K. N. Stevens, "Toward a model for lexical access based on acoustic landmarks and distinctive features," J. Acoust. Soc. Am. 111, 1872-1891 (2002).
B. Yang, "An acoustic study of English diphthongs produced by American males and females," Phonetics and Speech Sciences, 2, 43-50 (2010).
R. Carlson and J. Glass, "Vowel classification based on analysis-by-synthesis," in Proc. Int. Conf. Spoken Language Processing, 575-578 (1992).
C. Y. Espy-Wilson, "Acoustic measures for linguistic features distinguishing the semivowels in American English," J. Acoust. Soc. Am. 92, 736-757 (1992).
J. Gustafson and K. Sjolander, "Educational tools for speech technology," in Proc. Fonetik, 176-179 (1998).
J. S. Garofalo, L. F. Lamel, W. M. Fisher, J. G. Fiscus, D. S. Pallett, and N. L. Dahlgren, "The DARPA TIMIT acoustic-phonetic continuous speech corpus CD-ROM," Linguistic Data Consortium (1993).
I. Read and S. Cox, "Automatic pitch accent prediction for Text-To-Speech synthesis," in Proc. Interspeech, 482-485 (2007).
J. Hillenbrand, L. A. Getty, M. J. Clark, and K. Wheeler, "Acoustic characteristics of American English vowels," J. Acoust. Soc. Am. 97, 3099-3111 (1995).