한국음향학회지 = The Journal of the Acoustical Society of Korea, v.41 no.1, 2022, pp. 38-44
정재희 (Dept. of Computer Science and Engineering, Incheon National University), 김우일 (Dept. of Computer Science and Engineering, Incheon National University)
Speech enhancement is performed to improve the intelligibility and quality of noise-corrupted speech. In this paper, speech enhancement performance is compared using different loss functions in the time and frequency domains. This study proposes a combination of loss functions to utilize the advantage of e...
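As a minimal sketch of the idea described in the abstract (not the authors' implementation), the following PyTorch snippet combines a time-domain waveform loss with a frequency-domain STFT-magnitude loss through a weighting parameter. The function name, the choice of L1 distances, and the hyperparameter values are all illustrative assumptions rather than the paper's exact formulation.

import torch

def combined_enhancement_loss(enhanced, clean, alpha=0.5, n_fft=512, hop=128):
    """Weighted sum of a time-domain loss and an STFT-magnitude loss.

    enhanced, clean: (batch, samples) waveforms.
    alpha: assumed weighting between the two domains (hyperparameter).
    """
    # Time-domain term: sample-wise L1 distance between waveforms.
    time_loss = torch.mean(torch.abs(enhanced - clean))

    # Frequency-domain term: L1 distance between STFT magnitudes,
    # which ignores phase, per the usual magnitude-loss formulation.
    window = torch.hann_window(n_fft, device=enhanced.device)
    E = torch.stft(enhanced, n_fft, hop_length=hop, window=window,
                   return_complex=True)
    C = torch.stft(clean, n_fft, hop_length=hop, window=window,
                   return_complex=True)
    freq_loss = torch.mean(torch.abs(E.abs() - C.abs()))

    return alpha * time_loss + (1.0 - alpha) * freq_loss

# Usage example with random tensors standing in for real audio.
if __name__ == "__main__":
    torch.manual_seed(0)
    clean = torch.randn(2, 16000)                    # 1 s of 16 kHz audio, batch of 2
    enhanced = clean + 0.1 * torch.randn(2, 16000)   # stand-in for a network output
    print(combined_enhancement_loss(enhanced, clean).item())

Combining the two terms this way lets a single objective penalize both waveform-level (phase-sensitive) and spectral-magnitude errors; how to weight them is exactly the kind of trade-off the paper's comparison addresses.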
J. Lim and A. Oppenheim, "All-pole modeling of degraded speech," IEEE Trans. on Acoustics, Speech, and Signal Process. 26, 197-210 (1978).
R. Martin, "Spectral subtraction based on minimum statistics," Proc. EUSIPCO. 1182-1185 (1994).
Y. H. Tu, J. Du, and C. H. Lee, "2D-to-2D mask estimation for speech enhancement based on fully convolutional neural network," Proc. IEEE ICASSP. 6664-6668 (2020).
Y. Xu, J. Du, and C. H. Lee, "A regression approach to speech enhancement based on deep neural networks," IEEE/ACM Trans. on Audio, Speech, and Lang. Process. 23, 7-19 (2015).
D. L. Wang and J. Chen, "Supervised speech separation based on deep learning: An overview," IEEE/ACM Trans. on Audio, Speech, and Lang. Process. 26, 1702-1726 (2018).
Z. Xu, S. Elshamy, and T. Fingscheidt, "Using separate losses for speech and noise in mask-based speech enhancement," Proc. IEEE ICASSP. 7519-7523 (2020).
K. Paliwal, K. Wojcicki, and B. Shannon, "The importance of phase in speech enhancement," Speech Communication, 53, 465-494 (2011).
Y. Wang and D. L. Wang, "A deep neural network for time-domain signal reconstruction," Proc. IEEE ICASSP. 4390-4394 (2015).
Y. Hu, Y. Liu, S. Lv, M. Xing, S. Zhang, Y. Fu, J. Wu, B. Zhang, and L. Xie, "DCCRN: Deep complex convolution recurrent network for phase-aware speech enhancement," arXiv preprint arXiv:2008.00264 (2020).
H. S. Choi, J. H. Kim, J. Huh, A. Kim, J. W. Ha, and K. Lee, "Phase-aware speech enhancement with deep complex u-net," Proc. ICLR. 1-20 (2019).
O. Oktay, J. Schlemper, L. Le Folgoc, M. Lee, M. Heinrich, K. Misawa, K. Mori, S. McDonagh, N. Y. Hammerla, B. Kainz, B. Glocker, and D. Rueckert, "Attention u-net: Learning where to look for the pancreas," arXiv preprint arXiv:1804.03999 (2018).
Y. Luo and N. Mesgarani, "Conv-TasNet: Surpassing ideal time-frequency magnitude masking for speech separation," IEEE/ACM Trans. on Audio, Speech, and Lang. Process. 27, 1256-1266 (2019).
J. Zhang, M. D. Plumbley, and W. Wang, "Weighted magnitude-phase loss for speech dereverberation," Proc. IEEE ICASSP. 5794-5798 (2021).
C. Trabelsi, O. Bilaniuk, Y. Zhang, D. Serdyuk, S. Subramanian, J. F. Santos, S. Mehri, N. Rostamzadeh, Y. Bengio, and C. J. Pal, "Deep complex networks," arXiv preprint arXiv:1705.09792 (2017).
J. S. Garofolo, L. F. Lamel, W. M. Fisher, J. G. Fiscus, and D. S. Pallett, "DARPA TIMIT acoustic-phonetic continuous speech corpus CD-ROM. NIST speech disc 1-1.1," NASA STI/Recon Tech. Rep. 93, 27403 (1993).
A. Varga, "The NOISEX-92 study on the effect of additive noise on automatic speech recognition," Technical Report, DRA Speech Research Unit (1992).
E. Vincent, R. Gribonval, and C. Fevotte, "Performance measurement in blind audio source separation," IEEE Trans. on Audio, Speech, and Lang. Process. 14, 1462-1469 (2006).
A. W. Rix, J. G. Beerends, M. P. Hollier, and A. P. Hekstra, "Perceptual evaluation of speech quality (PESQ)-a new method for speech quality assessment of telephone networks and codecs," Proc. IEEE ICASSP. 749-752 (2001).
C. H. Taal, R. C. Hendriks, and R. Heusdens, "A short-time objective intelligibility measure for time-frequency weighted noisy speech," Proc. IEEE ICASSP. 4214-4217 (2010).