한국융합학회논문지 = Journal of the Korea Convergence Society, v.12 no.11, 2021, pp.35-43
Sugyeong Eo (Department of Computer Science, Korea University), Chanjun Park (Department of Computer Science, Korea University), Jaehyung Seo (Department of Computer Science, Korea University), Hyeonseok Moon (Department of Computer Science, Korea University), Heuiseok Lim (Department of Computer Science, Korea University)
Recently, there has been growing interest in zero-shot cross-lingual transfer, which leverages cross-lingual language models (CLLMs) to perform downstream tasks in languages for which no task-specific training data is available. In this paper, we point out the limitations of the data-centric aspect of quality estimation...
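The transfer idea the abstract describes can be illustrated with a toy sketch. All values below are made up for illustration: a shared multilingual encoder is stood in for by fixed hypothetical vectors, and a one-feature linear regressor trained only on En-De quality scores is applied unchanged ("zero-shot") to En-Zh pairs that contributed no training data. This is not the paper's method, only a minimal sketch of the general setting.

```python
import math

def cosine(u, v):
    # Cosine similarity between two vectors in the shared space.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical encoder outputs: (source_vec, translation_vec, human score).
train_en_de = [
    ((1.0, 0.1), (0.9, 0.2), 0.9),   # faithful translation -> high score
    ((0.2, 1.0), (1.0, 0.1), 0.1),   # unrelated output    -> low score
    ((0.5, 0.5), (0.6, 0.4), 0.8),
]
# Unseen language pair (En-Zh): no scores, no training.
test_en_zh = [((1.0, 0.0), (1.0, 0.05)), ((0.0, 1.0), (1.0, 0.0))]

# Fit score ~ a * cosine + b by closed-form least squares (one feature).
xs = [cosine(s, t) for s, t, _ in train_en_de]
ys = [y for _, _, y in train_en_de]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
    / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

# Zero-shot: score En-Zh pairs with the En-De-trained regressor.
zero_shot_scores = [a * cosine(s, t) + b for s, t in test_en_zh]
```

Because the encoder places all languages in one space, a quality signal learned from cosine similarity on one language pair carries over to another; the first En-Zh pair (near-identical vectors) receives a much higher predicted score than the second (orthogonal vectors).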