Journal of Broadcast Engineering (방송공학회논문지), v.27, no.4, 2022, pp.511 - 518
Jaemin Na (Department of Artificial Intelligence, Ajou University), Wonjun Hwang (Department of Artificial Intelligence, Ajou University)
Supervised learning based on deep learning has made great strides in various application fields. However, many supervised learning methods work under the common assumption that training and test data are drawn from the same distribution. When this assumption is violated, the deep learning ne...
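The same-distribution assumption described above can be illustrated with a small, self-contained experiment: a classifier trained on data from one distribution loses accuracy when the test inputs are translated (a simple covariate shift). This is a minimal sketch, not the paper's method; the Gaussian toy data, the plain-NumPy logistic regression, and the shift magnitude are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    # Two Gaussian classes in 2D; `shift` translates both class means,
    # simulating a covariate shift between training and test data.
    x0 = rng.normal(loc=[-1 + shift, 0], scale=1.0, size=(n, 2))
    x1 = rng.normal(loc=[+1 + shift, 0], scale=1.0, size=(n, 2))
    X = np.vstack([x0, x1])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

def train_logreg(X, y, lr=0.1, epochs=200):
    # Plain logistic regression fit by gradient descent.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def accuracy(w, b, X, y):
    return float((((X @ w + b) > 0) == y).mean())

Xtr, ytr = make_data(500, shift=0.0)
w, b = train_logreg(Xtr, ytr)

Xte_same, yte_same = make_data(500, shift=0.0)    # same distribution as training
Xte_shift, yte_shift = make_data(500, shift=2.0)  # shifted test distribution

acc_same = accuracy(w, b, Xte_same, yte_same)
acc_shift = accuracy(w, b, Xte_shift, yte_shift)
print(f"in-distribution accuracy: {acc_same:.2f}, shifted accuracy: {acc_shift:.2f}")
```

The shifted test set pushes both class means across the learned decision boundary, so accuracy drops well below the in-distribution level; unsupervised domain adaptation methods such as the one in this paper aim to close exactly this kind of gap without target labels.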