방송과 미디어 = Broadcasting and Media Magazine, v.27 no.2, 2022, pp.19-25
Seungryong Kim (Korea University)
L. A. Gatys et al., Texture and Art with Deep Neural Networks, Current Opinion in Neurobiology, 2017
R. Geirhos et al., ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness, ICLR 2019
J. Devlin et al., BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, NAACL, 2019
A. Dosovitskiy et al., Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks, NIPS, 2014
C. Doersch et al., Unsupervised Visual Representation Learning by Context Prediction, ICCV, 2015
M. Noroozi et al., Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles, ECCV, 2016
S. Gidaris et al., Unsupervised Representation Learning by Predicting Image Rotations, ICLR, 2018
R. Zhang et al., Colorful Image Colorization, ECCV, 2016
D. Pathak et al., Context Encoders: Feature Learning by Inpainting, CVPR, 2016
T. Chen et al., A Simple Framework for Contrastive Learning of Visual Representations, ICML, 2020
Z. Wu et al., Unsupervised Feature Learning via Non-Parametric Instance Discrimination, CVPR, 2018
I. Misra et al., Self-Supervised Learning of Pretext-Invariant Representations, CVPR, 2020
K. He et al., Momentum Contrast for Unsupervised Visual Representation Learning, CVPR, 2020
J. B. Grill et al., Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning, NeurIPS, 2020
M. Caron et al., SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, NeurIPS, 2020
X. Chen and K. He, Exploring Simple Siamese Representation Learning, CVPR, 2021
S. Atito et al., SiT: Self-Supervised Vision Transformer, ArXiv, 2021
M. Caron et al., Emerging Properties in Self-Supervised Vision Transformers, ICCV, 2021
H. Bao et al., BEiT: BERT Pre-Training of Image Transformers, ICLR, 2022
K. He et al., Masked Autoencoders Are Scalable Vision Learners, CVPR, 2022
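Several of the contrastive methods listed above (SimCLR, MoCo, PIRL) optimize variants of the NT-Xent (normalized temperature-scaled cross-entropy) loss introduced by Chen et al. (ICML 2020). Below is a minimal NumPy sketch of that objective, not the authors' implementation; the function name, batch layout (rows 2i and 2i+1 are two augmented views of the same image), and default temperature are illustrative assumptions.

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """NT-Xent contrastive loss (sketch, not the reference implementation).

    z: (2N, d) array; rows 2i and 2i+1 hold embeddings of two augmented
    views of the same image, so each row's positive is its paired view.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize embeddings
    sim = z @ z.T / temperature                       # scaled cosine similarities
    n = z.shape[0]
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    pos = np.arange(n) ^ 1                            # partner index: 0<->1, 2<->3, ...
    # cross-entropy of each row's positive against all other rows (negatives)
    log_prob = sim[np.arange(n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()
```

The loss is small when paired views map to nearby embeddings and other images map far away, which is exactly the invariance these pretext tasks aim to induce.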