Journal of IKEEE (전기전자학회논문지), Vol. 27, No. 3, 2023, pp. 251-257
이광엽 (Dept. of Computer Eng., Seokyeong University) , 문환희 (Dept. of Computer Eng., Seokyeong University) , 박태룡 (Dept. of Computer Eng., Seokyeong University)
In this paper, we propose a dual-structured self-attention method that improves the lack of regional features of the vision transformer's self-attention. Vision Transformers, which are more computationally efficient than convolutional neural networks in object classification, object segmentation, an...
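The abstract describes a dual-structured self-attention that adds the regional (local) features plain ViT self-attention lacks. Since the full method is not given here, the following is only a minimal NumPy sketch of the general dual-branch idea: a global branch attends over all tokens while a local branch attends within fixed windows, and the two outputs are fused. The window size and the averaging fusion are assumptions for illustration, not the paper's actual design.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention: softmax(QK^T / sqrt(d)) V
    d = q.shape[-1]
    return softmax(q @ k.swapaxes(-2, -1) / np.sqrt(d)) @ v

def dual_self_attention(x, window=4):
    # x: (tokens, dim). Global branch: every token attends to all tokens.
    # Local branch: tokens attend only within non-overlapping windows,
    # supplying the regional features the global branch dilutes.
    n, _ = x.shape
    global_out = attention(x, x, x)
    local_out = np.zeros_like(x)
    for start in range(0, n, window):
        w = x[start:start + window]
        local_out[start:start + w.shape[0]] = attention(w, w, w)
    return (global_out + local_out) / 2  # simple average fusion (assumption)

tokens = np.random.default_rng(0).normal(size=(16, 8))
out = dual_self_attention(tokens)
print(out.shape)  # (16, 8)
```

In a real vision transformer the branches would use learned Q/K/V projections and multiple heads; this sketch omits them to show only the dual local/global attention structure.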
Chih-Yang Lin, Yi-Cheng Chiu, Hui-Fuang Ng, Timothy K. Shih, Kuan-Hung Lin, "Global-and-Local Context Network for Semantic Segmentation of Street View Images," Sensors, Vol.20, No.10, 2020. DOI: 10.3390/s20102907
Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo, "Swin Transformer: Hierarchical Vision Transformer Using Shifted Windows," Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp.10012-10022, 2021.
Jinpeng Li, Yichao Yan, Shengcai Liao, Xiaokang Yang, Ling Shao, "Local-to-Global Self-Attention in Vision Transformers," arXiv preprint, 2021. DOI: 10.48550/arXiv.2107.04735
Nikolas Ebert, Didier Stricker, Oliver Wasenmuller, "PLG-ViT: Vision Transformer with Parallel Local and Global Self-Attention," Sensors, Vol.23, No.7, 2023. DOI: 10.3390/s23073447
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby, "An Image is Worth 16×16 Words: Transformers for Image Recognition at Scale," ICLR, 2021. DOI: 10.48550/arXiv.2010.11929