Journal of Korea Multimedia Society, Vol. 22, No. 1, 2019, pp. 44-54
조상흠 (Dept. of Software and Computer Engineering, Ajou University), 이용 (Korea Institute of Science and Technology Information), 나재민 (Dept. of Software and Computer Engineering, Ajou University), 김영빈 (Dept. of Software and Computer Engineering, Ajou University), 박민우 (Korea Institute of Science and Technology Information), 이상환 (Korea Institute of Science and Technology Information), 황원준 (Dept. of Software and Computer Engineering, Ajou University)
Recently, image-based object detection has made great progress with the introduction of the Convolutional Neural Network (CNN). Many approaches, such as Region-based CNN (R-CNN), Fast R-CNN, and Faster R-CNN, have been proposed to achieve better performance in object detection. YOLO has shown the best performanc...
| Question | Answer extracted from the paper |
|---|---|
| What is the Single Shot multi-box Detector (SSD)? | ... (You Only Look Once) [6]. SSD is a network that builds feature maps at several scales from the input image, so that large feature maps detect small objects and small feature maps detect large objects; YOLO, by contrast, uses the entire feature map ... |
| What allowed CNN-based approaches to become mainstream in computer vision? | ... 1) Deep Convolutional Neural Networks (Deep CNNs) and 2) a sufficient amount of labeled data. ... |
| What is the main goal of image-to-image translation? | ... this field aims to transform an image from a source domain into an image in a target domain while preserving the source image's key features. For example, ... |
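The SSD design summarized in the table — large feature maps for small objects, small feature maps for large objects — can be illustrated with a toy computation of per-scale default-box sizes. This is a minimal sketch, not the paper's implementation; the scale formula and the six SSD300 feature-map sizes follow Liu et al.'s SSD paper [5].

```python
# Toy sketch of SSD's multi-scale detection idea:
# large feature maps -> small default boxes (small objects),
# small feature maps -> large default boxes (large objects).
# Scale formula from the SSD paper (Liu et al., 2016):
#   s_k = s_min + (s_max - s_min) * (k - 1) / (m - 1)

def ssd_scales(num_maps, s_min=0.2, s_max=0.9):
    """Default-box scale (as a fraction of input size) for each of
    `num_maps` feature maps, increasing from s_min to s_max."""
    return [s_min + (s_max - s_min) * k / (num_maps - 1)
            for k in range(num_maps)]

# SSD300 predicts from six feature maps of these spatial sizes:
feature_map_sizes = [38, 19, 10, 5, 3, 1]
scales = ssd_scales(len(feature_map_sizes))

for size, scale in zip(feature_map_sizes, scales):
    # Each cell of a size x size map predicts boxes of roughly
    # scale * input_size pixels; finer maps get smaller boxes.
    print(f"{size:>2}x{size:<2} feature map -> box scale {scale:.2f}")
```

The monotone pairing of map size and box scale is the whole point: the 38x38 map covers the image densely with small boxes, while the 1x1 map predicts a single image-sized box.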
[1] J.Y. Zhu, T. Park, P. Isola, and A.A. Efros, "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks," Proceedings of the International Conference on Computer Vision, pp. 2242-2251, 2017.
[2] Y. Choi, M. Choi, M. Kim, J.W. Ha, S. Kim, and J. Choo, "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation," Proceedings of the International Conference on Computer Vision and Pattern Recognition, pp. 8789-8797, 2018.
[3] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, et al., "Generative Adversarial Nets," Proceedings of the Conference on Neural Information Processing Systems, pp. 1-9, 2014.
[4] P. Isola, J.Y. Zhu, T. Zhou, and A.A. Efros, "Image-to-Image Translation with Conditional Adversarial Networks," Proceedings of the International Conference on Computer Vision and Pattern Recognition, pp. 5967-5976, 2017.
[5] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.Y. Fu, et al., "SSD: Single Shot Multibox Detector," Proceedings of the European Conference on Computer Vision, pp. 21-37, 2016.
[6] J. Redmon and A. Farhadi, "YOLO9000: Better, Faster, Stronger," Proceedings of the International Conference on Computer Vision and Pattern Recognition, pp. 6517-6525, 2017.
[7] M. Everingham, L. Van Gool, C.K.I. Williams, J. Winn, and A. Zisserman, "The PASCAL Visual Object Classes (VOC) Challenge," International Journal of Computer Vision, Vol. 88, No. 2, pp. 303-338, 2010.
[8] T.Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, et al., "Microsoft COCO: Common Objects in Context," Proceedings of the European Conference on Computer Vision, pp. 740-755, 2014.
[9] F. Yu, W. Xian, Y. Chen, F. Liu, M. Liao, V. Madhavan, et al., "BDD100K: A Diverse Driving Video Database with Scalable Annotation Tooling," arXiv Preprint arXiv:1805.04687, 2018.
[10] E. Denton, S. Chintala, A. Szlam, and R. Fergus, "Deep Generative Image Models Using a Laplacian Pyramid of Adversarial Networks," Proceedings of the Conference on Neural Information Processing Systems, pp. 1486-1494, 2015.
[11] A. Radford, L. Metz, and S. Chintala, "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks," Proceedings of the International Conference on Learning Representations, pp. 1-16, 2016.
[12] M. Mirza and S. Osindero, "Conditional Generative Adversarial Nets," arXiv Preprint arXiv:1411.1784, 2014.
[13] M.Y. Liu and O. Tuzel, "Coupled Generative Adversarial Networks," Proceedings of the Conference on Neural Information Processing Systems, pp. 469-477, 2016.
[14] M.Y. Liu, T. Breuel, and J. Kautz, "Unsupervised Image-to-Image Translation Networks," Proceedings of the Conference on Neural Information Processing Systems, pp. 700-708, 2017.
[15] T. Kim, M. Cha, H. Kim, J.K. Lee, and J. Kim, "Learning to Discover Cross-Domain Relations with Generative Adversarial Networks," Proceedings of the International Conference on Machine Learning, pp. 1857-1865, 2017.
[16] J. Johnson, A. Alahi, and L. Fei-Fei, "Perceptual Losses for Real-Time Style Transfer and Super-Resolution," Proceedings of the European Conference on Computer Vision, pp. 694-711, 2016.
[17] L.A. Gatys, A.S. Ecker, and M. Bethge, "Image Style Transfer Using Convolutional Neural Networks," Proceedings of the International Conference on Computer Vision and Pattern Recognition, pp. 2414-2423, 2016.
[18] K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," Proceedings of the International Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
[19] D.P. Kingma and J.L. Ba, "ADAM: A Method for Stochastic Optimization," Proceedings of the International Conference on Learning Representations, pp. 1-15, 2015.
[20] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You Only Look Once: Unified, Real-Time Object Detection," Proceedings of the International Conference on Computer Vision and Pattern Recognition, pp. 779-788, 2016.
[21] H.S. Ha and B.Y. Hwang, "Enhancement Method of CCTV Video Quality Based on SRGAN," Journal of Korea Multimedia Society, Vol. 21, No. 9, pp. 1027-1034, 2018.