
[Domestic article] 딥러닝 기반 비디오 스토리 학습 기술 (Deep Learning-Based Video Story Learning Technology)

한국멀티미디어학회지 (Journal of Korea Multimedia Society), vol. 20, no. 3, 2016, pp. 23-40

Min-Oh Heo, Kyung-Min Kim, and Byoung-Tak Zhang (Department of Computer Science and Engineering, Seoul National University)

No abstract is available.

Q&A

Key questions, with answers extracted from the paper:
Q. What are the three key factors behind the advances in video content understanding technology?
A. Besides deep learning itself, the key factors driving advances in video content understanding are the improvement in computation speed brought by hardware such as GPUs and the emergence of big data that can be used for training. In other words, intelligence technology that applies deep learning to big data to learn diverse representations of information and to recognize the many elements of a video has been advancing by leaps in recent years.
Q. Why is modeling with a CNN well suited to image data?
A. Image data fits the CNN's characteristics very well: an image has a hierarchical structure in which local patterns combine into a global pattern, and a given local pattern can be reused identically at any position in the image. For example, in an image containing a line drawing of an apple, the same contour pattern can represent the apple's outline on either its left or its right edge.
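To make the locality and weight-sharing point concrete, here is a minimal sketch in PyTorch (an assumption of this illustration, not a tool used in the paper): a single shared 3x3 kernel finds the same vertical-edge pattern wherever it occurs in the image.

```python
import torch
import torch.nn as nn

# A single 3x3 kernel shared across the whole image (weight sharing):
# the same local pattern detector is applied at every position.
conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, bias=False)
with torch.no_grad():
    # Hand-set weights to a simple vertical-edge detector (illustrative values).
    conv.weight.copy_(torch.tensor([[[[-1.0, 0.0, 1.0],
                                      [-1.0, 0.0, 1.0],
                                      [-1.0, 0.0, 1.0]]]]))

img = torch.zeros(1, 1, 8, 8)  # batch x channels x height x width
img[0, 0, :, 4:] = 1.0         # a vertical boundary at column 4

out = conv(img)
# The response peaks along the boundary; shift the boundary left or right
# and the peak shifts with it, because the kernel is reused at every location.
print(out[0, 0])
```

Shifting the input shifts the response correspondingly, which is exactly the position-independence the answer describes.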
Q. What characterizes the R-CNN method?
A. R-CNN [3], a deep learning method built on convolution, was proposed for the object detection problem. Rather than only identifying which objects an image contains, it detects what is where. Its distinguishing feature is that a region proposal network is included in the model: the image is divided into grid-like regions, and the network decides whether each region belongs to a detection region.
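As a hedged illustration of a detector that builds the region proposal step into the model, the sketch below uses torchvision's Faster R-CNN, a later member of the R-CNN family with an integrated region proposal network (torchvision, the dummy image, and the untrained weights are assumptions of this example; [3] describes the original R-CNN).

```python
import torch
import torchvision

# Faster R-CNN integrates a region proposal network (RPN) with the
# classification head, so one forward pass reports "what is where".
# num_classes=91 mirrors the COCO label set; the detection heads are
# untrained here (the backbone may fetch pretrained weights by default).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=91)
model.eval()  # inference mode: takes a list of images, returns detections

images = [torch.rand(3, 480, 640)]  # one dummy RGB image
with torch.no_grad():
    detections = model(images)      # RPN proposes regions, heads score them

# Each detection dict holds 'boxes' (where), 'labels' (what), and 'scores'.
d = detections[0]
print(d["boxes"].shape, d["labels"].shape, d["scores"].shape)
```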

References (61)

  1. Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, pp. 436-444, 2015. 

  2. A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," In Advances in neural information processing systems (NIPS), 2012. 

  3. R. B. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp. 580-587, 2014. 

  4. B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva, "Learning deep features for scene recognition using places database," In Advances in neural information processing systems (NIPS), pp. 487-495, 2014. 

  5. F. Schroff, D. Kalenichenko, and J. Philbin, "FaceNet: A unified embedding for face recognition and clustering," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 815-823, 2015. 

  6. Y. Taigman, M. Yang, M. A. Ranzato, and L. Wolf, "DeepFace: Closing the gap to human-level performance in face verification," In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1701-1708, 2014. 

  7. A. Toshev and C. Szegedy, "DeepPose: Human pose estimation via deep neural networks," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1653-1660, 2014. 

  8. J. J. Tompson, A. Jain, Y. LeCun, and C. Bregler, "Joint training of a convolutional network and a graphical model for human pose estimation," In Advances in neural information processing systems (NIPS), pp. 1799-1807, 2014. 

  9. S.-W. Lee, C.-Y. Lee, D. Kwak, J. Kim, J. Kim, and B.-T. Zhang, "Dual-memory deep learning architectures for lifelong learning of everyday human behaviors," International Joint Conference on Artificial Intelligence (IJCAI 2016), pp. 1669-1675, 2016. 

  10. C. Park, and G. Kim, "Expressing an Image Stream with a Sequence of Natural Sentences," In Advances in neural information processing systems (NIPS), 2015. 

  11. Y. Zhu, R. Kiros, R. Zemel, R. Salakhutdinov, R. Urtasun, A. Torralba, and S. Fidler, "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books," In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 19-27, 2015. 

  12. K.-M. Kim, C.-J. Nan, M.-O. Heo, S.-H. Choi, and B.-T. Zhang, "DeepStory: Video story QA by deep embedded memory networks," AAAI 2017 (submitted). 

  13. Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998. 

  14. N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," Journal of Machine Learning Research, vol. 15, pp. 1929-1958, 2014. 

  15. C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, and A. Rabinovich, "Going deeper with convolutions," In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1-9, 2015. 

  16. K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 

  17. Y. Bengio, A. Courville, and P. Vincent, "Representation learning: A review and new perspectives," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1798-1828, 2013. 

  18. W. W. Zhu, A. Berndsen, E. C. Madsen, M. Tan, I. H. Stairs, A. Brazier, P. Lazarus, R. Lynch, P. Scholz, K. Stovall, et al., "Searching for pulsars using image pattern recognition," The Astrophysical Journal, vol. 781, no. 2, p. 117, 2014. 

  19. G. Hinton and R. Salakhutdinov, "Reducing the dimensionality of data with neural networks," Science, vol. 313, no. 5786, pp. 504-507, 2006. 

  20. R. Salakhutdinov, and G. Hinton, "Deep Boltzmann machines," In Proc. International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 448-455, 2009. 

  21. D. P. Kingma, M. Welling, "Auto-Encoding Variational Bayes," International Conference on Learning Representations (ICLR), 2014. 

  22. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," In Advances in Neural Information Processing Systems (NIPS), pp. 2672-2680, 2014. 

  23. T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean, "Distributed representations of words and phrases and their compositionality," In Proc. Advances in Neural Information Processing Systems (NIPS), pp. 3111-3119, 2013. 

  24. S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, pp. 1735-1780, 1997. 

  25. K. Cho, B. Van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, "Learning phrase representations using RNN encoder-decoder for statistical machine translation," Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014. 

  26. J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, "Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling," arXiv:1412.3555, 2014. 

  27. W. Zhang and M. Lapata, "Chinese Poetry Generation with Recurrent Neural Networks," Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014. 

  28. A. Karpathy, "The Unreasonable Effectiveness of Recurrent Neural Networks," http://karpathy.github.io/2015/05/21/rnn-effectiveness/ 

  29. http://benjamin.wtf 

  30. K. Gregor, I. Danihelka, A. Graves, D. J. Rezende, D. Wierstra, "DRAW: a recurrent neural network for image generation," International Conference on Machine Learning (ICML), 2015. 

  31. A. van den Oord, N. Kalchbrenner, K. Kavukcuoglu, "Pixel recurrent neural networks," International Conference on Machine Learning (ICML), 2016. 

  32. J. Weston, S. Chopra, and A. Bordes, "Memory networks," International Conference on Learning Representation (ICLR), 2015. 

  33. S. Sukhbaatar, J. Weston, and R. Fergus, "End-to-end memory networks," Advances in neural information processing systems (NIPS), 2015. 

  34. A. Kumar, O. Irsoy, P. Ondruska, M. Iyyer, J. Bradbury, I. Gulrajani, V. Zhong, R. Paulus, and R. Socher, "Ask Me Anything: Dynamic Memory Networks for Natural Language Processing," International Conference on Machine Learning (ICML), 2016. 

  35. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, "ImageNet: A large-scale hierarchical image database," In Computer Vision and Pattern Recognition (CVPR), pp. 248-255, 2009. 

  36. T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollar, and C. L. Zitnick, "Microsoft COCO: Common objects in context," In Computer Vision-ECCV 2014, pp. 740-755, 2014. 

  37. S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick and D. Parikh, "VQA: Visual Question Answering," In International Conference on Computer Vision (ICCV), 2015. 

  38. R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L.-J. Li, D. Shamma, M. Bernstein, and L. Fei-Fei, "Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations," https://arxiv.org/abs/1602.07332, 2016. 

  39. J. Markoff, "A Learning Advance in Artificial Intelligence Rivals Human Abilities". The New York Times, 2015-12-10. 

  40. O. Vinyals, A. Toshev, S. Bengio, and D. Erhan, "Show and tell: A neural image caption generator," In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3156-3164, 2015. 

  41. K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio, "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention," International Conference on Machine Learning (ICML), 2015. 

  42. A. Karpathy and L. Fei-Fei, "Deep Visual-Semantic Alignments for Generating Image Descriptions," In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015. 

  43. H. Fang, S. Gupta, F. Iandola, R. Srivastava, L. Deng, P. Dollar, J. Gao, X. He, M. Mitchell, J. C. Platt, C. L. Zitnick, and G. Zweig, "From Captions to Visual Concepts and Back," In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015. 

  44. X. Chen and C. L. Zitnick, "Mind's Eye: A Recurrent Visual Representation for Image Caption Generation," In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015. 

  45. R. Socher, M. Ganjoo, C. D. Manning, and A. Ng, "Zero-shot learning through cross-modal transfer," In Advances in neural information processing systems (NIPS), pp. 935-943, 2013. 

  46. R. Kiros, R. Salakhutdinov, and R. Zemel, "Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models," Transactions of the Association for Computational Linguistics (to appear). 

  47. L. Ba, K. Swersky, and S. Fidler, "Predicting deep zero-shot convolutional neural networks using textual descriptions," Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015. 

  48. E. Mansimov, E. Parisotto, J. Ba, and R. Salakhutdinov, "Generating Images from Captions with Attention," International Conference on Learning Representation (ICLR), 2016. 

  49. S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee, "Generative Adversarial Text to Image Synthesis," International Conference on Machine Learning (ICML), 2016. 

  50. A. Fukui, D. H. Park, D. Yang, A. Rohrbach, T. Darrell, M. Rohrbach, "Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding," EMNLP 2016 (accepted). 

  51. J.-H. Kim, S.-W. Lee, D.-H. Kwak, M.-O. Heo, J. Kim, J.-W. Ha, and B.-T. Zhang, "Multimodal Residual Learning for Visual QA," Advances in neural information processing systems (NIPS), 2016 (accepted). 

  52. Q. Wu, D. Teney, P. Wang, C. Shen, A. Dick, and A. van den Hengel, "Visual Question Answering: A Survey of Methods and Datasets," arXiv:1607.05910, 2016. 

  53. K. Kafle, and C. Kanan, "Visual Question Answering: Datasets, Algorithms, and Future Challenges", arXiv:1610.01465, 2016. 

  54. A. Rohrbach, M. Rohrbach, N. Tandon, and B. Schiele, "A dataset for movie description," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015. 

  55. M. Tapaswi, Y. Zhu, R. Stiefelhagen, A. Torralba, R. Urtasun, and S. Fidler, "MovieQA: Understanding stories in movies through question-answering," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 

  56. C. Fan, and D. J. Crandall, "DeepDiary: Automatic Caption Generation for Lifelogging Image Streams," arXiv:1608.03819v1, 2016. 

  57. S. Venugopalan, H. Xu, J. Donahue, M. Rohrbach, R. Mooney, and K. Saenko, "Translating videos to natural language using deep recurrent neural networks," Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL-HLT), 2015. 

  58. A. Rohrbach, M. Rohrbach, and B. Schiele, "The long-short story of movie description," German Conference on Pattern Recognition, Springer International Publishing, 2015. 

  59. R. Kiros, Y. Zhu, R. Salakhutdinov, R. Zemel, R. Urtasun, A. Torralba, and S. Fidler, "Skip-thought vectors," In Advances in neural information processing systems (NIPS), pp. 3294-3302, 2015. 

  60. L. J. P. van der Maaten and G. E. Hinton, "Visualizing High-Dimensional Data Using t-SNE," Journal of Machine Learning Research, vol. 9, pp. 2579-2605, 2008. 

  61. L. Zhu, Z. Xu, Y. Yang, and A. Hauptmann, "Uncovering Temporal Context for Video Question and Answering," arXiv preprint arXiv:1511.04670, 2015. 
