[International paper] Learning to Reconstruct HDR Images from Events, with Applications to Depth and Flow Prediction

International Journal of Computer Vision, v.129, no.4, 2021, pp. 900-920

Mostafavi, Mohammad; Wang, Lin; Yoon, Kuk-Jin

No abstract available.
