International Journal of Computer Vision, v.129 no.4, 2021, pp. 900-920
Mostafavi, Mohammad; Wang, Lin; Yoon, Kuk-Jin
No abstract available.
Benosman, R., Ieng, S. H., Clercq, C., Bartolozzi, C., & Srinivasan, M. (2012). Asynchronous frameless event-based optical flow. Neural Networks, 27, 32-37. https://doi.org/10.1016/j.neunet.2011.11.001
Binas, J., Neil, D., Liu, S.-C., & Delbruck, T. (2017). DDD17: End-to-end DAVIS driving dataset. arXiv preprint arXiv:1711.01458.
Blender Online Community (2018). Blender: A 3D modelling and rendering package. Stichting Blender Foundation, Amsterdam. Retrieved from http://www.blender.org
Gallego, G., Delbruck, T., Orchard, G., Bartolozzi, C., Taba, B., Censi, A., Leutenegger, S., Davison, A., Conradt, J., Daniilidis, K., & Scaramuzza, D. (2019). Event-based vision: A survey. arXiv preprint arXiv:1904.08405.
Gallego, G., Rebecq, H., & Scaramuzza, D. (2018). A unifying contrast maximization framework for event cameras, with applications to motion, depth, and optical flow estimation. In: Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3867-3876).
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative adversarial nets. In: Advances in neural information processing systems, pp. 2672-2680.
Karacan, L., Akata, Z., Erdem, A., & Erdem, E. (2016). Learning to generate images of outdoor scenes from attributes and semantic layouts. arXiv preprint arXiv:1612.00215.
Kim, H., Handa, A., Benosman, R., Ieng, S.-H., & Davison, A. J. (2014). Simultaneous mosaicing and tracking with an event camera. In: British Machine Vision Conference (BMVC).
Kingma, D., & Ba, J. (2015). Adam: A method for stochastic optimization. In the International Conference on Learning Representations (ICLR).
Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A. P., Tejani, A., Totz, J., Wang, Z., et al. (2017). Photo-realistic single image super-resolution using a generative adversarial network. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, page 4. https://doi.org/10.1109/CVPR.2017.19
Lichtsteiner, P., Posch, C., & Delbruck, T. (2008). A 128×128 120 dB 15 μs latency asynchronous temporal contrast vision sensor. IEEE Journal of Solid-State Circuits, 43(2), 566-576. https://doi.org/10.1109/JSSC.2007.914337
Mathieu, M., Couprie, C., & LeCun, Y. (2015). Deep multi-scale video prediction beyond mean square error. arXiv preprint arXiv:1511.05440.
Mittal, A., Moorthy, A. K., & Bovik, A. C. (2012). No-reference image quality assessment in the spatial domain. IEEE Transactions on Image Processing, 21(12), 4695-4708. https://doi.org/10.1109/TIP.2012.2214050
Moeys, D. P., Corradi, F., Kerr, E., Vance, P., Das, G., Neil, D., Kerr, D., & Delbrück, T. (2016). Steering a predator robot using a mixed frame/event-driven convolutional neural network. In: 2016 Second international conference on event-based control, communication, and signal processing (EBCCSP), pp. 1-8. IEEE. https://doi.org/10.1109/EBCCSP.2016.7605233
Mueggler, E., Rebecq, H., Gallego, G., Delbruck, T., & Scaramuzza, D. (2017). The event-camera dataset and simulator: Event-based data for pose estimation, visual odometry, and SLAM. The International Journal of Robotics Research, 36(2), 142-149. https://doi.org/10.1177/0278364917691115
Munda, G., Reinbacher, C., & Pock, T. (2018). Real-time intensity-image reconstruction for event cameras using manifold regularisation. International Journal of Computer Vision, 126(12), 1381-1393. https://doi.org/10.1007/s11263-018-1106-2
Nguyen, A., Do, T.-T., Caldwell, D. G., & Tsagarakis, N. G. Real-time 6DOF pose relocalization for event cameras with stacked spatial LSTM networks. arXiv preprint.
OpenCV: Open Source Computer Vision Library (2020).
van der Ouderaa, T. F. A., & Worrall, D. E. (2019). Reversible GANs for memory-efficient image-to-image translation. In: IEEE conference on computer vision and pattern recognition (CVPR), pp. 4720-4728.
Rebecq, H., Gehrig, D., & Scaramuzza, D. (2018). ESIM: An open event camera simulator. In: Conference on robot learning, pp. 969-982.
Rebecq, H., Ranftl, R., Koltun, V., & Scaramuzza, D. (2019). Events-to-video: Bringing modern computer vision to event cameras. In: Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3857-3866).
Rebecq, H., Gallego, G., Mueggler, E., & Scaramuzza, D. (2018). EMVS: Event-based multi-view stereo-3D reconstruction with an event camera in real-time. International Journal of Computer Vision, 126(12), 1394-1414. https://doi.org/10.1007/s11263-017-1050-6
Rebecq, H., Horstschaefer, T., Gallego, G., & Scaramuzza, D. (2017). EVO: A geometric approach to event-based 6-DOF parallel tracking and mapping in real time. IEEE Robotics and Automation Letters, 2(2), 593-600. https://doi.org/10.1109/LRA.2016.2645143
Reinbacher, C., Graber, G., & Pock, T. (2016). Real-time intensity-image reconstruction for event cameras using manifold regularisation. arXiv preprint arXiv:1607.06283.
Ruder, M., Dosovitskiy, A., & Brox, T. (2016). Artistic style transfer for videos. In: German conference on pattern recognition (pp. 26-36). Springer, Cham.
rviz: 3D visualization tool for ROS (2019). Retrieved from https://github.com/ros-visualization/rviz
Scheerlinck, C., Barnes, N., & Mahony, R. (2018). Continuous-time intensity estimation using event cameras. arXiv preprint arXiv:1811.00386.
Shedligeri, P. A., Shah, K., Kumar, D., & Mitra, K. (2018). Photorealistic image reconstruction from hybrid intensity and event based sensor. arXiv preprint arXiv:1805.06140.
Simonyan, K., & Zisserman, A. (2015). Very deep convolutional networks for large-scale image recognition. In: International conference on learning representations (ICLR).
Wang, Z., Chen, J., & Hoi, S. C. H. (2020). Deep learning for image super-resolution: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI).
Wang, L., Mostafavi I., S. M., Ho, Y., & Yoon, K. (2019). Event-based high dynamic range image and very high frame rate video generation using conditional generative adversarial networks. In: IEEE conference on computer vision and pattern recognition (CVPR).
Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4), 600-612. https://doi.org/10.1109/TIP.2003.819861
Ye, C., Mitrokhin, A., Fermüller, C., Yorke, J. A., & Aloimonos, Y. (2018). Unsupervised learning of dense optical flow, depth and egomotion from sparse event data. arXiv preprint arXiv:1809.08625.
Zhang, R., Isola, P., Efros, A. A., Shechtman, E., & Wang, O. (2018). The unreasonable effectiveness of deep features as a perceptual metric. In: Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 586-595).
Zhang, L., Zhang, L., Mou, X., & Zhang, D. (2011). FSIM: A feature similarity index for image quality assessment. IEEE Transactions on Image Processing, 20(8), 2378-2386. https://doi.org/10.1109/TIP.2011.2109730
Zhu, J.-Y., Park, T., Isola, P., & Efros, A. A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint.
Zhu, A. Z., Thakur, D., Ozaslan, T., Pfrommer, B., Kumar, V., & Daniilidis, K. (2018). The multi vehicle stereo event camera dataset: An event camera dataset for 3D perception. IEEE Robotics and Automation Letters, 3(3), 2032-2039. https://doi.org/10.1109/LRA.2018.2800793