
Federated Authentication

Researchers at member institutions of the federated authentication service can use their home institution's credentials (ID and password) to access a wide range of online resources and research data from other universities, research institutes, and service providers.

This is like a traveler who can travel freely around the world with a passport issued by their home country.

Services available through federated authentication include NTIS, DataON, Edison, Kafe, and Webinar.

A single authentication step lets you use all participating services without additional logins.

Note that federated authentication requires a one-time initial verification. (If you are not yet a member, registration is required.)

The federated authentication procedure is as follows.

For first-time use:
Log in to ScienceON → access the federated authentication service → log in (identity verification or registration) → use the service

Thereafter:
Log in to ScienceON → access the federated authentication service → use the service

With federated authentication, you can conveniently use the various services provided by KISTI.

[International Paper] Knowledge Transfer for On-Device Deep Reinforcement Learning in Resource Constrained Edge Computing Systems

IEEE Access: Practical Research, Open Solutions, v.8, 2020, pp. 146588-146597

Jang, Ingook; Kim, Hyunseok; Lee, Donghun; Son, Young-Sung; Kim, Seonghyun (all: Autonomous IoT Research Section, Electronics and Telecommunications Research Institute (ETRI), Daejeon, South Korea)

Abstract

Deep reinforcement learning (DRL) is a promising approach for developing control policies by learning how to perform tasks. Edge devices are required to control their actions by exploiting DRL to solve tasks autonomously in various applications such as smart manufacturing and autonomous driving. How...
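The knowledge transfer the title refers to is commonly realized as policy distillation (Rusu et al., 2016), a reinforcement-learning variant of knowledge distillation (Hinton et al., 2015): a large teacher network's softened Q-value outputs supervise a smaller student network that fits on the edge device. Below is a minimal NumPy sketch of the distillation loss only; the array shapes, the temperature value, and the function names are illustrative assumptions, not details taken from this paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(teacher_q, student_q, tau=0.01):
    """KL(teacher || student) between action distributions obtained by
    softening Q-values with temperature tau (policy-distillation style;
    tau < 1 sharpens the teacher's policy as in Rusu et al.).
    teacher_q, student_q: arrays of shape (batch, num_actions).
    tau=0.01 is an illustrative choice, not a value from the paper."""
    p = softmax(teacher_q / tau)  # sharpened teacher policy
    q = softmax(student_q / tau)  # student policy
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(np.mean(kl))

# Toy check: a student that matches the teacher incurs (near-)zero loss,
# while a mismatched student incurs a positive loss.
teacher = np.array([[1.0, 2.0, 0.5]])
uniform_student = np.zeros((1, 3))
```

In a training loop, this loss would be minimized with respect to the student network's parameters over states sampled from the teacher's experience, transferring the teacher's control policy to the resource-constrained student.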



Open Access (OA) Type: GOLD — article published in an open access journal.
