
Paper Details


Deep learning has recently achieved considerable accuracy improvements in many applications, but these gains come at the cost of large amounts of computation and expensive memory. Advanced techniques for compacting and accelerating deep learning models have therefore been developed for deployment on lightweight devices with constrained resources. Lightweight deep learning techniques can be categorized into two schemes: algorithms that are lightweight by design (model simplification and efficient convolutional filters) and methods that transfer existing models into compact/small ones (model compression and knowledge distillation). In this report, we briefly summarize various lightweight deep learning techniques and possible research directions.
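The efficient-convolutional-filter scheme mentioned above (used by MobileNets and ShuffleNet, among the works referenced below) replaces standard convolutions with depthwise separable ones. A minimal sketch of the parameter savings, using hypothetical layer sizes (3x3 kernel, 256 input and 256 output channels; biases ignored):

```python
def standard_conv_params(k, c_in, c_out):
    # A standard convolution learns one k x k filter
    # for every (input channel, output channel) pair.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise step: one k x k filter per input channel.
    # Pointwise step: a 1x1 convolution that mixes channels.
    return k * k * c_in + c_in * c_out

std = standard_conv_params(3, 256, 256)        # 589,824 parameters
sep = depthwise_separable_params(3, 256, 256)  # 67,840 parameters
print(std, sep, round(std / sep, 1))           # roughly an 8.7x reduction
```

The same counting argument explains why such layers dominate mobile-oriented architectures: the reduction factor grows with the number of output channels, approaching k*k for wide layers.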


References (19)

  1. K. He et al., "Deep Residual Learning for Image Recognition," in Proc. IEEE Conf. Comput. Vision Pattern Recognition, Las Vegas, NV, USA, June 2016, pp. 770-778.
  2. K. He et al., "Identity Mappings in Deep Residual Networks," in Proc. Eur. Conf. Comput. Vision (ECCV), Springer, 2016, pp. 630-645.
  3. G. Huang et al., "Densely Connected Convolutional Networks," in Proc. IEEE Conf. Comput. Vision Pattern Recognition, Honolulu, HI, USA, July 2017, pp. 2265-2269.
  4. F.N. Iandola et al., "SqueezeNet: AlexNet-Level Accuracy with 50x Fewer Parameters and <0.5MB Model Size," arXiv:1602.07360, 2016.
  5. A.G. Howard et al., "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications," arXiv:1704.04861, 2017.
  6. M. Sandler et al., "MobileNetV2: Inverted Residuals and Linear Bottlenecks," arXiv:1801.04381, 2018.
  7. X. Zhang et al., "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices," arXiv:1707.01083, 2017.
  8. N. Ma et al., "ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design," arXiv:1807.11164, 2018.
  9. T.J. Yang et al., "NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications," arXiv:1804.03230, 2018.
  10. M. Tan et al., "MnasNet: Platform-Aware Neural Architecture Search for Mobile," arXiv:1807.11626, 2018.
  11. S. Han, H. Mao, and W.J. Dally, "Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding," arXiv:1510.00149, 2015.
  12. M. Rastegari et al., "XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks," arXiv:1603.05279, 2016.
  13. K. Ullrich, E. Meeds, and M. Welling, "Soft Weight-Sharing for Neural Network Compression," arXiv:1702.04008, 2017.
  14. G. Hinton, O. Vinyals, and J. Dean, "Distilling the Knowledge in a Neural Network," arXiv:1503.02531, 2015.
  15. T. Chen, I. Goodfellow, and J. Shlens, "Net2Net: Accelerating Learning via Knowledge Transfer," in Int. Conf. Learning Representations (ICLR), May 2016.
  16. J. Wu, J. Hou, and W. Liu, "PocketFlow: An Automated Framework for Compressing and Accelerating Deep Neural Networks," in Proc. Neural Inf. Process. Syst. (NIPS), Montreal, Canada, Dec. 2018.
  17. Y. He et al., "AMC: AutoML for Model Compression and Acceleration on Mobile Devices," in Proc. Eur. Conf. Comput. Vision (ECCV), Munich, Germany, Sept. 2018, pp. 784-800.
  18. https://www.xnor.ai/
  19. https://hyperconnect.com/

