IEEE Journal of Solid-State Circuits, vol. 56, no. 9, 2021, pp. 2858-2869
Donghyeon Han, Dongseok Im, Gwangtae Park, Youngwoo Kim, Seokchan Song, Juhyoung Lee, and Hoi-Jun Yoo (Korea Advanced Institute of Science and Technology (KAIST), School of Electrical Engineering, Daejeon, South Korea)
This article presents HNPU, an energy-efficient deep neural network (DNN) training processor built through algorithm-hardware co-design. HNPU supports a stochastic dynamic fixed-point representation and a layer-wise adaptive precision-searching unit for low-bit-precision training. In addition...
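The two mechanisms named in the abstract, stochastic rounding and a dynamically scaled fixed-point format, can be illustrated in a few lines of NumPy. This is a minimal sketch of the general technique only; the bit-widths, the per-tensor scaling heuristic, and the function name are illustrative assumptions, not HNPU's actual datapath or the paper's precision-search algorithm.

import numpy as np

def stochastic_round_fixed(x, total_bits=8):
    # Illustrative sketch (not HNPU's hardware): quantize x to a signed
    # fixed-point grid whose fractional bit-width is chosen per tensor
    # from the dynamic range -- a common "dynamic fixed-point" heuristic.
    max_abs = np.max(np.abs(x)) + 1e-12
    int_bits = max(0, int(np.ceil(np.log2(max_abs))) + 1)  # +1 sign bit
    frac_bits = total_bits - int_bits
    scale = 2.0 ** frac_bits

    # Stochastic part: round up with probability equal to the fractional
    # remainder, so the quantization error is zero-mean in expectation.
    scaled = x * scale
    floor = np.floor(scaled)
    prob_up = scaled - floor
    rounded = floor + (np.random.random(x.shape) < prob_up)

    # Saturate to the representable range and rescale to real values.
    q_max = 2 ** (total_bits - 1) - 1
    return np.clip(rounded, -q_max - 1, q_max) / scale

# Example: small gradients survive 8-bit quantization without bias.
g = np.random.randn(4, 4) * 0.1
print(stochastic_round_fixed(g))

Because the round-up probability equals the fractional remainder, the expected quantized value equals the input, which is what makes such low-bit representations usable for training gradients rather than only for inference.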
F. Tu, W. Wu, Y. Wang, H. Chen, F. Xiong, M. Shi, N. Li, J. Deng, T. Chen, L. Liu, S. Wei, Y. Xie, and S. Yin, "Evolver: A Deep Learning Processor With On-Device Quantization–Voltage–Frequency Tuning," IEEE Journal of Solid-State Circuits, vol. 56, no. 2, pp. 658-673, 2021.
Tensor Processing Unit—Second Generation (TPU-v2).
Köster et al., "Flexpoint: An adaptive numerical format for efficient training of deep neural networks," in Proc. 31st Int. Conf. Neural Inf. Process. Syst. (NIPS), 2017, p. 1740.
Wang et al., "Training deep neural networks with 8-bit floating point numbers," in Proc. 32nd Int. Conf. Neural Inf. Process. Syst. (NIPS), 2018, p. 7686.
S. Choi, J. Sim, M. Kang, Y. Choi, H. Kim, and L.-S. Kim, "An Energy-Efficient Deep Convolutional Neural Network Training Accelerator for In Situ Personalization on Smart Devices," IEEE Journal of Solid-State Circuits, vol. 55, no. 10, pp. 2691-2702, 2020.
D. Han, J. Lee, J. Lee, and H.-J. Yoo, "A Low-Power Deep Neural Network Online Learning Processor for Real-Time Object Tracking Application," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 66, no. 5, pp. 1794-1804, 2019.
Konečný et al., "Federated learning: Strategies for improving communication efficiency," in Proc. NIPS Workshop Private Multi-Party Mach. Learn., 2016, p. 1.
Y.-H. Chen, T. Krishna, J. S. Emer, and V. Sze, "Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks," IEEE Journal of Solid-State Circuits, vol. 52, no. 1, pp. 127-138, 2017.
S. Yin and J.-S. Seo, "A 2.6 TOPS/W 16-Bit Fixed-Point Convolutional Neural Network Learning Processor in 65-nm CMOS," IEEE Solid-State Circuits Letters, vol. 3, pp. 13-16, 2020.
Zhou et al., "DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients," arXiv:1606.06160 [cs], 2016.
Gupta et al., "Deep learning with limited numerical precision," in Proc. 32nd Int. Conf. Mach. Learn. (ICML), vol. 37, 2015, p. 1737.
NVIDIA Tesla V100.