Paper Details

Abstract

As rising expectations for practical AI (Artificial Intelligence) services make AI algorithms more complicated, efficient processors for executing those algorithms are required. To meet this requirement, processors optimized for parallel processing, such as GPUs (Graphics Processing Units), have been widely employed. However, the GPU has a generalized structure intended for a wide range of applications, so it is not optimized for AI algorithms. Therefore, research on AI processors specialized for AI algorithm processing has been actively conducted. This paper briefly introduces AI processors for inference acceleration developed by the Electronics and Telecommunications Research Institute, South Korea, and by other global vendors for mobile and server platforms.
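
The abstract contrasts general-purpose GPUs with processors specialized for inference. As a rough illustration (not taken from the paper), the sketch below shows the low-precision multiply-accumulate (MAC) workload that inference accelerators typically commit to dedicated hardware; INT8 operands with a 32-bit accumulator are an assumed, commonly used configuration, not a detail stated in this abstract.

```python
# Illustrative sketch only: the multiply-accumulate (MAC) loop at the core
# of neural-network inference. An inference accelerator replaces this triple
# loop with large arrays of parallel MAC units, whereas a GPU runs it on
# general-purpose parallel cores. INT8 inputs with an INT32 accumulator are
# an assumption for illustration, not taken from the paper.
import numpy as np

def int8_gemm(activations: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Naive INT8 matrix multiply with a 32-bit accumulator.

    activations: (M, K) int8, weights: (K, N) int8 -> (M, N) int32.
    """
    M, K = activations.shape
    K2, N = weights.shape
    assert K == K2, "inner dimensions must match"
    acc = np.zeros((M, N), dtype=np.int32)
    for m in range(M):
        for n in range(N):
            s = np.int32(0)
            for k in range(K):
                # One MAC: widen to 32 bits, multiply, accumulate.
                s += np.int32(activations[m, k]) * np.int32(weights[k, n])
            acc[m, n] = s
    return acc

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.integers(-128, 128, size=(4, 8), dtype=np.int8)
    w = rng.integers(-128, 128, size=(8, 3), dtype=np.int8)
    out = int8_gemm(a, w)
    # Cross-check against NumPy's own int32 matrix multiply.
    assert np.array_equal(out, a.astype(np.int32) @ w.astype(np.int32))
    print(out)
```

The point of the sketch is only that inference reduces almost entirely to such MAC operations, which is why fixed-function MAC arrays can outperform a generalized GPU pipeline on this workload.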
