Data Orchestration in Deep Learning Accelerators
Tushar Krishna, Hyoukjun Kwon, Angshuman Parashar
- Publisher: Morgan & Claypool
- Publication Date: 2020-08-18
- List Price: $2,160
- VIP Price: 5% off, $2,052
- Language: English
- Pages: 164
- Binding: Quality Paper (also called trade paper)
- ISBN: 1681738694
- ISBN-13: 9781681738697
Related Categories:
DeepLearning
In stock, ships immediately (stock = 1)
Customers who bought this item also bought...
- 人工智能算法 捲1 基礎算法 (Artificial Intelligence Algorithms, Vol. 1: Fundamental Algorithms), $336
- 人工智能算法 捲3 深度學習和神經網絡 (Artificial Intelligence Algorithms, Vol. 3: Deep Learning and Neural Networks), $458
Product Description
This Synthesis Lecture focuses on techniques for efficient data orchestration within DNN accelerators. The end of Moore's Law, coupled with the rapid growth of deep learning and other AI applications, has led to the emergence of custom Deep Neural Network (DNN) accelerators for energy-efficient inference on edge devices. Modern DNNs have millions of parameters and involve billions of computations; this necessitates extensive data movement from memory to on-chip processing engines. It is well known that the cost of data movement today surpasses the cost of the actual computation; therefore, DNN accelerators require careful orchestration of data across on-chip compute, network, and memory elements to minimize the number of accesses to external DRAM. The book covers DNN dataflows, data reuse, buffer hierarchies, networks-on-chip, and automated design-space exploration. It concludes with the data orchestration challenges posed by compressed and sparse DNNs, and with future trends. The target audience is students, engineers, and researchers interested in designing high-performance and low-energy accelerators for DNN inference.
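The data-reuse idea behind the description above is commonly illustrated with a tiled loop nest. The following Python sketch is not taken from the book; it is a minimal, assumed example of a tiled matrix multiply written in the loop-nest style used to describe accelerator dataflows. The function name `tiled_matmul` and the tile sizes `Tm`, `Tn`, `Tk` are hypothetical stand-ins for an on-chip buffer's capacity; the point is that each tile is fetched once and then reused by the inner compute step, which is what reduces traffic to external DRAM.

```python
# Illustrative sketch only (not from the book): a tiled matrix multiply
# expressed as an explicit loop nest, the style used to describe DNN
# accelerator dataflows. Tile sizes Tm/Tn/Tk are hypothetical and model
# the capacity of an on-chip buffer.
import numpy as np

def tiled_matmul(A, B, Tm=32, Tn=32, Tk=32):
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N), dtype=A.dtype)
    for m0 in range(0, M, Tm):          # each tile-loop iteration models
        for n0 in range(0, N, Tn):      # one round of fetches from DRAM
            for k0 in range(0, K, Tk):
                # "On-chip" tiles: fetched once per iteration, then
                # reused by the inner compute step below.
                a = A[m0:m0 + Tm, k0:k0 + Tk]
                b = B[k0:k0 + Tk, n0:n0 + Tn]
                C[m0:m0 + Tm, n0:n0 + Tn] += a @ b  # compute with reuse
    return C

# Usage: check the tiled version against a direct product.
A = np.random.rand(128, 96).astype(np.float32)
B = np.random.rand(96, 64).astype(np.float32)
assert np.allclose(tiled_matmul(A, B), A @ B, atol=1e-4)
```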