Bit Fusion: Bit-Level Dynamically Composable Architecture for Accelerating Deep Neural Networks

Type: Preprint

Publication Date: 2017

Citations: 27

DOI: https://doi.org/10.48550/arxiv.1712.01507

Locations

  • arXiv (Cornell University)
  • DataCite API

Similar Works

  • Bit Fusion: Bit-Level Dynamically Composable Architecture for Accelerating Deep Neural Networks (2017), by Hardik Sharma, Jongse Park, Naveen Suda, Liangzhen Lai, Benson Chau, Joon‐Kyung Kim, Vikas Chandra, Hadi Esmaeilzadeh
  • Bit-Parallel Vector Composability for Neural Acceleration (2020), by Soroush Ghodrati, Hardik Sharma, Cliff Young, Nam Sung Kim, Hadi Esmaeilzadeh
  • BF-IMNA: A Bit Fluid In-Memory Neural Architecture for Neural Network Acceleration (2024), by Mariam Rakka, Rachid Karami, Ahmed M. Eltawil, Mohammed E. Fouda, Fadi Kurdahi
  • Ax-BxP: Approximate Blocked Computation for Precision-Reconfigurable Deep Neural Network Acceleration (2020), by R. Elangovan, Shubham Jain, Anand Raghunathan
  • Bit-Tactical: Exploiting Ineffectual Computations in Convolutional Neural Networks: Which, Why, and How (2018), by Alberto Delmás, Patrick Judd, Dylan Malone Stuart, Zissis Poulos, Mostafa Mahmoud, Sayeh Sharify, Miloš Nikolić, Andreas Moshovos
  • Bit-balance: Model-Hardware Co-design for Accelerating NNs by Exploiting Bit-level Sparsity (2023), by Wenhao Sun, Zhiwei Zou, Deng Liu, Wendi Sun, Song Chen, Yi Kang
  • PANTHER: A Programmable Architecture for Neural Network Training Harnessing Energy-Efficient ReRAM (2020), by Aayush Ankit, Izzat El Hajj, Sai Rahul Chalamalasetti, Sapan Agarwal, Matthew Marinella, Martin Foltín, John Paul Strachan, Dejan Milojičić, Wen‐mei Hwu, Kaushik Roy
  • TiM-DNN: Ternary In-Memory Accelerator for Deep Neural Networks (2019), by Shubham Jain, Sumeet Kumar Gupta, Anand Raghunathan
  • PANTHER: A Programmable Architecture for Neural Network Training Harnessing Energy-Efficient ReRAM (2019), by Aayush Ankit, Izzat El Hajj, Sai Rahul Chalamalasetti, Sapan Agarwal, Matthew Marinella, Martin Foltín, John Paul Strachan, Dejan Milojičić, Wen‐mei Hwu, Kaushik Roy
  • TiM-DNN: Ternary In-Memory Accelerator for Deep Neural Networks (2020), by Shubham Jain, Sumeet Kumar Gupta, Anand Raghunathan
  • Ax-BxP: Approximate Blocked Computation for Precision-Reconfigurable Deep Neural Network Acceleration (2022), by R. Elangovan, Shubham Jain, Anand Raghunathan
  • LoopTree: Exploring the Fused-layer Dataflow Accelerator Design Space (2024), by Michael A. Gilbert, Yannan Nellie Wu, Joel Emer, Vivienne Sze
  • DPRed: Making Typical Activation and Weight Values Matter in Deep Learning Computing (2018), by Alberto Delmás, Sayeh Sharify, Patrick Judd, Kevin Siu, Miloš Nikolić, Andreas Moshovos
  • Bit-Balance: Model-Hardware Codesign for Accelerating NNs by Exploiting Bit-Level Sparsity (2023), by Wenhao Sun, Zhiwei Zou, Deng Liu, Wendi Sun, Song Chen, Yi Kang
  • HYDRA: Hybrid Data Multiplexing and Run-time Layer Configurable DNN Accelerator (2024), by Sonu Kumar, Komal Gupta, Gopal Raut, Mukul Lokhande, Santosh Kumar Vishvakarma
  • FlexNN: A Dataflow-aware Flexible Deep Learning Accelerator for Energy-Efficient Edge Devices (2024), by Arnab Raha, Deepak A. Mathaikutty, Soumendu Kumar Ghosh, Shamik Kundu
  • FlexBlock: A Flexible DNN Training Accelerator with Multi-Mode Block Floating Point Support (2022), by Seock-Hwan Noh, Jahyun Koo, Seunghyun Lee, Jongse Park, Jaeha Kung

Works That Cite This (11)

  • Effective Algorithm-Accelerator Co-design for AI Solutions on Edge Devices (2020), by Cong Hao, Yao Chen, Xiaofan Zhang, Yuhong Li, Jinjun Xiong, Wen‐mei Hwu, Deming Chen
  • MetaMix: Meta-state Precision Searcher for Mixed-precision Activation Quantization (2023), by Han-Byul Kim, Joo Hyung Lee, Sungjoo Yoo, Hong-Seok Kim
  • Accelerating Generalized Linear Models with MLWeaving: A One-Size-Fits-All System for Any-precision Learning (Technical Report) (2019), by Zeke Wang, Kaan Kara, Hantian Zhang, Gustavo Alonso, Onur Mutlu, Ce Zhang
  • Neural-PIM: Efficient Processing-In-Memory with Neural Approximation of Peripherals (2021), by Weidong Cao, Yilong Zhao, Adith Boloor, Yinhe Han, Xuan Zhang, Li Jiang
  • Enable Deep Learning on Mobile Devices: Methods, Systems, and Applications (2022), by Han Cai, Ji Lin, Yujun Lin, Zhijian Liu, Haotian Tang, Hanrui Wang, Ligeng Zhu, Song Han
  • MetaMix: Meta-State Precision Searcher for Mixed-Precision Activation Quantization (2024), by Hanbyul Kim, Joo Hyung Lee, Sungjoo Yoo, Hong‐Seok Kim
  • BitPruning: Learning Bitlengths for Aggressive and Accurate Quantization (2024), by Miloš Nikolić, Ghouthi Boukli Hacene, Ciaran Bannon, Alberto Delmás Lascorz, Matthieu Courbariaux, Omar Mohamed Awad, Isak Edo Vivancos, Yoshua Bengio, Vincent Gripon, Andreas Moshovos
  • Direct Spatial Implementation of Sparse Matrix Multipliers for Reservoir Computing (2022), by Matthew Denton, Herman Schmit
  • Recurrent Neural Networks: An Embedded Computing Perspective (2020), by Nesma M. Rezk, Madhura Purnaprajna, Tomas Nordström, Zain Ul-Abdin
  • SPRING: A Sparsity-Aware Reduced-Precision Monolithic 3D CNN Accelerator Architecture for Training and Inference (2020), by Ye Yu, Niraj K. Jha

Works Cited by This (0)
