ALWANN: Automatic Layer-Wise Approximation of Deep Neural Network Accelerators without Retraining

State-of-the-art approaches employ approximate computing to reduce the energy consumption of DNN hardware. Approximate DNNs then require extensive retraining to recover from the accuracy loss caused by the use of approximate operations. However, retraining complex DNNs does not scale well. In this paper, we demonstrate that efficient …