Architecture, Dataflow and Physical Design Implications of 3D-ICs for DNN-Accelerators

Type: Preprint

Publication Date: 2020-12-23

Citations: 0

Locations

  • arXiv (Cornell University)

Similar Works

  • Architecture, Dataflow and Physical Design Implications of 3D-ICs for DNN-Accelerators (2020): Jan Moritz Joseph, Ananda Samajdar, Lingjun Zhu, Rainer Leupers, Sung Kyu Lim, Thilo Pionteck, Tushar Krishna
  • Architecture, Dataflow and Physical Design Implications of 3D-ICs for DNN-Accelerators (2021): Jan Moritz Joseph, Ananda Samajdar, Lingjun Zhu, Rainer Leupers, Sung Kyu Lim, Thilo Pionteck, Tushar Krishna
  • Temperature-Aware Monolithic 3D DNN Accelerators for Biomedical Applications (2022): Prachi Shukla, Vasilis F. Pavlidis, Emre Salman, Ayse K. Coskun
  • Dataflow-Architecture Co-Design for 2.5D DNN Accelerators using Wireless Network-on-Package (2020): Robert Guirado, Hyoukjun Kwon, Sergi Abadal, Eduard Alarcón, Tushar Krishna
  • Dataflow-Architecture Co-Design for 2.5D DNN Accelerators using Wireless Network-on-Package (2021): Robert Guirado, Hyoukjun Kwon, Sergi Abadal, Eduard Alarcón, Tushar Krishna
  • SIAM: Chiplet-based Scalable In-Memory Acceleration with Mesh for Deep Neural Networks (2021): Gokul Krishnan, Sumit K. Mandal, Manvitha Pannala, Chaitali Chakrabarti, Jae-sun Seo, Ümit Y. Ogras, Yu Cao
  • Neurostream: Scalable and Energy Efficient Deep Learning with Smart Memory Cubes (2017): Erfan Azarkhish, Davide Rossi, Igor Loi, Luca Benini
  • Gemmini: Enabling Systematic Deep-Learning Architecture Evaluation via Full-Stack Integration (2021): Hasan Genc, Seah Kim, Alon Amid, Ameer Haj-Ali, Vighnesh Iyer, Pranav Prakash, Jerry Zhao, Daniel Grubb, Harrison Liew, Howard Mao
  • CiMLoop: A Flexible, Accurate, and Fast Compute-In-Memory Modeling Tool (2024): Tanner Andrulis, Joel Emer, Vivienne Sze
  • Towards Heterogeneous Multi-core Accelerators Exploiting Fine-grained Scheduling of Layer-Fused Deep Neural Networks (2022): Arne Symons, Linyan Mei, Steven Colleman, Pouya Houshmand, Susmita Kar, Marian Verhelst
  • Dataflow-Aware PIM-Enabled Manycore Architecture for Deep Learning Workloads (2024): Harsh Sharma, Gaurav Narang, Janardhan Rao Doppa, Ümit Y. Ogras, Partha Pratim Pande
  • Acceleration of Deep Neural Network Training with Resistive Cross-Point Devices: Design Considerations (2016): Tayfun Gokmen, Yurii A. Vlasov
  • CMDS: Cross-layer Dataflow Optimization for DNN Accelerators Exploiting Multi-bank Memories (2023): Man Shi, Steven Colleman, Charlotte VanDeMieroop, Antony Joseph, Maurice Meijer, Wim Dehaene, Marian Verhelst
  • Gemini: Mapping and Architecture Co-exploration for Large-scale DNN Chiplet Accelerators (2023): Jingwei Cai, Zuotong Wu, Sen Peng, Yuchen Wei, Zhanhong Tan, Guiming Shi, Mingyu Gao, Kaisheng Ma
  • End-to-end 100-TOPS/W Inference With Analog In-Memory Computing: Are We There Yet? (2021): Gianmarco Ottavi, Geethan Karunaratne, Francesco Conti, Irem Boybat, Luca Benini, Davide Rossi
  • MPNA: A Massively-Parallel Neural Array Accelerator with Dataflow Optimization for Convolutional Neural Networks (2018): Muhammad Abdullah Hanif, Rachmad Vidya Wicaksana Putra, Muhammad Ayyoub Tanvir, Rehan Hafiz, Semeen Rehman, Muhammad Shafique
  • An Electro-Photonic System for Accelerating Deep Neural Networks (2023): Cansu Demirkiran, Furkan Eris, Gongyu Wang, Jonathan Elmhurst, Nick Moore, Nicholas C. Harris, Ayon Basumallik, Vijay Janapa Reddi, Ajay Joshi, Darius Bunandar
  • Gemmini: Enabling Systematic Deep-Learning Architecture Evaluation via Full-Stack Integration (2019): Hasan Genç, Seah Kim, Alon Amid, Ameer Haj-Ali, Vighnesh Iyer, Pranav Prakash, Jerry Zhao, Daniel Grubb, Harrison Liew, Howard Mao
  • A Customized NoC Architecture to Enable Highly Localized Computing-On-the-Move DNN Dataflow (2021): Kaining Zhou, Yangshuo He, Rui Xiao, Jiayi Liu, Kejie Huang

Works That Cite This (0)
