Memory capacity of neural networks with threshold and ReLU activations

Type: Preprint

Publication Date: 2020-01-01

Citations: 13

DOI: https://doi.org/10.48550/arxiv.2001.06938

Locations

  • arXiv (Cornell University)
  • DataCite API

Similar Works

  • Network size and weights size for memorization with two-layers neural networks (2020). Sébastien Bubeck, Ronen Eldan, Yin Tat Lee, Dan Mikulincer.
  • Neural networks with linear threshold activations: structure and algorithms (2023). Sammy Khalife, Hongyu Cheng, Amitabh Basu.
  • Neural Networks with Linear Threshold Activations: Structure and Algorithms (2022). Sammy Khalife, Amitabh Basu.
  • The capacity of feedforward neural networks (2019). Pierre Baldi, Roman Vershynin.
  • Neural networks with linear threshold activations: structure and algorithms (2021). Sammy Khalife, Hongyu Cheng, Amitabh Basu.
  • Fixed width treelike neural networks capacity analysis -- generic activations (2024). Mihailo Stojnic.
  • Hidden unit specialization in layered neural networks: ReLU vs. sigmoidal activation (2020). Elisa Oostwal, Michiel Straat, Michael Biehl.
  • Memory capacity of two layer neural networks with smooth activations (2023). Liam Madden, Christos Thrampoulidis.
  • Mildly Overparametrized Neural Nets can Memorize Training Data Efficiently (2019). Rong Ge, Runzhe Wang, Haoyu Zhao.
  • Analyzing the Neural Tangent Kernel of Periodically Activated Coordinate Networks (2024). Hemanth Saratchandran, Shin-Fang Chng, Simon Lucey.
  • Memory Capacity of Two Layer Neural Networks with Smooth Activations (2024). Liam Madden, Christos Thrampoulidis.
  • Learning Neural Networks with Sparse Activations (2024). Pranjal Awasthi, Nishanth Dikkala, Pritish Kamath, Raghu Meka.
  • On Sparsity in Overparametrised Shallow ReLU Networks (2020). Jaume de Dios, Joan Bruna.
  • Properties of the Geometry of Solutions and Capacity of Multilayer Neural Networks with Rectified Linear Unit Activations (2019). Carlo Baldassi, Enrico M. Malatesta, Riccardo Zecchina.
  • The Lazy Neuron Phenomenon: On Emergence of Activation Sparsity in Transformers (2022). Zonglin Li, Chong You, Srinadh Bhojanapalli, Daliang Li, Ankit Singh Rawat, Sashank J. Reddi, Ke Ye, Felix Chern, Felix Yu, Ruiqi Guo.
  • A Capacity Scaling Law for Artificial Neural Networks (2017). Gerald Friedland, Mario Michael Krell.
  • Learning a Neuron by a Shallow ReLU Network: Dynamics and Implicit Bias for Correlated Inputs (2023). Dmitry Chistikov, Matthias Englert, Ranko Lazić.