The Efficacy of $L_1$ Regularization in Two-Layer Neural Networks
Gen Li, Yuantao Gu, Jie Ding
Type: Preprint
Publication Date: 2020-10-02
Citations: 1
Locations
arXiv (Cornell University)
Similar Works
The Efficacy of $L_1$ Regularization in Two-Layer Neural Networks (2020). Gen Li, Yuantao Gu, Jie Ding.
Nonasymptotic theory for two-layer neural networks: Beyond the bias-variance trade-off (2021). Huiyuan Wang, Lin Wei.
Generalization of Two-layer Neural Networks: An Asymptotic Viewpoint (2020). Jimmy Ba, Murat A. Erdogdu, Taiji Suzuki, Denny Wu, Tianzong Zhang.
A General Framework of the Consistency for Large Neural Networks (2024). Haoran Zhan, Yingcun Xia.
Intrinsic dimensionality and generalization properties of the $\mathcal{R}$-norm inductive bias (2022). Clayton Sanford, Navid Ardeshir, Daniel Hsu.
Empirical Risk Landscape Analysis for Understanding Deep Neural Networks (2018). Pan Zhou, Jiashi Feng.
Rethinking Bias-Variance Trade-off for Generalization of Neural Networks (2020). Zitong Yang, Yaodong Yu, Chong You, Jacob Steinhardt, Yi Ma.
A Priori Estimates for Two-layer Neural Networks (2018). E Weinan, Chao Ma, Lei Wu.
Theoretical Issues in Deep Networks: Approximation, Optimization and Generalization (2019). Tomaso Poggio, Andrzej Banburski, Qianli Liao.
Error Bounds of Supervised Classification from Information-Theoretic Perspective (2024). Binchuan Qi, Wei Gong, Li Li.
Approximation and Estimation for High-Dimensional Deep Learning Networks (2018). Andrew R. Barron, Jason M. Klusowski.
A spectral-based analysis of the separation between two-layer neural networks and linear methods (2021). Lei Wu, Jihao Long.
Generalization error in high-dimensional perceptrons: Approaching Bayes error with convex optimization (2020). Benjamin Aubin, Florent Krząkała, Yue M. Lu, Lenka Zdeborová.
Analysis of the expected $L_2$ error of an over-parametrized deep neural network estimate learned by gradient descent without regularization (2023). Selina Drews, Michael Köhler.
Can Shallow Neural Networks Beat the Curse of Dimensionality? A Mean Field Training Perspective (2020). Stephan Wojtowytsch, E Weinan.
A priori estimates of the population risk for two-layer neural networks (2019). E Weinan, Chao Ma, Lei Wu.
Risk Bounds on MDL Estimators for Linear Regression Models with Application to Simple ReLU Neural Networks (2024). Yoshinari Takeishi, Jun’ichi Takeuchi.
With Greater Distance Comes Worse Performance: On the Perspective of Layer Utilization and Model Generalization (2022). James Z. Wang, Cheng-Lin Yang.
Statistical learning by sparse deep neural networks (2023). Felix Abramovich.
Stochastic Gradient Descent for Two-layer Neural Networks (2024). Dinghao Cao, Zheng-Chu Guo, Lei Shi.
Works That Cite This (1)
The Rate of Convergence of Variation-Constrained Deep Neural Networks (2021). Gen Li, Yuantao Gu, Jie Ding.
Works Cited by This (16)
Information-theoretic determination of minimax rates of convergence (1999). Yuhong Yang, Andrew R. Barron.
Beating the Perils of Non-Convexity: Guaranteed Training of Neural Networks using Tensor Methods (2015). Majid Janzamin, Hanie Sedghi, Anima Anandkumar.
Tensor decompositions for learning latent variable models (2014). Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham M. Kakade, Matus Telgarsky.
Group sparse regularization for deep neural networks (2017). Simone Scardapane, Danilo Comminiello, Amir Hussain, Aurelio Uncini.
Learning Structured Sparsity in Deep Neural Networks (2016). Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Li.
A Survey of Model Compression and Acceleration for Deep Neural Networks (2017). Yu Cheng, Duo Wang, Pan Zhou, Tao Zhang.
Size-Independent Sample Complexity of Neural Networks (2017). Noah Golowich, Alexander Rakhlin, Ohad Shamir.
On the Connection Between Learning Two-Layers Neural Networks and Tensor Decomposition (2018). Marco Mondelli, Andrea Montanari.
Model Selection Techniques: An Overview (2018). Jie Ding, Vahid Tarokh, Yuhong Yang.
Complexity, Statistical Risk, and Metric Entropy of Deep Nets Using Total Path Variation (2019). Andrew R. Barron, Jason M. Klusowski.