The Efficacy of $L_1$ Regularization in Two-Layer Neural Networks

Type: Preprint

Publication Date: 2020-10-02

Citations: 1

Locations

  • arXiv (Cornell University)

Similar Works

  • The Efficacy of $L_1$ Regularization in Two-Layer Neural Networks (2020) - Gen Li, Yuantao Gu, Jie Ding
  • Nonasymptotic theory for two-layer neural networks: Beyond the bias-variance trade-off (2021) - Huiyuan Wang, Lin Wei
  • Generalization of Two-layer Neural Networks: An Asymptotic Viewpoint (2020) - Jimmy Ba, Murat A. Erdogdu, Taiji Suzuki, Denny Wu, Tianzong Zhang
  • A General Framework of the Consistency for Large Neural Networks (2024) - Haoran Zhan, Yingcun Xia
  • Intrinsic dimensionality and generalization properties of the $\mathcal{R}$-norm inductive bias (2022) - Clayton Sanford, Navid Ardeshir, Daniel Hsu
  • Empirical Risk Landscape Analysis for Understanding Deep Neural Networks (2018) - Pan Zhou, Jiashi Feng
  • Rethinking Bias-Variance Trade-off for Generalization of Neural Networks (2020) - Zitong Yang, Yaodong Yu, Chong You, Jacob Steinhardt, Yi Ma
  • A Priori Estimates for Two-layer Neural Networks (2018) - E Weinan, Chao Ma, Lei Wu
  • Theoretical Issues in Deep Networks: Approximation, Optimization and Generalization (2019) - Tomaso Poggio, Andrzej Banburski, Qianli Liao
  • Error Bounds of Supervised Classification from Information-Theoretic Perspective (2024) - Binchuan Qi, Wei Gong, Li Li
  • Approximation and Estimation for High-Dimensional Deep Learning Networks (2018) - Andrew R. Barron, Jason M. Klusowski
  • A spectral-based analysis of the separation between two-layer neural networks and linear methods (2021) - Lei Wu, Jihao Long
  • Generalization error in high-dimensional perceptrons: Approaching Bayes error with convex optimization (2020) - Benjamin Aubin, Florent Krząkała, Yue M. Lu, Lenka Zdeborová
  • Analysis of the expected $L_2$ error of an over-parametrized deep neural network estimate learned by gradient descent without regularization (2023) - Selina Drews, Michael Köhler
  • Can Shallow Neural Networks Beat the Curse of Dimensionality? A Mean Field Training Perspective (2020) - Stephan Wojtowytsch, E Weinan
  • A priori estimates of the population risk for two-layer neural networks (2019) - E Weinan, Chao Ma, Lei Wu
  • Risk Bounds on MDL Estimators for Linear Regression Models with Application to Simple ReLU Neural Networks (2024) - Yoshinari Takeishi, Jun’ichi Takeuchi
  • With Greater Distance Comes Worse Performance: On the Perspective of Layer Utilization and Model Generalization (2022) - James Z. Wang, Cheng-Lin Yang
  • Statistical learning by sparse deep neural networks (2023) - Felix Abramovich
  • Stochastic Gradient Descent for Two-layer Neural Networks (2024) - Dinghao Cao, Zheng-Chu Guo, Lei Shi