Shahar Azulay
All published works
On the Implicit Bias of Initialization Shape: Beyond Infinitesimal Mirror Descent (2021). Shahar Azulay, Edward Moroshko, Mor Shpigel Nacson, Blake Woodworth, Nathan Srebro, Amir Globerson, Daniel Soudry.
Holdout SGD: Byzantine Tolerant Federated Learning (2020). Shahar Azulay, Lior Raz, Amir Globerson, Tomer Koren, Yehuda Afek.
Common Coauthors
Amir Globerson (2 papers together)
Blake Woodworth (1)
Lior Raz (1)
Daniel Soudry (1)
Mor Shpigel Nacson (1)
Tomer Koren (1)
Nathan Srebro (1)
Edward Moroshko (1)
Yehuda Afek (1)
Commonly Cited References
Neural Tangent Kernel: Convergence and Generalization in Neural Networks (2018). Arthur Paul Jacot, Franck Gabriel, Clément Hongler. Referenced 2 times.
Parallelized Stochastic Gradient Descent (2010). Martin Zinkevich, Markus Weimer, Lihong Li, Alex Smola. Referenced 1 time.
Introduction to Online Convex Optimization (2016). Elad Hazan. Referenced 1 time.
A Convergence Analysis of Gradient Descent for Deep Linear Neural Networks (2018). Sanjeev Arora, Nadav Cohen, Noah Golowich, Wei Hu. Referenced 1 time.
Implicit Regularization in Matrix Factorization (2017). Suriya Gunasekar, Blake Woodworth, Srinadh Bhojanapalli, Behnam Neyshabur, Nati Srebro. Referenced 1 time.
A Little Is Enough: Circumventing Defenses For Distributed Learning (2019). Moran Baruch, Gilad Baruch, Yoav Goldberg. Referenced 1 time.
On Lazy Training in Differentiable Programming (2018). Lénaïc Chizat, Edouard Oyallon, Francis Bach. Referenced 1 time.
Lexicographic and depth-sensitive margins in homogeneous and non-homogeneous deep models (2019). Mor Shpigel Nacson, Suriya Gunasekar, Jason D. Lee, Nathan Srebro, Daniel Soudry. Referenced 1 time.
Characterizing Implicit Bias in Terms of Optimization Geometry (2018). Suriya Gunasekar, Jason D. Lee, Daniel Soudry, Nathan Srebro. Referenced 1 time.
Implicit Bias of Gradient Descent on Linear Convolutional Networks (2018). Suriya Gunasekar, Jason D. Lee, Daniel Soudry, Nathan Srebro. Referenced 1 time.
Algorithmic Regularization in Over-parameterized Matrix Sensing and Neural Networks with Quadratic Activations (2017). Yuanzhi Li, Tengyu Ma, Hongyang Zhang. Referenced 1 time.
Matrix Completion has No Spurious Local Minimum (2016). Rong Ge, Jason D. Lee, Tengyu Ma. Referenced 1 time.
Gradient Descent Provably Optimizes Over-parameterized Neural Networks (2018). Simon S. Du, Xiyu Zhai, Barnabás Póczos, Aarti Singh. Referenced 1 time.
Algorithmic Regularization in Learning Deep Homogeneous Models: Layers are Automatically Balanced (2018). Simon S. Du, Wei Hu, Jason D. Lee. Referenced 1 time.
Byzantine Stochastic Gradient Descent (2018). Dan Alistarh, Zeyuan Allen-Zhu, Jerry Li. Referenced 1 time.
Implicit Regularization for Optimal Sparse Recovery (2019). Tomas Vaškevičius, Varun Kanade, Patrick Rebeschini. Referenced 1 time.
Mirrorless Mirror Descent: A More Natural Discretization of Riemannian Gradient Flow (2020). Suriya Gunasekar, Blake Woodworth, Nathan Srebro. Referenced 1 time.
The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd Edition (2020). Trevor Hastie, Robert Tibshirani, Jerome H. Friedman. Referenced 1 time.
Implicit Regularization in Deep Learning May Not Be Explainable by Norms (2020). Noam Razin, Nadav Cohen. Referenced 1 time.
Implicit Bias in Deep Linear Classification: Initialization Scale vs Training Accuracy (2020). Edward Moroshko, Blake Woodworth, Suriya Gunasekar, Jason D. Lee, Nati Srebro, Daniel Soudry. Referenced 1 time.
Implicit Regularization in ReLU Networks with the Square Loss (2020). Gal Vardi, Ohad Shamir. Referenced 1 time.
Distributed Robust Learning (2014). Jiashi Feng, Huan Xu, Shie Mannor. Referenced 1 time.
A unifying view on implicit bias in training linear neural networks (2021). Chulhee Yun, Shankar Krishnan, Hossein Mobahi. Referenced 1 time.