Slavomír Hanzely

All published works
$\psi$DAG: Projected Stochastic Approximation Iteration for DAG Structure Learning (2024). Klea Ziu, Slavomír Hanzely, Loka Li, Kun Zhang, Martin Takáč, Dmitry Kamzolov
Damped Newton Method with Near-Optimal Global $\mathcal{O}\left(k^{-3}\right)$ Convergence Rate (2024). Slavomír Hanzely, Farshed Abdukhakimov, Martin Takáč
Convergence of First-Order Algorithms for Meta-Learning with Moreau Envelopes (2023). Konstantin Mishchenko, Slavomír Hanzely, Peter Richtárik
Sketch-and-Project Meets Newton Method: Global $\mathcal{O}(k^{-2})$ Convergence with Low-Rank Updates (2023). Slavomír Hanzely
Adaptive Optimization Algorithms for Machine Learning (2023). Slavomír Hanzely
Distributed Newton-Type Methods with Communication Compression and Bernoulli Aggregation (2022). Rustem Islamov, Xun Qian, Slavomír Hanzely, Mher Safaryan, Peter Richtárik
A Damped Newton Method Achieves Global $O\left(\frac{1}{k^2}\right)$ and Local Quadratic Convergence Rate (2022). Slavomír Hanzely, Dmitry Kamzolov, Dmitry Pasechnyuk, Alexander Gasnikov, Peter Richtárik, Martin Takáč
ZeroSARAH: Efficient Nonconvex Finite-Sum Optimization with Zero Full Gradient Computation (2021). Zhize Li, Slavomír Hanzely, Peter Richtárik
Lower Bounds and Optimal Algorithms for Personalized Federated Learning (2020). Filip Hanzely, Slavomír Hanzely, Samuel Horváth, Peter Richtárik
Adaptive Learning of the Optimal Mini-Batch Size of SGD (2020). Motasem Alfarra, Slavomír Hanzely, Alyazeed Albasyoni, Bernard Ghanem, Peter Richtárik
Commonly Cited References
Each of the following references is cited once across the works listed above.

Optimal methods of smooth convex minimization (1985). Arkadi Nemirovski, Yu. E. Nesterov
Robust Stochastic Approximation Approach to Stochastic Programming (2009). Arkadi Nemirovski, Anatoli Juditsky, Guanghui Lan, Alexander Shapiro
Adaptive step size random search (1968). M. Schumer, K. Steiglitz
Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization (2014). Saeed Ghadimi, Guanghui Lan, Hongchao Zhang
On lower complexity bounds for large-scale smooth convex optimization (2014). Cristóbal Guzmán, Arkadi Nemirovski
Two-Point Step Size Gradient Methods (1988). Jonathan Barzilai, Jonathan M. Borwein
A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems (2009). Amir Beck, Marc Teboulle
Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization (2014). Shai Shalev‐Shwartz, Tong Zhang
Information-Based Complexity, Feedback and Dynamics in Convex Programming (2011). Maxim Raginsky, Alexander Rakhlin
An Accelerated Randomized Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization (2015). Qihang Lin, Zhaosong Lu, Lin Xiao
Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization (2011). Mark Schmidt, Nicolas Le Roux, Francis Bach
Federated Learning of Deep Networks using Model Averaging (2016). H. Brendan McMahan, Eider Moore, Daniel Ramage, Blaise Agüera y Arcas
Federated Optimization: Distributed Machine Learning for On-Device Intelligence (2016). Jakub Konečný, H. Brendan McMahan, Daniel Ramage, Peter Richtárik
Distributed Multi-Task Relationship Learning (2017). Sulin Liu, Sinno Jialin Pan, Qirong Ho
Fixed-point optimization of deep neural networks with adaptive step size retraining (2017). Sungho Shin, Yoonho Boo, Wonyong Sung
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks (2017). Chelsea Finn, Pieter Abbeel, Sergey Levine
Scaling SGD Batch Size to 32K for ImageNet Training (2017). Yang You, Igor Gitman, Boris Ginsburg
Large Batch Training of Convolutional Networks (2017). Yang You, Igor Gitman, Boris Ginsburg
Distributed Stochastic Multi-Task Learning with Graph Regularization (2018). Weiran Wang, Jialei Wang, Mladen Kolar, Nathan Srebro
Revisiting Small Batch Training for Deep Neural Networks (2018). Dominic Masters, Carlo Luschi
Stochastic Quasi-Gradient Methods: Variance Reduction via Jacobian Sketching (2018). Robert M. Gower, Peter Richtárik, Francis Bach
A Simple Stochastic Variance Reduced Algorithm with Fast Convergence Rates (2018). Kaiwen Zhou, Fanhua Shang, James Cheng
Graph Oracle Models, Lower Bounds, and Gaps for Parallel Stochastic Optimization (2018). Blake Woodworth, Jialei Wang, Adam Smith, Brendan McMahan, Nati Srebro
Federated Learning with Non-IID Data (2018). Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, Vikas Chandra
Direct Acceleration of SAGA using Sampled Negative Momentum (2018). Kaiwen Zhou
Parallelization does not Accelerate Convex Optimization: Adaptivity Lower Bounds for Non-smooth Convex Minimization (2018). Eric Balkanski, Yaron Singer
SEGA: Variance Reduction via Gradient Sketching (2018). Filip Hanzely, Konstantin Mishchenko, Peter Richtárik
Lower Bounds for Parallel and Randomized Convex Optimization (2018). Jelena Diakonikolas, Cristóbal Guzmán
Federated Learning for Mobile Keyboard Prediction (2018). Andrew Hard, Chloé Kiddon, Daniel Ramage, Françoise Beaufays, Hubert Eichner, K. Praveen Kumar Rao, Rajiv Mathews, Sean Augenstein
Optimal mini-batch and step sizes for SAGA (2019). Nidham Gazagnadou, Robert M. Gower, Joseph Salmon
Federated Optimization in Heterogeneous Networks (2018). Li Tian, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, Virginia Smith
Hybrid Stochastic Gradient Descent Algorithms for Stochastic Nonconvex Optimization (2019). Quoc Tran-Dinh, Nhan H. Pham, Dzung T. Phan, Lam M. Nguyen
Convergence of Distributed Stochastic Variance Reduced Methods Without Sampling Extra Data (2020). Shicong Cen, Huishuai Zhang, Yuejie Chi, Wei Chen, Tie‐Yan Liu
L-SVRG and L-Katyusha with Arbitrary Sampling (2019). Xun Qian, Zheng Qu, Peter Richtárik
Variational Federated Multi-Task Learning (2019). Luca Corinzia, Joachim M. Buhmann
Better Mini-Batch Algorithms via Accelerated Gradient Methods (2011). Andrew Cotter, Ohad Shamir, Nathan Srebro, Karthik Sridharan
SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives (2014). Aaron Defazio, Francis Bach, Simon Lacoste-Julien
Barzilai-Borwein step size for stochastic gradient descent (2016). Conghui Tan, Shiqian Ma, Yu‐Hong Dai, Yuqiu Qian
SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient (2017). Lam M. Nguyen, Jie Liu, Katya Scheinberg, Martin Takáč
Direct Acceleration of SAGA using Sampled Negative Momentum (2019). Kaiwen Zhou, Qinghua Ding, Fanhua Shang, James Cheng, Danli Li, Zhi‐Quan Luo
SPIDER: Near-Optimal Non-Convex Optimization via Stochastic Path-Integrated Differential Estimator (2018). Cong Fang, Chris Junchi Li, Zhouchen Lin, Tong Zhang
A Simple Practical Accelerated Method for Finite Sums (2016). Aaron Defazio
Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming (2013). Saeed Ghadimi, Guanghui Lan
Non-convex Finite-Sum Optimization Via SCSG Methods (2017). Lihua Lei, Cheng Ju, Jianbo Chen, Michael I. Jordan
Don't decay the learning rate, increase the batch size (2018). Samuel Smith, Pieter-Jan Kindermans, Chris Ying, Quoc V. Le
Communication-efficient algorithms for decentralized and stochastic optimization (2018). Guanghui Lan, Soomin Lee, Yi Zhou
Stochastic Variance Reduction for Nonconvex Optimization (2016). Sashank J. Reddi, Ahmed Hefny, Suvrit Sra, Barnabás Póczos, Alex Smola
An optimal randomized incremental gradient method (2017). Guanghui Lan, Yi Zhou
Stabilized SVRG: Simple Variance Reduction for Nonconvex Optimization (2019). Rong Ge, Zhize Li, Weiyao Wang, Xiang Wang
Adaptive Gradient-Based Meta-Learning Methods (2019). Mikhail Khodak, Maria-Florina Balcan, Ameet Talwalkar