An Bian

Commonly Cited References
Format: Title (year). Authors. [# of times referenced]

- A Distributed Quasi-Newton Algorithm for Empirical Risk Minimization with Nonsmooth Regularization (2018). Ching-pei Lee, Cong Han Lim, Stephen J. Wright. [2]
- AIDE: Fast and Communication Efficient Distributed Optimization (2016). Sashank J. Reddi, Jakub Konečný, Peter Richtárik, Barnabás Póczos, Alexander J. Smola. [2]
- Distributed block-diagonal approximation methods for regularized empirical risk minimization (2019). Ching-pei Lee, Kai-Wei Chang. [2]
- HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent (2011). Feng Niu, Benjamin Recht, Christopher Ré, Stephen J. Wright. [2]
- Parallelized Stochastic Gradient Descent (2010). Martin Zinkevich, Markus Weimer, Lihong Li, Alex Smola. [2]
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization (2015). Wei Shi, Qing Ling, Gang Wu, Wotao Yin. [1]
- Distributed Block Coordinate Descent for Minimizing Partially Separable Functions (2015). Jakub Mareček, Peter Richtárik, Martin Takáč. [1]
- On the Convergence of Decentralized Gradient Descent (2016). Kun Yuan, Qing Ling, Wotao Yin. [1]
- DiSCO: Distributed Optimization for Self-Concordant Empirical Loss (2015). Yuchen Zhang, Lin Xiao. [1]
- Distributed coordinate descent method for learning with big data (2016). Peter Richtárik, Martin Takáč. [1]
- Submodular Functions: from Discrete to Continuous Domains (2018). Francis Bach. [1]
- Adding vs. Averaging in Distributed Primal-Dual Optimization (2015). Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtárik, Martin Takáč. [1]
- Stochastic methods for ℓ1-regularized loss minimization (2009). Shai Shalev-Shwartz, Ambuj Tewari. [1]
- Newton's Method for Large Bound-Constrained Optimization Problems (1999). Chih-Jen Lin, Jorge J. Moré. [1]
- Adaptive cubic regularisation methods for unconstrained optimization. Part II: worst-case function- and derivative-evaluation complexity (2010). Coralia Cartis, Nicholas I. M. Gould, Philippe L. Toint. [1]
- Cubic regularization of Newton method and its global performance (2006). Yurii Nesterov, B. T. Polyak. [1]
- An Interior-Point Method for Large-Scale ℓ1-Regularized Logistic Regression (2007). Kwangmoo Koh, Seung-Jean Kim, Stephen Boyd. [1]
- A coordinate gradient descent method for nonsmooth separable minimization (2007). Paul Tseng, Sangwoon Yun. [1]
- Submodular Function Maximization via the Multilinear Relaxation and Contention Resolution Schemes (2014). Chandra Chekuri, J. Vondrák, Rico Zenklusen. [1]
- Scikit-learn: Machine Learning in Python (2012). Fabián Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron J. Weiss, Vincent Dubourg. [1]
- Parallel Coordinate Descent for L1-Regularized Loss Minimization (2011). Joseph K. Bradley, Aapo Kyrola, Danny Bickson, Carlos Guestrin. [1]
- Federated Optimization: Distributed Optimization Beyond the Datacenter (2015). Jakub Konečný, H. Brendan McMahan, Daniel Ramage. [1]
- On the Linear Convergence of the ADMM in Decentralized Consensus Optimization (2014). Wei Shi, Qing Ling, Kun Yuan, Gang Wu, Wotao Yin. [1]
- Monte Carlo sampling methods using Markov chains and their applications (1970). W. Keith Hastings. [1]
- Determinantal Point Processes for Machine Learning (2012). Alex Kulesza. [1]
- From MAP to Marginals: Variational Inference in Bayesian Submodular Models (2014). Josip Djolonga, Andreas Krause. [1]
- On the O(1/k) convergence of asynchronous distributed alternating Direction Method of Multipliers (2013). Ermin Wei, Asuman Ozdaglar. [1]
- Near-Optimal MAP Inference for Determinantal Point Processes (2012). Jennifer Gillenwater, Alex Kulesza, Ben Taskar. [1]
- An improved GLMNET for L1-regularized logistic regression (2012). Guo-Xun Yuan, Chia-Hua Ho, Chih-Jen Lin. [1]
- DSA: Decentralized Double Stochastic Averaging Gradient Algorithm (2015). Aryan Mokhtari, Alejandro Ribeiro. [1]
- Bundle CDN: A Highly Parallelized Approach for Large-Scale ℓ1-Regularized Logistic Regression (2013). Yatao Bian, Xiong Li, Mingqi Cao, Yuncai Liu. [1]
- Distributed Stochastic Variance Reduced Gradient Methods and A Lower Bound for Communication Complexity (2015). Jason D. Lee, Qihang Lin, Tengyu Ma, Tianbao Yang. [1]
- Guaranteed Non-convex Optimization: Submodular Maximization over Continuous Domains (2016). Yatao Bian, Baharan Mirzasoleiman, Joachim M. Buhmann, Andreas Krause. [1]
- A Reduction for Optimizing Lattice Submodular Functions with Diminishing Returns (2016). Alina Ene, Huy L. Nguyễn. [1]
- Federated Learning: Strategies for Improving Communication Efficiency (2016). Jakub Konečný, H. Brendan McMahan, Felix X. Yu, Peter Richtárik, Ananda Theertha Suresh, Dave Bacon. [1]
- Distributed coordinate descent for generalized linear models with regularization (2017). Ilya Trofimov, Alexander Genkin. [1]
- CoCoA: A General Framework for Communication-Efficient Distributed Optimization (2016). Virginia Smith, Simone Forte, Chenxin Ma, Martin Takáč, Michael I. Jordan, Martin Jaggi. [1]
- Non-Monotone DR-Submodular Function Maximization (2017). Tasuku Soma, Yuichi Yoshida. [1]
- Optimal algorithms for smooth and strongly convex distributed optimization in networks (2017). Kevin Scaman, Francis Bach, Sébastien Bubeck, Yin Tat Lee, Laurent Massoulié. [1]
- Decentralized Consensus Optimization With Asynchrony and Delays (2017). Tianyu Wu, Kun Yuan, Qing Ling, Wotao Yin, Ali H. Sayed. [1]
- Sparse Online Learning via Truncated Gradient (2008). John Langford, Lihong Li, Tong Zhang. [1]
- GIANT: Globally Improved Approximate Newton Method for Distributed Optimization (2017). Shusen Wang, Farbod Roosta-Khorasani, Peng Xu, Michael W. Mahoney. [1]
- Online Continuous Submodular Maximization (2018). Lin Chen, Hamed Hassani, Amin Karbasi. [1]
- Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings (2018). Aryan Mokhtari, Hamed Hassani, Amin Karbasi. [1]
- D²: Decentralized Training over Decentralized Data (2018). Hanlin Tang, Xiangru Lian, Ming Yan, Ce Zhang, Ji Liu. [1]
- Inexact Successive Quadratic Approximation for Regularized Optimization (2018). Ching-pei Lee, Stephen J. Wright. [1]
- Optimal Algorithms for Continuous Non-monotone Submodular and DR-Submodular Maximization (2018). Rad Niazadeh, Tim Roughgarden, Joshua R. Wang. [1]
- A Distributed Second-Order Algorithm You Can Trust (2018). Celestine Dünner, Aurélien Lucchi, Matilde Gargiani, An Bian, Thomas Hofmann, Martin Jaggi. [1]
- Optimal Algorithms for Non-Smooth Distributed Optimization in Networks (2018). Kevin Scaman, Francis Bach, Sébastien Bubeck, Yin Tat Lee, Laurent Massoulié. [1]
- Posterior agreement for large parameter-rich optimization problems (2018). Joachim M. Buhmann, Julien Dumazert, Alexey Gronskiy, Wojciech Szpankowski. [1]