Alessio Maritan
All published works

- Network-GIANT: Fully Distributed Newton-Type Optimization via Harmonic Hessian Consensus (2023). Alessio Maritan, Ganesh D. Sharma, Luca Schenato, Subhrakanti Dey.
- ZO-JADE: Zeroth-order Curvature-Aware Multi-Agent Convex Optimization (2023). Alessio Maritan, Luca Schenato.
- Network-GIANT: Fully distributed Newton-type optimization via harmonic Hessian consensus (2023). Alessio Maritan, Ganesh D. Sharma, Luca Schenato, Subhrakanti Dey.
- ZO-JADE: Zeroth-Order Curvature-Aware Distributed Multi-Agent Convex Optimization (2023). Alessio Maritan, Luca Schenato.
- FedZeN: Towards superlinear zeroth-order federated learning via incremental Hessian estimation (2023). Alessio Maritan, Subhrakanti Dey, Luca Schenato.
Common Coauthors

Coauthor            Papers Together
Luca Schenato       5
Subhrakanti Dey     3
Ganesh D. Sharma    2
Commonly Cited References

- Lyapunov Theory for Discrete Time Systems (2018). Nicoletta Bof, Ruggero Carli, Luca Schenato. [referenced 2 times]
- Newton-Raphson Consensus for Distributed Convex Optimization (2015). Damiano Varagnolo, Filippo Zanella, Angelo Cenedese, Gianluigi Pillonetto, Luca Schenato. [referenced 2 times]
- ZO-JADE: Zeroth-Order Curvature-Aware Distributed Multi-Agent Convex Optimization (2023). Alessio Maritan, Luca Schenato. [referenced 2 times]
- Multi-Agent Distributed Optimization via Inexact Consensus ADMM (2014). Tsung-Hui Chang, Mingyi Hong, Xiangfeng Wang. [referenced 1 time]
- The NEWUOA software for unconstrained optimization without derivatives (2006). M. J. D. Powell. [referenced 1 time]
- On the Linear Convergence of the ADMM in Decentralized Consensus Optimization (2014). Wei Shi, Qing Ling, Kun Yuan, Gang Wu, Wotao Yin. [referenced 1 time]
- Fast Distributed Gradient Methods (2014). Dušan Jakovetić, João Xavier, José M. F. Moura. [referenced 1 time]
- Zeroth Order Nonconvex Multi-Agent Optimization over Networks (2017). Davood Hajinezhad, Mingyi Hong, Alfredo García. [referenced 1 time]
- Derivative-Free and Blackbox Optimization (2017). Charles Audet, Warren Hare. [referenced 1 time]
- Hessian-Aware Zeroth-Order Optimization for Black-Box Adversarial Attack (2018). Haishan Ye, Zhichao Huang, Cong Fang, Chris Junchi Li, Tong Zhang. [referenced 1 time]
- Newton-like Method with Diagonal Correction for Distributed Optimization (2017). Dragana Bajović, Dušan Jakovetić, Nataša Krejić, Nataša Krklec Jerinkić. [referenced 1 time]
- On the Convergence of Decentralized Gradient Descent (2016). Kun Yuan, Qing Ling, Wotao Yin. [referenced 1 time]
- Derivative-Free Optimization of Noisy Functions via Quasi-Newton Methods (2019). Albert S. Berahas, Richard H. Byrd, Jorge Nocedal. [referenced 1 time]
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs (2017). Angelia Nedić, Alex Olshevsky, Wei Shi. [referenced 1 time]
- A Decentralized Second-Order Method with Exact Linear Convergence Rate for Consensus Optimization (2016). Aryan Mokhtari, Wei Shi, Qing Ling, Alejandro Ribeiro. [referenced 1 time]
- Communication-Efficient Distributed Optimization in Networks with Gradient Tracking and Variance Reduction (2019). Boyue Li, Shicong Cen, Yuxin Chen, Yuejie Chi. [referenced 1 time]
- Hessian Inverse Approximation as Covariance for Random Perturbation in Black-Box Problems (2020). Jingyi Zhu. [referenced 1 time]
- Distributed Zero-Order Optimization under Adversarial Noise (2021). Arya Akhavan, Massimiliano Pontil, Alexandre B. Tsybakov. [referenced 1 time]
- Zeroth-order algorithms for stochastic distributed nonconvex optimization (2022). Xinlei Yi, Shengjun Zhang, Tao Yang, Karl Henrik Johansson. [referenced 1 time]
- Curvature-Aware Derivative-Free Optimization (2021). Bumsu Kim, HanQin Cai, Daniel McKenzie, Wotao Yin. [referenced 1 time]
- Zeroth-Order Stochastic Coordinate Methods for Decentralized Non-convex Optimization (2022). Shengjun Zhang, Colleen P. Bailey. [referenced 1 time]
- DONE: Distributed Approximate Newton-type Method for Federated Edge Learning (2022). Canh T. Dinh, Nguyen H. Tran, Tuan Dung Nguyen, Wei Bao, Amir Rezaei Balef, Bing Bing Zhou, Albert Y. Zomaya. [referenced 1 time]
- FedNL: Making Newton-Type Methods Applicable to Federated Learning (2021). Mher Safaryan, Rustem Islamov, Xun Qian, Peter Richtárik. [referenced 1 time]
- Zeroth-order Nonconvex Stochastic Optimization: Handling Constraints, High-Dimensionality and Saddle-Points (2018). Krishnakumar Balasubramanian, Saeed Ghadimi. [referenced 1 time]
- GIANT: Globally Improved Approximate Newton Method for Distributed Optimization (2017). Shusen Wang, Fred Roosta, Peng Xu, Michael W. Mahoney. [referenced 1 time]
- Introduction to Derivative-Free Optimization (2009). Andrew R. Conn, Katya Scheinberg, L. N. Vicente. [referenced 1 time]
- Geometry of sample sets in derivative-free optimization: polynomial regression and underdetermined interpolation (2008). Andrew R. Conn, Katya Scheinberg, L. N. Vicente. [referenced 1 time]