Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
We consider derivative-free algorithms for stochastic and nonstochastic convex optimization problems that use only function values rather than gradients. Focusing on nonasymptotic bounds on convergence rates, we show that if pairs of function values are available, algorithms for d-dimensional optimization that use gradient estimates based on random perturbations suffer a …
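The two-evaluation scheme the abstract refers to can be illustrated with a minimal sketch: estimate the gradient from a symmetric pair of function values along a random direction, then run plain stochastic gradient descent with that estimate. This is only an illustration of the two-point idea, not the paper's exact algorithm; the step size `eta`, smoothing parameter `delta`, and the Gaussian perturbation are illustrative choices.

```python
import numpy as np

def two_point_gradient(f, x, delta, rng):
    """Estimate grad f(x) from a pair of function values along a random direction u."""
    u = rng.standard_normal(x.shape)  # random perturbation direction
    return (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u

def zero_order_sgd(f, x0, steps=2000, eta=0.05, delta=1e-4, seed=0):
    """Stochastic gradient descent driven entirely by two-point estimates
    (no true gradients are ever computed)."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x -= eta * two_point_gradient(f, x, delta, rng)
    return x

# Illustrative d = 3 convex problem: minimize f(x) = ||x - c||^2.
c = np.array([1.0, -2.0, 0.5])
f = lambda x: np.sum((x - c) ** 2)
x_hat = zero_order_sgd(f, np.zeros(3))
```

For the Gaussian direction `u`, the estimator is unbiased for the gradient of a smoothed version of `f` as `delta` shrinks, which is what lets such function-value-only methods mimic first-order methods at the cost of a dimension-dependent factor in the convergence rate.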