Parallel Gradient Distribution in Unconstrained Optimization
A parallel version is proposed for a fundamental theorem of serial unconstrained optimization. The parallel theorem allows each of k parallel processors to simultaneously use a different algorithm, such as a descent, Newton, quasi-Newton, or conjugate gradient method. Each processor can perform one or more steps of a serial algorithm …
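To make the setting concrete, here is a minimal sketch of the general idea described above: k workers each apply a different serial method (conjugate gradient, quasi-Newton, Newton-type) to the same smooth objective for a bounded number of steps, after which the best iterate is kept. This is only an illustration under assumed details, not the paper's algorithm; the objective, the helper names (rosenbrock, run_worker), and the choice of SciPy solvers are all assumptions introduced for the example.

```python
# Hypothetical sketch: one round of "each processor runs its own serial
# algorithm in parallel, then synchronize on the best point found".
from concurrent.futures import ProcessPoolExecutor

import numpy as np
from scipy.optimize import minimize


def rosenbrock(x):
    """Smooth test objective (assumed for illustration)."""
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2
                        + (1.0 - x[:-1]) ** 2))


def rosenbrock_grad(x):
    """Analytic gradient of the test objective."""
    g = np.zeros_like(x)
    g[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1] ** 2) - 2.0 * (1.0 - x[:-1])
    g[1:] += 200.0 * (x[1:] - x[:-1] ** 2)
    return g


def run_worker(args):
    """One processor: perform a bounded number of steps of its own method."""
    method, x0, maxiter = args
    res = minimize(rosenbrock, x0, jac=rosenbrock_grad,
                   method=method, options={"maxiter": maxiter})
    return method, res.x, res.fun


if __name__ == "__main__":
    x0 = np.full(10, -1.2)
    # One distinct serial algorithm per processor, as in the abstract:
    # conjugate gradient, quasi-Newton (BFGS), Newton-type, limited-memory.
    methods = ["CG", "BFGS", "Newton-CG", "L-BFGS-B"]
    with ProcessPoolExecutor(max_workers=len(methods)) as pool:
        results = list(pool.map(run_worker, [(m, x0, 50) for m in methods]))
    # Synchronization step (assumed policy): keep the best iterate found.
    best = min(results, key=lambda r: r[2])
    print(f"best method after one round: {best[0]}, f = {best[2]:.3e}")
```

In practice the synchronization rule (here, simply taking the lowest objective value) is a design choice; the point of the sketch is only that heterogeneous serial algorithms can run side by side on the same problem, each advancing one or more steps between synchronizations.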