High-Performance Kernel Machines With Implicit Distributed Optimization and Randomization

We propose a framework for massive-scale training of kernel-based statistical models that combines distributed convex optimization with randomization techniques. Our approach uses a block-splitting variant of the alternating direction method of multipliers, carefully reconfigured to handle very large random feature matrices under memory constraints, while exploiting hybrid …
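The randomization the abstract refers to is presumably a random feature approximation of the kernel. As an illustration only (the paper's exact construction is not given here), the following is a minimal sketch of random Fourier features in the style of Rahimi and Recht, which approximate the Gaussian (RBF) kernel by an explicit low-dimensional feature map; all function and parameter names below are hypothetical:

```python
import numpy as np

def random_fourier_features(X, D, gamma, rng):
    """Map X (n x d) to an n x D feature matrix Z such that
    Z @ Z.T approximates the RBF kernel exp(-gamma * ||x - y||^2)."""
    d = X.shape[1]
    # Frequencies drawn from the Fourier transform of the RBF kernel.
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Z = random_fourier_features(X, D=4096, gamma=0.5, rng=rng)
K_approx = Z @ Z.T

# Exact RBF kernel for comparison; the approximation error
# shrinks at roughly O(1/sqrt(D)).
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-0.5 * sq_dists)
```

Because the kernel is replaced by an explicit feature matrix `Z`, the training problem becomes a (very wide) linear model, which is what makes block-splitting distributed solvers such as ADMM applicable at scale.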