High-Performance Kernel Machines With Implicit Distributed Optimization and Randomization

Summary: [This abstract is based on the authors' abstract.] We propose a framework for massive-scale training of kernel-based statistical models that combines distributed convex optimization with randomization techniques. Our approach uses a block-splitting variant of the alternating direction method of multipliers (ADMM), carefully reconfigured to handle very large random feature matrices under memory constraints while exploiting the hybrid parallelism typical of modern clusters of multicore machines. Our high-performance implementation supports a variety of statistical learning tasks within an extensible framework, offering several loss functions, regularization schemes, kernels, and layers of randomized approximation for both dense and sparse datasets. We evaluate the implementation on large-scale model construction tasks and compare it against existing sequential and parallel libraries. Supplementary materials for this article are available online.

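As a rough illustration of the randomization side of this approach, the sketch below approximates a Gaussian kernel with random Fourier features and fits a ridge-regularized linear model on the transformed data. It is a minimal stand-in, not the authors' implementation: the data, feature count, and bandwidth are hypothetical, and the closed-form ridge solve takes the place of the distributed block-splitting ADMM solver described in the abstract.

    import numpy as np

    rng = np.random.default_rng(0)

    def random_fourier_features(X, num_features, gamma, rng):
        # Map X (n x d) to Z (n x num_features) so that Z @ Z.T approximates
        # the Gaussian kernel matrix exp(-gamma * ||x_i - x_j||^2).
        d = X.shape[1]
        W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, num_features))
        b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
        return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

    # Hypothetical toy regression data; stands in for a large dense or sparse dataset.
    n, d = 500, 10
    X = rng.normal(size=(n, d))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

    Z = random_fourier_features(X, num_features=300, gamma=0.5, rng=rng)

    # Ridge regression on the random features (squared loss + l2 penalty).
    # The article's framework targets this class of problems, with other losses
    # and regularizers, but solves them with a distributed ADMM variant instead.
    lam = 1e-2
    w = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)
    print("training RMSE:", np.sqrt(np.mean((y - Z @ w) ** 2)))

In the setting the abstract describes, the random feature matrix Z is too large to materialize on a single machine, which is what motivates the memory-aware block splitting and hybrid parallelism of the proposed solver.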

  • Topics: Software and Technology (for statistics, measurement, analysis), Statistics
  • Keywords: Computation, Optimization, Randomization tests, Kernel density estimates
  • Authors: Avron, Haim; Sindhwani, Vikas
  • Journal: Technometrics