A Reliable Effective Terascale Linear Learning System
Alekh Agarwal, Olivier Chapelle, Miroslav Dudík, John Langford

Type: Preprint
Publication Date: 2011-10-19
Citations: 54
Location: arXiv (Cornell University)
Similar Works
- Snap ML: A Hierarchical Framework for Machine Learning (2018). Celestine Dünner, Thomas Parnell, Dimitrios Sarigiannis, Nikolas Ioannou, Andreea Anghel, Gummadi Ravi, Madhusudanan Kandasamy, Haralampos Pozidis
- Jensen: An Easily-Extensible C++ Toolkit for Production-Level Machine Learning and Convex Optimization (2018). Rishabh Iyer, John T. Halloran, Kai Wei
- Efficient and Numerically Stable Sparse Learning (2010). Sihong Xie, Wei Fan, Olivier Verscheure, Jiangtao Ren
- A scalable modular convex solver for regularized risk minimization (2007). Choon Hui Teo, Alex Smola, S. V. N. Vishwanathan, Quoc V. Le
- Coresets for Data-efficient Training of Machine Learning Models (2019). Baharan Mirzasoleiman, Jeff Bilmes, Jure Leskovec
- Compute Trends Across Three Eras of Machine Learning (2022). Jaime Sevilla, Lennart Heim, Anson Ho, Tamay Besiroglu, Marius Hobbhahn, Pablo Villalobos
- Efficient Use of Limited-Memory Resources to Accelerate Linear Learning (2017). Celestine Dünner, Thomas Parnell, Martin Jaggi
- Small-Data, Large-Scale Linear Optimization (2018). Vishal Gupta
- Benchopt: Reproducible, efficient and collaborative optimization benchmarks (2022). Thomas Moreau, Mathurin Massias, Alexandre Gramfort, Pierre Ablin, Pierre-Antoine Bannier, Benjamin Charlier, Mathieu Dagréou, Tom Dupré la Tour, Ghislain Durif, Cássio F. Dantas
- Parallel training of linear models without compromising convergence (2018). Nikolas Ioannou, Celestine Dünner, Kornilios Kourtis, Thomas Parnell
- Deep Learning and Machine Learning -- Python Data Structures and Mathematics Fundamental: From Theory to Practice (2024). Silin Chen, Ziqian Bi, Junyu Liu, Benji Peng, Sen Zhang, Xiaoyong Pan, Jiawei Xu, Jinlang Wang, Keyu Chen, Caitlyn Heqi Yin
- Efficient Use of Limited-Memory Accelerators for Linear Learning on Heterogeneous Systems (2017). Celestine Dünner, Thomas Parnell, Martin Jaggi
- High-Performance Kernel Machines With Implicit Distributed Optimization and Randomization (2015). Haim Avron, Vikas Sindhwani
- On Linear Learning with Manycore Processors (2019). Eliza Wszola, Celestine Mendler-Dünner, Martin Jaggi, Markus Püschel
- Greenformers: Improving Computation and Memory Efficiency in Transformer Models via Low-Rank Approximation (2021). Samuel Cahyawijaya
Works That Cite This (30)
- Efficient Online Bootstrapping for Large Scale Learning (2013). Zhen Qin, V. Petříček, Nikos Karampatziakis, Lihong Li, John Langford
- Discriminative Features via Generalized Eigenvectors (2013). Nikos Karampatziakis, Paul Mineiro
- A distributed block coordinate descent method for training $l_1$ regularized linear classifiers (2014). Dhruv Mahajan, S. Sathiya Keerthi, S. Sundararajan
- Without-Replacement Sampling for Stochastic Gradient Methods: Convergence Results and Application to Distributed Optimization (2016). Ohad Shamir
- Distributed Coordinate Descent for L1-regularized Logistic Regression (2015). Ilya Trofimov, Alexander Genkin
- Efficient programmable learning to search (2014). Hal Daumé, John Langford, Stéphane Ross
- Improved Densification of One Permutation Hashing (2014). Anshumali Shrivastava, Ping Li
- Theory of Dual-sparse Regularized Randomized Reduction (2015). Tianbao Yang, Lijun Zhang, Rong Jin, Shenghuo Zhu
- Fundamental Limits of Online and Distributed Algorithms for Statistical Learning and Estimation (2013). Ohad Shamir
- Accelerated Mini-Batch Stochastic Dual Coordinate Ascent (2013). Shai Shalev-Shwartz, Tong Zhang
Works Cited by This (13)
- Parallel Online Learning (2011). Daniel Hsu, Nikos Karampatziakis, John Langford, Alex Smola
- Block truncated-Newton methods for parallel optimization (1989). Stephen G. Nash, Ariela Sofer
- A scalable modular convex solver for regularized risk minimization (2007). Choon Hui Teo, Alex Smola, S. V. N. Vishwanathan, Quoc V. Le
- On the limited memory BFGS method for large scale optimization (1989). Dong C. Liu, Jorge Nocedal
- Updating quasi-Newton matrices with limited storage (1980). Jorge Nocedal
- Feature hashing for large scale multitask learning (2009). Kilian Q. Weinberger, Anirban Dasgupta, John Langford, Alex Smola, Josh Attenberg
- Parallel Gradient Distribution in Unconstrained Optimization (1995). O. L. Mangasarian
- Gradient methods for minimizing composite objective function (2007). Yu. Nesterov
- Parallelized Stochastic Gradient Descent (2010). Martin Zinkevich, Markus Weimer, Lihong Li, Alex Smola
- GraphLab: A New Framework for Parallel Machine Learning (2010). Yucheng Low, Joseph E. Gonzalez, Aapo Kyrola, Danny Bickson, Carlos Guestrin, Joseph M. Hellerstein