University of Oulu

Elgabli, A., Park, J., Bedi, A. S., Bennis, M., Aggarwal, V., GADMM: fast and communication efficient framework for distributed machine learning, Journal of Machine Learning Research, Vol. 21, No. 76, ISSN: 1532-4435, http://jmlr.org/papers/v21/19-718.html

GADMM: fast and communication efficient framework for distributed machine learning

Author: Elgabli, Anis; Park, Jihong; Bedi, Amrit S.; Bennis, Mehdi; Aggarwal, Vaneet
Format: article
Version: published version
Access: open
Online Access: PDF Full Text (PDF, 2 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe2020050825688
Language: English
Published: Journal of machine learning research, 2020
Publish Date: 2020-05-08
Description:

Abstract

When data is distributed across multiple servers, lowering the communication cost between the servers (or workers) while solving the distributed learning problem is an important challenge and is the focus of this paper. In particular, we propose a fast and communication-efficient decentralized framework to solve the distributed machine learning (DML) problem. The proposed algorithm, Group Alternating Direction Method of Multipliers (GADMM), is based on the Alternating Direction Method of Multipliers (ADMM) framework. The key novelty in GADMM is that it solves the problem in a decentralized topology where at most half of the workers compete for the limited communication resources at any given time. Moreover, each worker exchanges its locally trained model only with two neighboring workers, thereby training a global model with less communication overhead in each exchange. We prove that GADMM converges to the optimal solution for convex loss functions, and we show numerically that it converges faster and is more communication-efficient than state-of-the-art communication-efficient algorithms such as the Lazily Aggregated Gradient (LAG) and dual averaging, on linear and logistic regression tasks with synthetic and real datasets. Furthermore, we propose Dynamic GADMM (D-GADMM), a variant of GADMM, and prove its convergence under a time-varying network topology of the workers.
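To illustrate the head/tail alternation and neighbor-only exchange described above, the following is a minimal sketch of a GADMM-style iteration for decentralized linear regression over a chain of workers. It assumes quadratic local losses with per-worker data (A_n, b_n), a penalty parameter rho, and closed-form local solves; these choices and all function and variable names are illustrative assumptions, not taken from the paper, whose exact update rules are not reproduced in the abstract.

```python
# Sketch of GADMM-style head/tail updates for decentralized linear regression
# over a chain of N workers, minimizing sum_n 0.5*||A_n @ theta - b_n||^2
# subject to the chain consensus constraints theta_n = theta_{n+1}.
# Hypothetical example; not the authors' reference implementation.
import numpy as np

def gadmm_linear_regression(A_list, b_list, rho=1.0, iters=200):
    N = len(A_list)
    d = A_list[0].shape[1]
    theta = [np.zeros(d) for _ in range(N)]        # local models
    lam = [np.zeros(d) for _ in range(N - 1)]      # dual for each link theta_n = theta_{n+1}

    def local_update(n):
        # Closed-form solve of worker n's quadratic subproblem, using only
        # the latest models and duals of its (at most) two chain neighbors.
        H = A_list[n].T @ A_list[n]
        g = A_list[n].T @ b_list[n]
        if n > 0:                                  # link (n-1, n)
            H += rho * np.eye(d)
            g += lam[n - 1] + rho * theta[n - 1]
        if n < N - 1:                              # link (n, n+1)
            H += rho * np.eye(d)
            g += -lam[n] + rho * theta[n + 1]
        return np.linalg.solve(H, g)

    for _ in range(iters):
        for n in range(0, N, 2):                   # head group updates in parallel
            theta[n] = local_update(n)
        for n in range(1, N, 2):                   # tail group updates with fresh head models
            theta[n] = local_update(n)
        for n in range(N - 1):                     # dual ascent on each chain link
            lam[n] += rho * (theta[n] - theta[n + 1])
    return theta
```

In this sketch the even-indexed workers play the role of the head group and the odd-indexed workers the tail group, so only half of the workers update (and would transmit) at a time, and each worker ever needs only its two chain neighbors' latest models, matching the communication pattern the abstract describes.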


Series: Journal of machine learning research
ISSN: 1532-4435
ISSN-E: 1533-7928
ISSN-L: 1532-4435
Volume: 21
Issue: 76
Pages: 1 - 39
Type of Publication: A1 Journal article – refereed
Field of Science: 213 Electronic, automation and communications engineering, electronics
Subjects:
Copyright information: © 2020 Anis Elgabli, Jihong Park, Amrit S. Bedi, Mehdi Bennis, and Vaneet Aggarwal. License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v21/19-718.html.