
A. Elgabli, J. Park, A. S. Bedi, C. B. Issaid, M. Bennis and V. Aggarwal, "Q-GADMM: Quantized Group ADMM for Communication Efficient Decentralized Machine Learning," in IEEE Transactions on Communications, vol. 69, no. 1, pp. 164-181, Jan. 2021, doi: 10.1109/TCOMM.2020.3026398

Q-GADMM: quantized group ADMM for communication efficient decentralized machine learning

Author: Elgabli, Anis1; Park, Jihong2; Bedi, Amrit S.3; Ben Issaid, Chaouki1; Bennis, Mehdi1; Aggarwal, Vaneet4
Organizations: 1Centre for Wireless Communications, University of Oulu, Finland
2School of Information Technology, Deakin University, Geelong, VIC 3220, Australia
3Department of Electrical Engineering, IIT Kanpur, India
4School of Industrial Engineering and the School of Electrical and Computer Engineering, Purdue University, USA
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 5.4 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe202103096830
Language: English
Published: Institute of Electrical and Electronics Engineers, 2021
Publish Date: 2021-03-09
Description:

Abstract

In this article, we propose a communication-efficient decentralized machine learning (ML) algorithm, coined quantized group ADMM (Q-GADMM). To reduce the number of communication links, every worker in Q-GADMM communicates only with two neighbors, while updating its model via the group alternating direction method of multipliers (GADMM). Moreover, each worker transmits the quantized difference between its current model and its previously quantized model, thereby decreasing the communication payload size. However, due to the lack of a centralized entity in decentralized ML, the spatial sparsity and payload compression may incur error propagation, hindering model training convergence. To overcome this, we develop a novel stochastic quantization method that adaptively adjusts model quantization levels and their probabilities, and we prove the convergence of Q-GADMM for convex objective functions. Furthermore, to demonstrate the feasibility of Q-GADMM for non-convex and stochastic problems, we propose quantized stochastic GADMM (Q-SGADMM), which incorporates deep neural network architectures and stochastic sampling. Simulation results corroborate that Q-GADMM significantly outperforms GADMM in terms of communication efficiency while achieving the same accuracy and convergence speed for a linear regression task. Similarly, for an image classification task using a deep neural network (DNN), Q-SGADMM incurs a significantly lower total communication cost with identical accuracy and convergence speed compared to its counterpart without quantization, i.e., stochastic GADMM (SGADMM).
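The abstract's central communication-saving step is stochastically quantizing the difference between a worker's current model and its previously quantized model. The sketch below illustrates that idea under simple assumptions (a uniform grid over the difference's range, unbiased probabilistic rounding); it is not the paper's exact quantizer, and all names (stochastic_quantize, theta_hat_prev, etc.) are illustrative, not taken from the article.

```python
import numpy as np

def stochastic_quantize(delta, bits=4):
    """Unbiased stochastic quantization of a model-update difference.

    Maps each coordinate of `delta` onto a uniform grid spanning
    [delta.min(), delta.max()], then rounds up or down at random with
    probabilities chosen so the quantizer is unbiased in expectation.
    """
    levels = 2 ** bits - 1                        # number of grid intervals
    lo, hi = delta.min(), delta.max()
    step = (hi - lo) / levels if hi > lo else 1.0
    scaled = (delta - lo) / step                  # position on the grid
    floor = np.floor(scaled)
    prob_up = scaled - floor                      # round up w.p. the fractional part
    idx = floor + (np.random.rand(*delta.shape) < prob_up)
    return lo + idx * step                        # dequantized coordinates

# One worker's communication round (hypothetical variable names):
theta = np.random.randn(10)                       # current local model
theta_hat_prev = np.zeros(10)                     # previously quantized model
q_delta = stochastic_quantize(theta - theta_hat_prev, bits=4)
theta_hat = theta_hat_prev + q_delta              # what neighbors reconstruct
```

In a scheme of this kind, only the integer grid indices (bits per entry) plus the scalar range parameters would need to be sent to the two neighbors, rather than full-precision model vectors, which is what shrinks the per-round payload the abstract refers to.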


Series: IEEE transactions on communications
ISSN: 0090-6778
ISSN-E: 1558-0857
ISSN-L: 0090-6778
Volume: 69
Issue: 1
Pages: 164 - 181
DOI: 10.1109/TCOMM.2020.3026398
OADOI: https://oadoi.org/10.1109/TCOMM.2020.3026398
Type of Publication: A1 Journal article – refereed
Field of Science: 213 Electronic, automation and communications engineering, electronics
Funding: This work was supported in part by the INFOTECH Project NOOR, in part by the EU-CHISTERA projects LeadingEdge and CONNECT, and in part by the Academy of Finland through the MISSION and SMARTER projects.
Copyright information: © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.