
D. Wen, K. -J. Jeon, M. Bennis and K. Huang, "Adaptive Subcarrier, Parameter, and Power Allocation for Partitioned Edge Learning Over Broadband Channels," in IEEE Transactions on Wireless Communications, vol. 20, no. 12, pp. 8348-8361, Dec. 2021, doi: 10.1109/TWC.2021.3092075

Adaptive subcarrier, parameter, and power allocation for partitioned edge learning over broadband channels

Author: Wen, Dingzhu1; Jeon, Ki-Jun2; Bennis, Mehdi3; Huang, Kaibin1
Organizations: 1Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong
2LG Electronics, Seoul 06772, South Korea
3Department of Communications Engineering, University of Oulu, FI-90014 Oulu, Finland
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 0.7 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe2022012811132
Language: English
Published: Institute of Electrical and Electronics Engineers, 2021
Publish Date: 2022-01-28

Abstract

In this paper, we consider partitioned edge learning (PARTEL), which implements parameter-server training, a well-known distributed learning method, in a wireless network. Thereby, PARTEL leverages distributed computation resources at edge devices to train a large-scale artificial intelligence (AI) model by dynamically partitioning the model into parametric blocks for separate updating at devices. Targeting broadband channels, we consider the joint control of parameter allocation, sub-channel allocation, and transmission power to improve the performance of PARTEL. Specifically, the policies for joint SUbcarrier, Parameter, and POweR allocaTion (SUPPORT) are optimized under the criterion of minimum learning latency. Two cases are considered. First, for the case of decomposable models (e.g., logistic regression), the latency-minimization problem is a non-convex mixed-integer program. Due to its intractability, we develop a practical solution by relaxing the integer variables and transforming the problem into an equivalent convex problem of model-size maximization under a latency constraint. Thereby, a low-complexity algorithm is designed to compute the SUPPORT policy. Second, we consider the case of deep neural network (DNN) models, which can be trained using PARTEL by introducing some auxiliary variables. This, however, introduces constraints on model partitioning, reducing the granularity of parameter allocation. The preceding policy is extended to DNN models by applying the proposed techniques of load rounding and proportional adjustment to rein in the latency expansion caused by the load-granularity constraints.
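The final step of the abstract refers to two granularity-handling techniques, load rounding and proportional adjustment, which map a relaxed (real-valued) parameter allocation onto the block sizes imposed by DNN partitioning. The following Python sketch illustrates one plausible form such a step could take; the function name round_and_adjust_loads, the rounding rule, and the residual-absorption step are illustrative assumptions and not the paper's exact algorithm.

def round_and_adjust_loads(relaxed_loads, granularities, total_params):
    """Illustrative sketch (not the paper's algorithm): quantize each device's
    relaxed parameter load to its block granularity, then rescale so the
    quantized loads still cover the whole model."""
    # Load rounding: snap each relaxed load to the nearest feasible multiple of
    # that device's block granularity (e.g., the size of an assignable DNN block).
    rounded = [max(g, g * round(load / g))
               for load, g in zip(relaxed_loads, granularities)]
    # Proportional adjustment: scale the rounded loads so their sum approaches
    # the total model size, then re-quantize to keep each load feasible.
    scale = total_params / sum(rounded)
    adjusted = [g * max(1, round(r * scale / g))
                for r, g in zip(rounded, granularities)]
    # Absorb any residual mismatch at the device with the finest granularity
    # (assumes the residual is compatible with that device's block size).
    idx = min(range(len(granularities)), key=lambda i: granularities[i])
    adjusted[idx] += total_params - sum(adjusted)
    return adjusted

# Example: three edge devices sharing a 1,000,000-parameter model.
loads = round_and_adjust_loads(
    relaxed_loads=[420_000.7, 310_500.2, 269_499.1],
    granularities=[10_000, 25_000, 50_000],
    total_params=1_000_000,
)
print(loads, sum(loads))  # e.g., [450000, 300000, 250000] 1000000

The point of the sketch is that rounding alone can leave part of the model unassigned, so a proportional pass and a final residual fix-up are needed before the block-level allocation covers the full parameter set.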


Series: IEEE Transactions on Wireless Communications
ISSN: 1536-1276
ISSN-E: 1558-2248
ISSN-L: 1536-1276
Volume: 20
Issue: 12
Pages: 8348 - 8361
DOI: 10.1109/TWC.2021.3092075
OADOI: https://oadoi.org/10.1109/TWC.2021.3092075
Type of Publication: A1 Journal article – refereed
Field of Science: 213 Electronic, automation and communications engineering, electronics
113 Computer and information sciences
Funding: The work of Dingzhu Wen and Kaibin Huang was supported in part by the Guangdong Basic and Applied Basic Research Foundation under Grant 2019B1515130003, in part by the Hong Kong Research Grants Council under Grant 17208319 and Grant 17209917, in part by the Innovation and Technology Fund under Grant GHP/016/18GD, and in part by the Shenzhen Science and Technology Program under Grant JCYJ20200109141414409. The work of Mehdi Bennis was supported by the EU-CHISTERA Projects, COmmunicatioN-aware dyNamic Edge CompuTing (CONNECT) and LeadingEdge.
Copyright information: © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.