
S. Jamshidiha, V. Pourahmadi, A. Mohammadi and M. Bennis, "Link-Level Throughput Maximization Using Deep Reinforcement Learning," in IEEE Networking Letters, vol. 2, no. 3, pp. 101-105, Sept. 2020, doi: 10.1109/LNET.2020.3000334.

Link-level throughput maximization using deep reinforcement learning

Author: Jamshidiha, Saeed1; Pourahmadi, Vahid1; Mohammadi, Abbas1; Bennis, Mehdi2
Organizations: 1Department of Electrical Engineering, Amirkabir University of Technology, Tehran, Iran
2Centre for Wireless Communications, University of Oulu, Finland
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 0.4 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe202101181977
Language: English
Published: Institute of Electrical and Electronics Engineers, 2020
Publish Date: 2021-01-18
Description:

Abstract

A multi-agent deep reinforcement learning framework is proposed to address link-level throughput maximization through power allocation and modulation and coding scheme (MCS) selection. Given the complex problem space, reward shaping is utilized instead of classical training procedures: the time-frame utilities are decomposed into subframe rewards, and a stepwise training procedure is proposed, starting from a simplified power-allocation setup without MCS selection and incorporating MCS selection gradually as the agents learn the power allocation. The proposed method outperforms both the weighted minimum mean squared error (WMMSE) and fractional programming (FP) baselines with idealized MCS selection.
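As a rough illustration of the two ideas named in the abstract (decomposing a time-frame utility into per-subframe rewards, and a stepwise curriculum that unlocks MCS selection gradually), the following minimal Python sketch uses a tabular bandit-style learner as a stand-in for the deep agents. All names, power levels, rate thresholds, and the toy channel model are assumptions for illustration, not the authors' implementation.

```python
import random

POWER_LEVELS = [0.1, 0.5, 1.0]   # candidate transmit powers (assumed values)
MCS_RATES = [0.5, 1.0, 2.0]      # spectral efficiency per MCS index (assumed)

def subframe_reward(power, mcs, gain, noise=1.0):
    """Shaped per-subframe reward: the MCS rate if the SINR supports it,
    zero otherwise (a stand-in for a failed transmission)."""
    sinr = gain * power / noise
    return MCS_RATES[mcs] if sinr >= 2.0 ** MCS_RATES[mcs] - 1.0 else 0.0

def frame_utility(actions, gains):
    """Time-frame utility written as the sum of the subframe rewards it
    decomposes into -- the shaping identity the abstract describes."""
    return sum(subframe_reward(p, m, g) for (p, m), g in zip(actions, gains))

def train(episodes=2000, subframes=10, eps=0.1, lr=0.1):
    q = {}  # tabular stand-in for the deep RL agents in the letter
    for ep in range(episodes):
        # Stepwise curriculum: start with a single fixed MCS (power
        # allocation only) and unlock more MCS choices as training proceeds.
        n_mcs = 1 + min(len(MCS_RATES) - 1, ep * len(MCS_RATES) // episodes)
        actions, gains, total = [], [], 0.0
        for _ in range(subframes):
            gain = random.uniform(0.5, 4.0)          # toy channel gain
            key = (n_mcs, round(gain))               # coarse state bucket
            vals = q.setdefault(key, [[0.0] * len(MCS_RATES)
                                      for _ in POWER_LEVELS])
            if random.random() < eps:                # explore
                p_i = random.randrange(len(POWER_LEVELS))
                m_i = random.randrange(n_mcs)
            else:                                    # exploit
                p_i, m_i = max(((i, j) for i in range(len(POWER_LEVELS))
                                for j in range(n_mcs)),
                               key=lambda ij: vals[ij[0]][ij[1]])
            r = subframe_reward(POWER_LEVELS[p_i], m_i, gain)
            vals[p_i][m_i] += lr * (r - vals[p_i][m_i])  # bandit-style update
            actions.append((POWER_LEVELS[p_i], m_i))
            gains.append(gain)
            total += r
        # Sanity check: the frame utility equals the sum of its shaped
        # subframe rewards, so optimizing subframe rewards optimizes the frame.
        assert abs(total - frame_utility(actions, gains)) < 1e-9
    return q

if __name__ == "__main__":
    train()
```

The sketch keeps the curriculum logic in one place (the n_mcs schedule) so the same update rule serves both the power-allocation-only phase and the joint phase, mirroring the gradual incorporation of MCS selection described above.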


Series: IEEE Networking Letters
ISSN: 2576-3156
ISSN-E: 2576-3156
ISSN-L: 2576-3156
Volume: 2
Issue: 3
Pages: 101 - 105
DOI: 10.1109/LNET.2020.3000334
OADOI: https://oadoi.org/10.1109/LNET.2020.3000334
Type of Publication: A1 Journal article – refereed
Field of Science: 213 Electronic, automation and communications engineering, electronics
Copyright information: © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.