
X. Chen, H. Zhang, C. Wu, S. Mao, Y. Ji and M. Bennis, "Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning," in IEEE Internet of Things Journal, vol. 6, no. 3, pp. 4005-4018, June 2019. doi: 10.1109/JIOT.2018.2876279

Optimized computation offloading performance in virtual edge computing systems via deep reinforcement learning

Author: Chen, Xianfu1; Zhang, Honggang2; Wu, Celimuge3; Mao, Shiwen4; Ji, Yusheng5; Bennis, Mehdi6
Organizations: 1VTT Technical Research Centre of Finland, Oulu, Finland
2College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
3Graduate School of Informatics and Engineering, University of Electro-Communications, Tokyo, Japan
4Department of Electrical and Computer Engineering, Auburn University, Auburn, AL, USA
5Information Systems Architecture Research Division, National Institute of Informatics, Tokyo, Japan
6Centre for Wireless Communications, University of Oulu, Oulu, Finland
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 4.6 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe2019092429677
Language: English
Published: Institute of Electrical and Electronics Engineers, 2019
Publish Date: 2019-09-24
Description:

Abstract

To improve the quality of computation experience for mobile devices, mobile-edge computing (MEC) is a promising paradigm that provides computing capabilities in close proximity to mobile devices within a sliced radio access network (RAN) supporting both traditional communication and MEC services. Nevertheless, designing computation offloading policies for a virtual MEC system remains challenging. In particular, the decision of whether to execute a computation task at the mobile device or to offload it to an MEC server for execution should adapt to the time-varying network dynamics. This paper considers MEC for a representative mobile user in an ultradense sliced RAN, where multiple base stations (BSs) are available for computation offloading. The problem of finding an optimal computation offloading policy is modeled as a Markov decision process, where the objective is to maximize the long-term utility performance and each offloading decision is made based on the task queue state, the energy queue state, and the channel qualities between the mobile user and the BSs. To break the curse of high dimensionality in the state space, we first propose a double deep Q-network (DQN)-based strategic computation offloading algorithm that learns the optimal policy without a priori knowledge of the network dynamics. Then, motivated by the additive structure of the utility function, we combine a Q-function decomposition technique with the double DQN, yielding a novel learning algorithm for solving the stochastic computation offloading problem. Numerical experiments show that the proposed learning algorithms achieve a significant improvement in computation offloading performance compared with baseline policies.
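The two learning ingredients named in the abstract, the double-DQN update and the additive Q-function decomposition, can be sketched compactly. The following is a minimal NumPy illustration, not the paper's implementation: the action count, the discount factor, the function names, and the random vectors standing in for Q-network outputs are all assumptions made for the sake of a runnable example.

```python
import numpy as np

# Illustrative toy sketch only: the action count, discount factor, and
# random values standing in for Q-network outputs are assumptions,
# not the paper's actual networks or parameters.

rng = np.random.default_rng(0)
num_actions = 6   # e.g., local execution + 5 candidate BSs (assumed)
gamma = 0.9       # discount factor (assumed)

def double_dqn_target(reward, q_online_next, q_target_next, gamma):
    """Double DQN target: the online network selects the next action,
    the target network evaluates it, reducing Q-value overestimation."""
    a_star = np.argmax(q_online_next)               # selection (online net)
    return reward + gamma * q_target_next[a_star]   # evaluation (target net)

def composed_q(per_component_q):
    """With an additive utility u = sum_k u_k, a per-component Q_k can be
    learned for each term and recombined as Q(s, a) = sum_k Q_k(s, a)."""
    return np.sum(per_component_q, axis=0)

# Toy usage: random vectors stand in for Q-network outputs at the next state.
q_online_next = rng.normal(size=num_actions)
q_target_next = rng.normal(size=num_actions)
y = double_dqn_target(1.0, q_online_next, q_target_next, gamma)

q_components = rng.normal(size=(4, num_actions))  # e.g., 4 additive utility terms
q_total = composed_q(q_components)
print("double-DQN target:", y, "| composed Q-values:", q_total)
```

Decoupling action selection (online network) from action evaluation (target network) is the standard double-DQN remedy for the overestimation bias of vanilla Q-learning; the decomposition exploits the additive utility structure the abstract mentions so that each utility term gets its own, easier-to-learn Q-component.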


Series: IEEE internet of things journal
ISSN: 2372-2541
ISSN-E: 2327-4662
ISSN-L: 2327-4662
Volume: 6
Issue: 3
Pages: 4005 - 4018
DOI: 10.1109/JIOT.2018.2876279
OADOI: https://oadoi.org/10.1109/JIOT.2018.2876279
Type of Publication: A1 Journal article – refereed
Field of Science: 213 Electronic, automation and communications engineering, electronics
Funding: This research was supported in part by National Key R&D Program of China under Grant 2018YFB0803702, National Natural Science Foundation of China under Grants 61701439 and 61731002, Zhejiang Key Research and Development Plan under Grant 2018C03056, National Science Foundation under Grant CNS-1702957, Wireless Engineering Research and Engineering Center (WEREC) at Auburn University, JSPS KAKENHI under Grant JP16H02817, and Academy of Finland under Grant 289611.
Academy of Finland Grant Number: 289611
Detailed Information: 289611 (Academy of Finland Funding decision)
Copyright information: © 2018 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.