
X. Chen, H. Zhang, C. Wu, S. Mao, Y. Ji and M. Bennis, "Performance Optimization in Mobile-Edge Computing via Deep Reinforcement Learning," 2018 IEEE 88th Vehicular Technology Conference (VTC-Fall), Chicago, IL, USA, 2018, pp. 1-6. doi: 10.1109/VTCFall.2018.8690980

Performance optimization in mobile-edge computing via deep reinforcement learning

Author: Chen, Xianfu1; Zhang, Honggang2; Wu, Celimuge3; Mao, Shiwen4; Ji, Yusheng5; Bennis, Mehdi6
Organizations: 1VTT Technical Research Centre of Finland, Finland
2College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
3Graduate School of Informatics and Engineering, University of Electro-Communications, Tokyo, Japan
4Department of Electrical and Computer Engineering, Auburn University, Auburn, AL, USA
5Information Systems Architecture Research Division, National Institute of Informatics, Tokyo, Japan
6Centre for Wireless Communications, University of Oulu, Finland
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 0.7 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe202002195881
Language: English
Published: Institute of Electrical and Electronics Engineers, 2018
Publish Date: 2020-02-19

Abstract

To improve the quality of computation experience for mobile devices, mobile-edge computing (MEC) is emerging as a promising paradigm that provides computing capabilities within the radio access network, in close proximity to mobile users. Nevertheless, the design of computation offloading policies for a MEC system remains challenging. Specifically, the decision of whether to execute an arriving computation task at the local mobile device or to offload it for cloud execution should adapt to the environmental dynamics in a smart manner. In this paper, we consider MEC for a representative mobile user in an ultra-dense network, where one of multiple base stations (BSs) can be selected for computation offloading. The problem of finding an optimal computation offloading policy is modelled as a Markov decision process, in which our objective is to minimize the long-term cost and an offloading decision is made based on the channel qualities between the mobile user and the BSs, the energy queue state, and the task queue state. To break the curse of high dimensionality in the state space, we propose a deep Q-network-based strategic computation offloading algorithm that learns the optimal policy without a priori knowledge of the dynamic statistics. Numerical experiments show that the proposed algorithm achieves a significant improvement in average cost compared with baseline policies.
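
The deep Q-network described in the abstract maps the observed state (per-BS channel qualities plus the energy and task queue states) to an offloading action and is trained to minimize a long-term cost. Below is a minimal PyTorch sketch of that pattern only; the state layout, network sizes, hyper-parameters, and the cost-to-reward conversion are illustrative assumptions and do not reproduce the paper's exact algorithm.

import random
from collections import deque

import torch
import torch.nn as nn

NUM_BS = 4                      # hypothetical number of candidate base stations
STATE_DIM = NUM_BS + 2          # per-BS channel quality + energy queue + task queue
NUM_ACTIONS = NUM_BS + 1        # action 0 = local execution, 1..NUM_BS = offload to that BS

class QNetwork(nn.Module):
    # Q(s, .) approximator; layer widths are illustrative, not taken from the paper.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, NUM_ACTIONS),
        )

    def forward(self, state):
        return self.net(state)

q_net = QNetwork()
target_net = QNetwork()
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)   # experience replay buffer
GAMMA, EPSILON, BATCH = 0.99, 0.1, 64

def select_action(state):
    # Epsilon-greedy offloading decision over {local, BS_1, ..., BS_K}.
    if random.random() < EPSILON:
        return random.randrange(NUM_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax())

def train_step():
    # One DQN update. The per-slot cost is negated into a reward so that the
    # standard max-based Bellman target minimizes the long-term discounted cost.
    if len(replay) < BATCH:
        return
    s, a, cost, s2 = zip(*random.sample(replay, BATCH))
    s, s2 = torch.stack(s), torch.stack(s2)
    a = torch.tensor(a)
    cost = torch.tensor(cost, dtype=torch.float32)
    q = q_net(s).gather(1, a.view(-1, 1)).squeeze(1)
    with torch.no_grad():
        target = -cost + GAMMA * target_net(s2).max(1).values
    loss = nn.functional.smooth_l1_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # In a full training loop, target_net would be synced to q_net every few
    # hundred steps: target_net.load_state_dict(q_net.state_dict())

# Example: one offloading decision for a random, purely illustrative observation.
state = torch.rand(STATE_DIM)   # normalized channel gains and queue lengths
action = select_action(state)   # 0 -> execute locally, k >= 1 -> offload to BS k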


Series: IEEE Vehicular Technology Conference
ISSN: 1090-3038
ISSN-L: 1090-3038
ISBN: 978-1-5386-6358-5
ISBN Print: 978-1-5386-6359-2
Pages: 1 - 7
DOI: 10.1109/VTCFall.2018.8690980
OADOI: https://oadoi.org/10.1109/VTCFall.2018.8690980
Host publication: 2018 IEEE 88th Vehicular Technology Conference (VTC-Fall), Chicago, IL, USA, 27-30 August 2018
Conference: IEEE Vehicular Technology Conference
Type of Publication: A4 Article in conference proceedings
Field of Science: 213 Electronic, automation and communications engineering, electronics
Copyright information: © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.