University of Oulu

X. Chen, C. Wu, T. Chen, Z. Liu, M. Bennis and Y. Ji, "Age of Information-Aware Resource Management in UAV-Assisted Mobile-Edge Computing Systems," GLOBECOM 2020 - 2020 IEEE Global Communications Conference, Taipei, Taiwan, 2020, pp. 1-6, doi: 10.1109/GLOBECOM42002.2020.9322632

Age of information-aware resource management in UAV-assisted mobile-edge computing systems

Author: Chen, Xianfu1; Wu, Celimuge2; Chen, Tao1; Liu, Zhi3; Bennis, Mehdi4; Ji, Yusheng5
Organizations: 1VTT Technical Research Centre of Finland, Finland
2Graduate School of Informatics and Engineering, University of Electro-Communications, Tokyo, Japan
3Department of Mathematical and Systems Engineering, Shizuoka University, Japan
4Centre for Wireless Communications, University of Oulu, Finland
5Information Systems Architecture Research Division, National Institute of Informatics, Tokyo, Japan
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 0.2 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe202102154772
Language: English
Published: Institute of Electrical and Electronics Engineers, 2020
Publish Date: 2021-02-15
Abstract

This paper investigates the problem of age of information (AoI)-aware resource management in an unmanned aerial vehicle (UAV)-assisted mobile-edge computing (MEC) system, which is deployed by an infrastructure provider (InP). A service provider leases resources from the InP to serve the mobile users (MUs) with sporadic computation requests. Due to the limited number of channels and the finite shared I/O resource of the UAV, the MUs compete to schedule local and remote task computations in accordance with the observations of system dynamics. The aim of each MU is to selfishly maximize the expected long-term computation performance. We formulate the non-cooperative interactions among the MUs as a stochastic game. To approach the Nash equilibrium solutions, we propose a novel online deep reinforcement learning (DRL) scheme, which enables each MU to behave using its local conjectures only. The DRL scheme employs two separate deep Q-networks to approximate the Q-factor and the post-decision Q-factor for each MU. Numerical experiments show the potential of the online DRL scheme in balancing the tradeoff between AoI and energy consumption.
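The abstract describes a per-user learner built from two deep Q-networks, one approximating the Q-factor and one the post-decision Q-factor. The following is a minimal illustrative sketch of such a two-network agent in PyTorch; the architecture, state/action dimensions, epsilon-greedy policy, and the exact update targets are assumptions chosen for demonstration, not the authors' implementation.

# Illustrative sketch only (not the paper's implementation): each mobile user (MU)
# keeps two deep Q-networks, one approximating the Q-factor and one the
# post-decision Q-factor, and schedules local/remote computation with an
# epsilon-greedy policy. Dimensions, architecture, and update targets are
# assumptions for demonstration.
import torch
import torch.nn as nn
import torch.optim as optim


class QNetwork(nn.Module):
    """Small MLP mapping an observed (or post-decision) state to per-action values."""

    def __init__(self, state_dim: int, num_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


class MobileUserAgent:
    """One MU's learner with separate Q-factor and post-decision Q-factor networks."""

    def __init__(self, state_dim: int, num_actions: int, gamma: float = 0.99, lr: float = 1e-3):
        self.gamma = gamma
        self.num_actions = num_actions
        self.q_net = QNetwork(state_dim, num_actions)      # Q-factor approximation
        self.pds_net = QNetwork(state_dim, num_actions)    # post-decision Q-factor approximation
        self.optimizer = optim.Adam(
            list(self.q_net.parameters()) + list(self.pds_net.parameters()), lr=lr
        )

    def act(self, state: torch.Tensor, epsilon: float = 0.1) -> int:
        """Epsilon-greedy choice among computation-scheduling actions."""
        if torch.rand(()).item() < epsilon:
            return int(torch.randint(self.num_actions, ()).item())
        with torch.no_grad():
            return int(self.q_net(state).argmax().item())

    def update(self, state, action, reward, pds_state, next_state) -> float:
        """One TD-style step for both networks (an assumed, simplified update rule)."""
        q_sa = self.q_net(state)[action]
        pds_sa = self.pds_net(pds_state)[action]
        with torch.no_grad():
            next_value = self.q_net(next_state).max()
        # Q-factor target: immediate reward plus discounted post-decision value.
        loss_q = (q_sa - (reward + self.gamma * pds_sa.detach())) ** 2
        # Post-decision Q-factor target: value of the observed next state.
        loss_pds = (pds_sa - next_value) ** 2
        loss = loss_q + loss_pds
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()
        return float(loss.item())

Splitting the learner into a Q-factor network and a post-decision Q-factor network follows the decomposition named in the abstract; here the two networks are simply trained jointly with a shared optimizer as one plausible arrangement.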


Series: IEEE Global Communications Conference
ISSN: 2334-0983
ISSN-E: 2576-6813
ISSN-L: 2334-0983
ISBN: 978-1-7281-8298-8
ISBN Print: 978-1-7281-8299-5
Pages: 1 - 6
DOI: 10.1109/GLOBECOM42002.2020.9322632
OADOI: https://oadoi.org/10.1109/GLOBECOM42002.2020.9322632
Host publication: GLOBECOM 2020 - 2020 IEEE Global Communications Conference
Conference: IEEE Global Communications Conference
Type of Publication: A4 Article in conference proceedings
Field of Science: 213 Electronic, automation and communications engineering, electronics
Copyright information: © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.