University of Oulu

X. Chen et al., "Information Freshness-Aware Task Offloading in Air-Ground Integrated Edge Computing Systems," in IEEE Journal on Selected Areas in Communications, vol. 40, no. 1, pp. 243-258, Jan. 2022, doi: 10.1109/JSAC.2021.3126075.

Information freshness-aware task offloading in air-ground integrated edge computing systems

Author: Chen, Xianfu (1); Wu, Celimuge (2); Chen, Tao (3);
Organizations: (1) VTT Technical Research Centre of Finland, 90570 Oulu, Finland
(2) Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo 182-8585, Japan
(3) VTT Technical Research Centre of Finland, 02150 Espoo, Finland
(4) College of Information Science and Electronic Engineering (ISEE), Zhejiang University, Hangzhou 310027, China
(5) Centre for Wireless Communications, University of Oulu, 90570 Oulu, Finland
(6) Department of Electrical Engineering and Computer Science, The Catholic University of America, Washington, DC 20064, USA
(7) Information Systems Architecture Research Division, National Institute of Informatics, Tokyo 101-8430, Japan
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 1 MB)
Language: English
Published: Institute of Electrical and Electronics Engineers, 2022
Publish Date: 2022-08-30


This paper investigates an air-ground integrated multi-access edge computing system, which is deployed by an infrastructure provider (InP). Under a business agreement with the InP, a third-party service provider provides computing services to the subscribed mobile users (MUs). MUs compete for the shared spectrum and computing resources over time to achieve their distinctive goals. From the perspective of an MU, we deliberately define the age of update to capture the staleness of information from refreshing computation outcomes. Given the system dynamics, we model the interactions among MUs as a stochastic game. In the Nash equilibrium without cooperation, each MU behaves in accordance with the local system states and conjectures. We can hence transform the stochastic game into a single-agent Markov decision process. As another major contribution, we develop an online deep reinforcement learning (RL) scheme that adopts two separate double deep Q-networks to approximate the Q-factor and the post-decision Q-factor, respectively. The deep RL scheme allows each MU to optimize its behaviour without prior knowledge of the dynamic statistics. Numerical experiments show that our proposed scheme outperforms the baselines in terms of the average utility under various system conditions.
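The learning scheme described above builds on the double deep Q-network idea: one network selects the greedy next action while a second, more slowly updated network evaluates it, which reduces the overestimation bias of plain Q-learning. Below is a minimal, hedged sketch of that target computation only — not the paper's exact post-decision variant — where NumPy arrays stand in for the two networks' outputs and all names are illustrative.

```python
import numpy as np

def double_dqn_targets(rewards, q_online_next, q_target_next,
                       gamma=0.99, dones=None):
    """Compute Double DQN bootstrap targets for a batch of transitions.

    rewards       : shape (B,)   immediate rewards
    q_online_next : shape (B, A) online-network Q-values at next states
    q_target_next : shape (B, A) target-network Q-values at next states
    dones         : shape (B,)   True where the episode terminated
    """
    if dones is None:
        dones = np.zeros_like(rewards, dtype=bool)
    # Action selection by the online network ...
    best_actions = np.argmax(q_online_next, axis=1)
    # ... but action evaluation by the target network (the "double" trick).
    next_values = q_target_next[np.arange(len(rewards)), best_actions]
    # Zero out the bootstrap term for terminal transitions.
    return rewards + gamma * next_values * (~dones)
```

In the paper's setting, one such pair of networks approximates the Q-factor and a second pair the post-decision Q-factor; the sketch covers only the shared target-construction step common to both.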


Series: IEEE journal on selected areas in communications
ISSN: 0733-8716
ISSN-E: 1558-0008
ISSN-L: 0733-8716
Volume: 40
Issue: 1
Pages: 243 - 258
DOI: 10.1109/JSAC.2021.3126075
Type of Publication: A1 Journal article – refereed
Field of Science: 213 Electronic, automation and communications engineering, electronics
Funding: This work was supported in part by the Academy of Finland under Grant 319759, Grant 319758, and Grant 317669; in part by the Zhejiang Lab Open Program under Grant 2021LC0AB06; in part by the Okawa Foundation for Information and Telecommunications; in part by the G-7 Scholarship Foundation; in part by the Japan Society for the Promotion of Science (JSPS) KAKENHI under Grant 19H04092, Grant 20H04174, Grant JP18KK0279, and Grant JP20H00592; in part by the Research Organization of Information and Systems (ROIS) National Institute of Informatics (NII) Open Collaborative Research 2021 under Grant 21FA02; in part by the National Natural Science Foundation of China under Grant 61731002; in part by the Zhejiang Key Research and Development Plan under Grant 2019C01002; in part by the Academy of Finland 6G Flagship; in part by the European Coordinated CHIST-ERA LearningEdge and CONNECT; and in part by the University of Oulu Infotech NOOR Project.
Academy of Finland Grant Number: 319759
Detailed Information: 319759 (Academy of Finland Funding decision)
319758 (Academy of Finland Funding decision)
317669 (Academy of Finland Funding decision)
Copyright information: © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.