G. Lee, W. Saad and M. Bennis, "Online optimization for low-latency computational caching in Fog networks," 2017 IEEE Fog World Congress (FWC), Santa Clara, CA, 2017, pp. 1-6. doi: 10.1109/FWC.2017.8368529

Online optimization for low-latency computational caching in Fog networks

Author: Lee, Gilsoo1; Saad, Walid1; Bennis, Mehdi2
Organizations: 1Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, VA, USA
2Centre for Wireless Communications, University of Oulu, Finland
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 0.7 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe202003248970
Language: English
Published: Institute of Electrical and Electronics Engineers, 2017
Publish Date: 2020-03-24
Description:

Abstract

Enabling effective computation for emerging applications such as augmented reality or virtual reality via fog computing requires processing data with low latency. In this paper, a novel computational caching framework is proposed to minimize fog latency by storing and reusing intermediate computation results (IRs). Using this proposed paradigm, a fog node can store IRs from previous computations and can also download IRs from neighboring nodes at the expense of additional transmission latency. However, due to the unpredictable arrival of future computational operations and the limited memory size of the fog node, it is challenging to properly maintain the set of stored IRs. Thus, under uncertainty about future computations, the goal of the proposed framework is to minimize the sum of the transmission and computational latency by selecting the IRs to be downloaded and stored. To solve the problem, an online computational caching algorithm is developed that enables the fog node to schedule, download, and manage IRs to compute arriving operations. Competitive analysis is used to derive an upper bound on the competitive ratio of the online algorithm. Simulation results show that the total latency can be reduced by up to 26.8% by leveraging the computational caching method, compared to the case without computational caching.
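
To make the caching idea in the abstract concrete, the Python sketch below shows a simple greedy online policy for a fog node with a fixed-size IR cache: on each arriving operation the node reuses a locally stored IR if available, otherwise it downloads the IR from a neighbor when that is cheaper, or computes it from scratch. This is an illustration only, not the paper's algorithm; the latency values, the LRU eviction rule, and the neighbor-availability flag are all assumptions.

```python
# Illustrative sketch only (not the authors' online algorithm): a greedy policy
# for an IR cache of limited size. All numeric parameters are assumptions.
from collections import OrderedDict

class ComputationalCache:
    def __init__(self, capacity, compute_latency, download_latency):
        self.capacity = capacity                  # max number of IRs the fog node can store
        self.compute_latency = compute_latency    # latency to compute an IR from scratch
        self.download_latency = download_latency  # latency to fetch an IR from a neighbor
        self.cache = OrderedDict()                # stored IRs, ordered by recency of use

    def serve(self, ir_id, neighbor_has_ir):
        """Return the latency incurred to obtain the IR for one arriving operation."""
        if ir_id in self.cache:                   # local hit: reuse the stored IR
            self.cache.move_to_end(ir_id)
            return 0.0
        # Miss: download from a neighbor if available and cheaper, else compute locally.
        if neighbor_has_ir and self.download_latency < self.compute_latency:
            latency = self.download_latency
        else:
            latency = self.compute_latency
        self._store(ir_id)
        return latency

    def _store(self, ir_id):
        if len(self.cache) >= self.capacity:      # limited memory: evict least recently used IR
            self.cache.popitem(last=False)
        self.cache[ir_id] = True

# Hypothetical usage: total latency over a short arrival sequence of (IR, neighbor_has_ir) pairs.
cache = ComputationalCache(capacity=2, compute_latency=10.0, download_latency=3.0)
arrivals = [("A", True), ("B", False), ("A", True), ("C", True), ("B", False)]
total = sum(cache.serve(ir, nearby) for ir, nearby in arrivals)
print(f"total latency: {total}")
```

In the paper, the corresponding decisions are made under uncertainty about future arrivals and evaluated via competitive analysis; the sketch only conveys the trade-off between reuse, download, and recomputation latency.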

ISBN: 978-1-5386-3666-4
ISBN Print: 978-1-5386-3667-1
Pages: 1 - 6
Article number: 8368529
DOI: 10.1109/FWC.2017.8368529
OADOI: https://oadoi.org/10.1109/FWC.2017.8368529
Host publication: 2017 IEEE Fog World Congress (FWC)
Conference: IEEE Fog World Congress
Type of Publication: A4 Article in conference proceedings
Field of Science: 213 Electronic, automation and communications engineering, electronics
Funding: This research was supported by the U.S. National Science Foundation under Grants CNS-1460333 and IIS-1633363, by the Office of Naval Research (ONR) under Grant N00014-15-1-2709, and by NOKIA donation on fog (FOGGY project).
Copyright information: © 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.