University of Oulu

Z. Li et al., "A Multi-Stream Feature Fusion Approach for Traffic Prediction," in IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 2, pp. 1456-1466, Feb. 2022, doi: 10.1109/TITS.2020.3026836

A multi-stream feature fusion approach for traffic prediction

Author: Li, Zhishuai1,2; Xiong, Gang3,4; Tian, Yonglin1; et al.
Organizations: 1State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
2School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
3Beijing Engineering Research Center of Intelligent Systems and Technology, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
4Cloud Computing Center, Chinese Academy of Sciences, Beijing 100190, China
5System and Media Laboratory (SyMLab), Computer Science and Engineering Department, The Hong Kong University of Science and Technology, Hong Kong
6Department of Computer Science, University of Helsinki, 00014 Helsinki, Finland
7Center for Ubiquitous Computing, University of Oulu, 90570 Oulu, Finland
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 3.5 MB)
Language: English
Published: Institute of Electrical and Electronics Engineers, 2022
Publish Date: 2022-08-30


Accurate and timely traffic flow prediction is crucial for intelligent transportation systems (ITS). Recent advances in graph-based neural networks have achieved promising prediction results. However, some challenges remain, especially regarding graph construction and the time complexity of models. In this paper, we propose a multi-stream feature fusion approach to extract and integrate rich features from traffic data, and we leverage a data-driven adjacency matrix instead of a distance-based matrix to construct graphs. We calculate the Spearman rank correlation coefficient between monitor stations to obtain the initial adjacency matrix and fine-tune it during training. For the model, we construct a multi-stream feature fusion block (MFFB) module, which comprises a three-channel network and a soft-attention mechanism. The three channels are a graph convolutional network (GCN), a gated recurrent unit (GRU), and a fully connected neural network (FNN), which extract spatial, temporal, and other features, respectively. The soft-attention mechanism integrates the obtained features. The MFFB modules are stacked, and a fully connected layer and a convolutional layer are used to make predictions. We conduct experiments on two real-world traffic prediction tasks and verify that our proposed approach outperforms the state-of-the-art methods within an acceptable time complexity.
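The data-driven graph construction described above can be illustrated with a short sketch: compute the Spearman rank correlation between every pair of monitor stations' flow series and sparsify the result into an initial adjacency matrix. The `threshold` value and the zeroing/self-loop choices here are illustrative assumptions, not parameters from the paper (which further fine-tunes the matrix during training).

```python
import numpy as np
from scipy.stats import spearmanr

def spearman_adjacency(flows, threshold=0.5):
    """Build a data-driven adjacency matrix from traffic flow series.

    flows: array of shape (T, N) -- T time steps observed at N monitor
    stations. Pairs whose absolute rank correlation falls below
    `threshold` (an illustrative cutoff) are disconnected.
    """
    corr, _ = spearmanr(flows)           # (N, N) rank-correlation matrix
    corr = np.nan_to_num(corr)           # guard against constant series
    adj = np.where(np.abs(corr) >= threshold, corr, 0.0)
    np.fill_diagonal(adj, 1.0)           # keep self-loops for the GCN
    return adj

# Toy example: stations 0 and 1 share a traffic pattern, station 2 does not.
rng = np.random.default_rng(0)
base = rng.random(100)
flows = np.column_stack([base, base + 0.01 * rng.random(100), rng.random(100)])
A = spearman_adjacency(flows)
```

In the paper this matrix is only the initialization; it is treated as learnable and refined jointly with the MFFB weights during training.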


Series: IEEE transactions on intelligent transportation systems
ISSN: 1524-9050
ISSN-E: 1558-0016
ISSN-L: 1524-9050
Volume: 23
Issue: 2
Pages: 1456 - 1466
DOI: 10.1109/TITS.2020.3026836
Type of Publication: A1 Journal article – refereed
Field of Science: 113 Computer and information sciences
Funding: This work was supported in part by the National Key Research and Development Program of China under Grant 2018YFB1004803; in part by the National Natural Science Foundation of China under Grant 61773381, Grant U1909204, and Grant U1811463; in part by the Chinese Guangdong’s S&T project under Grant 2019B1515120030; and in part by the Academy of Finland under Grant 3196669, Grant 319670, Grant 325774, Grant 325570, and Grant 326305.
Academy of Finland Grant Number: 319670
Detailed Information: 319670 (Academy of Finland Funding decision)
326305 (Academy of Finland Funding decision)
Copyright information: © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.