Publication Cover
Journal of Intelligent Transportation Systems
Technology, Planning, and Operations
Volume 28, 2024 - Issue 4
Research Articles

Fusion attention mechanism bidirectional LSTM for short-term traffic flow prediction

Pages 511-524 | Received 22 Nov 2021, Accepted 27 Oct 2022, Published online: 14 Nov 2022

Abstract

Short-term forecasting is essential and challenging in time-series analysis for traffic flow research. This work presents a novel deep learning architecture for short-term traffic flow prediction. Conventional model-driven prediction methods suffer critical deviations in prediction accuracy under large fluctuations in traffic flow, whereas machine learning and deep learning approaches outperform conventional regression-based models in accuracy. Accordingly, a fusion attention mechanism bidirectional long short-term memory model (ATT-BiLSTM) is proposed, combining bidirectional LSTM (BiLSTM) units with an attention mechanism. The model not only handles forward and backward dependencies in time-series data, but also integrates the attention mechanism to improve the representation of key information. The BiLSTM layer captures bidirectional temporal and spatial feature dependencies from historical data. The proposed model was trained and validated on freeway toll datasets from the Humen Bridge. The results show that, compared with ARIMA and SVR models, the indicators of the proposed model improve significantly. Ablation experiments were conducted to evaluate the role of the attention mechanism module: compared with the BiLSTM, CNN, and 1DCNN-ATT-BiLSTM models, the MAE, RMSE, and MAPE of the proposed model were reduced by 0.6–5.9%, 1.6–4.7%, and 0.6–22.8%, respectively. The proposed model thus yields more accurate predictions, and the results are of practical significance for improving the level of traffic management.
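The attention unit described in the abstract re-weights the BiLSTM hidden states before the final prediction. The following is a minimal, pure-Python sketch of such attention pooling; the scoring vector `w` and the toy hidden states are illustrative assumptions, not the paper's trained parameters or exact architecture:

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_pool(hidden_states, w):
    # Score each time step's hidden state by a dot product with w,
    # normalize the scores with softmax, and return the attention-weighted
    # sum of hidden states (the context vector) plus the weights themselves.
    scores = [sum(hi * wi for hi, wi in zip(h, w)) for h in hidden_states]
    alphas = softmax(scores)
    dim = len(hidden_states[0])
    context = [sum(a * h[d] for a, h in zip(alphas, hidden_states))
               for d in range(dim)]
    return context, alphas

# Toy example: 3 time steps, each with a 2-dim state
# (standing in for concatenated forward/backward BiLSTM outputs).
H = [[0.1, 0.9], [0.4, 0.4], [0.8, 0.2]]
w = [1.0, -1.0]
context, alphas = attention_pool(H, w)
print(alphas)  # attention weights over the 3 time steps, summing to 1
```

In a full model, `w` would be learned jointly with the BiLSTM, and `context` would feed a dense output layer producing the traffic-flow forecast.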

Authors’ contributions

Zhihong Li, Han Xu, Wangtu Xu contributed to study conception and design. Han Xu, Xiuli Gao, and Zinan Wang contributed to data collection. Zhihong Li and Wangtu Xu contributed to analysis and interpretation of results. Zhihong Li and Han Xu contributed to draft manuscript preparation. All authors reviewed the results and approved the final version of the manuscript.

Disclosure statement

The authors declare that they have no conflicts of interest.

Data availability statement

The data used to support the findings in this study are available from the corresponding authors upon request.

Additional information

Funding

This paper was supported by National Social Science Foundation (21FGLB014).
