Research Article

Joint learning of video images and physiological signals for lane-changing behavior prediction

Pages 1234-1253 | Received 08 May 2020, Accepted 19 Apr 2021, Published online: 17 Jun 2021
 

Abstract

Understanding and predicting human driving behavior plays an important role in the development of intelligent vehicle systems, particularly in Advanced Driver Assistance Systems (ADAS), which must anticipate dangerous maneuvers and take appropriate action. However, accurately predicting lane-changing (LC) behavior remains challenging because of the complexity and uncertainty of traffic conditions and the need for labeled data. To address this problem, we propose a novel framework, termed LCNet, for lane-changing behavior prediction via joint learning of front-view video images and driver physiological signals. First, a temporal consistency module allows both labeled and unlabeled video frames to be used during training, while requiring no extra computation during inference. Second, a new penalty term, sensitive to the local continuity of the signal, is introduced for learning sequential physiological data. Finally, a new loss function is designed so that LCNet efficiently learns co-occurrence features from the video scene-optical flow branch and the physiology branch. Experiments conducted on a real-world driving data set demonstrate that LCNet learns the underlying features of upcoming lane-changing behavior and significantly outperforms other advanced models.
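The abstract does not reproduce the model's equations, but the three ingredients it names (a temporal consistency term over unlabeled frames, a local-continuity penalty on physiological sequences, and a joint supervised loss over the two branches) can be sketched generically. The PyTorch-style snippet below is a hypothetical illustration only, not the authors' implementation; the function name, the weighting hyperparameters, and the specific mathematical form of each term are all assumptions.

```python
import torch
import torch.nn.functional as F

def lcnet_style_loss(vid_logits, phys_feats, labels, unlabeled_pair,
                     lam_tc=0.1, lam_lc=0.01):
    """Hypothetical composite loss combining the three ideas in the abstract.

    vid_logits:     (B, C) class logits for labeled clips
    phys_feats:     (B, T, D) sequential physiological features
    labels:         (B,) lane-changing labels
    unlabeled_pair: (logits_t, logits_t1) for adjacent unlabeled frames
    """
    # (1) Supervised classification loss on labeled clips.
    loss_cls = F.cross_entropy(vid_logits, labels)

    # (2) Temporal consistency: predictions on adjacent unlabeled frames
    #     should agree (a common semi-supervised regularizer; the paper's
    #     exact consistency criterion may differ).
    p_t, p_t1 = unlabeled_pair
    loss_tc = F.mse_loss(p_t.softmax(dim=-1), p_t1.softmax(dim=-1))

    # (3) Local-continuity penalty on physiological features: penalize
    #     large jumps between consecutive time steps.
    loss_lc = (phys_feats[:, 1:] - phys_feats[:, :-1]).pow(2).mean()

    return loss_cls + lam_tc * loss_tc + lam_lc * loss_lc
```

In such a scheme, the consistency and continuity terms act only during training; at inference the model runs a single forward pass, which matches the abstract's claim that no extra computation is required.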

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This research was supported by the High-level Talents Fund of Jianghan University [Grant Number 1029-06060001].
