Abstract
Understanding and predicting human driving behavior plays an important role in the development of intelligent vehicle systems, particularly in Advanced Driver Assistance Systems (ADAS), which must estimate dangerous maneuvers and take appropriate actions. However, accurately predicting lane-changing (LC) behavior remains challenging because of the complexity and uncertainty of traffic conditions and the need for labeled data. To address this problem, we propose a novel framework, denoted LCNet, for lane-changing behavior prediction via joint learning of front-view video images and driver physiological signals. First, a temporal consistency module allows both labeled and unlabeled video frames to be utilized in the training phase, while no extra computation is required during inference. Second, a new penalty term, sensitive to local continuity, is introduced for learning sequential physiological signals. Finally, a new loss function is designed for LCNet to efficiently learn co-occurrence features from the video scene-optical flow branch and the physiology branch. Experiments are conducted on a real-world driving dataset, and the results demonstrate that LCNet learns the underlying features of upcoming lane-changing behavior and significantly outperforms other state-of-the-art models.
Disclosure statement
No potential conflict of interest was reported by the author(s).