Interpretable Early Prediction of Lane Changes Using a Constrained Neural Network Architecture

This paper proposes an interpretable machine learning architecture for the early prediction of lane changes. Interpretability is achieved through interpretable templates as well as constrained weights during the training of a neural network. It is shown that each template is separable and interpretable by means of automatically generated rule sets. The proposed method is validated on a publicly available dataset and compared to reference publications that apply recurrent neural networks to lane change prediction. It significantly increases the maximum prediction time of lane changes while maintaining low false alarm rates.
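As a rough illustration of the kind of weight constraint mentioned above, the sketch below trains a toy logistic classifier with projected gradient descent that forces all weights to stay non-negative, a common constraint that supports interpretability. The feature semantics, data, and constraint choice here are illustrative assumptions and not the paper's actual templates or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy features: e.g. lateral offset and lateral velocity of a vehicle
# (hypothetical stand-ins, not the paper's feature set).
X = rng.normal(size=(200, 2))
true_w = np.array([1.5, 0.8])                 # non-negative ground truth
y = (X @ true_w + 0.1 * rng.normal(size=200) > 0).astype(float)

w = np.zeros(2)
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))        # logistic prediction
    grad = X.T @ (p - y) / len(y)             # gradient of the log loss
    w -= lr * grad
    w = np.maximum(w, 0.0)                    # project onto the constraint set

print(w)  # both weights remain >= 0 by construction
```

Because the signs of the weights are fixed, each learned coefficient can be read directly as the strength of an interpretable feature's contribution, which is the spirit of constraining weights for interpretability.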