Requirements for Monitoring Inattention of the Responsible Human in an Autonomous Vehicle: The Recall and Precision Tradeoff

Recent fatal accidents with partially autonomous vehicles (AVs) show that the responsible human in a vehicle (RHV) can become inattentive enough to be unable to take over driving the vehicle when it gets into a situation that its driving automation system cannot handle. Studies show that as the level of automation of an AV increases, the tendency of the RHV to become inattentive grows. To counteract this tendency, an AV needs to monitor its RHV for inattention and, when inattention is detected, to notify the RHV to pay attention. Requirements engineering for the monitoring software needs to trade off false positives (FPs) and false negatives (FNs) (or recall and precision) in detecting inattention. An FN (low recall) is bad because it represents a failure to detect an inattentive RHV. An FP (low precision) is bad because it leads to notifying the RHV too frequently, to the RHV's ignoring notifications, and thus to degraded effectiveness of notification. The literature shows that most researchers simply assume that FPs and FNs (recall and precision) are equally bad and weight them the same in any tradeoff. However, if, as for aircraft pilots, notification techniques can be found whose effectiveness does not degrade even with frequent repetition, then many FPs (low precision) can be tolerated in an effort to reduce the FNs (increase the recall) in detecting inattention and thus to improve safety.
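
To make the tradeoff concrete, the standard definitions and an asymmetric cost can be written as follows; this is an illustrative sketch, not a formulation taken from the paper, and the weights w_FP and w_FN are assumptions introduced only to express the asymmetry:

\[
\mathit{precision} = \frac{TP}{TP + FP}, \qquad
\mathit{recall} = \frac{TP}{TP + FN}
\]
\[
\mathit{cost} = w_{FP} \cdot FP \;+\; w_{FN} \cdot FN, \qquad w_{FN} \gg w_{FP}
\]

Under such an asymmetric cost, a detector tuned to minimize the cost accepts lower precision (more FPs) in exchange for higher recall (fewer FNs); this exchange is tolerable only if the chosen notification technique remains effective under frequent, repeated notifications.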