Test and Evaluation Plan: AdaptIVe automated driving applications

The general objective of AdaptIVe is to develop and demonstrate new functionalities provided by partially and highly automated vehicles. These applications cover different speed regimes and driving scenarios and aim at improving the safety, energy efficiency, dependability, and user acceptance of automated driving. The introduction of supervised automated driving poses new and specific questions: in particular, the functions embodying automated driving influence not only certain defined scenarios (for example accidents, near-accidents, or safety-related situations) but also the whole traffic flow. The existing evaluation methods are therefore insufficient, and new comprehensive approaches are required. SP7 “Evaluation” is a horizontal activity within AdaptIVe supporting the vertical subprojects. Its main objective is to develop a common evaluation framework for supervised automated driving applications, which is described in this deliverable.

This framework addresses two types of assessment. The first part, on which this report focuses, considers the evaluation of the status quo and consists of the technical, user-related, and real-life interaction (in-traffic) evaluation. The second part concentrates on the analysis of the future benefits with respect to safety and environmental aspects that can be achieved by means of automated driving applications; it will be presented in more detail in the upcoming deliverable D7.3. In the development of each evaluation framework, previous work conducted in earlier projects is considered and included in the procedures where possible.

Starting from an overview of the developed functions and the evaluation activities in previous projects, the overall evaluation methodology is described in chapter 2. The evaluation process is split into four assessment types, in analogy to the approach of the PReVAL and interactIVe projects. The technical assessment (chapter 3) investigates the performance of the functions. The user-related assessment (chapter 4) analyses the interaction between the function and the user as well as the acceptance of the developed functions. The in-traffic assessment (chapter 5) focuses on the effects of automated driving on the surrounding traffic as well as on non-users. The impact assessment (chapter 6) determines the potential effects of the functions with respect to safety and environmental aspects (e.g. fuel consumption, traffic efficiency). Overall conclusions are presented in the final chapter 7.

For each evaluation, the starting point is the function or system under investigation itself. Based on its description, a classification is performed to determine which evaluation methodologies are most appropriate for the assessment. Within the AdaptIVe subprojects 4 to 6, automation functionalities for close-distance, urban, and highway scenarios will be developed, respectively. Since a complete evaluation of all AdaptIVe functions in all assessments is beyond the scope of this project, only selected functions will be evaluated with selected methodologies in order to demonstrate the application of the evaluation framework. Examples of the evaluation procedure are provided with the presentation of each methodology. Within AdaptIVe, two general types of functions are distinguished: event-based functions, which operate only for a short period of time, and continuously operating functions, which, once activated, operate over a longer time period.
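For illustration only, the following sketch shows how such a classification of a function under investigation could be captured in code in order to select the applicable assessment types. All names, attributes, and the selection logic (FunctionDescription, OperationMode, applicable_assessments) are hypothetical assumptions introduced here and not part of the project specification; the actual choice of methodologies follows the descriptions in chapters 3 to 6.

```python
from dataclasses import dataclass
from enum import Enum, auto


class OperationMode(Enum):
    """AdaptIVe distinguishes event-based and continuously operating functions."""
    EVENT_BASED = auto()   # operates only for a short period of time
    CONTINUOUS = auto()    # once activated, operates over a longer time period


@dataclass
class FunctionDescription:
    """Hypothetical record describing a function under investigation."""
    name: str
    operation_mode: OperationMode
    speed_regime: str      # e.g. "close-distance", "urban", "highway"
    automation_level: str  # e.g. "partially automated", "highly automated"


def applicable_assessments(func: FunctionDescription) -> list[str]:
    """Return the assessment types considered for this function.

    The selection logic below is a placeholder for illustration; in the
    project, the classification described in chapters 3 to 6 governs which
    methodologies are applied to which function.
    """
    assessments = ["technical", "user-related"]       # always considered here
    if func.operation_mode is OperationMode.CONTINUOUS:
        assessments.append("in-traffic")              # effects on surrounding traffic
    assessments.append("impact")                      # safety and environmental potential
    return assessments


# Hypothetical usage example with an invented highway function.
highway_function = FunctionDescription(
    name="Highway automation (example)",
    operation_mode=OperationMode.CONTINUOUS,
    speed_regime="highway",
    automation_level="highly automated",
)
print(applicable_assessments(highway_function))
```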
For each assessment framework, research questions, hypotheses, and indicators are defined that guide the respective evaluation. The research questions are the first step of the evaluation and specify what should be addressed. Based on these research questions, the hypotheses to be tested are defined. The hypotheses are tested by means of indicators that can be calculated from signals or derived from measures logged during the tests. It should be noted that not all of these research questions, hypotheses, and indicators are applicable to every function or system; for each combination of system and chosen evaluation, an appropriate subset needs to be selected. Many different test tools, such as balloon cars or real vehicles, and test environments, such as test tracks, public roads, or simulators, are in principle available for the evaluation. Depending on the function or system under investigation, its development status, and further requirements such as legal boundaries or safety protocols, the most appropriate choice needs to be made for each evaluation. The considered combinations and possibilities are described alongside the evaluation frameworks in the respective sections.
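As a minimal sketch of this chain from logged signals to a tested hypothesis, the example below computes a simple indicator (mean time headway) from two sets of logged samples and applies a two-sample t-test. The choice of signal, the numerical values, and the statistical test are placeholder assumptions for illustration and do not represent project data or prescribed procedures.

```python
import numpy as np
from scipy import stats

# Hypothetical logged signal: time headway [s] sampled during test drives,
# once with the automated function active and once for a manual baseline.
# The values are fabricated placeholders for illustration only.
headway_automated = np.array([1.8, 2.1, 1.9, 2.0, 2.2, 1.7, 2.0, 1.9])
headway_manual = np.array([1.4, 1.6, 1.2, 1.5, 1.7, 1.3, 1.5, 1.4])

# Indicator: mean time headway derived from the logged signal.
indicator_automated = headway_automated.mean()
indicator_manual = headway_manual.mean()

# Illustrative hypothesis: "With the automated function active, the mean time
# headway is larger than in manual driving." Tested here with Welch's t-test.
result = stats.ttest_ind(headway_automated, headway_manual, equal_var=False)

print(f"mean headway automated: {indicator_automated:.2f} s")
print(f"mean headway manual:    {indicator_manual:.2f} s")
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```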