Quantitative Methods for Assessing Similarity between Computational Results and Full-Scale Crash Tests

As computational analyses have become more integrated into the design of roadside safety hardware, the importance of establishing objective, quantifiable verification and validation methods and criteria has grown accordingly. While analysts have always compared numerical and experimental results, the techniques used have generally been visual and subjective. Objective procedures for assessing the validity of computational analyses of roadside hardware performance have therefore been needed.

This paper discusses a convenient means for engineers to quantify similarities and differences between acceleration-time histories computed from computational models and those measured in full-scale crash tests. This was achieved through the development of a computer program called RSVVP, which computes several shape comparison metrics that quantify those differences. While the metrics themselves are deterministic, a subjective judgment must still be made about how close the agreement has to be to count as acceptable. Because of the highly nonlinear nature of crash events, there are often considerable differences between the results of essentially identical full-scale crash tests. Likewise, a computational model may not exactly match the results of a physical test, but the difference should be no greater than that expected between repeated physical tests. This paper therefore also discusses the development of acceptance criteria for the metrics, based on an assessment of the repeatability of full-scale crash tests that quantifies the variation typically observed between them.
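This passage does not enumerate the metrics RSVVP computes, but one widely used shape comparison metric for curve pairs of this kind is the Sprague and Geers magnitude-phase-combined (MPC) metric. The Python sketch below is a minimal illustration, under stated assumptions, of how such a metric reduces the difference between two acceleration-time histories to a few numbers: it assumes uniformly sampled curves on a shared time base, and the function name, the synthetic pulses, and the printed comparisons are illustrative stand-ins, not RSVVP's implementation.

    import numpy as np

    def sprague_geers(measured, computed):
        """Magnitude (M), phase (P), and combined (C) components of the
        Sprague & Geers comparison metric for two time histories sampled
        on the same uniform time base. All three are zero for identical
        curves; M isolates amplitude error, P isolates timing error."""
        m = np.asarray(measured, dtype=float)
        c = np.asarray(computed, dtype=float)
        if m.shape != c.shape:
            raise ValueError("curves must share the same time base")
        # With a uniform time step the step size cancels in every ratio
        # below, so plain sums stand in for the time integrals.
        imm = np.sum(m * m)
        icc = np.sum(c * c)
        imc = np.sum(m * c)
        magnitude = np.sqrt(icc / imm) - 1.0
        # Clipping guards against round-off pushing the cosine outside [-1, 1].
        phase = np.arccos(np.clip(imc / np.sqrt(imm * icc), -1.0, 1.0)) / np.pi
        return magnitude, phase, np.hypot(magnitude, phase)

    # Synthetic decaying-sine pulses standing in for real crash signals.
    t = np.linspace(0.0, 0.2, 2001)                    # 0.2 s at 10 kHz
    test_a = np.exp(-40 * t) * np.sin(120 * t)         # "measured" pulse
    test_b = 1.05 * np.exp(-42 * t) * np.sin(118 * t)  # nominally identical repeat test
    model = 0.90 * np.exp(-38 * t) * np.sin(125 * t)   # simulated pulse

    # Test-to-test scatter suggests how tight an acceptance envelope can be.
    print("test vs test  (M, P, C): %.3f %.3f %.3f" % sprague_geers(test_a, test_b))
    print("test vs model (M, P, C): %.3f %.3f %.3f" % sprague_geers(test_a, model))

Read this way, the acceptance-criteria argument above becomes concrete: the metric values obtained by comparing two nominally identical tests bound the scatter inherent in the physical experiment, and a model is judged acceptable when its metrics against a test fall within a comparable envelope.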