Assessing the Quality of Meta-models

Meta-models play a pivotal role in Model-Driven Engineering (MDE), as they define the abstract syntax of domain-specific languages and, hence, the structure of models. However, despite their importance for the success of MDE projects, the community still lacks tools to check meta-model quality criteria, such as design errors or adherence to naming conventions and best practices. In this paper, we present a language (mmSpec) and a tool (metaBest) to specify and check properties on meta-models and to visualise the problematic elements. We then use them to evaluate over 295 meta-models of the ATL zoo, drawing on a library of 30 meta-model quality issues. Finally, from this evaluation, we derive recommendations for both MDE practitioners and meta-model tool builders.