A Survey of Ontology Benchmarks for Semantic Web Ontology Tools

Software engineering employs various benchmarks for software evaluation, which enables developers to continuously improve their products. The same need arises for software tools in the Semantic Web field. Although many different benchmarks are already available, no overview and categorization of them has been provided so far. This work offers such an overview and categorization, focusing on benchmarks in which an ontology plays an important role. The benchmarks are categorized naturally along the lines of ontology tool categorization, together with an indication of which activities each benchmark deliberately targets and which it supports only incidentally. While the article itself can already guide a reader to an adequate benchmark, we additionally designed a flexible rule-based recommendation tool based on the analysis of existing benchmarks.
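The rule-based recommendation described above can be pictured as a set of rules mapping a tool category and an evaluation activity to candidate benchmarks. The following is a minimal hypothetical sketch, assuming such a category/activity model; the vocabulary and the example benchmark names (OAEI tracks, LUBM, OntoViBe, Berlin SPARQL Benchmark) are illustrative choices, not the actual rule base of the tool described in the abstract.

```python
# Hypothetical sketch of a rule-based benchmark recommender.
# Each rule maps (tool category, activity) to a list of candidate benchmarks.
RULES = [
    (("matching", "alignment evaluation"),
     ["OAEI Conference", "OAEI Anatomy", "MultiFarm"]),
    (("reasoner", "classification"),
     ["LUBM", "ORE 2015 corpus"]),
    (("visualization", "rendering coverage"),
     ["OntoViBe"]),
    (("triple store", "SPARQL querying"),
     ["Berlin SPARQL Benchmark", "FedBench"]),
]

def recommend(category: str, activity: str) -> list[str]:
    """Return benchmarks whose rule matches the requested category and activity."""
    hits: list[str] = []
    for (rule_category, rule_activity), benchmarks in RULES:
        if rule_category == category and rule_activity == activity:
            hits.extend(benchmarks)
    return hits

# Example query: which benchmarks suit an ontology matching tool?
print(recommend("matching", "alignment evaluation"))
```

A rule table like this is easy to extend as new benchmarks appear, which is presumably why a rule-based design was chosen over a hard-coded mapping; a real implementation would likely also support partial matches and ranking rather than exact lookup.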
