A Requirements Driven Framework for Benchmarking Semantic Web Knowledge Base Systems

A key challenge for the Semantic Web is to acquire the capability to effectively query large knowledge bases. As there will be several competing systems, we need benchmarks that will objectively evaluate them. Developing effective benchmarks in an emerging domain is a challenging endeavor. In this paper, we propose a requirements-driven framework for developing benchmarks for Semantic Web knowledge base systems (SW KBSs), and we make two major contributions. First, we provide a list of requirements for SW KBS benchmarks, which can serve as an unbiased guide both for benchmark developers and for personnel responsible for system acquisition and benchmarking. Second, we provide an organized collection of the techniques and tools needed to develop such benchmarks. In particular, the collection contains a detailed guide to generating benchmark workloads, defining performance metrics, and interpreting experimental results.
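
To make the workload-and-metrics guidance concrete, the sketch below shows one way such a benchmark harness might look. It is a minimal illustration under stated assumptions, not the paper's implementation: the choice of Python with rdflib as the queried system, the dataset file name, the sample query, and the LUBM-style metrics (load time, mean query response time, degree of completeness) are all assumptions introduced for the example.

```python
"""Minimal sketch of a SW KBS benchmark harness (illustration only).

Assumptions not taken from the paper: the system under test is queried
in-process through rdflib, the dataset path and SPARQL query are
hypothetical, and the metrics (load time, mean query response time,
degree of completeness) are LUBM-style examples.
"""
import time
from rdflib import Graph


def load_dataset(path: str) -> tuple[Graph, float]:
    """Parse an RDF/OWL file and return the graph plus load time (s)."""
    g = Graph()
    start = time.perf_counter()
    g.parse(path)  # rdflib infers the serialization from the file extension
    return g, time.perf_counter() - start


def run_query(g: Graph, sparql: str, runs: int = 5) -> tuple[set, float]:
    """Execute a SPARQL query `runs` times; return answers and mean time (s)."""
    answers: set = set()
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        answers = {tuple(row) for row in g.query(sparql)}
        total += time.perf_counter() - start
    return answers, total / runs


def completeness(answers: set, reference: set) -> float:
    """Fraction of the reference answer set that the system returned."""
    return len(answers & reference) / len(reference) if reference else 1.0


if __name__ == "__main__":
    # Hypothetical workload: one dataset file and one query from a query mix.
    g, load_s = load_dataset("university0.owl")
    q = "SELECT ?x WHERE { ?x a <http://example.org/GraduateStudent> }"
    answers, mean_s = run_query(g, q)
    print(f"load: {load_s:.2f}s  query: {mean_s:.4f}s  answers: {len(answers)}")
```

Against a real repository, one would replace the in-process rdflib graph with the system under test (for example, a store reached over a SPARQL endpoint) and score completeness and soundness by comparing each query's answer set against a precomputed reference set.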
