Towards the improvement of the Semantic Web technology

Semantic Web technologies need to be thoroughly evaluated, both to provide objective results and to achieve a substantial improvement in their quality, before they can be consolidated in industry and academia. This paper presents software benchmarking as a process to be carried out over Semantic Web technologies in order to improve them and to identify best practices. It also describes a software benchmarking methodology and provides recommendations for performing the evaluations involved in benchmarking activities.
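
As a purely illustrative aside, the sketch below shows what a minimal measurement harness for such evaluations could look like in Python. It is an assumption added for illustration, not the methodology the paper defines: the benchmark helper, its warmup and runs parameters, and the toy workload are hypothetical stand-ins for a real Semantic Web task such as parsing an ontology or answering a query.

    import statistics
    import time

    def benchmark(task, warmup=3, runs=10):
        """Run `task` repeatedly and report wall-clock statistics.

        Warm-up executions are discarded so that caching effects do
        not skew the measured runs. (Hypothetical helper, for
        illustration only.)
        """
        for _ in range(warmup):
            task()
        samples = []
        for _ in range(runs):
            start = time.perf_counter()
            task()
            samples.append(time.perf_counter() - start)
        return {
            "mean_s": statistics.mean(samples),
            "stdev_s": statistics.stdev(samples),
            "min_s": min(samples),
            "max_s": max(samples),
        }

    if __name__ == "__main__":
        # Toy workload standing in for a real task such as loading
        # an ontology or evaluating a query against a triple store.
        print(benchmark(lambda: sum(i * i for i in range(100_000))))

Reporting dispersion alongside a mean, and discarding warm-up executions, are examples of the kind of measurement practice that benchmarking recommendations typically address.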
