Leveraging Mashups Approaches to Address Research Evaluation Challenges

The evaluation of research, i.e., the assessment of productivity and the measurement and comparison of impact, is an instrument for selecting and promoting personnel, assigning research grants, and measuring the results of research projects. However, there is little consensus today on how research evaluation should be done, and it is commonly acknowledged that the quantitative metrics available today are largely unsatisfactory. The process is very often highly subjective, and there are no universally accepted criteria. Computing reliable and useful evaluation criteria typically requires solving complex data integration problems and expressing custom evaluation metrics. In our current research we show that, by leveraging mashup approaches, we can address domain-specific evaluation challenges. We aim to provide a mashup platform that supports the research evaluation domain. Finally, we will explore what we can learn from this development in order to generalize our findings and tackle other domain-specific mashup applications.
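
To illustrate the kind of custom evaluation metric such a platform would need to express, consider the h-index, one of the most widely used bibliometric indicators: an author has index h if h of their papers have at least h citations each. The Python sketch below is purely illustrative (the function name and example data are our own, not part of the platform described here); even this simple metric presupposes integrated publication and citation data.

    def h_index(citations):
        """Return the h-index: the largest h such that at least
        h papers have at least h citations each."""
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, count in enumerate(counts, start=1):
            if count >= rank:
                h = rank
            else:
                break
        return h

    # Five papers with these citation counts yield an h-index of 4:
    # four papers have at least 4 citations, but not five with at least 5.
    print(h_index([10, 8, 5, 4, 3]))  # prints 4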
