In this demonstration, we present ResEval Mash, a mashup platform for research evaluation, that is, for assessing the productivity or quality of researchers, teams, institutions, journals, and the like, a topic most of us are well acquainted with. The platform is specifically tailored to the needs of sourcing data about scientific publications and researchers from the Web, aggregating these data, computing metrics (including complex, ad-hoc ones), and visualizing the results. ResEval Mash is a hosted mashup platform with a client-side editor and runtime engine, both of which run inside a common web browser. It also supports the processing of large amounts of data, a feature achieved by sensibly distributing the respective computation steps over client and server. Our preliminary user study shows that ResEval Mash indeed enables domain experts to develop their own mashups (i.e., research evaluation metrics), whereas other mashup platforms rather target skilled developers. We attribute this success to ResEval Mash's domain-specificity.