An Effective End-User Development Approach through Domain-Specific Mashups for Research Impact Evaluation

Over the last decade, there has been growing interest in assessing the performance of researchers, research groups, universities and even countries. The assessment of productivity is an instrument to select and promote personnel, assign research grants and measure the results of research projects. One particular assessment approach is bibliometrics, i.e., the quantitative analysis of scientific publications through citation and content analysis. However, there is little consensus today on how research evaluation should be performed, and it is commonly acknowledged that the quantitative metrics available today are largely unsatisfactory. The process is very often highly subjective, and there are no universally accepted criteria. A number of different scientific data sources available on the Web (e.g., DBLP, Microsoft Academic Search, Google Scholar) are used for such analyses. Taking data from these diverse sources, performing the analysis and visualizing the results in different ways is not a trivial and straightforward task. Moreover, the data taken from these sources cannot be used as is, due to the problem of name disambiguation: many researchers share identical names, and different name variations of the same author appear in the data. We believe that the personalization of the evaluation process is a key element for the appropriate use and practical success of research impact evaluation tasks. Moreover, the people involved in such evaluation processes are not always IT experts and hence are not capable of crawling data sources, merging them and computing the needed evaluation procedures. The recent emergence of mashup tools has refueled research on end-user development, i.e., on enabling end-users without programming skills to produce their own applications.
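The name-disambiguation problem described above can be illustrated with a minimal sketch. The heuristic below (matching on a normalized surname plus compatible initials) is an illustrative simplification for exposition only, not the unsupervised method developed in the thesis:

```python
import re
import unicodedata

def normalize_author(name: str) -> tuple:
    """Reduce an author name to a comparable key (surname, initials),
    so that variants such as 'M. Imran', 'Muhammad Imran' and
    'Imran, Muhammad' collide on the same key."""
    # Fold accents to ASCII and lowercase.
    folded = unicodedata.normalize("NFKD", name)
    folded = folded.encode("ascii", "ignore").decode().lower()
    # Handle the 'Surname, Given' order.
    if "," in folded:
        surname, given = [p.strip() for p in folded.split(",", 1)]
    else:
        parts = folded.split()
        surname, given = parts[-1], " ".join(parts[:-1])
    initials = tuple(w[0] for w in re.findall(r"[a-z]+", given))
    return (surname, initials)

def same_author(a: str, b: str) -> bool:
    """Heuristic match: identical surname and compatible initials
    (the shorter initial list must be a prefix of the longer one)."""
    sa, ia = normalize_author(a)
    sb, ib = normalize_author(b)
    if sa != sb:
        return False
    shorter, longer = sorted((ia, ib), key=len)
    return longer[:len(shorter)] == shorter
```

Such a heuristic already merges common variants, but it also shows why disambiguation is hard: it cannot separate two distinct researchers who genuinely share a name, which is why more sophisticated methods are needed in practice.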
Yet, similar to what happened with analogous promises in web service composition and business process management, research has mostly focused on technology and, as a consequence, has failed its objective. Plain technology (e.g., SOAP/WSDL web services) or simple modeling languages (e.g., Yahoo! Pipes) do not convey enough meaning to non-programmers. We believe the heart of the problem is that it is impractical to design tools that are generic enough to cover a wide range of application domains, powerful enough to enable the specification of non-trivial logic, and simple enough to be actually accessible to non-programmers. At some point, we need to give something up. In our view, this something is generality: reducing expressive power would mean supporting only the development of toy applications, which is useless, while simplicity is our major aim. This thesis presents a novel approach to effective end-user development, specifically for non-programmers. That is, we introduce a domain-specific approach to mashups that "speaks the language of the user", i.e., that is aware of the terminology, concepts, rules and conventions (the domain) the user is comfortable with. We show what developing a domain-specific mashup platform means, which roles the mashup meta-model and the domain model play, and how these can be merged into a domain-specific mashup meta-model. We illustrate the approach by implementing a generic mashup platform whose capabilities are based on our proposed mashup meta-model. Further, we illustrate how the generic platform can be tailored to a specific domain through the development of ResEval Mash, a tool built specifically for the research evaluation domain. Moreover, the thesis proposes an architectural design for mashup platforms; in particular, it presents a novel approach for data-intensive mashup-based web applications, which proved to be a substantial contribution.
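The merge of a generic mashup meta-model with a domain model can be sketched roughly as follows. The component structure and the domain types used here (Publications, Metric, etc.) are illustrative assumptions for exposition, not the thesis' exact formalization:

```python
from dataclasses import dataclass

# Generic mashup meta-model: components with typed input/output ports.
@dataclass
class Component:
    name: str
    inputs: dict   # port name -> data type
    outputs: dict  # port name -> data type

# Domain model: the concepts evaluators reason about (names are
# illustrative, not the thesis' exact vocabulary).
RESEARCH_EVAL_TYPES = {"Publications", "Citations", "Metric", "Researcher"}

def is_domain_specific(c: Component, domain_types=RESEARCH_EVAL_TYPES) -> bool:
    """A component belongs to the domain-specific meta-model only if
    every one of its ports speaks the domain's language, i.e. carries
    a domain data type."""
    ports = set(c.inputs.values()) | set(c.outputs.values())
    return ports <= domain_types

def compatible(src: Component, out_port: str, dst: Component, in_port: str) -> bool:
    """Two components can be wired only when the port types agree."""
    return src.outputs[out_port] == dst.inputs[in_port]

# Example components an evaluator might compose:
dblp = Component("DBLP Source", {"who": "Researcher"}, {"pubs": "Publications"})
hindex = Component("H-Index", {"pubs": "Publications"}, {"value": "Metric"})
```

The point of the merge is visible in the type check: because every port carries a domain concept rather than a generic payload, the tool can offer only meaningful wirings to the end-user, which is what makes the platform "speak the language of the user".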
The proposed approach is suitable for applications that deal with large amounts of data traveling between client and server. To evaluate our work and to determine the effectiveness and usability of our mashup tool, we performed two separate user studies. Their results confirm that domain-specific mashup tools indeed lower the entry barrier to mashup development for non-technical users. The methodology presented in this thesis is generic and can be applied to other domains. Moreover, by following this methodological approach, the developed mashup platform itself remains generic, that is, it can be tailored to other domains.
