Abstract: Summarizes the paper under the headings of background or context, objectives, method, main results, and conclusions.

Introduction
- Problem Statement: Indicates what the problem is, where it occurs, and who observes it.
- Research Objective: Defines the evaluation using the formalized style of the MEEGA+ model.
- Context: Indicates environmental factors such as institution, course, and participants involved in the evaluation.
- Related Work: Describes how this research relates to existing research (studies).

Research Method: Reports the methods used in the research, such as the MEEGA+ method (Petri et al., 2018), case studies (Yin, 2017; Wohlin et al., 2012), and GQM (Basili et al., 1994).

Evaluation Planning
- Object of study: Indicates the game selected for the evaluation.
- Evaluation goal: Presents the defined evaluation goal and the analysis questions following the MEEGA+ model: Analyse the <name of the selected game> for the purpose of evaluating its quality in terms of usability and player experience from the students' point of view in the context of higher computing education.
- Analysis questions:
  AQ1: Does the <name of the evaluated game> have good usability?
  AQ2: Does the <name of the evaluated game> provide a positive player experience?
  AQ3: How old are the students that compose the sample of the study?
  AQ4: What is the gender of the students that compose the sample of the study?
  AQ5: How frequently do the students play digital and/or non-digital games?
- Context details: Indicates the place where the evaluation took place, such as institution and course.
- Research design: Indicates the research design applied, following the definition of the MEEGA+ model: case study design (one-shot post-test only).
- Schedule: Indicates the schedule of the evaluation, such as date and time.
- Number of the Ethics Committee approval: Indicates the number of the approval provided by the Ethics Committee (if necessary).

Execution
- Sample: Describes the sample characteristics (demographic information).
- Preparation: Describes what has been done to prepare the execution of the evaluation (e.g., schedule, materials).
- Game applied: Indicates how the game application took place and any deviations from the plan.
- Data collection performed: Describes how data collection took place and any deviations from the plan.

Analysis
- Answer the analysis questions: Summarizes the data collected, describes how it was analysed, and answers each of the analysis questions defined (see the illustrative sketch after this table).
- Game quality level: Indicates the quality level of the evaluated game, obtained from the MEEGA+ scale.

Discussion
- Evaluation of results: Interprets and explains the findings from the Analysis section.
- Threats to validity: Discusses the main threats to validity and the mitigation strategies applied.

Conclusions and Future Work
- Summary: Provides a concise summary of the research objective and evaluation execution.
- Findings: Identifies the most important results of the study.
- Improvement opportunities: Suggestions for other studies to further investigate.

Acknowledgements: Identifies any sponsors, participants, and contributors who do not fulfil authorship criteria.

References: Lists all cited literature in the format requested by the publisher.

Appendices: Includes supplementary data and/or detailed analyses which might help others to use the results.
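To illustrate how the analysis questions can be answered from the collected questionnaire data, the following is a minimal Python sketch computing simple descriptive statistics per quality factor. It assumes responses coded on a 5-point Likert scale from -2 (strongly disagree) to 2 (strongly agree); the item identifiers and their grouping into dimensions are hypothetical placeholders, not the official MEEGA+ items.

```python
# Illustrative sketch only (hypothetical item identifiers, fictitious data):
# summarizing questionnaire responses to address AQ1 (usability) and
# AQ2 (player experience) with simple descriptive statistics.
from statistics import mean, median

# Hypothetical grouping of questionnaire items into the two quality factors;
# the official items and dimensions are defined by the MEEGA+ model itself.
DIMENSIONS = {
    "usability": ["U1", "U2", "U3"],
    "player_experience": ["PX1", "PX2", "PX3"],
}

def summarize(responses):
    """Compute mean and median per dimension from Likert responses
    coded from -2 (strongly disagree) to 2 (strongly agree)."""
    summary = {}
    for dimension, items in DIMENSIONS.items():
        scores = [response[item] for response in responses for item in items]
        summary[dimension] = {"mean": mean(scores), "median": median(scores)}
    return summary

# Three fictitious student responses.
responses = [
    {"U1": 2, "U2": 1, "U3": 2, "PX1": 1, "PX2": 2, "PX3": 1},
    {"U1": 1, "U2": 1, "U3": 0, "PX1": 2, "PX2": 1, "PX3": 2},
    {"U1": 2, "U2": 2, "U3": 1, "PX1": 0, "PX2": 1, "PX3": 1},
]
print(summarize(responses))
```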
4. Conclusions

In this technical report, we presented the MEEGA+ method, an evolution of an evaluation model for educational games used as an instructional strategy in computing education, improving the initial model proposed by Savi et al. (2011). The MEEGA+ method aims to provide systematic support for the evaluation of games for computing education, focusing on the quality evaluation of educational games (both digital and non-digital) in terms of usability and player experience. It is composed of an evaluation model (the MEEGA+ model), which defines the quality aspects on which a game is evaluated, and a process (the MEEGA+ process), which guides the conduct of the game evaluation.

As next steps, we plan to continue conducting case studies evaluating games (digital and non-digital) for computing education using the MEEGA+ method, in order to conduct a statistical analysis confirming the decomposition of the MEEGA+ factors and dimensions. In addition, we plan to evaluate the quality of the MEEGA+ method from the experts' perspective.
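As a concrete illustration of one building block of such a statistical analysis, the following minimal Python sketch computes Cronbach's alpha (Cronbach, 1951), a common internal-consistency coefficient for the items of a dimension. The data matrix and the choice of this particular coefficient are illustrative assumptions, not a prescription of the MEEGA+ method.

```python
# Illustrative sketch only (fictitious data): Cronbach's alpha as one
# internal-consistency check that such a statistical analysis could include.
import numpy as np

def cronbach_alpha(scores) -> float:
    """scores: a respondents-by-items matrix of Likert responses."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # per-item variance
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Five fictitious respondents answering the three items of one dimension.
data = np.array([
    [2, 1, 2],
    [1, 1, 1],
    [0, 1, 0],
    [2, 2, 2],
    [1, 0, 1],
])
print(f"alpha = {cronbach_alpha(data):.2f}")
```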
Acknowledgments

We would like to thank all the students and instructors who agreed to participate in the applications of the games using the MEEGA+ method. This work was supported by CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico – www.cnpq.br), an entity of the Brazilian government focused on scientific and technological development. This work was partially conducted during a visiting scholar period at the University of Cádiz, sponsored by CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior), a foundation within the Ministry of Education, Brazil (grant no. 88881.131485/2016-01).

References

Abt, C. C. (2002). Serious Games. Lanham: University Press of America.
ACM/IEEE-CS. (2013). Computer Science Curricula 2013: Curriculum Guidelines for Undergraduate Degree Programs in Computer Science. Available at: <https://www.acm.org/education/CS2013final-report.pdf>. Accessed: 06 Nov. 2017.
Acuña, S. T., Antonio, A., Ferré, X., López, M., & Maté, L. (2000). The software process: Modeling, evaluation and improvement. In Handbook of Software Engineering and Knowledge Engineering. World Scientific Publishing Company.
All, A., Castellar, E. P. N., & Looy, J. V. (2016). Assessing the effectiveness of digital game-based learning: Best practices. Computers & Education, 92–93, 90-103.
Anderson, L. W., Krathwohl, D. R., & Bloom, B. S. (2001). A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives. Longman.
Andrade, D. F., Tavares, H. R., & Valle, R. C. (2000). Teoria de Resposta ao Item: conceitos e aplicações. ABE – Associação Brasileira de Estatística, 4º SINAPE.
Andrade, H., & Valtcheva, A. (2009). Promoting learning and achievement through self-assessment. Theory Into Practice, 48, 12-19.
Basili, V. R., Caldiera, G., & Rombach, H. D. (1994). Goal Question Metric paradigm. In J. J. Marciniak (Ed.), Encyclopedia of Software Engineering (pp. 528-532). New York: Wiley-Interscience.
Battistella, P., & Gresse von Wangenheim, C. (2016). Games for teaching computing in higher education – a systematic review. IEEE Technology and Engineering Education Journal, 9(1), 8-30.
Beecham, S., Hall, T., Britton, C., Cottee, M., & Rainer, A. (2005). Using an expert panel to validate a requirements process improvement model. Journal of Systems and Software, 76(3), 251-275.
Bowman, D. D. (2018). Declining talent in computer related careers. Journal of Academic Administration in Higher Education, 14(1), 1-4.
Boyle, E. A., Connolly, T. M., & Hainey, T. (2011). The role of psychology in understanding the impact of computer games. Entertainment Computing, 2, 69–74.
Boyle, E. A., Hainey, T., Connolly, T. M., Gray, G., Earp, J., Ott, M., Lim, T., Ninaus, M., Ribeiro, C., & Pereira, J. (2016). An update to the systematic literature review of empirical evidence of the impacts and outcomes of computer games and serious games. Computers & Education, 94, 178-192.
Branch, R. M. (2010). Instructional Design: The ADDIE Approach. New York: Springer.
Brooke, J. (1996). SUS: A 'quick and dirty' usability scale. In Usability Evaluation in Industry (pp. 189-194).
Brown, G. T. L., & Harris, L. R. (2013). Student self-assessment. In J. H. McMillan (Ed.), The SAGE Handbook of Research on Classroom Assessment (pp. 367-393). Thousand Oaks: Sage Publications.
Brown, G., Andrade, H., & Chen, F. (2015). Accuracy in student self-assessment: Directions and cautions for research. Assessment in Education: Principles, Policy & Practice, 22(4), 1-26.
Budgen, D., Turner, M., Brereton, P., & Kitchenham, B. (2008). Using mapping studies in software engineering. In Proc. of the 20th Annual Workshop of the Psychology of Programming Interest Group (pp. 195-204). Lancaster University, Lancaster, UK.
Calderón, A., & Ruiz, M. (2015). A systematic literature review on serious games evaluation: An application to software project management. Computers & Education, 87, 396-422.
Calderón, A., Ruiz, M., & O'Connor, R. (2018). A multivocal literature review on serious games for software process standards education. Computer Standards & Interfaces, 57, 36-48.
Çiftci, S. (2018). Trends of serious games research from 2007 to 2017: A bibliometric analysis. Journal of Education and Training Studies, 6(2), 18-27.
Connolly, T. M., Boyle, E. A., MacArthur, E., Hainey, T., & Boyle, J. M. (2012). A systematic literature review of empirical evidence on computer games and serious games. Computers & Education, 59(2), 661-686.
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297–334.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-340.
Dawes, J. (2008). Do data characteristics change according to the number of scale points used? An experiment using 5-point, 7-point and 10-point scales. International Journal of Market Research, 50(1), 61-77.
DeVellis, R. F. (2016). Scale Development: Theory and Applications (4th ed.). Thousand Oaks: SAGE Publications.
Djaouti, D., Alvarez, J., Jessel, J. P., & Rampnoux, O. (2011). Origins of serious games. In M. Ma, A. Oikonomou, & L. Jain (Eds.), Serious Games and Edutainment Applications. London: Springer.
Dolan, E. L., & Collins, J. P. (2015). We must teach more effectively: Here are four ways to get started. Molecular Biology of the Cell.