Evaluation: turning technology from toy to tool: report of the working group on evaluation

Diane M. Miller, University of Southern Mississippi, USA (dmmiller@medea.gp.usm.edu)
Marian Petre (joint chair), Open University, UK (M.Petre@open.ac.uk)
Paul Schragger, Villanova University, USA (schragge@monet.vill.edu)
Fred Springsteel, University of Missouri, USA (csfreds@mizzou1.missouri.edu)

Evaluation is an educational process, not an end in itself; we learn in order to help our students learn. This paper presents a pragmatic perspective on evaluation, viewing it as a matter of trade-offs. The space of possible evaluation approaches is analysed in terms of trade-offs among desired evidence, costs, and other constraints. The approach is illustrated with example scenarios, and a list of selected resources is provided.

Aim of the Working Group

This working group set out to consider how pragmatic, empirical evaluation can be used to harness technology for teaching Computer Science and Information Systems. Educators reject the tendency to adopt ‘technology for technology’s sake’ and want to analyse technology in terms of its suitability for a teaching purpose and its impact, both costs and benefits, on teaching practice and outcomes. The question is not ‘Can we use technology in teaching?’, but ‘Can we use technology to enhance teaching and improve learning?’ Empirical evaluation and technology can form a powerful partnership to enhance teaching purposefully and usably. The working group explored the parameters of an effective partnership.

Introduction

Computer Science and Information Systems (CS/IS) are rife with examples of technology-driven projects that fail to address fundamental issues, of systems designed by introspection, of software evaluated by market share alone, and of good ideas neglected after poor initial implementations. Evaluation is often seen as an expensive, time-consuming, esoteric process with little practical relevance. But principled, practical evaluation, that is, empirical study of actual practice, perhaps within a tightly focused question or a particular task, can identify crucial issues, debunk inappropriate folklore, give substance to intuition, disambiguate causes, and make the difference between failure and success. The introduction of new technologies increases the importance of evaluation, in order to untangle the snarl of factors and influences that impinge on how technology is used in context. Unless educational technology can address educational objectives, the ‘nifty’ ideas it encompasses are no more than fashion. Evaluators need to base their analyses, and designers need to base their designs, on real practice; not everything that is ‘intuitive’ or ‘sexy’ is appropriate within real teaching environments. Evaluation offers a means of putting technology into perspective, so that it is viewed as a tool for addressing real problems: a means, rather than an end in itself.
Technology as toy and tool

The current leading-edge technologies, such as videoconferencing, multimedia, software visualization, and Internet-enabled applications (World Wide Web, electronic mail, bulletin board systems, etc.), are perceived to have immediate potential for use as educational tools. However, it is all too easy to mis-apply these technologies, using them as flashy toys or interesting playthings. Technology-led adoption follows a ‘we have it; let’s use it’ enthusiasm. But that can be a blind alley for evaluation: often the need for an answer expires before we have a chance to ask the question. We should pursue an education-led deliberation: ‘We have it, but is it appropriate for this purpose?’ Technology remains a toy when it is used merely because it is attractive and exciting, while its real potential goes unexplored. Technology is often introduced into education to attract and excite, with no more than an assumption that it might be useful. But if it is applied without deliberate study of its use in context, and without evaluation of its impact on that use, ‘educational’ technology remains a toy.
