Recent years have seen tremendous advancements and innovations in technology that can be (and have been) used for teaching statistical thinking, reasoning, and literacy. However, these modern technological tools do not automatically yield better learning results than traditional methods of instruction. More important than the technology itself is a sound theoretical basis for building an effective technological tool. It is proposed that this theoretical basis should include usability aspects, pedagogical aspects, and content-specific aspects. A brief description of the ACT tutoring systems program illustrates what a successful combination of these three aspects can look like. Then, the importance of the often-neglected content-specific aspects is demonstrated with examples from our own research. It is recommended that special emphasis be given to a systematic and sound evaluation of such technological tools.

BACKGROUND

Recent years have seen tremendous advancements and innovations in technology that can be (and have been) used for teaching statistical thinking, reasoning, and literacy. Such technologies have been applied to basically all kinds of statistics education, such as online courses, tools for data analysis, and tutoring programs that specifically attempt to improve statistical thinking. However, so far, the vast majority of these tools do not seem to have been systematically evaluated with respect to whether they are really effective. And when they have been evaluated, usually only pre-post designs without control groups were used (e.g., Kuhn, Hoppe, & Wichmann, 2006; Mills & Raju, 2011; Raffle & Brooks, 2005). Such designs, however, may have very low internal validity (Rosenthal & Rosnow, 1991): if improvements are found, it is not clear whether they are attributable to the new technologies or to other factors that were not controlled for.
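To make this internal-validity concern concrete, the following sketch (plain Python, with purely hypothetical numbers) simulates a study in which a teaching tool has no true effect, but every student gains about five points from retesting and practice alone. A pre-post comparison without a control group then "detects" an improvement, while a comparison against a control group does not.

```python
import random

random.seed(0)  # fixed seed so the simulated study is reproducible


def post_score(pre, treated):
    """Post-test score: no true treatment effect, but everyone gains
    roughly 5 points from retesting/practice, plus some noise."""
    practice_gain = 5.0       # confound: affects all students, tool or not
    treatment_effect = 0.0    # the tool itself does nothing in this scenario
    noise = random.gauss(0, 3)
    return pre + practice_gain + (treatment_effect if treated else 0.0) + noise


# 200 students with pre-test scores around 60; half use the tool, half don't
pre = [random.gauss(60, 8) for _ in range(200)]
treated_post = [post_score(p, True) for p in pre[:100]]
control_post = [post_score(p, False) for p in pre[100:]]


def mean(xs):
    return sum(xs) / len(xs)


# Pre-post design: compares the same group before and after (≈ +5, spurious)
prepost_gain = mean(treated_post) - mean(pre[:100])
# Controlled design: compares against an untreated group (≈ 0, the true effect)
vs_control = mean(treated_post) - mean(control_post)

print(f"pre-post gain (no control group): {prepost_gain:+.1f}")
print(f"gain relative to control group:   {vs_control:+.1f}")
```

The "gain" in the first comparison is entirely an artifact of the practice effect, which is exactly the kind of uncontrolled factor the studies cited above cannot rule out.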
So, "…in terms of future work in this field, there is a need for well-designed studies that control for confounding variables and other challenges related to empirical research" (Mills & Raju, 2011, p. 22). Apart from these methodological concerns, the results of the evaluation studies were quite mixed, and no general superiority of teaching with the new technologies over traditional teaching methods could be established (e.g., Härdle, Klinke, & Ziegenhagen, 2007; Mills & Raju, 2011). Why is this so? In this paper, I propose that a systematic improvement in statistics education through new technologies can only be achieved if the technical aspects are well connected to theoretical aspects that are relevant to the teaching-learning process. In the next section, these theoretical aspects are briefly discussed; then one of them, content- or task-specific theories, is illustrated in more detail.

THEORETICAL REQUIREMENTS FOR TECHNOLOGY TOOLS

Fascinating as a new technology may be, it does not automatically guarantee that users will profit from it: users must also be able to interact appropriately with the respective tools. How to optimize this interaction can be found out by trial and error, but if a tool is to be used in the long term, it is worthwhile to develop or rely on a usability theory that enables tool builders to design adequate user interfaces. Such user interfaces are especially important in tools applied for teaching purposes. If the contents to be taught are complex, as is the case for statistical knowledge, it may not always be possible to build interfaces that are fully intuitive. In that case, it is necessary to teach the user some kind of technological literacy (Gould, 2010). How best to teach that should be based on a pedagogical theory.
From such a pedagogical theory, one should also be able to derive which kinds of teaching strategies should be used for which students under which circumstances. And finally, the contents to be taught, or the task to be solved, can also make a remarkable difference. In particular, it might make a huge difference how a given task is represented to the learner or recipient of statistical
[1] Mills, J. D., et al. (2011). Teaching Statistics Online: A Decade's Review of the Literature About What Works.
[2] Sedlmeier, P., et al. (2014). Visual integration with stock-flow models: How far can intuition carry us?
[3] Brooks, G. P., et al. (2005). Using Monte Carlo Software to Teach Abstract Statistical Concepts: A Case Study.
[4] Sedlmeier, P. (2005). Information Sampling and Adaptive Cognition: Intuitive Judgments about Sample Size.
[5] Koedinger, K., et al. (2007). Exploring the Assistance Dilemma in Experiments with Cognitive Tutors.
[6] Gigerenzer, G., et al. (2001). Teaching Bayesian reasoning in less than two hours. Journal of Experimental Psychology: General.
[7] Sedlmeier, P., et al. (1999). Improving Statistical Reasoning: Theoretical Models and Practical Implications.
[8] Sedlmeier, P., et al. (1998). The distribution matters: two types of sample-size tasks.
[9] Corbett, A. T., et al. (2007). Cognitive Tutor: Applied research in mathematics education. Psychonomic Bulletin & Review.
[10] Sedlmeier, P. (2001). Intelligent Tutoring Systems.
[11] Rosnow, R. L., et al. (1984). Essentials of Behavioral Research: Methods and Data Analysis.
[12] Ashcraft, M. (2002). Math Anxiety: Personal, Educational, and Cognitive Consequences.
[13] Anderson, J. R., et al. (1995). Cognitive Tutors: Lessons Learned.
[14] Cosmides, L., et al. (1996). Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty. Cognition.
[16] Gould, R. G., et al. (2010). Statistics and the Modern Student.
[17] Tversky, A., et al. (1972). Subjective Probability: A Judgment of Representativeness.
[18] Gigerenzer, G., et al. (1997). Intuitions About Sample Size: The Empirical Law of Large Numbers.
[19] Wichmann, A., et al. (2006). Computational modelling and simulation fostering new approaches in learning probability.