This paper presents a quantitative assessment of the importance of adaptation to the learning environment as a component of the learning curve in performance data from a computer-based tutor. In Experiment 1, verbal protocols are used to investigate the nature of changes in low-level interactions that take place during learning with a computerized tutor called Stat Lady (Shute & Gluck, 1994). The data show consistent behavioral changes in the distribution of attention, which account for a substantial portion of the learning curve, independent of error rates. These changes are primarily decreases in the verbalization of on-screen text, although the elimination of interface confusion also contributes to the efficiency gain. Experiment 2 tests the generalizability of the results in a larger population of learners and shows that adaptation to the learning environment accounts for a comparable proportion of the learning curve in this new population. More than half of the learning curve could be accounted for by these changes in low-level interactions. These results suggest that more accurate learning models should include a representation of increasing knowledge of the instructional environment as the model interacts with that environment. An ACT-R (Anderson & Lebiere, 1998) model is provided that reproduces the qualitative and quantitative data from the verbal protocol participants. The model reproduces these behaviors via (1) the acquisition of declarative knowledge of the structure of the problem scenarios, and (2) subsymbolic procedural tuning for more efficient goal completion.

The curriculum and the learning environment: A componential analysis of the learning curve

Card, Moran, and Newell (1983) made the point that the study of computer use is an especially important applied research topic, given the proliferation of computers in every aspect of contemporary society. A special case of computer use, which is becoming increasingly prevalent in education and training settings, is the interaction of students with a computer-based tutoring system. The primary research concern for those who design and deploy these systems is typically to maximize the degree to which students acquire the curriculum the tutor is intended to teach. There is, however, another sort of learning going on simultaneously: the learning of the learning environment in which the curriculum is presented. Conventional wisdom tells us that people learn something about the computer interfaces they use and, with experience, become more facile at navigating through them. There is little that is new in such a claim. However, systematic data that document the development of such interface knowledge and its impact on performance are considerably less prevalent than the assumption itself. That is the contribution offered here. This paper describes two empirical investigations dedicated to producing a quantitative assessment of interface learning in a particular computer-based tutoring system.

Why Use a Computer Tutor?

There are a number of different types of computer-based software applications that we could have chosen to study. Text editors, spreadsheets, programming languages, and even games are all valid candidates for the study of interface learning. Why pick a tutoring system, where one also has issues of curriculum learning to worry about?
First and foremost, as cognitive psychologists and instructional designers we are interested in issues of curriculum learning, and the desire to better understand the processes and products of learning runs deep. Second, computers are often (and increasingly) used as instructional tools, intended to enhance the learning of curriculum objectives (Lajoie & Derry, 1993). In some cases, these computerized learning environments lead to improved performance, suggesting that students have learned something from the experience (e.g., Anderson, Corbett, Koedinger, & Pelletier, 1995; Lesgold, Eggan, Katz, & Rao, 1992; Shute, 1995).

Two measures are used to assess learning in these studies. The most universal is learning gain, generally measured as pretest-to-posttest improvement, although there are other variants. To the extent that there is evidence for improvement on the posttest, there is evidence for acquisition of the curriculum. Researchers sometimes include a second dependent measure as well: problem solving time. The claim is that, as knowledge of the curriculum grows stronger, problem solving time will decrease (e.g., Anderson, 1993; Anderson, Conrad, & Corbett, 1989). This practice originates in the cognitive psychology tradition of using time as an indicator of degree of learning and plotting its change over practice. The resulting graph is best described by a power law learning curve (Newell & Rosenbloom, 1981). However, in the context of problem solving with a computer-based tutor, the questions of how much students are "learning the curriculum" versus "learning the environment," and what effect this has on interpretations of the learning curve, often go unasked. In the absence of attention to these questions, decreases in solution times are generally assumed to result from the former. This assumption is almost entirely untested. The research described here provides an example of how careful analysis of verbal protocols and performance data can reveal components of the learning curve in addition to that part which is attributable to the development of cognitive skill.

A Growing Need

Almost a decade ago, a paper was published that foreshadowed the need to explore more deeply the composition of the learning curve in the context of learning from computer tutors. In their paper on student learning with the Lisp Tutor, Anderson, Conrad, and Corbett (1989) left open the possibility that some portion of the improvement they saw in coding time actually reflected mastery of the interface. They found that two important variables predicted students' solution times. The first was related to the amount of prior practice: Anderson et al. represented the acquisition of skill in Lisp coding as the acquisition of production rules for that skill, and their analysis showed that the number of prior production firings (the amount of prior practice) was a good predictor of future problem solving time. The second variable that predicted problem solving time was how far a student had progressed through the curriculum (measured by lesson number). The form of such an analysis can be made concrete with the sketch below.
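What follows is a minimal sketch, not the authors' code or data, of this kind of log-linear regression. The power law of practice (Newell & Rosenbloom, 1981) takes the form T = B * N^(-alpha), which is linear after taking logs, so lesson number can be entered as a predictor alongside log prior practice. All variable names and values here are hypothetical illustrations.

    # Minimal sketch of a log-linear regression in the spirit of
    # Anderson, Conrad, and Corbett (1989): predict solution time from
    # prior practice and lesson number. Power law of practice:
    #   T = B * N**(-alpha)  =>  log T = log B - alpha * log N
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical trial-level data: prior practice opportunities for
    # the productions exercised on each trial, and the lesson number.
    n_trials = 500
    prior_practice = rng.integers(1, 50, size=n_trials)   # N >= 1
    lesson = rng.integers(1, 11, size=n_trials)           # lessons 1-10

    # Generate times from an assumed model: a practice-driven power law
    # plus an additional per-lesson speedup (interface familiarity).
    alpha, beta = 0.4, 0.05
    log_time = (np.log(20.0)
                - alpha * np.log(prior_practice)
                - beta * lesson
                + rng.normal(0.0, 0.2, size=n_trials))

    # Log-linear regression: log T ~ 1 + log N + lesson.
    X = np.column_stack([np.ones(n_trials), np.log(prior_practice), lesson])
    coef, *_ = np.linalg.lstsq(X, log_time, rcond=None)

    print(f"estimated power-law exponent for practice: {-coef[1]:.3f}")
    print(f"estimated per-lesson speedup (interface): {-coef[2]:.3f}")

A reliable negative lesson-number coefficient, in the presence of the practice term and with error rates unchanged across lessons, is the signature that Anderson et al. interpreted as general interface learning.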
Anderson et al. made the following statement regarding the lesson-number result:

    The effect of lesson number ... may just reflect an increased familiarity with the tutor interface. The fact that the same variable shows up for old productions as for new productions suggests that at least part of the phenomenon is a matter of general interface learning. It is also the case that lesson number is not significantly related to error rate. This is further evidence that the effect may be an interface effect and not reflect any real proficiency in coding. (p. 484)

This acknowledgement of the role of "interface learning" suggests that focusing on curriculum learning alone does not portray the complete picture of student learning in computer environments.

In this paper, we examine the performance improvements of students working in a computerized learning environment called Stat Lady, which teaches introductory descriptive statistics (Shute & Gluck, 1994). There is a good deal of evidence from previous assessment studies that people using the Stat Lady tutor develop improved skill on the curriculum objectives (Shute, 1995; Shute, Gawlick, & Gluck, 1998). However, is the learning of curriculum objectives the only learning that is taking place? Our goal in this paper is to take the analysis of learning from the Stat Lady tutor well beyond a cursory glance at learning gain and broad measures of problem solving time. We will provide a description of how students interact with this computerized learning environment and how those interactions change as students learn. In two experiments, the careful decomposition of student behavior will show quantitatively that interface learning can account for a significant portion of the learning curve. As will be seen, consistent trends in verbalizations over practice opportunities suggest that learners quickly fine-tune their low-level interactions with the tutor in a manner that allows for more efficient problem solving. These adaptations at the level of interactions with the tutor account for a substantial portion of the change in problem solving time across practice opportunities. This paper describes those results, provides an interpretation in terms of the acquisition of declarative knowledge of the interface and the structure of the problem scenarios, and concludes with a discussion of the implications of these results for tutor design and student modeling.

Experiment 1

The goal of Experiment 1 was to collect data that would provide a rich picture of students' learning both of the curriculum and of the interface as they interacted with the Stat Lady tutor. In particular, we were interested in how students' attention to different parts of the interface changed as they gained experience with the tutor. One source of such data is verbal protocols. Verbal protocols have played a significant role in contemporary studies of learning in many different domains (Ericsson & Simon, 1993; Newell & Simon, 1972). One of the fundamental assumptions underlying the use of verbal protocol data is that verbalizations reflect some subset of what is currently held, or was very recently held, in working memory (Ericsson & Simon, 1993). It seems reasonable to propose that the particular subset of working memory contents that gets verbalized would be that portion that is the current focus of attention.
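As a concrete illustration of the componential logic described above, consider the following sketch. The data, category names, and proportions are hypothetical, not the paper's coding scheme or results; the point is only to show how protocol-coded components of solution time can be expressed as shares of the overall speedup.

    # Toy decomposition of the learning curve into protocol-coded
    # components (all numbers and category names are illustrative).
    import numpy as np

    # Rows: practice opportunities 1..5; columns: seconds spent in each
    # coded component of the trial on that opportunity.
    components = ["text verbalization", "interface actions", "problem solving"]
    seconds = np.array([
        [40.0, 15.0, 60.0],
        [28.0, 10.0, 55.0],
        [18.0,  6.0, 52.0],
        [12.0,  4.0, 50.0],
        [ 9.0,  3.0, 49.0],
    ])

    # Express each component's first-to-last decline as a share of the
    # decline in total solution time.
    total_decline = seconds[0].sum() - seconds[-1].sum()
    for name, col in zip(components, seconds.T):
        share = (col[0] - col[-1]) / total_decline
        print(f"{name}: {share:.0%} of the total speedup")

Under numbers like these, most of the decline in total solution time comes from the reading and verbalization component rather than from problem solving proper, which is the kind of pattern the experiments reported in this paper quantify.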
References

[1] Gluck, K. A., et al. (1998). Effects of Practice and Learner Control on Short- and Long-Term Gain and Efficiency. Human Factors.
[2] Card, S. K., Moran, T. P., & Newell, A. (1983). The Psychology of Human-Computer Interaction.
[3] Pierrakos, D., et al. (1994). User Modeling and User-Adapted Interaction.
[4] Newell, A., & Rosenbloom, P. S. (1981). Mechanisms of Skill Acquisition and the Law of Practice.
[5] Shute, V. J. (1995). SMART: Student Modeling Approach for Responsive Tutoring. User Modeling and User-Adapted Interaction.
[6] Anderson, J. R., & Lebiere, C. (1998). The Atomic Components of Thought.
[7] Anderson, J. R., et al. (1995). General Principles for an Intelligent Tutoring Architecture.
[8] Newell, A., & Simon, H. A. (1972). Human Problem Solving.
[9] Kieras, D. E., et al. (1985). The Acquisition of Procedures from Text: A Production-System Analysis of Transfer of Training (Technical Report No. 16).
[10] Anderson, J. R., et al. (1990). Cognitive Modeling and Intelligent Tutoring. Artificial Intelligence.
[11] Woolf, B. P., et al. (2010). Student Modeling. In Advances in Intelligent Tutoring Systems.
[12] Anderson, J. R., Corbett, A. T., Koedinger, K. R., & Pelletier, R. (1995). Cognitive Tutors: Lessons Learned. The Journal of the Learning Sciences.
[13] Anderson, J. R. (1993). Rules of the Mind.
[14] Anderson, J. R., et al. (1996). Transfer of Declarative Knowledge in Complex Information-Processing Domains. Human-Computer Interaction.
[15] Anderson, J. R., Conrad, F. G., & Corbett, A. T. (1989). Skill Acquisition and the LISP Tutor. Cognitive Science.
[16] Ericsson, K. A., & Simon, H. A. (1993). Protocol Analysis: Verbal Reports as Data (rev. ed.).
[17] Corbett, A. T., & Anderson, J. R. (1995). Knowledge Tracing: Modeling the Acquisition of Procedural Knowledge. User Modeling and User-Adapted Interaction.
[18] Bredeweg, B., et al. (1994). Student Modelling: The Key to Individualized Knowledge-Based Instruction. NATO ASI Series.
[19] Lajoie, S. P., & Derry, S. J. (Eds.). (1993). Computers as Cognitive Tools.