Detecting Changes in Math Strategy Use During Learning

Caitlin Tenison (ctenison@andrew.cmu.edu)
Department of Psychology, Pittsburgh, PA 15206 USA

John R. Anderson (ja0s@andrew.cmu.edu)
Department of Psychology, Pittsburgh, PA 15206 USA

Abstract

The ability to accurately assess the strategies used in math problem solving is an important part of understanding the effects of practice. Unfortunately, the measures researchers trust are often unreliable and ill suited to studying the effects of practice. In the current study we are interested in identifying the intermediate strategies that emerge as people switch from computational to retrieval strategies. To build a more accurate assessment of strategy, we combine latency, neural evidence, and verbal reports in a mixture model. We compare the model's predictions of strategy use with concurrent assessments collected during problem solving. The results suggest that while participants report a partial computation-retrieval strategy distinct from pure computation, our model finds no evidence of such a partial state; it does, however, distinguish early retrieval from well-practiced retrieval. These results suggest a discrepancy between the distinctions people make when reporting strategy use and the distinctions in the cognitive processes underlying that use.

Keywords: fMRI; mixture model; problem solving; strategy use.

Introduction

Tools to assess math strategy use are critical to the study of math problem solving. A researcher can glean only so much from knowing a person's solution to a task, because the solution alone provides little information about the processes used to arrive at it. Take, for example, the problem of adding all the numbers from 1 to 100. The solver could mentally keep a running total, adding each number in turn, or could apply a formula to arrive at the answer (i.e., 100*(100+1)/2).
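The contrast between the two strategies can be sketched in code. Both produce the same answer, but through different processes; the function names are ours, purely for illustration:

```python
def sum_by_counting(n):
    """Computational strategy: keep a running total, adding each number."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_by_formula(n):
    """Formula strategy: one-step closed form n*(n+1)/2."""
    return n * (n + 1) // 2

print(sum_by_counting(100))  # 5050
print(sum_by_formula(100))   # 5050
```

Knowing only that a solver answered 5050 tells us nothing about which of these processes was used, which is precisely the assessment problem at issue.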
The strategy a student uses to solve a math problem is a valuable measure of their understanding of the mathematical concepts underlying the problem. As students gain practice with a set of problems, the strategies they use to solve them change. Practice often causes participants to switch from strategies that involve calculation (referred to here as computational strategies) to strategies that involve recall of previously learned facts (referred to here as retrieval strategies) (Imbo & Vandierendonck, 2008; Ischebeck et al., 2007). According to the adaptive strategy choice model, the shift to retrieval arises from an increased association between the math problem and its solution, such that a participant can retrieve the answer from memory (Siegler & Shipley, 1995). Work studying children learning arithmetic suggests that strategies emerge and/or decline in use through a mix of metacognitive strategy discovery and associative mechanisms of gradual learning (Shrager & Siegler, 1998). This idea, summarized by Siegler's 'overlapping waves' theory, describes the gradual changes in children's strategy use over time, from less efficient to more efficient strategies. These changes in strategy use are an important feature of learning, and consequently the ability to accurately assess them is necessary for the study of math learning.

The different methods for assessing strategy use involve tradeoffs when applied to a dynamic learning task. Assessing strategy use becomes especially difficult when studying math learning in the fMRI scanner. A verbal protocol cannot be collected in the context of an fMRI study without affecting the quality of the data: speaking modulates breathing, which in turn has been shown to affect the blood-oxygen-level-dependent (BOLD) response (Birn, Smith, Jones, & Bandettini, 2008).
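To make the associative account concrete, the sketch below simulates how repeated practice might strengthen a problem-answer association until retrieval overtakes computation. The update rule, learning rate, and threshold here are illustrative assumptions of ours, not parameters from Siegler and Shipley's model:

```python
def simulate_strategy_shift(n_trials=50, learning_rate=0.15, threshold=0.7):
    """Toy sketch of an associative strategy shift: each solve strengthens
    the problem-answer association, and retrieval is chosen once that
    strength exceeds a threshold (all parameters are illustrative)."""
    strength = 0.0
    strategies = []
    for _ in range(n_trials):
        if strength > threshold:
            strategies.append("retrieval")
        else:
            strategies.append("computation")
        # Strengthen the association with diminishing gains toward 1.0.
        strength += learning_rate * (1.0 - strength)
    return strategies

history = simulate_strategy_shift()
print(history[:10])  # early trials: computation gives way to retrieval
```

Under these toy parameters the shift is abrupt, whereas overlapping waves theory emphasizes gradual, probabilistic competition among strategies; the sketch captures only the associative-strengthening intuition.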
A number of experiments have explored ways to simplify concurrent verbal assessment in order to reduce its reactivity. For instance, in several studies participants were provided with a list of strategies after each problem and encouraged to choose the option that best represented the strategy they had used (Campbell & Timm, 2000; Grabner et al., 2011; Imbo & Vandierendonck, 2008). In support of the effectiveness of this technique, Grabner et al. (2011) found similar brain responses for items reported to be solved with the same strategy. This method of concurrent assessment, however, has two flaws: first, suggesting alternative strategies may alter the participant's problem-solving methodology, and second, the participant is forced to choose among the provided strategies, which may not include the specific method actually used.

These two flaws can be avoided by using a retrospective strategy assessment. Retrospective strategy assessments (RSAs) are a less reactive form of strategy assessment, but they are also less accurate (Russo, Johnson, & Stephens, 1989). In an RSA, researchers ask the participant to report strategy use after the entire task has been completed, often with a list of the problems to help cue memory (Grabner et al., 2009). The advantage of the RSA is that task data remain unaffected by the assessment of strategy. Additionally, the RSA allows a more detailed report of the specific strategy than concurrent assessments do. Nevertheless, this form of assessment is ill suited to dynamic learning tasks in which solution strategies change