Numerically-Driven Inferencing in Instruction: The Relatively Broad Transfer of Estimation Skills

Edward L. Munnich (munnich@berkeley.edu), Michael A. Ranney (ranney@cogsci.berkeley.edu), Daniel M. Appel (dappel@berkeley.edu)
University of California, Graduate School of Education, 4533 Tolman Hall, Berkeley, CA 94720-1670

Abstract

What is the current U.S. immigration rate? Policy-makers, voters, and consumers should have a sense of quantities of this kind in order to help shape effective policies, and schools must prepare students for such roles. We examine the Numerically-Driven Inferencing paradigm (NDI), using a method in which participants: Estimate policy-relevant quantities, state Preferences for these, receive actual quantities as feedback to Incorporate, and offer preferences again to exhibit any policy Changes (EPIC). Past work has generally suggested rather poor estimation of such base rates, but there is potential for improvement as one carries out many estimates over various issues, and perhaps a benefit for taking a more analytic approach to estimation. Here we consider whether one can improve estimation skills broadly by using multiple perspectives in estimation problems, and by working out of conflicts that arise among multiple, locally coherent, numerical understandings. Using an NDI curriculum that emphasized disconfirmation, we found that estimation improved across a wide variety of questions.

What is the current annual U.S. immigration rate (including both legal and illegal immigration)? Please take a moment to estimate this quantity, and reflect on the kinds of skills you used to generate your estimate. One might assume that those who know about immigration issues are good at estimating immigration rates, while those who know about environmental issues are good at estimating per capita garbage production, but that there is no general skill for estimating across content domains. Research on estimation suggests that people can improve the accuracy of estimates in a variety of ways, including using category information (e.g., Huttenlocher, Hedges, & Prohaska, 1988) or learning relevant "seed" numbers (e.g., Brown & Siegler, 2001), but there is no indication that such benefits transfer broadly to estimation over a wide variety of quantities, to say nothing of problem-solving skills more generally. However, we suggest that in domains ranging from estimation to physics problem-solving, it is important to learn to seek alternatives to initial conceptions of problems, which brings the possibility of disconfirming hypotheses. The potential value of such a strategy is illustrated by Johnson-Laird and Hasson (2003), who found that when some premises are consistent with an invalid conclusion, counterexamples are useful in rejecting the conclusion. The focus of the present paper is on the extent to which analytic estimation skills can transfer broadly, so that people might improve their estimates for quantities across a broad range of issues without specific instruction on those issues.

Theoretical Framework

This project builds on the Numerically-Driven Inferencing paradigm (NDI; Ranney, Cheng, Nelson, & Garcia de Osuna, 2001), which examines how understandings of relevant base rate information (e.g., the present U.S. immigration rate) affect people's attitudes on public policy issues (e.g., given the immigration rate, what would you prefer that rate to be?). With NDI's methods, people need not be asked whether they are for or against a particular issue, but rather what they would prefer the numbers to be. Indeed, it is not uncommon that those who consider themselves to be in favor of reducing immigration (e.g., believing the current base rate of a policy-relevant quantity to be 10%, one might prefer 5%) have more in common than they realize with those who claim to favor an increase (e.g., believing the rate to be 1%, but sharing a preference for 5%). However, if such people were only asked the extent to which they favor or oppose an issue, they would appear to be at odds. In contrast, NDI asserts that qualitative attitudes have some (albeit not necessarily direct) relationships with relevant quantities, and aims to explore the nature of these relationships. By focusing on numerical concepts, NDI can shed light on how these concepts interact with people's initial attitudes, and the extent to which learning actual values shapes subsequent attitudes: Do we maintain preferences for the same absolute rates, or for the same proportions relative to actual rates? To what extent do we shift our policy stances after surprising feedback (Munnich, Ranney, Nelson, Garcia de Osuna, & Brazil, 2003)?
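To make the absolute-versus-proportional question concrete, the following minimal Python sketch (ours, not part of the study's materials; the function name and the feedback value of 0.4% are purely illustrative assumptions) computes the two revised preferences a participant might report after incorporating base-rate feedback: keeping the same absolute preferred rate, or keeping the same preferred proportion of the actual rate.

```python
def revised_preference_candidates(estimate, preference, actual):
    """Two ways a preference might be updated after base-rate feedback:
    (1) keep the same absolute rate one preferred before feedback, or
    (2) keep the same proportion of the (now-known) actual rate that the
        original preference bore to the original estimate."""
    same_absolute = preference
    same_proportion = actual * (preference / estimate)
    return same_absolute, same_proportion

# Hypothetical illustration: a participant estimates the immigration rate
# at 10% and prefers 5% (half of the estimated rate). Suppose feedback
# indicates an actual rate of 0.4% (a made-up value for illustration).
print(revised_preference_candidates(estimate=0.10, preference=0.05, actual=0.004))
# -> (0.05, 0.002): either still prefer 5%, or now prefer 0.2% (half of 0.4%)
```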
NDI builds on research in many fields, such as attitudes, conceptual change, mental models, and judgment and decision-making (although NDI deals with base rates directly, rather than through Bayesian analyses). In particular, NDI has drawn on work in scientific conceptual change, including the Theory of Explanatory Coherence (TEC; Ranney & Thagard, 1988; Thagard, 1989), which describes change as spawned by incoherence and conflicts among ideas, such that people try to revise their beliefs to increase global coherence. In an illustration of this, Ranney, Schank, Mosmann, and Montoya (1993; based on a misconception noted by Keysar, 1990) found that most participants initially believed that Berlin lay on the East/West German border, but revised their beliefs as they incrementally received information that conflicted with this misconception.
References

[1] M. Ranney et al. Qualitative and Quantitative Effects of Surprise: (Mis)estimates, Rationales, and Feedback-Induced Preference Changes While Considering Abortion. 2004.
[2] M. Ranney et al. Toward an Integration of the Social and the Scientific: Observing, Modeling, and Promoting the Explanatory Coherence of Reasoning. 2003.
[3] P. Thagard et al. Explanatory coherence. 1993.
[4] P. Thagard et al. Explanatory Coherence and Belief Revision in Naive Physics. 1988.
[5] J. R. Anderson et al. The Transfer of Cognitive Skill. 1989.
[6] R. Siegler et al. Seeds aren't anchors. Memory & Cognition, 2001.
[7] W. D. Gray et al. Transfer of Cognitive Skills. 1987.
[8] P. Johnson-Laird et al. Counterexamples in sentential reasoning. Memory & Cognition, 2003.
[9] E. L. Munnich et al. Policy Shift Through Numerically-Driven Inferencing: An EPIC Experiment About When Base Rates Matter. 2003.
[10] R. Shillcock et al. Proceedings of the Twenty-Sixth Annual Conference of the Cognitive Science Society. 1998.
[11] N. Schwarz. Self-reports: How the questions shape the answers. 1999.
[12] L. V. Hedges et al. Hierarchical organization in ordered domains: Estimating the dates of events. 1988.
[13] Bucciarelli et al. Proceedings of the Twenty-Seventh Annual Conference of the Cognitive Science Society. 2005.