Logo Needs Research: A Response to Papert's Paper

Professor Papert's paper is a thoughtful rejoinder to charges that "research has shown" that "Logo has not delivered on its promises." His critique of the research is essentially sound, in my judgment, but what he proposes as an alternative--computer criticism--seems to me to suffer from flaws just as serious. We would all be better served, I think, by encouraging disciplined inquiry of various sorts into the consequences of educational programs where computers are used.

Papert does the educational research community a service in pointing out the limitations of conventional experiments--what he calls the "treatment model"--for studying the effects of programs using Logo. He is right to insist that Logo is not an independent source of educational effects; it is, of course, the entire educational program into which Logo fits that is the source of any effects. He is right to question whether an educational program that sets out to challenge the prevailing system should be evaluated on its attainment of the goals of the existing system. He is right to insist that a program that claims to produce a wide spectrum of effects, and widely different effects on different children, should not be judged on the basis of group means on a few narrow outcomes. And he is right to warn against basing policy decisions on the results of a few limited early studies.

Papert goes too far, however, when he concludes that these shortcomings of conventional experiments are reason enough to rule them out--apparently along with all other forms of systematic inquiry except case studies--as methods for studying the effects of educational programs such as Logo. Case studies and anecdotal evidence have shortcomings just as limiting.

Papert's story of children making clocks in Mr. Franz's class illustrates these shortcomings. This anecdote, like all anecdotes, leaves us unable to answer some fundamental questions: How typical is what is reported? And how reliable, valid, and generalizable are the claims the anecdote makes about effects on children? Answering questions about the representativeness of the anecdote requires systematic sampling of observations. Answering questions of reliability requires repeated observations; questions of validity require some form of cross-validation of the judgments made by the person responsible for the anecdote; and questions of generalizability require systematic attention to other situations with which this one might be compared. None of these is found in computer criticism as Professor Papert describes and illustrates it.

The shortcomings of conventional experiments that Professor Papert points out are well known to the educational research and evaluation community, and a number of research strategies have been devised to ameliorate them. The chapter by Clark and Salomon (1986) is a useful review of contemporary work on these methodological issues. In an earlier article in ER, Salomon and Gardner (1986) replied to many of these same criticisms, suggesting that case studies and other forms of open-ended, holistic research offer benefits complementary to those of tightly controlled experimental research. In the earliest stages of experimentation with a new educational program, case studies are likely to be more valuable; as we gain experience with the program, we should be able to pose more focused questions that deserve the confirmatory power of experimentation.
But instead of seeing the emergence of experiments on Logo as a sign of an advancing dialogue among researchers, Papert views it as a threat. One wonders whether he would have reacted the same way if the results of initial experiments had been uniformly positive.