Why do people say nasty things about self-reports?

Had I been asked to review Spector's (1994) paper, I would have offered a paradoxical analysis: 'Accept as is, in spite of the author's sensitive appreciation of the strengths and liabilities of self-report measures'. The remainder of this comment will attempt to flesh out the logic and the paradox behind my hypothetical assessment. Paradox rarely rears its ugly head in science, yet in spite of careful analyses like Spector's, and growing evidence of the validity of self-reports, it seems as if self-report-bashing might be an article of faith of some Scientific Apostle's Creed: 'I believe in good science; the empirical determination of theory choice, the control of extraneous variables, and the fallibility of self-report measures . . .'

Spector (1994) acknowledges that cross-sectional self-report studies are sometimes inadequate. At times the problems are due to the studies' passive observational nature (and might be equally problematic had non-self-report measures been employed), and sometimes they are due to the self-report measurement strategy itself. The former cluster of problems should not be heard as a critique of self-reports per se, so I will instead direct my attention to the latter group. There are known contaminations of self-report measures (e.g. social desirability, selective memory) that need to be considered. However, Donald Campbell's famous aphorism, that good scientists are ontological realists but epistemological fallibilists, suggests that the fallibility of self-reports is not in itself a cogent critique. The immediate question arises: What measurement strategy do you propose to use instead of a self-report (e.g. behavioral, physiological, significant-other, expert judge, archival), and what are the grounds for believing that your alternative measurement strategy is less fallible than a self-report?

[1] Robert Plomin, et al. Nature and Nurture: An Introduction to Human Behavioral Genetics. 1996.

[2] Paul E. Spector. Using self-report questionnaires in OB research: A comment on the use of a controversial method. 1994.

[3] G. Howard. Steps Toward a Science of Free Will. 1993.

[4] George S. Howard, et al. When psychology looks like a "soft" science, it's for good reason. 1993.

[5] G. Howard. No middle voice. 1992.

[6] G. Castro, et al. Earth in the Balance: Ecology and the Human Spirit. 1992.

[7] G. Howard. Culture tales: A narrative approach to thinking, cross-cultural psychology, and psychotherapy. The American Psychologist, 1991.

[8] Joseph F. Rychlak, et al. The Psychology of Rigorous Humanism, 2nd ed. 1988.

[9] D. Cole, et al. Construct validity and the relation between depression and social skill. 1987.

[10] G. Howard, et al. Reliability, sensitivity to measuring change, and construct validity of a measure of counselor adaptability. 1986.

[11] S. Maxwell, et al. Construct validity of measures of college teaching effectiveness. 1985.

[12] G. Howard. On Studying Humans. 1984.

[13] S. Maxwell, et al. Effects of mono- versus multiple-operationalization in construct validation efforts. 1981.

[14] Richard L. Wiener, et al. Is a Behavioral Measure the Best Estimate of Behavioral Parameters? Perhaps Not. 1980.

[15] J. Gibbs. The meaning of ecologically oriented inquiry in contemporary psychology. 1979.

[16] A. Constantinople. Masculinity-femininity: An exception to a famous dictum? Psychological Bulletin, 1973.

[17] Ma. de la Natividad Jiménez Salas, et al. The Conduct of Inquiry. 1967.

[18] R. B. MacLeod. The teaching of psychology and psychology we teach. The American Psychologist, 1965.

[19] N. Sanford. Will psychologists study human problems? The American Psychologist, 1965.

[20] Abraham Kaplan, et al. The Conduct of Inquiry: Methodology for Behavioural Science. 1965.

[21] D. Campbell, et al. Experimental and Quasi-Experimental Designs for Research. 2012.

[22] Sigmund Koch (Ed.), et al. Psychology: A Study of a Science. 1962.

[23] A. Whitehead. Science and the Modern World. 1926.