A variety of response effects that had been found previously in interview surveys were tested in a mail survey of a heterogeneous local population. These included experiments on question order, response order, no-opinion filters, middle-response alternatives, and acquiescence. The results generally supported earlier findings based on student samples, which had shown that order effects were eliminated in self-administered surveys but that question-form effects occurred as in interview surveys. One question-order effect, however, was found in the mail survey, and a type of response-order effect (a primacy effect) that had not been previously tested also occurred. Interactions between education and response effects that had sometimes been found in interview surveys were not present in the mail survey.

Survey researchers have long recognized that responses to survey questions can be affected by the mode of the survey (e.g., mail, telephone, or face-to-face). Numerous experiments have also shown that responses to survey questions can be significantly affected by the form and order in which they are presented to respondents (e.g., Schuman and Presser, 1981). Although there are differences between interview and self-administered surveys that may have a bearing on form and order response effects, most of the research demonstrating such effects comes from one survey mode or the other, with very little attempt made to compare these effects across survey methods. This study uses a mail survey to conduct tests for a number of response effects previously found in interview surveys.

There are two main differences between mail and interview surveys that may be sources of differences in response effects. The first derives

STEPHEN A. AYIDIYA is a Visiting Assistant Professor of Sociology and Urban Studies at the College of Wooster. MCKEE J. MCCLENDON is Professor of Sociology at the University of Akron.
This research was supported in part by a grant from the Center for Educational Research and Development of the University of Akron. An earlier version of this paper was presented at the 42nd annual conference of the American Association for Public Opinion Research in Hershey, Pennsylvania, 14-17 May 1987.

Public Opinion Quarterly Volume 54:229-247 © 1990 by the American Association for Public Opinion Research. Published by The University of Chicago Press / 0033-362X/90/0054-02/$2.50

from differences in the mode of administration of the survey questions. In mail surveys there is no interviewer present, either in person or by voice contact. Therefore, response effects that may be due in part to the presence of an interviewer, such as acquiescence (Carr, 1971; Lenski and Leggett, 1960) and social desirability biases (De Maio, 1984:274), may be reduced. Furthermore, respondents to mail surveys are not as constrained by the serial order of the questions and response categories as are respondents in interviews. Respondents may read over the entire questionnaire or even reread questions and response categories in any order before they record their answers (Bishop et al., 1988). It is also easier for mail survey respondents to go back and change answers to earlier questions on the basis of what they read in later questions. As a result of reduced serial-order constraints, question- and response-order effects are likely to be minimized in mail surveys (Smith, 1982). A final aspect of the mail survey is that respondents may read and answer questions at their own pace and at more than one sitting.
Some researchers believe that response effects may be greater in interview surveys (especially by telephone) partly because respondents have little time to think about questions and to weigh their answers carefully, and are therefore more likely to give "top-of-the-head" responses (Hippler and Schwarz, 1987).

The second source of differences between mail and interview surveys arises from potential differences between modes in the characteristics of respondents. Persons with low education are believed to be more underrepresented in mail surveys than in interview surveys because they have greater difficulty reading questions and following written instructions (Dillman, 1978:53; Bailey, 1987:149-159). It is also widely believed that mail respondents are more interested in the topics covered by the survey and have stronger opinions on these topics because they have the opportunity to look over the questionnaire before deciding to respond (Dillman, 1978; Bailey, 1987; Pearl and Fairley, 1985). Although these beliefs do not appear to be well documented empirically, they provide logical grounds for expecting greater self-selection biases in mail surveys. Since low education and low attitude strength have been found to be related to some, but not all, types of response effects (Schuman and Presser, 1981; Krosnick and Schuman, 1988), mail respondents may be less prone to response effects than interview respondents.

Despite these important differences between the two methods which may influence response effects, there has been little effort to compare form and order effects across different survey modes. In an important exception to this generalization, however, Bishop et al.
(1988) recently conducted tightly designed parallel experiments in the United States and West Germany to examine differences in a variety of response effects between telephone and self-administered surveys. They found that order effects which occurred in their telephone surveys were absent in the self-administered mode; form effects, however, were much the same in both survey modes. The fact that the findings of Bishop et al. (1988) were replicated cross-culturally increases their scope and reliability. The fact that the response rates were high in both the telephone and self-administered modes increases confidence that the differences in response effects between the two survey modes were due to the mode of administration rather than to differences in self-selection. Since Bishop et al. (1988) used student samples, however, it is important to see whether their conclusions generalize to more heterogeneous populations.

This study replicates, in a mail survey of a local population, a number of question-form and question-order experiments that have produced reliable response effects in previous local and/or national interview surveys. The experiments include two question-order and three response-order experiments, four no-opinion filter experiments, one experiment on including a middle-response alternative, and three acquiescence experiments. Some of these experiments are identical to those conducted by Bishop and colleagues, some use different questions from those that they used to address the same types of response effects, and others are tests of a response effect (acquiescence) that they did not investigate. An advantage of this study is that it uses standard mail survey procedures and a general population sample.
Its main disadvantage is that it does not include a simultaneous interview survey with which to compare the mail survey results. Because of the absence of a companion interview survey, any differences in response effects between the mail survey and previous interview surveys might be due to factors other than the mode of the survey, such as differences in populations, period effects, and questionnaire context effects. Therefore, we will not conduct tests for differences between the mail and interview results; instead we will focus on tests for whether the response effects occur in the mail survey and on comparisons of our conclusions with those of Bishop et al. (1988) where applicable.
[1] P. Converse, et al. The American Voter, 1960.
[2] John C. Leggett, et al. "Caste, Class, and Deference in the Research Interview," American Journal of Sociology, 1960.
[3] Leslie G. Carr, et al. "The Srole Items and Acquiescence," 1971.
[4] M. Jackman. "Education and Prejudice or Education and Response-Set?" American Sociological Review, 1973.
[5] Clyde L. Rich. "Is Random Digit Dialing Really Necessary?" 1977.
[6] K. Bailey. Methods of Social Research, 1978.
[7] D. Dillman. Mail and Telephone Surveys: The Total Design Method, 1979.
[8] John P. Robinson, et al. Questions and Answers in Attitude Surveys, 1982.
[9] Tom W. Smith. "Conditional Order Effects," 1982.
[10] H. Schuman, et al. "The Norm of Even-Handedness in Surveys as in Life," 1983.
[11] A. Tuchfarber, et al. "Effects of Filter Questions in Public Opinion Surveys," 1983.
[12] Alfred J. Tuchfarber, et al. "The Importance of Replicating a Failure to Replicate: Order Effects on Abortion Items," 1985.
[13] David Fairley, et al. "Testing for the Potential for Nonresponse Bias in Sample Surveys," 1985.
[14] Charles F. Turner, et al. Surveying Subjective Phenomena, Volume 2, 1985.
[15] J. Krosnick, et al. "An Evaluation of a Cognitive Theory of Response-Order Effects in Survey Measurement," 1987.
[16] R. Abelson, et al. "Social Information Processing and Survey Methodology," 1987.
[17] F. Strack, et al. "A Comparison of Response Effects in Self-Administered and Telephone Surveys," 1987.
[18] Norbert Schwarz, et al. "Response Effects in Surveys," 1987.
[19] J. Krosnick, et al. "Attitude Intensity, Importance, and Certainty and Susceptibility to Response Effects," 1988.
[20] R. Tourangeau, et al. "Cognitive Processes Underlying Context Effects in Attitude Measurement," 1988.
[21] R. Groves, et al. Telephone Survey Methodology, 1990.