Survey Incentives: Cash vs. In-Kind; Face-to-Face vs. Mail; Response Rate vs. Nonresponse Error

Experiments on incentives have a long history in survey research, particularly in mail surveys but increasingly in other modes as well. Most of this work has focused on the effects of variations in incentive type, timing, and amount on response rates; relatively little research has examined the effect of different incentives on nonresponse error. Furthermore, most studies of incentives have been confined to a single mode. No studies, to our knowledge, have examined the effect of incentives in a study using more than one mode. This paper adds to the methodological literature on incentives in several ways. Although we compare a cash incentive with an in-kind incentive, which has been done several times, we examine these effects in both a face-to-face and a mail survey. The cash incentive yielded higher response rates than the in-kind incentive in the mail survey. Furthermore, the in-kind incentive was related to one of the topics of a metropolitan area survey, permitting us to examine possible effects of the incentive on the composition of the resulting sample as an indicator of potential nonresponse error. Specifically, the in-kind incentive was a set of passes to regional parks, and the survey included a variety of questions on behavior and attitudes related to park use. We found that, although there were demographic differences between incentive groups, the response distributions on key related variables did not vary by incentive type.

The authors thank the editor and three anonymous reviewers for their helpful comments. This article was first submitted to IJPOR March 23, 2004. The final version was received December 24, 2004.

The effect of incentives on response rates has been the subject of many studies, and a large body of literature exists on the role of incentives in increasing survey response rates. These studies cover such topics as the effect of offering an incentive vs. no incentive, the size of the incentive, whether the incentive is monetary or non-monetary, and whether the incentive is prepaid or promised (conditional upon completion). Generally, the use of incentives has become fairly common, and there is agreement that incentives, both monetary and non-monetary, increase overall response rates (e.g., Goyder, 1987; Sudman & Bradburn, 1974; Willimack, Schuman, Pennell, & Lepkowski, 1995; Nederhof, 1983). The difference in response rates between an incentive and a no-incentive condition is even greater when the burden of the interview is increased (Singer, Van Hoewyk, Gebler, Raghunathan, & McGonagle, 1999). The dilemma for survey administrators is deciding what kind of incentive to offer, at what value, and at what point in the survey process. When combined with good design and consistent follow-up, even token amounts, such as a one-dollar bill, are enough to increase response rates (James & Bolstein, 1990; Mizes, Fleece, & Roos, 1984). In general, non-monetary incentives (gifts) are less effective than monetary incentives (Church, 1993; Singer et al., 1999), and prepaid incentives are more effective than promised incentives (Singer, 2002). Despite this large literature on response rates, there are relatively few studies on the effect of incentives in general, or the type of incentive in particular, on response distributions or on nonresponse error.
Using incentives may affect the sample composition by drawing in respondents whose characteristics differ from those of people who would otherwise participate, thereby resulting in different responses. In one of the few such studies, Singer et al. (1999) reported mixed results for the effect of an incentive on sample composition. On the other hand, some have proposed that incentives may actually correct sampling biases by drawing in respondents who are underrepresented in a probability sample, such as those with low incomes and members of minority groups (Singer et al., 1999; Martin, Abreu, & Winters, 2001). It is also conceivable that an incentive could directly influence response distributions by affecting the reporting of opinions. Finally, the effects of incentives on response rates have been found in mail (see Church, 1993, for a review), telephone (Singer et al., 1999), and face-to-face surveys (Singer et al., 1999; Willimack et al., 1995). However, we are aware of no studies that examine the effects of incentives across modes.

Our study contributes to the incentive literature in two ways: first, by examining the effect of a cash or in-kind incentive in both a face-to-face and a mail study and, second, by examining the effect of such incentives on response distributions. Specifically, we contrast a $5 incentive with an in-kind incentive (a set of passes to regional parks) in a face-to-face and a mail survey on quality of life in the Detroit metropolitan area. Unlike previous experiments testing in-kind incentives, the incentive we use is related to some of the attitudes and behaviors being measured in the survey. The survey asked respondents about community life issues and included a variety of questions related to the use of parks and other recreation facilities, allowing us to examine the effect of the type of incentive on response distributions in both modes.
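To make the planned comparisons concrete, the sketch below illustrates one common way such analyses are carried out: a chi-square test comparing response rates between the cash and in-kind conditions, and a second test comparing the distribution of a park-use item across conditions. This is a minimal illustration only; the counts, category labels, and variable names are hypothetical placeholders, not data or procedures from this study.

# Illustrative sketch (Python, scipy): comparing incentive conditions.
# All counts below are invented placeholders for demonstration purposes.
from scipy.stats import chi2_contingency

# Response rates by incentive type (one mode).
# Rows: cash, in-kind; columns: responded, did not respond.
response_table = [
    [312, 188],   # cash incentive (hypothetical counts)
    [268, 232],   # in-kind incentive (hypothetical counts)
]
chi2, p, dof, expected = chi2_contingency(response_table)
print(f"Response rates, cash vs. in-kind: chi2={chi2:.2f}, p={p:.3f}")

# Response distribution on a park-use item by incentive type.
# Rows: cash, in-kind; columns: never / occasionally / often (hypothetical categories).
item_table = [
    [90, 140, 82],
    [70, 122, 76],
]
chi2, p, dof, expected = chi2_contingency(item_table)
print(f"Park-use item distribution by incentive: chi2={chi2:.2f}, p={p:.3f}")

A nonsignificant result in the second test would be consistent with the pattern described above, in which response distributions on key related variables did not vary by incentive type; demographic composition could be compared in the same way.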

References

[1] Groves, R. M., et al. (2000). Consequences of reducing nonresponse in a national telephone survey. Public Opinion Quarterly.

[2] Manchester, A. (1999). Silent minority. Nursing Standard.

[3] Marans, R. W., et al. (2003). Understanding environmental quality through quality of life studies: The 2001 DAS and its use of subjective and objective indicators.

[4] Martin, E., et al. (2001). Money and motive: Effects of incentives on panel attrition in the Survey of Income and Program Participation.

[5] Biner, P. M., et al. (1994). The interactive effects of monetary incentive justification and questionnaire length on mail survey response rates.

[6] Blair, A. (1935). Social research. Nature.

[7] Willimack, D. K., et al. (1995). Effects of a prepaid nonmonetary incentive on response rates and response quality in a face-to-face survey.

[8] Singer, E., et al. (2000). Experiments with incentives in telephone surveys. Public Opinion Quarterly.

[9] James, J. M., et al. (1990). The effect of monetary incentives and follow-up mailings on the response rate and response quality in mail surveys.

[10] Schwarz, N., et al. (1987). Response effects in surveys.

[11] Ziegel, E. R., et al. (1990). Survey errors and survey costs.

[12] Singer, E., et al. (2000). Leverage-saliency theory of survey participation: Description and an illustration. Public Opinion Quarterly.

[13] Dillman, D. (1979). Mail and telephone surveys: The total design method.

[14] Mizes, J. S., et al. (1984). Incentives for increasing return rates: Magnitude levels, response bias, and format.

[15] Schuman, H., et al. (1992). Pens and polls in Nicaragua: An analysis of the 1990 preelection surveys.

[16] Singer, E., et al. (2000). The effects of response rate changes on the Index of Consumer Sentiment. Public Opinion Quarterly.

[17] Church, A. H. (1993). Estimating the effect of incentives on mail survey response rates: A meta-analysis.

[18] Beatty, P., et al. (1994). Response rates and response content in mail versus face-to-face surveys.

[19] Gouldner, A. (1960). The norm of reciprocity: A preliminary statement.

[20] Cooper, H., et al. (1983). A quantitative review of research design effects on response rates to questionnaires.

[21] Nederhof, A. J. (1983). The effects of material incentives in mail surveys: Two studies.

[22] Ziegel, E., et al. (1998). Nonresponse in household interview surveys.