Comparing N = 1 Effect Size Indices in Presence of Autocorrelation

Generalization from single-case designs can be achieved by replicating individual studies across different experimental units and settings. When replications are available, their findings can be summarized using effect size measures and integrated through meta-analysis. Several procedures are available for quantifying the magnitude of a treatment effect in N = 1 designs, and some of them are studied in this article. Monte Carlo simulations were used to generate different data patterns (trend, level change, and slope change). The simulated experimental conditions were defined by the degree of serial dependence and the phase lengths. Of all the effect size indices studied, the percentage of nonoverlapping data and the standardized mean difference proved to be less affected by autocorrelation and to perform better for shorter data series. The regression-based procedures proposed specifically for single-case designs did not differentiate between data patterns as well as the simpler indices did.
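
To make the simulation setup and the two best-performing indices concrete, the sketch below (Python with NumPy; not the authors' code, and all parameter values, function names, and default settings are illustrative assumptions) generates AB-phase series with AR(1) errors and computes the percentage of nonoverlapping data (PND) and a standardized mean difference (here standardized by the baseline standard deviation, one common variant). Varying the autocorrelation parameter and the phase lengths across simulation cells would reproduce the kind of experimental conditions described above.

import numpy as np

def simulate_ab_series(n_a=5, n_b=5, level_change=1.0, slope_change=0.0,
                       phi=0.3, rng=None):
    # Generate one AB series: a baseline phase of length n_a, a treatment
    # phase of length n_b, a level and/or slope change at the phase shift,
    # and first-order autoregressive (AR(1)) errors for serial dependence.
    if rng is None:
        rng = np.random.default_rng()
    n = n_a + n_b
    e = np.zeros(n)
    e[0] = rng.normal()
    for t in range(1, n):
        e[t] = phi * e[t - 1] + rng.normal()   # lag-1 serial dependence
    phase = np.r_[np.zeros(n_a), np.ones(n_b)]              # 0 = A, 1 = B
    time_in_b = np.r_[np.zeros(n_a), np.arange(1, n_b + 1)]
    y = level_change * phase + slope_change * time_in_b + e
    return y[:n_a], y[n_a:]

def pnd(baseline, treatment):
    # Percentage of nonoverlapping data: share of treatment points exceeding
    # the highest baseline point (assuming higher scores mean improvement).
    return 100.0 * np.mean(treatment > baseline.max())

def smd(baseline, treatment):
    # Standardized mean difference: phase-mean difference divided by the
    # baseline standard deviation (one of several proposed denominators).
    return (treatment.mean() - baseline.mean()) / baseline.std(ddof=1)

# One Monte Carlo cell: fixed autocorrelation (phi) and phase lengths,
# averaged over many replicates.
rng = np.random.default_rng(2024)
series = [simulate_ab_series(n_a=5, n_b=5, level_change=1.0, phi=0.3, rng=rng)
          for _ in range(1000)]
print("mean PND:", np.mean([pnd(a, b) for a, b in series]))
print("mean SMD:", np.mean([smd(a, b) for a, b in series]))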
