A multiple randomization testing procedure for level, trend, variability, overlap, immediacy, and consistency in single-case phase designs.

We present an approach for drawing multiple, powerful inferences about each data aspect of single-case ABAB phase designs: level, trend, variability, overlap, immediacy, and consistency of data patterns. We show step by step how effect size measures can be calculated for each data aspect and then used as test statistics in multiple randomization tests. To control Type I error inflation across these multiple tests, we discuss three methods for adjusting the obtained p-values based on the false discovery rate: the multiple testing correction proposed by Benjamini and Hochberg (1995), the adaptive correction suggested by Benjamini and Hochberg (2000), and the correction that accounts for dependency between the tests (Benjamini & Yekutieli, 2001). We apply this approach to a published data set and compare the results to the conclusions the original authors drew from visual analysis. The multiple randomization testing procedure can provide more detailed information about which data aspects are affected by the single-case intervention. We provide generic R code to execute the presented analyses.
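The workflow the abstract describes — compute an effect-size test statistic for a data aspect, derive its p-value from a randomization distribution, then adjust the resulting p-values via the false discovery rate — can be sketched as follows. The paper provides R code; this is a hypothetical Python illustration, not the authors' implementation. For brevity it uses unrestricted Monte Carlo label shuffles as the reference distribution, whereas the actual procedure restricts the randomization distribution to the admissible ABAB phase-boundary assignments of the design. The level statistic shown (B-minus-A mean difference) is one example; statistics for trend, variability, overlap, immediacy, and consistency would be plugged in as alternative `stat_fn` arguments.

```python
import random
from statistics import mean

def phase_mean_difference(data, labels):
    """Level effect size: mean of B-phase observations minus mean of A-phase ones."""
    b = [y for y, lab in zip(data, labels) if lab == "B"]
    a = [y for y, lab in zip(data, labels) if lab == "A"]
    return mean(b) - mean(a)

def randomization_p_value(data, labels, stat_fn, n_perm=1999, seed=1):
    """Two-sided Monte Carlo randomization test.

    The observed arrangement is counted as one of the randomizations, so the
    p-value can never be exactly zero. NOTE: free shuffling of condition
    labels is a simplification; the paper's procedure would enumerate only
    the admissible ABAB phase-boundary randomizations.
    """
    rng = random.Random(seed)
    observed = abs(stat_fn(data, labels))
    exceed = 1  # count the observed arrangement itself
    for _ in range(n_perm):
        shuffled = list(labels)
        rng.shuffle(shuffled)
        if abs(stat_fn(data, shuffled)) >= observed:
            exceed += 1
    return exceed / (n_perm + 1)

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg (1995) step-up adjusted p-values controlling the FDR."""
    m = len(pvals)
    order = sorted(range(m), key=pvals.__getitem__)  # indices, smallest p first
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):  # walk from the largest p-value down
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

# Hypothetical ABAB series, three observations per phase, with a clear level shift.
data = [2, 3, 2, 7, 8, 7, 2, 2, 3, 8, 7, 8]
labels = ["A"] * 3 + ["B"] * 3 + ["A"] * 3 + ["B"] * 3
p_level = randomization_p_value(data, labels, phase_mean_difference)
# Collect one p-value per data aspect, then adjust the whole set:
adjusted = benjamini_hochberg([p_level])
```

The step-up loop computes the adjusted values in a single backward pass: each adjusted p-value is `p * m / rank` capped by the adjusted value of the next-larger p, which preserves the monotonicity the step-up procedure requires. The adaptive (2000) and dependency-robust Benjamini-Yekutieli (2001) corrections discussed in the paper modify this same scheme, the latter by inflating `m` with the harmonic-sum factor.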

[1] Wim Van Den Noortgate, et al. Students' misconceptions of statistical inference: A review of the empirical evidence from research on statistics education, 2007.

[2] Randall R. Robey, et al. Evaluating Single-Subject Treatment Research: Lessons Learned from the Aphasia Literature, 2006, Neuropsychology Review.

[3] Justin D. Smith, et al. Single-case experimental designs: A systematic review of published research and current standards, 2012, Psychological Methods.

[4] M. Wolery, et al. The Use of Single-Subject Research to Identify Evidence-Based Practice in Special Education, 2005.

[5] Jeffrey D. Kromrey, et al. Determining the Efficacy of Intervention: The Use of Effect Sizes for Data Analysis in Single-Subject Research, 1996.

[6] Patrick Onghena, et al. One by One: Accumulating Evidence by Using Meta-Analytical Procedures for Single-Case Experiments, 2017, Brain Impairment.

[7] Patrick Onghena, et al. Analysis of single-case data: Randomisation tests for measures of effect size, 2014, Neuropsychological Rehabilitation.

[8] Richard I. Parker, et al. The Relationship Between Visual Analysis and Five Statistical Analyses in a Simple AB Single-Case Research Design, 2006, Behavior Modification.

[9] T. Matyas, et al. Visual analysis of single-case time series: Effects of variability, serial dependence, and magnitude of intervention effects, 1990, Journal of Applied Behavior Analysis.

[10] Eugene S. Edgington, et al. Overcoming Obstacles to Single-Subject Experimentation, 1980.

[11] John Ferron, et al. The Functioning of Single-Case Randomization Tests With and Without Random Assignment, 2003.

[12] Amy M. Briesch, et al. An Application of the What Works Clearinghouse Standards for Evaluating Single-Subject Research, 2013.

[13] Koen J. F. Verhoeven, et al. Implementing false discovery rate control: Increasing your power, 2005.

[14] John Ferron, et al. Analyzing single-case data with visually guided randomization tests, 1998.

[15] Kristin L. Sainani. The Problem of Multiple Testing, 2009, PM&R: The Journal of Injury, Function, and Rehabilitation.

[16] E. S. Edgington. Randomized single-subject experimental designs, 1996, Behaviour Research and Therapy.

[17] Y. Benjamini, et al. The Control of the False Discovery Rate in Multiple Testing under Dependency, 2001.

[18] E. Edgington. Statistical inference from N = 1 experiments, 1967, The Journal of Psychology.

[19] W. Shadish, et al. A standardized mean difference effect size for single case designs, 2012, Research Synthesis Methods.

[20] Jacob Cohen, et al. A power primer, 1992, Psychological Bulletin.

[21] Eugene S. Edgington, et al. Randomization Tests, 2011, International Encyclopedia of Statistical Science.

[22] David L. Gast, et al. Visual Analysis of Graphic Data, 2009.

[23] Robert Gaylord-Ross, et al. Visual Inspection and Statistical Analysis in Single-Case Designs, 1990.

[24] B. Lloyd, et al. Randomization Tests for Single Case Designs with Rapidly Alternating Conditions: An Analysis of p-Values from Published Experiments, 2018, Perspectives on Behavior Science.

[25] Patrick Onghena, et al. Confidence intervals for single-case effect size measures based on randomization test inversion, 2017, Behavior Research Methods.

[26] Patrick Onghena, et al. Consistency in Single-Case ABAB Phase Designs: A Systematic Review, 2019, Behavior Modification.

[27] Y. Benjamini, et al. Controlling the false discovery rate: A practical and powerful approach to multiple testing, 1995.

[28] Patrick Onghena, et al. Customization of pain treatments: Single-case design and analysis, 2005, The Clinical Journal of Pain.

[29] E. Edgington. Randomization Tests for One-Subject Operant Experiments, 1975.

[30] John Ferron, et al. Tests for the Visual Analysis of Response-Guided Multiple-Baseline Data, 2006.

[31] Leif D. Nelson, et al. False-Positive Psychology, 2011, Psychological Science.

[32] H. Khamis, et al. Simple solution to a common statistical problem: Interpreting multiple tests, 2004, Clinical Therapeutics.

[33] Ann Casey, et al. A Methodology for the Quantitative Synthesis of Intra-Subject Design Research, 1985.

[34] John M. Ferron, et al. Nonparametric statistical tests for single-case systematic and randomized ABAB…AB and alternating treatment intervention designs: New developments, new directions, 2012, Journal of School Psychology.

[35] Patrick Onghena, et al. Randomized single-case AB phase designs: Prospects and pitfalls, 2018, Behavior Research Methods.

[36] Skye McDonald, et al. What's New in the Clinical Management of Disorders of Social Cognition?, 2017, Brain Impairment.

[37] Justin D. Lane, et al. Visual analysis in single case experimental design studies: Brief review and guidelines, 2014, Neuropsychological Rehabilitation.

[38] L. Garamszegi, et al. Comparing effect sizes across variables: Generalization without the need for Bonferroni correction, 2006.

[39] Robert H. Horner, et al. The Role of Between-Case Effect Size in Conducting, Interpreting, and Summarizing Single-Case Research, NCER 2015-002, 2015.

[40] Michael Perdices, et al. Single-subject designs as a tool for evidence-based clinical practice: Are they unrecognised and undervalued?, 2009, Neuropsychological Rehabilitation.

[41] Richard A. Armstrong, et al. When to use the Bonferroni correction, 2014, Ophthalmic & Physiological Optics.

[42] Jennifer B. Ganz, et al. Methodological standards in single-case experimental design: Raising the bar, 2018, Research in Developmental Disabilities.

[43] Robert H. Horner, et al. Single-Case Designs Technical Documentation, 2010.

[44] M. Ylvisaker, et al. Context-Sensitive Behavioral Supports for Young Children with TBI: Short-Term Effects and Long-Term Outcome, 2003, The Journal of Head Trauma Rehabilitation.

[45] Shinichi Nakagawa. A farewell to Bonferroni: The problems of low statistical power and publication bias, 2004, Behavioral Ecology.

[46] Patrick Onghena, et al. Randomization and Data-Analysis Items in Quality Standards for Single-Case Experimental Studies, 2015.

[47] Kenneth J. Ottenbacher. When Is a Picture Worth a Thousand p Values? A Comparison of Visual and Quantitative Methods to Analyze Single Subject Data, 1990.

[48] Robert H. Horner, et al. The Single-Case Reporting Guideline In BEhavioural Interventions (SCRIBE) 2016: Explanation and elaboration, 2016.

[49] Robbie C. M. van Aert, et al. Degrees of Freedom in Planning, Running, Analyzing, and Reporting Psychological Studies: A Checklist to Avoid p-Hacking, 2016, Frontiers in Psychology.

[50] Thomas E. Scruggs, et al. The Quantitative Synthesis of Single-Subject Research, 1987.

[51] Rumen Manolov, et al. Linear Trend in Single-Case Visual and Quantitative Analyses, 2018, Behavior Modification.

[52] Leland Wilkinson, et al. Statistical Methods in Psychology Journals: Guidelines and Explanations, 2005.

[53] Patrick Onghena, et al. Randomization Tests for Extensions and Variations of ABAB Single-Case Experimental Designs: A Rejoinder, 1992.

[54] S. Natasha Beretvas, et al. A review of meta-analyses of single-subject experimental designs: Methodological issues and practice, 2008.

[55] Benjamin W. Smith, et al. Effect size calculations and single subject designs, 2005.

[56] Geoffrey Mitchell, et al. The Single-Case Reporting Guideline In BEhavioural Interventions (SCRIBE) 2016 Statement, 2016, Journal of Clinical Epidemiology.

[57] Y. Benjamini, et al. On the Adaptive Control of the False Discovery Rate in Multiple Testing With Independent Statistics, 2000.

[58] Patrick Onghena, et al. Assessing Consistency in Single-Case A-B-A-B Phase Designs, 2020, Behavior Modification.

[59] Patrick Onghena, et al. The Single-Case Data Analysis Package: Analysing Single-Case Experiments with R Software, 2022, Journal of Modern Applied Statistical Methods.

[60] Rumen Manolov, et al. Analyzing Data From Single-Case Alternating Treatments Designs, 2018, Psychological Methods.

[61] Robert H. Horner, et al. Single-Case Intervention Research Design Standards, 2013.

[62] Joel R. Levin, et al. Meta- and statistical analysis of single-case intervention research data: Quantitative gifts and a wish list, 2014, Journal of School Psychology.

[63] B. L. Welch. On the z-Test in Randomized Blocks and Latin Squares, 1937.

[64] Benjamin G. Solomon, et al. Violations of Assumptions in School-Based Single-Case Data, 2014, Behavior Modification.

[65] H. J. Arnold. Introduction to the Practice of Statistics, 1990.

[66] James E. Pustejovsky, et al. A standardized mean difference effect size for multiple baseline designs across individuals, 2013, Research Synthesis Methods.

[67] James E. Pustejovsky, et al. Design-Comparable Effect Sizes in Multiple Baseline Designs, 2014.

[68] Kimberly J. Vannest, et al. An improved effect size for single-case research: Nonoverlap of all pairs, 2009, Behavior Therapy.

[69] James E. Pustejovsky, et al. Analysis and meta-analysis of single-case designs with a standardized mean difference statistic: A primer and applications, 2014, Journal of School Psychology.