The Generic Holdout: Preventing False-Discoveries in Adaptive Data Science

Adaptive data analysis has posed a challenge to science due to its ability to generate false discoveries even on moderately large data sets. In general, with non-adaptive data analysis (where queries to the data are generated without being influenced by answers to previous queries), a data set containing $n$ samples may support exponentially many queries in $n$. This number drops to linearly many under naive adaptive data analysis, and even sophisticated remedies such as the Reusable Holdout (Dwork et al., 2015) allow only quadratically many queries in $n$. In this work, we propose a new framework for adaptive science that exponentially improves on this number of queries under a restricted yet scientifically relevant setting, where the goal of the scientist is to find a single (or a few) true hypotheses about the universe based on the samples. Such a setting may describe the search for predictive factors of some disease based on medical data, where the analyst may wish to try a number of predictive models until a satisfactory one is found. Our solution, the Generic Holdout, involves two simple ingredients: (1) partitioning the data into an exploration set and a holdout set, and (2) a limited-exposure strategy for the holdout set. The analyst is free to use the exploration set arbitrarily, but when testing hypotheses against the holdout set, the analyst learns only the answer to the question "Is the given hypothesis true (empirically) on the holdout set?" -- and no more information, such as "how well" the hypothesis fits the holdout set. The resulting scheme is immediate to analyze, but despite its simplicity we do not believe the method is obvious, as evidenced by how often it is violated in practice. Our proposal can be seen as an alternative to pre-registration, and allows researchers to get the benefits of adaptive data analysis without the problems of adaptivity.
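
To make the limited-exposure idea concrete, here is a minimal Python sketch of the protocol as described in the abstract. It is an illustration, not the paper's implementation: the class name `GenericHoldout`, the `hypothesis` callable, and the `threshold` parameter are all assumptions made for this example.

```python
import numpy as np

class GenericHoldout:
    """Minimal sketch: the holdout is exposed only through one-bit queries."""

    def __init__(self, data, holdout_fraction=0.5, seed=0):
        data = np.asarray(data)
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(data))
        cut = int(len(data) * (1 - holdout_fraction))
        self.exploration = data[idx[:cut]]  # the analyst may use this freely
        self._holdout = data[idx[cut:]]     # never inspected directly

    def test(self, hypothesis, threshold):
        """Answer only: does the hypothesis hold (empirically) on the holdout?

        `hypothesis` is assumed to map a sample to a score in [0, 1];
        the empirical mean score is compared to `threshold`, and only
        the resulting bit is revealed -- never the score itself.
        """
        score = np.mean([hypothesis(x) for x in self._holdout])
        return bool(score >= threshold)
```

Under this sketch, an analyst would fit candidate models on `exploration` and call `test` once per candidate, stopping at the first hypothesis that passes; because each query leaks only one bit about the holdout, far more adaptive queries can be supported than if the full holdout statistics were returned.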

[1] Toniann Pitassi, et al. Preserving Statistical Validity in Adaptive Data Analysis, STOC, 2014.

[2] D. Barch, et al. Introduction to the special issue on reliability and replication in cognitive and affective neuroscience research, Cognitive, Affective, & Behavioral Neuroscience, 2013.

[3] Brian A. Nosek, et al. Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015, Nature Human Behaviour, 2018.

[4] D. Freedman. A Note on Screening Regression Equations, 1983.

[5] R. Tibshirani, et al. Selective Sequential Model Selection, arXiv:1512.02565, 2015.

[6] Raef Bassily, et al. Algorithmic stability for adaptive data analysis, STOC, 2015.

[7] Peggy Hall, et al. The NHGRI GWAS Catalog, a curated resource of SNP-trait associations, Nucleic Acids Research, 2013.

[8] Y. Benjamini, et al. Controlling the false discovery rate: a practical and powerful approach to multiple testing, 1995.

[9] Leif D. Nelson, et al. False-Positive Psychology, Psychological Science, 2011.

[10] Gideon Nave, et al. Evaluating replicability of laboratory experiments in economics, Science, 2016.

[11] A. Gelman, et al. The statistical crisis in science, 2014.

[12] Toniann Pitassi, et al. The reusable holdout: Preserving validity in adaptive data analysis, Science, 2015.

[13] Cynthia Dwork, et al. Calibrating Noise to Sensitivity in Private Data Analysis, TCC, 2006.

[14] Dennis L. Sun, et al. Optimal Inference After Model Selection, arXiv:1410.2597, 2014.

[15] E. Candès, et al. Controlling the false discovery rate via knockoffs, arXiv:1404.5609, 2014.

[16] Jonathan Ullman, et al. Preventing False Discovery in Interactive Data Analysis Is Hard, IEEE 55th Annual Symposium on Foundations of Computer Science (FOCS), 2014.

[17] Ang Li, et al. Accumulation Tests for FDR Control in Ordered Hypothesis Testing, arXiv:1505.07352, 2015.

[18] Toniann Pitassi, et al. Generalization in Adaptive Data Analysis and Holdout Reuse, NIPS, 2015.

[19] Ang Li, et al. Multiple testing with the structure-adaptive Benjamini–Hochberg algorithm, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2016.

[20] Simon C. Potter, et al. Genome-wide association study of 14,000 cases of seven common diseases and 3,000 shared controls, Nature, 2007.

[21] William Fithian, et al. AdaPT: an interactive procedure for multiple testing with side information, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2016.