Stimulus Onset Hub: an Open-Source, Low-Latency, and Opto-Isolated Trigger Box for Neuroscientific Research Replicability and Beyond

There is currently a replication crisis in many fields of neuroscience and psychology, with some estimates suggesting that up to 64% of research in psychological science is not reproducible. Three commonly suspected culprits for these replication failures are small sample sizes, “hypothesizing after the results are known,” and “p-hacking.” Here, we introduce inaccurate stimulus onset timing as an additional possible cause. Accurate stimulus onset timing is critical to almost all psychophysical research. Onsets of auditory, visual, or manual-response stimuli are typically sent over wires to machines that record data such as eye gaze position, electroencephalography, stereo electroencephalography, and electrocorticography. These stimulus onsets are then collated and analyzed by experimental condition. If the temporal accuracy of delivering these onsets to external systems varies, the quality of the resulting data and of the scientific analyses built on them degrades. Here, we describe an approximately $200 Arduino-based system, with an associated open-source codebase, that achieves a median input-to-output delay of 5.34 microseconds while electrically opto-isolating the connected external systems. Using an oscilloscope, the device can be calibrated for the environmental conditions particular to each laboratory (e.g., light sensor type, screen type, speaker type, stimulus type, and temperature). This low-cost, open-source project delivered electrically isolated Transistor-Transistor Logic (TTL) stimulus onset triggers with a median precision of 5.34 microseconds and was successfully tested with seven different external systems that record eye-movement and neurological data.
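Microsecond-scale input-to-output latency of the kind reported above is generally only reachable on an Arduino by copying I/O registers directly rather than calling `digitalRead()`/`digitalWrite()`, each of which costs several microseconds on an ATmega-class board. The following is a minimal, hypothetical sketch of such a forwarding loop; the register names are stand-ins (plain `volatile` variables here, so the logic can be exercised off-target), not the actual registers or pin mapping used by the published codebase.

```cpp
#include <cstdint>

// Hypothetical sketch of the tight trigger-forwarding path such a device
// might use. On a real Arduino, these would be memory-mapped I/O registers
// (e.g., PIND and PORTB on an ATmega328P); here they are plain volatile
// variables so the logic can be compiled and tested on any machine.
volatile uint8_t input_reg = 0;   // stands in for an input pin register
volatile uint8_t output_reg = 0;  // stands in for an output port register

// Mirror all input pins onto the output pins in a single register copy.
// Avoiding per-pin digitalRead()/digitalWrite() calls is what makes
// sub-10-microsecond input-to-output latency plausible on this hardware.
inline void forward_triggers() {
    output_reg = input_reg;
}

// A real sketch would call forward_triggers() from a tight loop() or a
// pin-change interrupt; opto-isolation sits in hardware, outside this code.
```

The design choice worth noting is that the loop does no buffering or decoding: the device is a transparent, isolated pass-through, so any timing interpretation is left to the recording systems downstream.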
