Stimulus Onset Hub: An Open-Source, Low-Latency, Opto-Isolated Trigger Box for Neuroscientific Research Replicability and Beyond

There is currently a replication crisis in many fields of neuroscience and psychology, with some estimates suggesting that up to 64% of research in psychological science is not reproducible. Three commonly suspected culprits for these failures to replicate are small sample sizes, "hypothesizing after the results are known," and "p-hacking." Here, we introduce inaccurate stimulus onset timing as an additional possibility. Accurate stimulus onset timing is critical to almost all psychophysical research. Auditory, visual, and manual-response stimulus onsets are typically sent over wires to machines that record data such as eye gaze position, electroencephalography, stereo electroencephalography, and electrocorticography. These stimulus onsets are then collated and analyzed by experimental condition. If the delivery of these onsets to external systems varies in temporal accuracy, the quality of the resulting data and scientific analyses degrades. Here, we describe an approximately $200 Arduino-based system and associated open-source codebase that achieved a 5.34 microsecond delay from its inputs to its outputs while electrically opto-isolating the connected external systems. Using an oscilloscope, the device can be configured for the environmental conditions particular to each laboratory (e.g., light sensor type, screen type, speaker type, stimulus type, temperature). This low-cost, open-source project delivered electrically isolated stimulus onset Transistor-Transistor Logic (TTL) triggers with a median precision of 5.34 microseconds and was successfully tested with seven external systems that record eye-tracking and neurological data.
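To make the input-to-output forwarding concrete, the following is a minimal Arduino (C++) sketch of the kind of trigger-relay loop such a device could run. The pin numbers, the polled-sensor wiring, and the single-channel design are illustrative assumptions for this sketch, not the published pin mapping or firmware of the Stimulus Onset Hub.

// Minimal sketch, assuming a digital (comparator-thresholded) light sensor
// on SENSOR_PIN and an opto-isolator driving the TTL trigger line on
// TRIGGER_PIN. Pin assignments are hypothetical.

const uint8_t SENSOR_PIN  = 2;   // e.g., photodiode comparator watching the display
const uint8_t TRIGGER_PIN = 8;   // drives the opto-isolated TTL output

void setup() {
  pinMode(SENSOR_PIN, INPUT);
  pinMode(TRIGGER_PIN, OUTPUT);
  digitalWrite(TRIGGER_PIN, LOW);
}

void loop() {
  // Tight polling loop: copy the sensor state directly to the trigger output.
  // Keeping serial I/O and delays out of this loop is what keeps the
  // input-to-output latency in the microsecond range on a 16 MHz Arduino.
  digitalWrite(TRIGGER_PIN, digitalRead(SENSOR_PIN));
}

In practice, per-laboratory calibration (sensor threshold, debounce or hold time for a given screen or speaker) would be layered on top of this loop and verified with an oscilloscope, as described above.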
