Towards Replication in Computational Cognitive Modeling: A Machine Learning Perspective

The suggestions proposed by Lee et al. for improving cognitive modeling practices closely parallel current best practices for improving reproducibility in the field of machine learning. In this commentary on 'Robust modeling in cognitive science', we highlight the practices that overlap and discuss how similar proposals in machine learning have given rise to ongoing challenges, including the cultural shift towards open science, the scalability and interpretability of the required practices, and the downstream effects of adopting robust, fully transparent practices. Through this, we hope to inform future practices in computational modeling work from a broader perspective.
