Reproducibility of modeling is a problem for any machine learning practitioner, whether in industry or academia. An irreproducible model can carry significant financial costs, lost time, and even damage to personal reputation if results cannot be replicated. This paper first discusses the problems we have encountered while building a variety of machine learning models, and then describes the framework we built to tackle model reproducibility. The framework comprises four main components (data, feature, scoring, and evaluation layers), each of which is composed of well-defined transformations. This enables us not only to replicate a model exactly, but also to reuse the transformations across different models. As a result, the platform has dramatically increased the speed of both offline and online experimentation while also ensuring model reproducibility.
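One way to picture the layered design the abstract describes is a pipeline assembled from named, registered transformations. The sketch below is purely illustrative and not from the paper: the layer names, the registry, and all function names are assumptions. It shows how composing well-defined, named transformations lets a pipeline be recorded as a spec and replayed exactly, which is the property that makes a model reproducible and its pieces reusable.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

# Hypothetical sketch (not the paper's implementation): transformations are
# registered under stable names, grouped here by layer prefix (data/,
# feature/, scoring/), so a pipeline can be reconstructed from its spec.
REGISTRY: Dict[str, Callable[[Any], Any]] = {}

def transformation(name: str):
    """Register a transformation under a stable name."""
    def wrap(fn):
        REGISTRY[name] = fn
        return fn
    return wrap

@transformation("data/clip_outliers")
def clip_outliers(rows):
    # Data layer: bound raw values to a known range.
    return [min(max(r, 0.0), 100.0) for r in rows]

@transformation("feature/normalize")
def normalize(rows):
    # Feature layer: scale values into [0, 1].
    hi = max(rows) or 1.0
    return [r / hi for r in rows]

@transformation("scoring/threshold")
def threshold(rows):
    # Scoring layer: turn scores into binary decisions.
    return [1 if r > 0.5 else 0 for r in rows]

@dataclass
class Pipeline:
    spec: List[str] = field(default_factory=list)  # ordered transformation names

    def run(self, rows):
        for name in self.spec:
            rows = REGISTRY[name](rows)
        return rows

# Replaying the same recorded spec reproduces the same model behavior,
# and any transformation can be reused in a different pipeline.
spec = ["data/clip_outliers", "feature/normalize", "scoring/threshold"]
p1, p2 = Pipeline(spec), Pipeline(list(spec))
data = [12.0, 150.0, -3.0, 80.0]
assert p1.run(data) == p2.run(data)
```

The design choice here is that reproducibility falls out of determinism plus naming: if every step is a pure function addressable by a stable identifier, the full model is just an ordered list of identifiers that can be stored and replayed.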