An Open Source AutoML Benchmark

In recent years, an active field of research has developed around automated machine learning (AutoML). Unfortunately, comparing different AutoML systems is hard and often done incorrectly. We introduce an open, ongoing, and extensible benchmark framework which follows best practices and avoids common mistakes. The framework is open source, uses public datasets, and has a website with up-to-date results. We use the framework to conduct a thorough comparison of 4 AutoML systems across 39 datasets and analyze the results.