KAYAK: A Framework for Just-in-Time Data Preparation in a Data Lake

A data lake is a loosely structured, large-scale collection of data that is typically ingested with few, if any, data-quality requirements. This approach aims to eliminate human effort before the data are actually exploited, but it only defers the problem, since preparing and querying a data lake is usually a hard task. We address this problem by introducing Kayak, a framework that helps data scientists define and optimize data-preparation pipelines. Since approximate results, which can be computed quickly, are often informative enough, Kayak lets users state their needs as a trade-off between accuracy and performance, and it produces previews of the outputs that satisfy this requirement. In this way, the pipeline executes much faster and the data-preparation process is shortened. We discuss Kayak's design choices, including its execution strategies, optimization techniques, scheduling of operations, and metadata management. With a set of preliminary experiments, we show that the approach is effective and scales well with the number of datasets in the data lake.

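To make the accuracy-versus-performance trade-off concrete, the sketch below shows a hypothetical data-preparation pipeline whose steps can return either an exact result or a fast preview computed on a sample. This is a minimal illustration of the idea described in the abstract, not Kayak's actual API: the Pipeline, Step, and min_accuracy names are assumptions introduced here for exposition.

# Illustrative sketch (hypothetical names, not Kayak's API): a pipeline whose
# steps return fast, approximate previews when the user accepts lower accuracy.
from dataclasses import dataclass, field
from typing import Callable, List
import random


@dataclass
class Step:
    name: str
    exact: Callable[[list], float]        # full computation over all the data
    approximate: Callable[[list], float]  # cheap estimate over a sample


@dataclass
class Pipeline:
    steps: List[Step] = field(default_factory=list)

    def add(self, step: Step) -> "Pipeline":
        self.steps.append(step)
        return self

    def run(self, data: list, min_accuracy: float = 1.0) -> dict:
        """Run each step; below full accuracy, compute a preview on a sample.
        For simplicity, the accuracy requirement is interpreted here as a
        sampling fraction (an assumption of this sketch)."""
        results = {}
        for step in self.steps:
            if min_accuracy >= 1.0:
                results[step.name] = step.exact(data)
            else:
                k = max(1, int(len(data) * min_accuracy))
                sample = random.sample(data, k)
                results[step.name] = step.approximate(sample)
        return results


if __name__ == "__main__":
    data = list(range(1_000_000))
    pipe = Pipeline().add(
        Step("mean",
             exact=lambda d: sum(d) / len(d),
             approximate=lambda s: sum(s) / len(s))
    )
    print(pipe.run(data, min_accuracy=0.01))  # fast preview on a 1% sample
    print(pipe.run(data, min_accuracy=1.0))   # exact result on the full data

The point of the sketch is only the interaction model: the user declares how much accuracy they need, and the framework decides whether an approximate preview suffices instead of the full computation.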