Poor data quality has become a persistent challenge for organizations as data continues to grow in complexity and size. Existing data cleaning solutions focus on identifying repairs to the data that minimize either a cost function or the number of updates. These techniques, however, fail to consider the underlying data privacy requirements that exist in many real datasets containing sensitive and personal information. In this demonstration, we present PARC, a Privacy-AwaRe data Cleaning system that corrects data inconsistencies with respect to a set of functional dependencies (FDs) while limiting the disclosure of sensitive values during the cleaning process. The system core contains modules that evaluate three key metrics during the repair search and solve a multi-objective optimization problem to identify repairs that balance the privacy vs. utility tradeoff. This demonstration will enable users to understand: (1) the characteristics of a privacy-preserving data repair; (2) how to customize data cleaning and data privacy requirements using two real datasets; and (3) the distinctions among the repair recommendations via visualization summaries.
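To make the privacy vs. utility tradeoff concrete, the following is a minimal, hypothetical Python sketch of the general idea, not PARC's actual metrics or repair algorithm: detect violations of an FD, propose a candidate repair, and score it with a weighted combination of a utility cost (cells changed) and a privacy cost (sensitive cells changed). The relation, the FD zip -> city, the sensitive attribute, and the weights are all illustrative assumptions.

```python
# Hypothetical sketch of privacy-aware repair scoring (not PARC's implementation).
from itertools import combinations

# Toy relation: each tuple maps attribute -> value.
relation = [
    {"zip": "90210", "city": "Beverly Hills", "salary": "high"},
    {"zip": "90210", "city": "Los Angeles",   "salary": "high"},
    {"zip": "10001", "city": "New York",      "salary": "low"},
]

# One FD: zip -> city. Assumed sensitive attribute: salary.
fd = (("zip",), ("city",))
sensitive = {"salary"}

def fd_violations(rel, fd):
    """Return pairs of tuple indices that agree on the FD's left-hand side
    but disagree on its right-hand side."""
    lhs, rhs = fd
    return [
        (i, j)
        for i, j in combinations(range(len(rel)), 2)
        if all(rel[i][a] == rel[j][a] for a in lhs)
        and any(rel[i][a] != rel[j][a] for a in rhs)
    ]

def repair_cost(original, repaired, sensitive, w_util=0.5, w_priv=0.5):
    """Weighted sum of a utility cost (number of changed cells) and a
    privacy cost (number of changed sensitive cells)."""
    util_cost = sum(
        orig[a] != rep[a]
        for orig, rep in zip(original, repaired)
        for a in orig
    )
    priv_cost = sum(
        orig[a] != rep[a]
        for orig, rep in zip(original, repaired)
        for a in sensitive
    )
    return w_util * util_cost + w_priv * priv_cost

# Candidate repair: make tuple 1 agree with tuple 0 on 'city'.
candidate = [dict(t) for t in relation]
candidate[1]["city"] = "Beverly Hills"

print("violations before:", fd_violations(relation, fd))   # [(0, 1)]
print("violations after: ", fd_violations(candidate, fd))  # []
print("combined cost:    ", repair_cost(relation, candidate, sensitive))
```

In a real multi-objective setting one would enumerate many candidate repairs and keep those that are not dominated on both objectives; the single weighted score above is only the simplest way to collapse the tradeoff into one number.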