Data quality is a serious concern in every data management application, and a variety of quality measures have been proposed, including accuracy, freshness, and completeness, to capture the common sources of data quality degradation. We identify and focus attention on a novel measure, column heterogeneity, which seeks to quantify the data quality problems that can arise when merging data from different sources. We identify desiderata that a column heterogeneity measure should intuitively satisfy, and discuss a promising research direction for quantifying database column heterogeneity based on a novel combination of cluster entropy and soft clustering. Finally, we present preliminary experimental results, using diverse data sets of semantically different types, to demonstrate that this approach appears to provide a robust mechanism for identifying and quantifying database column heterogeneity.
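The abstract names the two key ingredients, soft clustering and the entropy of the resulting cluster distribution, without spelling out an algorithm. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's method (whose soft clustering appears, from the cited references, to build on the information bottleneck): it featurizes column values with ad hoc character-class statistics, soft-clusters them with a Gaussian mixture as a stand-in clusterer, and reports the entropy of the aggregated membership distribution as a heterogeneity score. Every feature, function name, and parameter choice here is an illustrative assumption.

```python
# Hypothetical sketch of "cluster entropy over a soft clustering" as a
# column heterogeneity score. The features and the Gaussian-mixture
# clusterer are stand-ins, not the paper's actual construction.
import numpy as np
from sklearn.mixture import GaussianMixture

def featurize(value):
    """Crude per-value features: length plus character-class fractions."""
    n = max(len(value), 1)
    return [
        float(len(value)),
        sum(c.isdigit() for c in value) / n,
        sum(c.isalpha() for c in value) / n,
        sum(not c.isalnum() for c in value) / n,
    ]

def column_heterogeneity(values, max_clusters=5, seed=0):
    """Entropy (in bits) of the column's average soft-cluster membership.

    The mixture size is chosen by BIC so that a homogeneous column
    collapses to one cluster (entropy near 0) instead of being split
    arbitrarily across a fixed number of components.
    """
    X = np.array([featurize(v) for v in values])
    candidates = [
        GaussianMixture(n_components=k, random_state=seed).fit(X)
        for k in range(1, max_clusters + 1)
    ]
    best = min(candidates, key=lambda gm: gm.bic(X))
    p = best.predict_proba(X).mean(axis=0)  # aggregate soft-membership distribution
    p = p[p > 1e-12]                        # drop empty components before taking logs
    return float(-(p * np.log2(p)).sum())

# A column of one value type should score near zero; a column mixing
# phone-number-like and email-like strings should score higher.
phones = ["908-555-%04d" % i for i in range(50)]
mixed = phones[:25] + ["user%d@example.com" % i for i in range(25)]
print(column_heterogeneity(phones))  # typically ~0.0
print(column_heterogeneity(mixed))   # typically ~1 bit or more
```

Selecting the mixture size by BIC is one plausible way to satisfy the intuition behind the measure, since a fixed cluster count would fragment even a perfectly homogeneous column; the paper's desiderata, not this sketch, would dictate the actual choice.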