INTRODUCTION

In recent years, advances in data collection and management technologies have led to a proliferation of very large databases. These large data repositories are typically created in the hope that, through analysis such as data mining and decision support, they will yield new insights into the data and into the real-world processes that created them. In practice, however, while the collection and storage of massive datasets has become relatively straightforward, effective data analysis has proven more difficult to achieve.

One reason that data analysis successes have proven elusive is that most analysis queries, by their nature, require aggregation or summarization of large portions of the data being analyzed. For multi-gigabyte data repositories, this means that processing even a single analysis query involves accessing enormous amounts of data, leading to prohibitively expensive running times. This severely limits the feasibility of many types of analysis applications, especially those that depend on timeliness or interactivity.

While keeping query response times short is very important in many data mining and decision support applications, exactness in query results is frequently less important. In many cases, ballpark estimates are adequate to provide the desired insights about the data, at least in preliminary phases of analysis. For example, knowing the marginal data distributions for each attribute up to 10% error will often be enough to identify top-selling products in a sales database or to determine the best attribute to use at the root of a decision tree. Consider, for instance, a query that computes the total number of units of a particular item sold in a sales database, grouped by state. Instead of a time-consuming process that produces completely accurate answers, it may in some circumstances be acceptable to produce ballpark estimates (e.g., counts to the nearest thousand).

The acceptability of inexact query answers, coupled with the necessity for fast query response times, has led researchers to investigate approximate query answering (AQA) techniques that sacrifice accuracy to improve running time, typically through some form of lossy data compression. The general rubric in which most approximate query processing systems operate is as follows: first, during a preprocessing phase, some auxiliary data structures, or data synopses, are built over the database; then, during the runtime phase, queries are issued to the system and approximate answers are quickly returned, using the data synopses built during the preprocessing phase. This two-phase approach is sketched below. The quality of an approximate query processing system is often determined by how …
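As a concrete illustration of the two-phase rubric, the following minimal Python sketch builds a uniform random sample of a sales table as the preprocessing-time synopsis, then answers the grouped-count query ("units of a given item sold, by state") at runtime by counting over the sample and scaling up. The table schema, the column names (item, state), the sampling fraction, and the simple scale-up estimator are illustrative assumptions, not the synopsis or estimator of any particular AQA system.

```python
import random
from collections import Counter

# --- Preprocessing phase: build a data synopsis (here, a uniform random sample). ---
def build_sample_synopsis(sales_rows, sample_fraction=0.05, seed=42):
    """Draw a uniform random sample of the sales table to serve as the synopsis.

    sales_rows: iterable of (item, state) tuples -- hypothetical schema for illustration.
    """
    rng = random.Random(seed)
    sample = [row for row in sales_rows if rng.random() < sample_fraction]
    return sample, sample_fraction

# --- Runtime phase: approximate "total units of `item` sold, grouped by state". ---
def approximate_count_by_state(synopsis, sample_fraction, item):
    """Count matching rows in the small sample and scale up by 1 / sample_fraction."""
    counts = Counter(state for (it, state) in synopsis if it == item)
    return {state: round(c / sample_fraction) for state, c in counts.items()}

if __name__ == "__main__":
    # Tiny synthetic sales table: 100,000 (item, state) rows standing in for a large repository.
    rng = random.Random(0)
    items = ["widget", "gadget", "gizmo"]
    states = ["CA", "NY", "TX", "WA"]
    sales = [(rng.choice(items), rng.choice(states)) for _ in range(100_000)]

    synopsis, frac = build_sample_synopsis(sales)
    print("estimate:", approximate_count_by_state(synopsis, frac, "widget"))
    print("exact:   ", dict(Counter(s for (it, s) in sales if it == "widget")))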
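```

The point of the sketch is that the runtime phase touches only the small synopsis rather than the full table, trading a bounded amount of error (here, sampling error that shrinks as the sample grows) for a large reduction in the data accessed per query.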