Local copies of remote data from autonomous sources are often maintained to improve availability and performance; examples of such local copies include data warehouses and the repositories managed by web search engines. As the local data grows, it is not always feasible to keep the entire collection fresh (up to date) due to resource limitations. Previous work on maintaining the freshness of local data measures freshness as the proportion of fresh documents in the repository (we call this average freshness). Under this measure, freshness can remain high even when updates to frequently changing documents are not captured. In this paper, we argue that, in addition to average freshness, the freshness metric should also reflect the proportion of changes captured for each document, which we call object freshness. The latter is particularly important when both current and historical versions of information sources are queried or mined. We propose an approach that builds an access scheduling tree (AST) to precisely schedule accesses to remote sources, achieving optimal freshness of the local data under limited resources. Our experiments show that our approach significantly outperforms a linear priority queue.
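To make the distinction between the two metrics concrete, the following minimal sketch (not taken from the paper) computes both for a toy repository; the data-structure fields and the per-document statistics are my own assumptions for illustration only.

```python
# Illustrative sketch (assumptions, not the paper's method): contrasting
# average freshness (repository-level) with object freshness (per document).

from dataclasses import dataclass

@dataclass
class DocStats:
    changes_at_source: int   # total updates made by the remote source
    changes_captured: int    # updates reflected in the local copy
    is_fresh_now: bool       # whether the local copy currently matches the source

def average_freshness(docs: list[DocStats]) -> float:
    """Fraction of documents whose local copy is currently up to date."""
    return sum(d.is_fresh_now for d in docs) / len(docs)

def object_freshness(doc: DocStats) -> float:
    """Fraction of one document's source changes that were captured locally."""
    if doc.changes_at_source == 0:
        return 1.0
    return doc.changes_captured / doc.changes_at_source

# Example: a rarely changing page that is fresh now, and a hot page that changed
# 10 times but was re-fetched only twice. Average freshness looks moderate (0.5),
# yet most of the hot page's change history was missed (object freshness 0.2).
docs = [DocStats(1, 1, True), DocStats(10, 2, False)]
print(average_freshness(docs))              # 0.5
print([object_freshness(d) for d in docs])  # [1.0, 0.2]
```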