Size estimation of non-cooperative data collections

As the amount of data in deep web sources (hidden from general search engines behind web forms) grows, accessing this data has attracted increasing attention. For the algorithms applied to this task, knowing the size of a data source is what allows them to decide accurately when to stop crawling or sampling, processes that can be very costly [14]. The demand for size information is further driven by competition among businesses on the Web, where data coverage is critical. Such information is also useful for quality assessment of search engines [7], for search engine selection in federated search, and for resource/collection selection in distributed search [19]. In addition, it can yield useful statistics for public-sector bodies such as governments. In all of these scenarios, when a collection is non-cooperative and does not publish its size, the size must be estimated [17]. This paper categorizes and reviews the approaches proposed in the literature for this purpose, then implements the most recent ones and compares them in a real environment. Finally, four methods based on modifications of the existing techniques are introduced and evaluated. One of these modifications improved the estimates produced by the other approaches by 35 to 65 percent.
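
As background for the capture-recapture family of estimators that several of the surveyed works build on ([3], [9], [10], [11]), the following is a minimal Python sketch of the Chapman-corrected Lincoln-Petersen estimator applied to a synthetic collection. The function name, sample sizes, and the toy collection are illustrative assumptions, not the paper's method; in particular, the sketch assumes uniform random samples, whereas obtaining near-uniform samples from a real non-cooperative source is itself a nontrivial problem [17].

```python
import random

def capture_recapture_estimate(sample_a, sample_b):
    """Chapman-corrected Lincoln-Petersen estimate of collection size.

    sample_a, sample_b: two independent uniform samples of document
    identifiers drawn from the same collection of unknown size.
    """
    recaptures = len(set(sample_a) & set(sample_b))
    # Chapman's correction (+1 terms) avoids division by zero and
    # reduces bias when the number of recaptures is small.
    return (len(sample_a) + 1) * (len(sample_b) + 1) / (recaptures + 1) - 1

if __name__ == "__main__":
    # Toy demonstration: a synthetic "collection" of 100,000 documents
    # stands in for a deep web source we cannot enumerate directly.
    true_size = 100_000
    collection = range(true_size)
    sample_a = random.sample(collection, 2_000)
    sample_b = random.sample(collection, 2_000)
    estimate = capture_recapture_estimate(sample_a, sample_b)
    print(f"true size: {true_size}, estimate: {estimate:.0f}")
```

In practice, query-based sampling from a search interface is far from uniform; ranking bias in the returned results systematically distorts such estimates, which is one of the effects the capture-recapture literature cited above addresses [3].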

[1] Ziv Bar-Yossef, et al. Efficient search engine measurements, 2007, WWW '07.

[2] Xin Jin, et al. Unbiased estimation of size and other aggregates over hidden web databases, 2010, SIGMOD Conference.

[3] Jianguo Lu, et al. Ranking bias in deep web size estimation using capture recapture method, 2010, Data Knowl. Eng.

[4] David J. C. MacKay, et al. Introduction to Monte Carlo Methods, 1998, Learning in Graphical Models.

[5] Andrei Z. Broder, et al. A Technique for Measuring the Relative Size and Overlap of Public Web Search Engines, 1998, Comput. Networks.

[6] James P. Callan, et al. Query-based sampling of text databases, 2001, TOIS.

[7] John C. Kern, et al. Introduction to Regression Analysis, 2007.

[8] Andrei Z. Broder, et al. Sampling Search-Engine Results, 2005, WWW '05.

[9] Bryan F. J. Manly, et al. Handbook of Capture-Recapture Analysis, 2010.

[10] Paul Thomas. Generalising multiple capture-recapture to non-uniform sample sizes, 2008, SIGIR '08.

[11] Jianguo Lu, et al. Estimating deep web data source size by capture–recapture method, 2010, Information Retrieval.

[12] Milad Shokouhi, et al. Capturing collection size for distributed non-cooperative retrieval, 2006, SIGIR.

[13] Sheng Wu, et al. Estimating collection size with logistic regression, 2007, SIGIR.

[14] H. Katzgraber. Introduction to Monte Carlo Methods, 2009, arXiv:0905.1629.

[15] Antonio Gulli, et al. The indexable web is more than 11.5 billion pages, 2005, WWW '05.

[16] Andrei Z. Broder, et al. Estimating corpus size via queries, 2006, CIKM '06.

[17] Ziv Bar-Yossef, et al. Random sampling from a search engine's index, 2006, WWW '06.

[18] David J. Olive, et al. Introduction to Regression Analysis, 2007.