With the increasing amount of data in deep web sources (hidden from general search engines behind web forms), accessing this data has attracted growing attention. For the algorithms applied to this task, it is knowledge of a data source's size that enables accurate decisions about when to stop crawling or sampling, processes which can be very costly [4]. Interest in knowing the sizes of data sources is further driven by competition among businesses on the Web, where data coverage is critical. This information is also helpful in the quality assessment of search engines [2], in search engine selection for federated search, and in resource/collection selection in the distributed search field [6]. In addition, it can provide useful statistics for public sectors such as governments. In any of these scenarios, when facing a non-cooperative collection that does not publish its information, the size has to be estimated [5]. In this paper, the approaches in the literature are categorized and reviewed. The most recent approaches are implemented and compared in a real environment. Finally, four methods based on modifications of the available techniques are introduced and evaluated. One of the modifications improves the estimates of other approaches by 35 to 65 percent.
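One family of techniques referenced above is capture-recapture estimation (cf. [4]): the collection is sampled twice via queries, and the overlap between the two samples reveals its size. The sketch below illustrates the basic Lincoln-Petersen estimator only; it is a hypothetical example and not the paper's own method, and the function name and sample IDs are invented for illustration.

```python
def lincoln_petersen(sample1, sample2):
    """Estimate collection size from two independent random samples
    of document IDs, using the classic Lincoln-Petersen formula:
    N ~ |sample1| * |sample2| / |overlap|.
    Raises ValueError when the samples share no documents."""
    overlap = len(set(sample1) & set(sample2))
    if overlap == 0:
        raise ValueError("no overlap between samples; estimate undefined")
    return len(sample1) * len(sample2) / overlap

# Hypothetical query-result samples of document IDs from a deep web source
s1 = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
s2 = {6, 7, 8, 9, 10, 11, 12, 13, 14, 15}
print(lincoln_petersen(s1, s2))  # → 20.0
```

In practice, query-based samples are not uniform random draws, which biases this estimator; correcting for such ranking bias is exactly the concern of works like [4].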
[1] Ziv Bar-Yossef, et al. Efficient search engine measurements. WWW '07, 2007.
[2] Andrei Z. Broder, et al. Estimating corpus size via queries. CIKM '06, 2006.
[3] Milad Shokouhi, et al. Capturing collection size for distributed non-cooperative retrieval. SIGIR, 2006.
[4] Jianguo Lu, et al. Ranking bias in deep web size estimation using capture recapture method. Data Knowl. Eng., 2010.
[5] Djoerd Hiemstra, et al. Size estimation of non-cooperative data collections. IIWAS '12, 2012.
[6] Sheng Wu, et al. Estimating collection size with logistic regression. SIGIR, 2007.