We present a preliminary study of the evolution of a crawling strategy for an academic document search engine, in particular CiteSeerX. CiteSeerX actively crawls the web for academic and research documents, primarily in computer and information sciences, and then performs information extraction and indexing, extracting information such as OAI metadata, citations, and tables. As such, CiteSeerX can be considered a specialty or vertical search engine. To improve the precision of the resources expended on crawling, we replace a blacklist with a whitelist and compare crawling efficiency before and after this change. With a blacklist, the crawler is forbidden from a specified list of URLs, such as publisher domains, but is otherwise unrestricted; with a whitelist, only the listed domains are crawled and all others are excluded. The whitelist is generated from domain ranking scores of approximately five million parent URLs harvested by the CiteSeerX crawler over the past four years. We calculate an F1 score for each domain by applying equal weights to its document count and citation rate, and then generate the whitelist by re-ordering parent URLs according to these domain ranking scores. We find that crawling against the whitelist significantly increases crawl precision by eliminating a large number of irrelevant requests and downloads.
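As a rough illustration of the whitelist generation step described above, the sketch below combines each domain's document count and citation rate into an F1-style score with equal weights and ranks domains by that score. The field names, min-max normalization, and example domains are assumptions for illustration only, not the exact procedure used by CiteSeerX.

```python
# Hypothetical sketch of whitelist generation: score each crawl domain by an
# F1-style combination of document count and citation rate, then rank domains.
# The normalization and input fields are illustrative assumptions.

def normalize(values):
    """Scale a list of non-negative numbers into [0, 1] by its maximum."""
    peak = max(values) or 1.0
    return [v / peak for v in values]

def f1(a, b):
    """Harmonic mean of two normalized components, weighted equally."""
    return 2 * a * b / (a + b) if (a + b) > 0 else 0.0

def rank_domains(stats):
    """stats: dict mapping domain -> (document_count, citation_rate).
    Returns domains sorted by descending F1-style score."""
    domains = list(stats)
    docs = normalize([stats[d][0] for d in domains])
    cites = normalize([stats[d][1] for d in domains])
    scores = {d: f1(docs[i], cites[i]) for i, d in enumerate(domains)}
    return sorted(scores, key=scores.get, reverse=True)

# Example: build a small whitelist from a few hypothetical domains.
whitelist = rank_domains({
    "arxiv.org": (120000, 3.1),
    "example-university.edu": (8000, 1.2),
    "random-blog.net": (15, 0.0),
})
print(whitelist)  # highest-scoring domains first
```

In the actual system this ranking would be applied to the roughly five million harvested parent URLs rather than a handful of toy domains, and the choice of normalization would influence which mid-sized domains make the whitelist cut.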