A Scalable Lightweight Distributed Crawler for Crawling with Limited Resources
Web page crawlers are an essential component of many Web applications. The sheer size of the Web poses challenges for crawler design: all currently known crawlers adopt approximations or accept limitations in order to maximize crawl throughput, that is, the number of pages that can be retrieved within a given time frame. This paper proposes a distributed crawling approach designed to avoid such approximations, to limit network overhead, and to run on relatively inexpensive hardware. A set of experiments and comparisons highlights the effectiveness of the proposed approach.
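The abstract does not detail how the crawl is partitioned across machines. As a hedged illustration only, the sketch below shows one common way distributed crawlers limit network overhead: assigning each URL to a node by hashing its hostname, so that all pages of a site stay on one node and per-host politeness state never has to be exchanged. The function name `assign_node` and the node count are illustrative assumptions, not the paper's actual design.

```python
import hashlib
from urllib.parse import urlparse


def assign_node(url: str, num_nodes: int) -> int:
    """Assign a URL to a crawler node by hashing its hostname.

    Hashing on the host (not the full URL) keeps every page of a
    site on the same node, so per-host politeness bookkeeping stays
    local and URLs for a host never need to cross the network.
    (Illustrative sketch; not the scheme used in the paper.)
    """
    host = urlparse(url).netloc.lower()
    digest = hashlib.sha1(host.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_nodes


urls = [
    "http://example.com/a",
    "http://example.com/b",
    "http://example.org/index.html",
]
assignments = [assign_node(u, 4) for u in urls]
# Pages from the same host always map to the same node.
assert assignments[0] == assignments[1]
```

A fixed modulo mapping like this is the simplest variant; systems such as UbiCrawler [3] instead use consistent hashing so that adding or removing a node reassigns only a small fraction of hosts.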