Measuring the web crawler ethics
Web crawlers are highly automated and seldom regulated manually. The diversity of crawler activities often leads to ethical problems such as spam and denial-of-service attacks. In this research, we propose quantitative models that measure web crawler ethics based on crawler behavior observed on web servers. We define rules for measuring crawler ethicality, i.e., the extent to which a crawler respects the regulations set forth in a site's robots.txt file. We then propose a vector space model that represents crawler behavior and scores each crawler's ethicality from its behavior vector. The results show that ethicality scores vary significantly among crawlers: most commercial crawlers behave ethically, yet many still consistently violate or misinterpret certain robots.txt rules. We also measure the ethicality of major search engine crawlers in terms of return on investment; by this measure, Google scores higher than other search engines on a US website but lower than Baidu on Chinese websites.
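The vector space approach can be sketched as follows: each robots.txt rule becomes one dimension of a behavior vector, a crawler's server-log activity fills in per-rule violation rates, and a weighted combination of those rates yields an ethicality score. The rule set, weights, and scoring function below are illustrative assumptions for a minimal sketch, not the paper's exact model.

```python
# Illustrative vector space ethicality measure. The rule names, weights,
# and scoring function are assumptions for this sketch, not the paper's
# exact formulation.

# Each robots.txt rule a crawler can violate is one vector dimension.
RULES = ["Disallow", "Crawl-delay", "Allow", "Visit-time"]

# Hypothetical per-rule penalty weights: ignoring Disallow is assumed
# to be more unethical than ignoring Crawl-delay.
WEIGHTS = {"Disallow": 1.0, "Crawl-delay": 0.5, "Allow": 0.3, "Visit-time": 0.2}

def behavior_vector(violations: dict, requests: int) -> list:
    """Represent a crawler as per-rule violation rates from server logs."""
    return [violations.get(rule, 0) / max(requests, 1) for rule in RULES]

def ethicality_score(violations: dict, requests: int) -> float:
    """Weighted sum of violation rates; a higher score means less ethical."""
    vec = behavior_vector(violations, requests)
    return sum(w * v for w, v in zip(WEIGHTS.values(), vec))

if __name__ == "__main__":
    # Toy log summaries for two hypothetical crawlers over 1,000 requests.
    print(ethicality_score({"Disallow": 12, "Crawl-delay": 40}, 1000))  # 0.032
    print(ethicality_score({}, 1000))  # 0.0: fully compliant
```

In the paper's setting, such vectors would be derived from server access logs checked against each site's parsed robots.txt; the weights here merely encode the assumption that some rule violations are more serious than others.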