Syntactically different URLs can represent the same web page on the World Wide Web, and such duplicate representations cause web applications to process a large number of identical web pages unnecessarily. In the standards community, there are ongoing efforts to define URL normalization, which helps eliminate duplicate URLs. In parallel, research efforts extend the standard URL normalization methods to further reduce false negatives while allowing a limited level of false positives. This paper presents a method that evaluates the effectiveness of a URL normalization method in terms of page loss/gain/change and URL reduction. Over 94 million URLs were extracted from web pages for our experiment, and interesting statistical results are reported in this paper.
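Standard URL normalization is largely syntax-based. As a rough illustration only, the sketch below (a hypothetical helper, not the authors' procedure) applies a few commonly cited steps: lowercasing the scheme and host, dropping the scheme's default port, resolving dot-segments in the path, and cleaning up percent-encodings, so that syntactically different URLs map to one canonical string.

```python
# Minimal, illustrative sketch of syntax-based URL normalization.
# normalize_url is a hypothetical helper; the exact steps and their order
# are assumptions, not the method evaluated in the paper.
from urllib.parse import urlsplit, urlunsplit, unquote, quote
import posixpath

DEFAULT_PORTS = {"http": 80, "https": 443}

def normalize_url(url: str) -> str:
    parts = urlsplit(url)
    scheme = parts.scheme.lower()
    host = (parts.hostname or "").lower()

    # Drop the port when it is the scheme's default (e.g. :80 for http).
    netloc = host
    if parts.port is not None and parts.port != DEFAULT_PORTS.get(scheme):
        netloc += f":{parts.port}"

    # Resolve "." and ".." segments and ensure a non-empty path.
    path = posixpath.normpath(parts.path) if parts.path else "/"
    if parts.path.endswith("/") and not path.endswith("/"):
        path += "/"

    # Decode and re-encode the path so percent-encodings are consistent.
    path = quote(unquote(path), safe="/%:@!$&'()*+,;=~-._")

    return urlunsplit((scheme, netloc, path, parts.query, parts.fragment))

# Example: these syntactically different URLs normalize to the same string.
print(normalize_url("HTTP://Example.COM:80/a/./b/../index.html"))
print(normalize_url("http://example.com/a/index.html"))
```

Two URLs that normalize to the same string can then be treated as duplicates; extended normalization methods of the kind the paper studies go further than these syntax-only steps, at the cost of occasional false positives.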