A scale for crawler effectiveness on the client-side hidden web

The main goal of this study is to present a scale that classifies crawling systems according to their effectiveness in traversing the client-side Hidden Web. First, we perform a thorough analysis of the different client-side technologies and the main features of web pages in order to determine the basic steps of the scale. Then, we define the scale by grouping basic scenarios according to several common features, and we propose methods to evaluate crawler effectiveness at each level of the scale. Finally, we present a testing web site and report the results of applying these methods to several open-source and commercial crawlers that attempted to traverse its pages. Only a few crawlers handle client-side technologies well. Among standalone crawlers, we highlight the open-source crawlers Heritrix and Nutch and the commercial crawler WebCopierPro, which is able to process very complex scenarios. Among the crawlers of the main search engines, only Google processes most of the proposed scenarios, while Yahoo! and Bing deal only with the basic ones. Few previous studies assess the capacity of crawlers to deal with client-side technologies, and those that do consider fewer technologies, fewer crawlers, and fewer combinations than ours. Furthermore, to the best of our knowledge, this article provides the first scale for classifying crawlers from the point of view of the most important client-side technologies.
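As a minimal illustration of the problem the scale addresses (a hypothetical page, not one of the paper's actual test scenarios), consider a document whose only outgoing link is generated at runtime by JavaScript. A crawler that parses static HTML without executing scripts never discovers that link, so the target page remains in the client-side Hidden Web for that crawler. The sketch below uses only the Python standard library; the names PAGE and LinkExtractor are illustrative.

```python
from html.parser import HTMLParser

# Hypothetical page: its single navigation link is built at runtime by JavaScript,
# so it is invisible to a crawler that only parses the static markup.
PAGE = """
<html>
  <body>
    <div id="menu"></div>
    <script type="text/javascript">
      // The href only exists after the script runs in a browser-like engine.
      document.getElementById('menu').innerHTML =
        '<a href="/hidden/section1.html">Section 1</a>';
    </script>
  </body>
</html>
"""

class LinkExtractor(HTMLParser):
    """Collects href attributes from <a> tags found in static HTML."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

extractor = LinkExtractor()
extractor.feed(PAGE)

# A script-agnostic crawler extracts no links at all from this page:
# the <a> element exists only inside a JavaScript string literal.
print(extractor.links)  # -> []
```

A crawler that evaluates the script (for example, by embedding a browser engine) would obtain the anchor and could continue the traversal; this is essentially the capability that the higher levels of the proposed scale test for.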
