A framework for a deep Web crawler

An ever-increasing amount of information on the Web today is available only through search interfaces: users have to key in a set of keywords to access the pages of certain Web sites. Such pages are often referred to as the hidden Web or the deep Web. Since there are no static links to hidden Web pages, search engines cannot discover and index them. However, according to recent studies, the content provided by many hidden Web sites is often of very high quality and can be extremely valuable to many users. This paper studies how to build an effective hidden Web crawler that can autonomously discover and download pages from the hidden Web. We provide a framework for a deep Web crawler and propose novel techniques to handle the actual mechanics of crawling the deep Web. Experiments show that these policies are effective.
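
To make the basic mechanics concrete, the sketch below shows the query-submit-and-harvest loop that a hidden Web crawler performs: it fills a site's search interface with a keyword, extracts the result links, and downloads the pages they point to. This is a minimal illustration, not the paper's framework; the search endpoint, query parameter name, and keyword list are hypothetical placeholders, and the query-selection policies evaluated in the paper are not shown.

```python
# Minimal sketch of a hidden Web crawler's query loop (illustrative only).
# The search URL, parameter name, and keywords below are hypothetical.
from html.parser import HTMLParser
from urllib.parse import urlencode, urljoin
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects absolute href values from anchor tags in a result page."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))


def crawl_hidden_site(search_url, query_param, keywords, max_pages=50):
    """Submit each keyword to the site's search interface and download
    the pages reachable from the returned result links."""
    seen, downloaded = set(), []
    for keyword in keywords:
        # Build the query URL as if the keyword were typed into the form.
        url = search_url + "?" + urlencode({query_param: keyword})
        with urlopen(url) as resp:
            parser = LinkExtractor(url)
            parser.feed(resp.read().decode("utf-8", errors="replace"))
        # Download each new result page until the page budget is reached.
        for link in parser.links:
            if link in seen or len(downloaded) >= max_pages:
                continue
            seen.add(link)
            with urlopen(link) as page:
                downloaded.append((link, page.read()))
    return downloaded
```

In practice, the effectiveness of such a crawler hinges on which keywords it submits and in what order, which is exactly the kind of policy the proposed framework addresses.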