Crawlers are software systems that traverse the web and retrieve pages by following hyperlinks. Given the enormous number of websites, traditional web crawlers cannot effectively retrieve only the relevant pages. To address this problem, focused crawlers use semantic web technologies to analyze the semantics of hyperlinks and web documents. A focused crawler is a special-purpose search engine that selectively seeks out pages relevant to a predefined set of topics rather than exploring all regions of the web. The main characteristic of focused crawling is that the crawler does not collect all web pages; it selects and retrieves only the relevant ones. The central problem is therefore how to retrieve the maximal set of relevant, high-quality pages. To address this problem, we have designed a focused crawler that calculates the relevancy of each block in a web page, where blocks are partitioned by the VIPS algorithm. Page relevancy is computed as the sum of the relevancy scores of all blocks in the page. The crawler also computes a URL score to decide whether a URL is relevant to a specific topic.
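The scoring scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the term-frequency cosine similarity measure, and the URL threshold are assumptions, and the VIPS block partitioning itself is taken as given (blocks arrive as pre-segmented text).

```python
import math
from collections import Counter

def cosine_similarity(a: Counter, b: Counter) -> float:
    # Dot product over shared terms, divided by the two vector norms.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def block_relevancy(block_text: str, topic_terms: list) -> float:
    # Relevancy of one VIPS block: similarity between its term
    # frequencies and the topic's term vector (assumed measure).
    return cosine_similarity(Counter(block_text.lower().split()),
                             Counter(t.lower() for t in topic_terms))

def page_relevancy(blocks: list, topic_terms: list) -> float:
    # As described in the text: page score = sum of block scores.
    return sum(block_relevancy(b, topic_terms) for b in blocks)

def url_is_relevant(anchor_text: str, topic_terms: list,
                    threshold: float = 0.1) -> bool:
    # Hypothetical URL score: keep the URL if its anchor text is
    # sufficiently close to the topic (threshold is an assumption).
    return block_relevancy(anchor_text, topic_terms) >= threshold
```

For example, a page partitioned into a block about crawlers and an unrelated block still receives a positive page score from the relevant block alone, which is what makes block-level scoring more forgiving than whole-page scoring.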