Analyzing the Sense Distribution of Concordances Obtained by Web as Corpus Approach

In the fields of corpus-based lexicography and natural language processing, several authors have proposed using the Internet as a source of corpora for obtaining concordances of words. Most techniques implemented with this approach rely on information-retrieval-oriented web search engines. However, the rankings of concordances returned by these engines are not built according to linguistic criteria but according to topic similarity or navigation-oriented criteria such as PageRank. As a consequence, the retrieved examples or concordances may not be linguistically representative, and the linguistic knowledge mined by these methods may be of limited use. This work analyzes the linguistic representativeness of concordances obtained from web search engines based on different relevance criteria (general web, blog, and news search engines). The analysis compares the web concordances with SemCor (the reference corpus) with respect to the distribution of word senses. The results show that the sense distributions in concordances obtained from web search engines are, in general, quite different from those in the reference corpus. Among the search engines, the ones most similar to the reference were the informationally oriented engines (news and blog search engines).
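The comparison described above amounts to estimating a word's sense distribution in SemCor, estimating it again from the sense-tagged web concordances, and measuring the distance between the two distributions. The abstract does not name a specific distance measure or toolkit, so the sketch below is a hypothetical Python illustration that reads SemCor via NLTK and uses Jensen-Shannon divergence as the comparison metric; the function names and the choice of metric are assumptions made for illustration only.

```python
from collections import Counter
from math import log2

from nltk.corpus import semcor  # requires nltk.download('semcor') and 'wordnet'


def semcor_sense_distribution(lemma, pos="n"):
    """Relative frequency of each WordNet sense of `lemma` in SemCor."""
    counts = Counter()
    for chunk in semcor.tagged_chunks(tag="sem"):
        # Semantically tagged chunks are Trees labeled with a WordNet Lemma;
        # untagged tokens and out-of-WordNet labels are skipped.
        label = chunk.label() if hasattr(chunk, "label") else None
        if hasattr(label, "synset") and label.name().lower() == lemma:
            synset = label.synset()
            if synset.pos() == pos:
                counts[synset.name()] += 1
    total = sum(counts.values())
    return {sense: n / total for sense, n in counts.items()} if total else {}


def jensen_shannon(p, q):
    """Jensen-Shannon divergence between two sense distributions (dicts of sense -> probability)."""
    senses = set(p) | set(q)
    m = {s: 0.5 * (p.get(s, 0.0) + q.get(s, 0.0)) for s in senses}

    def kl(a, b):
        return sum(a[s] * log2(a[s] / b[s]) for s in senses if a.get(s, 0.0) > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

For example, `semcor_sense_distribution("bank")` yields the reference distribution over the WordNet senses of the noun "bank" in SemCor; a distribution estimated from the sense-tagged concordances returned by a given search engine can then be passed to `jensen_shannon` to quantify how far that engine's results depart from the reference.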
