An Improved Method for Semantic Similarity Calculation Based on Stop-Words

Text similarity calculation has become a key issue in many applications such as information retrieval, word sense disambiguation, and automatic question answering. There is a growing need for similarity calculation at different levels, e.g., characters, words, syntactic structures, and semantics. Most existing semantic similarity algorithms can be categorized into statistics-based methods, rule-based methods, and combinations of the two. Statistical methods use knowledge bases to incorporate more comprehensive knowledge and can reduce knowledge noise, so they generally achieve better performance. Nevertheless, because items are unevenly distributed in the knowledge base, their semantic similarity performance on low-frequency words is usually poor. In this work, based on the distributions of stop-words, we propose a weight normalization method for semantic dimensions. The proposed method uses the semantic independence of stop-words to avoid the semantic bias of the corpus in statistical methods, further improving the accuracy of semantic similarity computation. Experiments comparing the proposed method with several existing algorithms demonstrate its effectiveness.
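The abstract's core idea — since stop-words are largely semantically neutral, dimensions where stop-word vectors carry heavy mass reflect corpus bias and can be down-weighted before computing similarity — can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the function names, the inverse-bias weighting scheme, and the toy vectors are all assumptions.

```python
import numpy as np

def stopword_dimension_weights(stop_vecs, eps=1e-8):
    """Estimate per-dimension weights from stop-word vectors.

    Assumption (from the abstract's premise): stop-words are semantically
    independent, so dimensions where they carry large average mass are
    treated as corpus bias and receive lower weight.
    """
    bias = np.abs(stop_vecs).mean(axis=0)          # average stop-word mass per dimension
    weights = 1.0 / (bias + eps)                   # invert: high bias -> low weight
    return weights / weights.sum() * len(weights)  # normalize to mean weight 1

def weighted_cosine(u, v, weights):
    """Cosine similarity after reweighting semantic dimensions."""
    uw, vw = u * weights, v * weights
    denom = np.linalg.norm(uw) * np.linalg.norm(vw)
    return float(uw @ vw / denom) if denom else 0.0

# Toy example: 4 semantic dimensions; dimension 0 is heavily loaded by
# stop-words and therefore gets down-weighted.
stop_vecs = np.array([[0.9, 0.1, 0.1, 0.1],
                      [0.8, 0.2, 0.1, 0.1],
                      [0.9, 0.1, 0.2, 0.1]])
w = stopword_dimension_weights(stop_vecs)

u = np.array([0.9, 0.5, 0.1, 0.1])   # two content words that agree mainly
v = np.array([0.9, 0.1, 0.5, 0.1])   # on the biased dimension 0
plain = weighted_cosine(u, v, np.ones(4))
debiased = weighted_cosine(u, v, w)
print(plain, debiased)  # debiased similarity drops once dimension 0 is discounted
```

Dividing out the average stop-word mass per dimension is one simple normalization choice; the paper's method may define the weights differently, but the intent — using stop-word distributions to correct semantic bias — is the same.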