From a Scholarly Big Dataset to a Test Collection for Bibliographic Citation Recommendation
The problem of designing recommender systems for scholarly article citations has been actively researched, with more than 200 publications appearing in the last two decades. In spite of this, no definitive results are available about which approaches work best. Arguably the most important reason for this lack of consensus is the dearth of standardised test collections and evaluation protocols, such as those provided by TREC-like forums. CiteSeerX, a "scholarly big dataset", has recently become available. However, this collection provides only the raw material that is yet to be moulded into Cranfield-style test collections. In this paper, we discuss the limitations of test collections used in earlier work, and describe how we used CiteSeerX to design a test collection with a well-defined evaluation protocol. The collection consists of over 600,000 research papers and over 2,500 queries. We report some preliminary experimental results using this collection, which are indicative of the performance of elementary content-based techniques. These experiments also made us aware of some shortcomings of CiteSeerX itself.
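To make the evaluation setting concrete, the sketch below shows one way an elementary content-based baseline of the kind mentioned above could be run against such a collection: candidate papers are indexed with TF-IDF, a query (e.g. the text of a citing paper) is matched by cosine similarity, and recall@k is computed against held-out ground-truth citations. This is a minimal illustration under assumed data structures (`papers` as id/text pairs), not the authors' actual pipeline or protocol.

```python
# Illustrative content-based citation-recommendation baseline.
# Assumption: `papers` is a list of (paper_id, text) pairs, and each
# query comes with a set of held-out ground-truth citations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def build_index(papers):
    """Fit a TF-IDF index over the candidate papers' text."""
    ids, texts = zip(*papers)
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(texts)
    return list(ids), vectorizer, matrix


def recommend(query_text, ids, vectorizer, matrix, k=10):
    """Rank candidate papers by cosine similarity to the query text."""
    query_vec = vectorizer.transform([query_text])
    scores = cosine_similarity(query_vec, matrix).ravel()
    ranked = sorted(zip(ids, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]


def recall_at_k(recommended, relevant, k=10):
    """Fraction of held-out citations retrieved in the top k results."""
    top = {pid for pid, _ in recommended[:k]}
    return len(top & set(relevant)) / len(relevant)
```

A run over the whole collection would simply average `recall_at_k` (or a similar rank-based measure) across all queries; any stronger technique would be compared against this kind of baseline under the same protocol.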