ArXivDigest: A Living Lab for Personalized Scientific Literature Recommendation

Providing personalized recommendations that are accompanied by explanations of why an item is recommended is a research area of growing importance. At the same time, progress is limited by the scarcity of open evaluation resources. In this work, we address the task of scientific literature recommendation. We present arXivDigest, an online service that provides personalized arXiv recommendations to end users and operates as a living lab for researchers working on explainable scientific literature recommendation.
