Report on the SIGIR 2013 workshop on benchmarking adaptive retrieval and recommender systems
In recent years, immense progress has been made in the development of recommendation, retrieval, and personalisation techniques. The evaluation of these systems, however, is still based on traditional information retrieval and statistical metrics such as precision, recall, and RMSE, often without taking the use case and context of the actual system into consideration. The rapid evolution of recommender and adaptive IR systems, in both their goals and their application domains, fosters the need for new evaluation methodologies and environments. With the Workshop on Benchmarking Adaptive Retrieval and Recommender Systems, we aimed to provide a platform for discussing novel evaluation and benchmarking approaches.