When users rate objects, a sophisticated algorithm that takes into account ability or reputation may produce a fairer or more accurate aggregation of ratings than the straightforward arithmetic average. Recently a number of authors have proposed different co-determination algorithms where estimates of user and object reputation are refined iteratively together, permitting accurate measures of both to be derived directly from the rating data. However, simulations demonstrating these methods' efficacy assumed a continuum of rating values, consistent with typical physical modelling practice, whereas in most actual rating systems only a limited range of discrete values (such as a 5-star system) is employed. We perform a comparative test of several co-determination algorithms with different scales of discrete ratings and show that this seemingly minor modification in fact has a significant impact on algorithms' performance. Paradoxically, where rating resolution is low, increased noise in users' ratings may even improve the overall performance of the system.
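The co-determination algorithms discussed above share a common fixed-point structure: object quality is estimated as a reputation-weighted mean of ratings, and user reputation is in turn derived from how far each user's ratings deviate from those estimates, with the two updates iterated until convergence. The following is a minimal sketch of one such scheme, in the spirit of the iterative filtering of Van Dooren et al.; the specific inverse-mean-squared-error reputation weighting is an illustrative assumption, not the exact rule of any one algorithm compared in the paper.

```python
def iterative_filtering(ratings, n_iter=50, eps=1e-8):
    """Co-determine object qualities and user reputations.

    ratings: dict mapping user -> {object: rating value}.
    Returns (quality, weight): estimated object qualities and
    final user reputation weights.
    """
    users = list(ratings)
    objects = sorted({o for r in ratings.values() for o in r})
    weight = {u: 1.0 for u in users}  # start with uniform reputation

    for _ in range(n_iter):
        # Step 1: object quality = reputation-weighted mean of its ratings.
        quality = {}
        for o in objects:
            raters = [u for u in users if o in ratings[u]]
            num = sum(weight[u] * ratings[u][o] for u in raters)
            den = sum(weight[u] for u in raters)
            quality[o] = num / den
        # Step 2: user reputation = inverse mean squared deviation
        # from the current quality estimates (illustrative choice;
        # eps guards against division by zero for perfect raters).
        weight = {
            u: 1.0 / (eps + sum((ratings[u][o] - quality[o]) ** 2
                                for o in ratings[u]) / len(ratings[u]))
            for u in users
        }
    return quality, weight
```

With two accurate raters and one noisy one, the noisy rater's weight shrinks at each iteration, so the quality estimates converge toward the accurate users' values rather than the plain arithmetic average. Restricting the rating values passed in to a discrete scale (e.g. integers 1-5) is exactly the modification whose effect the abstract describes.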