Multi-Agent Learning in Recommender Systems for Information Filtering on the Internet

Recommender Systems (RS) allow users to share information about items they like or dislike and to obtain, in a timely fashion, recommendations based on predictions about unseen items (physical or information goods and/or services). In this process, users' preferences are treated as the target functions to be learned. We study Agent-based Recommender Systems (ARS) from the perspective of online learning in Multi-Agent Systems (MAS). This approach models the problem as a pool of independent, cooperative predictor agents, one per user in the system. Each user is the master of its agent, and each agent (the learner) faces a sequence of trials, making a prediction at every step and eventually receiving the correct value from its master. Each learner aims to discover the degree of similarity between its master's target function and those of the other agents' masters (i.e., preference similarity), and uses this information in its own prediction task, the goal being to make as few mistakes as possible. We introduce a simple yet effective method that constructs a compound algorithm for each agent by combining memory-based individual prediction with online weighted-majority voting. We give a theoretical mistake bound for this algorithm that is closely related to the total loss of the best predictor agent in the pool. Finally, we report experiments whose results empirically support these ideas and the theory.
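The online weighted-majority voting component mentioned above can be illustrated with a minimal sketch in the style of Littlestone and Warmuth's Weighted Majority algorithm. This is not the paper's implementation; all names, the binary-label setting, and the penalty factor `beta` are illustrative assumptions.

```python
class WeightedMajorityLearner:
    """Sketch of one learner agent combining peer predictions by
    online weighted-majority voting (illustrative, not the paper's code)."""

    def __init__(self, n_experts, beta=0.5):
        # beta in (0, 1): multiplicative penalty for experts that err
        self.beta = beta
        # One weight per peer agent ("expert"); all start equal
        self.weights = [1.0] * n_experts

    def predict(self, expert_votes):
        # expert_votes: one 0/1 prediction per expert for the current trial
        pro = sum(w for w, v in zip(self.weights, expert_votes) if v == 1)
        con = sum(w for w, v in zip(self.weights, expert_votes) if v == 0)
        return 1 if pro >= con else 0

    def update(self, expert_votes, true_label):
        # After the master reveals the correct value, demote every
        # expert that predicted incorrectly on this trial
        for i, v in enumerate(expert_votes):
            if v != true_label:
                self.weights[i] *= self.beta
```

Because wrong experts are demoted multiplicatively, the learner's total number of mistakes can be bounded in terms of the mistakes of the single best expert plus a logarithmic dependence on the pool size, which is the flavor of bound the abstract refers to.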