A Fuzzy Trust Model for Argumentation-Based Recommender Systems

With the enormous growth of the Internet and agent-based e-commerce, online trust has become an increasingly important issue. The vulnerability of multi-agent systems to malicious agents poses a great challenge: detecting and preventing undesirable behaviors. This is why techniques such as trust and reputation mechanisms have been used in the literature. In this paper, we propose a fuzzy trust model for argumentation-based open multi-agent recommender systems. In an open agent-based recommender system, the goals of agents acting on behalf of their owners often conflict with each other; therefore, a personal agent protecting the interests of a single user cannot always rely on the other agents. Consequently, such a personal agent needs to determine whether or not to trust the information or services provided by other agents. The lack of a trust computation mechanism may hinder the acceptance of agent-based technologies in sensitive applications where users need to rely on their personal agents. Against this background, we propose an extension of the basic argumentation framework in agent-based recommender systems that incorporates fuzzy trust into these models to produce trustworthy recommendations.