In a multi-agent system with a large number of agents sharing knowledge with one another, the exchange of experiences becomes a complex activity that is difficult to manage. This thesis proposes a method in which every agent connects only to a central server, reducing the complexity of the experience-exchange process. The server collects the learning knowledge uploaded by all agents, merges it, and shares it with the agents that lack similar experiences. Each agent uses the proposed Pheromone Mechanism of the Ant Colony Algorithm to evaluate whether an experience is worth uploading to the server. To handle the resulting volume of data, the thesis employs the open-source framework Apache Hadoop together with the MapReduce programming model. Agents integrate the shared experiences with their own knowledge, achieving knowledge sharing and a significant gain in learning efficiency. The proposed approach was implemented on a self-built server and personal computers.
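A minimal sketch of the pheromone-based upload filter described above, assuming each stored experience carries a pheromone value that evaporates over time and is reinforced in proportion to observed reward; the names `Experience`, the evaporation/deposit constants, and `UPLOAD_THRESHOLD` are illustrative assumptions rather than the thesis's actual implementation.

```python
# Hypothetical sketch of a pheromone-based upload filter, assuming the
# evaporation/deposit rule and the threshold below; not the thesis's code.
from dataclasses import dataclass

EVAPORATION_RATE = 0.1   # assumed pheromone decay per update
REWARD_DEPOSIT = 1.0     # assumed deposit scale per unit of reward
UPLOAD_THRESHOLD = 2.0   # assumed minimum pheromone level for uploading


@dataclass
class Experience:
    state: tuple
    action: int
    reward: float
    next_state: tuple
    pheromone: float = 0.0  # accumulated evidence that this experience is useful


class Agent:
    def __init__(self):
        self.experiences: dict[tuple, Experience] = {}

    def record(self, state, action, reward, next_state):
        """Store or reinforce an experience, ant-colony style."""
        key = (state, action)
        exp = self.experiences.get(key)
        if exp is None:
            exp = Experience(state, action, reward, next_state)
            self.experiences[key] = exp
        # Evaporate old pheromone, then deposit new pheromone in proportion
        # to the observed reward (assumed update rule).
        exp.pheromone = (1 - EVAPORATION_RATE) * exp.pheromone + REWARD_DEPOSIT * reward

    def experiences_worth_uploading(self):
        """Experiences whose pheromone level justifies sending them to the server."""
        return [e for e in self.experiences.values()
                if e.pheromone >= UPLOAD_THRESHOLD]
```

Under this sketch, only experiences that are repeatedly reinforced survive evaporation and cross the threshold, so the server receives a filtered stream rather than every raw experience; the server-side merge over many such streams is where the MapReduce model mentioned above would apply.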