Ant colony optimization techniques are usually guided by pheromone and heuristic cost information when choosing the next element to add to a solution. However, while an individual element may be attractive, its long-term consequences are usually neither known nor considered. For instance, a short link may be incorporated into an ant's traveling salesman solution, yet, as a consequence of this link, the rest of the path may be longer than if another link had been chosen. The Accumulated Experience Ant Colony (AEAC) uses the previous experiences of the colony, in addition to the normal pheromone and heuristic costs, to guide the choice of elements. Two versions of the algorithm are presented: the original AEAC and an improved variant that makes greater use of accumulated experience. The results indicate that the original algorithm finds improved solutions on problems with fewer than 100 cities, while the improved algorithm finds better solutions on larger problems.
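The selection rule described above can be sketched as follows. This is a minimal illustration of the standard ACO roulette-wheel transition rule (weight proportional to pheromone^alpha times heuristic^beta), with an optional multiplicative `experience` matrix standing in for the accumulated-experience term; the function name, parameters, and the exact form of the experience weighting are assumptions for illustration, not the paper's actual formulation.

```python
import random

def choose_next(current, unvisited, tau, dist,
                alpha=1.0, beta=2.0,
                experience=None, gamma=1.0,
                rng=None):
    """Pick an ant's next city by roulette-wheel selection.

    tau[i][j]  : pheromone on edge (i, j)
    dist[i][j] : edge length; the heuristic is its inverse, 1/dist
    experience : optional matrix of colony-experience weights
                 (hypothetical stand-in for the AEAC term)
    """
    rng = rng or random
    weights = []
    for j in unvisited:
        w = (tau[current][j] ** alpha) * ((1.0 / dist[current][j]) ** beta)
        if experience is not None:
            # Hypothetical: scale attractiveness by past colony experience.
            w *= experience[current][j] ** gamma
        weights.append(w)
    # Roulette-wheel (fitness-proportionate) selection over the weights.
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for j, w in zip(unvisited, weights):
        acc += w
        if acc >= r:
            return j
    return unvisited[-1]  # guard against floating-point round-off
```

A city with zero pheromone on its incoming edge receives zero weight and is never chosen, which is why practical ACO implementations keep a small minimum pheromone level on every edge.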