Distributed agent-based air traffic flow management

Air traffic flow management is one of the fundamental challenges facing the Federal Aviation Administration (FAA) today. The FAA estimates that in 2005 alone, there were over 322,000 hours of delays at a cost to the industry in excess of three billion dollars. Finding reliable and adaptive solutions to the flow management problem is of paramount importance if the Next Generation Air Transportation System is to achieve the stated goal of accommodating three times the current traffic volume. This problem is particularly complex, as it requires the integration and/or coordination of many factors, including: new data (e.g., changing weather information), potentially conflicting priorities (e.g., different airlines), limited resources (e.g., air traffic controllers) and very heavy traffic volume (e.g., over 40,000 flights in US airspace). In this paper we use FACET -- an air traffic flow simulator developed at NASA and used extensively by the FAA and industry -- to test a multi-agent algorithm for traffic flow management. An agent is associated with a fix (a specific location in 2D space), and its action consists of setting the separation required among the airplanes going through that fix. Agents use reinforcement learning to set this separation, and their actions speed up or slow down traffic to manage congestion. Our FACET-based results show that agents receiving personalized rewards reduce congestion by up to 45% over agents receiving a global reward and by up to 67% over a current industry approach (Monte Carlo estimation).
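The reward structure described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the toy congestion model (`throughput`, `CAPACITY`, the delay penalty), the candidate miles-in-trail separations, and the stateless value-table learners all stand in for FACET's traffic dynamics and the paper's actual learning setup. The sketch only shows the shape of the idea: each agent at a fix picks a separation, a global reward penalizes congestion, and a personalized (difference) reward credits each agent with its marginal contribution to that global reward.

```python
import random

# Illustrative sketch only -- the congestion model, capacities, and reward
# shapes below are assumptions, not FACET's or the paper's actual values.

ACTIONS = [0, 10, 20, 30]   # candidate miles-in-trail (MIT) separations
N_AGENTS = 5                # one agent per fix
CAPACITY = 8                # aircraft a downstream sector absorbs per step
EPS, ALPHA = 0.1, 0.1       # exploration rate, learning rate


def throughput(mit):
    """Aircraft passing a fix per step; larger separation slows the flow."""
    return 12 - mit // 5    # toy model: 12, 10, 8, 6 aircraft per step


def global_reward(actions):
    """Negative congestion: penalize traffic beyond downstream capacity,
    plus a small cost for holding aircraft back."""
    total = sum(throughput(a) for a in actions)
    overflow = max(0, total - CAPACITY * N_AGENTS)
    delay = sum(actions)
    return -(overflow ** 2) - 0.1 * delay


def difference_reward(actions, i):
    """Personalized reward D_i = G(z) - G(z with agent i's action replaced
    by a fixed default): agent i's marginal contribution to congestion."""
    counterfactual = list(actions)
    counterfactual[i] = ACTIONS[0]
    return global_reward(actions) - global_reward(counterfactual)


def train(reward_fn, episodes=2000, seed=0):
    """Stateless epsilon-greedy value-table learners, one per fix."""
    rng = random.Random(seed)
    values = [{a: 0.0 for a in ACTIONS} for _ in range(N_AGENTS)]
    for _ in range(episodes):
        actions = [
            rng.choice(ACTIONS) if rng.random() < EPS else max(v, key=v.get)
            for v in values
        ]
        for i, v in enumerate(values):
            r = reward_fn(actions, i)
            v[actions[i]] += ALPHA * (r - v[actions[i]])
    greedy = [max(v, key=v.get) for v in values]
    return greedy, global_reward(greedy)
```

To compare the two reward structures, train once with `train(difference_reward)` and once with `train(lambda a, i: global_reward(a))`; the difference reward gives each agent a much cleaner learning signal because it factors out the noise introduced by the other agents' actions, which is the intuition behind the personalized rewards in the abstract.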
