Distributed Robust Execution of Qualitative State Plan with Chance Constraints
Physically grounded AI systems consisting of multiple mobile agents must carry out complicated tasks cooperatively in dynamic, uncertain environments. Two important capabilities for such systems are robust kinodynamic path planning and distributed plan execution on a hybrid discrete/continuous plant. For example, a fleet of autonomous underwater vehicles (AUVs), shown in Figure 1, that conducts scientific observations cooperatively for up to 20 hours without human supervision should ideally navigate itself to areas of scientific interest according to a game plan provided by scientists.

Our plan formalism is the Qualitative State Plan (QSP) (Leaute 2005), which specifies the desired evolution of the qualitative state of the system as well as flexible temporal constraints. This approach elevates the interaction between the human operator and the robotic system to a more abstract level, where the operator can command tasks qualitatively. A centralized model-based QSP executive called Sulu (Leaute 2005) generates a path and schedule that are optimal and consistent with a given QSP in a deterministic environment.

Real-world systems, however, are exposed to stochastic disturbances. Stochastic systems typically carry a risk of failure due to unexpected events, such as unpredictable tides and currents that affect an AUV's motion. AUV operators want to limit the risk of losing an AUV through collision with the seafloor. Kinodynamic path planning must therefore be robust in the presence of disturbances.

The plan executive should ideally be distributed, for several reasons. First, in many settings, such as underwater, inter-vehicle communication is limited. Second, a leader vehicle that hosts a centralized plan executive is a single point of failure. Third, the computational burden is concentrated in the leader vehicle. A distributed plan executive makes the system more robust and efficient.

My research objective is to develop a distributed, model-based QSP executive that is robust in a stochastic environment.
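To illustrate the chance-constraint idea behind limiting collision risk, the following is a minimal Python sketch, not the executive developed in this work, showing how a joint bound on the probability of seafloor collision can be converted into deterministic, tightened altitude constraints under an assumed Gaussian disturbance model with a uniform risk allocation across the planning horizon (in the spirit of Ono et al. 2008). All function names, variable names, and numbers are illustrative assumptions.

# A minimal sketch, not the executive described above: it converts a joint
# chance constraint on seafloor collision into deterministic, tightened
# constraints, assuming Gaussian disturbances and a uniform risk allocation
# justified by Boole's inequality. All names and numbers are illustrative.
import numpy as np
from scipy.stats import norm

def tightened_altitude_bounds(seafloor_z, sigmas, total_risk):
    """For each step t, return the minimum mean vertical position z_t such
    that Pr(z_t < seafloor_z) <= total_risk / T; by Boole's inequality the
    joint collision probability over the horizon then stays below total_risk."""
    T = len(sigmas)
    delta_t = total_risk / T             # uniform risk allocation per step
    margin = norm.ppf(1.0 - delta_t)     # one-sided Gaussian quantile
    return [seafloor_z + margin * s for s in sigmas]

# Example: 10-step horizon, disturbance standard deviation growing with the
# prediction step, and at most a 1% chance of collision over the whole plan.
sigmas = [0.2 * np.sqrt(t + 1) for t in range(10)]
bounds = tightened_altitude_bounds(seafloor_z=-50.0, sigmas=sigmas, total_risk=0.01)
print(np.round(bounds, 2))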
[1] Masahiro Ono, et al. An Efficient Motion Planning Algorithm for Stochastic Dynamic Systems with Constraints on Probability of Failure. AAAI, 2008.
[2] L. Blackmore. A Probabilistic Particle Control Approach to Optimal, Robust Predictive Control. 2006.
[3] Brian C. Williams, et al. Coordinating Agile Systems through the Model-based Execution of Temporal Plans. AAAI, 2005.
[4] Stochastic Inequality Constrained Closed-loop Model Predictive Control: With Application To Chemical Process Operation. 2004.