Multi-valuedness destroys data contiguity for inverse-learning control

Inverse control is one of the basic paradigms of control theory. In one variation, the map from the spaces of current state and control to that of the future state is (partially) inverted. Because the inverse cannot usually be computed in closed form, a learning mechanism, such as a fuzzy approximator or a neural network, is often used to deduce the inverse from examples of the forward map. We show that this popular approach may fail when the inverse is multi-valued. Although multi-valuedness can be ignored when the inverse can be expressed in closed form, learning-based inversion may suffer from it considerably. The importance of preserving the contiguity of the training data is illustrated.
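
As a toy illustration (a minimal sketch, not an example from the paper), consider learning the inverse of the forward map y = x^2. The true inverse x = ±sqrt(y) is multi-valued, and a smooth learner trained on examples drawn from both branches averages them and returns a useless estimate near zero, whereas restricting the training data to a contiguous, single-valued branch recovers a usable inverse. The kernel regressor below stands in for any smooth approximator (fuzzy system, neural network); all names and parameters are illustrative.

```python
# Sketch: learning an "inverse" of y = x**2 from forward-map samples.
import numpy as np

def kernel_regression(y_train, x_train, y_query, bandwidth=0.05):
    """Nadaraya-Watson estimate of x given y -- a stand-in for any
    smooth learned approximator of the inverse map."""
    w = np.exp(-0.5 * ((y_query[:, None] - y_train[None, :]) / bandwidth) ** 2)
    return (w @ x_train) / w.sum(axis=1)

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 2000)   # sampled inputs (state/control)
y = x ** 2                         # forward map

y_test = np.linspace(0.1, 0.9, 5)

# Multi-valued case: both branches of the inverse are present in the data.
x_hat_all = kernel_regression(y, x, y_test)

# Contiguous case: training data restricted to the single-valued branch x >= 0.
mask = x >= 0.0
x_hat_branch = kernel_regression(y[mask], x[mask], y_test)

print("target y:           ", np.round(y_test, 3))
print("inverse, full data: ", np.round(x_hat_all, 3))     # collapses toward 0
print("inverse, one branch:", np.round(x_hat_branch, 3))  # close to sqrt(y)
```

Restricting the training set to one branch is exactly a contiguity constraint on the data: within that region the forward map is invertible, so the learner no longer averages conflicting targets.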