Structured learning of fuzzy models for reduction of information dimensionality

A specially designed structured optimization procedure is used for learning the parameters of Takagi-Sugeno (TS) type fuzzy models. It is well known that the number of learning parameters grows exponentially with the number of model inputs. An optimization scheme that first structures the learning parameters into two groups, the left-hand-side (antecedent) parameters and the right-hand-side (consequent) parameters, can therefore speed up the learning process considerably. Two different optimization algorithms, one for tuning the antecedent and one for tuning the consequent parameters, are applied in a sequence of repeated loops (epochs). The stopping criterion is either a fixed number of epochs or a desired minimal error. A random walk algorithm with variable step size is used for tuning the parameters of the membership functions, while a previously proposed local learning algorithm tunes the singleton (consequent) parameters. The paper also approaches dimensionality reduction in fuzzy modeling from another viewpoint, namely through a hierarchical fuzzy model structure: the complete fuzzy model is decomposed into a feedforward structure of sub-models with two inputs and one output, called partial fuzzy models. Since the partial models are learned separately in a prespecified order, this decomposition significantly reduces both the number of parameters to be tuned and the learning time. Experiments show that both concepts for dimensionality reduction in learning fuzzy models offer benefits in learning speed and accuracy. A comparison with simultaneous optimization of all parameters of the fuzzy model is also given; it shows that the proposed structured learning and the hierarchical fuzzy model structure clearly reduce the learning time, which, within a given learning budget, yields more accurate fuzzy models for various applications in control and simulation.
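To make the exponential growth of the parameter count concrete: with a complete grid partition of m fuzzy sets per input, a zero-order TS model over n inputs has m^n rules and hence m^n singleton consequents. The following minimal sketch evaluates such a model; Gaussian membership functions and a product t-norm are illustrative assumptions, since the abstract does not fix them.

```python
import numpy as np

def gauss_mf(x, c, sigma):
    """Gaussian membership function with center c and width sigma (an assumed MF shape)."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def ts_singleton_output(x, centers, sigmas, singletons):
    """Zero-order TS model output on a full grid partition.

    x          : input vector of length n
    centers    : (n, m) array of MF centers, m fuzzy sets per input
    sigmas     : (n, m) array of MF widths
    singletons : array with shape (m,)*n, one consequent per rule
    """
    n, m = centers.shape
    # Membership degrees of every input in every fuzzy set: shape (n, m).
    mu = gauss_mf(x[:, None], centers, sigmas)
    # Rule firing strengths via the product t-norm over all n inputs;
    # the full grid yields m**n rules, hence exponential growth with n.
    w = mu[0]
    for i in range(1, n):
        w = np.multiply.outer(w, mu[i])
    # Weighted average defuzzification over the singleton consequents.
    return np.sum(w * singletons) / np.sum(w)

# Example: 3 inputs, 5 fuzzy sets each -> 5**3 = 125 rules and singletons.
rng = np.random.default_rng(0)
n, m = 3, 5
centers = np.tile(np.linspace(0.0, 1.0, m), (n, 1))
sigmas = np.full((n, m), 0.15)
singletons = rng.normal(size=(m,) * n)
print(ts_singleton_output(np.array([0.2, 0.5, 0.8]), centers, sigmas, singletons))
```

For n = 3 and m = 5 there are already 125 singletons; every additional input multiplies that number by 5, which is the motivation for the structured and hierarchical schemes.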
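The structured two-stage learning loop itself can be sketched as follows. The consequent stage is shown as a linear least-squares fit of the singletons, a hypothetical stand-in for the previously proposed local learning algorithm, and the antecedent stage as a random walk whose step grows after an accepted move and shrinks after a rejected one, which is one plausible reading of "variable step size". Both stop criteria named in the abstract, an epoch budget and a target error, appear in the loop.

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss_mf(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def firing_strengths(X, centers, sigmas):
    """Normalized firing strengths of all m**n grid rules for each sample."""
    N, n = X.shape
    W = np.ones((N, 1))
    for i in range(n):
        mu = gauss_mf(X[:, i:i + 1], centers[i], sigmas[i])    # (N, m)
        W = (W[:, :, None] * mu[:, None, :]).reshape(N, -1)    # product t-norm
    return W / W.sum(axis=1, keepdims=True)

def fit_singletons(X, y, centers, sigmas):
    """Consequent stage: solve the singletons by least squares
    (an assumed stand-in for the paper's local learning algorithm)."""
    W = firing_strengths(X, centers, sigmas)
    return np.linalg.lstsq(W, y, rcond=None)[0]

def model_error(X, y, centers, sigmas, singletons):
    pred = firing_strengths(X, centers, sigmas) @ singletons
    return np.sqrt(np.mean((pred - y) ** 2))

def structured_learning(X, y, centers, sigmas, epochs=50, step=0.1, target=1e-3):
    singletons = fit_singletons(X, y, centers, sigmas)
    best = model_error(X, y, centers, sigmas, singletons)
    for _ in range(epochs):                       # stop criterion 1: epoch budget
        # Antecedent stage: random walk with variable step size.
        c_try = centers + rng.uniform(-step, step, centers.shape)
        s_try = np.abs(sigmas + rng.uniform(-step, step, sigmas.shape)) + 1e-6
        sing_try = fit_singletons(X, y, c_try, s_try)   # consequent stage
        err = model_error(X, y, c_try, s_try, sing_try)
        if err < best:                            # accept the move, enlarge the step
            centers, sigmas, singletons, best = c_try, s_try, sing_try, err
            step *= 1.2
        else:                                     # reject the move, shrink the step
            step *= 0.5
        if best < target:                         # stop criterion 2: target error
            break
    return centers, sigmas, singletons, best

# Demo: learn y = sin(x1) * x2 with a 2-input grid of 3 fuzzy sets per input.
X = rng.uniform(0, 1, (200, 2))
y = np.sin(X[:, 0]) * X[:, 1]
centers = np.tile(np.linspace(0, 1, 3), (2, 1))
sigmas = np.full((2, 3), 0.25)
*_, err = structured_learning(X, y, centers, sigmas)
print(f"final RMSE: {err:.4f}")
```

The key design point is that each epoch touches only one parameter group at a time, so the random walk searches a much smaller space than a simultaneous optimization of all parameters would.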
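The hierarchical structure can likewise be sketched as a feedforward chain of two-input, one-output partial models learned in a fixed order. Fitting every partial model against the final target is one simple choice of intermediate training targets and is our assumption, since the abstract only prescribes the decomposition and the learning order; the parameter count printed at the end shows the reduction from m^n grid singletons to (n-1)*m^2.

```python
import numpy as np

rng = np.random.default_rng(2)

def gauss_mf(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

class PartialModel:
    """Two-input, one-output zero-order TS sub-model on an m x m rule grid."""
    def __init__(self, m=3, lo=0.0, hi=1.0):
        self.c = np.linspace(lo, hi, m)        # shared MF centers per input
        self.s = np.full(m, (hi - lo) / m)     # fixed MF widths (illustrative)
        self.singletons = np.zeros(m * m)

    def strengths(self, a, b):
        mu_a = gauss_mf(a[:, None], self.c, self.s)
        mu_b = gauss_mf(b[:, None], self.c, self.s)
        W = (mu_a[:, :, None] * mu_b[:, None, :]).reshape(len(a), -1)
        return W / W.sum(axis=1, keepdims=True)

    def fit(self, a, b, target):
        """Least-squares fit of the m*m singletons (consequent stage only,
        standing in for the full two-stage learning of each sub-model)."""
        W = self.strengths(a, b)
        self.singletons = np.linalg.lstsq(W, target, rcond=None)[0]

    def predict(self, a, b):
        return self.strengths(a, b) @ self.singletons

def fit_chain(X, y, m=3):
    """Feedforward chain f1(x1,x2) -> f2(., x3) -> ..., learned in order.
    Each stage is fitted against the final target y, an assumed choice of
    intermediate targets; the abstract only requires a prespecified order."""
    n = X.shape[1]
    chain, z = [], X[:, 0]
    for i in range(1, n):
        pm = PartialModel(m)
        pm.fit(z, X[:, i], y)
        z = pm.predict(z, X[:, i])
        chain.append(pm)
    return chain, z

n, m = 5, 3
X = rng.uniform(0, 1, (300, n))
y = X.prod(axis=1)
chain, pred = fit_chain(X, y, m)
print("full grid singletons:", m ** n)               # 3**5 = 243
print("hierarchical singletons:", (n - 1) * m * m)   # 4 * 9 = 36
print("chain RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```

Because each partial model is trained separately on a fixed 2-input rule grid, the total parameter count grows linearly rather than exponentially in the number of inputs, which is the source of the reduced learning time reported in the abstract.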