Training Neurofuzzy Systems
Abstract A neurofuzzy system combines the positive attributes of a neural network and a fuzzy system by providing a transparent framework for representing linguistic rules with well defined modelling and learning characteristics. Unfortunately, their application is limited to problems involving a small number of input variables by the curse of dimensionality, where the size of the rule base and of the training set grow exponentially with the input dimension. The curse can be alleviated by a number of approaches, but one which has recently received much attention is the exploitation of redundancy. Many functions can be adequately approximated by an additive model whose output is a sum over several lower-dimensional submodels. This technique is called global partitioning, and the aim of an algorithm designed to construct the approximation is to automatically determine the number of submodels and the subset of input variables for each submodel. The construction algorithm is an iterative process where each iteration must identify a set of candidate refinements and evaluate the associated candidate models. This leads naturally to the problem of how to train the candidate models, and the approach taken depends on whether they contain one or multiple submodels.
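To make the construction process concrete, the following is a minimal sketch of a greedy additive-model construction loop of the kind described above: the model output is a sum of low-dimensional submodels, each iteration proposes candidate refinements (here, simply adding a one-dimensional submodel on each input), every candidate model is trained by a joint linear least-squares fit, and the best-scoring candidate is kept. The `Submodel` class, its quadratic placeholder basis standing in for fuzzy membership functions, and the plain mean-squared-error score are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

class Submodel:
    """A low-dimensional submodel acting on a subset of the input variables."""
    def __init__(self, input_subset):
        self.input_subset = list(input_subset)  # indices of the inputs this submodel uses
        self.weights = None

    def design_matrix(self, X):
        # Placeholder basis: real fuzzy membership / B-spline evaluation would go here.
        Xs = X[:, self.input_subset]
        return np.hstack([np.ones((len(Xs), 1)), Xs, Xs ** 2])

def train_additive_model(submodels, X, y):
    """Train all submodels jointly with one linear least-squares solve,
    since the model output is a sum of terms that are linear in their weights."""
    blocks = [m.design_matrix(X) for m in submodels]
    A = np.hstack(blocks)
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    # Split the joint solution back into per-submodel weight vectors.
    offset = 0
    for m, B in zip(submodels, blocks):
        m.weights = w[offset:offset + B.shape[1]]
        offset += B.shape[1]
    return np.mean((y - A @ w) ** 2)            # MSE as the (illustrative) model score

def construct(X, y, max_iters=10):
    """Iterative construction: propose candidate refinements, evaluate each
    candidate model, keep the best, stop when no candidate improves the score.
    A real construction algorithm would penalise model size (e.g. with an
    information criterion) rather than use raw MSE."""
    n_inputs = X.shape[1]
    model = []
    best_score = np.mean((y - np.mean(y)) ** 2)  # score of the empty (mean) model
    for _ in range(max_iters):
        best_cand, best_cand_score = None, best_score
        for i in range(n_inputs):
            cand = model + [Submodel([i])]       # candidate refinement: new 1-D submodel
            score = train_additive_model(cand, X, y)
            if score < best_cand_score:
                best_cand, best_cand_score = cand, score
        if best_cand is None:
            break                                # no refinement improves the model
        model, best_score = best_cand, best_cand_score
        train_additive_model(model, X, y)        # refit the accepted model's weights
    return model
```

Because all submodel outputs enter the model additively and linearly in their weights, every candidate model in the sketch can be trained in a single least-squares step; the distinction the abstract draws between single- and multi-submodel candidates concerns how this joint fit (or a cheaper partial refit) is organised.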