Analysis of Cartesian Granule Feature Models

Additive Cartesian granule feature (ACGF) models and a corresponding constructive induction algorithm, G_DACG, were introduced in the previous chapters. G_DACG automatically determines the language (Cartesian granule features and linguistic partitions) and parameters of a Cartesian granule feature model. Here, for the purposes of illustration and analysis, this approach is applied to artificial problems in both the classification and prediction domains. Although the G_DACG algorithm can learn models automatically from example data, here the language of the models is determined manually, while the model parameters are identified automatically. This permits a close analysis of the effect that decisions taken primarily in the language identification phase of learning have on the resulting Cartesian granule feature models.

The analysis systematically samples the space of possible models and measures the accuracy of each resulting model on a test dataset. The sampling varies the following: the linguistic partitions of the input variable universes; the feature dimensionality of the Cartesian granule features; the type of rule used to aggregate the Cartesian granule features; and, in the case of prediction problems, the linguistic partition of the output variable's universe. This analysis provides insights into how to model a problem using Cartesian granule features. Furthermore, this chapter provides a useful platform for understanding many other learning algorithms, whether or not they explicitly manipulate fuzzy events or probabilities.
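To make the model-space sweep concrete, the following Python sketch shows one way such a systematic sampling could be organised. It is illustrative only: the helper functions induce_acgf_model and evaluate_accuracy, together with the candidate granularities, dimensionalities and rule-type labels, are assumptions standing in for the actual ACGF induction and evaluation procedures described in the earlier chapters.

```python
from itertools import product
import random

# Hypothetical stand-ins for ACGF model induction and evaluation;
# they are not the book's implementation, only placeholders that keep
# the sketch self-contained and runnable.
def induce_acgf_model(train_data, granularity, dimensionality, rule_type):
    return {"granularity": granularity,
            "dimensionality": dimensionality,
            "rule_type": rule_type}

def evaluate_accuracy(model, test_data):
    # The real analysis would run the induced model over the test dataset;
    # a random score is used here purely as a placeholder.
    return random.random()

train_data, test_data = [], []              # example datasets would go here

granularities = [3, 5, 7]                   # sizes of the input linguistic partitions (illustrative)
dimensionalities = [1, 2, 3]                # base features per Cartesian granule feature (illustrative)
rule_types = ["conjunctive", "additive"]    # illustrative aggregation-rule labels

# Systematically sample the model space and record test accuracy
# for each combination of language-level decisions.
results = {}
for g, d, rule in product(granularities, dimensionalities, rule_types):
    model = induce_acgf_model(train_data, g, d, rule)
    results[(g, d, rule)] = evaluate_accuracy(model, test_data)

# Rank the sampled configurations by measured accuracy.
for config, acc in sorted(results.items(), key=lambda kv: -kv[1]):
    print(config, round(acc, 3))
```

The design point of the sketch is simply that each language-level decision (partition granularity, feature dimensionality, rule type) is treated as an axis of a grid, so the effect of each decision on test accuracy can be examined in isolation or in combination.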