FINDING UNDERLYING FACTORS IN TIME SERIES

We compare four neural methods for pre-processing time series data - Principal Component Analysis (PCA) (Karhunen and Joutsensalo 1994), a neural implementation of Factor Analysis (FA), Independent Component Analysis (ICA) (Hyvärinen and Oja 1997) and Complexity Pursuit (CP) (Hyvärinen 2001) - with a view to subsequently forecasting the data set with a multi-layer perceptron (MLP). Our rationale is that the underlying factors will be easier to forecast than the original time series, which is a combination of these factors. The projections of the data onto the filters found by each pre-processing method are fed into the MLP, which is trained to minimise the least mean square error (LMSE). We show that forecasting the projections onto the underlying factors reduces the need to guard against overtraining the MLP. The last method, CP, achieves by far the best performance in terms of LMSE, while Factor Analysis (FA) and, in particular, Independent Component Analysis (ICA) perform worst. Minor modifications to the CP method are shown to further improve its LMSE performance.
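The pipeline described above can be sketched in a minimal form. The following is not the paper's implementation: the synthetic two-factor data, the choice of PCA as the pre-processing stage, the window length, network size and learning rate are all illustrative assumptions. It shows the common structure shared by the four methods: find linear filters, project the observed series onto them, and train an MLP on the projections by gradient descent on the mean square error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical underlying factors (illustrative, not the paper's data):
# two sinusoids of different periods.
t = np.arange(1200)
factors = np.stack([np.sin(2 * np.pi * t / 50),
                    np.sin(2 * np.pi * t / 23)])            # (2, T)

# The observed time series are linear mixtures of the factors plus noise.
mixing = rng.normal(size=(4, 2))
X = mixing @ factors + 0.05 * rng.normal(size=(4, t.size))  # (4, T)

# Pre-processing stage (PCA here): project the centred data onto the
# leading eigenvectors of the sample covariance.
Xc = X - X.mean(axis=1, keepdims=True)
eigvals, eigvecs = np.linalg.eigh(Xc @ Xc.T / Xc.shape[1])
W = eigvecs[:, ::-1][:, :2].T          # top-2 principal filters, (2, 4)
proj = W @ Xc                          # projections fed to the forecaster

# One-step-ahead forecaster for the first projection: a one-hidden-layer
# MLP trained by full-batch gradient descent on the mean square error.
lag = 10
s = proj[0]
s = (s - s.mean()) / s.std()
Xwin = np.stack([s[i:i + lag] for i in range(s.size - lag)])
y = s[lag:]

hidden = 8
W1 = rng.normal(scale=0.3, size=(lag, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.3, size=hidden);        b2 = 0.0

mse0 = np.mean((np.tanh(Xwin @ W1 + b1) @ W2 + b2 - y) ** 2)  # before training

lr = 0.01
for _ in range(2000):
    h = np.tanh(Xwin @ W1 + b1)
    err = h @ W2 + b2 - y
    gh = np.outer(err, W2) * (1 - h ** 2)      # backprop through tanh
    W2 -= lr * h.T @ err / y.size
    b2 -= lr * err.mean()
    W1 -= lr * Xwin.T @ gh / y.size
    b1 -= lr * gh.mean(axis=0)

mse = np.mean((np.tanh(Xwin @ W1 + b1) @ W2 + b2 - y) ** 2)
print(f"MSE on the projected factor: before {mse0:.4f}, after {mse:.4f}")
```

Swapping the PCA step for FA, ICA or CP changes only how the filter matrix `W` is computed; the projection and forecasting stages are unchanged.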