Guidelines for Financial Forecasting with Neural Networks

Neural networks are good at classification, forecasting and recognition, which also makes them good candidates for financial forecasting tools. Forecasting is often used in the decision making process. Neural network training is an art, and trading based on neural network outputs, or the trading strategy, is also an art. In this article we discuss a seven-step neural network forecasting model building approach. Pre- and post-processing and analysis of data, data sampling, training criteria and model recommendation are also covered.

1. Forecasting with Neural Networks

Forecasting is a process that produces a set of outputs from a given set of variables, normally historical data. Basically, forecasting assumes that future occurrences are based, at least in part, on presently observable or past events. It assumes that some aspects of past patterns will continue into the future, so that past relationships can be discovered through study and observation. The basic idea of forecasting is to find an approximation of the mapping between the input and output data in order to discover the implicit rules governing the observed movements. For instance, the forecasting of stock prices can be described in this way. Assume that u_i represents today's price and v_i represents the price after ten days. If the price after ten days can be predicted from today's price, then there is a functional mapping from u_i to v_i, namely v_i = Γ_i(u_i). Using all historical pairs (u_i, v_i), a general function Γ(·) consisting of the individual Γ_i(·) can be obtained, that is, v = Γ(u). More generally, u may contain more information than today's price alone. As NNs are universal approximators, we can find a NN that simulates this function Γ(·). The trained network is then used to predict future movements.

NN based financial forecasting has been explored for about a decade. Many research papers have been published in international journals and conference proceedings, and some companies and institutions claim or market so-called advanced forecasting tools or models. Examples of research results on financial forecasting can be found in the references, for instance a stock trading system [4], stock forecasting [6, 22], foreign exchange rate forecasting [15, 24], option prices [25], and advertising and sales volumes [13]. However, Callen et al. [3] claim that NN models are not necessarily superior to linear time series models even when the data are financial, seasonal and nonlinear.

2. Towards a More Robust Financial Forecasting Model

In working towards a more robust financial forecasting model, the following issues are worth examining. First, instead of emphasizing forecasting accuracy only, other financial criteria should be considered. Researchers currently tend to use goodness of fit or similar criteria to judge or train their models in the financial domain. In terms of mathematical calculation this approach is correct in theory; however, a perfect forecast is impossible in reality, and no model can achieve such an ideal goal. Under this constraint, seeking a perfect forecast is not our aim. We can only try to optimize our imperfect forecasts and use other yardsticks to obtain the most realistic measure. Second, there should be adequate organization and processing of forecasting data. Preprocessing and proper sampling of input data can have an impact on the forecasting performance; a minimal illustration is sketched below.
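As a purely illustrative sketch (not the system proposed in this article), the following Python code forms (u_i, v_i) pairs with a ten-day horizon as in Section 1, scales the inputs using the training portion only, and fits a small feedforward network with scikit-learn. The synthetic price series, the five-day input window, the network size and the 80/20 split are assumptions made for this sketch only.

# Hypothetical illustration of the mapping v = Γ(u): predict the price
# ten days ahead from a window of recent prices with a small feedforward NN.
# The synthetic series and all parameter choices are assumptions for this sketch.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 600))    # stand-in for a real price series

horizon, window = 10, 5                            # forecast 10 days ahead from 5 past prices
X = np.array([prices[t - window:t] for t in range(window, len(prices) - horizon)])
y = prices[window + horizon:]                      # v_i paired with each input window u_i

split = int(0.8 * len(X))                          # simple in-sample / out-of-sample split
scaler = StandardScaler().fit(X[:split])           # scale using training data only
X_train, X_test = scaler.transform(X[:split]), scaler.transform(X[split:])

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X_train, y[:split])                        # approximate Γ from (u_i, v_i) pairs
print("out-of-sample RMSE:", np.sqrt(np.mean((net.predict(X_test) - y[split:]) ** 2)))

In practice the inputs would come from real price and indicator series, which is exactly where the sensitivity analysis discussed next comes in.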
Choice of indicators as inputs through sensitivity analysis could also help to eliminate redundant inputs. Furthermore, NN forecasting results should be used wisely and effectively. For example, since the forecast is not perfect, should we compare the NN output with the previous forecast or with the real data, especially when price levels are used as the forecasting targets?

Third, a trading system should be used to decide on the best tool to use. A NN is not the only tool that can be used for financial forecasting, and we cannot claim that it is the best forecasting tool. In fact, it is still not well understood which kinds of time series are most suitable for NN applications. Conducting post-forecasting analysis allows us to find out the suitability of models and series, so that we may conclude that a certain kind of model should be used for a certain kind of time series. Training or building NN models is a trial-and-error procedure, and some researchers are not willing to run further tests on their data sets [14]. A system that helps to formalize these tedious exploratory procedures would therefore be of great value to financial forecasting. Instead of just presenting one successful experiment, a possibility or confidence level can be attached to the outputs. Data are partitioned into several sets to find out the particular knowledge of a time series. As stated by David Wolpert and William Macready in their No-Free-Lunch theorems [28], averaged over all problems, all search algorithms perform equally well. Experimenting on a single data set, one can find a NN model which outperforms other models; however, according to the No-Free-Lunch theorems, for another data set one can also find a model which outperforms the NN model. To avoid basing a recommendation on one such case, we partition the data set into several sub data sets. The recommended NN models are those that outperform other models for all sub time horizons, as sketched at the end of this section. In other words, only models that incorporate enough local knowledge can be used for future forecasting.

It is important to emphasize these three issues here. Different criteria exist for academia and industry. In academia, people sometimes seek accuracy approaching 100%, while in industry a guaranteed 60% accuracy is typically aimed for. In addition, profit is the eventual goal of practitioners, so a profit-oriented forecasting model may better fit their needs.

Cohen [5] surveyed 150 papers in the proceedings of the 8th National Conference on Artificial Intelligence. He discovered that only 42% of the papers reported that a program had run on more than one example; just 30% demonstrated performance in some way; a mere 21% framed hypotheses or made predictions. He concluded that the methodologies used were incomplete with respect to the goals of designing and analyzing AI systems. In a very large study of over 400 research articles in computer science, Tichy [20] showed that over 40% of the articles about new designs and models completely lacked experimental data. In a recent IEEE Computer article, he also points out 16 excuses computer scientists use to avoid experimentation [21]. His observations are true and not a joke. Prechelt [14] showed that the situation is no better in the NN literature: of 190 papers published in well-known journals dedicated to NNs, 29% did not employ even a single realistic or real learning problem, and only 8% presented results for more than one problem using real-world data.
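To make the data-partitioning idea discussed earlier in this section concrete, here is a minimal sketch, under assumed data and settings, of recommending a NN model only if it outperforms a naive persistence benchmark on every sub-period. The synthetic series, the four sub-periods, the one-step horizon and the network settings are illustrative assumptions, not part of the original study.

# Hypothetical sketch of the partitioning idea: a model is recommended only if it
# beats a naive benchmark on every sub-period, not just on one lucky sample.
# Data, horizons and model settings below are assumptions for illustration only.
import numpy as np
from sklearn.neural_network import MLPRegressor

def naive_rmse(test):
    # benchmark: tomorrow's value equals today's value
    return np.sqrt(np.mean((test[1:] - test[:-1]) ** 2))

def nn_rmse(train, test, window=5):
    # one-step-ahead forecast from a window of past values
    X = np.array([train[t - window:t] for t in range(window, len(train))])
    y = train[window:]
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
    Xt = np.array([test[t - window:t] for t in range(window, len(test))])
    return np.sqrt(np.mean((net.predict(Xt) - test[window:]) ** 2))

rng = np.random.default_rng(1)
series = 100 + np.cumsum(rng.normal(0, 1, 1200))   # stand-in for a real series
sub_periods = np.array_split(series, 4)            # several sub time horizons

wins = []
for sub in sub_periods:
    cut = int(0.8 * len(sub))
    train, test = sub[:cut], sub[cut:]
    wins.append(nn_rmse(train, test) < naive_rmse(test))

print("recommend the NN model:", all(wins))        # must outperform on every sub-period

On a pure random walk the naive benchmark is hard to beat, which is precisely why a model that wins on all sub-periods carries more weight than one successful experiment.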
To build a NN forecasting model we need sufficient experiments. Testing only one market or one particular time period means little: manual, trial-and-error, or ad hoc experiments will not lead to a robust model. A more robust model is needed, one that is not tied to a single market or a single time period. Because of the lack of industrial models, and because failures in academic research are not published, a single person or even a group of researchers will not accumulate enough information or experience to build a good forecasting model. It is therefore clear that an automated system for building NN models is necessary.

3. Steps of NN Forecasting: The Art of NN Training

As NN training is an art, many researchers and practitioners have worked in the field towards successful prediction and classification. For instance, William Remus and Marcus O'Connor [16] suggest the following principles for NN prediction and classification in their chapter of "Principles of Forecasting: A Handbook for Researchers and Practitioners":

• Clean the data prior to estimating the NN model.
• Scale and deseasonalize data prior to estimating the model.
• Use appropriate methods to choose the right starting point.
• Use specialized methods to avoid local optima.
• Expand the network until there is no significant improvement in fit.
• Use pruning techniques when estimating NNs and use holdout samples when evaluating NNs.
• Take care to obtain software that has in-built features to avoid NN disadvantages.
• Build plausible NNs to gain model acceptance by reducing their size.
• Use more approaches to ensure that the NN model is valid.

Drawing on the authors' experience and on what other researchers and practitioners have shared, we propose a seven-step approach for NN financial forecasting model building. The seven steps are the basic components of the automated system and are normally involved in the manual approach as well. Each step deals with an important issue: data preprocessing, input and output selection, sensitivity analysis, data organization, model construction, post analysis, and model recommendation.

Step 1. Data Preprocessing

A general format of data is prepared. Depending on the requirement, longer-term data, e.g. weekly or monthly data, may also be calculated from more frequently sampled time series. We may think that it makes sense to sample data as frequently as possible for experiments; however, researchers have found that increasing observation frequency does not always improve forecasting accuracy [28]. Inspection of the data to find outliers is also important, as outliers make it difficult for NNs and other forecasting models to model the true underlying function. Although NNs have been shown to be universal approximators, it has been found that NNs have difficulty modeling seasonal patterns in time series [11]. When a time series contains significant seasonality, the data need to be deseasonalized.
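As a hedged illustration of this step (the file name, column name, rolling window and outlier threshold are assumptions, not prescribed by this article), the following pandas sketch derives weekly observations from a daily series, flags outliers with a rolling z-score rule, and removes a simple day-of-week seasonal component from returns.

# Hypothetical Step 1 sketch: weekly data from a daily series, outlier flagging,
# and crude deseasonalization. File name, column names and thresholds are
# assumptions for illustration only.
import pandas as pd

daily = pd.read_csv("prices.csv", parse_dates=["date"], index_col="date")["close"]

weekly = daily.resample("W").last()                 # longer-term series from daily data

# flag outliers as points more than 3 rolling standard deviations from the mean
roll_mean = daily.rolling(30).mean()
roll_std = daily.rolling(30).std()
outliers = daily[(daily - roll_mean).abs() > 3 * roll_std]

# crude deseasonalization of returns by subtracting the mean return per weekday
returns = daily.pct_change().dropna()
deseasonalized = returns - returns.groupby(returns.index.dayofweek).transform("mean")

Choices such as the sampling frequency and the outlier threshold should, of course, be revisited during the post analysis step.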

[1] Randall S. Sexton, et al. Toward global optimization of neural networks: A comparison of the genetic algorithm and backpropagation, 1998, Decis. Support Syst.

[2] Shouhong Wang. The unpredictability of standard back propagation neural networks in classification applications, 1995.

[3] Paul R. Cohen, et al. A Survey of the Eighth National Conference on Artificial Intelligence: Pulling Together or Pulling Apart?, 1991, AI Mag.

[4] C. Tan, et al. Neural networks for technical analysis: A study on KLCI, 1999.

[5] Lutz Prechelt, et al. A quantitative study of experimental evaluations of neural network learning algorithms: Current research practice, 1996, Neural Networks.

[6] Geoffrey E. Hinton, et al. Learning internal representations by error propagation, 1986.

[7] William Remus, et al. Neural Networks for Time-Series Forecasting, 2001.

[8] Yufei Yuan, et al. Neural network forecasting of quarterly accounting earnings, 1996.

[9] Michael C. Mozer, et al. Using Relevance to Reduce Network Size Automatically, 1989.

[10] C. Tan, et al. Option price forecasting using neural networks, 2000.

[11] Walter F. Tichy, et al. Should Computer Scientists Experiment More?, 1998, Computer.

[12] Robert Heinkel, et al. Measuring Event Impacts in Thinly Traded Stocks, 1988, Journal of Financial and Quantitative Analysis.

[13] Kurt Hornik, et al. Multilayer feedforward networks are universal approximators, 1989, Neural Networks.

[14] JingTao Yao, et al. Neural Networks for the Analysis and Forecasting of Advertising and Promotion Impact, 1998.

[15] Wee Kheng Leow, et al. Opening the neural network black box: an algorithm for extracting rules from function approximating artificial neural networks, 2000, ICIS.

[16] David Haussler, et al. What Size Net Gives Valid Generalization?, 1989, Neural Computation.

[18] Jingtao Yao, et al. A case study on using neural networks to perform technical forecasting of forex, 2000, Neurocomputing.

[19] Thomas Kolarik, et al. Time series forecasting using neural networks, 1994, APL '94.

[20] Ignacio Requena, et al. Are artificial neural networks black boxes?, 1997, IEEE Trans. Neural Networks.

[21] D. Wolpert, et al. No Free Lunch Theorems for Search, 1995.

[22] Jingtao Yao, et al. Time dependent directional profit model for financial time series forecasting, 2000, Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN 2000).

[23] Paul Lukowicz, et al. Experimental evaluation in computer science: A quantitative study, 1995, J. Syst. Softw.

[24] R. Donaldson, et al. An artificial neural network-GARCH model for international stock return volatility, 1997.