Replication and Extension of a Forecasting Decision Support System: An Empirical Examination of the Time Series Complexity Scoring Technique

This study presents a conceptual replication of Adya and Lusk's (2016) forecasting decision support system (FDSS), which identifies the complexity or simplicity of a time series. Prior forecasting studies have argued convincingly that the design of an FDSS should incorporate the complexity of the forecasting task, yet no formal way of determining time series complexity existed until this FDSS, referred to as the Complexity Scoring Technique (CST). The CST uses characteristics of the time series to trigger 12 rules that score the complexity of the series and classify it along the binary dimension of Simple or Complex. The CST was originally validated using statistical forecasts of a small set of 54 time series, as well as judgmental forecasts from 14 representative participants, to confirm that the FDSS successfully distinguished Simple series from Complex ones. In this study, we (a) replicate the CST on a much larger set of data from both statistical and judgmental forecasting methods, and (b) extend and validate the classification categories from the binary Simple-Complex scheme of the original CST to Very Simple, Simple, Complex, and Very Complex, thereby adding ordinal gradations between the two original binary designations. Findings suggest that both the replication and the extension further validate the CST, greatly enhancing its usefulness in forecasting practice. Implications for research and practice are discussed.
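To make the rule-based scoring idea concrete, the sketch below shows the general shape of such a technique: characteristic-based rules fire against a series, their contributions accumulate into a score, and the score maps onto the ordinal categories described above. The two rules and all thresholds here are purely hypothetical illustrations; they are not Adya and Lusk's actual 12 CST rules.

```python
# Illustrative sketch of a rule-based complexity scorer. The rules and
# thresholds are hypothetical stand-ins, NOT the 12 rules of the actual CST.
from statistics import mean


def complexity_score(series):
    """Accumulate a complexity score by firing simple hypothetical rules."""
    score = 0
    n = len(series)
    diffs = [series[i + 1] - series[i] for i in range(n - 1)]

    # Hypothetical rule A: frequent direction changes suggest irregularity.
    sign_changes = sum(1 for a, b in zip(diffs, diffs[1:]) if a * b < 0)
    if sign_changes > (n - 2) / 2:
        score += 1

    # Hypothetical rule B: a large step relative to the average level
    # suggests an outlier or a level shift.
    avg_level = mean(abs(x) for x in series) or 1.0
    if any(abs(d) > 0.5 * avg_level for d in diffs):
        score += 1

    return score


def classify(score, max_score=2):
    """Map an accumulated score onto the extended ordinal categories."""
    bins = ["Very Simple", "Simple", "Complex", "Very Complex"]
    idx = min(int(score / (max_score + 1) * len(bins)), len(bins) - 1)
    return bins[idx]
```

With these toy rules, a smooth trend such as `[1, 2, 3, 4, 5, 6]` fires no rules and classifies as Very Simple, while an erratic series such as `[1, 5, 2, 6, 1, 7]` fires both and classifies as Complex. The real CST uses far richer series features and validated cutoffs.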

[1] Monica Adya, et al. Decomposition as a Complex-Skill Acquisition Strategy in Management Education: A Case Study in Business Forecasting, 2009.

[2] P. Goodwin, et al. Improving judgmental time series forecasting: A review of the guidance provided by research, 1993.

[3] William Remus, et al. Judgemental forecasting in times of change, 1993.

[4] Nada R. Sanders, et al. Forecasting Software in Practice: Use, Satisfaction, and Performance, 2003, Interfaces.

[5] Eric J. Johnson, et al. The Adaptive Decision Maker: Effort and Accuracy in Choice, 2022.

[6] Dale Goodhue, et al. Task-Technology Fit and Individual Performance, 1995, MIS Q.

[7] Fred Collopy, et al. An Application of Rule-Based Forecasting to a Situation Lacking Domain Knowledge, 2000.

[8] Paul B. Andreassen, et al. Judgmental extrapolation and the salience of change, 1990.

[9] John W. Payne, et al. Task complexity and contingent processing in decision making: An information search and protocol analysis, 1976.

[10] Monica Adya, et al. Designing Effective Forecasting Decision Support Systems: Aligning Task Complexity and Technology Support, 2012.

[11] Monica Adya, et al. Development and validation of a rule-based time series complexity scoring technique to support design of adaptive forecasting DSS, 2016, Decis. Support Syst.

[12] Fred Collopy, et al. Automatic Identification of Time Series Features for Rule-Based Forecasting, 2001.

[13] Ilze Zigurs, et al. A Theory of Task/Technology Fit and Group Support Systems Effectiveness, 1998, MIS Q.

[14] Robert L. Winkler, et al. The accuracy of extrapolation (time series) methods: Results of a forecasting competition, 1982.

[15] Monica Adya, et al. Rule Based Forecasting [RBF] - Improving Efficacy of Judgmental Forecasts Using Simplified Expert Rules, 2013.

[16] D. Campbell. Task Complexity: A Review and Analysis, 1988.

[17] Monica Adya, et al. Corrections to rule-based forecasting: findings from a replication, 2000.

[18] Kalervo Järvelin, et al. Task complexity affects information seeking and use, 1995.