Vetting the Makridakis Dataset: Further Indications of the Robustness of the Rule Based Forecasting Model
Abstract
Context: The year 2022 marks the 30th anniversary of Collopy and Armstrong’s Rule-Based Forecasting [RBF] Expert System Model. Over the last three decades, this unique, effective, and ground-breaking forecasting system has spawned a veritable cornucopia of research reports. Focus: The purpose of this research note is to: (i) briefly remind the forecasting community of the excellent pre-model-launch vetting used by Collopy and Armstrong [C&A] to form their RBF model; importantly, their vetting protocols readily generalize to most modeling domains; and (ii) offer a “re-vetting” analysis of the M-Competition dataset used by C&A that addresses their comment: “This study also used long calibration series …; rule-based forecasting benefits from long series because it uses information about patterns in the data. We do not know how the procedure will perform for short series.” [p. 1403, bolding added]. Results: We trimmed selected series from the M-Competition to arrive at 165 series, each of which had 13 time-series points for the OLS regression fit [OLS-R] and three panel points as holdbacks. We found that: (i) there is evidence that these trimmed series likely have inferentially differentiable variance profiles compared to the performance profiles reported by C&A, and (ii) despite this, the trimmed segments did not seem to compromise C&A’s parametrization of the RBF model in comparison to OLS-R forecasts. Finally, we suggest the need for an extension of the RBF Expert System regarding (1-FPE) confidence intervals that would further enhance RBF testing with respect to capture rates and relative precision.
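As a minimal sketch of the re-vetting design described above, the following Python fragment fits an OLS trend line to a trimmed series of 13 calibration points, forecasts the three holdback panel points, and computes a capture rate for a symmetric forecast interval. This is our illustration only, not the authors' code: the function names, the z-based interval, and the synthetic data are assumptions, and the actual (1-FPE) interval construction in the study may differ.

```python
import numpy as np

def ols_r_forecast(series, n_fit=13, n_hold=3):
    """Fit a linear OLS trend to the first n_fit points of a trimmed
    series and forecast the next n_hold holdback (panel) points.
    Returns (forecasts, holdbacks, residual_std).
    Illustrative sketch; not the C&A implementation."""
    series = np.asarray(series, dtype=float)
    fit, hold = series[:n_fit], series[n_fit:n_fit + n_hold]
    t = np.arange(n_fit)
    slope, intercept = np.polyfit(t, fit, 1)   # OLS regression line
    resid = fit - (intercept + slope * t)
    s = resid.std(ddof=2)                      # two parameters estimated
    t_new = np.arange(n_fit, n_fit + n_hold)
    return intercept + slope * t_new, hold, s

def capture_rate(forecasts, holdbacks, s, z=1.96):
    """Share of holdback points falling inside forecast +/- z*s,
    a naive symmetric interval (an assumed stand-in for (1-FPE) CIs)."""
    inside = np.abs(holdbacks - forecasts) <= z * s
    return float(inside.mean())
```

A capture rate near the nominal coverage (e.g., 95% for z = 1.96) across the 165 trimmed series would indicate well-calibrated intervals; systematic under-capture would signal that short-series variance profiles are not well served by the fitted intervals.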
References
Adya, M., Collopy, F., Armstrong, J.S. & Kennedy, M. (2001). Automatic identification of time series features for Rule-based Forecasting. International Journal of Forecasting, 17, 143–157.
Adya, M. & Lusk, E. (2013). Rule Based Forecasting [RBF] - Improving efficacy of judgmental forecasts using simplified expert rules. International Research Journal of Applied Finance, 4, 1006-1024.
Adya, M. & Lusk, E. (2016). Time series complexity: The development and validation of a Rule-Based complexity scoring technique. Decision Support Systems. http://dx.doi.org/10.1016/j.dss.2015.12.009
Armstrong, J.S. & Collopy, F. (1992). The selection of error measures for generalizing about forecasting methods: empirical comparisons, International Journal of Forecasting, 8, 69–80.
Brown, R.G. (1959). Statistical Forecasting for Inventory Control. McGraw-Hill, New York.
Collopy, F. & Armstrong, J. S. (1992). Rule-based forecasting: Development and validation of an expert systems approach to combining time series extrapolations. Management Science, 38, 1394–1414.
de Carvalho, M. (2016). Mean, what do you mean? The American Statistician, 70(3), 270-274.
Makridakis, S., Andersen, A., Carbone, R., Fildes, R., Hibon, M., Lewandowski, R., Newton, J., Parzen, E. & Winkler, R. (1982). The accuracy of extrapolation (time series) methods: Results of a forecasting competition. Journal of Forecasting, 1, 111–153.
Makridakis, S., Hibon, M., Lusk, E. & Belhadjali, M. (1987). Confidence intervals: An empirical investigation of the series in the M-Competition. International Journal of Forecasting, 3, 489-508.
Theil, H. (1958). Economic Forecasts and Policy. North Holland Press, Amsterdam.
Wang, H. & Chow, S.-C. (2007). Sample size calculation for comparing proportions. Test for equality: Wiley encyclopedia of clinical trials. https://doi.org/10.1002/9780471462422.eoct005
Copyright (c) 2023 Frank Heilig, Edward J. Lusk
This work is licensed under a Creative Commons Attribution 4.0 International License.