M1 competition

The first Makridakis Competition, known in the forecasting literature as the M Competition, was held in 1982 and used 1001 time series and 15 forecasting methods (along with another nine variations of those methods). According to a later paper by the authors, the main conclusions of the M Competition (Makridakis et al., 1982) were the following:

  1. Statistically sophisticated or complex methods do not necessarily provide more accurate forecasts than simpler ones.
  2. The relative ranking of the performance of the various methods varies according to the accuracy measure being used.
  3. Combining the forecasts of several methods outperforms, on average, the individual methods being combined, and performs well in comparison with other methods (see the sketch after this list).
  4. The accuracy of the various methods depends on the length of the forecasting horizon involved.

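As a minimal illustration of finding 3, the sketch below combines three simple methods (naive, simple exponential smoothing, and drift) with an equal-weight average and scores each against a held-out segment using MAPE, one of the accuracy measures used in the competition. The synthetic series, the choice of methods, and the smoothing parameter are illustrative assumptions here, not the competition's actual data or setup.

```python
import numpy as np

# Hypothetical monthly series for illustration (not competition data).
rng = np.random.default_rng(0)
y = 100 + 0.5 * np.arange(60) + rng.normal(0, 5, 60)
train, test = y[:48], y[48:]

def naive_forecast(history, horizon):
    """Repeat the last observation over the whole horizon."""
    return np.full(horizon, history[-1])

def ses_forecast(history, horizon, alpha=0.3):
    """Simple exponential smoothing; flat forecast from the final level."""
    level = history[0]
    for obs in history[1:]:
        level = alpha * obs + (1 - alpha) * level
    return np.full(horizon, level)

def drift_forecast(history, horizon):
    """Extrapolate the average historical change (random walk with drift)."""
    slope = (history[-1] - history[0]) / (len(history) - 1)
    return history[-1] + slope * np.arange(1, horizon + 1)

h = len(test)
forecasts = {
    "naive": naive_forecast(train, h),
    "ses": ses_forecast(train, h),
    "drift": drift_forecast(train, h),
}
# Equal-weight combination of the individual forecasts.
forecasts["combined"] = np.mean(list(forecasts.values()), axis=0)

def mape(actual, predicted):
    """Mean absolute percentage error."""
    return 100 * np.mean(np.abs((actual - predicted) / actual))

for name, f in forecasts.items():
    print(f"{name:9s} MAPE: {mape(test, f):5.2f}%")
```

On a typical run the combined forecast scores close to, or better than, the best individual method, which is the qualitative pattern the competition reported on average across its series.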
These findings have since been verified and replicated by other researchers in subsequent competitions and with new methods.

Newbold (1983) was critical of the M Competition and argued against the general idea of using a single competition to settle such a complex issue.

Before the first M Competition, Makridakis and Hibon (1979) published an article in the Journal of the Royal Statistical Society (JRSS) showing that simple methods perform well in comparison with more complex and statistically sophisticated ones. Statisticians at the time criticized the results, claiming that they could not be right. That criticism motivated the subsequent M, M2, and M3 Competitions, which confirmed the findings of the Makridakis and Hibon study.