  • Stars: 546
  • Rank: 78,487 (top 2%)
  • Language: Jupyter Notebook
  • Created: over 4 years ago
  • Updated: about 1 year ago

Repository Details

Data, benchmarks, and methods submitted to the M5 forecasting competition.

The repository includes:

"Code of Winning Methods": The code provided by the winners of the M5 competition for replicating their submissions.

"Scores and Ranks.xlsx": Scores and ranks of the top 50 submissions of the M5 "Accuracy" and M5 "Uncertainty" competitions. The scores of the benchmarks are also provided.

"M5-Competitors-Guide.pdf": Provides information about the set-up of the competition, the data set, the evaluation measures, the prizes, the submission files, and the benchmarks.

The following link includes the above-mentioned items, plus the additional items listed below:

https://drive.google.com/drive/folders/1D6EWdVSaOtrP1LEFh1REjI3vej6iUS_4?usp=sharing

"Dataset": The data set of the competition, i.e., unit sales (train and test set) and information about calendar, promotions, and prices. The data set is provided for the validation (public leaderboard) and evaluation (private leaderboard) phases of the competition separately. The weights used for computing the scores (WRMSSE and WSPL) are also provided per case.

"Accuracy Submissions": The forecasts of the 24 benchmarks of the M5 "Accuracy" competition and the submissions made by the top 50 performing methods. R code is also provided for evaluating the submissions (per series, horizon, aggregation level, and total).

"Uncertainty Submissions": The forecasts of the 6 benchmarks of the M5 "Uncertainty" competition and the submissions made by the top 50 performing methods. R code is also provided for evaluating the submissions (per series, horizon, aggregation level, and total).

"Papers": Papers describing the setup and data set of the M5 competition, as well as the results, findings and winning submissions of the "Accuracy" and "Uncertainty" challenges.

"validation" R code and files that can be used for replicating the submissions of the M5 benchmarks and for understanding the evaluation setup of the competition.