OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning
Introduction
OpenSTL is a comprehensive benchmark for spatio-temporal predictive learning, encompassing a broad spectrum of methods and diverse tasks, ranging from synthetic moving-object trajectories to real-world scenarios such as human motion, driving scenes, traffic flow, and weather forecasting. OpenSTL offers a modular and extensible framework that excels in user-friendliness, organization, and comprehensiveness. The codebase is organized into three abstraction layers, namely the core layer, algorithm layer, and user interface layer, from bottom to top.
Overview
Major Features and Plans
- **Flexible Code Design.** OpenSTL decomposes STL algorithms into `methods` (training and prediction), `models` (network architectures), and `modules` (network layers), while providing a unified experiment API. Users can develop their own STL algorithms with flexible training strategies and networks for different STL tasks.
- **Standard Benchmarks.** OpenSTL will support standard benchmarks of STL algorithms with training and evaluation, as many open-source projects do (e.g., MMDetection and USB). We are working on training benchmarks and will update the results as they are completed.
- **Plans.** We plan to provide benchmarks of various STL methods and MetaFormer architectures based on SimVP on various STL application tasks, e.g., video prediction, weather prediction, and traffic prediction. We encourage researchers interested in STL to contribute to OpenSTL or provide valuable advice!
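The `methods` / `models` / `modules` decomposition described above can be sketched as a minimal, self-contained pattern. All class names below are illustrative stand-ins, not the actual OpenSTL API:

```python
# Illustrative sketch of OpenSTL's three-level decomposition:
# a module (reusable layer), a model (architecture built from modules),
# and a method (training/prediction logic wrapped around a model),
# all driven through one unified experiment runner.
# These names are hypothetical, not real OpenSTL classes.

class ToyModule:
    """A 'module': the smallest reusable building block (here a toy op)."""
    def __call__(self, x):
        return [v * 0.5 for v in x]  # stand-in for a real layer

class ToyModel:
    """A 'model': a network architecture composed of modules."""
    def __init__(self):
        self.layers = [ToyModule(), ToyModule()]

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

class ToyMethod:
    """A 'method': owns a model and defines prediction/training behavior."""
    def __init__(self, model):
        self.model = model

    def predict(self, frames):
        return self.model.forward(frames)

class Experiment:
    """Unified runner: any method can be plugged in without changes."""
    def __init__(self, method):
        self.method = method

    def test(self, frames):
        return self.method.predict(frames)

exp = Experiment(ToyMethod(ToyModel()))
print(exp.test([4.0, 8.0]))  # each value halved twice -> [1.0, 2.0]
```

Swapping in a different method or model only touches one layer, which is the point of the decomposition.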
Code Structures
- `openstl/api` contains an experiment runner.
- `openstl/core` contains core training plugins and metrics.
- `openstl/datasets` contains datasets and dataloaders.
- `openstl/methods/` contains training methods for various video prediction methods.
- `openstl/models/` contains the main network architectures of various video prediction methods.
- `openstl/modules/` contains network modules and layers.
- `tools/` contains the executable Python files `tools/train.py` and `tools/test.py`, with arguments for the training, validation, and testing pipelines.
News and Updates
[2023-06-19] OpenSTL v0.3.0 is released and will be enhanced in #25.
Installation
This project provides a conda environment file; users can easily reproduce the environment with the following commands:
```shell
git clone https://github.com/chengtan9907/OpenSTL
cd OpenSTL
conda env create -f environment.yml
conda activate OpenSTL
python setup.py develop
```
Dependencies
- argparse
- dask
- decord
- fvcore
- hickle
- lpips
- matplotlib
- netcdf4
- numpy
- opencv-python
- packaging
- pandas
- scikit-image
- scikit-learn
- torch
- timm
- tqdm
- xarray==0.19.0
Please refer to install.md for more detailed instructions.
Getting Started
Please see get_started.md for the basic usage. Here is an example of single-GPU, non-distributed training of SimVP+gSTA on the Moving MNIST dataset:
```shell
bash tools/prepare_data/download_mmnist.sh
python tools/train.py -d mmnist --lr 1e-3 -c configs/mmnist/simvp/SimVP_gSTA.py --ex_name mmnist_simvp_gsta
```
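The command above passes a dataset name, learning rate, config file, and experiment name. A minimal `argparse` sketch mirroring just these flags looks like the following; the real `tools/train.py` accepts many more options, and the argument names here are inferred from the command, not copied from the actual parser:

```python
import argparse

# Minimal sketch of a parser that mirrors the flags used in the example
# command (-d, --lr, -c, --ex_name). Illustrative only: the real
# tools/train.py defines many more arguments.
def build_parser():
    parser = argparse.ArgumentParser(description="STL training (sketch)")
    parser.add_argument("-d", "--dataname", type=str, default="mmnist",
                        help="dataset name, e.g. mmnist")
    parser.add_argument("--lr", type=float, default=1e-3,
                        help="learning rate")
    parser.add_argument("-c", "--config_file", type=str, default=None,
                        help="path to a method config file")
    parser.add_argument("--ex_name", type=str, default="Debug",
                        help="experiment name used for the results directory")
    return parser

args = build_parser().parse_args(
    ["-d", "mmnist", "--lr", "1e-3",
     "-c", "configs/mmnist/simvp/SimVP_gSTA.py",
     "--ex_name", "mmnist_simvp_gsta"]
)
print(args.dataname, args.lr, args.ex_name)  # mmnist 0.001 mmnist_simvp_gsta
```

Values from the config file typically override parser defaults, so the command line only needs to state what differs from the config.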
Overview of Model Zoo and Datasets
We support various spatiotemporal prediction methods and will provide benchmarks on various STL datasets. We are working on adding new methods and collecting experiment results.
- **Spatiotemporal Prediction Methods.**

  Currently supported methods:

  Currently supported MetaFormer models for SimVP:
- ViT (ICLR'2021)
- Swin-Transformer (ICCV'2021)
- MLP-Mixer (NeurIPS'2021)
- ConvMixer (Openreview'2021)
- UniFormer (ICLR'2022)
- PoolFormer (CVPR'2022)
- ConvNeXt (CVPR'2022)
- VAN (ArXiv'2022)
- IncepU (SimVP.V1) (CVPR'2022)
- gSTA (SimVP.V2) (ArXiv'2022)
- HorNet (NeurIPS'2022)
- MogaNet (ArXiv'2022)
- **Spatiotemporal Predictive Learning Benchmarks** (prepare_data or Baidu Cloud).

  Currently supported datasets:
- Human3.6M (TPAMI'2014) [download] [config]
- KTH Action (ICPR'2004) [download] [config]
- KittiCaltech Pedestrian (IJRR'2013) [download] [config]
- Kinetics-400 (ArXiv'2017) [download] [config]
- Moving MNIST (ICML'2015) [download] [config]
- Moving FMNIST (ICML'2015) [download] [config]
- TaxiBJ (AAAI'2017) [download] [config]
- WeatherBench (ArXiv'2020) [download] [config]
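The datasets above share a common task shape: observe a fixed number of input frames, then predict the frames that follow (e.g., 10 in, 10 out for Moving MNIST). A small sketch of slicing a frame sequence into such input/target pairs (plain Python; the function and parameter names are illustrative, not OpenSTL's dataloader API):

```python
# Sketch: slice a video (list of frames) into (input, target) clips for
# spatio-temporal prediction -- pre_seq_length observed frames followed by
# aft_seq_length frames to predict. Names are illustrative, not OpenSTL's API.
def make_clips(frames, pre_seq_length, aft_seq_length, stride=1):
    clips = []
    total = pre_seq_length + aft_seq_length
    for start in range(0, len(frames) - total + 1, stride):
        inputs = frames[start:start + pre_seq_length]
        targets = frames[start + pre_seq_length:start + total]
        clips.append((inputs, targets))
    return clips

# Moving MNIST convention: 10 input frames -> 10 predicted frames.
video = list(range(20))  # stand-in for 20 frames
pairs = make_clips(video, pre_seq_length=10, aft_seq_length=10)
print(len(pairs), pairs[0][0][:3], pairs[0][1][:3])  # 1 [0, 1, 2] [10, 11, 12]
```

In practice each frame is an array shaped `(C, H, W)` and a batch of clips is shaped `(B, T, C, H, W)`; the slicing logic is the same.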
Visualization
We present visualization examples of ConvLSTM below. For more detailed information, please refer to the visualization documents:
- For synthetic moving object trajectory prediction and real-world video prediction, visualization examples of other approaches can be found in visualization/video_visualization.md.
- For traffic flow prediction, visualization examples of other approaches are shown in visualization/traffic_visualization.md.
- For weather forecasting, visualization examples of other approaches are shown in visualization/weather_visualization.md.
Visualization examples (animated GIFs in the online README): Moving MNIST, Moving FMNIST, Moving MNIST-CIFAR; KittiCaltech, KTH, Human 3.6M; Traffic (in flow, out flow); Weather (temperature, humidity, latitude wind, cloud cover).
License
This project is released under the Apache 2.0 license. See LICENSE for more information.
Acknowledgement
OpenSTL is an open-source project for STL algorithms created by researchers in CAIRI AI Lab. We encourage researchers interested in video and weather prediction to contribute to OpenSTL! We borrow the official implementations of ConvLSTM, PredNet, PredRNN variants, E3D-LSTM, MAU, CrevNet, PhyDNet, and DMVFN.
Citation
If you are interested in our repository or our paper, please cite the following papers:
```bibtex
@article{tan2023openstl,
  title={OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning},
  author={Cheng Tan and Siyuan Li and Zhangyang Gao and Wenfei Guan and Zedong Wang and Zicheng Liu and Lirong Wu and Stan Z. Li},
  journal={arXiv preprint arXiv:2306.11249},
  year={2023}
}

@article{tan2022simvpv2,
  title={SimVP: Towards Simple yet Powerful Spatiotemporal Predictive Learning},
  author={Tan, Cheng and Gao, Zhangyang and Li, Siyuan and Li, Stan Z},
  journal={arXiv preprint arXiv:2211.12509},
  year={2022}
}
```
Contribution and Contact
For adding new features, looking for help, or reporting bugs associated with OpenSTL, please open a GitHub issue or pull request with the tag "new features", "help wanted", or "enhancement". Feel free to contact us through email if you have any questions.
- Siyuan Li ([email protected]), Westlake University & Zhejiang University
- Cheng Tan ([email protected]), Westlake University & Zhejiang University
- Zhangyang Gao ([email protected]), Westlake University & Zhejiang University