Welcome to PyPOTS
A Python Toolbox for Data Mining on Partially-Observed Time Series
⦿ Motivation
: Due to a variety of reasons such as sensor failures, communication errors, and unexpected malfunctions, missing values are common in time series collected from real-world environments. This makes partially-observed time series (POTS) a pervasive problem in open-world modeling and hinders advanced data analysis. Although this problem is important, the area of data mining on POTS still lacks a dedicated toolkit. PyPOTS was created to fill this gap.
⦿ Mission
: PyPOTS (pronounced "Pie Pots") was born to be a handy toolbox that makes data mining on POTS easy rather than tedious, helping engineers and researchers focus on the core problems at hand rather than on how to deal with the missing parts of their data. PyPOTS will keep integrating both classical and the latest state-of-the-art data mining algorithms for partially-observed multivariate time series. Besides the algorithms themselves, PyPOTS provides unified APIs across algorithms, together with detailed documentation and interactive examples as tutorials.
To make various open-source time-series datasets readily available to our users, PyPOTS is supported by its sub-project TSDB (Time-Series Data Base), a toolbox that makes loading time-series datasets super easy!
Visit TSDB right now to learn more about this handy tool.
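To give a feel of how little code dataset loading takes, here is a minimal sketch; the helper names list_available_datasets() and load_dataset() are taken from TSDB's own documentation and may differ across TSDB versions, so treat them as assumptions and check the TSDB repo for the exact API:

import tsdb

print(tsdb.list_available_datasets())       # datasets TSDB can download and cache for you (assumed helper name)
data = tsdb.load_dataset('physionet_2012')  # fetch, cache, and load PhysioNet-2012 (assumed helper name)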
The rest of this readme file is organized as follows: ❖ Installation, ❖ Usage, ❖ Available Algorithms, ❖ Citing PyPOTS, ❖ Contribution, ❖ Community.
❖ Installation
You can refer to the installation instructions in the PyPOTS documentation for a more detailed guideline.
PyPOTS is available on both PyPI and Anaconda. You can install PyPOTS as shown below:
# by pip
pip install pypots # the first time installation
pip install pypots --upgrade # update pypots to the latest version
# by conda
conda install -c conda-forge pypots # the first time installation
conda update -c conda-forge pypots # update pypots to the latest version
Alternatively, you can install PyPOTS from the latest source code, which includes the newest features that may not be officially released yet:
pip install https://github.com/WenjieDu/PyPOTS/archive/main.zip
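After installing by any of the methods above, a quick sanity check confirms that PyPOTS is importable and shows which version you got (like most Python packages, PyPOTS exposes __version__):
# verify the installation
python -c "import pypots; print(pypots.__version__)"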
❖ Usage
PyPOTS tutorials have been released. Considering the future workload, the tutorials are maintained in a separate repo, and you can find them in BrewPOTS. Take a look at it now, and learn how to brew your POTS datasets!
You can also find a simple quick-start tutorial notebook on Google Colab with this link. If you have further questions, please refer to the PyPOTS documentation at docs.pypots.com. Besides, you can also raise an issue or ask in our community.
Below we present a usage example of imputing missing values in time series with PyPOTS; click it to view.
Click here to see an example applying SAITS on PhysioNet2012 for imputation:
import numpy as np
from sklearn.preprocessing import StandardScaler
from pypots.data import load_specific_dataset, mcar, masked_fill
from pypots.imputation import SAITS
from pypots.utils.metrics import cal_mae
# Data preprocessing. Tedious, but PyPOTS can help.
data = load_specific_dataset('physionet_2012') # PyPOTS will automatically download and extract it.
X = data['X']
num_samples = len(X['RecordID'].unique())
X = X.drop(['RecordID', 'Time'], axis=1)
X = StandardScaler().fit_transform(X.to_numpy())
X = X.reshape(num_samples, 48, -1)
X_intact, X, missing_mask, indicating_mask = mcar(X, 0.1) # hold out 10% observed values as ground truth
X = masked_fill(X, 1 - missing_mask, np.nan)
dataset = {"X": X}
print(dataset["X"].shape) # (11988, 48, 37), 11988 samples, 48 time steps, 37 features
# Model training. This is PyPOTS showtime.
saits = SAITS(n_steps=48, n_features=37, n_layers=2, d_model=256, d_inner=128, n_heads=4, d_k=64, d_v=64, dropout=0.1, epochs=10)
saits.fit(dataset) # train the model. Here I use the whole dataset as the training set, because ground truth is not visible to the model.
imputation = saits.impute(dataset) # impute the originally-missing values and artificially-missing values
mae = cal_mae(imputation, X_intact, indicating_mask) # calculate mean absolute error on the ground truth (artificially-missing values)
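If you want to see what cal_mae computes under the hood, the same number can be reproduced with plain NumPy. This is a back-of-the-envelope sketch assuming indicating_mask is 1 exactly at the artificially-masked positions and 0 elsewhere:

# reproduce cal_mae with plain NumPy: mean absolute error over the artificially-masked positions only
abs_error = np.abs(imputation - np.nan_to_num(X_intact))  # nan_to_num guards against any stray NaNs
manual_mae = np.sum(abs_error * indicating_mask) / np.sum(indicating_mask)
print(f"manual MAE: {manual_mae:.4f}")  # should closely match the cal_mae result above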
❖ Available Algorithms
PyPOTS supports imputation, classification, clustering, and forecasting tasks on multivariate time series with missing values. The currently available algorithms for these four tasks are cataloged in the tables below, one per task. The paper references are all listed at the bottom of this readme file; please refer to them if you want more details.
Imputation

| Type | Abbr. | Full name of the algorithm/model/paper | Year |
|---|---|---|---|
| Neural Net | SAITS | Self-Attention-based Imputation for Time Series [1] | 2023 |
| Neural Net | Transformer | Attention Is All You Need [2]; Self-Attention-based Imputation for Time Series [1]; note: proposed in [2], and re-implemented as an imputation model in [1] | 2017 |
| Neural Net | BRITS | Bidirectional Recurrent Imputation for Time Series [3] | 2018 |
| Neural Net | M-RNN | Multi-directional Recurrent Neural Network [4] | 2019 |
| Naive | LOCF | Last Observation Carried Forward | - |

Classification

| Type | Abbr. | Full name of the algorithm/model/paper | Year |
|---|---|---|---|
| Neural Net | BRITS | Bidirectional Recurrent Imputation for Time Series [3] | 2018 |
| Neural Net | GRU-D | Recurrent Neural Networks for Multivariate Time Series with Missing Values [5] | 2018 |
| Neural Net | Raindrop | Graph-Guided Network for Irregularly Sampled Multivariate Time Series [6] | 2022 |

Clustering

| Type | Abbr. | Full name of the algorithm/model/paper | Year |
|---|---|---|---|
| Neural Net | CRLI | Clustering Representation Learning on Incomplete time-series data [7] | 2021 |
| Neural Net | VaDER | Variational Deep Embedding with Recurrence [8] | 2019 |

Forecasting

| Type | Abbr. | Full name of the algorithm/model/paper | Year |
|---|---|---|---|
| Probabilistic | BTTF | Bayesian Temporal Tensor Factorization [9] | 2021 |
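As a quick sanity baseline, the naive LOCF imputer from the imputation table can be run on the same dataset dict built in the Usage example above. This is a minimal sketch assuming LOCF follows the same fit/impute interface as SAITS; check the API reference of your installed version for the exact signature:

from pypots.imputation import LOCF
from pypots.utils.metrics import cal_mae

locf = LOCF()                           # carries the last observed value forward along the time axis
locf.fit(dataset)                       # LOCF needs no training, so this is essentially a no-op
locf_imputation = locf.impute(dataset)  # fill both originally- and artificially-missing values
locf_mae = cal_mae(locf_imputation, X_intact, indicating_mask)  # compare against the SAITS MAE above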
❖ Citing PyPOTS
[Updates in Jun 2023]
The paper introducing PyPOTS is available on arXiv at this URL,
and we are working to publish it in prestigious academic venues, e.g. JMLR (the track for
Machine Learning Open Source Software). If you use PyPOTS in your work,
please cite it as below:
@article{du2023PyPOTS,
title={{PyPOTS: A Python Toolbox for Data Mining on Partially-Observed Time Series}},
author={Wenjie Du},
year={2023},
eprint={2305.18811},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2305.18811},
doi={10.48550/arXiv.2305.18811},
}
or
Wenjie Du. (2023). PyPOTS: A Python Toolbox for Data Mining on Partially-Observed Time Series. arXiv, abs/2305.18811. https://doi.org/10.48550/arXiv.2305.18811
❖ Contribution
You're very welcome to contribute to this exciting project!
By committing your code, you'll
- make your well-established model readily runnable out of the box for PyPOTS users, and help your work obtain more exposure and impact. Take a look at our inclusion criteria. You can utilize the template folder in each task package (e.g. pypots/imputation/template) to quickly start;
- be listed as one of the PyPOTS contributors;
- get mentioned in our release notes;
You can also contribute to PyPOTS by simply starring 🌟 this repo to help more people notice it.
👏 Click here to view PyPOTS stargazers and forkers.
We're so proud to have more and more awesome users, as well as more bright ✨ stars.
❖ Community
We care about the feedback from our users, so we're building the PyPOTS community on
- Slack. General discussion, Q&A, and our development team are here;
- LinkedIn. Official announcements and news are here;
- WeChat (微信公众号). We also run a group chat on WeChat, and you can get the QR code from the official account after following it;
If you have any suggestions, want to contribute ideas, or would like to share time-series related papers, join us and let us know. The PyPOTS community is open, transparent, and friendly. Let's work together to build and improve PyPOTS!
Footnotes

1. Du, W., Côté, D., & Liu, Y. (2023). SAITS: Self-Attention-based Imputation for Time Series. Expert Systems with Applications.
2. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention Is All You Need. NeurIPS 2017.
3. Cao, W., Wang, D., Li, J., Zhou, H., Li, L., & Li, Y. (2018). BRITS: Bidirectional Recurrent Imputation for Time Series. NeurIPS 2018.
4. Yoon, J., Zame, W. R., & van der Schaar, M. (2019). Estimating Missing Data in Temporal Data Streams Using Multi-Directional Recurrent Neural Networks. IEEE Transactions on Biomedical Engineering.
5. Che, Z., Purushotham, S., Cho, K., Sontag, D. A., & Liu, Y. (2018). Recurrent Neural Networks for Multivariate Time Series with Missing Values. Scientific Reports.
6. Zhang, X., Zeman, M., Tsiligkaridis, T., & Zitnik, M. (2022). Graph-Guided Network for Irregularly Sampled Multivariate Time Series. ICLR 2022.
7. Ma, Q., Chen, C., Li, S., & Cottrell, G. W. (2021). Learning Representations for Incomplete Time Series Clustering. AAAI 2021.
8. de Jong, J., Emon, M. A., Wu, P., Karki, R., Sood, M., Godard, P., Ahmad, A., Vrooman, H. A., Hofmann-Apitius, M., & Fröhlich, H. (2019). Deep learning for clustering of multivariate clinical patient trajectories with missing values. GigaScience.
9. Chen, X., & Sun, L. (2021). Bayesian Temporal Factorization for Multidimensional Time Series Prediction. IEEE Transactions on Pattern Analysis and Machine Intelligence.