diff --git a/README.md b/README.md
index 3008da6..fa41832 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,20 @@
+
+> [!TIP]
+> **[Updates in Jun 2024]** 🎉 The 1st comprehensive time-series imputation benchmark paper
+[TSI-Bench: Benchmarking Time Series Imputation](https://arxiv.org/abs/2406.12747) is now publicly available.
+The code is open source in the repo [Awesome_Imputation](https://github.com/WenjieDu/Awesome_Imputation).
+With nearly 35,000 experiments, we provide a comprehensive benchmarking study on 28 imputation methods, 3 missing patterns (points, sequences, blocks),
+various missing rates, and 8 real-world datasets.
+>
+> **[Updates in May 2024]** 🔥 We applied SAITS embedding and training strategies to **iTransformer, FiLM, FreTS, Crossformer, PatchTST, DLinear, ETSformer, FEDformer,
+> Informer, Autoformer, Non-stationary Transformer, Pyraformer, Reformer, SCINet, RevIN, Koopa, MICN, TiDE, and StemGNN** in PyPOTS
+> to make them applicable to the time-series imputation task.
+>
+> **[Updates in Feb 2024]** 📖 Our survey paper [Deep Learning for Multivariate Time Series Imputation: A Survey](https://arxiv.org/abs/2402.04059) has been released on arXiv.
+We comprehensively review the literature on state-of-the-art deep-learning imputation methods for time series,
+provide a taxonomy for them, and discuss the challenges and future directions in this field.
+
+
-> [!TIP]
-> **[Updates in Jun 2024]** 🎉 The 1st comprehensive time-seres imputation benchmark paper
-[TSI-Bench: Benchmarking Time Series Imputation](https://arxiv.org/abs/2406.12747) now is public available.
-The code is open source in the repo [Awesome_Imputation](https://github.com/WenjieDu/Awesome_Imputation).
-With nearly 35,000 experiments, we provide a comprehensive benchmarking study on 28 imputation methods, 3 missing patterns (points, sequences, blocks),
-various missing rates, and 8 real-world datasets.
->
-> **[Updates in May 2024]** 🔥 We applied SAITS embedding and training strategies to **iTransformer, FiLM, FreTS, Crossformer, PatchTST, DLinear, ETSformer, FEDformer,
-> Informer, Autoformer, Non-stationary Transformer, Pyraformer, Reformer, SCINet, RevIN, Koopa, MICN, TiDE, and StemGNN** in PyPOTS
-> to enable them applicable to the time-series imputation task.
->
-> **[Updates in Feb 2024]** 📖 Our survey paper [Deep Learning for Multivariate Time Series Imputation: A Survey](https://arxiv.org/abs/2402.04059) has been released on arXiv.
-We comprehensively review the literature of the state-of-the-art deep-learning imputation methods for time series,
-provide a taxonomy for them, and discuss the challenges and future directions in this field.
-
 **‼️Kind reminder: This document can help you solve many common questions, please read it before you run the code.**
@@ -49,11 +51,11 @@
 while it was ranked 1st in Google Scholar under the top publications of Artificial Intelligence
 ([here is the current ranking list](https://scholar.google.com/citations?view_op=top_venues&hl=en&vq=eng_artificialintelligence) FYI).

 SAITS is the first work applying pure self-attention without any recursive design in the algorithm for general time series imputation.
-Basically you can take it as a validated framework for time series imputation, like we've integrated 2️⃣0️⃣➕ forecasting models into PyPOTS by adapting SAITS framework.
+Basically, you can take it as a validated framework for time series imputation; for instance, we've integrated 2️⃣0️⃣ forecasting models into PyPOTS by adapting the SAITS framework.
 More generally, you can use it for sequence imputation. Besides, the code here is open source under the MIT license.
 Therefore, you're welcome to modify the SAITS code for your own research purpose and domain applications.
 Of course, it probably needs a bit of modification in the model structure or loss functions for specific scenarios or data input.
-And this is [an incomplete list](https://scholar.google.com/scholar?q=%E2%80%9CSAITS%E2%80%9D+%22time+series%22+%22Du%22&hl=en&as_ylo=2022) of scientific research referencing SAITS in their papers.
+And this is [an incomplete list](https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&as_ylo=2022&q=%E2%80%9CSAITS%E2%80%9D+%22time+series%22) of scientific research referencing SAITS in their papers.

 🤗 Please [cite SAITS](https://github.com/WenjieDu/SAITS#-citing-saits) in your publications if it helps with your work.
 Please star🌟 this repo to help others notice SAITS if you think it is useful.
@@ -168,7 +170,7 @@
 or

 > SAITS: Self-Attention-based Imputation for Time Series.
 > Expert Systems with Applications, 219:119619, 2023.

-📖 Our latest survey and benchmarking research on time-series imputation may be also related to your work:
+### 📖 Our latest survey and benchmarking research on time-series imputation may also be useful for your work:

 ```bibtex
 @article{du2024tsibench,
 year={2024}
 }
 ```

-🔥 In case you use PyPOTS for your research, please also cite the following paper:
+### 🔥 In case you use PyPOTS in your research, please also cite the following paper:

 ``` bibtex
 @article{du2023pypots,
@@ -249,9 +251,9 @@
 python run_models.py \

 ## ❖ Acknowledgments

-Thanks to Ciena, Mitacs, and NSERC (Natural Sciences and Engineering Research Council of Canada) for funding support.
-Thanks to Ciena for providing computing resources.
-Thanks to all our reviewers for helping improve the quality of this paper.
+Thanks to Ciena, Mitacs, and NSERC (Natural Sciences and Engineering Research Council of Canada) for funding support.
+Thanks to all our reviewers for helping improve the quality of this paper.
+Thanks to Ciena for providing computing resources.
 And thank you all for your attention to this work.
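For readers curious about the "SAITS embedding and training strategies" mentioned in the May 2024 update, the core idea of the SAITS paper's joint training objective (reconstruction on observed values plus imputation on artificially masked values) can be sketched in a few lines of NumPy. This is an illustrative toy sketch, not the official implementation: the array shapes, mask ratios, and the identity "model" standing in for the network are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.standard_normal((4, 8, 3))       # toy data: (batch, time steps, features)
observed = rng.random(X.shape) < 0.8     # True where a value was actually observed

# Masked Imputation Task (MIT): artificially hide ~20% of the *observed* entries,
# feed the model only the remaining visible values, and score it on the hidden ones.
artificial = observed & (rng.random(X.shape) < 0.2)
visible = observed & ~artificial
model_input = np.where(visible, X, 0.0)  # missing + artificially-masked entries zeroed

reconstruction = model_input             # placeholder for the network's output

# Observed Reconstruction Task (ORT): MAE on the values the model could see.
ort_mask = visible.astype(float)
ort_loss = np.abs((reconstruction - X) * ort_mask).sum() / max(ort_mask.sum(), 1.0)

# MIT loss: MAE only on the artificially-masked positions the model never saw.
mit_mask = artificial.astype(float)
mit_loss = np.abs((reconstruction - X) * mit_mask).sum() / max(mit_mask.sum(), 1.0)

loss = ort_loss + mit_loss               # the two objectives are trained jointly
```

Because the stand-in "model" simply echoes its input, the ORT term is zero here while the MIT term is not, which is exactly why MIT is needed: a model can score perfectly on visible values while learning nothing about imputing hidden ones.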