
Transplit

Faster and cheaper seasonal time series forecasting

A lightweight model for large-scale applications that matches the performance of state-of-the-art time series transformers while being 5 to 100 times smaller and faster.

The Transplit Architecture

Transplit is a time series transformer with an additional module, called SVS (Slice to Vector to Slice), that shortens the sequence to process.

The input sequence is split into slices of fixed length (e.g. 24 hours), and each slice is transformed into a vector representing a single day.

This allows the model to reason on a day-by-day basis and to process long sequences. The output vectors are eventually converted back to slices and concatenated to form the final output.
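The slice-to-vector-to-slice mechanics can be sketched as follows. This is a minimal illustration with NumPy; the function names (`to_slices`, `to_vectors`, `to_sequence`) and the fixed random projections are hypothetical stand-ins for what the actual SVS module learns inside the transformer:

```python
import numpy as np

def to_slices(x, period=24):
    """Split a 1-D series into fixed-length slices (e.g. one per day)."""
    n = len(x) // period
    return x[: n * period].reshape(n, period)

def to_vectors(slices, w_in):
    """Project each slice to a compact vector: one vector per day."""
    return slices @ w_in                  # (n_days, d_model)

def to_sequence(vectors, w_out):
    """Map day vectors back to slices and concatenate into a series."""
    return (vectors @ w_out).reshape(-1)  # (n_days * period,)

# 30 days of hourly data -> 30 day-vectors -> reconstructed hourly series
rng = np.random.default_rng(0)
x = rng.normal(size=720)                  # seq_len = 720, period = 24
w_in = rng.normal(size=(24, 84))          # d_model = 84, as in the example command below
w_out = rng.normal(size=(84, 24))
days = to_slices(x)                       # (30, 24)
vecs = to_vectors(days, w_in)             # (30, 84)
y = to_sequence(vecs, w_out)              # (720,)
```

The point of the design: the attention layers operate over 30 day-vectors instead of 720 hourly points, which is where the size and speed savings come from.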

Some results

We use a classical training configuration with the Adam optimizer and a learning rate of 1e-4, halved at every epoch. All experiments are run on an NVIDIA RTX A2000 (4 GB) with a batch size of 12. The CPU used for time measurements is an 11th-generation Intel i7.
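The halving schedule amounts to the following (a small sketch; `lr_at_epoch` is a hypothetical helper, and in PyTorch the same effect can be obtained with `torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.5)` stepped once per epoch):

```python
def lr_at_epoch(epoch, base_lr=1e-4, factor=0.5):
    """Learning rate after `epoch` halvings: 1e-4, 5e-5, 2.5e-5, ..."""
    return base_lr * factor ** epoch

print([lr_at_epoch(e) for e in range(3)])  # [0.0001, 5e-05, 2.5e-05]
```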

Datasets used:

  • ECL (Electricity Consuming Load)
  • IND (industrial private dataset from a grid operator)

Other models used:

Getting Started

First, clone this repository and cd into it:

git clone https://github.com/serval-uni-lu/transplit && cd transplit

Native installation

Requirements

  • Python 3.6+
  • PyTorch 1.10+

Optional

  • PyTorch installed with GPU support, to run it faster ;)

Install dependencies with:

pip install -r requirements.txt

Download the data with:

python utils/download_data.py

Docker usage

If you don't want to install Transplit natively and want an easy solution, you can use Docker.

First, pull NVIDIA's PyTorch image:

docker pull nvcr.io/nvidia/pytorch:22.05-py3

If you want to run the container with GPU support, you will need to set up Docker for it by installing the nvidia-container-runtime and nvidia-container-toolkit packages.

Then run the container from the transplit directory:

docker run -it --rm --gpus all --name transplit -v $(pwd):/workspace nvcr.io/nvidia/pytorch:22.05-py3

Once in it, install the dependencies with:

pip install -r requirements.txt

And download the data if not already done.

Run experiments

We recommend running the experiments via IPython, so that you can access the Python environment and debug interactively. Simply run ipython, then:

%run main.py --is_training 1 --root_path ./dataset/electricity/ --data_path electricity.csv --model_id electricity --model Transplit --batch_size 32 --train_epochs 1 --features SA --seq_len 720 --pred_len 720 --e_layers 1 --d_layers 1 --period 24 --n_filters 256 --d_model 84 --d_ff 256 --des 'Exp'

You can do the same for other models by picking the commands in models.sh.

Acknowledgement

Thanks to the Informer and Autoformer authors for their valuable work and their useful repositories:

https://github.com/thuml/Autoformer

https://github.com/zhouhaoyi/Informer2020

More info in LICENSE and NOTICE.

About

Code for the Transplit paper - energy load forecasting
