Want to convert your video to slow motion? Now you can!
This method generates the extra frames needed to convert an existing video to a higher frame rate.
The method uses convolutional neural networks (CNNs), so we recommend running it on a GPU.
This is a reference implementation of Video Frame Interpolation via Cyclic Fine-Tuning and Asymmetric Reverse Flow.
If you use our work, please cite the paper:
@inproceedings{hannemose2019video,
title={Video Frame Interpolation via Cyclic Fine-Tuning and Asymmetric Reverse Flow},
author={Hannemose, Morten and Jensen, Janus N{\o}rtoft and Einarsson, Gudmundur and Wilm, Jakob and Dahl, Anders Bjorholm and Frisvad, Jeppe Revall},
booktitle={Scandinavian Conference on Image Analysis},
pages={311--323},
year={2019},
organization={Springer}
}
For best results, you should enable cyclic fine-tuning, but this also makes the code run considerably slower.
Enable it by adding --cft true to the command line.
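For instance, the slow-motion conversion shown further down could be run with fine-tuning enabled like this (appending the flag to that command is our reading of the instructions above; check `python slow_movie.py --help` for the exact usage):

```bash
python slow_movie.py -m rain.mp4 -f 4 --cft true
```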
Here is an example comparing our method against After Effects and sepconv.
You can download our results on the UCF101 dataset: UCF101_eval_vfi-cft.zip.
To convert a video to slow motion, use slow_movie.py.
For example, to convert rain.mp4 to 4x slow motion:
python slow_movie.py -m rain.mp4 -f 4
This will output the movie as bmp files and put them in the folder slowed_movie_frames.
The generated frames will automatically be converted to a video if you have ffmpeg
installed. Instructions here.
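If you prefer to assemble the video yourself, a generic ffmpeg invocation like the one below also works; note that the frame-name pattern (%05d.bmp) and the output frame rate used here are assumptions, so adjust them to the files slow_movie.py actually writes:

```bash
ffmpeg -framerate 120 -i slowed_movie_frames/%05d.bmp -c:v libx264 -pix_fmt yuv420p slowed_movie.mp4
```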
You can download our pretrained model from dtu.dk or google drive.
This file should be placed in the root of the repository.
To interpolate the middle frame from only two frames, please see simple_example.py.
This is also a good starting point for modifying our code.
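For orientation, here is a minimal PyTorch sketch of the input/output handling around middle-frame interpolation. The DummyInterpolator below is a stand-in that simply averages the two frames, and the file names are hypothetical; refer to simple_example.py for the actual model loading and interpolation call.

```python
# Minimal sketch of middle-frame interpolation I/O in PyTorch.
# DummyInterpolator is a placeholder that blends the two inputs so the
# script runs end to end; it is NOT the network from this repository.
import torch
import torch.nn as nn
from torchvision import transforms
from PIL import Image

class DummyInterpolator(nn.Module):
    # Stand-in for the real interpolation network: just averages the inputs.
    def forward(self, frame0, frame1):
        return 0.5 * (frame0 + frame1)

to_tensor = transforms.ToTensor()

def load_frame(path):
    # Load an image as a (1, 3, H, W) float tensor with values in [0, 1].
    return to_tensor(Image.open(path).convert("RGB")).unsqueeze(0)

frame0 = load_frame("frame_0000.png")  # hypothetical input file names
frame1 = load_frame("frame_0001.png")

model = DummyInterpolator().eval()
with torch.no_grad():
    middle = model(frame0, frame1)  # frame halfway between the two inputs

transforms.ToPILImage()(middle.squeeze(0).clamp(0, 1)).save("frame_middle.png")
```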
The code is tested under:
- Python 3.6
- pytorch 1.1.0
It will most likely work with other versions, but we have not tested it.
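As one possible setup (this exact command is an assumption, not part of our instructions; on some platforms PyTorch 1.1.0 has to be installed from the official wheels at pytorch.org instead):

```bash
pip install torch==1.1.0
```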
This repository is actively maintained, so feel free to open an issue if you run into problems.