This is a work-in-progress, unofficial re-implementation of the real-time neural supersampling model proposed in *Neural Supersampling for Real-time Rendering* [Paper], using PyTorch and PyTorch Lightning. This is in no way endorsed by the original authors.
The model follows the original paper as closely as possible. However, there are some important differences:
- the original training data is not freely available. Therefore, Blender is used to render color, depth and motion data from scenes of the Blender Open Movies.
- the original paper seems to use motion data at the target resolution. Here, due to storage constraints, motion data at the source resolution is used.
- the original paper seems to use raw depth values for feature extraction. I found that high depth values negatively impact numerical stability and therefore use inverse depth, i.e. disparity, instead (see the sketch after this list).
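As a rough sketch of this preprocessing choice, converting raw depth to disparity can look like the following; the function name and epsilon value are assumptions, not taken from this repository.

```python
import torch


def depth_to_disparity(depth: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Convert raw depth to inverse depth (disparity).

    Very large depth values (e.g. sky or distant geometry) map to small,
    bounded disparities, which keeps feature extraction numerically stable.
    The epsilon guards against division by zero; its value is an assumption.
    """
    return 1.0 / (depth + eps)
```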
The training data can be rendered with Blender and its Cycles rendering engine. To do so, download any number of Blender Open Movie assets and configure them in `render_all.py`. Then either run `render_all.py` directly or use `run_blender_headless.sh` to run Blender via Docker.
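The exact asset configuration expected by `render_all.py` is specific to this repository; purely as a hypothetical illustration, an entry for a downloaded Open Movie scene might look like the following (paths and resolutions are made up).

```python
# Hypothetical asset configuration for render_all.py; the real structure may
# differ, so adapt it to what render_all.py actually expects.
ASSETS = [
    {
        "blend_file": "assets/spring/spring_scene.blend",  # made-up path to a downloaded Open Movie
        "output_dir": "data/spring",                       # where color/depth/motion frames are written
        "source_resolution": (480, 270),                   # low-resolution input renders
        "target_resolution": (1920, 1080),                 # high-resolution ground-truth renders
    },
]
```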
Training, evaluation and visualization are implemented as separate files in the `model` directory. Alternatively, take a look at the Jupyter notebook `NeuralSupersampling.ipynb`.
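For orientation, a PyTorch Lightning training run is typically wired up roughly as below; the module and data here are self-contained placeholders, not the actual network or dataset code from the `model` directory.

```python
# Minimal sketch of a PyTorch Lightning training run; the module below is a
# toy stand-in, not the supersampling network implemented in this repository.
import pytorch_lightning as pl
import torch
import torch.nn as nn
import torch.nn.functional as F


class PlaceholderSupersampler(pl.LightningModule):
    """Toy stand-in that upsamples 4x and applies a single convolution."""

    def __init__(self, lr: float = 1e-4):
        super().__init__()
        self.lr = lr
        self.net = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, low_res):
        up = F.interpolate(low_res, scale_factor=4, mode="bilinear", align_corners=False)
        return self.net(up)

    def training_step(self, batch, batch_idx):
        low_res, high_res = batch
        loss = F.l1_loss(self(low_res), high_res)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)


if __name__ == "__main__":
    # Replace these random tensors with the rendered Blender dataset.
    dataset = torch.utils.data.TensorDataset(
        torch.randn(8, 3, 270, 480), torch.randn(8, 3, 1080, 1920)
    )
    loader = torch.utils.data.DataLoader(dataset, batch_size=2)
    trainer = pl.Trainer(max_epochs=1, accelerator="auto", devices=1)
    trainer.fit(PlaceholderSupersampler(), loader)
```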
Remaining work:
- train the model to convergence
- optimize it using TensorRT and embed it in a real-time application
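The TensorRT step is not implemented yet; one common route is to export the trained network to ONNX first and then build a TensorRT engine from that file. The stand-in network, input layout and resolutions below are assumptions.

```python
# Hypothetical ONNX export as a first step towards TensorRT; the stand-in
# network and the (batch, frames, channels, height, width) layout at 480x270
# source resolution are assumptions and must match the real network's inputs.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StandInNetwork(nn.Module):
    """Placeholder for the trained supersampling network (not the real model)."""

    def forward(self, color, depth, motion):
        # The real network reconstructs a high-resolution frame from the
        # low-resolution history; this stand-in only touches all three inputs
        # and upsamples the most recent color frame so the trace is complete.
        x = color[:, -1] + 0.0 * depth[:, -1] + 0.0 * motion[:, -1, :1]
        return F.interpolate(x, scale_factor=4, mode="bilinear", align_corners=False)


model = StandInNetwork().eval()

dummy_color = torch.randn(1, 5, 3, 270, 480)
dummy_depth = torch.randn(1, 5, 1, 270, 480)
dummy_motion = torch.randn(1, 5, 2, 270, 480)

torch.onnx.export(
    model,
    (dummy_color, dummy_depth, dummy_motion),
    "supersampling.onnx",
    opset_version=17,
    input_names=["color", "depth", "motion"],
    output_names=["upsampled"],
)
# The resulting ONNX file can then be turned into a TensorRT engine,
# e.g. with NVIDIA's trtexec: trtexec --onnx=supersampling.onnx --fp16
```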