
DGNR: Density-Guided Neural Point Rendering of Large Driving Scenes

Paper

Overview

PyTorch implementation of DGNR.

The following instructions describe how to install the conda environment; please refer to the requirements file for the dependencies.

Data

Download the data from the official websites of KITTI, Waymo, Argoverse, or Cityscapes, or try the example data we provide here, and unpack it into the Data directory.

Train

Density-Guided Scene Representation

ns-process-data metashape --data xx/kitti11/images --xml xx/kitti11/camera.xml --output-dir xx  
python scripts/divideTransform.py
ns-train nerfacto --data xx --timestamp  kitti11_block_200
ns-export pointcloud --load-config outputs/kitti11/nerfacto/kitti11_block_200/config.yml --output-dir  outputs/kitti11/nerfacto/kitti11_block_200/
python scripts/scale_pointcloud.py
python scripts/pointcloudCat.py
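The last two steps rescale each exported block point cloud back to scene scale and concatenate the blocks into a single scene-level cloud. Below is a minimal sketch of that post-processing, assuming Open3D, a placeholder SCALE factor, and hypothetical paths; the actual scripts/scale_pointcloud.py and scripts/pointcloudCat.py may differ in paths, scale handling, and I/O.

import glob
import numpy as np
import open3d as o3d

SCALE = 1.0  # hypothetical factor mapping nerfstudio's normalized space back to scene scale

merged = o3d.geometry.PointCloud()
for path in sorted(glob.glob("outputs/kitti11/nerfacto/*/point_cloud.ply")):
    pcd = o3d.io.read_point_cloud(path)
    # Rescale the block's points (the scale_pointcloud.py step).
    pcd.points = o3d.utility.Vector3dVector(np.asarray(pcd.points) * SCALE)
    # Concatenate the block into the scene cloud (the pointcloudCat.py step).
    merged += pcd

o3d.io.write_point_cloud("outputs/kitti11/pointcloud_merged.ply", merged)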

Density-Guided Differentiable Rendering

python train.py --config configs/train_example.yaml --pipeline READ.pipelines.ogl.TexturePipeline --crop_size 256x256

*On devices without a display, training and testing can be run with headless rendering, but it is 2-3 times slower; a device with a display is preferable for rendering.

xvfb-run --auto-servernum --server-args="-screen 0 1280x360x24" python train.py --config configs/train_example.yaml --pipeline READ.pipelines.ogl.TexturePipeline --crop_size 256x256

The crop_size depends on your GPU memory, and the train_dataset_args parameters can be adjusted in the configs folder. High-resolution images should first be trained on a low-resolution downsampled version and then fine-tuned at higher resolution. For example, the initial training uses random_zoom in the range 1.0-2.0 with num_samples=3000; the model is then loaded and fine-tuned with random_zoom widened to 0.7-2.0 and num_samples=6000.
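For illustration, this two-stage schedule could be scripted by rewriting the dataset args between runs. This is a minimal sketch assuming the config is YAML and exposes train_dataset_args with random_zoom and num_samples keys as described above; the actual layout of configs/train_example.yaml may differ.

import yaml

with open("configs/train_example.yaml") as f:
    cfg = yaml.safe_load(f)

# Stage 1: low-resolution pre-training on strongly zoomed-out crops.
cfg["train_dataset_args"]["random_zoom"] = [1.0, 2.0]
cfg["train_dataset_args"]["num_samples"] = 3000
with open("configs/train_stage1.yaml", "w") as f:
    yaml.safe_dump(cfg, f)

# Stage 2: load the stage-1 checkpoint and fine-tune at higher resolution.
cfg["train_dataset_args"]["random_zoom"] = [0.7, 2.0]
cfg["train_dataset_args"]["num_samples"] = 6000
with open("configs/train_stage2.yaml", "w") as f:
    yaml.safe_dump(cfg, f)

The two stages are then launched with the same train.py command, pointing --config at the stage-1 and stage-2 files in turn.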

Viewer

python viewer.py --config downloads/kitti11.yaml

*For headless rendering:

python viewer_numpy.py --config downloads/kitti11.yaml
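Since the headless viewer renders to numpy arrays rather than an OpenGL window, frames can be written to disk for inspection. A minimal sketch, assuming float RGB frames in [0, 1]; the frames iterable is a hypothetical stand-in for whatever viewer_numpy.py produces.

import os
import numpy as np
import imageio.v2 as imageio

def save_frames(frames, out_dir="renders"):
    # Write each rendered frame to a numbered PNG.
    os.makedirs(out_dir, exist_ok=True)
    for i, frame in enumerate(frames):
        img = (np.clip(frame, 0.0, 1.0) * 255).astype(np.uint8)
        imageio.imwrite(os.path.join(out_dir, f"frame_{i:04d}.png"), img)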

Acknowledgments

This code builds on the following implementations: nerfstudio and READ. Many thanks to them!

Citation

If our work or code helps you, please consider citing our paper. Thank you!

@article{li2024dgnr,
  title={DGNR: Density-Guided Neural Point Rendering of Large Driving Scenes},
  author={Li, Zhuopeng and Wu, Chenming and Zhang, Liangjun and Zhu, Jianke},
  journal={IEEE Transactions on Automation Science and Engineering},
  year={2024},
  publisher={IEEE}
}
