This repository will contain the code for DeltaVSG, our framework to estimate 3D Variable Scene Graphs (VSG) for long-term semantic scene change prediction.

*Scene variability prediction using DeltaVSG.*

**Table of Contents:**
- [Credits](#credits)
- [Setup](#setup)
- [Examples](#examples)
## Credits

If you find this useful for your research, please consider citing our paper:
- Samuel Looper, Javier Rodriguez-Puigvert, Roland Siegwart, Cesar Cadena, and Lukas Schmid, "3D VSG: Long-term Semantic Scene Change Prediction through 3D Variable Scene Graphs", accepted for the IEEE International Conference on Robotics and Automation (ICRA), 2023. [ IEEE | ArXiv ]
  ```bibtex
  @inproceedings{looper22vsg,
    author    = {Looper, Samuel and Rodriguez-Puigvert, Javier and Siegwart, Roland and Cadena, Cesar and Schmid, Lukas},
    title     = {3D VSG: Long-term Semantic Scene Change Prediction through 3D Variable Scene Graphs},
    publisher = {IEEE International Conference on Robotics and Automation (ICRA)},
    year      = {2023},
    doi       = {10.1109/ICRA48891.2023.10161212},
  }
  ```
## Setup

- Clone the repository using SSH keys:

  ```bash
  export REPO_PATH=<path/to/destination>
  cd $REPO_PATH
  git clone git@github.com:ethz-asl/3d_vsg.git
  cd 3d_vsg
  ```
- Create a Python environment. We recommend using conda:

  ```bash
  conda create --name 3dvsg python=3.8
  conda activate 3dvsg
  pip install -r requirements.txt
  ```
  > **Note:** The installation is configured for the CPU version of torch. If you have CUDA, replace `cpu` in the above instructions and in `requirements.txt` with your CUDA version, e.g. `cu102` for CUDA 10.2.

- You're all set!
### Dataset

The dataset used in our experiments is based on the 3RScan Dataset [1] and the 3D SSG Dataset [2].
[1] Wald, Johanna, Armen Avetisyan, Nassir Navab, Federico Tombari, and Matthias Nießner, "RIO: 3D Object Instance Re-Localization in Changing Indoor Environments", in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 7658-7667, 2019.

[2] Wald, Johanna, Helisa Dhamo, Nassir Navab, and Federico Tombari, "Learning 3D Semantic Scene Graphs from 3D Indoor Reconstructions", in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3961-3970, 2020.
**Option 1: Download the pre-processed 3D VSG Embeddings.**

The pre-processed training and evaluation data for the examples is available for download from GDrive:
```bash
cd $REPO_PATH/3d_vsg
mkdir data
cd data
gdown 1ub_pdt7vXIlJVK0V4B_ydWSKiuX1wTP_
unzip processed.zip
rm processed.zip
```
**Option 2: Download the original data and process it.**
- Sign up for the 3RScan Dataset to get access to the data and the `download_3rscan.py` script.
- Download the semantic segmentation data:

  ```bash
  cd $REPO_PATH/3d_vsg
  mkdir -p data/raw
  python download_3rscan.py --out_dir data/raw/semantic_segmentation_data --type 'semseg.v2.json'
  ```
- Download the meta data file `3RScan.json` and place it in `data/raw`.
- Download the 3DSSG annotations:

  ```bash
  wget http://campar.in.tum.de/public_datasets/3RScan/3RScan.json -P data/raw
  wget https://campar.in.tum.de/public_datasets/3DSSG/3DSSG.zip
  unzip 3DSSG.zip -d data/raw
  rm 3DSSG.zip
  ```
- Process the raw data to get the 3D VSG Embeddings by setting `load` in `config/DatasetCfg.py` to `false` (see the sketch below) and running:

  ```bash
  python -m src.scripts.generate_dataset
  ```

  > **Note:** Any pre-processed dataset files currently in `data/processed` will be moved to `data/old_processed` and timestamped. The newly created dataset files will be generated in `data/processed`.
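The exact layout of `config/DatasetCfg.py` is not reproduced in this README, so the following is only a minimal sketch of what the `load` switch might look like; treat the class structure and defaults as assumptions.

```python
# Illustrative sketch only: the actual layout of config/DatasetCfg.py in this
# repository may differ. The point is that `load` controls whether processed
# embeddings are re-used or regenerated from the raw data.
class DatasetCfg:
    # True  -> load previously processed 3D VSG embeddings from disk
    # False -> rebuild the embeddings from the raw 3RScan / 3DSSG data
    load: bool = False
```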
### Pre-trained Models

The pre-trained network weights are available for download on GDrive:

```bash
cd $REPO_PATH/3d_vsg
mkdir pretrained
gdown 1hHmXSXtAUqqGNMn4vsEvc3XA3SLxpV4o
unzip models.zip -d pretrained
rm models.zip
```
## Examples

### Training

Before starting training, make sure you have set up the data as explained above. To train a new model, run:

```bash
python -m src.scripts.train_variability
```

> **Note:** Additional dataset parameters can be configured in `config/DatasetCfg.py`. Additional model parameters can be configured in the hyperparameter dictionary in `src/scripts/train_variability.py`.
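The hyperparameter dictionary itself is defined inside `src/scripts/train_variability.py` and is not reproduced here; the snippet below is a hypothetical example of the kind of dictionary you would edit, with made-up keys and values.

```python
# Hypothetical training hyperparameter dictionary; the actual keys and default
# values in src/scripts/train_variability.py may differ.
hyperparams = {
    "learning_rate": 1e-3,  # optimizer step size
    "batch_size": 16,       # scene graph samples per batch
    "num_epochs": 100,      # number of passes over the training split
    "hidden_dim": 64,       # width of the GNN hidden layers
}
```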
### Inference

To infer 3D Variable Scene Graphs, first set up the output directory and download the data splits (if you have not already done so):

```bash
cd $REPO_PATH/3d_vsg
mkdir results
cd results
gdown 1mT-agKOkB8ebg6NsliOnIReRO81PjHQL
gdown 1jO4rG1qlYj7MHqxNx-6Ql_igv3lsi_79
```
Then run model inference:

```bash
python -m src.scripts.inference
```
> **Note:** Additional dataset parameters can be configured in the `InferenceCfg` subclass in `config/DatasetCfg.py`. Additional model parameters can be configured in the hyperparameter dictionary in `src/scripts/inference.py`.
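As above, the following is only a hedged sketch of where those inference settings might live; the actual fields of the `InferenceCfg` subclass in `config/DatasetCfg.py` are likely named differently.

```python
# Illustrative sketch only: the field names below are hypothetical and are not
# the actual attributes of InferenceCfg in config/DatasetCfg.py.
class DatasetCfg:
    class InferenceCfg:
        splits_path: str = "results/splits.json"  # hypothetical: path to the downloaded splits
        output_dir: str = "results"               # hypothetical: where predicted VSGs are written
```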
### Evaluation

To evaluate the performance of a 3D VSG model, run:

```bash
python -m src.scripts.eval
```

> **Note:** The splits path, dataset root, model weights path, and hyperparameter dictionary can be configured in `src/scripts/eval.py`.
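The concrete variable names inside `src/scripts/eval.py` are not reproduced in this README; the sketch below uses hypothetical names and paths purely to illustrate the values the note above refers to.

```python
# Hypothetical configuration block for src/scripts/eval.py; the real variable
# names, paths, and hyperparameter keys in the script may differ.
SPLITS_PATH = "results/test_split.json"   # data split to evaluate on
DATASET_ROOT = "data/processed"           # root of the processed dataset
MODEL_WEIGHTS = "pretrained/model.pt"     # checkpoint from the GDrive download
hyperparams = {"hidden_dim": 64}          # must match the trained model
```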