
VO_benchmark

This repository holds all the code for our EECS 568: Mobile Robotics final project.

VO_benchmark: Impact of Image Feature Detector and Descriptor Choice on Visual Odometry
Chunkai Yao, Danish Syed, Joseph Kim, Peerayos Pongsachai, Teerachart Soratana

Installation

Please follow the installation instructions from the pySLAM v2 repository. Our implementation uses the conda environment installation.
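For reference, a minimal sketch of the conda-based setup (script names taken from the pySLAM README at the time of writing; treat that repository as authoritative):

$ git clone --recursive https://github.com/luigifreda/pyslam.git
$ cd pyslam
$ ./install_all_conda.sh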

Datasets

We used the first 10 trajectory sequences (00-09) from the KITTI odometry dataset for evaluation. Download the KITTI odometry data set (grayscale, 22 GB) and store it in the data/dataset folder with the following directory structure:

├── data/dataset
    ├── sequences
        ├── 00
        ...
        ├── 09
    ├── poses
        ├── 00.txt
        ...
        ├── 09.txt
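For reference, assuming the official archive names data_odometry_gray.zip and data_odometry_poses.zip (both unpack into a top-level dataset/ folder), the layout above can be produced roughly as follows:

$ mkdir -p data
$ unzip data_odometry_gray.zip -d data     # dataset/sequences/00 ... 21
$ unzip data_odometry_poses.zip -d data    # dataset/poses/00.txt ... 10.txt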

Usage Instructions

Once the environment is set up, there are two steps to generating the Visual Odometry (VO) results on the KITTI dataset.

Running the VO for multiple Detector and Descriptor Combinations

1. Create the detector and descriptor configurations in the test_configs dictionary in pyslam/feature_tracker_configs.py, as sketched below.
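For illustration, an entry in test_configs might look like the following. This is a sketch based on pySLAM's FeatureTrackerConfigs conventions; the exact keys and import paths used in this repository may differ.

from feature_tracker import FeatureTrackerTypes
from feature_types import FeatureDetectorTypes, FeatureDescriptorTypes

# Hypothetical entry: Shi-Tomasi corners tracked with Lucas-Kanade, no descriptor
test_configs = {
    'T01_SHI_NONE': dict(num_features=2000,
                         detector_type=FeatureDetectorTypes.SHI_TOMASI,
                         descriptor_type=FeatureDescriptorTypes.NONE,
                         tracker_type=FeatureTrackerTypes.LK),
}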

2. Run the VO experiment over all trajectory sequences:

$ cd src
$ python run_vo.py

The output will be dumped into the data/results folder with the following directory structure:

├── data/results
    ├── T01_SHI_NONE
        ├── 00.txt
        ...
        ├── 09.txt
    ├── T60_D2NET_D2_NET
        ├── 00.txt
        ...
        ├── 09.txt

Evaluating the VO for multiple Detector and Descriptor Combinations

1. Install the evo package for evaluating visual odometry.
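evo is available on PyPI; its documentation recommends installing it with:

$ pip install evo --upgrade --no-binary evo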

2. Run the evaluation with the evo package:

$ cd src
$ python eval_vo.py

This will calculate the Absolute Trajectory Error (ATE) and Relative Pose Error (RPE) for all configurations and trajectory sequences. The results can be found in the data/output folder.
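For a single configuration and sequence, the equivalent manual evo invocations look roughly like this (paths follow the directory layout above):

$ evo_ape kitti data/dataset/poses/00.txt data/results/T01_SHI_NONE/00.txt -va
$ evo_rpe kitti data/dataset/poses/00.txt data/results/T01_SHI_NONE/00.txt -va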

3. Collate the ATE and RPE results into a table:

$ cd src
$ python generate_results.py

The resulting table is written to an Excel file at data/output/exported_data.xlsx.
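For reference, a collation step of this kind boils down to something like the sketch below; the per-configuration file layout and column names here are assumptions, not this repository's actual schema.

import glob
import os

import pandas as pd

rows = []
# Hypothetical layout: one CSV of ATE/RPE metrics per configuration
for path in glob.glob('data/output/*/metrics.csv'):
    df = pd.read_csv(path)
    df['config'] = os.path.basename(os.path.dirname(path))
    rows.append(df)

# Writing .xlsx requires an Excel backend such as openpyxl
pd.concat(rows, ignore_index=True).to_excel('data/output/exported_data.xlsx', index=False)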

Troubleshooting

Use src/run_vo.ipynb to troubleshoot issues with any configuration.

Acknowledgements

The authors would like to thank the maintainers of the luigifreda/pyslam repository for collating multiple visual odometry feature detectors and descriptors.
