This is a personal Dockerfile hub for SLAM algorithms.
Since a SLAM system consists of various modules, it is quite hard for a beginner to compile and run them from scratch.
This DockerSLAM repository is aimed at SLAM beginners: all you need to do is install Docker beforehand!
| LiDAR SLAM | Link to original repository | Link to Docker Hub |
|---|---|---|
| FAST-LIO2 | FAST-LIO2 repository | Docker hub |
| LEGO-LOAM | LEGO-LOAM repository | Docker hub |
| LIO-SAM | LIO-SAM repository | Docker hub |

| Visual SLAM | Link to original repository | Link to Docker Hub |
|---|---|---|
| DSO | DSO repository | Docker hub |
| ORB-SLAM2 | ORB-SLAM2 repository | Docker hub |
| VINS-Mono | VINS-Mono repository | Docker hub |
| RTABMap | RTABMap-ROS repository | Docker hub |
| PL-VINS | PL-VINS repository | Docker hub |
| PL-VIO | PL-VIO repository | Docker hub |
| ProSLAM | ProSLAM repository | Docker hub |
Please follow the official guidance link to install Docker.
Make sure to perform the post-installation steps, especially adding your user to the docker group.
newgrp docker
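A minimal sketch of those post-installation steps, following the standard Docker setup (skip groupadd if the docker group already exists):
# Add your user to the docker group and apply the new group membership
sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker
# Verify that Docker works without sudo
docker run --rm hello-world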
In order to use the --gpus all option when running a container, please install the NVIDIA Container Toolkit properly.
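As a quick sanity check (the CUDA image tag below is only an example, not something this repository provides), you can confirm that the toolkit is set up correctly:
# Should print the host's GPU table from inside a container
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi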
First, check the available algorithms (vins-mono, lio-sam, ...) that have a Docker image.
You can either build the image yourself:
./build.sh <target_algorithm>
or just pull it from Docker Hub:
docker pull hyeonjaegil/<target_algorithm>:latest
Check that the image is ready with the docker images command.
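For example, with lio-sam as the target (assuming the image follows the naming scheme above):
docker pull hyeonjaegil/lio-sam:latest
docker images | grep lio-sam   # the image should now appear in the list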
Then start a container:
./run.sh <target_algorithm>
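Once the container is running, you will often want a second shell inside it, for example to play a rosbag while the algorithm runs. A minimal sketch (the container name is whatever docker ps reports for your run):
docker ps                               # find the running container's name or ID
docker exec -it <container_name> bash   # open an additional shell inside it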
Customizing run.sh File
- You can modify run.sh to use extra options, such as the --volume option to mount your own directories. For example, mount the local ~/Downloads/Dataset folder into the container's /dataset folder:
# Inside run.sh: add the "$HOME/Downloads/Dataset" volume line below.
docker run --gpus all --rm -it --ipc=host --net=host --privileged \
    --env="DISPLAY" \
    --volume="/etc/localtime:/etc/localtime:ro" \
    --volume="$HOME/Downloads/Dataset:/dataset" \
    ${docker_image}
- If you don't want the container to be deleted when you exit, remove the --rm option; you can then reattach to the stopped container later, as sketched below.
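A minimal sketch of reattaching, assuming --rm has been removed (the container name is whatever docker ps -a reports):
docker ps -a                        # list stopped containers and their names
docker start -ai <container_name>   # restart the container and attach to it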
Be aware that the current run.sh file grants the container very broad privileges:
- It uses all available GPUs (--gpus all),
- gives maximum shared memory (--ipc=host),
- shares the host's network stack (--net=host),
- gives access to all devices on the host (--privileged),
- AND allows the container to connect to the X server from any host (xhost +).