
Integrate Stereo VINS (MSCKF_VIO) demo in Colaboratory #10

Open
mslavescu opened this issue Feb 3, 2018 · 16 comments


@mslavescu
Member

Project code:

Robust Stereo Visual Inertial Odometry for Fast Autonomous Flight
https://github.com/KumarRobotics/msckf_vio

Amazing demo video here:
https://youtu.be/jxfJFgzmNSw

@angelocarbone
Member

@mslavescu said:
If anyone makes it work, please share it there, and let's talk more in #1-mvp-vbacc and #3-mvp-smart-camera about how to integrate VINS cameras.
I'll try to collect some datasets with the Lattice Embedded Vision Kit (dual camera) and phone accelerometer/gyro, so we can test scenarios from the car at faster speeds.

@ppirrip
Contributor

ppirrip commented Feb 4, 2018

I just finished a first pass of the paper and it is very interesting (no deep nets, nothing but good old dynamics and filtering). Its GitHub code requires ROS Kinetic, and I am a bit confused about which Docker image I should pull. Should I get kinetic-robot, kinetic-perception, kinetic-ros-core, or kinetic-ros-base?
BTW, what is the best resource you'd recommend for picking up ROS in general?

@angelocarbone
Member

angelocarbone commented Feb 4, 2018

I don't have experience with ROS.

@ppirrip
Contributor

ppirrip commented Feb 4, 2018

The plan is to decouple https://github.com/KumarRobotics/msckf_vio from ROS.

@ppirrip ppirrip self-assigned this Feb 4, 2018
@ppirrip
Contributor

ppirrip commented Feb 6, 2018

Seems like I am able to build the code within a ROS Kinetic image; it was relatively painful, maybe due to my lack of experience with rosdep. I am planning to see how to "run" it in some meaningful way, then pack it up with snap and see if it is as good as it advertises.

Update: docker image available at https://hub.docker.com/r/ppirrip/msckf_vio/, package installed in /msckf_vio, and the roslaunch script runs without errors.

@mslavescu
Member Author

Today I'll try to collect a rosbag with images from a PS4Eye stereo camera and 2 x PS3Eye cameras (in a stereo setup).

Will also try to add GPS and accel/gyro from an Android phone.

Then we should also try to test this algorithm live.

@ppirrip
Contributor

ppirrip commented Feb 7, 2018

I downloaded one of the bags from the GitHub repo last night (~10 GB) and will try to play it back today. We will need a config file set up for the PS3Eye too, I think.

@mslavescu
Member Author

I got a ROS launch file for the 2 x PS3Eye cameras.

I calibrated them individually, but I could not do it in stereo mode.

This should help, I'll try it tonight:
https://github.com/ossdc/stereovision

For the IMU/GPS from Android phone I'll try to make this work:

https://github.com/OSSDC/ros_android_sensors

I tried to record a rosbag this morning, but the laptop hung after a few minutes; I need to reduce the number of topics and compress the images faster.

I'll post tonight the scripts and datasets I got so far.

@mslavescu
Member Author

I like the PS3Eye more than the PS4Eye; with the PS3Eye, auto exposure works pretty well and it also seems to focus farther away.

@ppirrip
Contributor

ppirrip commented Feb 7, 2018

I played back the flight data bag this morning, but I'm not sure how it interacts with the nodes started by the program. I installed rviz now and will try it later. BTW, the two nodes subscribe and publish the following topics:

image_processor node

Subscribed Topics:
imu (sensor_msgs/Imu): IMU messages are used for compensating rotation in feature tracking and for 2-point RANSAC.
cam[x]_image (sensor_msgs/Image): Synchronized stereo images.

Published Topics:
features (msckf_vio/CameraMeasurement): Records the feature measurements on the current stereo image pair.
tracking_info (msckf_vio/TrackingInfo): Records the feature tracking status for debugging purposes.
debug_stereo_img (sensor_msgs/Image): Draws the current features on the stereo images for debugging purposes. Note that this debugging image is only generated upon subscription.

vio node

Subscribed Topics:
imu (sensor_msgs/Imu): IMU measurements.
features (msckf_vio/CameraMeasurement): Stereo feature measurements from the image_processor node.

Published Topics:
odom (nav_msgs/Odometry): Odometry of the IMU frame, including a proper covariance.
feature_point_cloud (sensor_msgs/PointCloud2): Shows the current features in the map which are used for estimation.
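A quick way to sanity-check the odom output during playback is a minimal rospy listener. This is a sketch, not part of msckf_vio; the covariance layout (a flat, row-major 36-element array over x, y, z, rot_x, rot_y, rot_z) comes from the nav_msgs/Odometry message definition, and the node/topic names are assumptions.

```python
import numpy as np


def pose_covariance(odom_cov):
    """nav_msgs/Odometry stores the pose covariance as a flat row-major
    36-element array; return it as a 6x6 matrix ordered
    (x, y, z, rot_x, rot_y, rot_z)."""
    return np.asarray(odom_cov, dtype=float).reshape(6, 6)


def on_odom(msg):
    # Print the 1-sigma position uncertainty so filter divergence is
    # obvious during bag playback.
    cov = pose_covariance(msg.pose.covariance)
    print("position sigma (m):", np.sqrt(np.diag(cov)[:3]))


if __name__ == "__main__":
    # rospy is only importable inside a ROS environment, hence the
    # late import; "odom" must match the vio node's published topic.
    import rospy
    from nav_msgs.msg import Odometry
    rospy.init_node("odom_listener")
    rospy.Subscriber("odom", Odometry, on_odom)
    rospy.spin()
```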

@mslavescu
Member Author

I finally got a dataset with phone sensor data and images (compressed) from 4 cameras; I will publish it in the evening.

We need to figure out a way to collect uncompressed images; the PS4Eye ROS method is not reliable, so I'll try OpenCV to save lossless PNGs.

@ppirrip
Contributor

ppirrip commented Feb 9, 2018

How do you hook up the PS cameras to your computer (I assume), anyway? Just curious. I will play with your bag file later, then. Any good ROS reference material you are aware of, especially on publish/subscribe topics?

@mslavescu
Member Author

mslavescu commented Feb 9, 2018

I use 2 USB2 ports to connect the 2 x PS3Eye and one powered USB3 hub for the PS4Eye (the USB3 port on the computer doesn't provide enough power). All are connected to my laptop, which was powered, as I have a 110V outlet in my car. The USB3 hub was powered from that outlet as well.

I will just provide a zip with images and we will generate the ROS messages on the fly if needed.

Large rosbags are not easy to use, because rosbag play scans them fully every time we play them back, which is very slow.
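Iterating frames out of such a zip in capture order needs nothing beyond the standard library; converting each frame to a sensor_msgs/Image for publishing would then go through cv_bridge, which is omitted here. A sketch, assuming entries are named with nanosecond timestamps like `cam0/1517950000000000000.png` (that naming scheme is my assumption, not the published dataset's):

```python
import zipfile


def iter_frames(zip_path):
    """Yield (timestamp_ns, png_bytes) pairs from a zip whose entries
    are named like 'cam0/<nanoseconds>.png', sorted by timestamp so a
    player can republish them in capture order."""
    with zipfile.ZipFile(zip_path) as zf:
        names = [n for n in zf.namelist() if n.endswith(".png")]
        # Sort by the numeric timestamp embedded in the filename, not
        # lexicographically (which would misorder '9' after '10').
        names.sort(key=lambda n: int(n.rsplit("/", 1)[-1][:-4]))
        for name in names:
            t_ns = int(name.rsplit("/", 1)[-1][:-4])
            yield t_ns, zf.read(name)
```

A replay loop would then sleep for the delta between consecutive timestamps before publishing each frame, which avoids the full-bag scan that makes large rosbag playback slow.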


@mslavescu
Member Author

This would be good to try also:

NanoMap: Fast, Uncertainty-Aware Proximity Queries with Lazy Search of Local 3D Data - https://www.youtube.com/watch?v=zWAs_Djd_hA
https://github.com/peteflorence/nanomap_ros

@mslavescu
Member Author

This would be good to try also:

OKVIS: Open Keyframe-based Visual-Inertial SLAM.
https://github.com/ethz-asl/okvis
