
Tweaking the model for partial azimuth FOV Lidar #45

Open
boazMgm opened this issue Feb 27, 2022 · 7 comments


boazMgm commented Feb 27, 2022

Hi,
My Lidar's azimuth FOV is only ~100 [deg].
What would be the best way to tweak the model or its configuration so that it works?
Currently the range images (and also the residual images) are very sparse at the left and right sides, and I think that is one of the reasons for the poor performance I get.
Thanks

@Chen-Xieyuanli
Member

Hey @boazMgm, you are right. Range-image-based methods may not work well with low-resolution LiDAR sensors, in either azimuth or inclination. To get good performance, you may try a 3D CNN operating directly on the point clouds.

We are currently working on a 3D-CNN-based LiDAR-MOS method. We will submit it to IROS today and will also release the code soon.


boazMgm commented Mar 1, 2022

Thanks :)
My Lidar has only 32 channels instead of the 64 in the KITTI dataset.
It also has a limited azimuth FOV of 100 [deg].
I thought of the following tweaks:

  1. generating the residual images with a height of 32 pixels (instead of 64).
  2. changing the spherical projection to `proj_x = 0.5 * (yaw / (c*np.pi) + 1.0)`, where `c = 100/360`.

I have tried both, but I still don't get the results I expected.
Is there anything else you think may help?
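For reference, tweak 2 could be sketched like this (my own sketch, not code from this repo; the function name, the default width, and the assumption that the FOV is centered on the sensor's forward axis are all mine):

```python
import numpy as np

def project_azimuth(points, width=900, fov_deg=100.0):
    """Map points to range-image columns for a partial azimuth FOV.

    points: (N, 3) array of x, y, z in the sensor frame, with the FOV
    assumed centered on the +x (forward) axis.
    Returns integer column indices in [0, width - 1].
    """
    yaw = -np.arctan2(points[:, 1], points[:, 0])  # radians, 0 along +x
    half_fov = 0.5 * np.deg2rad(fov_deg)           # 50 deg for a 100 deg FOV
    proj_x = 0.5 * (yaw / half_fov + 1.0)          # in [0, 1] inside the FOV
    cols = np.floor(proj_x * width)
    return np.clip(cols, 0, width - 1).astype(np.int32)
```

With yaw in radians spanning ±fov_deg/2, this normalization is equivalent to `c = fov_deg/360` in the formula above; if your yaw convention spans the full ±fov_deg instead, the constant changes accordingly. Points outside the FOV get clamped to the border columns here and should probably be dropped instead.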

@Chen-Xieyuanli
Member

> Thanks :) My Lidar has only 32 channels instead of the 64 in the Kitti dataset. It has also limited fov in the azimuth of 100 [deg]. I thought of the following tweaks:
>
> 1. generating the residual images with 32 pixels in the height (instead of 64).
> 2. changing the spherical projection to: `proj_x = 0.5 * (yaw / (c*np.pi) + 1.0)` where `c = 100/360`
>
> I have tried both but I still don't get the results I have expected. Is there anything you think may help?

One thing you should check is the FOV parameters in inclination. For a 64-beam Velodyne they are fov_up=3.0 and fov_down=-25.0, and they should be different for a 32-beam LiDAR.
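A sketch of the corresponding inclination projection (again my own code, not the repo's; I believe the VLP-32C covers roughly +15 to -25 [deg] vertically, but please verify against your sensor's spec):

```python
import numpy as np

def project_inclination(points, height=32, fov_up=15.0, fov_down=-25.0):
    """Map points to range-image rows using the sensor's vertical FOV.

    fov_up / fov_down are in degrees (fov_down negative); for the
    64-beam Velodyne in KITTI they would be 3.0 / -25.0 instead.
    Returns integer row indices in [0, height - 1].
    """
    fov_up_rad = np.deg2rad(fov_up)
    fov_total = np.deg2rad(fov_up - fov_down)   # full vertical FOV in rad
    depth = np.linalg.norm(points, axis=1)
    pitch = np.arcsin(points[:, 2] / depth)     # inclination angle
    proj_y = (fov_up_rad - pitch) / fov_total   # 0 at top beam, 1 at bottom
    rows = np.floor(proj_y * height)
    return np.clip(rows, 0, height - 1).astype(np.int32)
```

If the pretrained weights were trained with the 64-beam KITTI values, feeding them 32-beam data projected with the wrong fov_up/fov_down will shift every object vertically in the image, which alone can ruin the predictions.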

Changing the projection function could be an interesting idea, and we haven't tested it before.

Let's keep this issue open and see whether any other interesting ideas pop up from other users.


boazMgm commented Mar 1, 2022

Thanks.
Just a fix: `c = 100/180`

@Psyclonus2887

Another question: have you tested the result on a small-FOV LiDAR with a non-repetitive scanning pattern, like the Livox series? They can also generate dense point clouds, so perhaps the FOV problem is not a big deal?


boazMgm commented Mar 1, 2022

No, I haven't. I'm using a few recordings of the VLP-32C that I have.
This Lidar has a 360 [deg] azimuth FOV, but in the recordings I have it was limited (by software) to ~100 [deg].

@Psyclonus2887

Hello, it's me again. After valuable communication with the author, I tried accumulating the point cloud for 1 s, which gives 100% coverage of the FoV thanks to its non-repetitive scanning mode. The range image built with the spherical projection is now very dense. However, the predictions using the pretrained network are still bad: again, all the points are classified as moving objects.
[screenshot: prediction result]
Since the range image is really dense now, is the issue only in the azimuth? My FoV is 80x25 [deg], and the projection parameters are set as below:
[screenshot: projection parameters]
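One thing worth checking is whether the range-image size still matches the sensor: as far as I know, the pretrained KITTI model assumes a 2048x64 image covering a full 360 [deg] sweep, so for an 80x25 [deg] FoV the width should come from the actual angular resolution. A back-of-envelope sketch (the resolution and line count below are placeholders, not values from this thread):

```python
# Size a range image from the FoV and angular resolution instead of
# reusing KITTI's 2048 x 64 (which assumes a full 360 deg sweep).
fov_az_deg = 80.0   # horizontal FoV of the sensor
az_res_deg = 0.2    # assumed horizontal resolution after 1 s accumulation
n_lines = 64        # assumed effective vertical lines after accumulation

width = int(round(fov_az_deg / az_res_deg))  # 400 columns
height = n_lines                             # 64 rows
print(width, height)                         # prints: 400 64
```

If the network is instead fed an image stretched to the KITTI width, every object appears roughly 4-5x wider than anything seen during training, which could explain the blanket "moving" predictions.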
