Real world application of pylot/perception/point_cloud.py #253
Hi, could you try the following? We have a test here which might be a useful point of reference: pylot/tests/test_point_cloud.py, lines 151 to 171, at commit 388d4f7.
Hey, thanks for the quick response! I've produced some semi-usable results now with the following approach: I set the lidar location to a real-world location within the point cloud, then set the point cloud origin at the LidarSetup location (so the sensor is at x = 0, y = 0, z = 0). The CameraSetup is also at location x = 0, y = 0, z = 0, with all rotations zero for both the CameraSetup and the LidarSetup.

In get_pixel_location(), I bypass the fwd_points variable and pass self.points directly to get_closest_point_in_point_cloud(). fwd_points seems to be unaffected by rotating either the LidarSetup's or the CameraSetup's yaw: I've tried 0, 90, and 180 for each separately, and the resulting fwd_points are still the same. Correct me if I'm wrong, but fwd_points should return approximately the other half of self.points when yaw = ±180 degrees, right?

Since lidar_type is set to 'velodyne', the output of get_pixel_location() is in camera coordinates; it is essentially just the x and y from p3d, with z being the distance to the found location in camera coordinate space. My workaround for getting an actual world location is to take the closest_index from get_closest_point_in_point_cloud() and look up that point in PointCloud.global_points. This usually returns a point within a few meters of what I'm looking for.

The biggest hurdle now is the apparent inability to "turn" on the yaw axis at the given location: I get the same output point with yaw = 0 and yaw = 180. In my specific use case I'm looking for road signs, and because of their high reflectivity I can filter the point cloud down to almost only road signs. However, I still can't be certain of targeting the right sign if more than one is present in a small area.
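For what it's worth, here is a minimal NumPy sketch (independent of Pylot; the function name and the yaw convention are my assumptions) of the behaviour described above: rotating the cloud about the z-axis by yaw = 180° before keeping forward (x > 0) points should select roughly the opposite half of the cloud, so identical output for yaw = 0 and yaw = 180 would indeed suggest the rotation is not being applied.

```python
import numpy as np

def forward_points(points, yaw_deg):
    """Rotate points about the z-axis by yaw_deg, then keep those in
    front of the sensor (x > 0 in the rotated frame)."""
    yaw = np.radians(yaw_deg)
    rot = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                    [np.sin(yaw),  np.cos(yaw), 0.0],
                    [0.0,          0.0,         1.0]])
    rotated = points @ rot.T
    return points[rotated[:, 0] > 0.0]

# A symmetric cloud: one point ahead of the sensor, one behind.
cloud = np.array([[5.0, 0.0, 1.0],
                  [-5.0, 0.0, 1.0]])
print(forward_points(cloud, 0))    # keeps the point at x = +5
print(forward_points(cloud, 180))  # keeps the point at x = -5
```

With this convention the two yaw values select disjoint halves of a symmetric cloud, which is the sanity check I would run first.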
I've been trying to use point_cloud.py and utils.py to locate a point from an RGB image in a point cloud using global coordinates. I have described the issue further here:
https://stackoverflow.com/questions/71999715/finding-a-point-in-3d-point-cloud-given-a-pixel-in-2d-image
Am I wrong in assuming the code can be used for this real-world application, or are the methods in fact bugged? Namely get_pixel_location() and/or its helper methods. I have tried a number of different inputs, but the output of get_pixel_location() is usually just the transposed pixel matrix plus z = inf.
Should the points around the lidar be oriented assuming the lidar is at the origin (x = 0, y = 0, z = 0)?
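To illustrate what a sensor-centric frame would mean here (a sketch of one common convention, not necessarily what Pylot assumes): the cloud is translated so the lidar's world location maps to the origin.

```python
import numpy as np

def to_sensor_frame(world_points, lidar_world_location):
    """Translate world-frame points so the lidar sits at (0, 0, 0).
    Rotation is omitted; a full transform would also apply the
    sensor's orientation."""
    return world_points - np.asarray(lidar_world_location)

world = np.array([[12.0, 4.0, 1.5]])
local = to_sensor_frame(world, [10.0, 4.0, 0.0])
print(local)  # [[2.0, 0.0, 1.5]]
```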
I am not getting any sensible output from calls to points._to_camera_coordinates() either; the result is a large array of "-inf, -200.9, inf" entries with a little variation in the middle value.
Hopefully I am just using the methods in the wrong manner and they do work. I would appreciate any feedback. Thanks.
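As a sanity check of the underlying math, here is a self-contained sketch of pixel-to-point lookup (a generic pinhole model; the function names, FOV-based intrinsics, and camera axis convention are my assumptions, not Pylot's): back-project the pixel to a ray in the camera frame, then pick the cloud point with the smallest perpendicular distance to that ray.

```python
import numpy as np

def pixel_to_ray(u, v, width, height, fov_deg):
    """Back-project a pixel to a unit ray in the camera frame
    (x right, y down, z forward) using a pinhole model."""
    focal = width / (2.0 * np.tan(np.radians(fov_deg) / 2.0))
    cx, cy = width / 2.0, height / 2.0
    ray = np.array([(u - cx) / focal, (v - cy) / focal, 1.0])
    return ray / np.linalg.norm(ray)

def closest_point_to_ray(points, ray):
    """Return the cloud point with the smallest perpendicular
    distance to the ray through the origin."""
    proj = points @ ray                  # distance along the ray
    perp = points - np.outer(proj, ray)  # perpendicular component
    return points[np.argmin(np.linalg.norm(perp, axis=1))]

cloud = np.array([[0.0, 0.0, 10.0],   # straight ahead of the camera
                  [3.0, 0.0, 10.0]])  # off to the side
ray = pixel_to_ray(400, 300, 800, 600, fov_deg=90)  # image centre
print(closest_point_to_ray(cloud, ray))  # the point straight ahead
```

If this generic version behaves as expected on your filtered road-sign cloud but get_pixel_location() does not, that would help narrow the problem down to the coordinate-frame handling in the library.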