Camera Pose #11
Comments
Hi, I have done some work like you mentioned.
I have simulated data from Blender, with the corresponding matrix_world for the camera for every frame. But using this toolbox I cannot reconstruct the same point cloud that I see in Blender; it just shows a mess. The Blender camera matrix_world is a 3×3 rotation plus a fourth column holding the x, y, and z translation.
Hey @YJonmo, were you able to solve this issue? I revisited this code for a custom dataset, using simulated data from Gazebo, and I, too, am getting a messy point cloud. I suspect there could be a problem with my transformation matrix, but there is no way for me to verify whether that is indeed the issue. (I obtained the transformation matrix from Gazebo itself.)
Yes mate, finally solved. A coordinate transformation is needed (VSLAMMappingFromBlender2DSO). Please check this repo: https://github.com/GSORF/Visual-GPS-SLAM/blob/master/02_Utilities/BlenderAddon/addon_vslam_groundtruth_Blender280.py#L34
Hi @YJonmo
I guess your format should eventually look like this:
Above is the translation. You also need to convert the rotation matrix (the 3×3). For the conversion, the Blender matrix below was multiplied in and then transposed. In your case, you might need to convert X to -X and Y to -Y.
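The sign flips described above can be sketched in numpy. This is an assumption, not the repo's own code: Blender cameras look down -Z with +Y up, while most RGB-D fusion code expects an OpenCV-style frame (+Z forward, +Y down), so the exact flips must be verified against your reconstruction.

```python
import numpy as np

# Assumed convention change: right-multiplying the camera-to-world matrix
# by diag(1, -1, -1, 1) flips the camera's local Y and Z axes, turning a
# Blender-style camera frame into an OpenCV-style one. Verify against
# your own data; some pipelines need different flips.
FLIP_YZ = np.diag([1.0, -1.0, -1.0, 1.0])

def blender_to_cv(matrix_world):
    """matrix_world: 4x4 camera-to-world matrix exported from Blender."""
    return matrix_world @ FLIP_YZ
```

Note that right-multiplication changes only the camera's local axes; the translation column (the camera position in world space) is left untouched.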
Hello @YJonmo
These functions might be useful for you (conversion between the quaternion and the world matrix): def world2quatr(World): #
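The body of world2quatr was not captured in this thread, so here is a hedged pure-numpy reconstruction of what such a function typically does, assuming a 4×4 camera-to-world matrix in and an [x, y, z, qw, qx, qy, qz] pose out:

```python
import numpy as np

# Hedged reconstruction (the original implementation was not posted).
# Uses a simple trace-based quaternion extraction, which is stable as
# long as the rotation angle stays below 180 degrees.
def world2quatr(world):
    R, t = world[:3, :3], world[:3, 3]
    qw = 0.5 * np.sqrt(max(1e-12, 1.0 + R[0, 0] + R[1, 1] + R[2, 2]))
    qx = (R[2, 1] - R[1, 2]) / (4.0 * qw)
    qy = (R[0, 2] - R[2, 0]) / (4.0 * qw)
    qz = (R[1, 0] - R[0, 1]) / (4.0 * qw)
    return np.array([t[0], t[1], t[2], qw, qx, qy, qz])
```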
What is the Coordinates argument in the first function? Why does it have 6 values?
It has seven values. It is the pose in quaternion coordinates: [x y z qw qx qy qz].
Oh, makes sense.
@YJonmo I am trying to do something like this with Unity; could you help me get my depth input into this repo's software?
What exactly do you want to do? Like creating an extended 3D map from several depth frames?
Hi mate, I also have the same issues as you did, but the link you provided doesn't work on my side. Actually, I am not sure what coordinate system this repo uses; could you provide the conversion matrix from Blender coordinates to this repo's coordinates?
Hi mate, you could use these rotation/mirroring functions to perform your conversion:
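The functions themselves did not survive in this thread; below is a minimal numpy sketch of what such rotation/mirroring helpers usually look like. The names and conventions are assumptions, not the originals.

```python
import numpy as np

# Assumed helpers: 4x4 homogeneous transforms for mirroring one axis and
# rotating about X. Composing these with a pose (e.g. mirror("y") @ pose)
# lets you experiment with frame conversions until the cloud lines up.
def mirror(axis):
    flips = {"x": [-1.0, 1.0, 1.0], "y": [1.0, -1.0, 1.0], "z": [1.0, 1.0, -1.0]}
    return np.diag(flips[axis] + [1.0])

def rot_x(theta):
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[1:3, 1:3] = [[c, -s], [s, c]]
    return T
```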
Hi, I'm sorry to bother you so late. I now have the absolute position (X, Y, Z) and the quaternion for each image frame. How do I map these 7 numbers to a 4×4 matrix?
I have not worked with this for a long time. You might need to use demo.py in this repo and replace the demo data with your own.
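For the 7-numbers-to-matrix question above, here is a self-contained numpy sketch. It is not this repo's code, and it assumes the pose layout [x, y, z, qw, qx, qy, qz] with a unit quaternion:

```python
import numpy as np

# Sketch: build the 3x3 rotation from the (assumed unit) quaternion with
# the standard quaternion-to-matrix formula, then place the translation
# in the fourth column of a 4x4 homogeneous matrix.
def pose7_to_matrix(pose):
    x, y, z, qw, qx, qy, qz = pose
    T = np.eye(4)
    T[:3, :3] = [
        [1 - 2 * (qy * qy + qz * qz), 2 * (qx * qy - qz * qw), 2 * (qx * qz + qy * qw)],
        [2 * (qx * qy + qz * qw), 1 - 2 * (qx * qx + qz * qz), 2 * (qy * qz - qx * qw)],
        [2 * (qx * qz - qy * qw), 2 * (qy * qz + qx * qw), 1 - 2 * (qx * qx + qy * qy)],
    ]
    T[:3, 3] = [x, y, z]
    return T
```

Normalize the quaternion first if your export does not guarantee unit length, otherwise the rotation block will include a spurious scale.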
Thank you for your reply. I used my own dataset, but here's what happened: voxel volume size 2565 × 4061 × 1767, i.e. 18,405,893,655 points. Such a large volume crashes the program, but my dataset is not large. What caused this?
I am not sure; I remember it crashing for me too. You could try reducing the volume size.
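A billions-of-voxels grid from a small dataset usually means the volume bounds are inflated, often by a unit mismatch (e.g. depth or poses in millimetres read as metres). A cheap sanity check, sketched here with assumed names rather than this repo's actual variables, is to compute the implied grid shape before fusing:

```python
import numpy as np

# Hedged sketch: voxel count grows as (extent / voxel_size)^3, so bounds
# that are 1000x too large (mm treated as m) multiply the voxel count by
# a factor of 1e9 and crash allocation.
def grid_dims(vol_bnds, voxel_size):
    """vol_bnds: 3x2 array of [min, max] per axis -> voxel grid shape."""
    return np.ceil((vol_bnds[:, 1] - vol_bnds[:, 0]) / voxel_size).astype(int)

bnds = np.array([[-1.0, 1.0], [-1.0, 1.0], [-1.0, 1.0]])
dims = grid_dims(bnds, 0.25)          # a modest 8 x 8 x 8 grid
huge = grid_dims(bnds * 1000, 0.25)   # same scene in mm: 8000^3 voxels
```

If the dims look absurd, rescale the translations (or the depth maps) before computing the volume bounds, or increase the voxel size.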
Thank you for your reply. I have found the reason.
Hi there,
Thanks for putting this work in public.
My question may sound silly, but do I need to have the camera pose to be able to use this repository? That's my impression from going through your code.
What I have is just a bunch of RGB-D images, and I would like to fuse them together to get the extended map.
Regards,
Jacob