
Deployment on real-world Franka robot arm #14

Open
Zed-Wu opened this issue Sep 1, 2023 · 7 comments

Comments

Zed-Wu commented Sep 1, 2023

Is there any example or tutorial on deploying a trained agent (e.g., a BC agent with control mode "pd_ee_delta_pose") on a real-world Franka robot arm?
For the sim2real experiments in the paper, I noticed that you used a ROKAE xMate3Pro robot arm, which differs from the Franka robot in the simulator. Is there a reason for that?
I appreciate your answers and help.

xuanlinli17 (Collaborator) commented Sep 1, 2023

We haven't drafted a sim2real tutorial yet; we will likely do so in ManiSkill2 0.6.0.

For sim2real, besides camera calibration, there is an inherent visual observation gap and an inherent control gap you need to pay attention to. The simulated pd_ee_delta_pose controller runs at 20 Hz control, while the real robot's execution rate can differ from 20 Hz. The gap in joint velocities (qvel) between sim and real is also an important factor.

We used the ROKAE xMate3Pro because it was what was available to us at the time.
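One practical implication of the control gap above is matching the 20 Hz command rate on the real robot. A minimal sketch of a fixed-rate control loop, where `policy`, `get_obs`, and `send_command` are hypothetical placeholders for your own model and robot interface (not ManiSkill2 API):

```python
import time

CONTROL_FREQ = 20.0  # Hz, matching the simulated pd_ee_delta_pose controller
DT = 1.0 / CONTROL_FREQ

def run_control_loop(policy, get_obs, send_command, n_steps):
    # Fixed-rate loop: sleep away the remainder of each 20 Hz period so the
    # real robot receives commands at the same rate the policy saw in sim.
    for _ in range(n_steps):
        t_start = time.perf_counter()
        action = policy(get_obs())      # inference time counts toward the period
        send_command(action)
        elapsed = time.perf_counter() - t_start
        time.sleep(max(0.0, DT - elapsed))
```

If inference regularly takes longer than 50 ms, the loop falls behind and the effective control rate drops below 20 Hz, which reintroduces the control gap; in that case, the policy needs to be sped up or the controller retuned.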

Zed-Wu commented Sep 3, 2023

Thank you very much for your suggestions. Which IK package or library do you use to compute the joint positions for the pd_ee_delta_pose controller? It would also help to know whether it can be used on a real-world Franka robot arm.

xuanlinli17 (Collaborator) commented Sep 4, 2023

Yes, it should work, as long as the URDF matches the real robot. You can create a parallel env in ManiSkill2 to help you calculate the IK. We use SAPIEN's IK interface: https://github.com/haosulab/ManiSkill2/blob/231eb151b917ae68155a58fca88a6293e1f1ab03/mani_skill2/agents/controllers/pd_ee_pose.py#L59

(controller corresponds to env.agent.controller.controllers['arm'])

Zed-Wu commented Sep 8, 2023

Hi, how did you control the gripper? I found that, in pd_ee_pose mode, SAPIEN's IK interface computes only 7 values, corresponding to the arm's 7 DoF. I think an extra number is needed to control the gripper. Is that right?

xuanlinli17 (Collaborator) commented Sep 8, 2023

The last dimension of the action controls the gripper (thus, a 6D robot end-effector component and a 1D gripper component). The gripper controller is pd_joint_pos: https://github.com/haosulab/ManiSkill2/blob/231eb151b917ae68155a58fca88a6293e1f1ab03/mani_skill2/agents/configs/panda/defaults.py#L137

Zed-Wu commented Sep 9, 2023

Thank you for the reply. I know the last dimension of the action output by the model is used to control the gripper. But in the function set_action in pd_ee_pose.py (https://github.com/haosulab/ManiSkill2/blob/231eb151b917ae68155a58fca88a6293e1f1ab03/mani_skill2/agents/controllers/pd_ee_pose.py#L90), you preprocess the action and use it to compute the target pose in line 101.
Then, in the function compute_target_pose in pd_ee_pose.py for the pd_ee_delta_pose controller (https://github.com/haosulab/ManiSkill2/blob/231eb151b917ae68155a58fca88a6293e1f1ab03/mani_skill2/agents/controllers/pd_ee_pose.py#L173C9-L173C28), you only use the first 6 dimensions of the action, i.e., delta_pos, delta_rot = action[0:3], action[3:6] in line 175.
So I wonder where, or in which code segment, the last dimension of the action is used to control the gripper. I searched using the keyword action[-1] but could not find anything.

xuanlinli17 (Collaborator) commented Sep 9, 2023

>>> env.agent.controller
<mani_skill2.agents.base_controller.CombinedController object at 0x7f431a8df700>
>>> env.agent.controller.controllers
OrderedDict([('arm', <mani_skill2.agents.controllers.pd_ee_pose.PDEEPoseController object at 0x7f42bf658700>), ('gripper', <mani_skill2.agents.controllers.pd_joint_pos.PDJointPosMimicController object at 0x7f42bf658730>)])
>>> env.agent.controller.action_mapping
OrderedDict([('arm', (0, 6)), ('gripper', (6, 7))])

The agent uses a combined controller, and set_action dispatches each sub-controller's slice of the action one by one, based on the start and end dimensions above: https://github.com/haosulab/ManiSkill2/blob/231eb151b917ae68155a58fca88a6293e1f1ab03/mani_skill2/agents/base_controller.py#L242
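The dispatch can be mimicked in plain Python. In this sketch, `action_mapping` mirrors the OrderedDict printed in the transcript above, while `split_action` is a hypothetical helper for illustration, not ManiSkill2 API:

```python
from collections import OrderedDict

# Mirrors env.agent.controller.action_mapping for the
# pd_ee_delta_pose (arm) + pd_joint_pos (gripper) configuration:
# dims 0-5 go to the arm controller, dim 6 to the gripper controller.
action_mapping = OrderedDict([("arm", (0, 6)), ("gripper", (6, 7))])

def split_action(action):
    # Slice the flat 7-d action into per-controller sub-actions, the same
    # way the combined controller's set_action dispatches them.
    return {name: action[start:end]
            for name, (start, end) in action_mapping.items()}
```

This is why searching for action[-1] finds nothing: the gripper controller only ever sees its own 1-d slice, so from its perspective the gripper command is the whole action, not the last element of a larger one.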
