Add inverse kinematics example #538
Conversation
Force-pushed from `8da0591` to `8a1d55a`.
Force-pushed from `31a5014` to `db873bc`.
Thanks for working on this! Left some comments.
Besides addressing these, I think we should include comments to guide people throughout the example. The scripts in this folder are meant for people to learn how to use theseus.
examples/labs/inverse_kinematics.py (Outdated)
```python
theta = torch.rand(10, robot.dof, dtype=dtype)
targeted_poses_ee: torch.Tensor = fk(theta)[0]
theta = torch.zeros_like(theta)
for iter in range(50):
```
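For reference, the Jacobian-based loop excerpted above can be sketched self-contained for a toy 2-link planar arm with a damped least-squares update, using plain PyTorch. This is an illustration only; `fk_planar`, `jac_planar`, and `ik_dls` are hypothetical names, not the theseus FK API:

```python
import torch

def fk_planar(theta: torch.Tensor) -> torch.Tensor:
    # End-effector (x, y) of a 2-link planar arm with unit link lengths.
    t1, t12 = theta[0], theta[0] + theta[1]
    return torch.stack([torch.cos(t1) + torch.cos(t12),
                        torch.sin(t1) + torch.sin(t12)])

def jac_planar(theta: torch.Tensor) -> torch.Tensor:
    # Analytic 2x2 position Jacobian of fk_planar.
    t1, t12 = theta[0], theta[0] + theta[1]
    s1, s12 = torch.sin(t1), torch.sin(t12)
    c1, c12 = torch.cos(t1), torch.cos(t12)
    return torch.stack([torch.stack([-s1 - s12, -s12]),
                        torch.stack([c1 + c12, c12])])

def ik_dls(target: torch.Tensor, theta0: torch.Tensor,
           iters: int = 100, damping: float = 1e-2) -> torch.Tensor:
    # Damped least-squares IK: theta += J^T (J J^T + lambda * I)^{-1} error.
    theta = theta0.clone()
    for _ in range(iters):
        error = target - fk_planar(theta)
        J = jac_planar(theta)
        step = J.T @ torch.linalg.solve(J @ J.T + damping * torch.eye(2), error)
        theta = theta + step
    return theta

target = torch.tensor([1.0, 1.0])
theta = ik_dls(target, torch.tensor([0.3, 0.5]))
```

The damping term keeps the update well-behaved near kinematic singularities, where a plain Newton step would blow up.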
We should be able to use `theseus` for this, right? We can show how to create a custom cost function with some vector optim vars for theta and wrap the `fk` and `jfk` computation. Then we just add to the objective and optimize.
This cost function could even be added to `theseus.embodied` once we merge FK from `labs`.
As discussed offline, let's keep the current version and focus on adding more explanations and references. We can remove the `CostFunction` for now, but keep the code around somewhere, in case we have time to add it and clean it up properly soon.
examples/labs/inverse_kinematics.py (Outdated)
```python
theta = torch.zeros_like(theta)
for iter in range(50):
    jac_b, poses = jfk_b(theta)
    error = SE3.log(SE3.compose(SE3.inv(poses[-1]), targeted_poses_ee)).view(-1, 6, 1)
```
I think this can be made more concise if we use the `LieTensor` class, which has a `local()` implementation.
I will add `local()` to `lie.functional`. See #542.
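For intuition, `local(a, b) = Log(a⁻¹ ∘ b)` is the tangent-space difference "from `a` to `b`", which is exactly the `log(compose(inv(...), ...))` pattern above. On SO(2), with rotations parameterized by angles, it reduces to a wrapped angle difference. A toy illustration (not the `lie.functional` implementation):

```python
import math

def so2_local(a: float, b: float) -> float:
    # local(a, b) = Log(a^{-1} . b): the signed angle that rotates a onto b,
    # wrapped to [-pi, pi).
    d = b - a
    return d - 2 * math.pi * math.floor((d + math.pi) / (2 * math.pi))
```

Note the argument order: `so2_local(pose, target)` is the error taking the current pose to the target, matching `pose.local(targeted_pose)`.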
Force-pushed from `67af5f8` to `c1959ef`.
examples/labs/inverse_kinematics.py (Outdated)
```python
(theta,) = optim_vars
(targeted_pose,) = aux_vars
pose = th.SE3(tensor=fk(theta.tensor)[0])
return (pose.inverse().compose(targeted_pose)).log_map()
```
We can call `targeted_pose.local(pose)`?
Is it `pose.local(targeted_pose)`? In either case, agree that we should call `local()`.
That's what we have in the `Local` cost function in /embodied/misc.
In the longer term we should actually make a unified version of this cost function that can optionally call into FK and apply `local` on the pose/point/rotation of a specific joint.
I think the right way to do this would be this, which unfortunately we don't have time to implement, but it would be very nice to have.
LGTM
Code looks good, but I left some minor comments. I think the most important one is to use this example to clearly explain the API, because we don't have any documentation for this code.
examples/labs/inverse_kinematics.py (Outdated)
```python
# the selected links, in that order. The return types of these functions are as
# follows:
#
# - fk: return poses of selected links
```
I think we should be more explicit here; this description doesn't help the reader understand the API.
For example, `fk` returns N + 1 outputs, where N is the number of selected links. The first N correspond to poses (tensors, I think?). And I think the last output is a list containing poses for all links? It would be really useful to include this level of detail.
Same goes for the Jacobians.
No, `fk` will only return the tuple of selected link poses.
Ah, my bad, I missed this slicing for the `fk` function. Sorry about that. You should still mention that the return is a tuple of pose tensors, one for each selected link, in the same order as the link names given as input.
Also, can you reword the Jacobians part so it says "returns a tuple where the first element is a list with the body jacobians of selected links, and the second element is a tuple with the corresponding poses."?
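The return conventions described here can be pinned down with toy stubs that fix only the shapes and container types. These are hypothetical stand-ins for documentation purposes, not the real kinematics:

```python
import torch

def make_fk(selected_links):
    # fk returns a tuple of pose tensors, one (B, 4, 4) SE(3) matrix per
    # selected link, in the same order as the input link names.
    def fk(theta: torch.Tensor):
        batch = theta.shape[0]
        return tuple(torch.eye(4).expand(batch, 4, 4) for _ in selected_links)
    return fk

def make_jfk_b(selected_links):
    # jfk_b returns a tuple where the first element is a list with the body
    # Jacobians of the selected links (each (B, 6, dof)), and the second
    # element is a tuple with the corresponding poses.
    def jfk_b(theta: torch.Tensor):
        batch, dof = theta.shape
        jacs = [torch.zeros(batch, 6, dof) for _ in selected_links]
        poses = tuple(torch.eye(4).expand(batch, 4, 4) for _ in selected_links)
        return jacs, poses
    return jfk_b

fk = make_fk(["ee"])
jfk_b = make_jfk_b(["ee"])
poses = fk(torch.zeros(2, 7))
jacs, jposes = jfk_b(torch.zeros(2, 7))
```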
Force-pushed from `d69e976` to `961e595`.
Force-pushed from `961e595` to `204bc00`.
Force-pushed from `204bc00` to `0c42338`.
Force-pushed from `9948b6c` to `ece2df5`.