
Grasp Visualization: Evaluate pixel relative offset at every pixel with single gripper pose offset model #381

Open · 1 of 24 tasks
ahundt opened this issue Jan 5, 2018 · 2 comments

ahundt commented Jan 5, 2018

We need to be able to generate a 3D visualization of many poses and the predicted grasp success values to determine if the results look reasonable.

Here is the model file to load for visualization:
2018-01-20-06-41-24_grasp_model_weights-delta_depth_sin_cos_3-grasp_model_levine_2016-dataset_062_b_063_072_a_082_b_102-epoch-014-val_loss-0.641-val_acc-0.655.h5.zip

Here is the updated scene file:
2018-01-20-0630-kukaRemoteApiCommandServerExample.ttt.zip

TODO:

  • show matplotlib rendering of predictions

  • make boundaries of crop clear

  • fix prediction visualization to use correct depth image cropping and prediction cropping

  • overlay real image with predictions

  • actually show color on predictions & point cloud in V-REP

  • make sure offset of point cloud is correct

  • don't overwrite main colored time step point cloud

  • make pixels configurable from the remote API so they can appear different from scene pixels

  • Add colored point cloud to V-REP based on grasp success value at a depth

  • Plot the heat map in V-REP point cloud

  • Do the same for pixel-wise training

  • Generate current to final pose transform for every pixel in resized clear view + current image at a fixed depth offset

  • Create loop that generates end-effector relative poses at every pixel and evaluates

  • Create an input vector with all of this data to run through the prediction algorithm, and call predict() on each

  • low priority: use tf.image.resize_images with ResizeMethod.NEAREST_NEIGHBOR for a nearest-neighbor resize of the depth image after applying the median filter.
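The per-pixel evaluation items above (generate an end-effector-relative pose at every pixel, batch them, and call predict() once) can be sketched as below. The (dx, dy, dz) offset encoding, function names, and fixed depth offset are illustrative assumptions, not the project's actual delta_depth_sin_cos_3 feature encoding:

```python
import numpy as np

def pixel_offset_batch(depth, gripper_pose, depth_offset=0.05):
    """Build one input row per pixel: a hypothetical (dx, dy, dz) offset
    from the current gripper pose to that pixel at a fixed depth offset.
    The real model's pose encoding differs; this only shows the batching."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dz = depth + depth_offset - gripper_pose[2]
    # Flatten to shape (h*w, 3) so a single predict() call covers all pixels.
    return np.stack([xs.ravel() - gripper_pose[0],
                     ys.ravel() - gripper_pose[1],
                     dz.ravel()], axis=1)

def evaluate_grasp_map(model_predict, depth, gripper_pose):
    """Run the predictor on every pixel's offset, then reshape the flat
    score vector back into an image-shaped heat map for visualization."""
    batch = pixel_offset_batch(depth, gripper_pose)
    scores = np.asarray(model_predict(batch))  # expected shape (h*w,)
    return scores.reshape(depth.shape)
```

The resulting heat map can feed both the matplotlib overlay and the V-REP point-cloud coloring steps below.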

V-REP code steps:

  • Enable setting the color of dummies
  • Incorporate function that maps 0-1 values to colors
  • Add rescaled image display that shows the heat map at each pixel
  • Set the color and thickness of drawn lines so we can show the surface-relative portion of the transform
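For the "maps 0-1 values to colors" item, a minimal hand-rolled heat-map ramp (blue for low grasp success, green for middling, red for high) could look like this; the function name is hypothetical:

```python
def success_to_rgb(value):
    """Map a grasp success score in [0, 1] to an RGB triple in [0, 1]:
    0.0 -> blue, 0.5 -> green, 1.0 -> red (simple two-segment ramp)."""
    v = min(max(value, 0.0), 1.0)  # clamp out-of-range predictions
    if v < 0.5:
        t = v * 2.0
        return (0.0, t, 1.0 - t)   # blue -> green
    t = (v - 0.5) * 2.0
    return (t, 1.0 - t, 0.0)       # green -> red
```

The returned triple can then be pushed into the scene over the remote API, e.g. via simxCallScriptFunction into a scene-script function that applies the color to the dummy; the exact scene-side call is left as an assumption.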

Remember, we will need to map from the full-sized images to the small images and back!
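That round trip reduces to undoing the crop offset and the resize scale; a sketch, assuming a crop-then-resize pipeline with hypothetical argument names:

```python
def full_to_small(x, y, crop_xy, crop_wh, small_wh):
    """Map a full-image pixel (x, y) into the cropped-and-resized image.
    crop_xy: top-left corner of the crop in the full image,
    crop_wh: crop size, small_wh: resized image size.
    The crop-then-resize convention is an assumption about the pipeline."""
    sx = small_wh[0] / float(crop_wh[0])
    sy = small_wh[1] / float(crop_wh[1])
    return ((x - crop_xy[0]) * sx, (y - crop_xy[1]) * sy)

def small_to_full(x, y, crop_xy, crop_wh, small_wh):
    """Inverse mapping: a small-image pixel back to full-image coordinates."""
    sx = crop_wh[0] / float(small_wh[0])
    sy = crop_wh[1] / float(small_wh[1])
    return (x * sx + crop_xy[0], y * sy + crop_xy[1])
```

Keeping both directions in one place helps the "make boundaries of crop clear" and overlay items stay consistent with each other.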

Bonus features that would help, but are not required:

  • V-REP GUI checkbox to show/hide point clouds & labels with one click, plus an option to display the confidence values

TensorBoard steps:

  • Enable gradient visualization
  • Enable image visualization
  • Enable picture visualization

Gradient visualization:
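The gradient and image items above map onto the Keras 2.x TensorBoard callback, which at the time exposed write_grads and write_images flags; a configuration sketch, assuming a compiled Keras model trained with validation data (required when histogram_freq > 0):

```python
# Configuration sketch for the Keras 2.x TensorBoard callback; the
# write_grads flag existed in Keras 2.x but was later removed in tf.keras.
from keras.callbacks import TensorBoard

tensorboard = TensorBoard(
    log_dir='./logs/grasp_viz',
    histogram_freq=1,      # needed for weight/gradient histograms
    write_grads=True,      # gradient visualization
    write_images=True,     # weight matrices rendered as images
)

# model.fit(x_train, y_train,
#           validation_data=(x_val, y_val),  # required when histogram_freq > 0
#           callbacks=[tensorboard])
```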

ahundt commented Jan 5, 2018

This visualization enhancement was suggested by @cpaxton. Could you add your thoughts to this issue description?

DingYu95 added a commit that referenced this issue Jan 9, 2018
ahundt changed the title from "Evaluate pixel relative offset at every pixel with single gripper pose offset model" to "Grasp Visualization: Evaluate pixel relative offset at every pixel with single gripper pose offset model" on Jan 18, 2018
ahundt commented Jan 20, 2018

Initial work in #429, this issue is still in progress.
