Visual Force/Torque Sensing

Paper | Video | Dataset/Models

Can we replace a $6000 force/torque sensor with a $60 USB camera?

Visual Force/Torque Sensing (VFTS) estimates forces and torques on a robotic gripper from a single RGB image.
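Conceptually, the model is an image-to-wrench regressor. The sketch below is illustrative only and is not the repository's actual architecture: it shows a generic CNN backbone regressing the six wrench components (Fx, Fy, Fz, Tx, Ty, Tz) from one RGB frame.

```python
# Illustrative only -- not the repo's actual model. A generic CNN backbone
# regressing a 6-DoF wrench (Fx, Fy, Fz, Tx, Ty, Tz) from one RGB image.
import torch
import torchvision

backbone = torchvision.models.resnet18(weights=None)
backbone.fc = torch.nn.Linear(backbone.fc.in_features, 6)  # 6 wrench components

image = torch.randn(1, 3, 224, 224)  # stand-in for a camera frame
wrench = backbone(image)             # shape (1, 6): forces (N) and torques (Nm)
```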


Installation

  • Clone the repo on both your PC and the robot. We use the Hello Robot Stretch.
  • Install Miniconda if you haven't done so already.
  • Create a conda environment (Python 3.9):
conda env create -n rp_ft --file rp_ft.yml
  • Verify that the robot and PC are on the same network and that their IPs match those in /robot/zmq_client.py (a minimal connectivity check is sketched after this list).
  • Install pip packages:
pip install pyyaml keyboard opencv-contrib-python tqdm pyzmq open3d
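A minimal sketch of checking PC-to-robot connectivity with pyzmq, assuming the robot side runs a REP socket. The IP, port, and "ping" message here are hypothetical placeholders; match them to what /robot/zmq_client.py actually uses:

```python
# Hypothetical connectivity check -- adapt the IP/port to /robot/zmq_client.py.
import zmq

ROBOT_IP = "192.168.1.10"  # placeholder: use the robot's actual address
PORT = 5555                # placeholder: use the port from zmq_client.py

ctx = zmq.Context()
sock = ctx.socket(zmq.REQ)
sock.setsockopt(zmq.RCVTIMEO, 2000)  # fail fast if the robot is unreachable
sock.connect(f"tcp://{ROBOT_IP}:{PORT}")

sock.send_string("ping")
try:
    print("robot replied:", sock.recv_string())
except zmq.error.Again:
    print("no reply -- check that both machines are on the same network")
```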

Live model and demos

python -m prediction.live_model --config vfts_final_model --live True --view True
python -m demos.clean_curved_surface --config vfts_final_model --live True --view True
python -m demos.clean_manikin --config vfts_final_model --live True --view True
python -m demos.collision_detector --config vfts_final_model --live True --view True
python -m demos.handover --config vfts_final_model --live True --view True
python -m demos.make_bed --config vfts_final_model --live True --view True
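The demos build on the live model's predicted wrench. As a conceptual sketch only (this is not the repo's API; predict_wrench below is a hypothetical stand-in for the live model's output), a collision detector can simply threshold the force magnitude:

```python
# Conceptual sketch of a collision detector built on the predicted wrench.
# predict_wrench() is a hypothetical stand-in for the repo's live model output.
import numpy as np

FORCE_THRESHOLD_N = 5.0  # illustrative threshold, tune for your task

def collided(wrench: np.ndarray) -> bool:
    """wrench = [Fx, Fy, Fz, Tx, Ty, Tz]; flag contact when |F| is large."""
    return np.linalg.norm(wrench[:3]) > FORCE_THRESHOLD_N
```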

ATI Mini45 force/torque sensor setup for collecting ground truth (Tested on Ubuntu 20.04)

  • In your network settings, set the IPv4 method to manual.
  • Set IPv4 address to 192.168.1.100 and IPv4 netmask to 255.255.255.0.
  • Go to 192.168.1.1 in a browser.
  • After cloning the repo on the robot, run /robot/level_robot.py to level the gripper.
  • Press "Snapshot" on the left side of the screen, then press the "bias" button to zero out the force-torque readings.
  • Refer to this manual for more implementation details (a sketch of streaming raw sensor data follows this list).
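For reference, ATI Net F/T boxes also stream data over UDP via ATI's RDT protocol (port 49152, per ATI's documentation). Below is a minimal sketch of requesting and decoding one sample; the counts-per-unit divisors are configuration-dependent placeholders, so read the real values from the sensor's web interface:

```python
# Minimal RDT (UDP) read from an ATI Net F/T box. The header and command
# values follow ATI's RDT spec; the counts-per-unit divisors below are
# placeholders -- read the real values from the sensor's configuration page.
import socket
import struct

SENSOR_IP = "192.168.1.1"   # address configured above
RDT_PORT = 49152            # ATI RDT UDP port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)

# Request: uint16 header (0x1234), uint16 command (2 = start streaming),
# uint32 sample count (1 = send a single sample).
sock.sendto(struct.pack("!HHI", 0x1234, 2, 1), (SENSOR_IP, RDT_PORT))

# Response: three uint32 counters/status words, then Fx, Fy, Fz, Tx, Ty, Tz
# as int32 counts (36 bytes total).
data, _ = sock.recvfrom(36)
_, _, _, fx, fy, fz, tx, ty, tz = struct.unpack("!3I6i", data)

COUNTS_PER_FORCE = 1_000_000   # placeholder: check your sensor's config
COUNTS_PER_TORQUE = 1_000_000  # placeholder: check your sensor's config
print("F (N): ", [v / COUNTS_PER_FORCE for v in (fx, fy, fz)])
print("T (Nm):", [v / COUNTS_PER_TORQUE for v in (tx, ty, tz)])
```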

Hardware

  • Hardware for mounting the force/torque sensor and camera can be found here and here.