| UR5_FetchPush | UR5_FetchReach-real |
|---|---|
This repository contains a custom OpenAI Gym-compatible environment for simulating a robotic manipulation task using the UR5 robotic arm. The task, named "FetchPush," involves the UR5 robot pushing an object to a target location on a flat surface. This environment is designed for research and development in the field of reinforcement learning and robotics.
In the FetchPush task, the UR5 robot is equipped with a two-finger gripper and is tasked with pushing a puck to a specified goal location. The environment provides a realistic simulation of the robot's dynamics and the interaction with the object.
Key features of the environment include:
- Realistic UR5 arm simulation with a two-finger gripper (meshes and visuals courtesy of ElectronicElephant).
- A puck that the robot must push to the goal.
- An observation space covering the positions and velocities of the robot's joints, the puck's position, and the goal position.
- A reward function that encourages pushing the puck as close to the goal as possible.
- Configurable initial conditions for the robot's arm and the puck's position.
- Weights & Biases (wandb) logging support.
- Plotting and demo scripts.
- Dataset collection for offline RL methods.
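The README does not spell out the reward function exactly; a minimal sketch of a Fetch-style sparse reward is shown below, assuming a hypothetical success threshold of 0.05 m on the puck-to-goal distance:

```python
import numpy as np

def compute_reward(achieved_goal, desired_goal, threshold=0.05):
    """Sparse Fetch-style reward: 0 if the puck is within `threshold`
    of the goal, -1 otherwise. The threshold value is an assumption."""
    distance = np.linalg.norm(achieved_goal - desired_goal, axis=-1)
    return -(distance > threshold).astype(np.float32)

# Puck 1 cm from the goal -> success (reward 0); 20 cm away -> -1.
print(compute_reward(np.array([0.51, 0.0]), np.array([0.50, 0.0])))  # 0.0
print(compute_reward(np.array([0.70, 0.0]), np.array([0.50, 0.0])))  # -1.0
```

A sparse reward like this pairs naturally with HER, which relabels failed episodes with goals the agent actually reached.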
To install the UR5 FetchPush Gym environment, follow these steps:

```bash
git clone https://github.com/nikisim/UR5_FetchPush_env.git
cd UR5_FetchPush_env
pip install -e .
```
To use the UR5 FetchPush environment, you can create an instance of the environment and interact with it as you would with any other Gym environment:
```python
import gym
import gym_UR5_FetchPush

env = gym.make('gym_UR5_FetchPush/UR5_FetchPushEnv-v0', render=True)

# Reset the environment
observation = env.reset()

# Sample a random action
action = env.action_space.sample()

# Step the environment (old gym API: 4-tuple return)
observation, reward, done, info = env.step(action)
```
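The snippet above covers a single step; a full episode loop under the old gym 4-tuple API looks like the sketch below. A tiny stub environment stands in for `gym_UR5_FetchPush` so the example is self-contained — swap in `gym.make('gym_UR5_FetchPush/UR5_FetchPushEnv-v0')` and `env.action_space.sample()` for the real thing.

```python
import random

class StubEnv:
    """Minimal stand-in for a gym environment (old 4-tuple step API)."""
    def reset(self):
        self.t = 0
        return [0.0]                      # observation

    def step(self, action):
        self.t += 1
        done = self.t >= 50               # fixed 50-step episodes
        return [0.0], -1.0, done, {}      # obs, reward, done, info

    def sample_action(self):
        return random.uniform(-1.0, 1.0)

env = StubEnv()
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    action = env.sample_action()          # random policy
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(total_reward)  # -50.0: fifty steps of -1 reward
```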
This environment requires the following dependencies:
- gym
- numpy
- pybullet (for physics simulation)
Make sure to install these dependencies before using the environment.
To train on GPU, add the `--cuda` flag (not recommended; CPU training is typically faster for this environment).
```bash
mpirun -np 16 python -u train_UR5.py --num-workers 12 --n-epochs 800 --save-dir saved_models/UR5_FetcReach 2>&1 | tee reach_UR5.log
```
See `arguments.py` for the full list of flags and options.
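The authoritative flag definitions live in `arguments.py`; the sketch below shows how such a parser typically looks. Only `--cuda`, `--n-epochs`, `--num-workers`, and `--save-dir` appear in this README — the defaults and help strings here are assumptions.

```python
import argparse

def get_args(argv=None):
    """Hypothetical subset of the training options in arguments.py."""
    parser = argparse.ArgumentParser(description='DDPG+HER training options')
    parser.add_argument('--n-epochs', type=int, default=800,
                        help='number of training epochs')
    parser.add_argument('--num-workers', type=int, default=1,
                        help='number of rollout workers')
    parser.add_argument('--save-dir', type=str, default='saved_models/',
                        help='directory for trained models')
    parser.add_argument('--cuda', action='store_true',
                        help='train on GPU (CPU is recommended here)')
    return parser.parse_args(argv)

args = get_args(['--n-epochs', '800', '--num-workers', '12', '--cuda'])
print(args.n_epochs, args.num_workers, args.cuda)  # 800 12 True
```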
Current success rate: 0.82 (0.8191489361702128).
```bash
python demo.py --demo-length 10
```
To collect a dataset in D4RL format using a pretrained DDPG+HER policy, run the command below. By default it collects more than 800,000 transitions with the keys 'observations', 'actions', 'rewards', 'next_observations', and 'terminals':
```bash
python create_dataset.py
```
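A minimal sketch of assembling collected transitions into a D4RL-style dictionary with those five keys — the array shapes and the `to_d4rl` helper name are illustrative assumptions, not the actual contents of `create_dataset.py`:

```python
import numpy as np

def to_d4rl(transitions):
    """Stack (obs, action, reward, next_obs, terminal) tuples into
    D4RL-style parallel arrays, one entry per transition."""
    obs, act, rew, next_obs, term = zip(*transitions)
    return {
        'observations':      np.asarray(obs, dtype=np.float32),
        'actions':           np.asarray(act, dtype=np.float32),
        'rewards':           np.asarray(rew, dtype=np.float32),
        'next_observations': np.asarray(next_obs, dtype=np.float32),
        'terminals':         np.asarray(term, dtype=bool),
    }

# Two toy transitions with 3-D observations and 1-D actions.
dataset = to_d4rl([
    ([0., 0., 0.], [0.1], -1.0, [0., 0., 1.], False),
    ([0., 0., 1.], [0.2],  0.0, [0., 0., 1.], True),
])
print(dataset['observations'].shape)  # (2, 3)
```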
The success-rate plot was generated from a 1000-epoch training run.