**Under Development**

# Robust Adversarial Reinforcement Learning

This repo contains code for training RL agents alongside adversarial disturbance agents, as described in our work on Robust Adversarial Reinforcement Learning (RARL). We build heavily on the OpenAI rllab repo.
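
At a high level, RARL alternates between improving the protagonist against a fixed adversary and improving the adversary against a fixed protagonist. The sketch below is ours, for intuition only: `train_step`, `protagonist`, `adversary`, and the loop counts are hypothetical placeholders, not this repo's API (a real run would use an rllab optimizer such as TRPO inside `train_step`).

```python
def train_step(agent, opponent, maximize_reward):
    """Hypothetical placeholder for one policy-optimization step on
    `agent` while `opponent` stays fixed. The game is zero-sum: the
    adversary's reward is the negative of the protagonist's."""
    pass  # a real implementation would run e.g. TRPO here

protagonist, adversary = object(), object()  # placeholder policies

for _ in range(100):      # outer RARL iterations
    for _ in range(10):   # protagonist phase: adversary frozen
        train_step(protagonist, opponent=adversary, maximize_reward=True)
    for _ in range(10):   # adversary phase: protagonist frozen
        train_step(adversary, opponent=protagonist, maximize_reward=False)
```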

## Installation instructions

Since we build on the rllab package for the optimizers, the installation process is similar to rllab's manual installation. Most of the packages are installed inside the Anaconda `rllab3-adv` virtual environment.

- Install the build dependencies for scipy:

  ```bash
  sudo apt-get build-dep python-scipy
  ```

- Install the Python modules and put the repo on your path (a quick import check follows this list):

  ```bash
  conda env create -f environment.yml
  export PYTHONPATH=<PATH_TO_RLLAB_ADV>:$PYTHONPATH
  ```
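
Once the environment is activated and `PYTHONPATH` is set, a quick way to confirm the setup is to check that `rllab` imports. This check is ours, not part of the repo:

```python
# Sanity check (ours, not part of the repo): rllab should be importable
# once PYTHONPATH points at the checkout.
import rllab
print("rllab imported from", rllab.__file__)
```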

## Example

```bash
# Enter the anaconda virtual environment
source activate rllab3-adv
# Train on InvertedPendulum
python adversarial/scripts/train_adversary.py --env InvertedPendulumAdv-v1 --folder ~/rllab-adv/results
```
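
Training writes results under the `--folder` path. Assuming the script saves snapshots the way stock rllab does (a `params.pkl` loadable with `joblib`; the exact filename and layout are our assumptions, not documented here), a trained policy could be replayed roughly like this:

```python
import os
import joblib
from rllab.sampler.utils import rollout

# Hypothetical snapshot path under the --folder directory
path = os.path.expanduser("~/rllab-adv/results/params.pkl")
snapshot = joblib.load(path)
policy, env = snapshot["policy"], snapshot["env"]

# Roll out one episode with rendering (stock rllab utility)
rollout(env, policy, max_path_length=1000, animated=True)
```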

## Contact

Lerrel Pinto -- lerrelpATcsDOTcmuDOTedu.