Explorer is a PyTorch reinforcement learning framework for exploring new ideas.
> **Note:** This repository is no longer actively maintained. For a JAX implementation, please check Jaxplorer.
- Vanilla Deep Q-learning (VanillaDQN): No target network.
- Deep Q-learning (DQN)
- Double Deep Q-learning (DDQN)
- Maxmin Deep Q-learning (MaxminDQN)
- Averaged Deep Q-learning (AveragedDQN)
- Ensemble Deep Q-learning (EnsembleDQN)
- Bootstrapped Deep Q-learning (BootstrappedDQN)
- NoisyNet Deep Q-learning (NoisyNetDQN)
- Randomized Exploration for Reinforcement Learning with General Value Function Approximation (LSVI-PHE)
- REINFORCE
- Actor-Critic
- Proximal Policy Optimisation (PPO)
- Soft Actor-Critic (SAC)
- Deep Deterministic Policy Gradients (DDPG)
- Twin Delayed Deep Deterministic Policy Gradients (TD3)
- Reward Policy Gradient (RPG)
- Memory-efficient Deep Q-learning (MeDQN)
Base Agent
├── Vanilla DQN
| ├── DQN
| | ├── DDQN
| | ├── NoisyNetDQN
| | ├── BootstrappedDQN
| | └── MeDQN: MeDQN(U), MeDQN(R)
| ├── Maxmin DQN ── Ensemble DQN, LSVI-PHE
| └── Averaged DQN
└── REINFORCE
├── Actor-Critic
| └── PPO ── RPG
└── SAC ── DDPG ── TD3
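The tree above can be sketched as Python class inheritance; the class names below are simplified placeholders and may not match Explorer's actual module layout:

```python
# Illustrative sketch of the agent hierarchy; names are placeholders
# and may differ from Explorer's real classes.
class BaseAgent: ...
class VanillaDQN(BaseAgent): ...      # no target network
class DQN(VanillaDQN): ...            # adds a target network
class DDQN(DQN): ...
class MaxminDQN(VanillaDQN): ...
class AveragedDQN(VanillaDQN): ...
class REINFORCE(BaseAgent): ...
class ActorCritic(REINFORCE): ...
class PPO(ActorCritic): ...
class SAC(REINFORCE): ...
```

Each child reuses its parent's training loop and overrides only what differs (e.g. the target computation), which is what makes trying a new variant cheap.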
- Python (>=3.6)
- PyTorch
- Gym && Gym Games: You may install only part of Gym (`classic_control`, `box2d`) with the command `pip install 'gym[classic_control, box2d]'`.
- Optional:
  - Gym Atari: `pip install gym[atari,accept-rom-license]`
  - Gym Mujoco:
    - Download MuJoCo version 1.50 from the MuJoCo website.
    - Unzip the downloaded `mjpro150` directory into `~/.mujoco/mjpro150`, and place the activation key (the `mjkey.txt` file downloaded from here) at `~/.mujoco/mjkey.txt`.
    - Install mujoco-py: `pip install 'mujoco-py<1.50.2,>=1.50.1'`
    - Install gym[mujoco]: `pip install gym[mujoco]`
  - PyBullet: `pip install pybullet`
  - DeepMind Control Suite: `pip install git+git://github.com/denisyarats/dmc2gym.git`
- Others: Please check `requirements.txt`.
All hyperparameters, including parameters for grid search, are stored in a configuration file in the directory `configs`. To run an experiment, a configuration index is first used to generate a configuration dict corresponding to that specific index. Then we run the experiment defined by this configuration dict. All results, including log files, are saved in the directory `logs`. Please refer to the code for details.
For example, to run the experiment with configuration file `RPG.json` and configuration index `1`:
python main.py --config_file ./configs/RPG.json --config_idx 1
The models are tested for one episode after every `test_per_episodes` training episodes; this value can be set in the configuration file.
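As a sketch, such a configuration file might look like the following, where list values define the grid-search axes; every field name here except `test_per_episodes` is hypothetical, so check the real files under `configs/` for the actual schema:

```json
{
  "env": ["HalfCheetah-v2"],
  "lr": [1e-3, 3e-4, 1e-4],
  "discount": [0.99],
  "test_per_episodes": [10]
}
```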
First, we calculate the total number of combinations in a configuration file (e.g. `RPG.json`):
python utils/sweeper.py
The output will be:
Number of total combinations in RPG.json: 12
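This count is simply the product of the lengths of all swept value lists. A minimal sketch of how it could be computed (the real logic lives in `utils/sweeper.py`, and the sweep fields below are made up):

```python
from functools import reduce

def count_combinations(config: dict) -> int:
    # Every value given as a list is one grid-search axis; the total
    # number of configurations is the product of the axis lengths.
    sizes = [len(v) for v in config.values() if isinstance(v, list)]
    return reduce(lambda a, b: a * b, sizes, 1)

# Hypothetical sweep: 3 learning rates x 2 batch sizes x 2 seeds = 12
sweep = {"lr": [1e-3, 3e-4, 1e-4], "batch_size": [32, 64], "seed": [1, 2]}
print(count_combinations(sweep))  # -> 12
```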
Then we run through all configuration indexes from `1` to `12`. The simplest way is to use a bash script:
for index in {1..12}
do
python main.py --config_file ./configs/RPG.json --config_idx $index
done
GNU Parallel is usually a better choice for scheduling a large number of jobs:
parallel --eta --ungroup python main.py --config_file ./configs/RPG.json --config_idx {1} ::: $(seq 1 12)
Configuration indexes with the same remainder (modulo the number of total combinations) share the same configuration dict. So for multiple runs, we just add the number of total combinations to the configuration index. For example, for 5 runs of configuration index `1`:
for index in 1 13 25 37 49
do
python main.py --config_file ./configs/RPG.json --config_idx $index
done
Or, more simply:
parallel --eta --ungroup python main.py --config_file ./configs/RPG.json --config_idx {1} ::: $(seq 1 12 60)
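The index-to-configuration mapping described above can be sketched as follows; the grid here is hypothetical, chosen only to have 12 combinations like `RPG.json`:

```python
import itertools

# Hypothetical 12-combination sweep (3 x 2 x 2); the real grid lives
# in the configuration file.
grid = {"lr": [1e-3, 3e-4, 1e-4], "batch_size": [32, 64], "seed": [1, 2]}
keys = list(grid)
combos = [dict(zip(keys, values))
          for values in itertools.product(*grid.values())]
total = len(combos)  # 12

def config_for_index(idx: int) -> dict:
    # Indexes are 1-based; indexes with the same remainder modulo
    # `total` (e.g. 1, 13, 25, ...) select the same configuration dict.
    return combos[(idx - 1) % total]

assert config_for_index(1) == config_for_index(13) == config_for_index(25)
```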
To analyze the experimental results, just run:
python analysis.py
Inside `analysis.py`, `unfinished_index` will print the configuration indexes of unfinished jobs, based on the existence of the result file. `memory_info` will print memory usage information and generate a histogram of the memory usage distribution in the directory `logs/RPG/0`. Similarly, `time_info` will print time information and generate a histogram of the running-time distribution in the directory `logs/RPG/0`. Finally, `analyze` will generate `csv` files that store the training and test results. Please check `analysis.py` for more details. More functions are available in `utils/plotter.py`.
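For instance, the unfinished-job check amounts to testing whether each index's result file exists. A minimal sketch, assuming results land under `<log_dir>/<index>/` with a result file whose name is a placeholder here (see `analysis.py` for the real path layout):

```python
from pathlib import Path

def unfinished_indexes(log_dir: str, total: int,
                       result_name: str = "result.csv") -> list:
    # An index counts as unfinished when its result file is missing.
    # `result_name` is a placeholder, not Explorer's actual filename.
    return [idx for idx in range(1, total + 1)
            if not (Path(log_dir) / str(idx) / result_name).exists()]

# With no results written yet, every index is reported as unfinished.
print(unfinished_indexes("logs/__no_such_experiment__", 3))  # -> [1, 2, 3]
```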
Enjoy!
- Qingfeng Lan, Yangchen Pan, Alona Fyshe, Martha White. Maxmin Q-learning: Controlling the Estimation Bias of Q-learning. ICLR, 2020. (Poster) [paper] [code]
- Qingfeng Lan, Samuele Tosatto, Homayoon Farrahi, A. Rupam Mahmood. Model-free Policy Learning with Reward Gradients. AISTATS, 2022. (Poster) [paper] [code] [robot experiment code]
- Qingfeng Lan, Yangchen Pan, Jun Luo, A. Rupam Mahmood. Memory-efficient Reinforcement Learning with Value-based Knowledge Consolidation. TMLR, 2023. [paper] [code] [Atari experiment code]
If you find this repo useful in your research, please cite the related paper(s) above; otherwise, please cite this repo:
@misc{Explorer,
author = {Lan, Qingfeng},
title = {A PyTorch Reinforcement Learning Framework for Exploring New Ideas},
year = {2019},
publisher = {GitHub},
journal = {GitHub Repository},
howpublished = {\url{https://github.com/qlan3/Explorer}}
}