Motivation: I have always thought that the only way to truly test whether you understand a concept is to see if you can build it. As such, all of these algorithms were implemented by studying the relevant papers and coded to test my understanding.
"What I cannot create, I do not understand" - Richard Feynman
- Vanilla DQN (see the loss sketch after this list)
- Noisy DQN
- Dueling DQN
- Double DQN
- Prioritised Experience Replay DQN
- Rainbow DQN
- Advantage Actor Critic (A2C) - single environment
- Advantage Actor Critic (A2C) - multi environment
- Deep Deterministic Policy Gradients (DDPG)
- Proximal Policy Optimisation (PPO) - discrete and continuous
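To illustrate the shared core behind the DQN variants above, here is a minimal sketch of the one-step TD loss used by vanilla DQN, with the Double DQN target selectable via a flag. The names (`dqn_loss`, `q_net`, `target_net`) and the default hyperparameters are illustrative assumptions, not the exact code in the notebooks.

```python
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma=0.99, double=False):
    """One-step TD loss shared by the DQN variants (illustrative sketch only)."""
    # batch tensors are assumed to already be on the right device / dtype
    states, actions, rewards, next_states, dones = batch

    # Q(s, a) for the actions actually taken
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)

    with torch.no_grad():
        if double:
            # Double DQN: online net chooses the action, target net evaluates it
            next_actions = q_net(next_states).argmax(dim=1, keepdim=True)
            next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        else:
            # Vanilla DQN: target net both chooses and evaluates the action
            next_q = target_net(next_states).max(dim=1).values

        # r + gamma * max_a' Q_target(s', a'), zeroed at terminal states
        td_target = rewards + gamma * next_q * (1 - dones)

    return F.mse_loss(q_values, td_target)
```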
These were mainly referenced from an excellent lecture series by Colin Skow on YouTube [link]. A large part also came from the Udacity Deep Reinforcement Learning course.
- Bellman Equation
- Dynamic Programming
- Q-learning (a minimal tabular sketch follows this list)
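For reference, below is a minimal tabular Q-learning sketch tying together the Bellman equation and the Q-learning update. It assumes the classic Gym API (`env.reset()` returning the state, `env.step()` returning four values) and illustrative hyperparameters; it is not the exact code used in the notebooks.

```python
import numpy as np

def q_learning(env, num_episodes=5000, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning: Bellman backups applied to sampled transitions."""
    Q = np.zeros((env.observation_space.n, env.action_space.n))

    for _ in range(num_episodes):
        state = env.reset()
        done = False
        while not done:
            # epsilon-greedy action selection
            if np.random.rand() < epsilon:
                action = env.action_space.sample()
            else:
                action = np.argmax(Q[state])

            next_state, reward, done, _ = env.step(action)

            # Bellman update towards r + gamma * max_a' Q(s', a')
            td_target = reward + gamma * np.max(Q[next_state]) * (not done)
            Q[state, action] += alpha * (td_target - Q[state, action])

            state = next_state

    return Q
```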
- Converged to an average of 17.56 after 1300 episodes.
- Code and results can be found under
DQN/7. Vanilla DQN Atari.ipynb
- Solved in 409 episodes
- Code and results can be found under
Policy Gradient/5. PPO.ipynb
- Converged to ~-270 after around 100 episodes
- Code and results can be found under
Policy Gradient/4. DDPG.ipynb
- Generalised Advantage Estimation (GAE) - see the sketch after this list
- Pull Policy Gradient algorithms into separate files
- Curiosity-Driven Exploration
- HER (Hindsight Experience Replay)
- Recurrent networks in PPO and DDPG
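Since Generalised Advantage Estimation is still on the to-do list, here is a hedged sketch of the recursion it would involve (advantages as an exponentially weighted sum of TD errors). The function name `compute_gae` and the array layout are assumptions for illustration, not existing code in this repo.

```python
import numpy as np

def compute_gae(rewards, values, dones, last_value, gamma=0.99, lam=0.95):
    """GAE sketch: A_t = delta_t + gamma * lambda * A_{t+1}.

    rewards, values, dones are length-T arrays from a rollout;
    last_value is the critic's estimate for the state after the final step.
    """
    rewards, values, dones = map(np.asarray, (rewards, values, dones))
    T = len(rewards)
    advantages = np.zeros(T)
    gae = 0.0
    next_value = last_value

    for t in reversed(range(T)):
        # one-step TD error: delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
        delta = rewards[t] + gamma * next_value * (1 - dones[t]) - values[t]
        # accumulate the exponentially weighted advantage, reset at episode ends
        gae = delta + gamma * lam * (1 - dones[t]) * gae
        advantages[t] = gae
        next_value = values[t]

    returns = advantages + values  # targets for the value function
    return advantages, returns
```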
Whilst I tried to code everything directly from the papers, it wasn't always easy to work out what I was doing wrong when an algorithm just wouldn't train or threw runtime errors. As such, I used the following repositories as references.