
andri27-ts / reinforcement-learning


Learn Deep Reinforcement Learning in 60 days! Lectures & Code in Python. Reinforcement Learning + Deep Learning

Home Page: https://andri27-ts.github.io/Reinforcement-Learning/

License: MIT License

Jupyter Notebook 53.59% Python 46.41%
reinforcement-learning machine-learning artificial-intelligence deep-reinforcement-learning deep-learning policy-gradients evolution-strategies a2c deepmind dqn

reinforcement-learning's Introduction

Course in Deep Reinforcement Learning

Explore the combination of neural networks and reinforcement learning. Algorithms and examples in Python & PyTorch.

Have you heard about the amazing results achieved by DeepMind with AlphaGo Zero and by OpenAI in Dota 2? It's all about deep neural networks and reinforcement learning. Do you want to know more about it?
This is the right opportunity for you to finally learn Deep RL and use it on new and exciting projects and applications.

Here you'll find an in-depth introduction to these algorithms. You'll learn Q-learning, Deep Q-learning, PPO, and actor-critic methods, and implement them using Python and PyTorch.

"The ultimate aim is to use these general-purpose technologies and apply them to all sorts of important real-world problems." - Demis Hassabis

This repository contains:


Lectures (& other content) primarily from the DeepMind and Berkeley YouTube channels.


Algorithms (like DQN, A2C, and PPO) implemented in PyTorch and tested on OpenAI Gym: RoboSchool & Atari.



Stay tuned and follow me on Twitter and GitHub. #60DaysRLChallenge

We now also have a Slack channel. To get an invitation, email me at [email protected]. Also, email me if you have any ideas, suggestions, or improvements.

To learn Deep Learning, Computer Vision, or Natural Language Processing, check out my 1-Year-ML-Journey.

Before starting: Prerequisites



Quick Note: my NEW BOOK is out!

To learn Reinforcement Learning and Deep RL in more depth, check out my book Reinforcement Learning Algorithms with Python!


Table of Contents

  1. The Landscape of Reinforcement Learning
  2. Implementing RL Cycle and OpenAI Gym
  3. Solving Problems with Dynamic Programming
  4. Q learning and SARSA Applications
  5. Deep Q-Network
  6. Learning Stochastic and DDPG optimization
  7. TRPO and PPO implementation
  8. DDPG and TD3 Applications
  9. Model-Based RL
  10. Imitation Learning with the DAgger Algorithm
  11. Understanding Black-Box Optimization Algorithms
  12. Developing the ESBAS Algorithm
  13. Practical Implementation for Resolving RL Challenges



Index - Reinforcement Learning


Week 1 - Introduction

Other Resources


Week 2 - RL Basics: MDP, Dynamic Programming and Model-Free Control

"Those who cannot remember the past are condemned to repeat it" - George Santayana

This week, we will learn about the basic blocks of reinforcement learning, starting from the definition of the problem all the way through the estimation and optimization of the functions that are used to express the quality of a policy or state.

Lectures - Theory

  • Model-Free Prediction - David Silver (DeepMind)
    • Monte Carlo Learning
    • Temporal Difference Learning
    • TD(λ)
  • Model-Free Control - David Silver (DeepMind)
    • ε-greedy policy iteration
    • GLIE Monte Carlo Search
    • SARSA
    • Importance Sampling

Project of the Week - Q-learning

Q-learning applied to FrozenLake - As an exercise, you can solve the game using SARSA or implement Q-learning yourself. In the former case, only a few changes are needed.
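If you want a quick reference before opening the notebook, the whole tabular Q-learning loop fits in a few lines. The sketch below is illustrative, assuming the classic Gym API (env.step returning four values) and placeholder hyperparameters; switching to SARSA only means bootstrapping from the action you actually take next instead of the greedy one.

```python
import gym
import numpy as np

# Minimal tabular Q-learning sketch for FrozenLake (hyperparameters are illustrative).
env = gym.make("FrozenLake-v1")
Q = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, eps = 0.1, 0.99, 0.1

for episode in range(5000):
    state, done = env.reset(), False
    while not done:
        # epsilon-greedy action selection
        action = env.action_space.sample() if np.random.rand() < eps else np.argmax(Q[state])
        next_state, reward, done, _ = env.step(action)
        # Q-learning update: bootstrap from the greedy action in the next state
        target = reward + gamma * np.max(Q[next_state]) * (not done)
        Q[state, action] += alpha * (target - Q[state, action])
        state = next_state
```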

Other Resources


Week 3 - Value-based algorithms - DQN

This week we'll learn more advanced concepts and apply deep neural networks to Q-learning algorithms.

Lectures - Theory

Project of the Week - DQN and variants

DQN and some variants applied to Pong - This week the goal is to develop a DQN algorithm to play an Atari game. To make it more interesting, I developed three extensions of DQN: Double Q-learning, Multi-step learning, and Noisy Nets. Play with them, and if you feel confident, you can implement Prioritized replay, Dueling networks, or Distributional RL. To learn more about these improvements, read the papers!
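If you get stuck, it helps to keep the core update in front of you. The sketch below is not the repository's code, just a minimal PyTorch outline of a Q-network and the TD loss computed on a replay batch; layer sizes and the discount factor are placeholders.

```python
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Small fully connected Q-network (placeholder sizes; Atari frames would use a CNN)."""
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, n_actions))

    def forward(self, x):
        return self.net(x)

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    # batch tensors come from a replay buffer; actions is a LongTensor, dones a float tensor
    obs, actions, rewards, next_obs, dones = batch
    q_values = q_net(obs).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Vanilla DQN target; Double DQN would pick the argmax action with q_net instead
        next_q = target_net(next_obs).max(dim=1).values
        target = rewards + gamma * next_q * (1 - dones)
    return nn.functional.mse_loss(q_values, target)
```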


Papers

Must Read
Extensions of DQN

Other Resources


Week 4 - Policy gradient algorithms - REINFORCE & A2C

Week 4 introduces policy gradient methods, a class of algorithms that optimize the policy directly. You'll also learn about actor-critic algorithms, which combine the policy gradient (the actor) with a value function (the critic).

Lectures - Theory

  • Policy gradient Methods - David Silver (DeepMind)
    • Finite Difference Policy Gradient
    • Monte-Carlo Policy Gradient
    • Actor-Critic Policy Gradient
  • Policy gradient intro - Sergey Levine (RECAP, optional)
    • Policy Gradient (REINFORCE and Vanilla PG)
    • Variance reduction
  • Actor-Critic - Sergey Levine (More in depth)
    • Actor-Critic
    • Discount factor
    • Actor-Critic algorithm design (batch mode or online)
    • State-dependent baseline

Project of the Week - Vanilla PG and A2C

Vanilla PG and A2C applied to CartPole - This week's exercise is to implement a policy gradient method or a more sophisticated actor-critic. In the repository you can find implemented versions of PG and A2C. Bug alert! Note that A2C gave me strange results. If you find the PG and A2C implementations easy, you can try the asynchronous version of A2C (A3C).
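As a sanity check for your own implementation, the heart of vanilla PG (REINFORCE) is just the negative log-probability of the chosen actions weighted by the returns. The sketch below is illustrative and assumes your policy network returns a torch.distributions.Categorical; for A2C you would swap the returns for critic-estimated advantages and add a value loss and an entropy bonus.

```python
import torch

def reinforce_loss(policy, observations, actions, returns):
    """REINFORCE loss for a batch of collected trajectories.

    `policy(observations)` is assumed to return a torch.distributions.Categorical;
    `returns` are the discounted returns (optionally baseline-subtracted and normalized).
    """
    log_probs = policy(observations).log_prob(actions)
    # Gradient ascent on expected return == gradient descent on -(log_prob * return)
    return -(log_probs * returns).mean()
```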

Papers

Other Resources


Week 5 - Advanced Policy Gradients - PPO

This week is about advanced policy gradient methods that improve the stability and the convergence of the "Vanilla" policy gradient methods. You'll learn and implement PPO, an RL algorithm developed by OpenAI and adopted in OpenAI Five.

Lectures - Theory

  • Advanced policy gradients - Sergey Levine (UC Berkeley)
    • Problems with "Vanilla" Policy Gradient Methods
    • Policy Performance Bounds
    • Monotonic Improvement Theory
    • Algorithms: NPO, TRPO, PPO
  • Natural Policy Gradients, TRPO, PPO - John Schulman (Berkeley DRL Bootcamp) - (RECAP, optional)
    • Limitations of "Vanilla" Policy Gradient Methods
    • Natural Policy Gradient
    • Trust Region Policy Optimization, TRPO
    • Proximal Policy Optimization, PPO

Project of the Week - PPO

PPO applied to BipedalWalker - This week you have to implement PPO or TRPO. I suggest PPO, given its simplicity compared to TRPO. In the project folder Week5 you can find an implementation of PPO that learns to play BipedalWalker. Furthermore, the folder contains other resources that will help you develop the project. Have fun!
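While reading the Week5 code, it can help to keep the clipped surrogate objective in front of you. The function below is a minimal, illustrative version (not necessarily how the repository structures it), assuming you already have new and old log-probabilities and advantage estimates for a batch.

```python
import torch

def ppo_clip_loss(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
    # Probability ratio between the current and the old policy
    ratio = torch.exp(new_log_probs - old_log_probs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Pessimistic (minimum) bound, averaged over the batch; negated for gradient descent
    return -torch.min(unclipped, clipped).mean()
```

The full algorithm adds a value-function loss, an entropy bonus, and several epochs of minibatch updates on the same rollout.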


To learn more about PPO, read the paper and take a look at the Arxiv Insights video.

Papers

Other Resources


Week 6 - Evolution Strategies and Genetic Algorithms - ES

In the last year, Evolution Strategies (ES) and Genetic Algorithms (GA) have been shown to achieve results comparable to RL methods. They are derivative-free, black-box algorithms that require more data than RL to learn, but are able to scale up across thousands of CPUs. This week we'll look at these black-box algorithms.

Lectures & Articles - Theory

Project of the Week - ES

Evolution Strategies applied to LunarLander - This week the project is to implement an ES or a GA. In the Week6 folder you can find a basic implementation of the paper Evolution Strategies as a Scalable Alternative to Reinforcement Learning that solves LunarLanderContinuous. You can modify it to play more difficult environments or add your own ideas.
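The OpenAI-style ES update itself is tiny: sample Gaussian perturbations of the parameters, evaluate each perturbed policy, and move the parameters along the noise weighted by the normalized returns. Below is a minimal NumPy sketch; the evaluate function, which should run one episode (e.g. on LunarLanderContinuous) and return its total reward, is an assumed placeholder.

```python
import numpy as np

def es_step(params, evaluate, pop_size=50, sigma=0.1, lr=0.02):
    """One Evolution Strategies update on a flat parameter vector."""
    noise = np.random.randn(pop_size, params.size)
    rewards = np.array([evaluate(params + sigma * eps) for eps in noise])
    # Normalizing keeps the update scale independent of the reward magnitude
    advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    grad_estimate = noise.T @ advantages / (pop_size * sigma)
    return params + lr * grad_estimate
```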


Papers

Other Resources


Week 7 - Model-Based reinforcement learning - MB-MF

The algorithms studied up to now are model-free, meaning that they only choose the best action given a state. These algorithms achieve very good performance but require a lot of training data. Model-based algorithms, in contrast, learn a model of the environment and plan the next actions according to the learned model. These methods are more sample-efficient than model-free ones but overall achieve worse performance. This week you'll learn the theory behind these methods and implement one of the latest algorithms.

Lectures - Theory

Project of the Week - MB-MF

MB-MF applied to RoboschoolAnt - This week I chose to implement the model-based algorithm described in this paper. You can find my implementation here. NB: instead of implementing it in Mujoco as in the paper, I used RoboSchool, an open-source robot simulator integrated with OpenAI Gym.
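The model-based half of the algorithm boils down to fitting a dynamics model on collected transitions and then planning with it (the paper uses random-shooting MPC). Below is a minimal, illustrative PyTorch sketch of such a model that predicts the state change; layer sizes are placeholders and this is not the repository's exact code.

```python
import torch
import torch.nn as nn

class DynamicsModel(nn.Module):
    """Learned dynamics model: predicts next_state from (state, action)."""
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, obs_dim))

    def forward(self, state, action):
        # Predicting the delta (rather than the raw next state) is the usual trick
        return state + self.net(torch.cat([state, action], dim=-1))

def dynamics_train_step(model, optimizer, states, actions, next_states):
    loss = nn.functional.mse_loss(model(states, actions), next_states)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```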


Papers

Other Resources


Week 8 - Advanced Concepts and Project Of Your Choice

This last week is about advanced RL concepts and a project of your choice.

Lectures - Theory

The final project

Here you can find some project ideas.

Other Resources


Last 4 days - Review + Sharing

Congratulations on completing the 60 Days RL Challenge! Let me know if you enjoyed it, and share it!

See you!

Best resources

📚 Reinforcement Learning: An Introduction - by Sutton & Barto. The "Bible" of reinforcement learning. Here you can find the PDF draft of the second edition.

📚 Deep Reinforcement Learning Hands-On - by Maxim Lapan

📚 Deep Learning - Ian Goodfellow

📺 Deep Reinforcement Learning - UC Berkeley class by Levine. Check out their site here.

📺 Reinforcement Learning course - by David Silver, DeepMind. Great introductory lectures by Silver, a lead researcher on AlphaGo. They follow the book Reinforcement Learning by Sutton & Barto.

Additional resources

📚 Awesome Reinforcement Learning. A curated list of resources dedicated to reinforcement learning.

📚 GroundAI on RL. Papers on reinforcement learning.

A cup of Coffee ☕

Any contribution is highly appreciated! Cheers!

paypal

reinforcement-learning's People

Contributors

andri27-ts, cjauvin, matteobarbieri, wgierke


reinforcement-learning's Issues

Are there any Prerequisites?

I am sure RL must come with a lot of prerequisites. Can you list them, please? It would really help people like me. 😸

Interested in WorldModels?

Hey! Me again ;)

I've seen that you plan on using model-based RL? Are you planning to go over an agent-imagined world, like in this post?

If so, I just finished a working VAE implementation, which is particularly useful for state representation.

A2C Suggestion

Hey,

Nice work in this repo! In your A2C example, I think you should:

  1. Make sure you initialize your agent with small weights (see the sketch below)
  2. As you said, avoid sharing parameters between the policy and the critic

Good luck!
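For point 1, one common way to do this in PyTorch (an illustrative snippet, not code from this repository) is to shrink the final policy layer's weights so the initial action distribution is close to uniform:

```python
import torch.nn as nn

def init_small_policy_head(layer: nn.Linear, scale: float = 0.01):
    # Small final-layer weights => near-uniform initial policy and more stable early updates
    nn.init.orthogonal_(layer.weight, gain=scale)
    nn.init.zeros_(layer.bias)
```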

RL in industry use case

Hi Andri, can you suggest something for a scenario where we have 5 or more parameters affecting a goal, and we want our system to learn the best values for these parameters to achieve the goal? The target value of the output variable changes every 2-3 days. What should we do in such scenarios?
