
Deep Reinforcement Learning Toolbox for Robotics using Keras and TensorFlow

License: MIT License


RL-botics

RL-botics is a toolbox of highly optimized implementations of Deep Reinforcement Learning algorithms for robotics, developed with Keras and TensorFlow in Python 3.

The objective is to provide a modular, clean, and easy-to-read codebase that the research community can easily build on. The implementations can be integrated with OpenAI Gym environments. The majority of the algorithms are policy search methods, as the toolbox is targeted at robotic applications.

Requirements

Requirements:

Conda Environment

It is highly recommended to install this package in a virtual environment, such as Miniconda. Please find the Conda installation here.

To create a new conda environment called RL:

conda create -n RL python=3

To activate the environment:

source activate RL

To deactivate the environment:

source deactivate

Installation

To install the package, we recommend cloning the original package:

git clone https://github.com/Suman7495/rl-botics.git
cd rl-botics
pip install -e .

Usage

To run any algorithm in the default setting, simply run:

cd rl_botics/<algo>/
python main.py

For example, to run TRPO:

cd rl_botics/trpo/
python main.py

Numerous other options can be passed on the command line, but it is recommended to modify the hyperparameters in hyperparameters.py instead.
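As an illustration, a hyperparameters.py file typically collects the tunable constants in one place. The names and defaults below are a hypothetical sketch, not the toolbox's actual parameter set:

```python
# Hypothetical hyperparameters.py sketch. The actual parameter
# names and default values used by rl-botics may differ.

# Environment
env_name = 'CartPole-v0'   # OpenAI Gym environment id

# Training
maxiter = 1000             # number of training iterations
gamma = 0.99               # discount factor
lam = 0.95                 # GAE lambda
pi_lr = 3e-4               # policy learning rate
v_lr = 1e-3                # value function learning rate
```

Editing values here keeps runs reproducible, since `python main.py` picks up the same defaults every time.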

Algorithms

The algorithms implemented are:

To be added:

Environments

All environments are in the envs directory. The environments available currently are:

  • Field Vision Rock Sampling (FVRS): A POMDP environment where the agent has to collect good rocks under partial observability.
  • Table Continuous: A POMDP environment emulating Human-Robot Collaboration. The objective of the robot is to remove dirty dishes from the table without colliding with the human.
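The environments follow the standard Gym-style reset/step interface. The toy class below is a self-contained sketch of that interface loosely inspired by the rock-sampling setting; it is not the actual FVRS implementation, and all names are hypothetical:

```python
import random

class ToyRockEnv:
    """Minimal stand-in showing the reset/step interface that
    Gym-style environments expose. NOT the real FVRS environment."""

    def __init__(self, n_rocks=3):
        self.n_rocks = n_rocks

    def reset(self):
        # Each rock is good or bad; the agent cannot observe this directly.
        self.good = [random.random() < 0.5 for _ in range(self.n_rocks)]
        self.idx = 0
        return self._obs()

    def _obs(self):
        # Partial observability: only the agent's position is visible.
        return self.idx

    def step(self, action):
        # action 1 = sample the current rock, action 0 = skip it
        reward = 1.0 if (action == 1 and self.good[self.idx]) else 0.0
        self.idx += 1
        done = self.idx >= self.n_rocks
        return self._obs(), reward, done, {}

env = ToyRockEnv()
obs, done, steps = env.reset(), False, 0
while not done:
    obs, reward, done, info = env.step(1)
    steps += 1
```

Because the agents only rely on this interface, swapping one environment for another requires no changes to the algorithm code.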

Toolbox Structure

All the algorithms are in the rl_botics directory. Each algorithm specified above has an individual directory.

Common

The directory common contains common modular classes to easily build new algorithms.

  • approximators: Basic deep neural networks (Dense, Conv, LSTM).
  • data_collection: Performs rollouts and collects observations and rewards.
  • logger: Logs training data and other information.
  • plotter: Plots graphs.
  • policies: Common policies such as Random, Softmax, Parametrized Softmax, and Gaussian.
  • utils: Functions to compute the expected return, the Generalized Advantage Estimation (GAE), etc.
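As an example of the kind of helper utils provides, the standard GAE computation (Schulman et al.) can be written as below. This is a generic sketch of the technique, not the toolbox's exact function signature:

```python
import numpy as np

def gae(rewards, values, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation.

    rewards: array of per-step rewards, length T.
    values:  array of value estimates with one extra bootstrap
             entry, length T + 1.
    Returns the advantage for each step.
    """
    rewards = np.asarray(rewards, dtype=np.float64)
    values = np.asarray(values, dtype=np.float64)
    # TD residuals: delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
    deltas = rewards + gamma * values[1:] - values[:-1]
    adv = np.zeros_like(rewards)
    running = 0.0
    # Backward recursion: A_t = delta_t + gamma * lam * A_{t+1}
    for t in reversed(range(len(rewards))):
        running = deltas[t] + gamma * lam * running
        adv[t] = running
    return adv
```

With `gamma = lam = 1` and zero value estimates, the advantages reduce to undiscounted returns-to-go, which is a quick sanity check.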

Algorithm Directories

Each algorithm directory contains at least 3 files:

  • main.py: Main script to run the algorithm
  • hyperparameters.py: File to contain the default hyperparameters
  • <algo>.py: Implementation of the algorithm
  • utils.py: (Optional) File containing some utility functions

Some algorithm directories may have additional files specific to the algorithm.

Contributing

To contribute to this package, it is recommended to follow this structure:

  • The new algorithm directory should at least contain the 3 files mentioned above.
  • main.py should contain at least the following functions:
    • main: Parses input arguments, builds the environment and agent, and trains the agent.
    • argparse: Parses input arguments and loads default hyperparameters from hyperparameters.py.
  • <algo>.py should contain at least the following methods:
    • __init__: Initializes the class
    • _build_graph: Calls the following methods to build the TensorFlow graph:
      • _init_placeholders: Initialize TensorFlow placeholders
      • _build_policy: Build policy TensorFlow graph
      • _build_value_function: Build value function TensorFlow graph
      • _loss: Build policy loss function TensorFlow graph
    • train: Main training loop called by main.py
    • update_policy: Update the policy
    • update_value: Update the value function
    • print_results: Print the training results
    • process_paths: (optional) Process collected trajectories to return the feed dictionary for TensorFlow
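The structure above can be sketched as a class skeleton. The method bodies below are placeholders so the shape is visible at a glance; this is a hypothetical outline, not the toolbox's actual implementation:

```python
# Hypothetical skeleton for a new <algo>.py contribution,
# following the structure described above.
class NewAlgo:
    def __init__(self, env, hyperparams):
        """Initializes the class and builds the TF graph."""
        self.env = env
        self.hparams = hyperparams
        self.updates = 0
        self._build_graph()

    def _build_graph(self):
        self._init_placeholders()
        self._build_policy()
        self._build_value_function()
        self._loss()

    def _init_placeholders(self):
        pass  # define TensorFlow placeholders here

    def _build_policy(self):
        pass  # build the policy network graph

    def _build_value_function(self):
        pass  # build the value function graph

    def _loss(self):
        pass  # build the policy loss graph

    def train(self):
        """Main training loop, called from main.py."""
        for _ in range(self.hparams.get('maxiter', 1)):
            self.update_policy()
            self.update_value()
        self.print_results()

    def update_policy(self):
        self.updates += 1  # placeholder for a policy update step

    def update_value(self):
        pass  # placeholder for a value function update step

    def print_results(self):
        pass  # report training statistics
```

Keeping new algorithms to this shape makes them interchangeable from main.py's point of view.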

It is recommended to check the structure of ppo.py and follow a similar structure.

Credits

Suman Pal

License

MIT License.

