
rl_matrix's Introduction

RLMatrixBanner

🚀 RL Matrix - Pure C# Deep Reinforcement Learning Experience with TorchSharp!

Dive into the future of type-safe Deep Reinforcement Learning with .NET & RL Matrix, powered by the might of TorchSharp. RL Matrix stands out as a user-friendly toolkit offering a collection of RL algorithms, primed for plug, play, and prosper!

🌟 Featured Algorithms:

  • PPO
  • DQN
  • Both have 1D (feed-forward) and 2D (CNN) variants
  • 0.1.2 adds multi-head continuous (PPO), discrete (PPO, DQN) and mixed (PPO) actions. See IEnvironment and IContinuousEnvironment.
  • 0.2.0 adds a working (experimental) PPO GAIL implementation and overhauls the training method to be stepwise
  • 0.2.0 adds multi-environment training
  • 0.2.0 includes Godot examples and the RLMatrix.Godot NuGet package for easy setup
  • Only single-head discrete output has been tested, so please open an issue if anything else doesn't work.

🎯 What Sparks RL Matrix?

While embarking on my RL journey, I sensed a gap in the reinforcement learning world even with TorchSharp's solid foundation. It struck me: C# is the ideal choice for RL outside research circles, thanks to its pristine and intuitive coding experience. No more guessing games in environment or agent building!

With RL Matrix, our vision is to offer a seamless experience. By simply implementing the IEnvironment interface, you're equipped to rapidly craft and unleash Reinforcement Learning Agents (RL Agents). Switching between algorithms? A breeze! It's our nod to the elegance of Matlab's toolkit methodology.


🚀 Getting Started:

Peek into the /examples/ directory for illustrative code. But to give you a quick start:

  1. Craft an IEnvironment class: Comply with reinforcement learning guidelines, defining your observation shapes and action count:
using OneOf; // stateSize uses OneOf<int, (int, int)>

public class CartPole : IEnvironment<float[]>
{
  public int stepCounter { get; set; }
  public int maxSteps { get; set; }
  public bool isDone { get; set; }
  public OneOf<int, (int, int)> stateSize { get; set; }
  public int actionSize { get; set; }

  CartPoleEnv myEnv;

  private float[] myState;

  public CartPole()
  {
      Initialise();
  }

  public float[] GetCurrentState()
  {
      if (myState == null)
          myState = new float[4] {0,0,0,0};
      return myState;
  }

  public void Initialise()
  {
      myEnv = new CartPoleEnv(WinFormEnvViewer.Factory);
      stepCounter = 0;
      maxSteps = 100000;
      stateSize = myEnv.ObservationSpace.Shape.Size;
      actionSize = myEnv.ActionSpace.Shape.Size;
      myEnv.Reset();
      isDone = false; 
  }

  public void Reset()
  {
  //For instance:
      myEnv.Reset();
      isDone = false;
      stepCounter = 0;
  }

  public float Step(int actionId)
  {
      // Apply the action, advance the simulation, and return the reward.
      stepCounter++;
      float reward = 0f; // compute this from your environment's step logic
      if (stepCounter >= maxSteps) isDone = true;
      return reward;
  }
}
  2. Agent Instance & Training: Spawn an agent for your environment and ignite the Step method:
var opts = new DQNAgentOptions(batchSize: 64, memorySize: 10000, gamma: 0.99f, epsStart: 1f, epsEnd: 0.05f, epsDecay: 50f, tau: 0.005f, lr: 1e-4f, displayPlot: myChart);
var env = new List<IEnvironment<float[]>> { new CartPole(), new CartPole() };
var myAgent = new DQNAgent<float[]>(opts, env);

for (int i = 0; i < 10000; i++)
{
    myAgent.Step();
}

Note that the TrainEpisode method was removed; training is now driven stepwise through Step().
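Because agents share the same environment contract, swapping algorithms is a small change. Here is a minimal sketch of switching the loop above from DQN to PPO, assuming the PPOAgentOptions, PPOAgent, and PPONetProviderBase signatures used elsewhere in this document; the network provider's (16, 2) arguments are illustrative hyperparameters, not canonical values:

```csharp
// Sketch: reuse the same environments with a PPO agent instead of DQN.
// Signatures follow the v0.2.0 usage shown elsewhere in this document;
// the (16, 2) network-provider arguments are assumed hyperparameters.
var ppoOpts = new PPOAgentOptions(lr: 1e-4f);
var ppoEnvs = new List<IEnvironment<float[]>> { new CartPole(), new CartPole() };
var ppoAgent = new PPOAgent<float[]>(ppoOpts, ppoEnvs, new PPONetProviderBase<float[]>(16, 2));

for (int i = 0; i < 10000; i++)
{
    ppoAgent.Step(); // same stepwise training loop as the DQN agent
}
```

The environment class itself needs no changes; only the options and agent types differ.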

📌 Current Roadmap:

  • Add RNN support for PPO and DQN
  • Add variations for multi-head output for PPO and DQN
  • More Godot examples testing multi-head continuous + discrete action spaces
  • Create a Godot plugin
  • Fully develop the workflow for GAIL and imitation learning

As we innovate, anticipate breaking changes. We'll keep you in the loop!

💌 Get in Touch:

Questions? Ideas? Collaborations? Drop a line at: 📧 [email protected]

🤝 License:

RLMatrix is licensed under the RLMatrix Comprehensive Dual License Agreement

rl_matrix's People

Contributors: asieradzk
rl_matrix's Issues

Reinforcement Learning The Identity Function

Context

I was having difficulty using reinforcement learning for more complex problems, so I made a simple project to test the learning potential. In the code below, I am trying to teach the Boolean identity function to the agent, but the agent gives random answers before and after training.

Code

using OneOf;
using RLMatrix; //NuGet v0.2.0

namespace ReinforcementLearning.Trivial;

internal class Program
{
    static void Main(string[] args)
    {
        var options = new PPOAgentOptions(lr: .005f);
        List<IEnvironment<float[]>> environments = [new TrivialEnvironment(), new TrivialEnvironment(), new TrivialEnvironment()];
        var agent = new PPOAgent<float[]>(options, environments, new PPONetProviderBase<float[]>(16, 2));

        Console.WriteLine($"Pre-training: {Evaluate(agent)}");

        for (int i = 0; i < 10000; i++)
        {
            Console.Write(i);
            agent.Step();
            Console.Write("\b\b\b\b\b\b\b\b\b\b\b\b");
        }

        Console.WriteLine($"Post-training: {Evaluate(agent)}");
    }

    private static float Evaluate(PPOAgent<float[]> agent, int count = 1000)
    {
        float[] state = new float[1];
        int correct = 0;
        for (int i = 0; i < count; i++)
        {
            state[0] = TrivialEnvironment.RandomValue();
            (int[] greedyDiscreteActions, float[] meanContinuousActions) = agent.SelectAction(state, false);
            if (state[0] == greedyDiscreteActions[0])
            {
                correct++;
            }
        }
        return (float)correct / count;
    }

    private sealed class TrivialEnvironment : IEnvironment<float[]>
    {
        public const float CorrectAnswerReward = 1;
        public const float WrongAnswerPenalty = -1;

        private readonly float[] state = [RandomValue()];

        public static int RandomValue()
        {
            return Random.Shared.Next(2);
        }

        public int stepCounter { get; set; }
        public int maxSteps { get; set; } = int.MinValue;
        public bool isDone { get; set; }
        public OneOf<int, (int, int)> stateSize { get; set; } = 1;
        public int[] actionSize { get; set; } = [2];

        public float[] GetCurrentState() => state;

        public TrivialEnvironment() => Initialise();

        public void Initialise() => Reset();

        public void Reset()
        {
            stepCounter = 0;
            isDone = false;
        }

        public float Step(int[] actionsIds)
        {
            float input = state[0];
            float output = actionsIds[0];

            state[0] = RandomValue();

            return input == output ? CorrectAnswerReward : WrongAnswerPenalty;
        }
    }
}

Am I using the library incorrectly?

How to save and load models trained using Godot

Hey there, I'm curious about how to save and load models trained using Godot. I've been experimenting with reinforcement learning algorithms, and now I'm at a point where I want to persist my trained models for future use. Can anyone guide me through the process? Any tips or examples would be greatly appreciated! Thanks in advance!

Train one observation at a time

Thank you for the RLMatrix.

I have a complex environment that cannot be simulated, in which interdependent observations are generated one at a time, so running training on an entire episode at once is not an option. I had to add a TrainObservation() based on your TrainEpisode() code. Please add this minor fix.

Thanks

Documentation question

Hello, I'm currently in the process of porting a (failed) TorchSharp agent over to RL Matrix. Honestly thanks so much for this library.
I'm just wondering if you're planning on using the TorchSharp-cuda-windows or TorchSharp-cuda-linux packages eventually? I was using cuda in my previous environment and was hoping it was in the plans

EDIT: I stepped through the debugger and saw that the tensors were already using cuda. I just assumed it didn't since I didn't see it in the README.

I guess while I'm here writing this, I was also wondering if there was an example for an IContinuousEnvironment<>? This is the environment I need since I don't need discrete actions, but the examples were all regular IEnvironment<> from what I saw. Maybe I missed it as well lol

I'm asking since I'm running into this error:
[error screenshot]

My env is extremely basic, I am trying to set up 3 inputs and 3 (continuous) outputs with 0 discrete outputs.

private sealed class Env : IContinuousEnvironment<float[]>
{
	public int stepCounter { get; set; }
	public int maxSteps { get; set; }
	public bool isDone { get; set; }
	public OneOf<int, (int, int)> stateSize { get; set; }
	public int[] actionSize { get; set; }
	public (float min, float max)[] continuousActionBounds { get; set; }

	private readonly float[] _inputs;

	public Env()
	{
		_inputs = new float[3];
		Initialise();
	}

	public void Initialise()
	{
		// new physics
		stepCounter = 0;
		maxSteps = 100_000;
		stateSize = 3;
		actionSize = [3];
		continuousActionBounds = [(-1f, 1f), (-1f, 1f), (-1f, 1f)];
		// physics reset
		isDone = false;
	}

	public float[] GetCurrentState()
	{
		_inputs[0] = 1f;
		_inputs[1] = 0.5f;
		_inputs[2] = -1f;
		return _inputs;
	}

	public void Reset()
	{
		// physics reset
		isDone = false;
		stepCounter = 0;
	}

	public float Step(int[] actionsIds)
	{
		throw new NotImplementedException();
	}
	public float Step(int[] discreteActions, float[] continuousActions)
	{
		return continuousActions[0];
	}
}

I also noticed stepCounter never increases from 0

Introduce Test Driven Development!

It is now possible to do test-driven development, creating Gym 2D and 3D RL environments for testing with the latest gdUnit4API TestAdaptor in VS2022.

