RL-SHEMS

This repository accompanies a publication recently accepted by Applied Energy; the publication will be linked here once it is online.

The publication is closely linked to another publication/repository of mine: https://github.com/lilanger/SHEMS

Langer, Lissy, and Thomas Volling. "An optimal home energy management system for modulating heat pumps and photovoltaic systems." Applied Energy 278 (2020): 115661. https://doi.org/10.1016/j.apenergy.2020.115661

Preprint available here: https://arxiv.org/abs/2009.02349

This repository takes the model predictive control (MPC) implementation of the above Smart Home Energy Management System (SHEMS) and translates it into a reinforcement learning (RL) environment. The different environments tested are implemented using the lightweight Julia package Reinforce.jl.
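
For orientation, here is a minimal sketch of what a Reinforce.jl environment looks like, assuming its standard interface (reset!, actions, step!, finished). The ToyEnv type and its dynamics below are purely illustrative and much simpler than the actual SHEMS environments in this repository, which model the home energy system (heat pump, PV, etc.) with a richer state.

```julia
using Reinforce

# Purely illustrative environment; the real SHEMS environments in this
# repository have a much richer state and transition logic.
mutable struct ToyEnv <: AbstractEnvironment
    state::Vector{Float64}
    reward::Float64
end
ToyEnv() = ToyEnv(zeros(2), 0.0)

# Minimal Reinforce.jl interface
Reinforce.reset!(env::ToyEnv) = (env.state .= 0.0; env.reward = 0.0; env)
Reinforce.actions(env::ToyEnv, s) = IntervalSet(-1.0, 1.0)  # continuous action range
Reinforce.finished(env::ToyEnv, s′) = false                 # episode length handled elsewhere

function Reinforce.step!(env::ToyEnv, s, a)
    env.state .+= a                      # toy transition
    env.reward = -sum(abs, env.state)    # toy reward: stay close to zero
    return env.reward, env.state
end
```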

The RL environment is solved using the deep deterministic policy gradient (DDPG) algorithm, implemented with the Julia package Flux.jl. You will find some other algorithms in the repository, but most of them will not work in their current state.
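
As a rough illustration of the actor-critic structure DDPG relies on, the sketch below builds an actor and a critic network in Flux. The layer sizes and state/action dimensions are made up for illustration only; the actual networks and hyperparameters are defined in the repository's input files.

```julia
using Flux

# Hypothetical dimensions for illustration; the real ones come from the
# SHEMS environment's state and action definitions.
state_dim, action_dim = 10, 2

# Actor: maps a state to a bounded continuous action (deterministic policy).
actor = Chain(Dense(state_dim, 64, relu), Dense(64, 64, relu), Dense(64, action_dim, tanh))

# Critic: scores a state-action pair with a single Q-value.
critic = Chain(Dense(state_dim + action_dim, 64, relu), Dense(64, 64, relu), Dense(64, 1))

s = rand(Float32, state_dim)
a = actor(s)                 # action proposed by the actor
q = critic(vcat(s, a))       # critic's Q(s, a) estimate
```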

HOW IT WORKS (some hints)

Loading the right environment

The repository contains Manifest and Project files so that the same Julia package versions can be installed. Julia version 1.6.1 is used.
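
To reproduce that package environment, a standard approach is to activate the project and instantiate the Manifest from the repository root, for example:

```julia
# Run from the repository root with Julia 1.6.1
using Pkg
Pkg.activate(".")     # use the repository's Project.toml/Manifest.toml
Pkg.instantiate()     # install the exact pinned package versions
```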

On a cluster

When running the model on a cluster, the job files can be used; the default is for a GPU (jobfile_ddpg_v12), but there is also a CPU version (jobfile_ddpg_v12_cpu). For example, 40 parallel model runs can then be started using the bash command: qsub -t 1-40:1 jobfile_ddpg_v12.job

Workflow

  • In general, all input data is fed from the input.jl file; templates for the different algorithms are available. The input files of previous runs are saved in out/input.
  • The general workflow is defined in DDPG_reinforce_v12_nf.
  • The folder algorithms contains the code for the DDPG implementation.
  • The folder Reinforce.jl... contains the RL environments and the file to embed them in the Reinforce package. The environments used in the paper are Case A: H10, Case B: H9, and Case C: U8.
  • The Analysis-cases folder contains the result analysis of the cases run and the results of the main runs illustrated in the paper.
  • The data folder contains the input data of the RL environment.
  • The out folder contains the results of the model runs. I have, however, not uploaded all results, only the most recent ones, to save space.

I tried to add some comments in the code so that other people are able to understand what is going on. I hope I was somewhat successful. If you have questions, just raise an issue and I will try to help.
