
wind-farm-env


This repository contains the source code and data for the experiments presented in Deep Reinforcement Learning for Active Wake Control (to be published at AAMAS'22).

How to Use This Repository

Replicating the Experiments

Because each experiment has many parameters, we describe experiments with configuration files. The configuration files for the experiments presented in the paper can be found in ./code/configs: action_representations_*.yml for the action-representation experiment (Section 4.1), and noisy_*.yml for the noisy-observation experiment (Section 4.2). You can also create new configuration files to run your own experiments.

Make sure that you are using Python 3.8 or newer. You also need to install the required packages by running python3 -m pip install -r requirements.txt from the project's folder.
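
If you want to double-check the interpreter version before installing anything, the following quick check is a minimal sketch based on the 3.8 requirement above:

import sys

# The experiments require Python 3.8 or newer (see above)
if sys.version_info < (3, 8):
    raise RuntimeError('Python 3.8 or newer is required, found ' + sys.version.split()[0])
print('Python version OK:', sys.version.split()[0])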

To replicate the experiments in the paper, run python3 main.py --config CONFIG_FILE from the code directory, where CONFIG_FILE is the path to a configuration file. The output data is written in TensorBoard format to <current-path>/<directory>/<name>/..., where <current-path> is the path the script was executed from, and <directory> and <name> correspond to parameters specified in the configuration file. You can use ./code/tensorboard_to_csv.py --path <PATH> to convert the output from <PATH> to a .csv file.
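
The conversion script produces a plain .csv file, so the results can be inspected with any data-analysis tool. As an illustration only, here is a minimal sketch using pandas (the file name results.csv is hypothetical; use the path written by tensorboard_to_csv.py):

import pandas as pd

# Hypothetical file name; replace it with the .csv file produced by tensorboard_to_csv.py
df = pd.read_csv('results.csv')

# Inspect which metrics were logged and preview the first rows
print(df.columns.tolist())
print(df.head())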

Building from the Source Code

If you are interested in using the wind farm environment with your own reinforcement learning agents, you can build and install a Python package that will make the environment available on your machine.

First, make sure that you have build installed by running python3 -m pip install --upgrade build.

Next, build the package by running python3 -m build from the project's directory. On Linux-based systems you may be asked to install python3-venv (e.g., apt install python3-venv).

This will create ./dist with a wind_farm_gym-<VERSION>-py3-none-any.whl file in it, where <VERSION> is the current release version.

Finally, install the package by running python3 -m pip install wind_farm_gym-<VERSION>-py3-none-any.whl, or python3 -m pip install wind_farm_gym-<VERSION>-py3-none-any.whl --force-reinstall if you want to overwrite an existing installation. This should install the package and its dependencies.

To test that the package is available and working, you can run the following example, also available in build-test.py. A simulation window should appear on your screen, showing an overhead view of a three-turbine wind farm.

from wind_farm_gym import WindFarmEnv

# Initialize the environment with 3 turbines positioned 750 meters apart in a line
env = WindFarmEnv(turbine_layout=([0, 750, 1500], [0, 0, 0]))

obs = env.reset()
for _ in range(1000):                # Repeat for 1000 steps
    a = env.action_space.sample()    # Choose an action randomly
    obs, reward, _, _ = env.step(a)  # Perform the action
    env.render()                     # Render the environment; remove this line to speed up the process
env.close()

Figure: an overhead view of the simulated three-turbine wind farm.
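
The same loop can be used to plug in your own controller instead of random actions. The sketch below is purely illustrative (the RandomAgent class is not part of the package); replace its act method with your own reinforcement learning agent:

from wind_farm_gym import WindFarmEnv

class RandomAgent:
    # Illustrative baseline; replace act with your own policy
    def __init__(self, action_space):
        self.action_space = action_space

    def act(self, observation):
        # A learning agent would map the observation to an action here
        return self.action_space.sample()

env = WindFarmEnv(turbine_layout=([0, 750, 1500], [0, 0, 0]))
agent = RandomAgent(env.action_space)

obs = env.reset()
for _ in range(1000):
    a = agent.act(obs)                 # Query the agent for an action
    obs, reward, _, _ = env.step(a)    # Perform the action
env.close()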

Contents

  • ./code contains the source code:
    • agent for the implementations of the RL agents;
    • configs for the configuration files used in the experiments;
    • wind_farm_gym for the environment itself.
  • ./data contains the datasets used in the paper:
    • wind_data contains the measurements from Hollandse Kust Noord (site B) used to estimate the transition model in Section 3.4;
    • the other subdirectories contain the results of the experiments:
      • action_representations_tunnel for the experiment presented in Section 4.1;
      • noisy_1 ... noisy_7 for the experiment presented in Section 4.2; each folder corresponds to a different noise level, from 1% to 7%.
  • supplement.pdf is the supplementary material submitted with the paper.

Citation

Please cite the paper if you use this repository:

@inproceedings{Neustroev2022,
  title     = {Deep Reinforcement Learning for Active Wake Control},
  author    = {Neustroev, Grigory and Andringa, Sytze P.E. and Verzijlbergh, Remco A. and de~Weerdt, Mathijs M.},
  booktitle = {International Conference on Autonomous Agents and Multi-Agent Systems},
  year      = {2022},
  address   = {Online},
  publisher = {IFAAMAS},
  month     = {May},
  numpages  = {10}
}

