

Competition: Fostering Global Cooperation to Mitigate Climate Change

PyTorch 1.9.0 | Python 3.7 | WarpDrive 1.7.0 | Ray 1.0.0 | Paper | Code Tutorial

(Code Tutorial Notebook on Kaggle with free GPU available)

This is the code repository for the competition on modeling global cooperation in the RICE-N Integrated Assessment Model, co-organized by MILA and Salesforce Research.

The RICE-N IAM is an agent-based model that incorporates DICE climate-economic dynamics and multilateral negotiation protocols between several fictitious nations.

In this competition, you will design negotiation protocols and contracts between nations. You will use the simulation and agents to evaluate their impact on the climate and the economy.

We recommend that GPU users use warp_drive and CPU users use rllib.

Tutorial

Resources

Installation

Notice: we recommend using Linux or macOS. For Windows, we recommend using a virtual machine running Ubuntu 20.04, or the Windows Subsystem for Linux.

You can get a copy of the code by cloning the repo using Git:

git clone https://github.com/mila-iqia/climate-cooperation-competition
cd climate-cooperation-competition

As an alternative, one can also use:

git clone https://e.coding.net/ai4climatecoop/ai4climatecoop/climate-cooperation-competition.git
cd climate-cooperation-competition

We recommend using a virtual environment (such as provided by virtualenv or Anaconda).

You can install the dependencies using pip:

pip install -r requirements.txt

Get Started

Then run the getting-started Jupyter notebook by starting Jupyter:

jupyter notebook

and then navigating to getting_started.ipynb.

It provides a quick walkthrough for registering for the competition and creating a valid submission.

Training with reinforcement learning

RL agents can be trained with the RICE-N simulation using one of two frameworks:

  1. RLlib: the Python environment can be trained on your local CPU machine using the open-source RL framework RLlib.
  2. WarpDrive: a GPU-based framework that enables over 10x faster training compared to CPU-based training. It requires the simulation to be written in CUDA C; we provide a starter version of the simulation environment in CUDA C (rice_step.cu).

We also provide starter scripts to train the simulation you build with either of the above frameworks.

Note that we only allow these two options, since our backend submission evaluation process only supports these at the moment.
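
Whichever framework you pick, training interacts with the simulation through a gym-style reset/step loop. The toy environment below is purely illustrative (the real rice.py environment defines its own observation and action structure; all names here are hypothetical):

```python
import random

class ToyClimateEnv:
    """Minimal gym-style multi-agent environment sketch (NOT the real RICE-N env)."""

    def __init__(self, num_agents=3, episode_length=20):
        self.num_agents = num_agents
        self.episode_length = episode_length
        self.timestep = 0

    def reset(self):
        """Start a new episode and return one observation per agent id."""
        self.timestep = 0
        return {agent_id: [0.0, 0.0] for agent_id in range(self.num_agents)}

    def step(self, actions):
        """Advance one step; return (obs, rewards, done, info) dicts."""
        self.timestep += 1
        obs = {a: [random.random(), self.timestep / self.episode_length]
               for a in range(self.num_agents)}
        rewards = {a: -abs(actions[a]) for a in range(self.num_agents)}
        done = {"__all__": self.timestep >= self.episode_length}
        return obs, rewards, done, {}

# One episode with random actions:
env = ToyClimateEnv()
obs = env.reset()
done = {"__all__": False}
while not done["__all__"]:
    actions = {a: random.choice([-1, 0, 1]) for a in range(env.num_agents)}
    obs, rewards, done, info = env.step(actions)
```

Both RLlib and WarpDrive consume environments with this general reset/step shape; WarpDrive additionally needs the step logic mirrored in CUDA C.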

For training with RLlib, the rllib (1.0.0), torch (1.9.0), and gym (0.21) packages are required.

For training with WarpDrive, the rl-warp-drive (>=1.6.5) package is needed.

Note that these requirements are automatically installed (or updated) when you run the corresponding training scripts.
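
If you want to verify the pinned versions yourself before launching a run, a small check along these lines works on Python 3.8+ (Python 3.7 users can use the importlib-metadata backport); the package list simply mirrors the pins above:

```python
from importlib import metadata

# Version pins from the README; adjust to match requirements.txt.
REQUIRED = {"torch": "1.9.0", "gym": "0.21", "ray": "1.0.0"}

def check_versions(required=REQUIRED):
    """Return {package: installed version or None if not installed}."""
    found = {}
    for pkg in required:
        try:
            found[pkg] = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            found[pkg] = None
    return found

for pkg, version in check_versions().items():
    print(f"{pkg}: {version or 'NOT INSTALLED'}")
```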

Docker image (for GPU users)

We have also provided a sample dockerfile for your reference. It mainly uses an Nvidia PyTorch base image and also installs the pycuda package. Note: pycuda is only required if you would like to train using WarpDrive.

Docker image (for CPU users)

Thanks to a contribution from @muxspace, we also have an end-to-end Docker environment ready for CPU users. Please refer to README_CPU.md for more details.

Customizing and running the simulation

See Colab_Tutorial.ipynb (Open in Colab) for details.

It provides examples on modifying the code to implement different negotiation protocols. It describes ways of changing the agent observations and action spaces corresponding to the proposed negotiation protocols and implementing the negotiation logic in the provided code.

The notebook has a walkthrough of how to train RL agents with the simulation code and how to visualize results from the simulation after running it with a set of agents.
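
As a very rough sketch of the kind of action-space extension the tutorial walks through (all names below are hypothetical and do not reflect the actual rice.py implementation), a negotiation protocol might append discrete proposal and accept/reject heads to each agent's base action heads:

```python
class NegotiationActionSpace:
    """Hypothetical sketch: extend each agent's discrete action heads
    with proposal and accept/reject heads for a negotiation protocol."""

    def __init__(self, num_agents, base_action_sizes, num_proposal_levels=10):
        self.num_agents = num_agents
        # Base simulation actions (e.g. savings and mitigation rates),
        # plus one outgoing proposal level per other agent,
        # plus one binary accept/reject per incoming proposal.
        self.action_sizes = (
            list(base_action_sizes)
            + [num_proposal_levels] * (num_agents - 1)
            + [2] * (num_agents - 1)
        )

space = NegotiationActionSpace(num_agents=3, base_action_sizes=[10, 10])
```

The observation space would be extended symmetrically, so each agent can see the proposals it has received before deciding whether to accept them.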

If you have limited access to Colab, you can use the free GPUs on Kaggle. Note that Kaggle requires mobile phone verification before GPUs can be used. After opening the link above and logging in, you can find the settings to enable the GPU and internet access on the right-hand side.

Training RL agents in your simulation

Once you build your simulation, you can use either of the following scripts to perform training.

  • train_with_rllib.py: this script performs end-to-end training with RLlib. The experiment run configuration is read from rice_rllib.yaml, which contains the environment configuration, logging and saving settings, and the trainer and policy network parameters. The duration of training can be set via the num_episodes parameter. We have also provided an initial implementation of a linear PyTorch policy model in torch_models.py. You can add other policy models you wish to use to that file.

USAGE: The training script (with RLlib) is invoked using (from the root directory)

    python scripts/train_with_rllib.py
  • train_with_warp_drive.py: this script performs end-to-end training with WarpDrive. The experiment run configuration is read from rice_warpdrive.yaml. Currently, WarpDrive supports just the Advantage Actor-Critic (A2C) and Proximal Policy Optimization (PPO) algorithms, and the fully-connected policy model.

USAGE: The training script (with WarpDrive) is invoked using

    python scripts/train_with_warpdrive.py
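
For intuition about the linear policy model mentioned above, here is a dependency-free sketch of the idea (logits = W·obs + b, softmax over actions, then sample); the actual torch_models.py implementation is a PyTorch module and differs in detail:

```python
import math
import random

class LinearPolicy:
    """Conceptual sketch of a linear policy (NOT the torch_models.py code)."""

    def __init__(self, obs_dim, num_actions, seed=0):
        rng = random.Random(seed)
        # Small random weights, zero bias: one row of weights per action.
        self.weights = [[rng.gauss(0.0, 0.1) for _ in range(obs_dim)]
                        for _ in range(num_actions)]
        self.bias = [0.0] * num_actions

    def action_probs(self, obs):
        """Compute softmax(W @ obs + b) with the usual max-shift for stability."""
        logits = [sum(w * x for w, x in zip(row, obs)) + b
                  for row, b in zip(self.weights, self.bias)]
        m = max(logits)
        exps = [math.exp(l - m) for l in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def act(self, obs, rng=random):
        """Sample an action index from the policy distribution."""
        probs = self.action_probs(obs)
        return rng.choices(range(len(probs)), weights=probs)[0]

policy = LinearPolicy(obs_dim=4, num_actions=3)
action = policy.act([1.0, 0.0, -1.0, 0.5])
```

During RL training, the weights are updated so that actions leading to higher episode rewards become more probable.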

As training progresses, some key metrics (such as the mean episode reward) are printed on screen for your reference. At the end of training, a zipped submission file is automatically created and saved. The zipped file comprises the following:

  • An identifier file (.rllib or .warpdrive) indicating which framework was used for training.
  • The environment files - rice.py and rice_helpers.py.
  • A copy of the yaml configuration file (rice_rllib.yaml or rice_warpdrive.yaml) used for training.
  • PyTorch policy model(s) (of type ".state_dict") containing the trained weights for the policy network(s). Only the trained policy model for the final timestep will be copied over into the submission zip. If you would like to instead submit the trained policy model at a different timestep, please see the section below on creating your submission file.
  • For submissions using WarpDrive, the submission will also contain CUDA-specific files rice_step.cu and rice_cuda that were used for training.
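
The packaging step itself is essentially a zip of those files. As an illustrative sketch (the helper name and paths here are hypothetical; the training scripts do this for you automatically):

```python
import os
import tempfile
import zipfile

def make_submission_zip(out_path, files):
    """Bundle the given files flat into a submission zip (illustrative sketch)."""
    with zipfile.ZipFile(out_path, "w") as zf:
        for path in files:
            zf.write(path, arcname=os.path.basename(path))
    return out_path

# Example with empty placeholder files standing in for the real artifacts:
tmp = tempfile.mkdtemp()
names = [".rllib", "rice.py", "rice_helpers.py", "rice_rllib.yaml"]
paths = []
for name in names:
    p = os.path.join(tmp, name)
    open(p, "w").close()
    paths.append(p)
make_submission_zip(os.path.join(tmp, "submission.zip"), paths)
```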

Contributing

We are always looking for contributors from various domains to help us make this simulation more realistic.

If there are bugs or corner cases, please open a PR detailing the issue and consider submitting to Track 3!

Citation

To cite this code, please use the information in CITATION.cff and the following bibtex entry:

@software{Zhang_RICE-N_2022,
author = {Zhang, Tianyu and Srinivasa, Sunil and Williams, Andrew and Phade, Soham and Zhang, Yang and Gupta, Prateek and Bengio, Yoshua and Zheng, Stephan},
month = {7},
title = {{RICE-N}},
url = {https://github.com/mila-iqia/climate-cooperation-competition},
version = {1.0.0},
year = {2022}
}

@misc{https://doi.org/10.48550/arxiv.2208.07004,
  doi = {10.48550/ARXIV.2208.07004},
  url = {https://arxiv.org/abs/2208.07004},
  author = {Zhang, Tianyu and Williams, Andrew and Phade, Soham and Srinivasa, Sunil and Zhang, Yang and Gupta, Prateek and Bengio, Yoshua and Zheng, Stephan},
  title = {AI for Global Climate Cooperation: Modeling Global Climate Negotiations, Agreements, and Long-Term Cooperation in RICE-N},  
  publisher = {arXiv},
  year = {2022}
}

License

For license information, see LICENSE.txt.

climate-cooperation-competition's People

Contributors

andrewrwilliams, brenting, muxspace, pg2455, sohamphade, stephanzheng, sunil-s, tianyu-z


climate-cooperation-competition's Issues

cannot run `train_with_warp_drive`

I hit a few errors when running train_with_warp_drive.

First, `from scripts.run_unittests import import_class_from_path` fails, complaining that `scripts` cannot be found as a module, so I removed the `scripts.` prefix. The Python path of this library could use some cleanup.

Second, when I run the training script, the first run is fine, but from the second run onward I consistently get the error below. I believe it comes from the package importer; can you check on that?

Traceback (most recent call last):
  File "/home/user/miniconda/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2191, in _handle_ns
    loader = importer.find_spec(packageName).loader
  File "<frozen importlib._bootstrap_external>", line 1391, in find_spec
  File "<frozen importlib._bootstrap_external>", line 59, in _path_join
  File "<frozen importlib._bootstrap_external>", line 59, in <listcomp>
AttributeError: 'PosixPath' object has no attribute 'rstrip'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train_with_warp_drive.py", line 52, in <module>
    other_imports = perform_other_imports()
  File "train_with_warp_drive.py", line 34, in perform_other_imports
    import torch
  File "/home/tian-lan/miniconda/lib/python3.7/site-packages/torch/__init__.py", line 29, in <module>
    from .torch_version import __version__ as __version__
  File "/home/tian-lan/miniconda/lib/python3.7/site-packages/torch/torch_version.py", line 3, in <module>
    from pkg_resources import packaging  # type: ignore[attr-defined]
  File "/home/tian-lan/miniconda/lib/python3.7/site-packages/pkg_resources/__init__.py", line 3238, in <module>
    @_call_aside
  File "/home/tian-lan/miniconda/lib/python3.7/site-packages/pkg_resources/__init__.py", line 3222, in _call_aside
    f(*args, **kwargs)
  File "/home/tian-lan/miniconda/lib/python3.7/site-packages/pkg_resources/__init__.py", line 3266, in _initialize_master_working_set
    for dist in working_set
  File "/home/tian-lan/miniconda/lib/python3.7/site-packages/pkg_resources/__init__.py", line 3266, in <genexpr>
    for dist in working_set
  File "/home/tian-lan/miniconda/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2780, in activate
    declare_namespace(pkg)
  File "/home/tian-lan/miniconda/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2279, in declare_namespace
    _handle_ns(packageName, path_item)
  File "/home/tian-lan/miniconda/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2196, in _handle_ns
    loader = importer.find_module(packageName)
  File "<frozen importlib._bootstrap_external>", line 431, in _find_module_shim
  File "<frozen importlib._bootstrap_external>", line 1346, in find_loader
  File "<frozen importlib._bootstrap_external>", line 1391, in find_spec
  File "<frozen importlib._bootstrap_external>", line 59, in _path_join
  File "<frozen importlib._bootstrap_external>", line 59, in <listcomp>
AttributeError: 'PosixPath' object has no attribute 'rstrip'
