- Create the conda environment:
  conda create -n interpretable_marl python=3.7
- Activate the conda environment:
  conda activate interpretable_marl
- Install GPU dependencies:
  conda install -c conda-forge cudatoolkit=11.2.2 cudnn=8.1.0
  (see https://gretel.ai/blog/install-tensorflow-with-cuda-cdnn-and-gpu-support-in-4-easy-steps)
- Save the library path for the GPU dependencies so it is exported on every activation:
  mkdir -p $CONDA_PREFIX/etc/conda/activate.d
  echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/' > $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
- Close and reopen your terminal (or SSH session), then reactivate the conda environment if needed.
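After reopening the shell, a quick sanity check can confirm that the activation hook put $CONDA_PREFIX/lib on LD_LIBRARY_PATH and, if tensorflow is installed, that a GPU is visible. This is a minimal sketch, not part of the project:

```python
import importlib.util
import os

def conda_lib_on_path(environ=None):
    """Return True if $CONDA_PREFIX/lib appears on LD_LIBRARY_PATH."""
    environ = os.environ if environ is None else environ
    prefix = environ.get("CONDA_PREFIX", "")
    lib_dir = os.path.join(prefix, "lib")
    # The activation hook appends a trailing slash; normalize before comparing.
    entries = [e.rstrip("/") for e in environ.get("LD_LIBRARY_PATH", "").split(":")]
    return bool(prefix) and lib_dir in entries

if __name__ == "__main__":
    print("conda lib on LD_LIBRARY_PATH:", conda_lib_on_path())
    # Optional GPU check, only if tensorflow is installed in this environment.
    if importlib.util.find_spec("tensorflow") is not None:
        import tensorflow as tf
        print("visible GPUs:", tf.config.list_physical_devices("GPU"))
```

If the first line prints False, the env_vars.sh hook did not run; reactivate the environment and retry.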
- Move into the root project directory:
  cd interpretable_marl
  (not interpretable_marl/interpretable_marl)
- Clone overcooked_ai:
  git clone https://github.com/HumanCompatibleAI/overcooked_ai.git
  (not using git submodules since there is only one external repo; if we add more, we should switch to submodules)
- Install overcooked_ai:
  pip install -e overcooked_ai/
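A quick smoke test of the editable install can be run from the Python REPL. This is a sketch: the package name (overcooked_ai_py) and the cramped_room layout are assumptions based on the upstream repo, so adjust if the API has changed:

```python
import importlib.util

def module_installed(name):
    """Return True if the named top-level package can be imported."""
    return importlib.util.find_spec(name) is not None

if __name__ == "__main__":
    if module_installed("overcooked_ai_py"):
        # Load a small bundled layout to confirm the install is functional.
        from overcooked_ai_py.mdp.overcooked_mdp import OvercookedGridworld
        mdp = OvercookedGridworld.from_layout_name("cramped_room")
        print("Loaded layout of type", type(mdp).__name__)
    else:
        print("overcooked_ai_py not importable; re-run: pip install -e overcooked_ai/")
```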
- Install poetry:
  pip install poetry
- Install the root project with poetry:
  poetry install
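To confirm that poetry installed the root package into the active environment, a short importability check can be used. The package name interpretable_marl is assumed from the directory layout, and overcooked_ai_py from the upstream repo:

```python
import importlib.util

def check_installed(names):
    """Map each package name to whether it is importable in this environment."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

if __name__ == "__main__":
    # "interpretable_marl" is assumed from the directory layout; the
    # overcooked_ai repo installs the "overcooked_ai_py" package.
    for name, ok in check_installed(["interpretable_marl", "overcooked_ai_py"]).items():
        print(f"{name}: {'ok' if ok else 'MISSING'}")
```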
- Open and run the notebook tutorial in overcooked_ai to verify the upstream Overcooked setup works.
- Log in to wandb (or set the WANDB_API_KEY environment variable for non-interactive use):
  wandb login
- Run the following to verify the interpretable_marl setup works:
  python interpretable_marl/run_single.py
- Edit or run experiments as desired.
TODO: Add instructions for editing model, augmenting state in both training and evaluation, running experiments, changing action verbiage, debugging, changing config, etc.