

When2com: Multi-Agent Perception via Communication Graph Grouping

License: MIT

This is the PyTorch implementation of our paper:
When2com: Multi-Agent Perception via Communication Graph Grouping
Yen-Cheng Liu, Junjiao Tian, Nathaniel Glaser, Zsolt Kira
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020

[Paper] [GitHub] [Project]

Prerequisites

  • Python 3.6
  • PyTorch 0.4.1
  • Other required packages in requirements.txt

Getting started

Download and install miniconda

wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh

Create conda environment

conda create -n semseg python=3.6
source activate semseg

Install the required packages

pip install -r requirements.txt
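
To sanity-check the environment before training (a quick verification step, not part of the original instructions), you can confirm that the pinned versions are the ones actually installed:

python -c "import torch; print(torch.__version__)"              # expect 0.4.1
python -c "import torchvision; print(torchvision.__version__)"  # expect 0.2.0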

Download the AirSim-MAP dataset and unzip it.

  • Download the zip file you would like to use


Move the datasets to the dataset path

mkdir dataset
mv (dataset folder name) dataset/
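
For example, if you downloaded airsim-mrms-noise-data.zip (one of the dataset zips mentioned in the issues below; the folder name should match whatever your zip extracts to):

mv airsim-mrms-noise-data dataset/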

Training

# [Single-request multi-support] All norm  
python train.py --config configs/srms-allnorm.yml --gpu=0

# [Multi-request multi-support] when2com model  
python train.py --config configs/mrms-when2com.yml --gpu=0

Testing

# [Single-request multi-support] All norm  
python test.py --config configs/srms-allnorm.yml --model_path <your trained weights> --gpu=0

# [Multi-request multi-support] when2com model  
python test.py --config configs/mrms-when2com.yml --model_path <your trained weights> --gpu=0
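
For example, with a hypothetical checkpoint path (the actual filename and location depend on where your training run saved its weights):

python test.py --config configs/mrms-when2com.yml --model_path runs/mrms-when2com/best_model.pkl --gpu=0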

Acknowledgments

  • This work was supported by ONR grant N00014-18-1-2829.
  • This code is built upon the implementation from Pytorch-semseg.

Citation

If you find this repository useful, please cite our paper:

@inproceedings{liu2020when2com,
    title={When2com: Multi-Agent Perception via Communication Graph Grouping},
    author={Yen-Cheng Liu and Junjiao Tian and Nathaniel Glaser and Zsolt Kira},
    booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2020}
}


multiagentperception's Issues

Something about training

Sorry to bother you, and thanks for your work. When I run 'python train.py --config configs/multi-request-multi-support/mrms_when2com.yml --gpu=3' in the terminal, the error 'RuntimeError: CuDNN error: CUDNN_STATUS_MAPPING_ERROR' is raised. Do you know how to solve this problem?
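
A common workaround for CUDNN_STATUS_MAPPING_ERROR when selecting a non-default GPU (a general PyTorch tip, not an official fix from this repository) is to restrict device visibility at the shell level and address the GPU as device 0:

CUDA_VISIBLE_DEVICES=3 python train.py --config configs/mrms-when2com.yml --gpu=0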

The required versions of the packages and Python

Hello! Could you please tell me the exact required versions of the packages and Python? The tutorial says Python 3.6, and requirements.txt pins the packages as follows:
matplotlib==2.0.0
numpy==1.22.0
scipy==0.19.0
torch==0.4.1
torchvision==0.2.0
tqdm==4.11.2
pydensecrf
protobuf
tensorboardX
pyyaml
pretrainedmodels
opencv-python
But there are some version mismatch issues, so I would appreciate it if you could tell me the exact required versions of the packages and Python.
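
One observation (not a maintainer answer): numpy==1.22.0 requires Python >= 3.8, so it cannot be installed under the Python 3.6 pin; that bump likely came from an automated dependency update. A plausible but untested downgrade for a Python 3.6 environment:

pip install "numpy<=1.19.5"   # assumption: 1.19.x is the last numpy line supporting Python 3.6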

No module named 'ptsemseg.visual'


File "test.py", line 14, in
from ptsemseg.visual import draw_bounding
ModuleNotFoundError: No module named 'ptsemseg.visual'

I can't import the visual module! Could you please provide this module? Thank you very much.

Error about Dataset

Sorry to bother you! I unzipped "airsim-mrms-noise-data.zip", and when I ran the model it threw the error below:

Loaded: selection label.
Traceback (most recent call last):
  File "train.py", line 142, in <module>
    t_loader = data_loader(
  File "G:\TWang\MultiAgentPerception-master\ptsemseg\loader\airsim_loader.py", line 258, in __init__
    raise Exception(
Exception: No files for split=[train] found in dataset/airsim-mrms-noise-data

Actually, I organized the dataset into the structure shown in this screenshot: [screenshot of the dataset folder structure, 2021-11-18]

Could you show me how you structured your dataset? Or is the preprocessing code that generates the split files missing from the repository?
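
To see what the loader actually finds (a generic debugging sketch; the directory name comes from the error message above, and the expected per-split layout is an assumption):

import os

root = "dataset/airsim-mrms-noise-data"
# Walk the tree and count files per directory to check whether the
# loader's expected split folders (e.g. train/) contain anything.
for dirpath, dirnames, filenames in os.walk(root):
    print(dirpath, "->", len(filenames), "files")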

Cannot access datasets

Hi, can you show me how to access your AirSim-MAP datasets? I only get a 404 Not Found error. Thanks a lot!

Questions about who2com

I find that the feature-fusion method does not match the paper, which relies on the attention score to select the features of the corresponding agent to fuse. In the code, during training, the fused features are obtained directly from the attention weights, and the returned agent index, i.e. action_argmax, does not seem to be used afterwards.

outputs, log_action, action_argmax = self.model(images, training=True)

You can also see that the use of torch.argmax during training does not match the paper:

if training:
    action = torch.argmax(prob_action, dim=2)
    return pred, prob_action, action

https://github.com/GT-RIPL/MultiAgentPerception/blob/4ef300547a7f7af2676a034f7cf742b009f57d99/ptsemseg/trainer.py#L384C1-L398C1
Also, commun_label is not used during training; it is only used as a metric during validation and testing. It seems to me that log_action and action_argmax are effectively meaningless, and that the features obtained during training are not all the features of a single vehicle but an attention-weighted mixture. At test time, rather than selecting all the features of one agent, new features are added after the attention is applied.
https://github.com/GT-RIPL/MultiAgentPerception/blob/4ef300547a7f7af2676a034f7cf742b009f57d99/ptsemseg/models/agent.py#L629C1-L651C56
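
To make the distinction concrete, here is a minimal sketch of soft attention fusion versus hard argmax selection (hypothetical shapes and function names; this is not the repository's actual code):

import torch
import torch.nn.functional as F

def fuse_soft(feats, scores):
    # Training-time soft fusion: attention-weighted sum over agents.
    # feats: (B, N, C) per-agent features; scores: (B, N) matching scores.
    attn = F.softmax(scores, dim=1)                  # (B, N)
    return (attn.unsqueeze(-1) * feats).sum(dim=1)   # (B, C)

def fuse_hard(feats, scores):
    # Hard selection: take all features from the single best-scoring agent.
    idx = torch.argmax(scores, dim=1)                # (B,)
    return feats[torch.arange(feats.size(0)), idx]   # (B, C)

The issue's point is that training uses something like fuse_soft, while the paper describes selecting a single agent's features as in fuse_hard.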

Dataset link doesn't work

Hi,

The dataset link has expired. Can you provide another link?

Thanks in advance, keep up the hard work :)
