

MG-GAN: A Multi-Generator Model Preventing Out-of-Distribution Samples in Pedestrian Trajectory Prediction

This repository contains the code for the paper

MG-GAN: A Multi-Generator Model Preventing Out-of-Distribution Samples in Pedestrian Trajectory Prediction
Patrick Dendorfer*, Sven Elflein*, Laura Leal-Taixé (* equal contribution)
International Conference on Computer Vision (ICCV), 2021

Motivation

The distribution over future trajectories of pedestrians is often multi-modal and does not have connected support (a).

We found that single-generator GANs introduce out-of-distribution (OOD) samples in this case because the GAN maps the continuous latent variable z through a single continuous function (b). These OOD samples might introduce unforeseen behavior in real-world applications such as autonomous driving.

To resolve this problem, we propose to learn the target distribution in a piecewise manner using multiple generators, effectively preventing OOD samples (c).

Model

Our model consists of four key components: encoding modules, attention modules, multiple generators, and our novel contribution, the PM-Network, which learns a distribution over the generators.
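The following is a minimal PyTorch sketch of the multi-generator idea only; the module names, layer sizes, and the way the past trajectory is encoded are illustrative assumptions, not the repository's actual implementation. A network predicts a categorical distribution over generators, a generator index is sampled, and only that generator's output is used, so each generator has to cover just one connected mode of the trajectory distribution.

import torch
import torch.nn as nn


class MultiGeneratorSketch(nn.Module):
    """Illustrative sketch of sampling from multiple generators via a categorical gate."""

    def __init__(self, obs_dim=16, z_dim=8, pred_len=12, num_gens=4):
        super().__init__()
        self.z_dim = z_dim
        # Gating network (PM-Network-like): encoding -> logits over the generators.
        self.pm_net = nn.Sequential(
            nn.Linear(obs_dim, 32), nn.ReLU(), nn.Linear(32, num_gens)
        )
        # One small decoder per generator: (encoding, z) -> future (x, y) offsets.
        self.generators = nn.ModuleList(
            nn.Sequential(
                nn.Linear(obs_dim + z_dim, 64), nn.ReLU(), nn.Linear(64, pred_len * 2)
            )
            for _ in range(num_gens)
        )

    def forward(self, obs_enc):
        # Sample which generator to use for each element of the batch.
        logits = self.pm_net(obs_enc)
        gen_idx = torch.distributions.Categorical(logits=logits).sample()
        z = torch.randn(obs_enc.size(0), self.z_dim)
        inp = torch.cat([obs_enc, z], dim=-1)
        # For clarity we run every generator and select; a real implementation
        # would only evaluate the sampled generator.
        out = torch.stack([g(inp) for g in self.generators], dim=1)
        return out[torch.arange(obs_enc.size(0)), gen_idx]


enc = torch.randn(5, 16)            # placeholder past-trajectory encodings
pred = MultiGeneratorSketch()(enc)  # (5, 24): 12 future (x, y) offsets per sample

Because a discrete generator index is sampled before decoding, the continuous latent z never has to bridge disconnected modes, which is how the piecewise modelling avoids OOD samples between modes.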


Setup

First, set up the Python environment

conda env create -f environment.yml -n mggan
conda activate mggan

Then, download the datasets (data.zip) from here and unzip the archive in the root of this repository

unzip data.zip

which will create a folder ./data/datasets.

Training

Models can be trained with the script mggan/model/train.py using the following command

python mggan/model/train.py --name <name_of_experiment> --num_gens <number_of_generators>  --dataset <dataset_name> --epochs 50

This generates an output folder ./logs/<name_of_experiment> containing TensorBoard logs and the model checkpoints. You can use tensorboard --logdir ./logs/<name_of_experiment> to monitor the training process.
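For example, a run with four generators on one of the provided datasets could look like the following; the dataset identifier (here assumed to be eth) and the experiment name are placeholders and must match a dataset folder under ./data/datasets.

python mggan/model/train.py --name mggan_eth_4g --num_gens 4 --dataset eth --epochs 50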

Evaluation

For evaluation of the metrics (ADE, FDE, Precision, Recall) for k=1 to k=20 predictions, use

python scripts/evaluate.py --model_path <path_to_model_directory>  --output_folder <folder_to_store_result_csv>

One can use --eval-set <dataset_name> to evaluate models on test sets other than the dataset the model was trained on. This is useful for evaluating the BIWI models on the Garden of Forking Paths dataset (gofp), for which we report results in the paper.
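For instance, to evaluate a trained model on the Garden of Forking Paths test set, a call could look like the following; the paths are placeholders, and it is assumed that the checkpoint directory created during training is the model directory expected by --model_path.

python scripts/evaluate.py --model_path ./logs/<name_of_experiment> --eval-set gofp --output_folder ./results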

Pre-trained models

We provide pre-trained models for MG-GAN with 2 to 8 generators, together with the training configurations, for the BIWI datasets and the Stanford Drone Dataset (SDD) here.

Citation

If our work is useful to you, please consider citing

@inproceedings{dendorfer2021iccv,
  title={MG-GAN: A Multi-Generator Model Preventing Out-of-Distribution Samples in Pedestrian Trajectory Prediction},
  author={Dendorfer, Patrick and Elflein, Sven and Leal-Taixé, Laura},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month={October},
  year={2021},
}


mg-gan's Issues

Question regarding Garden of Forking Path Dataset

Hello,

I see that in your pre-processed GOFP dataset there are more scenes in the test set (ETH, Hotel, and ZARA1) than in the train set (ETH). Could you kindly elaborate on why that is?

Thanks,
Sourav Das

Reproducible MG-GAN code for the FPD dataset

Hello Patrick, Sven,

This is Sourav Das, a 1st year Ph.D. student at the University of Padova, Italy.

This GitHub repository provides a reproducible implementation for the datasets ETH, Hotel, Social_Stanford_Synthetic, Stanford, Univ, Zara1, Zara2, and GOFP.

I would like to reproduce the results on the FPD dataset as well. Could you kindly share the code with support for the FPD dataset?

Here is my Github: https://github.com/SodaCoder

Thanks in advance,

Question about ETH&UCY Dataset

Hi, I noticed that the trajectories in some datasets are not consistent with those provided in Social GAN. May I ask how you preprocess your data? This would help me conduct my experiments in a fair setting. Thanks!

request to visualizer

Hello! I admire your work and would like to reproduce your results. I have a small request: do you have the visualization code that produced the figures shown in your paper? Thanks again for your work and contributions!
