
Realistic Full-Body Anonymization with Surface-Guided GANs

This is the official source code for the paper "Realistic Full-Body Anonymization with Surface-Guided GANs".

[arXiv Paper] [Appendix] [Google Colab demo] [WACV 2023 Conference Presentation]

Surface-guided GANs is an automatic full-body anonymization technique based on Generative Adversarial Networks.

The key idea of surface-guided GANs is to guide the generative model with dense pixel-to-surface information (based on continuous surface embeddings). This yields highly realistic anonymization results and allows for diverse anonymization.
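As a rough illustration (not the authors' implementation; all names and shapes here are assumptions), conditioning the generator on dense pixel-to-surface information amounts to stacking a per-pixel surface-embedding map with the masked input image and noise:

```python
import numpy as np

def build_generator_input(image, body_mask, surface_embedding, z_channels=4, rng=None):
    """Sketch of surface-guided conditioning (hypothetical helper, not the
    repository's code): stack the masked RGB image, a dense per-pixel
    surface-embedding map (E channels), and per-pixel latent noise.
    Shapes: image (H, W, 3), body_mask (H, W), surface_embedding (H, W, E)."""
    rng = rng or np.random.default_rng(0)
    h, w, _ = image.shape
    masked = image * (1 - body_mask)[..., None]      # remove the region to anonymize
    noise = rng.standard_normal((h, w, z_channels))  # per-pixel latent noise
    return np.concatenate([masked, surface_embedding, noise], axis=-1)

# 3 image channels + 16 embedding channels + 4 noise channels = 23
x = build_generator_input(np.zeros((8, 8, 3)), np.ones((8, 8)), np.zeros((8, 8, 16)))
print(x.shape)  # (8, 8, 23)
```

In the paper, the embedding map comes from a continuous-surface-embedding predictor; here it is just a placeholder array.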

Check out the new DeepPrivacy2! It significantly improves anonymization quality compared to this repository.

Requirements

  • Pytorch >= 1.9
  • Torchvision >= 0.11
  • Python >= 3.8
  • CUDA capable device for training. Training was done with 1-4 32GB V100 GPUs.

Installation

We recommend setting up and installing PyTorch with Anaconda, following the PyTorch installation instructions.

  1. Clone repository: git clone https://github.com/hukkelas/full_body_anonymization/.
  2. Install using setup.py:
pip install -e .

Otherwise, you can setup your environment with our provided Dockerfile.

Test the model

Anonymizing files

The file anonymize.py can anonymize image paths, directories and videos. python anonymize.py --help prints the different options.

To anonymize, visualize and save an output image, you can write:

python3 anonymize.py configs/surface_guided/configE.py coco_val2017_000000001000.jpg --visualize --save

The truncation value controls the "creativity" of the generator and can be specified in the range (0, 1). Setting -t 1 generates diverse anonymizations between runs. For configs A/B/C, the truncation value accepts the range (0, $\infty$). Setting -t=None applies latent truncation.

python3 anonymize.py configs/surface_guided/configE.py coco_val2017_000000001000.jpg --visualize --save -t 1
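The standard latent truncation trick (an illustrative sketch, not necessarily this repository's exact code) interpolates a sampled latent toward the average latent; t = 1 keeps the sample unchanged (maximum diversity), while smaller t trades diversity for fidelity:

```python
import numpy as np

def truncate_latent(w, w_mean, t):
    """Latent truncation: interpolate a sampled latent w toward the
    average latent w_mean. t=1 returns w unchanged (full diversity);
    t=0 collapses every sample to w_mean (no diversity)."""
    return w_mean + t * (w - w_mean)

rng = np.random.default_rng(0)
w_mean = np.zeros(512)           # placeholder for the running mean latent
w = rng.standard_normal(512)     # a freshly sampled latent
print(np.allclose(truncate_latent(w, w_mean, 1.0), w))       # True
print(np.allclose(truncate_latent(w, w_mean, 0.0), w_mean))  # True
```

This is why -t 1 yields different anonymizations between runs, while values closer to 0 produce more conservative, repeatable outputs.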

Gradio App

Check out the interactive demo with our Gradio implementation. Run:

python3 app.py

Train the model

See docs/TRAINING.md.

Reproducing paper results

See docs/REPRODUCING.md.

License

All code, except that stated below, is released under the MIT License.

Code under:

Citation

If you use this code for your research, please cite:

@inproceedings{hukkelas23FBA,
  author={Hukkelås, Håkon and Smebye, Morten and Mester, Rudolf and Lindseth, Frank},
  booktitle={2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)}, 
  title={Realistic Full-Body Anonymization with Surface-Guided GANs}, 
  year={2023},
  volume={},
  number={},
  pages={1430-1440},
  doi={10.1109/WACV56688.2023.00148}}


full_body_anonymization's Issues

Dockerfile

Excuse me, the Dockerfile can't link to your file.

Adding checkpoint saving

It seems the checkpoint is saved only at the end of training. Since training takes a long time, it would be good to save intermediate checkpoints along with logging.
Am I right that it should be added in fba/engine/trainer.py -> the train_step function:

    def train_step(self):
        with torch.autograd.profiler.record_function("data_fetch"):
            batch = next(self.data_train)
        self.to_log.update(self.step_D(batch))
        self.to_log.update(self.step_G(batch))
        self.EMA_generator.update(self.generator)
        if logger.global_step >= self._next_log_point:
            log = {f"loss/{key}": item.item() for key, item in self.to_log.items()}
            logger.log_variable("amp/grad_scale", self.scaler.get_scale())
            logger.log_dictionary(log, commit=True)
            self._next_log_point += self._ims_per_log
            self.to_log = {}
            super().save_checkpoint()  # proposed addition: save a checkpoint at each log point

Datasets problem

Excuse me, when I tried to train your model I found that datasets.md was empty.
How can I prepare my datasets, or could datasets.md be completed?
Thanks!
