
Jointly Defending DeepFake Manipulation and Adversarial Attack using Decoy Mechanism

Overview

This is the official PyTorch implementation of the paper: G.-L. Chen and C.-C. Hsu, "Jointly Defending DeepFake Manipulation and Adversarial Attack using Decoy Mechanism," IEEE T-PAMI, 2023. In this work, we propose a novel decoy mechanism based on statistical hypothesis testing to defend against DeepFake manipulation and adversarial attacks. The proposed decoy mechanism successfully defends against common adversarial attacks and indirectly improves the power of the hypothesis test, achieving 100% detection accuracy for both white-box and black-box attacks. We also demonstrate that the decoy effect generalizes to compressed and unseen manipulation methods for both DeepFake and attack detection.


Dataset

We use the official split of FF++ into train, val, and test sets. For DFDC and CelebDF, we randomly sample 100 real and 100 fake videos for evaluation. All face images are cropped using BlazeFace. Please download the datasets (FF++ (10% test set), DFDC, CelebDF) and organize them as follows:

│decoy
├──data
│   ├──CelebDF
│   ├──DFDC
│   ├──FF++
├──...
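Before training or evaluation, it can save time to confirm the layout above is in place. A minimal sketch (the helper name and return convention are our own, not part of this repo):

```python
import os

# Expected dataset folders under ./data, per the tree above.
EXPECTED = ("CelebDF", "DFDC", "FF++")

def missing_datasets(root="./data"):
    """Return the expected dataset folders that are not present under root."""
    return [d for d in EXPECTED if not os.path.isdir(os.path.join(root, d))]
```

An empty return value means all three dataset folders were found.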

Pretrained Model

All models were trained on the FF++ raw training set. Please download the model weights and organize them as follows:

│decoy
├──weight
│   ├──Meso.pt
│   ├──MesoDeception.pt
│   ├──Xception.pt
│   ├──XDeception.pt
├──...

Prerequisites

The code is tested on Ubuntu 20.04 with two GeForce RTX 3090 GPUs and CUDA 10.2.

  • Python 3.7
  • PyTorch 1.10.2
  • torchvision 0.11.3


Training

Train the vanilla model:

CUDA_VISIBLE_DEVICES=-1 python3 -B train.py --model_name Xception --train_video_batch 10 --train_img_batch 8 --save_path ./weight/Xception.pt --log_path ./log/Xception.log

Train the deceptive model:

CUDA_VISIBLE_DEVICES=-1 python3 -B train.py --model_name XDeception --train_video_batch 10 --train_img_batch 8 --save_path ./weight/XDeception.pt --log_path ./log/XDeception.log --deception

Optional parameters:

model_name : Xception (Meso)  -  model to train
train_video_batch : 10  -  number of videos sampled per iteration
train_img_batch : 8  -  number of images sampled per video
save_path : ./weight/Xception.pt  -  path to save the model weights
log_path : ./log/Xception.log  -  path to save the training log
deception : whether to use the deceptive model

If the deception flag is used, replace Xception with XDeception and Meso with MesoDeception.
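The flags above could be wired up with argparse roughly as follows; this is a hypothetical sketch of the command-line interface, not the repo's actual train.py:

```python
import argparse

def build_parser():
    """Hypothetical argparse wiring matching the training flags documented above."""
    p = argparse.ArgumentParser()
    p.add_argument("--model_name", default="Xception")       # Xception / Meso / XDeception / MesoDeception
    p.add_argument("--train_video_batch", type=int, default=10)  # videos sampled per iteration
    p.add_argument("--train_img_batch", type=int, default=8)     # images sampled per video
    p.add_argument("--save_path", default="./weight/Xception.pt")
    p.add_argument("--log_path", default="./log/Xception.log")
    p.add_argument("--deception", action="store_true")       # enable the deceptive model
    return p
```

Note that --deception is a boolean flag (present or absent), while the batch sizes take integer values.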


Evaluation

Evaluate the vanilla model under PGD attack:

CUDA_VISIBLE_DEVICES=-1 python3 -B attack.py --model_name Xception --pretrained_weight ./weight/Xception.pt --test_clean --eps 0.005 --iters 5 --log_path ./log/FF++/raw/Xception_attack.log --test_root_dir ./data/FF++/raw --test_file_path ./file/FF++_test10.txt

Evaluate the deceptive model under PGD attack:

CUDA_VISIBLE_DEVICES=-1 python3 -B attack.py --model_name XDeception --pretrained_weight ./weight/XDeception.pt --test_clean --eps 0.005 --iters 5 --log_path ./log/FF++/raw/XDeception_attack.log --deception --test_root_dir ./data/FF++/raw --test_file_path ./file/FF++_test10.txt

Optional parameters:

model_name : Xception (Meso)  -  model to evaluate
pretrained_weight : ./weight/Xception.pt  -  path to the model weights
test_clean : whether to also evaluate performance on clean (unattacked) inputs
eps : 0.005  -  maximum perturbation of PGD
iters : 5  -  number of PGD iterations
log_path : ./log/FF++/raw/XDeception_attack.log  -  path to save the evaluation log
deception : whether to use the deceptive model
test_root_dir : ./data/FF++/raw  -  root directory of the evaluated dataset
test_file_path : ./file/FF++_test10.txt  -  file list of the evaluated dataset

If the deception flag is used, replace Xception with XDeception and Meso with MesoDeception. The test_root_dir can be "./data/FF++/raw", "./data/FF++/c23", "./data/FF++/c40", "./data/DFDC", or "./data/CelebDF". The test_file_path should correspond to the chosen test_root_dir.
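The eps and iters flags control the standard PGD loop: repeated signed-gradient steps projected back into an L-infinity ball of radius eps. A minimal NumPy sketch of that loop (a hypothetical helper using a caller-supplied gradient function in place of model backpropagation):

```python
import numpy as np

def pgd_attack(x, grad_fn, eps=0.005, iters=5, alpha=None):
    """L-infinity PGD sketch.
    x: clean input in [0, 1]; grad_fn: gradient of the loss w.r.t. the input."""
    if alpha is None:
        alpha = eps / iters                      # step size per iteration
    x_adv = x.copy()
    for _ in range(iters):
        g = grad_fn(x_adv)
        x_adv = x_adv + alpha * np.sign(g)       # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps) # project into the eps-ball around x
        x_adv = np.clip(x_adv, 0.0, 1.0)         # keep a valid pixel range
    return x_adv
```

With eps=0.005 and iters=5 as in the commands above, the perturbation of each pixel never exceeds 0.005.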

Evaluate the deceptive model under NES-PGD attack in black-box setting:

CUDA_VISIBLE_DEVICES=-1 python3 -B attack.py --model_name XDeception --pretrained_weight ./weight/XDeception.pt --test_clean --eps 0.005 --iters 5 --log_path ./log/FF++/raw/XDeception_attack.log --deception --test_root_dir ./data/FF++/raw --test_file_path ./file/FF++_test10.txt --black --nes_iters 2 --nes_batch 40

Optional parameters:

black : whether to attack the model in the black-box setting
nes_iters : 2  -  number of iterations for gradient estimation
nes_batch : 40  -  batch size for gradient estimation

The total number of samples used for NES gradient estimation is nes_iters * nes_batch. If you have more GPU memory, you can increase the batch size.
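In the black-box setting, NES estimates the gradient from loss queries alone. A minimal NumPy sketch of the estimator (our own illustration, assuming antithetic Gaussian sampling, which spends two loss queries per sample; the repo's exact sampling scheme may differ):

```python
import numpy as np

def nes_gradient(loss_fn, x, sigma=0.001, nes_iters=2, nes_batch=40, seed=0):
    """Black-box gradient estimate of loss_fn at x via NES.
    Uses antithetic Gaussian sampling: 2 loss queries per sampled direction."""
    rng = np.random.default_rng(seed)
    grad = np.zeros_like(x)
    for _ in range(nes_iters):
        for _ in range(nes_batch):
            u = rng.standard_normal(x.shape)
            # finite-difference estimate along the random direction u
            delta = loss_fn(x + sigma * u) - loss_fn(x - sigma * u)
            grad += delta / (2.0 * sigma) * u
    return grad / (nes_iters * nes_batch)
```

The estimated gradient would then replace grad_fn in the PGD loop; larger nes_batch lowers the variance of the estimate at the cost of more queries.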

Evaluate the attack detection in white-box setting:

CUDA_VISIBLE_DEVICES=-1 python3 -B detect.py --model_name XDeception --pretrained_weight ./weight/XDeception.pt --iters 30 --log_path ./log/FF++/raw/XD_det.log --test_root_dir ./data/FF++/raw

Evaluate the attack detection in black-box setting:

CUDA_VISIBLE_DEVICES=-1 python3 -B detect.py --model_name XDeception --pretrained_weight ./weight/XDeception.pt --iters 30 --log_path ./log/FF++/raw/XD_det.log --test_root_dir ./data/FF++/raw --black --nes_iters 2 --nes_batch 40

Citation

If you find this project useful in your research, please consider citing:

@article{chen2023jointly,
  title={Jointly Defending DeepFake Manipulation and Adversarial Attack using Decoy Mechanism},
  author={Chen, Guan-Lin and Hsu, Chih-Chung},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2023},
  publisher={IEEE}
}


adv_defense_decoy's Issues

The difference between Meso and MesoD

In your code, I didn't see any difference between the Meso network and the MesoD network; both of them are:

def __init__(self):
    super(MesoDeception, self).__init__()
    self.model1 = MesoInception4(num_classes=1)
    self.model2 = MesoInception4(num_classes=1)

def forward(self, input):
    real_score = self.model1(input)
    fake_score = self.model2(input)

    return torch.cat((real_score, fake_score), axis=1)

Is that right?
