This is the official PyTorch implementation of the paper: G.L. Chen and C.C. Hsu, "Jointly Defending DeepFake Manipulation and Adversarial Attack using Decoy Mechanism," IEEE T-PAMI, 2023. In this work, we propose a novel decoy mechanism, based on statistical hypothesis testing, that defends against both DeepFake manipulation and adversarial attacks. The proposed decoy mechanism successfully defends against common adversarial attacks and indirectly improves the power of the hypothesis test, achieving 100% detection accuracy under both white-box and black-box attacks. We further demonstrate that the decoy effect generalizes to compressed and unseen manipulation methods, for both DeepFake and attack detection.
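The repo's detection code implements the paper's actual test; purely as a sketch of the detection-by-hypothesis-testing idea, the toy example below flags an input whose decoy-response statistic deviates significantly from the clean distribution. The Gaussian z-test, `fit_clean_stats`, and `reject_h0` here are our illustrative stand-ins, not the paper's exact statistic:

```python
# Toy illustration only: detect.py implements the actual test.
import math
import torch

def fit_clean_stats(clean_scores):
    """Estimate mean/std of a decoy-response statistic on clean data."""
    s = torch.as_tensor(clean_scores, dtype=torch.float64)
    return s.mean().item(), s.std(unbiased=True).item()

def reject_h0(score, mean, std, alpha=0.01):
    """Two-sided z-test: reject H0 ('input is clean') at level alpha.
    Gaussian assumption is ours, for illustration only."""
    z = abs(score - mean) / (std + 1e-12)
    z_crit = math.sqrt(2.0) * torch.erfinv(torch.tensor(1.0 - alpha)).item()
    return z > z_crit
```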
We use the official split of FF++ into train, val, and test sets. For DFDC and CelebDF, we randomly sample 100 real and 100 fake videos for evaluation. All face images are cropped using BlazeFace. Please download the datasets (FF++ (10% test set), DFDC, CelebDF) and organize them as follows:
│decoy
├──data
│ ├──CelebDF
│ ├──DFDC
│ ├──FF++
├──...
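Before running anything, the layout can be checked with a small sanity-check snippet (ours, not part of the repo):

```python
# Verify the expected dataset layout (directory names from the tree above).
import os

for name in ("CelebDF", "DFDC", "FF++"):
    path = os.path.join("data", name)
    print(f"{path}: {'ok' if os.path.isdir(path) else 'MISSING'}")
```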
All models were trained on the FF++ raw training set. Please download the model weights and organize them as follows:
│decoy
├──weight
│ ├──Meso.pt
│ ├──MesoDeception.pt
│ ├──Xception.pt
│ ├──XDeception.pt
├──...
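As a quick check, the weights should load with plain `torch.load`. A minimal sketch, assuming the .pt files store state_dicts (the common convention; adjust if the repo saves whole model objects instead):

```python
import torch

# Load a checkpoint onto CPU first; move to GPU after building the model.
state = torch.load("./weight/Xception.pt", map_location="cpu")

# The model classes (Xception, Meso, and their *Deception variants) are
# defined in this repo; instantiate one and restore the weights, e.g.:
# model = Xception()
# model.load_state_dict(state)
# model.eval()
```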
The code was tested on Ubuntu 20.04 with two GeForce RTX 3090 GPUs and CUDA 10.2.
- Python 3.7
- PyTorch 1.10.2
- torchvision 0.11.3
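Matching versions can typically be installed with pip (for a build targeting a specific CUDA version, follow the selector on pytorch.org instead):

pip install torch==1.10.2 torchvision==0.11.3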
Train a baseline model (first command) or a decoy model (second command):
CUDA_VISIBLE_DEVICES=-1 python3 -B train.py --model_name Xception --train_video_batch 10 --train_img_batch 8 --save_path ./weight/Xception.pt --log_path ./log/Xception.log
CUDA_VISIBLE_DEVICES=-1 python3 -B train.py --model_name XDeception --train_video_batch 10 --train_img_batch 8 --save_path ./weight/XDeception.pt --log_path ./log/XDeception.log --deception
Optional parameters:
- model_name: Xception (Meso) - model architecture to train
- train_video_batch: 10 - number of videos sampled per iteration
- train_img_batch: 8 - number of images sampled per video
- save_path: ./weight/Xception.pt - path to save the model weights
- log_path: ./log/Xception.log - path to save the training log
- deception: whether to use the decoy (deceptive) model
If the deception flag is used, change Xception to XDeception (and Meso to MesoDeception) in the arguments above.
Evaluate a trained model on clean data (via --test_clean) and under a white-box PGD attack:
CUDA_VISIBLE_DEVICES=-1 python3 -B attack.py --model_name Xception --pretrained_weight ./weight/Xception.pt --test_clean --eps 0.005 --iters 5 --log_path ./log/FF++/raw/Xception_attack.log --test_root_dir ./data/FF++/raw --test_file_path ./file/FF++_test10.txt
CUDA_VISIBLE_DEVICES=-1 python3 -B attack.py --model_name XDeception --pretrained_weight ./weight/XDeception.pt --test_clean --eps 0.005 --iters 5 --log_path ./log/FF++/raw/XDeception_attack.log --deception --test_root_dir ./data/FF++/raw --test_file_path ./file/FF++_test10.txt
Optional parameters:
- model_name: Xception (Meso) - model architecture to evaluate
- pretrained_weight: ./weight/Xception.pt - path to the model weights
- test_clean: whether to also evaluate performance on clean (unattacked) data
- eps: 0.005 - maximum perturbation of PGD
- iters: 5 - number of PGD iterations
- log_path: ./log/FF++/raw/XDeception_attack.log - path to save the evaluation log
- deception: whether to use the decoy (deceptive) model
- test_root_dir: ./data/FF++/raw - root directory of the dataset to evaluate
- test_file_path: ./file/FF++_test10.txt - file list for the dataset to evaluate
If the deception flag is used, change Xception to XDeception (and Meso to MesoDeception). test_root_dir can be "./data/FF++/raw", "./data/FF++/c23", "./data/FF++/c40", "./data/DFDC", or "./data/CelebDF"; test_file_path must correspond to the chosen test_root_dir.
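For reference, here is a minimal generic L-infinity PGD sketch showing how eps and iters interact. The step-size heuristic and the [0, 1] pixel range below are our assumptions; attack.py is the authoritative implementation:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.005, iters=5):
    """Untargeted L-inf PGD: ascend the loss, project back into the
    eps-ball around x, and clamp to a valid pixel range each step."""
    alpha = 2.5 * eps / iters          # common step-size heuristic (ours)
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # project into eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)              # keep pixels valid
    return x_adv.detach()
```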
For the black-box setting, add the --black flag; gradients are then estimated with NES:
CUDA_VISIBLE_DEVICES=-1 python3 -B attack.py --model_name XDeception --pretrained_weight ./weight/XDeception.pt --test_clean --eps 0.005 --iters 5 --log_path ./log/FF++/raw/XDeception_attack.log --deception --test_root_dir ./data/FF++/raw --test_file_path ./file/FF++_test10.txt --black --nes_iters 2 --nes_batch 40
Optional parameters:
- black: whether to attack the model in the black-box setting
- nes_iters: 2 - number of iterations for gradient estimation
- nes_batch: 40 - batch size for gradient estimation
The total number of samples used for NES gradient estimation is nes_iters * nes_batch. If you have more GPU memory, you can increase nes_batch.
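For intuition, a generic NES-style estimator sketch (ours; attack.py contains the repo's actual implementation). This variant uses antithetic sampling, a common choice that doubles the query count relative to the nes_iters * nes_batch accounting above; sigma is a hypothetical smoothing parameter:

```python
import torch

@torch.no_grad()
def nes_gradient(loss_fn, x, sigma=1e-3, nes_iters=2, nes_batch=40):
    """Black-box gradient estimate at x of shape (1, C, H, W) using only
    forward queries. loss_fn maps a batch of images to per-image losses."""
    grad = torch.zeros_like(x)
    for _ in range(nes_iters):
        u = torch.randn(nes_batch, *x.shape[1:], device=x.device)
        u = torch.cat([u, -u], dim=0)            # antithetic pairs
        losses = loss_fn(x + sigma * u)          # shape: (2 * nes_batch,)
        grad += (losses.view(-1, 1, 1, 1) * u).mean(dim=0, keepdim=True)
    return grad / (nes_iters * sigma)
```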
Run the decoy-based detection in the white-box setting (first command) or the black-box setting (second command):
CUDA_VISIBLE_DEVICES=-1 python3 -B detect.py --model_name XDeception --pretrained_weight ./weight/XDeception.pt --iters 30 --log_path ./log/FF++/raw/XD_det.log --test_root_dir ./data/FF++/raw
CUDA_VISIBLE_DEVICES=-1 python3 -B detect.py --model_name XDeception --pretrained_weight ./weight/XDeception.pt --iters 30 --log_path ./log/FF++/raw/XD_det.log --test_root_dir ./data/FF++/raw --black --nes_iters 2 --nes_batch 40
If you find this project useful in your research, please consider citing:
@article{chen2023jointly,
  title={Jointly Defending DeepFake Manipulation and Adversarial Attack using Decoy Mechanism},
  author={Chen, Guan-Lin and Hsu, Chih-Chung},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2023},
  publisher={IEEE}
}