
SLAttack

CVPR2023: Physical-World Optical Adversarial Attacks on 3D Face Recognition

2D face recognition has been shown to be insecure against physical adversarial attacks. However, few studies have investigated the possibility of attacking real-world 3D face recognition systems. Recently proposed 3D-printed attacks cannot generate adversarial points in the air. In this paper, we attack 3D face recognition systems through elaborate optical noises, taking structured light 3D scanners as our attack target. End-to-end attack algorithms are designed to generate adversarial illumination for 3D faces through the inherent or an additional projector, producing adversarial points at arbitrary positions. However, face reflectance is a complex process because skin is translucent. To include this projection-and-capture procedure in the optimization loop, we model it with a Lambertian rendering model and use SfSNet to estimate the albedo. Moreover, to improve robustness to distance and angle changes while keeping the perturbation unnoticeable, we introduce a 3D transform invariant loss and two kinds of sensitivity maps. Experiments are conducted in both the simulated and the physical world. We successfully attacked point-cloud-based and depth-image-based 3D face recognition algorithms while needing fewer perturbations than previous state-of-the-art physical-world 3D adversarial attacks.
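The Lambertian rendering step mentioned above can be sketched as follows. This is only an illustration of the shading model, not the paper's implementation; `albedo`, `normals`, and `light_dir` stand in for the quantities the pipeline estimates (e.g. the albedo via SfSNet).

```python
import numpy as np

def lambertian_render(albedo, normals, light_dir):
    """Minimal Lambertian shading: intensity = albedo * max(0, n . l).

    albedo:    (H, W) per-pixel reflectance
    normals:   (H, W, 3) unit surface normals
    light_dir: (3,) unit vector toward the light source (projector)
    """
    n_dot_l = np.clip(normals @ light_dir, 0.0, None)  # clamp back-facing points to zero
    return albedo * n_dot_l

# Toy example: a flat patch facing the light is fully lit.
normals = np.zeros((2, 2, 3))
normals[..., 2] = 1.0
img = lambertian_render(np.full((2, 2), 0.8), normals, np.array([0.0, 0.0, 1.0]))
```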

Description

Official implementation of our StructuredLightAttack paper. We extend the C&W attack to include the 3D structured light reconstruction process. Only the attack code is included here, without the 3D structured light reconstruction code; for that, you can refer to https://github.com/phreax/structured_light.git.

Watch the demo video here (please set the video resolution to 1080p)



Recent Updates

2023.3.10: Initial code release

Getting Started

Prerequisites

  • Linux
  • NVIDIA GPU + CUDA CuDNN (CPU may be possible with some modifications, but is not inherently supported)
  • Python 3.8+
  • pytorch, numpy

Installation

  • Clone this repo:
git clone https://github.com/PolyLiYJ/SLAttack.git
cd SLAttack
  • Dependencies:
    We recommend running this repository using Anaconda.
conda install --yes --file requirements.txt

Run the untargeted attack

  • To execute the L2Loss attack on the Bosphorus dataset against PointNet and save the adversarial point cloud:
  • L2Loss is optimized on the phase map; please see our paper for details.
python3 Test_CW_SL.py --dataset Bosphorus \
--model PointNet \
--dist_function L2Loss  --binary_step 1
  • For comparison, you can also execute the L2Loss_pt attack on the Bosphorus dataset against PointNet and save the adversarial point cloud.
  • The L2Loss_pt loss is optimized on the point cloud.
python3 Test_CW_SL.py --dataset Bosphorus \
--model PointNet \
--dist_function L2Loss_pt
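Both distance functions follow the C&W form: an L2 penalty on the perturbation plus a weighted misclassification term; they differ only in whether the penalty is taken over the phase map (`L2Loss`) or the point cloud (`L2Loss_pt`). The sketch below is a hedged illustration of this objective under assumed names, not the repository's actual loss code.

```python
import numpy as np

def cw_untargeted_loss(logits, true_label, delta, c=1.0, kappa=0.0):
    """C&W-style untargeted objective.

    logits:     (num_classes,) classifier output for the perturbed input
    true_label: ground-truth class index
    delta:      perturbation (on the phase map for L2Loss-style losses,
                on the point cloud for L2Loss_pt-style losses)
    c:          trade-off weight, found by binary search (--binary_step)
    kappa:      confidence margin
    """
    true_logit = logits[true_label]
    other = np.delete(logits, true_label)
    # Adversarial term: nonzero until some wrong class beats the true class by kappa.
    adv_loss = max(true_logit - other.max() + kappa, 0.0)
    dist_loss = float(np.sum(delta ** 2))  # L2 penalty on the perturbation
    return dist_loss + c * adv_loss

# Toy example: the true class (index 1) still wins, so the margin term is active.
loss = cw_untargeted_loss(np.array([2.0, 5.0, 1.0]), 1, np.array([0.1, -0.2]))
```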

Run the targeted attack

  • To execute the L2Loss attack on the Bosphorus dataset against PointNet and save the adversarial point cloud:
  • L2Loss is optimized on the phase map; please see our paper for details.
python3 Test_CW_SL.py --dataset Bosphorus \
--model PointNet \
--dist_function L2Loss \
--whether_target --binary_step 1
  • For comparison, you can also execute the L2Loss_pt attack on the Bosphorus dataset against PointNet and save the adversarial point cloud.
  • The L2Loss_pt loss is optimized on the point cloud.
python3 Test_CW_SL.py --dataset Bosphorus \
--model PointNet \
--dist_function L2Loss_pt \
--whether_target
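In the targeted setting the margin flips direction: the optimization pushes the chosen target class above every other class, rather than pushing any wrong class above the true one. A minimal sketch of this margin term, with assumed names:

```python
import numpy as np

def cw_targeted_margin(logits, target_label, kappa=0.0):
    """Targeted C&W margin: nonzero until the target class wins by kappa."""
    target_logit = logits[target_label]
    other_max = np.delete(logits, target_label).max()
    return max(other_max - target_logit + kappa, 0.0)

# Class 0 currently beats the target class 1 by 2.0, so the margin is 2.0.
margin = cw_targeted_margin(np.array([3.0, 1.0, 2.0]), 1)
```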

Reproduce the attack success rates in our paper

  • Download the Bosphorus dataset at http://bosphorus.ee.boun.edu.tr/Home.aspx
  • Convert the dataset to text files; see dataset/readbnt.py. We use farthest point sampling from pytorch3d to downsample each face to 4,000 points.
  • Prepare the training and test datasets. We split the dataset into training and test sets with a 0.7/0.3 ratio. The file names and labels are listed in dataset/test_farthest_sampled.csv and dataset/train_farthest_sampled.csv.
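The farthest point sampling used for downsampling (here done with pytorch3d) greedily picks the point farthest from all points chosen so far. This numpy version is only an illustration of the algorithm, not the pytorch3d implementation:

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedy FPS: repeatedly pick the point farthest from those already chosen."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(points.shape[0]))]  # random first point
    # dist[i] = distance from point i to its nearest chosen point
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(k - 1):
        idx = int(dist.argmax())  # farthest from the current set
        chosen.append(idx)
        dist = np.minimum(dist, np.linalg.norm(points - points[idx], axis=1))
    return points[chosen]

# Downsample a toy cloud of 1,000 points to 10 well-spread points.
cloud = np.random.default_rng(1).normal(size=(1000, 3))
sample = farthest_point_sampling(cloud, 10)
```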
  • Train the classification models with Train_classifier.py. We have uploaded pretrained PointNet, PointNet++Msg, PointNet++Ssg, and DGCNN models to the cls/Bosphorus folder.
python Train_classifier.py --model PointNet
  • Run the untargeted attack with:
python Evaluate.py --whether_1d --attack_lr 0.01 --num_iter 100 --early_break --binary_step 5 --dist_function L2Loss
  • Run the targeted attack by setting the --whether_target flag. We suggest setting the device to cuda to accelerate computation.
python Evaluate.py --whether_target --whether_1d --attack_lr 0.001 --num_iter 100 --binary_step 5 --dist_function L2Loss

Evaluate on different models

  • To test the ASR on different models, simply change the --model parameter. Pretrained PointNet, PointNet++Msg, PointNet++Ssg, and DGCNN models are provided in the cls/Bosphorus folder. For example, to test the untargeted attack success rate on PointNet++Msg, you can use
python Evaluate.py --whether_1d --attack_lr 0.001 --num_iter 1000 --binary_step 10 --dist_function L2Loss --model PointNet++Msg
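The attack success rate reported here is simply the fraction of adversarial examples that fool the model: for the untargeted attack, the fraction whose prediction differs from the true label; for the targeted attack, the fraction classified as the chosen target. A hedged sketch (the repository's Evaluate.py may compute it differently):

```python
import numpy as np

def attack_success_rate(preds, labels, targets=None):
    """Untargeted: success = prediction differs from the true label.
    Targeted:   success = prediction equals the chosen target label."""
    preds, labels = np.asarray(preds), np.asarray(labels)
    if targets is None:
        return float(np.mean(preds != labels))
    return float(np.mean(preds == np.asarray(targets)))

# Untargeted: 2 of 3 predictions moved away from the true label.
asr = attack_success_rate([1, 2, 3], [1, 1, 1])
```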

Get the adversarial structured light image

  • To generate the fringe images (adversarial illumination) from the adversarial point cloud generated above, both the clean point cloud and the adversarial point cloud are needed:
python get_adv_illumination.py --normal_pc test_face_data/person1.txt --adv_pc test_face_data/adv_person1_untargeted_L1Loss_5.txt --outfolder test_face_data/person1/adversarial_fringe
  • The generated adversarial structured light images are placed in the test_face_data/person1/adversarial_fringe folder.
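Structured light fringe images are typically N phase-shifted sinusoids, I_k = A + B·cos(φ + 2πk/N); the adversarial illumination replaces the ideal phase map φ with the perturbed one. A generic sketch of pattern generation (the parameter values are assumptions, not the script's defaults):

```python
import numpy as np

def fringe_images(phase, n_shifts=4, a=0.5, b=0.5):
    """Generate n phase-shifted fringe patterns from a phase map.

    phase: (H, W) phase map in radians (the perturbed map in the adversarial case)
    a, b:  mean intensity and modulation amplitude
    """
    shifts = 2 * np.pi * np.arange(n_shifts) / n_shifts
    # One sinusoidal pattern per phase shift, stacked along axis 0.
    return np.stack([a + b * np.cos(phase + s) for s in shifts])

# Four shifted patterns from a horizontal linear phase ramp.
ramp = np.tile(np.linspace(0, 4 * np.pi, 256), (128, 1))
patterns = fringe_images(ramp)
```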

Citation

If you use this code for your research, please cite our paper Physical-World Optical Adversarial Attacks on 3D Face Recognition:

@inproceedings{yanjieli2023physicalworld,
  title={Physical-World Optical Adversarial Attacks on 3D Face Recognition},
  author={Yanjie Li and Yiquan Li and Xuelong Dai and Songtao Guo and Bin Xiao},
  booktitle={Conference on Computer Vision and Pattern Recognition 2023},
  year={2023},
  url={https://openreview.net/forum?id=vGZl0N9s0s}
}
