
SiW-Mv2 Dataset and Multi-domain FAS


This project page contains the Spoof in the Wild with Multiple Attacks Version 2 (SiW-Mv2) dataset and the official implementation of our ECCV 2022 oral paper "Multi-domain Learning for Updating Face Anti-spoofing Models". [Arxiv] [SiW-Mv2 Dataset]

Authors: Xiao Guo, Yaojie Liu, Anil Jain, Xiaoming Liu

๐Ÿ‘ Our algorithm has been officially accepted and delivered to the IAPRA ODIN program!

🔥🔥 Check out our quick demo:

A quick view of the code structure:

./Multi-domain-learning-FAS
    ├── source_SiW_Mv2 (the spoof detection baseline: source code, pre-trained weights, and protocol partition files)
    ├── source_multi_domain (the multi-domain updating source code)
    └── DRA_form_SIWMv2.pdf (Dataset Release Agreement)

Note that the spoof detection baseline is described in the supplementary material of [Arxiv].

1. SiW-Mv2 Introduction:

Introduction: SiW-Mv2 is a large-scale face anti-spoofing (FAS) dataset, first introduced together with our multi-domain FAS updating algorithm. It includes 14 spoof attack types, which were designated and verified by the IARPA ODIN program. SiW-Mv2 is also a privacy-aware dataset: ALL live subjects have signed a consent form that permits use of the data for research purposes. More details can be found on the dataset page and in the [paper].

2. SiW-Mv2 Protocols:

To set a baseline for future studies on SiW-Mv2, we define three protocols. Note that the partition file for each protocol is fixed; these files can be found in ./source_SiW_Mv2/pro_3_text/ (see Dataset Sec. 1).

  • Protocol I: Known Spoof Attack Detection. We divide the live subjects and the subjects of each spoof pattern into train and test splits. We train the model on the train split and report the overall performance on the test split.

  • Protocol II: Unknown Spoof Attack Detection. We follow the leave-one-out paradigm: keep $13$ spoof attacks and $80$% of the live subjects as the train split, and use the remaining spoof attack and the left-out $20$% of live subjects as the test split. We report the test-split performance for each individual spoof attack, as well as the average performance with standard deviation.

  • Protocol III: Cross-domain Spoof Detection. We partition SiW-Mv2 into $5$ sub-datasets, where each sub-dataset represents a novel spoof type, a different age or ethnicity distribution, or new illumination conditions. We train the model on the source-domain dataset and evaluate it on the test splits of the $5$ domains. We report each sub-dataset's performance, as well as the average performance with standard deviation.
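As a rough sketch, the Protocol II (leave-one-out) split described above can be expressed as follows. The subject lists, video identifiers, and function name are illustrative; the actual fixed partitions ship as text files in ./source_SiW_Mv2/pro_3_text/.

```python
import random

def leave_one_out_split(live_subjects, spoof_videos, unknown_attack, seed=0):
    """Protocol II sketch: hold out one attack type entirely and 20% of lives.

    spoof_videos is a list of (video_id, attack_type) pairs.
    """
    rng = random.Random(seed)
    live = list(live_subjects)
    rng.shuffle(live)
    cut = int(0.8 * len(live))                 # 80% of lives for training
    train_live, test_live = live[:cut], live[cut:]
    # The held-out attack never appears in the train split.
    train_spoof = [v for v, a in spoof_videos if a != unknown_attack]
    test_spoof = [v for v, a in spoof_videos if a == unknown_attack]
    return (train_live, train_spoof), (test_live, test_spoof)
```

Running this once per attack type yields the 13-vs-1 splits whose per-attack and averaged results the protocol reports.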

3. Baseline Performance

  • We implement SRENet as the baseline model and evaluate it on the three SiW-Mv2 protocols. Please find the details in the [paper].
  • To quickly reproduce the numbers below from the .csv result files, please go to Dataset Sec. 2.

(Figure: baseline performance on the three SiW-Mv2 protocols.)

  • In ./source_SiW_Mv2, we provide detailed dataset preprocessing steps as well as the training scripts.

(Figures: dataset preprocessing and training illustrations.)

4. Baseline Pre-trained Weights

  • Also, pre-trained weights for the $3$ different protocols can be found on this page.
| Protocol | Unknown | Download | Protocol | Unknown | Download | Protocol | Unknown | Download |
|----------|---------|----------|----------|---------|----------|----------|---------|----------|
| I | N/A | link | II | Partial Eyes | link | II | Transparent | link |
| II | Full Mask | link | II | Paper Mask | link | II | Obfuscation | link |
| II | Cosmetic | link | II | Paper glass | link | II | Print | link |
| II | Impersonate | link | II | Silicone | link | II | Replay | link |
| II | FunnyEyes | link | II | Partial Mouth | link | II | Mannequin | link |
| III | Cross Domain | link | | | | | | |

5. Download

  1. The SiW-Mv2 database is available under a license from Michigan State University for research purposes. Sign the Dataset Release Agreement link.

  2. Submit the request and your signed DRA to [email protected] with the following information:

    • Title: SiW-Mv2 Application
    • CC: Your advisor's email
    • Content Line 1: Your name, email, affiliation
    • Content Line 2: Your advisor's name, email, webpage
    • Attachment: Signed DRA
  3. You will receive the download instructions upon approval of your usage of the database.

Reference

If you would like to use our work, please cite:

@inproceedings{xiaoguo2022MDFAS,
    title={Multi-domain Learning for Updating Face Anti-spoofing Models},
    author={Guo, Xiao and Liu, Yaojie and Jain, Anil and Liu, Xiaoming},
    booktitle={ECCV},
    year={2022}
}

This GitHub repository will continue to be updated in the near future. If you have any questions, please contact: Xiao Guo

multi-domain-learning-fas's Issues

Encounter "ZeroDivisionError: float division by zero" when running source_SiW_Mv2/preprocessing.py

Hello, first of all thank you for this great work. Below is the error I encounter when running source_SiW_Mv2/preprocessing.py.
It seems that the "eye2eye_dis" of the current frame equals zero, which also makes "xr-xl" equal to zero. The x_scale then cannot be computed (it becomes infinite). May I ask whether the same problem happens when you run this code? Would it be OK to simply skip such frames? Hope to hear from you soon. Thank you very much.
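A minimal sketch of the skip-the-frame workaround suggested in this issue. The variable names (eye2eye_dis, xr, xl) follow the issue text; the helper itself and the target width are illustrative, not the repo's actual preprocessing code.

```python
EPS = 1e-6  # treat eye distances below this as a degenerate landmark detection

def safe_x_scale(xr, xl, target_width=256.0):
    """Return the horizontal scale factor, or None when the landmarks collapse.

    Callers can skip the frame when None is returned, avoiding the
    ZeroDivisionError reported above.
    """
    eye2eye_dis = xr - xl
    if abs(eye2eye_dis) < EPS:
        return None
    return target_width / eye2eye_dis
```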

running inference.py gives an error

Hi @CHELSEA234

When I run the inference.py file using this command:

python3 inference.py --cuda=0 --pro=1 --dir=./demo/live/ --overwrite --weight_dir=./resources/save_model_siwmv2_pro_1_unknown_Ob

it gives this error:

File "inference_ed.py", line 194, in test_step
    img, img_name = dataset_inference.nextit()
  File "project/Multi-domain-learning-FAS/source_SiW_Mv2/dataset.py", line 121, in nextit
    return next(self.feed)
  File "torch_env/lib/python3.8/site-packages/tensorflow/python/data/ops/iterator_ops.py", line 816, in __next__
    raise StopIteration
StopIteration

How can I solve this issue?
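The StopIteration above typically means the tf.data iterator was exhausted, often because --dir matched no images. A minimal defensive loop, sketched around the nextit() method named in the traceback (the wrapper itself is illustrative, not part of the repo):

```python
def iterate_safely(dataset):
    """Yield batches from a dataset's nextit() until the iterator is exhausted.

    Catching StopIteration here turns an abrupt crash into a clean end-of-data,
    so an empty input directory yields zero batches instead of a traceback.
    """
    while True:
        try:
            yield dataset.nextit()
        except StopIteration:
            break
```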

about data preprocessing

Thanks for sharing the code. I'm new to FAS and would be very grateful if you could share some data preprocessing details.

  1. According to the paper, the FAS model and framework are frame-based, right? For preprocessing, is it necessary to extract frames from the video and then crop out the face from each frame? How many frames are extracted from each video?

How to use recon

Hi, I don't understand the purpose of the reconstruction (recon) of the live image. Can it be used to obtain a numerical output that states whether an image is live or spoof?

construct the MD-FAS benchmark

When I construct the FASMD dataset based on the lists (SiW E train and OULU E train) you provided, I get an E sub-dataset with 839 (from SiW) + 1620 (from OULU) videos, which is a lot more than the 1696 listed in the paper.

How to extract images from videos?

Hi,

thank you very much for your work and for releasing the code :) I have read your code carefully but still have some doubts. How do you prepare the datasets? And how can I get frames from the videos?

Looking forward to your reply....

Some questions about inference

Hi, in the inference.py file, I see that you load all of the models into lists (model_list, model_p_list, model_d_list), but I couldn't see where any of these lists are used in inference.py. Can you clarify this point for me, please?

the requirements file is not clear

Hi,
Please add a requirements file that pins the versions of the libraries required for inference.
When I run the inference file, I get this error:

face-alignment is not defined

Help with downloading the dataset SIW

Hello,
I sent the request form and the signed DRA to ask for the download link of the SiW dataset a month ago, but I haven't received any reply so far. I also sent you an email a few days ago asking for help. If possible, could you check my email and help me with downloading the dataset?
Thank you very much.

How to get live/spoof?

Hi, I see the model outputs depth, region, content, and additive traces. How can these outputs be transformed into a live/spoof decision to replicate the results in your paper?
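A common convention in map-based FAS is to average a predicted spoof map into a scalar score and threshold it. This is stated as an assumption for illustration only, not necessarily this repo's exact decision rule; the map name and threshold below are hypothetical.

```python
import numpy as np

def spoof_score(region_map):
    """Mean activation of a predicted spoof map; higher = more spoof-like.

    region_map is assumed to be a 2D array of per-pixel spoof activations.
    """
    return float(np.mean(region_map))

def classify(region_map, threshold=0.2):
    """Threshold the scalar score into a live/spoof label (threshold is a guess)."""
    return "spoof" if spoof_score(region_map) >= threshold else "live"
```

In practice the threshold would be tuned on a validation split rather than fixed.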

About the metrics

In the paper, you use TPR@FPR=0.5% as a metric, but this metric is not included in metric.py (I can only find TPR@FPR=0.2%, 5%, and 1%). I am also curious about test_architecture.py, because it does not seem to work: the function test_update is never called.
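For reference, TPR at a fixed FPR (e.g. 0.5%) can be computed directly from scores. This is a generic sketch of the metric discussed above, not the repo's metric.py implementation; it assumes higher scores mean more spoof-like and label 1 = spoof.

```python
import numpy as np

def tpr_at_fpr(labels, scores, target_fpr=0.005):
    """True-positive rate at a fixed false-positive rate.

    labels: 1 = spoof (positive), 0 = live (negative).
    scores: higher = more spoof-like.
    """
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    # Threshold at the (1 - target_fpr) quantile of the live (negative) scores,
    # so roughly target_fpr of lives fall above it.
    thr = np.quantile(scores[~labels], 1.0 - target_fpr)
    return float(np.mean(scores[labels] > thr))
```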
