
silent-face-anti-spoofing's Introduction

Chinese Version | English Version

Silent Face Anti-Spoofing (静默活体检测)

This is Minivision Technology's (小视科技) silent face anti-spoofing project. You can scan the QR code below to get the Android APK and try out the silent liveness detection for yourself.

Updates

2020-07-30: Open-sourced the Caffe models and shared the live-stream video on the technical breakdown of the industrial-grade silent face anti-spoofing algorithm, along with the related files.

Introduction

This repository open-sources the liveness-model training architecture, the data preprocessing method, the model training and testing scripts, and an open-source APK for everyone to test.

Face anti-spoofing (liveness detection) determines whether the face presented to the device is real or fake. A face presented through any other medium is defined as fake, including printed paper photos, electronic display screens, silicone masks, and 3D busts. Mainstream liveness solutions fall into two categories: interactive liveness detection and non-interactive (silent) liveness detection. Interactive detection requires the user to perform specified actions when prompted before the liveness check, whereas silent detection runs the check directly, without any user cooperation.

Because the Fourier spectrum reflects, to some extent, the difference between real and fake faces in the frequency domain, we adopt a silent liveness detection method with Fourier-spectrum auxiliary supervision. The architecture consists of a main classification branch and a Fourier-spectrum auxiliary-supervision branch, as shown below:
(Overall architecture diagram)
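A minimal sketch of how such a Fourier-spectrum supervision target can be computed from a face crop. The function name generate_FT and the use of cv2.COLOR_RGB2GRAY come from the issues further below; the log-magnitude normalization here is an assumption, not necessarily the repository's exact implementation.

import cv2
import numpy as np

def generate_FT(image):
    # Grayscale conversion as reported in the "Opencv_Loader BGR" issue below.
    gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
    f = np.fft.fft2(gray)
    fshift = np.fft.fftshift(f)              # move the zero frequency to the center
    magnitude = np.log(np.abs(fshift) + 1)   # log-magnitude spectrum
    # Scale to [0, 1] so the auxiliary branch can regress against it (assumed normalization).
    magnitude = (magnitude - magnitude.min()) / (magnitude.max() - magnitude.min() + 1e-8)
    return magnitude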

Using our in-house model-pruning method, we reduced the FLOPs of MobileFaceNet from 0.224G to 0.081G, which noticeably improves model efficiency (lower compute and fewer parameters) with only a small loss in accuracy.

Model FLOPs Params
MobileFaceNet 0.224G 0.991M
MiniFASNetV1 0.081G 0.414M
MiniFASNetV2 0.081G 0.435M

APK

APK Source Code

The deployment code for the Android platform is open-sourced at: https://github.com/minivision-ai/Silent-Face-Anti-Spoofing-APK

Demo

Key Metrics

Model (input 80x80) | FLOPs | Speed | FPR  | TPR   | Notes
APK model           | 84M   | 20ms  | 1e-5 | 97.8% | open-source
High-accuracy model | 162M  | 40ms  | 1e-5 | 99.7% | not open-source
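The FPR/TPR figures above are standard ROC operating points. A minimal sketch (generic evaluation code, not the authors' evaluation script) of computing TPR at a fixed FPR such as 1e-5 from per-image scores and labels:

import numpy as np
from sklearn.metrics import roc_curve

def tpr_at_fpr(labels, scores, target_fpr=1e-5):
    # labels: 1 = real face, 0 = attack; scores: predicted real-face confidence.
    fpr, tpr, _ = roc_curve(labels, scores)
    return float(np.interp(target_fpr, fpr, tpr))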

Test Method

  • Displayed information: speed (ms), confidence (0-1), and the liveness result (real face or fake face)
  • Tap the icon in the upper-right corner to set the threshold; if the confidence is greater than the threshold the face is judged real, otherwise fake

Testing Notes

  • All test images must be captured with a camera; otherwise they do not match the normal usage scenario and the algorithm's performance cannot be guaranteed.
  • Because RGB silent liveness detection has limited robustness to different camera models and usage scenarios, the actual experience may vary.
  • During testing, make sure a complete face is in view and the face is rotated less than 30 degrees from vertical (i.e. a normal face-scanning pose); otherwise results will suffer.

Tested Devices

Device     | Kirin 990 5G | Kirin 990 | Snapdragon 845 | Kirin 810 | RK3288
Speed (ms) | 19           | 23        | 24             | 25        | 90

Project

Install dependencies

pip install -r requirements.txt

Clone

git clone https://github.com/minivision-ai/Silent-Face-Anti-Spoofing  
cd Silent-Face-Anti-Spoofing

Data Preprocessing

1. Split the training set into 3 classes and put images of the same class into one folder.
2. Because we use a multi-scale model-fusion approach and train separate models on the original image and on different patches, the data is divided into original images and patches derived from them:

  • Original image (org_1_heightxwidth): resize the original image directly to a fixed size (width, height), as shown in Figure 1.
  • Patch based on the original image (scale_heightxwidth): detect the face with a face detector to get the face box, expand the box by a given ratio (scale), and resize the expanded region to a fixed size (width, height) so that the model input size stays consistent. Figures 2-4 show patch examples for scale 1, 2.7, and 4; a cropping sketch is given right after this list.
    (patch demo image)

3. Fourier spectra are used as auxiliary supervision; the corresponding spectrum for each training image is generated on the fly.
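A minimal sketch of the scale-based patch cropping described above (the exact expansion and boundary handling are assumptions; the repository's own cropping utility may differ):

import cv2

def crop_patch(image, bbox, scale, out_w=80, out_h=80):
    # bbox is (x, y, w, h) from a face detector; scale=1 keeps the detected box,
    # while scale=2.7 or 4 enlarges it, as in the patch examples above.
    img_h, img_w = image.shape[:2]
    x, y, w, h = bbox
    cx, cy = x + w / 2.0, y + h / 2.0
    new_w, new_h = w * scale, h * scale
    # Clip the expanded box to the image boundaries.
    x1 = max(int(cx - new_w / 2), 0)
    y1 = max(int(cy - new_h / 2), 0)
    x2 = min(int(cx + new_w / 2), img_w)
    y2 = min(int(cy + new_h / 2), img_h)
    patch = image[y1:y2, x1:x2]
    return cv2.resize(patch, (out_w, out_h))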
The dataset directory structure is as follows:

├── datasets
│   └── RGB_Images
│       ├── org_1_80x60
│       │   ├── 0
│       │   │   ├── aaa.png
│       │   │   ├── bbb.png
│       │   │   └── ...
│       │   ├── 1
│       │   │   ├── ddd.png
│       │   │   ├── eee.png
│       │   │   └── ...
│       │   └── 2
│       │       ├── ggg.png
│       │       ├── hhh.png
│       │       └── ...
│       ├── 1_80x80
│       └── ...
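A small helper sketch for writing cropped patches into the layout above (paths and the 0/1/2 label folders follow the tree; everything else is illustrative):

import os
import cv2

def save_patch(patch, patch_info, label, filename, root="datasets/RGB_Images"):
    # e.g. patch_info="1_80x80" or "org_1_80x60", label in {0, 1, 2} as in the tree above.
    out_dir = os.path.join(root, patch_info, str(label))
    os.makedirs(out_dir, exist_ok=True)
    cv2.imwrite(os.path.join(out_dir, filename), patch)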

Training

python train.py --device_ids 0  --patch_info your_patch
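The --patch_info value should match one of the dataset folder names (e.g. org_1_80x60, 1_80x80, 2.7_80x80, or 4_80x80, as listed in the issues below), for example:

python train.py --device_ids 0 --patch_info 1_80x80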

Testing

./resources/anti_spoof_models: fused anti-spoofing models
./resources/detection_model: face detector model
./images/sample: test images

python test.py --image_name your_image_name
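A minimal sketch of what the test step does, based on the AntiSpoofPredict calls visible in the issue tracebacks and snippets below. The per-model patch cropping that test.py performs before prediction is omitted, and the class-index meaning is an assumption:

import os
import numpy as np
from src.anti_spoof_predict import AntiSpoofPredict  # class used by test.py (see tracebacks below)

def predict_image(img, model_dir="./resources/anti_spoof_models", device_id=0):
    model_test = AntiSpoofPredict(device_id)
    model_names = os.listdir(model_dir)
    prediction = np.zeros((1, 3))
    for model_name in model_names:
        # test.py accumulates: prediction += model_test.predict(img, model_path)
        prediction += model_test.predict(img, os.path.join(model_dir, model_name))
    label = int(np.argmax(prediction))
    # Dividing by the number of models keeps the confidence in [0, 1]
    # (see the "Why do we add the model predictions?" issue below).
    confidence = prediction[0][label] / len(model_names)
    return label, confidence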

Related Resources

Baidu Netdisk, extraction code: 6d8q
(1) Replay of the live stream explaining the open-sourced industrial-grade silent face anti-spoofing algorithm;
(2) The mind-map file from the live stream, stored in the files directory;
(3) The caffemodel files of the open-source models, stored in the models directory.

Community

To make technical exchange easier for developers, a QQ group has been created for this project: 1121178835. You are welcome to join.

Besides the silent face anti-spoofing algorithm open-sourced here, Minivision Technology (小视科技) also has a number of in-house face- and body-recognition algorithms and commercial SDKs. Interested individual or enterprise developers can visit the Minivision Mini-AI open platform to learn more and contact us.

silent-face-anti-spoofing's People

Contributors

lzcai, nbadalls, zhuyingseu


silent-face-anti-spoofing's Issues

train problem

I modified MultiFTNet.py, changing self.model = MiniFASNetV2SE(...) to self.model = MiniFASNetV1SE(...), and processed the images as 1_80x80. The model I trained has size 2690, while the stock resources/4_0_0_80x80_MiniFASNetV1SE.pth has size 1813. What is going on?

FAILED: ReadProtoFromTextFile(param_file, param). Failed to parse NetParameter file: ./resources/detection_model/deploy.prototxt in function 'cv::dnn::ReadNetParamsFromTextFileOrDie'

I am running this code on Windows 10 and the model fails to load. What do I need to change?
File "D:/project/alive_detect/Silent-Face-Anti-Spoofing-master/test.py", line 109, in
test(args.image_name, args.model_dir, args.device_id)
File "D:/project/alive_detect/Silent-Face-Anti-Spoofing-master/test.py", line 35, in test
model_test = AntiSpoofPredict(device_id)
File "D:\project\alive_detect\Silent-Face-Anti-Spoofing-master\src\anti_spoof_predict.py", line 55, in init
super(AntiSpoofPredict, self).init()
File "D:\project\alive_detect\Silent-Face-Anti-Spoofing-master\src\anti_spoof_predict.py", line 32, in init
self.detector = cv2.dnn.readNetFromCaffe(deploy, caffemodel)
cv2.error: OpenCV(4.1.0) C:\projects\opencv-python\opencv\modules\dnn\src\caffe\caffe_io.cpp:1151: error: (-2:Unspecified error) FAILED: ReadProtoFromTextFile(param_file, param). Failed to parse NetParameter file: ./resources/detection_model/deploy.prototxt in function 'cv::dnn::ReadNetParamsFromTextFileOrDie'

ONNX model !!

Hello, I converted your models to ONNX successfully, but the output of the V1SE model is not consistent with the given PyTorch model. I used onnxruntime for inference on both models.
The V2 model is fine, but V1SE is not. Can you help?

Confidence greater than 1

Thanks for sharing. I ran predictions with your models on images resized to 80x80 and found that every model sometimes outputs a confidence greater than 1. Isn't the model output supposed to be in the range 0 to 1, or am I doing something wrong?

Training on my own data

Hi, I tested your models and they do not perform well in my scenario, which uses grayscale images from an infrared camera. Would training with your pipeline work well there, or do you have any suggestions?

Is org_1_heightxwidth 80x60?

"Original image (org_1_heightxwidth): resize the original image directly to a fixed size (width, height), as shown in Figure 1."
Is org_1_heightxwidth obtained by resizing images in the dataset with aspect ratio width/height = 3/4 to 80x60?

Opencv_Loader BGR

cv2.imread loads images as BGR by default, but generate_FT uses cv2.COLOR_RGB2GRAY. Is that correct?
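For context on what that mismatch means in practice (general OpenCV behaviour, not a statement about the repository's training data): applying COLOR_RGB2GRAY to a BGR array swaps the red and blue weights of the grayscale conversion, which shifts pixel values slightly.

import cv2
import numpy as np

bgr = cv2.imread("face.png")                       # hypothetical file; cv2.imread returns BGR
gray_bgr = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)   # 0.299*R + 0.587*G + 0.114*B
gray_rgb = cv2.cvtColor(bgr, cv2.COLOR_RGB2GRAY)   # same formula, but R and B are swapped here
print(np.abs(gray_bgr.astype(int) - gray_rgb.astype(int)).mean())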

Model 1_80x80 ?

In the Data Processing part of your README you mention three scaling factors (1_80x80, 2.7_80x80, and 4_80x80) and provide pretrained models for 2.7 and 4.0, but not for 1.0. Could you provide the 1_80x80 model as well?
Thank you

Real-time video detection

I am trying to run real-time detection and display using my computer's camera on CPU, but the FPS is not good, nowhere near the demo GIF. I did this by modifying test.py. What should I pay attention to when modifying test.py, and which part of the code affects FPS the most?

Using provided pretrained models for finetuning

Firstly, I would like to thank the Minivision group for sharing the source code and trained weights with the open-source community. The documentation in the README is great too; you even put in the effort to translate it into English. Thank you very much.

I am currently trying to fine-tune the provided pretrained weights on my own dataset (only 2 classes, real or spoof), but I am facing some problems. Are there any tips for doing so?

I note that the provided state_dict has been pruned, and looking at the way you load the models, I see you do the following:

state_dict = torch.load(model_path, map_location=self.device)
keys = iter(state_dict)
first_layer_name = keys.__next__()
if first_layer_name.find('module.') >= 0:
    from collections import OrderedDict
    new_state_dict = OrderedDict()
    for key, value in state_dict.items():
        name_key = key[7:]
        new_state_dict[name_key] = value
    self.model.load_state_dict(new_state_dict)
else:
    self.model.load_state_dict(state_dict)

where the 'module.' prefix is stripped from the state_dict keys before loading.

Threshold question

Is the threshold adjusted through the following parameter?
anti_spoof_predict.py
self.detector_confidence = 0.6

Changes I make to this value are not reflected in my prediction results:
0.8, 0.7, 0.5, and 1.0 all produce the same prediction.

Which parameter actually controls the decision threshold?

Very poor results

I tried both the app and the Python code and neither works for me. Adjusting the threshold makes no difference: as long as the picture is held close enough, it is judged a real face with a score above 0.9.
I also don't understand why two models are stacked together; the two models seem to give the same result anyway.
At first I thought it was the input image size, but even with 480x640 input, the same as the demo, it still fails.
(screenshots: 1_result, 2_result)

If without FT

Thanks for your excellent work and for sharing it!
I am wondering whether you have done an ablation experiment: how does the model perform if the FT generator is removed? I cannot tell whether the strong performance comes from your original CNN model or from the gain brought by FT supervision.
Looking forward to your reply, thanks!

Do you have a validation dataset for training?

Hi, thanks for your work. I have read your code and it seems you do not use a validation set during training.
Am I wrong? Could you clarify this for me? I would really appreciate it. Thank you!

Model Architecture for Scale1_80x80 Training

Hello,

Is it possible to provide just the model architecture file for scale 1_80x80? Then I can train it on my own database and share it with you.

Does the scale 1 model need the Squeeze-and-Excitation module for training?

Thanks,

Incorrect unit in the elapsed-time calculation

time.time() returns seconds as a float, so subtracting two values gives seconds, not milliseconds. For example, my machine displays 0.11 ms when the real value is 0.11 s.
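A minimal illustration of the fix being suggested (generic Python, not a patch against the repository's exact code):

import time

start = time.time()
# ... run inference here ...
elapsed_s = time.time() - start
print("prediction cost {:.2f} ms".format(elapsed_s * 1000))  # multiply by 1000 to report milliseconds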

PReLU conversion problem

Hello, this is very good work. In practice I want to convert the open-sourced caffemodel to TensorRT, but TensorRT does not support the PReLU layer. How can the PReLU layer be converted into ReLU layers that TensorRT supports? Many thanks.
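One common workaround (a generic suggestion, not an answer from the maintainers) uses the identity PReLU(x) = ReLU(x) - a * ReLU(-x), which expresses PReLU with only ReLU, scaling, and elementwise ops. A PyTorch sketch:

import torch
import torch.nn as nn

class PReLUAsReLU(nn.Module):
    # Re-express PReLU(x) = ReLU(x) - a * ReLU(-x) using only ReLU-friendly ops.
    def __init__(self, prelu: nn.PReLU):
        super().__init__()
        # Keep the learned slope; reshape so it broadcasts over (N, C, H, W).
        self.register_buffer("a", prelu.weight.detach().reshape(1, -1, 1, 1))
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(x) - self.a * self.relu(-x)

def replace_prelu(module: nn.Module):
    # Recursively swap every nn.PReLU before exporting the model.
    for name, child in module.named_children():
        if isinstance(child, nn.PReLU):
            setattr(module, name, PReLUAsReLU(child))
        else:
            replace_prelu(child)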

Dataset Overview?

Hello, I really want to understand what kind of dataset you used. I know it has three classes (2D fake, 3D fake, and real); could you provide sample images for these three classes?
Also, did you use any open-source dataset such as CelebA or FFHQ for the real-face samples, and how did you collect the data? Just an overview would help with training.
Could you also describe how the dataset was collected or built, and its size (number of images per class)?

Thanks

how should the dataset be?

Is the dataset you are using an open-source dataset, or was it prepared by you?
I read the data preprocessing section but it was not clear to me.
What should the dataset structure be for training?

Why no padding to square before resizing

Hi, why doesn't the preprocessing pad the face box to a square before resizing? Doesn't resizing directly change the aspect ratio of the face box and introduce distortion?

Fourier branch question

Hi, the target of the Fourier branch is the input image after a Fourier transform and resize, but in the code its prediction is a feature map from some layer of the network. The two do not seem to correspond, and taking an MSELoss between them directly does not seem principled. Could you explain?

conversion to caffe Error?

Really nice.
I was trying to convert the given models to Caffe using https://github.com/xxradon/PytorchToCaffe
The first model, 2.7_80x80_MiniFASNetV2.pth, converted successfully, but for 4_0_0_80x80_MiniFASNetV1SE.pth this error occurs:
conv1.conv
conv: blob1
conv1 was added to layers
139799641391824:conv_blob1 was added to blobs
conv1.bn
batch_norm1 was added to layers
139799641391176:batch_norm_blob1 was added to blobs
bn_scale1 was added to layers
conv1.prelu
prelu1 was added to layers
139799641391680:prelu_blob1 was added to blobs
conv2_dw.conv
conv: prelu_blob1
conv2 was added to layers
139799641392040:conv_blob2 was added to blobs
conv2_dw.bn
batch_norm2 was added to layers
139799641391752:batch_norm_blob2 was added to blobs
bn_scale2 was added to layers
conv2_dw.prelu
prelu2 was added to layers
139799641392256:prelu_blob2 was added to blobs
conv_23.conv.conv
conv: prelu_blob2
conv3 was added to layers
139799641392616:conv_blob3 was added to blobs
conv_23.conv.bn
batch_norm3 was added to layers
139799641392976:batch_norm_blob3 was added to blobs
bn_scale3 was added to layers
conv_23.conv.prelu
prelu3 was added to layers
139799641392184:prelu_blob3 was added to blobs
conv_23.conv_dw.conv
conv: prelu_blob3
conv4 was added to layers
139799641392400:conv_blob4 was added to blobs
conv_23.conv_dw.bn
batch_norm4 was added to layers
139799641391536:batch_norm_blob4 was added to blobs
bn_scale4 was added to layers
conv_23.conv_dw.prelu
prelu4 was added to layers
139799641391320:prelu_blob4 was added to blobs
conv_23.project.conv
conv: prelu_blob4
conv5 was added to layers
139799641391608:conv_blob5 was added to blobs
conv_23.project.bn
batch_norm5 was added to layers
139799641392328:batch_norm_blob5 was added to blobs
bn_scale5 was added to layers
conv_3.model.0.conv.conv
conv: batch_norm_blob5
conv6 was added to layers
139799641391464:conv_blob6 was added to blobs
conv_3.model.0.conv.bn
batch_norm6 was added to layers
139799641392760:batch_norm_blob6 was added to blobs
bn_scale6 was added to layers
conv_3.model.0.conv.prelu
prelu5 was added to layers
139799641392904:prelu_blob5 was added to blobs
conv_3.model.0.conv_dw.conv
conv: prelu_blob5
conv7 was added to layers
139799641392112:conv_blob7 was added to blobs
conv_3.model.0.conv_dw.bn
batch_norm7 was added to layers
139799641391896:batch_norm_blob7 was added to blobs
bn_scale7 was added to layers
conv_3.model.0.conv_dw.prelu
prelu6 was added to layers
139799641392472:prelu_blob6 was added to blobs
conv_3.model.0.project.conv
conv: prelu_blob6
conv8 was added to layers
139799641394992:conv_blob8 was added to blobs
conv_3.model.0.project.bn
batch_norm8 was added to layers
139799641394920:batch_norm_blob8 was added to blobs
bn_scale8 was added to layers
add1 was added to layers
139799641395064:add_blob1 was added to blobs
conv_3.model.1.conv.conv
conv: add_blob1
conv9 was added to layers
139799641394704:conv_blob9 was added to blobs
conv_3.model.1.conv.bn
batch_norm9 was added to layers
139799641394632:batch_norm_blob9 was added to blobs
bn_scale9 was added to layers
conv_3.model.1.conv.prelu
prelu7 was added to layers
139799641394776:prelu_blob7 was added to blobs
conv_3.model.1.conv_dw.conv
conv: prelu_blob7
conv10 was added to layers
139799641394416:conv_blob10 was added to blobs
conv_3.model.1.conv_dw.bn
batch_norm10 was added to layers
139799641394344:batch_norm_blob10 was added to blobs
bn_scale10 was added to layers
conv_3.model.1.conv_dw.prelu
prelu8 was added to layers
139799641394488:prelu_blob8 was added to blobs
conv_3.model.1.project.conv
conv: prelu_blob8
conv11 was added to layers
139799641394128:conv_blob11 was added to blobs
conv_3.model.1.project.bn
batch_norm11 was added to layers
139799641394056:batch_norm_blob11 was added to blobs
bn_scale11 was added to layers
add2 was added to layers
139799641394272:add_blob2 was added to blobs
conv_3.model.2.conv.conv
conv: add_blob2
conv12 was added to layers
139799641393840:conv_blob12 was added to blobs
conv_3.model.2.conv.bn
batch_norm12 was added to layers
139799641393768:batch_norm_blob12 was added to blobs
bn_scale12 was added to layers
conv_3.model.2.conv.prelu
prelu9 was added to layers
139799641393984:prelu_blob9 was added to blobs
conv_3.model.2.conv_dw.conv
conv: prelu_blob9
conv13 was added to layers
139799641393552:conv_blob13 was added to blobs
conv_3.model.2.conv_dw.bn
batch_norm13 was added to layers
139799641393480:batch_norm_blob13 was added to blobs
bn_scale13 was added to layers
conv_3.model.2.conv_dw.prelu
prelu10 was added to layers
139799641393696:prelu_blob10 was added to blobs
conv_3.model.2.project.conv
conv: prelu_blob10
conv14 was added to layers
139799641393264:conv_blob14 was added to blobs
conv_3.model.2.project.bn
batch_norm14 was added to layers
139799640977344:batch_norm_blob14 was added to blobs
bn_scale14 was added to layers
add3 was added to layers
139799641393408:add_blob3 was added to blobs
conv_3.model.3.conv.conv
conv: add_blob3
conv15 was added to layers
139799640976912:conv_blob15 was added to blobs
conv_3.model.3.conv.bn
batch_norm15 was added to layers
139799640976768:batch_norm_blob15 was added to blobs
bn_scale15 was added to layers
conv_3.model.3.conv.prelu
prelu11 was added to layers
139799640977200:prelu_blob11 was added to blobs
conv_3.model.3.conv_dw.conv
conv: prelu_blob11
conv16 was added to layers
139799640976336:conv_blob16 was added to blobs
conv_3.model.3.conv_dw.bn
batch_norm16 was added to layers
139799640976192:batch_norm_blob16 was added to blobs
bn_scale16 was added to layers
conv_3.model.3.conv_dw.prelu
prelu12 was added to layers
139799640976624:prelu_blob12 was added to blobs
conv_3.model.3.project.conv
conv: prelu_blob12
conv17 was added to layers
139799640975760:conv_blob17 was added to blobs
conv_3.model.3.project.bn
batch_norm17 was added to layers
139799640975616:batch_norm_blob17 was added to blobs
bn_scale17 was added to layers
conv_3.model.3.se_module.avg_pool
ave_pool1 was added to layers
139799640975184:ave_pool_blob1 was added to blobs
conv_3.model.3.se_module.fc1
conv: ave_pool_blob1
conv18 was added to layers
139799640976048:conv_blob18 was added to blobs
conv_3.model.3.se_module.bn1
batch_norm18 was added to layers
139799640975040:batch_norm_blob18 was added to blobs
bn_scale18 was added to layers
conv_3.model.3.se_module.relu
relu1 was added to layers
139799640975472:relu_blob1 was added to blobs
conv_3.model.3.se_module.fc2
conv: relu_blob1
conv19 was added to layers
139799640974608:conv_blob19 was added to blobs
conv_3.model.3.se_module.bn2
batch_norm19 was added to layers
139799640974464:batch_norm_blob19 was added to blobs
bn_scale19 was added to layers
mul1 was added to layers
139799640974032:mul_blob1 was added to blobs
WARNING: CANNOT FOUND blob 139799640974896
Traceback (most recent call last):
File "silent_spoof_convert_to_caffe.py", line 36, in
pytorch_to_caffe.trans_net(net,input,name)
File "./pytorch_to_caffe.py", line 786, in trans_net
out = net.forward(input_var)
File "/home/boson/WORKDONE/FaceSpoofing/Silent-Face-Anti-Spoofing/src/model_lib/MiniFASNet.py", line 222, in forward
out = self.conv_3(out)
File "/home/boson/Pytorch-cu9.2/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "/home/boson/WORKDONE/FaceSpoofing/Silent-Face-Anti-Spoofing/src/model_lib/MiniFASNet.py", line 134, in forward
return self.model(x)
File "/home/boson/Pytorch-cu9.2/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "/home/boson/Pytorch-cu9.2/venv/lib/python3.6/site-packages/torch/nn/modules/container.py", line 92, in forward
input = module(input)
File "/home/boson/Pytorch-cu9.2/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "/home/boson/WORKDONE/FaceSpoofing/Silent-Face-Anti-Spoofing/src/model_lib/MiniFASNet.py", line 156, in forward
x = self.se_module(x)
File "/home/boson/Pytorch-cu9.2/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "/home/boson/WORKDONE/FaceSpoofing/Silent-Face-Anti-Spoofing/src/model_lib/MiniFASNet.py", line 113, in forward
return module_input * x
File "./pytorch_to_caffe.py", line 583, in _mul
bottom=[log.blobs(input), log.blobs(args[0])], top=top_blobs)
File "./Caffe/layer_param.py", line 33, in init
self.bottom.extend(bottom)
TypeError: None has type NoneType, but expected one of: bytes, unicode

Can you help?

Can't open deploy.prototxt in function cv::dnn::ReadProtoFromTextFile

self.detector = cv2.dnn.readNetFromCaffe(deploy, caffemodel)
cv2.error: OpenCV(4.2.0) C:\projects\opencv-python\opencv\modules\dnn\src\caffe\caffe_io.cpp:1121: error: (-2:Unspecified error) FAILED: fs.is_open(). Can't open "./resources/detection_model/deploy.prototxt" in function 'cv::dnn::ReadProtoFromTextFile'

I ran into this problem. My OpenCV version is opencv-python==4.2.0.34.
I tried changing the path to an absolute path, but it did not help.
Environment:
python==3.8.4
torch==1.6.0+cpu
torchvision==0.7.0+cpu
I have looked for a solution myself but made no progress.
I hope someone can help me.

Inconsistent inference results on Windows and Ubuntu

image_F1.jpg gives 0.73 FakeFace on Windows but 0.55 RealFace on Ubuntu 18.04, on the same dual-boot machine with the same environment and package versions. Why are the results inconsistent?

How to use the trained model

Using the trained model raises an error; running test.py on Windows 10 fails with:
Traceback (most recent call last):
File "D:\anaconda3\envs\silence\lib\tarfile.py", line 189, in nti
n = int(s.strip() or "0", 8)
ValueError: invalid literal for int() with base 8: '_rebuil'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\anaconda3\envs\silence\lib\tarfile.py", line 2299, in next
tarinfo = self.tarinfo.fromtarfile(self)
File "D:\anaconda3\envs\silence\lib\tarfile.py", line 1093, in fromtarfile
obj = cls.frombuf(buf, tarfile.encoding, tarfile.errors)
File "D:\anaconda3\envs\silence\lib\tarfile.py", line 1035, in frombuf
chksum = nti(buf[148:156])
File "D:\anaconda3\envs\silence\lib\tarfile.py", line 191, in nti
raise InvalidHeaderError("invalid header")
tarfile.InvalidHeaderError: invalid header

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\anaconda3\envs\silence\lib\site-packages\torch\serialization.py", line 555, in _load
return legacy_load(f)
File "D:\anaconda3\envs\silence\lib\site-packages\torch\serialization.py", line 466, in legacy_load
with closing(tarfile.open(fileobj=f, mode='r:', format=tarfile.PAX_FORMAT)) as tar,
File "D:\anaconda3\envs\silence\lib\tarfile.py", line 1591, in open
return func(name, filemode, fileobj, **kwargs)
File "D:\anaconda3\envs\silence\lib\tarfile.py", line 1621, in taropen
return cls(name, mode, fileobj, **kwargs)
File "D:\anaconda3\envs\silence\lib\tarfile.py", line 1484, in init
self.firstmember = self.next()
File "D:\anaconda3\envs\silence\lib\tarfile.py", line 2311, in next
raise ReadError(str(e))
tarfile.ReadError: invalid header

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:/2020project/Silent-Face-Anti-Spoofing-master/test.py", line 110, in
test(args.image_name, args.model_dir, args.device_id)
File "D:/2020project/Silent-Face-Anti-Spoofing-master/test.py", line 60, in test
prediction += model_test.predict(img, os.path.join(model_dir, model_name))
File "D:\2020project\Silent-Face-Anti-Spoofing-master\src\anti_spoof_predict.py", line 87, in predict
self._load_model(model_path)
File "D:\2020project\Silent-Face-Anti-Spoofing-master\src\anti_spoof_predict.py", line 67, in _load_model
state_dict = torch.load(model_path, map_location=self.device)
File "D:\anaconda3\envs\silence\lib\site-packages\torch\serialization.py", line 386, in load
return _load(f, map_location, pickle_module, **pickle_load_args)
File "D:\anaconda3\envs\silence\lib\site-packages\torch\serialization.py", line 559, in _load
raise RuntimeError("{} is a zip archive (did you mean to use torch.jit.load()?)".format(f.name))
RuntimeError: ./resources/anti_spoof_models\1_80x80_MiniFASNetV1.pth is a zip archive (did you mean to use torch.jit.load()?)

On Linux the error is:
python test.py
Traceback (most recent call last):
File "test.py", line 109, in
test(args.image_name, args.model_dir, args.device_id)
File "test.py", line 59, in test
prediction += model_test.predict(img, os.path.join(model_dir, model_name))
File "/ywcx/guanxiao/Silent-Face-Anti-Spoofing/src/anti_spoof_predict.py", line 87, in predict
self._load_model(model_path)
File "/ywcx/guanxiao/Silent-Face-Anti-Spoofing/src/anti_spoof_predict.py", line 64, in _load_model
self.model = MODEL_MAPPINGmodel_type.to(self.device)
KeyError: 'model'

How can I solve this? On Linux I changed the model file's suffix, and in MODEL_MAPPING = {
'MiniFASNetV1': MiniFASNetV1,
'MiniFASNetV2': MiniFASNetV2,
'MiniFASNetV1SE': MiniFASNetV1SE,
'MiniFASNetV2SE': MiniFASNetV2SE
}
I tried all four of these, and each says the model is wrong. Which model does training use by default?

Model performance

Thank you so much for such amazing code - and for making your weights available!

Do you have ROC curves for assessing performance?

Have you run the classifier over any available benchmark data sets?

Thanks again!

High misjudgement rate for phone-screen photos

Hi, I use my Windows laptop's camera and hold up a selfie opened on a phone screen; the results are not good, and the score is very high, reaching 0.99.

Paper link

Hello! Could you share the paper? And why is this not a binary live/spoof classification? What are the three classes? Thanks!

Multi-scale models

The anti_spoof_models folder under resources only contains the models for scales 2.7 and 4, not the original-scale model (scale=1). Could you provide it?

Why do we add the model predictions?

In test.py the softmaxed model probabilities are summed.

prediction += model_test.predict(img, os.path.join(model_dir, model_name))

This will result in values > 1 which makes no sense for probabilities.

Is there a better way to proceed?

Thanks!
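One simple way to keep the fused score in [0, 1] (a generic suggestion, not the authors' recommendation) is to average the per-model softmax outputs instead of summing them:

import numpy as np

def combine(predictions):
    # predictions: list of per-model softmax outputs, each of shape (1, num_classes).
    # Averaging keeps the fused scores summing to 1, unlike the raw sum in test.py.
    return np.mean(predictions, axis=0)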

Some questions about training

Verification on people from magazine photos did not work well, so I want to train the model to improve this.
I have some questions about training:

  1. Do I need to create folders under datasets following the directory layout and put my training data there?
  2. The patch_info argument should be one of org_1_80x60 / 1_80x80 / 2.7_80x80 / 4_80x80, meaning it only trains on the data I select?
  3. Under org_1_80x60 there are folders 0, 1, and 2, which should be the labels. Do real face, 2D fake face, and 3D fake face correspond to labels 0, 1, and 2 respectively?
  4. Do I need to prepare the training data in these formats (org_1_80x60 / 1_80x80 / 2.7_80x80 / 4_80x80) myself in advance?
  5. Could you release one training sample for each label? A template would help me build my own dataset.

Question about model_type, num_classes, and image size?

Hi,
First, thanks for your excellent product.
I am currently testing your anti-spoofing program and have some questions.

  1. What is the role of "model_type" (MiniFASNetV1 / MiniFASNetV2 / MiniFASNetV1SE / MiniFASNetV2SE)?
    Are there only these 4 options, or can more be added?

  2. What is the role of "num_classes"?
    It is 3 or 4 in your source; does 3 mean the labels "0", "1", "2"?
    ├── datasets
        └── RGB_Images
            ├── org_1_80x60
                ├── 0
                ├── 1
                └── 2
    What do 0/1/2 mean respectively?

    If I only want 2 classes (real face, fake face), can I change the 3 or 4 in your source to 2?

  3. I want to train on 2 kinds of images (real face / fake face).
    3.1) Can I compose the dataset like below? Is it reasonable?
    ├── datasets
        └── RGB_Images
            ├── 1_230x230
                ├── Real
                │   ├── RealFace1.jpg
                │   ├── ...
                └── Fake
                    ├── FakeFace1.jpg
                    ├── FakeFace2.jpg
                    ├── ...

         In your source there are only org_1_80x60, 1_80x80, 2.7_80x80, and 4_80x80.
         My images are 203x203; should I prepare image sizes as you did,
         for example org_1_203x203, 1_80x80, 2.7_80x80, 4_80x80, ...?

    3.2) After training finishes, how do I name the output snapshot .pth when copying it to "Silent-Face-Anti-Spoofing-master\resources\anti_spoof_models"?
    Should I name it "1_80x80_MiniFASNetV2.pth", "1_230x230_MiniFASNetV2.pth", or something else?
    Of course, I analyzed your source, but it is a little bit confusing.

Thanks in Advance

Converted Caffe model does not work well

The model converted with PytorchToCaffe (the converted prototxt matches the official one) gives three equal output values for an input image, while inference with the officially provided model is fine. Where could the problem be? Could you provide the tool you used to convert the Caffe models?
