
arcface-pytorch's Introduction

Hi, nice to meet you 👋

arcface-pytorch's People

Contributors

bubbliiiing


arcface-pytorch's Issues

test

ValueError: Cannot have number of splits n_splits=10 greater than the number of samples: n_samples=0.

Convert those pth models to onnx models

Hi,

How can I convert those pth models to onnx models? I tried several torch2onnx.py scripts (https://github.com/deepinsight/insightface/blob/master/recognition/arcface_torch/torch2onnx.py) from different repositories, but they all fail.

Many thanks!

-Scott
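
For reference, an export along these lines usually works once the backbone and weights load correctly in plain PyTorch. The snippet below is only a sketch: the tiny nn.Sequential is a placeholder standing in for the repo's backbone, the file names are assumptions, and the 112x112 input size follows the image_shape discussed in other issues here.

import torch
import torch.nn as nn

# Placeholder backbone: replace with the network built exactly as in this
# repo (e.g. the mobilefacenet backbone) and load the .pth weights into it.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 128),   # 128-d embedding, placeholder
)
# model.load_state_dict(torch.load("arcface_mobilefacenet.pth", map_location="cpu"))
model.eval()

# Assumption: the model expects 112x112 aligned RGB crops.
dummy = torch.randn(1, 3, 112, 112)
torch.onnx.export(
    model, dummy, "arcface.onnx",
    input_names=["input"], output_names=["embedding"],
    opset_version=11,
    dynamic_axes={"input": {0: "batch"}, "embedding": {0: "batch"}},
)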

load mobilenet_v1_backbone_weights.pth error

I used these weights to train, but unfortunately it raised the error below:

arcface-pytorch/train.py", line 144, in
pretrained_dict = {k: v for k, v in pretrained_dict.items() if np.shape(model_dict[k]) == np.shape(v)}
KeyError: 'stage1.0.0.weight'
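
The KeyError comes from indexing model_dict[k] for a key that does not exist in the current model. A defensive variant of that comprehension (a sketch only; model and the weight path are assumed to be set up as in train.py) skips missing keys as well as shape mismatches:

import numpy as np
import torch

def load_matching_weights(model, weight_path):
    # Keep only tensors whose key exists in the model and whose shape matches;
    # everything else (e.g. 'stage1.0.0.weight') is skipped instead of raising.
    model_dict      = model.state_dict()
    pretrained_dict = torch.load(weight_path, map_location="cpu")
    matched = {k: v for k, v in pretrained_dict.items()
               if k in model_dict and np.shape(model_dict[k]) == np.shape(v)}
    model_dict.update(matched)
    model.load_state_dict(model_dict)
    print("loaded %d / %d tensors from %s" % (len(matched), len(pretrained_dict), weight_path))
    return model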

eval_LFW.py

After downloading the pretrained weights arcface_mobilefacenet.pth and setting backbone to mobilefacenet, why does running eval_LFW.py give an accuracy of only about 60%?

The following error occurs when running the LFW evaluation

It happens at 5888/6000 every time; what could the problem be?

Test Epoch: [5888/6000 (96%)]: : 24it [00:12,  1.86it/s]
Traceback (most recent call last):
  File "eval_LFW.py", line 66, in <module>
    test(test_loader, model, png_save_path, log_interval, batch_size, cuda)
  File "/home/zk/project/arcface-pytorch/utils/utils_metrics.py", line 150, in test
    tpr, fpr, accuracy, val, val_std, far, best_thresholds = evaluate(
  File "/home/zk/project/arcface-pytorch/utils/utils_metrics.py", line 14, in evaluate
    val, val_std, far = calculate_val(thresholds, distances,
  File "/home/zk/project/arcface-pytorch/utils/utils_metrics.py", line 81, in calculate_val
    f = interpolate.interp1d(far_train, thresholds, kind='slinear')
  File "/home/zk/.conda/envs/dl/lib/python3.8/site-packages/scipy/interpolate/_interpolate.py", line 571, in __init__
    self._spline = make_interp_spline(xx, yy, k=order,
  File "/home/zk/.conda/envs/dl/lib/python3.8/site-packages/scipy/interpolate/_bsplines.py", line 1252, in make_interp_spline
    raise ValueError("Expect x to not have duplicates")
ValueError: Expect x to not have duplicates
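
The failure comes from calculate_val(): far_train often contains repeated values (typically many zeros), and interp1d refuses duplicated x values. A workaround sketch, assuming the variables carry the same meaning as in utils/utils_metrics.py, is to collapse duplicates before interpolating:

import numpy as np
from scipy import interpolate

def threshold_at_far(far_train, thresholds, far_target=1e-3):
    # Keep one threshold per distinct FAR value so interp1d gets unique x.
    far_train  = np.asarray(far_train)
    thresholds = np.asarray(thresholds)
    far_unique, idx = np.unique(far_train, return_index=True)
    # Too few distinct points, or the target lies outside the observed FAR
    # range: fall back to a zero threshold, mirroring the usual fallback in
    # facenet-style calculate_val implementations.
    if len(far_unique) < 2 or not (far_unique[0] <= far_target <= far_unique[-1]):
        return 0.0
    f = interpolate.interp1d(far_unique, thresholds[idx], kind='slinear')
    return float(f(far_target))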

arcface+retinaface

When will you release a program combining arcface with retinaface, like the retinaface+facenet project you published earlier?

The loss function does not converge

Using my own dataset, the accuracy in train.py always stays at 0 and the loss does not converge. Why? Do I need to adjust the hyperparameters (s or m)?
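
For reference, this is where s and m enter the ArcFace logits (a generic sketch, not necessarily line-for-line identical to this repo's nets/arcface.py). When training diverges on a small or noisy dataset, common practice is to lower s (e.g. 64 to 32) or m (e.g. 0.5 to 0.2), or to train a few epochs without the margin first.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcMarginHead(nn.Module):
    # Minimal ArcFace margin head: s scales the logits, m is the additive
    # angular margin applied only to the target class.
    def __init__(self, embedding_size, num_classes, s=64.0, m=0.5):
        super().__init__()
        self.s, self.m = s, m
        self.weight = nn.Parameter(torch.randn(num_classes, embedding_size))

    def forward(self, embedding, label):
        # cosine of the angle between L2-normalized embeddings and class weights
        cosine  = F.linear(F.normalize(embedding), F.normalize(self.weight))
        theta   = torch.acos(cosine.clamp(-1.0 + 1e-7, 1.0 - 1e-7))
        one_hot = F.one_hot(label, cosine.size(1)).float()
        # add the margin m to the target-class angle only, then rescale by s
        return torch.cos(theta + self.m * one_hot) * self.s  # feed to CrossEntropyLoss

head   = ArcMarginHead(embedding_size=128, num_classes=1000)
logits = head(torch.randn(4, 128), torch.randint(0, 1000, (4,)))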

Error during LFW dataset validation

Start validation on the LFW dataset.
Traceback (most recent call last):
  File "train.py", line 337, in
    fit_one_epoch(model_train, model, loss_history, optimizer, epoch, epoch_step, epoch_step_val, gen, gen_val, Epoch, Cuda, LFW_loader, lfw_eval_flag, fp16, scaler, save_period, save_dir, local_rank)
  File "/root/autodl-tmp/arcface-pytorch-main/utils/utils_fit.py", line 114, in fit_one_epoch
    _, _, accuracy, _, _, _, _ = evaluate(distances,labels)
  File "/root/autodl-tmp/arcface-pytorch-main/utils/utils_metrics.py", line 13, in evaluate
    val, val_std, far = calculate_val(thresholds, distances,
  File "/root/autodl-tmp/arcface-pytorch-main/utils/utils_metrics.py", line 72, in calculate_val
    f = interpolate.interp1d(far_train, thresholds, kind='slinear')
  File "/root/miniconda3/lib/python3.8/site-packages/scipy/interpolate/_interpolate.py", line 571, in __init__
    self._spline = make_interp_spline(xx, yy, k=order,
  File "/root/miniconda3/lib/python3.8/site-packages/scipy/interpolate/_bsplines.py", line 1252, in make_interp_spline
    raise ValueError("Expect x to not have duplicates")
ValueError: Expect x to not have duplicates

Best_thresholds

Why can Best_thresholds exceed 1 during evaluation?

Question about adding an fc layer

Hello, I'd like to study your arcface code and use it in a project, but the project needs the model to output label scores, which means adding an fc layer to the model. How should I add it? I'm new to this field and a bit at a loss.
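
One straightforward option, sketched below, is to wrap the embedding backbone with a plain nn.Linear that maps the embedding to per-class scores. The placeholder backbone, embedding size, and class count are assumptions for illustration, not the repo's actual modules.

import torch
import torch.nn as nn

class ArcfaceWithScores(nn.Module):
    # Wrap an embedding backbone with an fc layer so the model outputs
    # per-class scores; "backbone" is assumed to return (N, embedding_size).
    def __init__(self, backbone, embedding_size=128, num_classes=10):
        super().__init__()
        self.backbone   = backbone
        self.classifier = nn.Linear(embedding_size, num_classes)

    def forward(self, x):
        embedding = self.backbone(x)          # (N, embedding_size)
        return self.classifier(embedding)     # (N, num_classes) logits

# Usage with a placeholder backbone (replace with the repo's backbone):
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 128))
model = ArcfaceWithScores(backbone, embedding_size=128, num_classes=10)
print(model(torch.randn(2, 3, 112, 112)).shape)   # torch.Size([2, 10])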

Have you tried converting the model to ncnn?

Have you tried converting arcface_mobilefacenet.pt to ncnn?
I tried converting to onnx and then to ncnn, but the ncnn outputs are all 1 or -1, while the pt and onnx results are both normal and consistent.

predict question

In predict, can it only compute the similarity between two faces? Can it not predict whether an image contains a face?

CASIA-WebFaces dataset

The CASIA-WebFaces dataset I downloaded is 96×112, not 112×112. Is this something on my end, or is the dataset itself 96×112?

Adding new test datasets

The author only uses the LFW test set. If I want to add the AgeDB30 and CFP-FP test sets, should they be added independently? How should the callback be modified?

About line 34 of net/arcface.py

When running on a rented server I hit the error "RuntimeError: expected device cuda:0 and dtype Float but got device cuda:0 and dtype Long".
My torch version is 1.2.0; people online say the version is too old, and that it can also be fixed by changing the data type.
Replacing onehot with onehot.float() solved it for me.
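
A minimal illustration of the cast described above (generic, not the repo's exact line 34): on older PyTorch versions, mixing a Long one-hot tensor with Float tensors in the margin computation raises that dtype error, so cast the one-hot to float first.

import torch
import torch.nn.functional as F

label   = torch.tensor([1, 0, 2])
cosine  = torch.rand(3, 4)                  # Float logits
phi     = cosine - 0.5                      # margin-adjusted target logits (illustrative)
one_hot = F.one_hot(label, num_classes=4)   # created as Long
one_hot = one_hot.float()                   # the fix: match the Float tensors
output  = one_hot * phi + (1.0 - one_hot) * cosine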

Test result is lower than expected

Using the lfw data and model you provided, I get the results below, which fall short of the expected accuracy of 99.1.

Accuracy: 0.98800+-0.00722
Best_thresholds: 1.17000
Validation rate: 0.92400+-0.02086 @ FAR=0.00100

Question about image_shape

In this implementation, must image_shape be 112x112x3 during training, or can images with different sizes and aspect ratios be used?

Question about the pretrained model

Hello, sorry to bother you. Why is the accuracy at the start of training on the CASIA-WebFace dataset still quite low (starting from 0) after loading the pretrained model you provided? Since your model was trained on CASIA-WebFace, the accuracy at the start should already be fairly high.

evaluate question

If I want to validate the model with the AgeDB30 dataset instead of the LFW data, how should I modify the program? Thanks!

Errors when running train.py for training (different errors on Windows and Linux)

Thanks for open-sourcing this! I have followed you since graduate school; thank you for contributing the code.
Below are two problems I ran into while running it; if you hit the same issues you can use this as a reference.
On Windows it raises "ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (6000,) + inhomogeneous part" (pytorch 2.0.1). On Linux it raises "RuntimeError: CuDNN error: CUDNN_STATUS_NOT_INITIALIZED" (torch==1.2.0).
From what I found, one is a data-length mismatch and the other a CUDA acceleration problem; on Linux, setting cuda=False and running on the CPU avoids it. I did not bother trying other fixes.
The second problem is with predict.py: if it reports a Qt-related error and cannot open images, installing the pyqt5 package solves it.

Why is the accuracy always 0 during training after I modified the model backbone, for example to mobileNetV4?

Why doesn't it work to simply change the model's last layer to output the embedding dimension? Why does the accuracy stay at 0, and where is the problem? The author's mobileNetV1 model seems to do the same thing, so why doesn't it work here?

import torch
import torch.nn as nn

# build_blocks and MODEL_SPECS come from the MobileNetV4 implementation being used.

class MobileNetV4(nn.Module):
    def __init__(self, model, embedding_size=128, dropout_keep_prob=0.5, pretrained=False):
        # MobileNetV4ConvSmall MobileNetV4ConvMedium MobileNetV4ConvLarge
        # MobileNetV4HybridMedium MobileNetV4HybridLarge
        """Params to initiate MobileNetV4
        Args:
            model : support 5 types of models as indicated in
            "https://github.com/tensorflow/models/blob/master/official/vision/modeling/backbones/mobilenet.py"
        """
        super().__init__()
        assert model in MODEL_SPECS.keys()
        self.model = model
        self.spec = MODEL_SPECS[self.model]

        # conv0
        self.conv0 = build_blocks(self.spec['conv0'])
        # layer1
        self.layer1 = build_blocks(self.spec['layer1'])
        # layer2
        self.layer2 = build_blocks(self.spec['layer2'])
        # layer3
        self.layer3 = build_blocks(self.spec['layer3'])
        # layer4
        self.layer4 = build_blocks(self.spec['layer4'])
        # layer5
        self.layer5 = build_blocks(self.spec['layer5'])
        self.layer6 = nn.Conv2d(1280, 512, kernel_size=1)
        # embedding
        self.layer7 = nn.Linear(512 * 49, embedding_size)
        # self.layer6 = nn.Conv2d(1280, embedding_size, 1)
        self.features = nn.BatchNorm1d(embedding_size, eps=1e-05)
        self.dropout = nn.Dropout(p=dropout_keep_prob, inplace=True)

    def forward(self, x):
        x0 = self.conv0(x)
        x1 = self.layer1(x0)
        x2 = self.layer2(x1)
        x3 = self.layer3(x2)
        x4 = self.layer4(x3)
        x5 = self.layer5(x4)
        x6 = self.layer6(x5)
        x7 = torch.flatten(x6, 1)
        # or x5 = x5.view(x5.size(0), -1)
        x8 = self.layer7(x7)
        # x6 = self.features(x6)
        print(x8.shape)
        return x8

Validation rate is low

Accuracy: 0.90542+-0.00699
Best_thresholds: 1.29000
Validation rate: 0.65617+-0.01883 @ FAR=0.00117
Could you help me figure out how to increase the validation rate? And what does FAR mean?

About the feature vectors extracted by ArcFace

Hello!
In predict mode, when two images are fed in, arcface.py runs:

output1 = self.net(photo_1).cpu().numpy()
output2 = self.net(photo_2).cpu().numpy()

Are output1 and output2 the feature vectors the model extracts from the images?

Dataset

When I download the LFW dataset from the original source, the images have shape (250, 250, 3) and evaluation gives about 0.77. When I use your LFW dataset from Baidu, the image shape is (96, 112, 3) and the result matches the table. Reading your code, the data loader resizes images to (112, 112, 3), so why do the original images go wrong like that? Thank you.
