
deepface's Introduction

Hi 👋


  • 🔭 I’m currently working at ByteDance AI Lab
  • 🌱 I’m currently learning about computer vision & deep learning applications
  • 📫 How to reach me: ...

deepface's People

Contributors

riweichen


deepface's Issues

Why is there no sharing weights in the prototxt?

Hi, I have also been working on a DeepID implementation recently, and I have searched many face verification and DeepID repositories on GitHub. None of them shares weights in their model configurations, even though weight sharing is clearly used in the 3rd and 4th convolutional layers of the model in Yi Sun's Deep Learning Face Representation from Predicting 10,000 Classes (http://mmlab.ie.cuhk.edu.hk/pdf/YiSun_CVPR14.pdf). I don't know why this is. Has everyone ignored this characteristic by convention? Looking forward to your response!

About the ROC curve

Hi, I trained your FaceRecognition model, but after 350,000 iterations I cannot reproduce the curve you show in the result. What might be causing this?

Add new person

Dear Riwei Chen,
May I ask what happens when I want to add a new person to the trained model? Do I have to retrain every stage of your net, or can the FaceDetection and FaceAlignment steps be left as they are?
Also, how long would that take? Do you have any suggestions for adding a new person as a new class to the existing model?
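For what it's worth, a common enrollment pattern (a sketch, not part of this repo) avoids retraining entirely: keep the detection, alignment, and feature networks fixed, and add a new person by storing their extracted DeepID feature vectors in a gallery matched by cosine similarity. The feature vectors below are placeholders for what the trained net would produce:

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class Gallery:
    """Enroll identities as stored feature vectors; adding a person is just
    an insert, so none of the trained networks needs retraining."""
    def __init__(self):
        self.people = {}  # name -> list of enrolled feature vectors

    def enroll(self, name, feature):
        self.people.setdefault(name, []).append(np.asarray(feature, dtype=float))

    def identify(self, feature, threshold=0.5):
        """Return (best_name, similarity), or (None, similarity) below threshold."""
        best_name, best_sim = None, -1.0
        for name, feats in self.people.items():
            for f in feats:
                s = cosine_sim(feature, f)
                if s > best_sim:
                    best_name, best_sim = name, s
        if best_sim < threshold:
            return None, best_sim
        return best_name, best_sim
```

The threshold and the choice of cosine similarity are illustrative; a verification model trained with a different metric would use that metric instead.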

About preparing the training data

Hi, how should the training data be prepared? What is the exact format of the description file used when converting to LMDB?

no layer named 'LOCAL'

Hi, I used the deploy.prototxt file to do facial point detection, but one conv layer has type 'LOCAL', which is not a layer type in Caffe. So I tried 'type: CONVOLUTION' instead, but then the model you provided does not work. Here is the error message:
Cannot copy param 0 weights from layer 'conv1'; shape mismatch.

I think my Caffe build is different from yours. Could you provide your Caffe as well? Thank you.

Could you help me with this issue? Thanks!

python FaceDetection/baseline/evaluate_48.py
[libprotobuf ERROR google/protobuf/text_format.cc:274] Error parsing text-format caffe.NetParameter: 8:7: Message type "caffe.NetParameter" has no field named "layer".
WARNING: Logging before InitGoogleLogging() is written to STDERR
F0426 13:25:16.098065 2162 upgrade_proto.cpp:631] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: /home/arthur/DeepFace/FaceDetection/baseline/deploy_fc.prototxt
*** Check failure stack trace: ***
Aborted (core dumped)

about CelebA dataset

Hi, do you have the identity annotations for the CelebA dataset, so it can be used for face recognition training?
If so, please share them.
Thanks in advance.

DeepID training image size

Hi RiweiChen,

In the data layer of train_val.prototxt, I see your training images are 64x64, but in [Deep Learning Face Representation from Predicting 10,000 Classes](http://www.ee.cuhk.edu.hk/~xgwang/papers/sunWTcvpr14.pdf) (DeepID) the paper says CUHK used 39 × 31 or 31 × 31:

The input is 39 × 31 × k for rectangle patches, and 31 × 31 × k for square patches, where k = 3 for color patches and k = 1 for gray patches.

May I ask what considerations led you to choose a different input size?

Thanks!

loading model problem

Hi

I tried to load the face alignment model but got the following error:

WARNING: Logging before InitGoogleLogging() is written to STDERR
W1225 16:07:29.089817 26911 _caffe.cpp:122] DEPRECATION WARNING - deprecated use of Python interface
W1225 16:07:29.089855 26911 _caffe.cpp:123] Use this instead (with the named "weights" parameter):
W1225 16:07:29.089861 26911 _caffe.cpp:125] Net('../try1_2/depoly.prototxt', 1, weights='../Model/try1_2/snapshot_iter_1500000.caffemodel')
[libprotobuf ERROR google/protobuf/text_format.cc:245] Error parsing text-format caffe.NetParameter: 10:3: Unknown enumeration value of "LOCAL" for field "type".
F1225 16:07:29.091105 26911 upgrade_proto.cpp:88] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: ../try1_2/depoly.prototxt
*** Check failure stack trace: ***
Aborted (core dumped)

Where is the problem?

error

ren@keegan:/home/ren/Code/CaffeFace/DeepFace-master/FaceDetection/baseline$ python evaluate_48.py
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0308 10:43:48.193229 6102 net.cpp:42] Initializing net from parameters:
name: "DeepID_face_detection"
input: "data"
input_dim: 1
input_dim: 3
input_dim: 2400
input_dim: 2400
state {
phase: TEST
}
layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
param {
name: "conv1_w"
lr_mult: 1
decay_mult: 1
}
param {
name: "conv1_b"
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 20
pad: 1
kernel_size: 3
stride: 1
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0
}
}
}
layer {
name: "relu1"
type: "ReLU"
bottom: "conv1"
top: "conv1"
}
layer {
name: "norm1"
type: "LRN"
bottom: "conv1"
top: "norm1"
lrn_param {
local_size: 5
alpha: 0.0001
beta: 0.75
}
}
layer {
name: "pool1"
type: "Pooling"
bottom: "norm1"
top: "pool1"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "conv2"
type: "Convolution"
bottom: "pool1"
top: "conv2"
param {
name: "conv2_w"
lr_mult: 1
decay_mult: 1
}
param {
name: "conv2_b"
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 40
pad: 1
kernel_size: 3
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0.1
}
}
}
layer {
name: "relu2"
type: "ReLU"
bottom: "conv2"
top: "conv2"
}
layer {
name: "norm2"
type: "LRN"
bottom: "conv2"
top: "norm2"
lrn_param {
local_size: 5
alpha: 0.0001
beta: 0.75
}
}
layer {
name: "pool2"
type: "Pooling"
bottom: "norm2"
top: "pool2"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "conv3"
type: "Convolution"
bottom: "pool2"
top: "conv3"
param {
name: "conv3_w"
lr_mult: 1
decay_mult: 1
}
param {
name: "conv3_b"
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 60
pad: 1
kernel_size: 3
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0.1
}
}
}
layer {
name: "relu3"
type: "ReLU"
bottom: "conv3"
top: "conv3"
}
layer {
name: "norm3"
type: "LRN"
bottom: "conv3"
top: "norm3"
lrn_param {
local_size: 5
alpha: 0.0001
beta: 0.75
}
}
layer {
name: "pool3"
type: "Pooling"
bottom: "norm3"
top: "pool3"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "conv4"
type: "Convolution"
bottom: "pool3"
top: "conv4"
param {
name: "conv4_w"
lr_mult: 1
decay_mult: 1
}
param {
name: "conv4_b"
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 80
pad: 1
kernel_size: 3
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0.1
}
}
}
layer {
name: "relu4"
type: "ReLU"
bottom: "conv4"
top: "conv4"
}
layer {
name: "norm4"
type: "LRN"
bottom: "conv4"
top: "norm4"
lrn_param {
local_size: 5
alpha: 0.0001
beta: 0.75
}
}
layer {
name: "pool4"
type: "Pooling"
bottom: "norm4"
top: "pool4"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "deepid-conv"
type: "Convolution"
bottom: "pool4"
top: "deepid-conv"
convolution_param {
num_output: 160
kernel_size: 1
}
}
layer {
name: "relu6"
type: "ReLU"
bottom: "deepid-conv"
top: "deepid-conv"
}
layer {
name: "fc7-conv"
type: "Convolution"
bottom: "deepid-conv"
top: "fc7-conv"
convolution_param {
num_output: 2
kernel_size: 1
}
}
layer {
name: "prob"
type: "Softmax"
bottom: "fc7-conv"
top: "prob"
}
I0308 10:43:48.193851 6102 net.cpp:370] Input 0 -> data
I0308 10:43:48.202942 6102 layer_factory.hpp:74] Creating layer conv1
I0308 10:43:48.203001 6102 net.cpp:90] Creating Layer conv1
I0308 10:43:48.203017 6102 net.cpp:410] conv1 <- data
I0308 10:43:48.203035 6102 net.cpp:368] conv1 -> conv1
I0308 10:43:48.203055 6102 net.cpp:120] Setting up conv1
I0308 10:43:50.349145 6102 net.cpp:127] Top shape: 1 20 2400 2400 (115200000)
I0308 10:43:50.349195 6102 layer_factory.hpp:74] Creating layer relu1
I0308 10:43:50.349208 6102 net.cpp:90] Creating Layer relu1
I0308 10:43:50.349215 6102 net.cpp:410] relu1 <- conv1
I0308 10:43:50.349221 6102 net.cpp:357] relu1 -> conv1 (in-place)
I0308 10:43:50.349231 6102 net.cpp:120] Setting up relu1
I0308 10:43:50.349428 6102 net.cpp:127] Top shape: 1 20 2400 2400 (115200000)
I0308 10:43:50.349447 6102 layer_factory.hpp:74] Creating layer norm1
I0308 10:43:50.349458 6102 net.cpp:90] Creating Layer norm1
I0308 10:43:50.349463 6102 net.cpp:410] norm1 <- conv1
I0308 10:43:50.349469 6102 net.cpp:368] norm1 -> norm1
I0308 10:43:50.349478 6102 net.cpp:120] Setting up norm1
I0308 10:43:50.349488 6102 net.cpp:127] Top shape: 1 20 2400 2400 (115200000)
I0308 10:43:50.349493 6102 layer_factory.hpp:74] Creating layer pool1
I0308 10:43:50.349501 6102 net.cpp:90] Creating Layer pool1
I0308 10:43:50.349506 6102 net.cpp:410] pool1 <- norm1
I0308 10:43:50.349512 6102 net.cpp:368] pool1 -> pool1
I0308 10:43:50.349519 6102 net.cpp:120] Setting up pool1
I0308 10:43:50.365315 6102 net.cpp:127] Top shape: 1 20 1200 1200 (28800000)
I0308 10:43:50.365340 6102 layer_factory.hpp:74] Creating layer conv2
I0308 10:43:50.365358 6102 net.cpp:90] Creating Layer conv2
I0308 10:43:50.365365 6102 net.cpp:410] conv2 <- pool1
I0308 10:43:50.365375 6102 net.cpp:368] conv2 -> conv2
I0308 10:43:50.365389 6102 net.cpp:120] Setting up conv2
I0308 10:43:50.367513 6102 net.cpp:127] Top shape: 1 40 1200 1200 (57600000)
I0308 10:43:50.367563 6102 layer_factory.hpp:74] Creating layer relu2
I0308 10:43:50.367576 6102 net.cpp:90] Creating Layer relu2
I0308 10:43:50.367583 6102 net.cpp:410] relu2 <- conv2
I0308 10:43:50.367588 6102 net.cpp:357] relu2 -> conv2 (in-place)
I0308 10:43:50.367596 6102 net.cpp:120] Setting up relu2
I0308 10:43:50.367911 6102 net.cpp:127] Top shape: 1 40 1200 1200 (57600000)
I0308 10:43:50.367933 6102 layer_factory.hpp:74] Creating layer norm2
I0308 10:43:50.367943 6102 net.cpp:90] Creating Layer norm2
I0308 10:43:50.367949 6102 net.cpp:410] norm2 <- conv2
I0308 10:43:50.367955 6102 net.cpp:368] norm2 -> norm2
I0308 10:43:50.367964 6102 net.cpp:120] Setting up norm2
I0308 10:43:50.367975 6102 net.cpp:127] Top shape: 1 40 1200 1200 (57600000)
I0308 10:43:50.367980 6102 layer_factory.hpp:74] Creating layer pool2
I0308 10:43:50.367987 6102 net.cpp:90] Creating Layer pool2
I0308 10:43:50.367992 6102 net.cpp:410] pool2 <- norm2
I0308 10:43:50.367998 6102 net.cpp:368] pool2 -> pool2
I0308 10:43:50.368005 6102 net.cpp:120] Setting up pool2
I0308 10:43:50.368131 6102 net.cpp:127] Top shape: 1 40 600 600 (14400000)
I0308 10:43:50.368140 6102 layer_factory.hpp:74] Creating layer conv3
I0308 10:43:50.368161 6102 net.cpp:90] Creating Layer conv3
I0308 10:43:50.368166 6102 net.cpp:410] conv3 <- pool2
I0308 10:43:50.368173 6102 net.cpp:368] conv3 -> conv3
I0308 10:43:50.368185 6102 net.cpp:120] Setting up conv3
I0308 10:43:50.370653 6102 net.cpp:127] Top shape: 1 60 600 600 (21600000)
I0308 10:43:50.370688 6102 layer_factory.hpp:74] Creating layer relu3
I0308 10:43:50.370707 6102 net.cpp:90] Creating Layer relu3
I0308 10:43:50.370717 6102 net.cpp:410] relu3 <- conv3
I0308 10:43:50.370728 6102 net.cpp:357] relu3 -> conv3 (in-place)
I0308 10:43:50.370741 6102 net.cpp:120] Setting up relu3
I0308 10:43:50.371130 6102 net.cpp:127] Top shape: 1 60 600 600 (21600000)
I0308 10:43:50.371153 6102 layer_factory.hpp:74] Creating layer norm3
I0308 10:43:50.371170 6102 net.cpp:90] Creating Layer norm3
I0308 10:43:50.371179 6102 net.cpp:410] norm3 <- conv3
I0308 10:43:50.371191 6102 net.cpp:368] norm3 -> norm3
I0308 10:43:50.371207 6102 net.cpp:120] Setting up norm3
I0308 10:43:50.371232 6102 net.cpp:127] Top shape: 1 60 600 600 (21600000)
I0308 10:43:50.371242 6102 layer_factory.hpp:74] Creating layer pool3
I0308 10:43:50.371254 6102 net.cpp:90] Creating Layer pool3
I0308 10:43:50.371263 6102 net.cpp:410] pool3 <- norm3
I0308 10:43:50.371274 6102 net.cpp:368] pool3 -> pool3
I0308 10:43:50.371286 6102 net.cpp:120] Setting up pool3
I0308 10:43:50.371572 6102 net.cpp:127] Top shape: 1 60 300 300 (5400000)
I0308 10:43:50.371592 6102 layer_factory.hpp:74] Creating layer conv4
I0308 10:43:50.371608 6102 net.cpp:90] Creating Layer conv4
I0308 10:43:50.371618 6102 net.cpp:410] conv4 <- pool3
I0308 10:43:50.371631 6102 net.cpp:368] conv4 -> conv4
I0308 10:43:50.371646 6102 net.cpp:120] Setting up conv4
I0308 10:43:50.374976 6102 net.cpp:127] Top shape: 1 80 300 300 (7200000)
I0308 10:43:50.375010 6102 layer_factory.hpp:74] Creating layer relu4
I0308 10:43:50.375026 6102 net.cpp:90] Creating Layer relu4
I0308 10:43:50.375036 6102 net.cpp:410] relu4 <- conv4
I0308 10:43:50.375048 6102 net.cpp:357] relu4 -> conv4 (in-place)
I0308 10:43:50.375061 6102 net.cpp:120] Setting up relu4
I0308 10:43:50.375435 6102 net.cpp:127] Top shape: 1 80 300 300 (7200000)
I0308 10:43:50.375457 6102 layer_factory.hpp:74] Creating layer norm4
I0308 10:43:50.375471 6102 net.cpp:90] Creating Layer norm4
I0308 10:43:50.375480 6102 net.cpp:410] norm4 <- conv4
I0308 10:43:50.375495 6102 net.cpp:368] norm4 -> norm4
I0308 10:43:50.375509 6102 net.cpp:120] Setting up norm4
I0308 10:43:50.375522 6102 net.cpp:127] Top shape: 1 80 300 300 (7200000)
I0308 10:43:50.375532 6102 layer_factory.hpp:74] Creating layer pool4
I0308 10:43:50.375545 6102 net.cpp:90] Creating Layer pool4
I0308 10:43:50.375553 6102 net.cpp:410] pool4 <- norm4
I0308 10:43:50.375566 6102 net.cpp:368] pool4 -> pool4
I0308 10:43:50.375577 6102 net.cpp:120] Setting up pool4
I0308 10:43:50.375849 6102 net.cpp:127] Top shape: 1 80 150 150 (1800000)
I0308 10:43:50.375867 6102 layer_factory.hpp:74] Creating layer deepid-conv
I0308 10:43:50.375885 6102 net.cpp:90] Creating Layer deepid-conv
I0308 10:43:50.375895 6102 net.cpp:410] deepid-conv <- pool4
I0308 10:43:50.375907 6102 net.cpp:368] deepid-conv -> deepid-conv
I0308 10:43:50.375921 6102 net.cpp:120] Setting up deepid-conv
I0308 10:43:50.377126 6102 net.cpp:127] Top shape: 1 160 150 150 (3600000)
I0308 10:43:50.377161 6102 layer_factory.hpp:74] Creating layer relu6
I0308 10:43:50.377177 6102 net.cpp:90] Creating Layer relu6
I0308 10:43:50.377187 6102 net.cpp:410] relu6 <- deepid-conv
I0308 10:43:50.377199 6102 net.cpp:357] relu6 -> deepid-conv (in-place)
I0308 10:43:50.377212 6102 net.cpp:120] Setting up relu6
I0308 10:43:50.377583 6102 net.cpp:127] Top shape: 1 160 150 150 (3600000)
I0308 10:43:50.377604 6102 layer_factory.hpp:74] Creating layer fc7-conv
I0308 10:43:50.377619 6102 net.cpp:90] Creating Layer fc7-conv
I0308 10:43:50.377629 6102 net.cpp:410] fc7-conv <- deepid-conv
I0308 10:43:50.377640 6102 net.cpp:368] fc7-conv -> fc7-conv
I0308 10:43:50.377653 6102 net.cpp:120] Setting up fc7-conv
I0308 10:43:50.378639 6102 net.cpp:127] Top shape: 1 2 150 150 (45000)
I0308 10:43:50.378669 6102 layer_factory.hpp:74] Creating layer prob
I0308 10:43:50.378684 6102 net.cpp:90] Creating Layer prob
I0308 10:43:50.378696 6102 net.cpp:410] prob <- fc7-conv
I0308 10:43:50.378708 6102 net.cpp:368] prob -> prob
I0308 10:43:50.378720 6102 net.cpp:120] Setting up prob
I0308 10:43:50.379678 6102 net.cpp:127] Top shape: 1 2 150 150 (45000)
I0308 10:43:50.379701 6102 net.cpp:194] prob does not need backward computation.
I0308 10:43:50.379712 6102 net.cpp:194] fc7-conv does not need backward computation.
I0308 10:43:50.379721 6102 net.cpp:194] relu6 does not need backward computation.
I0308 10:43:50.379731 6102 net.cpp:194] deepid-conv does not need backward computation.
I0308 10:43:50.379740 6102 net.cpp:194] pool4 does not need backward computation.
I0308 10:43:50.379750 6102 net.cpp:194] norm4 does not need backward computation.
I0308 10:43:50.379758 6102 net.cpp:194] relu4 does not need backward computation.
I0308 10:43:50.379766 6102 net.cpp:194] conv4 does not need backward computation.
I0308 10:43:50.379777 6102 net.cpp:194] pool3 does not need backward computation.
I0308 10:43:50.379786 6102 net.cpp:194] norm3 does not need backward computation.
I0308 10:43:50.379796 6102 net.cpp:194] relu3 does not need backward computation.
I0308 10:43:50.379804 6102 net.cpp:194] conv3 does not need backward computation.
I0308 10:43:50.379815 6102 net.cpp:194] pool2 does not need backward computation.
I0308 10:43:50.379824 6102 net.cpp:194] norm2 does not need backward computation.
I0308 10:43:50.379833 6102 net.cpp:194] relu2 does not need backward computation.
I0308 10:43:50.379842 6102 net.cpp:194] conv2 does not need backward computation.
I0308 10:43:50.379853 6102 net.cpp:194] pool1 does not need backward computation.
I0308 10:43:50.379861 6102 net.cpp:194] norm1 does not need backward computation.
I0308 10:43:50.379869 6102 net.cpp:194] relu1 does not need backward computation.
I0308 10:43:50.379878 6102 net.cpp:194] conv1 does not need backward computation.
I0308 10:43:50.379886 6102 net.cpp:235] This network produces output prob
I0308 10:43:50.379911 6102 net.cpp:482] Collecting Learning Rate and Weight Decay.
I0308 10:43:50.379925 6102 net.cpp:247] Network initialization done.
I0308 10:43:50.379933 6102 net.cpp:248] Memory required for data: 2649960000
E0308 10:43:50.442247 6102 upgrade_proto.cpp:618] Attempting to upgrade input file specified using deprecated V1LayerParameter: /home/ren/Code/CaffeFace/DeepFace-master/FaceDetection/try1_4/deploy.prototxt
E0308 10:43:50.442294 6102 upgrade_proto.cpp:636] Input NetParameter to be upgraded already specifies 'layer' fields; these will be ignored for the upgrade.
E0308 10:43:50.442327 6102 upgrade_proto.cpp:623] Warning: had one or more problems upgrading V1LayerParameter (see above); continuing anyway.
I0308 10:43:50.442374 6102 net.cpp:42] Initializing net from parameters:
name: "DeepID_face_detection"
input: "data"
input_dim: 1
input_dim: 1
input_dim: 32
input_dim: 32
state {
phase: TEST
}
layer {
name: "relu3"
type: "ReLU"
bottom: "pool3"
top: "pool3"
}
layer {
name: "relu4"
type: "ReLU"
bottom: "conv4"
top: "conv4"
}
layer {
name: "relu5"
type: "ReLU"
bottom: "conv5"
top: "conv5"
}
F0308 10:43:50.442462 6102 insert_splits.cpp:35] Unknown blob input pool3 to layer 0
*** Check failure stack trace: ***
[1] 6102 abort (core dumped) python evaluate_48.py

I want to change to C++

Your code:

out = net.forward_all(data=np.asarray([transformer.preprocess('data', scale_img)]))

and pycaffe's:

def _Net_forward_all(self, blobs=None, **kwargs):
    all_outs = {out: [] for out in set(self.outputs + (blobs or []))}
    for batch in self._batch(kwargs):
        outs = self.forward(blobs=blobs, **batch)

I cannot understand this.

My question: how do I port this to C++ code, e.g. with cnn->blob_by_name(layer)? Which layer holds the output response map?

Difference in deploy.prototxt and train_val.prototxt in FaceRecognition

I am seeing a difference between deploy.prototxt and train_val.prototxt in FaceRecognition:
the ReLU and InnerProduct layers are missing from deploy.prototxt. What is the reason for this?

The following layers are present in train_val.prototxt but not in deploy.prototxt:

layer {
  name: "relu6_1"
  type: "ReLU"
  bottom: "deepid_1"
  top: "deepid_1"
}

layer {
  name: "fc8_1"
  type:  "InnerProduct"
  bottom: "deepid_1"
  top: "fc8_1"
  param {
    name: "fc8_w"
    lr_mult: 1
    decay_mult: 1
  }
  param {
    name: "fc8_b"
    lr_mult: 2
    decay_mult: 0
  }
  inner_product_param {
    num_output: 10575
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}

Improve documentation

Since all comments in the Python scripts are written in Chinese, I wonder if the docs for this tool could be improved. How do you start it? What parameters and options are available?

a question about hdf5

I am having a very strange problem with hdf5:
it returns an "IOError: unable to create file (File accessibility: unable to open file)". Here is the offending line of code:
h5py.File('/path/to/file', 'w')
I think the file path is right, and I am not trying to create a file that already exists. Do you have any idea what causes this?
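For what it's worth, that h5py error is most often caused by a missing or unwritable parent directory rather than a bad file name. A small guard, sketched here (`prepare_h5_path` is a hypothetical helper, not part of this repo):

```python
import os

def prepare_h5_path(path):
    """Validate/create the parent directory before h5py.File(path, 'w');
    a missing or unwritable parent is the usual cause of
    'unable to create file (unable to open file)'."""
    parent = os.path.dirname(path) or "."
    if not os.path.isdir(parent):
        os.makedirs(parent)  # create intermediate directories as needed
    if not os.access(parent, os.W_OK):
        raise IOError("parent directory not writable: %s" % parent)
    return path

# usage, with the h5py call from the question unchanged:
#   import h5py
#   f = h5py.File(prepare_h5_path('/path/to/file'), 'w')
```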

LMDB for custom image dataset for Face recognition using Caffe

Hi, I'm new to Caffe. I have done face detection with Caffe, and now I'm trying to perform face recognition on a custom image dataset. The problem is I don't know where to start. All I have are face images of several people, and I know they need to be converted to LMDB. For face detection we just supply faces (without caring whose they are), but I don't know how to handle several identities. Please help me.
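One common way to prepare recognition data (a sketch, not code from this repo): put each person's images in a folder named after that person, then generate the `path label` list file that Caffe's `convert_imageset` tool consumes, with one integer label per identity. `write_caffe_list` below is a hypothetical helper:

```python
import os

def write_caffe_list(root_dir, out_file):
    """Write `relative/path.jpg label` lines for convert_imageset, assuming a
    root_dir/<person_name>/<image files> layout; one integer label per person."""
    people = sorted(d for d in os.listdir(root_dir)
                    if os.path.isdir(os.path.join(root_dir, d)))
    label_of = {name: i for i, name in enumerate(people)}  # person -> class id
    with open(out_file, "w") as f:
        for name in people:
            for img in sorted(os.listdir(os.path.join(root_dir, name))):
                f.write("%s/%s %d\n" % (name, img, label_of[name]))
    return label_of

# afterwards, roughly:
#   convert_imageset --shuffle <root_dir>/ train.list train_lmdb
```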

No module named deepface - FACESMILE

Hi,
I tried to run the FaceSmile demo from DeepFace:
1. I changed the file paths and image names in demo.py.
2. Then I ran the demo with
python demo.py
It shows this error:
File "demo.py", line 8, in
from deepface import face_verify
ImportError: No module named deepface

How do I install the deepface module?

Thanks,
Regards
CIBIN
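`deepface` here is a package folder inside the repository rather than a pip-installable module, so the usual fix is to put the directory that contains that folder on the Python path before importing. A minimal sketch (the exact repo layout is an assumption; `make_importable` is a hypothetical helper):

```python
import os
import sys

def make_importable(package_parent):
    """Prepend the directory that CONTAINS the `deepface` package folder to
    sys.path so `from deepface import face_verify` can resolve.
    Equivalent to `export PYTHONPATH=<package_parent>` in the shell."""
    package_parent = os.path.abspath(package_parent)
    if package_parent not in sys.path:
        sys.path.insert(0, package_parent)
    return package_parent

# e.g. at the top of demo.py, if demo.py sits next to the deepface/ folder:
#   make_importable(os.path.dirname(os.path.abspath(__file__)))
```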

train_2.list, val_2.list ?

Hi, I was looking at FaceDetection's train_val.prototxt in try2_1, but I couldn't find these files.

How can I get train_2.list & val_2.list? Thanks.

how to run the FaceDetection sample?

I built the Caffe in DeepFace/resource/caffe-local.
1. When I built Caffe it showed the following error:

src/caffe/layers/window_data_layer.cpp:25:11: error: ‘const int CV_LOAD_IMAGE_COLOR’ redeclared as different kind of symbol
const int CV_LOAD_IMAGE_COLOR = cv::IMREAD_COLOR;

So I commented out that line in src/caffe/layers/window_data_layer.cpp, and Caffe built successfully.

2. Then I changed the Caffe path in FaceDetection/baseline/evaluate_48.py and ran
python evaluate_48.py
It showed the error:
File "evaluate_48.py", line 13, in

import caffe

File "/home/hubino/Documents/DeepFace/resource/caffe-local/python/caffe/__init__.py", line 1, in
from .pycaffe import Net, SGDSolver
File "/home/hubino/Documents/DeepFace/resource/caffe-local/python/caffe/pycaffe.py", line 10, in
from ._caffe import Net, SGDSolver
ImportError: libcudart.so.6.5: cannot open shared object file: No such file or directory

3. Then I deleted the Python Caffe library and rebuilt it (_caffe.so), since I am using CUDA 7.5.

4. Then I ran evaluate_48.py again; it showed the error:

Traceback (most recent call last):
File "evaluate_48.py", line 17, in
caffe.set_device(0)
AttributeError: 'module' object has no attribute 'set_device'

1. How do I solve this error?
2. Could you please update the readme with instructions for building and running the samples?

I also tried to run the train.sh script in the try1_4 folder.
Caffe loads, but it shows this error:

I0810 20:23:37.807682 11183 solver.cpp:67] Creating training net from net file: train_val.prototxt
[libprotobuf ERROR google/protobuf/text_format.cc:274] Error parsing text-format caffe.NetParameter: 3:7: Message type "caffe.NetParameter" has no field named "layer".

How do I solve it?

fully-convolution layer settings

Hi,
why does the transformed model, deploy_fc.prototxt, set the 'deepid-conv' layer to 'num_output: 160, kernel_size: 1'?
As I see it, the kernel size of 'deepid-conv' should be 3: the output of the 'pool4' layer is 80x3x3, so the original 'deepid' layer has 160 × 720 = 115,200 weights, while the 'deepid-conv' layer has only 160 × 80 × 1 × 1 = 12,800, which does not match. Any suggestions about this?
Thanks in advance.
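The arithmetic in the question can be checked directly; when an InnerProduct layer over an 80x3x3 input is converted to a convolution, matching the 115,200 learned weights does indeed require kernel_size 3, not 1:

```python
# Check the weight counts quoted in the question: converting the fully
# connected 'deepid' layer (input pool4: 80 channels of 3x3) into the
# convolutional 'deepid-conv' layer.
def conv_weight_count(num_output, in_channels, kernel_size):
    # weights only (ignoring biases), as in the question
    return num_output * in_channels * kernel_size * kernel_size

fc_weights = 160 * (80 * 3 * 3)          # InnerProduct: 160 x 720 = 115200
as_1x1 = conv_weight_count(160, 80, 1)   # 12800: does NOT match
as_3x3 = conv_weight_count(160, 80, 3)   # 115200: matches the FC layer
```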

Question about the LFW evaluation protocol

Hi, I have recently been measuring accuracy on LFW, but I don't fully understand the protocol even after reading the official site. The test data is split into 10 folds; is the idea that each time 9 folds are used for training and one for validation, cycling through all 10 folds, and the final result is the average accuracy over the 10 runs? Is that right?
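For reference, the standard LFW "View 2" evaluation is roughly as the question describes: 10 folds of image pairs, each held out once while a decision threshold is tuned on the other nine, with the mean of the 10 held-out accuracies reported. A simplified sketch (the official protocol fixes the pair lists; this just shows the fold bookkeeping):

```python
def lfw_ten_fold(scores, labels):
    """LFW View-2 style evaluation sketch: `scores` and `labels` each hold
    10 folds; scores[i][j] is a similarity and labels[i][j] is True when
    the pair shows the same person. For each fold, pick the threshold that
    maximizes accuracy on the other nine folds, test on the held-out fold,
    and report the mean of the 10 held-out accuracies."""
    def acc(pairs, t):
        return sum((s >= t) == y for s, y in pairs) / float(len(pairs))

    fold_accs = []
    for i in range(10):
        train = [p for j in range(10) if j != i
                 for p in zip(scores[j], labels[j])]
        test = list(zip(scores[i], labels[i]))
        # candidate thresholds: the training scores themselves
        best_t = max((s for s, _ in train), key=lambda t: acc(train, t))
        fold_accs.append(acc(test, best_t))
    return sum(fold_accs) / 10.0
```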

about translate contribute

Hello, we are Korean university students. This project is in an area we are interested in, so we would like to contribute by translating the project into Korean. Would that be okay?

question about FaceAlignment's prototxt

Hi, I am looking at your FaceAlignment code and I have a question about its prototxt (I get an error during training). In train_val.prototxt:

layers {
name: "conv2"
type: LOCAL
bottom: "pool1"
top: "conv2"
blobs_lr: 1
blobs_lr: 2
local_param {
num_output: 40
kernel_size: 3
stride: 1
#pad: 2
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
}
}

error: LOCAL & local_param

Caffe does not understand these elements.

What should I do?

Suggestion

Make it recognise the whole body, which should give more accurate results.

Dataset

Dear Riwei Chen,

Why did you use different databases for face detection and face recognition?

Also, you said "For more implementation details, please reference my blog."

Could you put a direct link to the project's post on your blog? Your blog is not in English, and I couldn't find the details of your project there.
