
fas-sgtd's People

Contributors

clks-wzz


fas-sgtd's Issues

Mask is an array full of 0, leading to NaN logits

Thanks for this project!
This project is awesome. However, I have an issue that occurs really often and I cannot figure out how to solve it :/

I am testing the multi_frame part of the repository using your script test.py. When the test starts, everything works fine and the predict function yields features correctly, except for one thing: for almost half of the features the mask is an array full of 0 in the first two dimensions.

This implies that the new depth map (calculated on line 151 of test.py) will be an array full of 0:
depth_map = depth_map[..., 0]*masks[..., 0]

and depth_mean will be a NaN value (because of division by 0):
depth_mean = np.sum(depth_map) / np.sum(masks[..., 0])

And logits as well...
cla_ratio = flags.paras.cla_ratio
depth_ratio = 1 - cla_ratio
logits[0] = depth_ratio * depth_mean + cla_ratio * logits[0]
logits[1] = 1.0 - depth_mean

So I need to figure out why the mask is an array full of 0, and how to fix it to get the output I want (a stop-gap workaround is sketched at the end of this issue).
If anybody could help me, I would be very thankful!

PS: If you need any additional information, let me know; I am 100% available.
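
In case it helps, a possible workaround is to guard the division. This is only a sketch, assuming the same variables as in the test.py loop (depth_map, masks, logits, flags); it skips the depth term when the mask is empty rather than fixing why the mask is all zeros:

import numpy as np

mask_sum = np.sum(masks[..., 0])
if mask_sum > 0:
    depth_map = depth_map[..., 0] * masks[..., 0]
    depth_mean = np.sum(depth_map) / mask_sum
    cla_ratio = flags.paras.cla_ratio
    depth_ratio = 1 - cla_ratio
    logits[0] = depth_ratio * depth_mean + cla_ratio * logits[0]
    logits[1] = 1.0 - depth_mean
# else: keep the raw classification logits for this frame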

Dataset download

Hello, can you provide a download link for the DMAD dataset? I want to train the model. Thank you!

download model

Hello!
Unfortunately, I can't download your model from Baidu. Could you upload your model to Dropbox?
Thank you very much!

Inconsistency with paper w.r.t spatial gradient & 1x1 convolution

Hi,

In your paper (https://arxiv.org/abs/2003.08061) you mention in formula (2) that your spatial gradient (Sobel) filters are applied to the output of the 1x1 convolution in the skip connection. However, in your implementation you apply the 1x1 convolution after applying the Sobel filters to the res block input:

gradient_gabor_pw = slim.conv2d(gradient_gabor, out_dim, [1,1],stride=[1,1],activation_fn=None,scope=name+'/rgc_pw_gabor',padding='SAME')

Could you comment on which implementation is correct please?

Thanks!
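
For concreteness, this is how I read the two orderings, written with plain TF 1.x ops instead of the repo's slim wrappers (the Sobel kernel and out_dim below are illustrative, not taken from the repo):

import numpy as np
import tensorflow as tf

def sobel_depthwise(x):
    # fixed horizontal Sobel kernel applied per channel (the repo also uses the vertical one)
    channels = x.get_shape().as_list()[-1]
    k = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], np.float32)
    k = np.tile(k[:, :, None, None], [1, 1, channels, 1])  # shape [3, 3, C, 1]
    return tf.nn.depthwise_conv2d(x, k, strides=[1, 1, 1, 1], padding='SAME')

x = tf.placeholder(tf.float32, [None, 32, 32, 64])
out_dim = 128

# Paper, Eq. (2): 1x1 convolution first, then the spatial gradient of its output
paper_branch = sobel_depthwise(tf.layers.conv2d(x, out_dim, 1, use_bias=False))

# Code (rgc_pw_gabor): spatial gradient of the block input first, then the 1x1 convolution
code_branch = tf.layers.conv2d(sobel_depthwise(x), out_dim, 1, use_bias=False)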

the single_frame model on my own data

Hi, thanks for releasing the code.
I have run the single-frame code on my own face anti-spoofing data.
In my opinion, the key point of the single-frame method is the "Depthwise Spatial Gradient Magnitude", but in my experiment it does not play a good role on my dataset.
How does it behave on your dataset?
One more question: was the Depthwise Spatial Gradient Magnitude better than a plain "shortcut" in your experiments?

Looking forward to your reply!
Thanks

No module named 'tensorflow.contrib'

I tried running your model with TensorFlow 2.0.0 and encountered this error. Is this a version issue? What should I do to resolve it if I want to use the latest TensorFlow version?
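
tf.contrib was removed in TensorFlow 2.x, so the error is indeed a version issue. Two possible workarounds (neither verified against this repo, which may also rely on other TF 1.x behaviour such as the Estimator/session APIs):

# Option 1: run the code in a TensorFlow 1.x environment, which still ships tf.contrib
#   pip install tensorflow-gpu==1.15
import tensorflow as tf
from tensorflow.contrib import slim  # only importable under TF 1.x

# Option 2: stay on TF 2.x and use the standalone slim package, then adapt the imports
#   pip install tf-slim
# import tf_slim as slim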

Is the dataset DMAD publicly available?

In the paper "Deep Spatial Gradient and Temporal Depth Learning for Face Anti-spoofing", it is said that a double-modal dataset (DMAD) is collected. Is this publicly available?

convert ckpt to pb

Thanks for your code. How can I convert the ckpt checkpoint to a pb file based on test.py?
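
Not an official answer, but the usual TF 1.x freezing recipe looks roughly like this (a sketch only; the checkpoint path and the output node name 'logits' below are guesses and must be replaced with the real ones from the graph built in test.py):

import tensorflow as tf
from tensorflow.python.framework import graph_util

ckpt_path = 'model.ckpt-10000'   # hypothetical checkpoint prefix
output_nodes = ['logits']        # guess; must match the real output op name(s)

with tf.Session() as sess:
    # rebuild the graph from the checkpoint's meta file and restore the weights
    saver = tf.train.import_meta_graph(ckpt_path + '.meta')
    saver.restore(sess, ckpt_path)
    # bake the variables into constants and serialize the frozen graph
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), output_nodes)
    with tf.gfile.GFile('model.pb', 'wb') as f:
        f.write(frozen.SerializeToString())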

Conv2_cd block with 3x3 instead of 1x1 convolution

Hi,

As I understand the paper that introduces the central difference convolution (https://arxiv.org/pdf/2003.04092.pdf) and its reference implementation (https://github.com/ZitongYu/CDCN/blob/master/CVPR2020_paper_codes/models/CDCNs.py#L58), the kernel of the original 3x3 convolution is summed up spatially and this sum is applied to the input via a 1x1 convolution, whose output is then subtracted from the output of the original 3x3 convolution.

However, your implementation here (https://github.com/clks-wzz/FAS-SGTD/blob/master/fas_sgtd_single_frame/util/util_network.py#L118) seems to suggest that you tile the sum to a 3x3 kernel, which would mean that you multiply it with the input using a receptive field of 3x3 instead of 1x1!

Can you comment on whether my understanding is correct and if so, what the reasoning behind this change is compared to the original approach?
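
To make the question concrete, here is how I read the two variants, written as plain TF ops (w is an illustrative [3, 3, C_in, C_out] kernel and theta is the CDCN hyper-parameter; this is my reading, not code taken from either repo):

import tensorflow as tf

def conv_cd_reference(x, w, theta=0.7):
    # CDCN reference: vanilla 3x3 conv minus a 1x1 conv whose kernel is the spatial sum of w
    out_normal = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')
    w_sum_1x1 = tf.reduce_sum(w, axis=[0, 1], keepdims=True)  # [1, 1, C_in, C_out]
    out_diff = tf.nn.conv2d(x, w_sum_1x1, strides=[1, 1, 1, 1], padding='SAME')
    return out_normal - theta * out_diff

def conv_cd_as_in_fas_sgtd(x, w, theta=0.7):
    # FAS-SGTD as I read util_network.py#L118: the spatial sum is tiled back to 3x3,
    # so the subtracted term sees a 3x3 receptive field instead of 1x1
    out_normal = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')
    w_sum_3x3 = tf.tile(tf.reduce_sum(w, axis=[0, 1], keepdims=True), [3, 3, 1, 1])
    out_diff = tf.nn.conv2d(x, w_sum_3x3, strides=[1, 1, 1, 1], padding='SAME')
    return out_normal - theta * out_diff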

the shape of input

Hi, I want to train the model on my own dataset, but I get an error:
"Input to reshape is a tensor with 3072 values, but the requested shape has 1024"
Can you tell me the shape of your dataset's input and label? Thank you!
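
For reference, 3072 = 32 × 32 × 3 while the requested 1024 = 32 × 32, so the failing reshape seems to expect a single-channel 32×32 map (e.g. the depth label) but is being fed a 3-channel one. This is only a guess from the numbers, not from the code.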

NaN validation loss while training the phase 1

Thanks for sharing your ideas and source code. It is interesting.

I re-implemented your model in TensorFlow 2. While training, the validation loss is NaN. Did you encounter this problem? If so, how did you solve it?

Thanks again.

How to pass the testing samples to the model?

Hi there

In your paper, you write that "...For the final classification score, we feed the sequential frames into the network and obtain depth maps and the living logits".
However, I can't find that part of the code in the file generate_data_test.py. Everything this file does is almost the same as for training: we need to pass a scene image (a video frame) and its depth map even when testing, which seems unreasonable.

Have I misunderstood somewhere? Could you please point this detail out for me...

Thank you very much in advance!

How to evaluate on new dataset?

Hi,
How can I test on a new dataset?
Should the files be in video format, or should I just provide video frames as images?
Thanks in advance

Problems when testing.

When I run test.py and change isOnline to False, the following error occurs:

File "/media/saeed/New Volume/PAD/Codes/FAS-SGTD/fas_sgtd_multi_frame/test.py", line 284, in
offline_eval()
File "/media/saeed/New Volume/PAD/Codes/FAS-SGTD/fas_sgtd_multi_frame/test.py", line 275, in offline_eval
officialEval(os.path.join(flags.path.model, 'model.ckpt-%d'%(iter_now)))
File "/media/saeed/New Volume/PAD/Codes/FAS-SGTD/fas_sgtd_multi_frame/test.py", line 205, in officialEval
officialEvalSub(path_txt_dev, [flags.path.dev_file], 'dev', path_model_now)
File "/media/saeed/New Volume/PAD/Codes/FAS-SGTD/fas_sgtd_multi_frame/test.py", line 195, in officialEvalSub
fid.write(video_name_encode + ',' + str(video_score_mean) + '\n')
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'

Process finished with exit code 1
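
A guess at what is going on (untested): video_name_encode is still None because the feature loop never assigned a video name, so the write crashes. A minimal guard would be:

# sketch: only write a score line once a video name has actually been seen;
# if it is still None, the real problem is that predict() yielded no features
if video_name_encode is not None:
    fid.write(video_name_encode + ',' + str(video_score_mean) + '\n')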

Please help to fix it. Thanks.

evaluating on OULU video frames gave an error

I extracted frames from the videos in the OULU dataset and then ran test.py, but the evaluation failed in the loop over the features.
The shapes of the items of my ALLDATA = [image_face_cat_reshaped, vertices_map_cat, mask_cat] are as below:
image_face_cat.shape = (256, 256, 15)
vertices_map_cat.shape = (1, 32, 32, 3, 5)
mask_cat.shape = (1, 32, 32, 3, 5)

def officialEvalSub(txt_name, data_list, mode, path_model_now):
    def realProb(logits):
        #return np.exp(logits[1])/(np.exp(logits[0])+np.exp(logits[1]))
        x = np.array(logits)
        y = np.exp(x[0])/np.sum(np.exp(x))
        #y = x[0]
        return y
    def name_encode(name_):
        if mode == 'dev':
            return name_
        elif mode == 'test':
            name_split = name_.split('_')
            name_10 = name_split[0] + name_split[3] + name_split[1] + name_split[2]
            name_16 = hex(int(name_10))
            name_16 = name_16[0] + name_16[2:]
            return name_16
        else:
            print('Error mode: requires dev or test')
            exit(1)

    eval_input_fn = input_fn_maker(data_list, shuffle=False,
                                batch_size = 1,
                                epoch=1)
    features=mnist_classifier.predict(
            input_fn=eval_input_fn,
            checkpoint_path= path_model_now )
    fid = open(txt_name, 'w')
    fea_ind = 0
    acc_mean = 0.0
    video_name = None
    video_score = 0.0
    video_frame_count = 0.0
    for feature in features:
        logits = feature['logits']
        '''
        logits_tmp = logits[1]
        logits[1] = logits[0]
        logits[0] = logits_tmp
        '''
        labels = feature['labels']
        names = feature['names']
        depth_map = feature['depth_map']
        masks = feature['masks']

the error:

Traceback (most recent call last):
  File "/home/javaneh/miniconda3/envs/tf1_conda_env/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1365, in _do_call
    return fn(*args)
  File "/home/javaneh/miniconda3/envs/tf1_conda_env/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1350, in _run_fn
    target_list, run_metadata)
  File "/home/javaneh/miniconda3/envs/tf1_conda_env/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1443, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: {{function_node __inference_Dataset_map_parser_fun_70}} Input to reshape is a tensor with 15360 values, but the requested shape has 5120
         [[{{node Reshape_2}}]]
         [[IteratorGetNext]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "test_ref2.py", line 164, in <module>
    evaluator.evaluate(is_online=FLAGS.flags.isOnline)
  File "test_ref2.py", line 71, in evaluate
    self.online_eval()
  File "test_ref2.py", line 90, in online_eval
    self.perform_evaluation(iter_now)
  File "test_ref2.py", line 118, in perform_evaluation
    for feature in features:
  File "/home/javaneh/miniconda3/envs/tf1_conda_env/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 640, in predict
    preds_evaluated = mon_sess.run(predictions)
  File "/home/javaneh/miniconda3/envs/tf1_conda_env/lib/python3.7/site-packages/tensorflow_core/python/training/monitored_session.py", line 754, in run
    run_metadata=run_metadata)
  File "/home/javaneh/miniconda3/envs/tf1_conda_env/lib/python3.7/site-packages/tensorflow_core/python/training/monitored_session.py", line 1259, in run
    run_metadata=run_metadata)
  File "/home/javaneh/miniconda3/envs/tf1_conda_env/lib/python3.7/site-packages/tensorflow_core/python/training/monitored_session.py", line 1360, in run
    raise six.reraise(*original_exc_info)
  File "/home/javaneh/miniconda3/envs/tf1_conda_env/lib/python3.7/site-packages/six.py", line 719, in reraise
    raise value
  File "/home/javaneh/miniconda3/envs/tf1_conda_env/lib/python3.7/site-packages/tensorflow_core/python/training/monitored_session.py", line 1345, in run
    return self._sess.run(*args, **kwargs)
  File "/home/javaneh/miniconda3/envs/tf1_conda_env/lib/python3.7/site-packages/tensorflow_core/python/training/monitored_session.py", line 1418, in run
    run_metadata=run_metadata)
  File "/home/javaneh/miniconda3/envs/tf1_conda_env/lib/python3.7/site-packages/tensorflow_core/python/training/monitored_session.py", line 1176, in run
    return self._sess.run(*args, **kwargs)
  File "/home/javaneh/miniconda3/envs/tf1_conda_env/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 956, in run
    run_metadata_ptr)
  File "/home/javaneh/miniconda3/envs/tf1_conda_env/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1180, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/javaneh/miniconda3/envs/tf1_conda_env/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1359, in _do_run
    run_metadata)
  File "/home/javaneh/miniconda3/envs/tf1_conda_env/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1384, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError:  Input to reshape is a tensor with 15360 values, but the requested shape has 5120
         [[{{node Reshape_2}}]]
         [[IteratorGetNext]]
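
(For reference, 15360 = 32 × 32 × 3 × 5, which matches the (1, 32, 32, 3, 5) vertices/mask maps above, while the requested 5120 = 32 × 32 × 5, so Reshape_2 appears to expect single-channel depth and mask maps per frame rather than 3-channel ones. This is only deduced from the numbers.)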

@clks-wzz, thanks in advance for your help

data and label format description

The OULU-NPU dataset is difficult to download. Can you provide the data and label format, or a description of the data and label format used in the dataset? Thanks.

Data preparation

Hello, the OULU dataset is difficult to obtain. How can I run the code with my own dataset?

problems when testing

Hello. I have a problem when testing; the error is as follows:
Traceback (most recent call last):
File "test_test.py", line 272, in
offline_eval()
File "test_test.py", line 264, in offline_eval
officialEval(os.path.join(flags.path.model, 'model.ckpt-%d'%(iter_now)))
File "test_test.py", line 198, in officialEval
officialEvalSub(path_txt_test, [flags.path.test_file], 'test', path_model_now)
File "test_test.py", line 186, in officialEvalSub
video_score_mean = video_score/video_frame_count
ZeroDivisionError: float division by zero
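
A minimal guard sketch (only a guess; a zero frame count usually means no features were produced for that video, which is the real thing to investigate):

# sketch: avoid the crash when no frames were accumulated for a video
if video_frame_count > 0:
    video_score_mean = video_score / video_frame_count
else:
    video_score_mean = 0.0  # or skip writing this video entirely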
