
dss's Introduction

AcadHomepage

A modern, responsive personal academic homepage.




Main Features

  • Automatically updated Google Scholar citations: using a Google Scholar crawler together with GitHub Actions, this repository automatically updates the author's total citation count and per-paper citation counts.
  • Google Analytics support: with a simple configuration you can track page traffic with Google Analytics.
  • Responsive: the homepage automatically adjusts its layout to different screen sizes.
  • Clean and minimal: the design is simple and attractive, well suited to a personal academic homepage.
  • Search engine optimization: SEO settings help search engines index the information published on your homepage, rank it against similar sites, and give it a ranking advantage.

Quick Start

  1. Fork this repository to USERNAME/USERNAME.github.io, where USERNAME is your GitHub username.
  2. Configure the Google Scholar citation crawler:
    1. Find your Google Scholar ID in the URL of your Google Scholar profile page: for example, in the URL https://scholar.google.com/citations?user=SCHOLAR_ID, the SCHOLAR_ID part is your Google Scholar ID.
    2. In Settings -> Secrets -> Actions -> New repository secret of this repository on GitHub, add a GOOGLE_SCHOLAR_ID secret with name=GOOGLE_SCHOLAR_ID and value=SCHOLAR_ID.
    3. On the Actions page of this repository on GitHub, enable the workflows by clicking "I understand my workflows, go ahead and enable them". The action writes the Google Scholar citation statistics to gs_data.json on the google-scholar-stats branch of this repository. It is triggered by every push to the main branch and also runs on a daily schedule at 08:00 UTC (see the sketch after this list).
  3. Use favicon-generator to generate a favicon (the icon files for the page) and download all of the files into REPO/images.
  4. Edit the homepage configuration file _config.yml:
    1. title: the title of the homepage
    2. description: the description of the homepage
    3. repository: USER_NAME/REPO_NAME
    4. google_analytics_id (optional): your Google Analytics ID
    5. SEO-related keys (optional): obtain the corresponding IDs from the search engine consoles (e.g. Google, Bing and Baidu) and paste them here.
    6. author: information about the homepage author, including other pages, email, city, university, etc.
    7. google_scholar_stats_use_cdn: read the Google Scholar citation statistics stored at https://raw.githubusercontent.com/ through a CDN, so the page still works in mainland China, where raw.githubusercontent.com may be blocked. The CDN has a cache, so with google_scholar_stats_use_cdn: True the citation data is updated with some delay.
    8. Further options are described in detail in the comments of the file.
  5. Add your homepage content to _pages/about.md.
  6. Your homepage will be deployed to https://USERNAME.github.io.
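The workflow script that performs the citation update lives in the repository and is not reproduced here. The following is only a minimal sketch of what such a crawler can look like; the scholarly package and the JSON field names are assumptions, and the actual action may work differently.

import json
import os

from scholarly import scholarly   # assumption: a scholarly-based crawler

scholar_id = os.environ['GOOGLE_SCHOLAR_ID']     # injected from the repository secret
author = scholarly.search_author_id(scholar_id)
author = scholarly.fill(author, sections=['basics', 'indices', 'publications'])

stats = {
    'citedby': author.get('citedby', 0),
    'publications': {
        pub['author_pub_id']: pub.get('num_citations', 0)
        for pub in author.get('publications', [])
    },
}

with open('gs_data.json', 'w') as f:             # committed to the google-scholar-stats branch by the action
    json.dump(stats, f, indent=2)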

Local Debugging

  1. Clone this repository to your machine with git clone.
  2. Install the Jekyll build environment, including Ruby, RubyGems, GCC and Make; see the Jekyll installation guide for details.
  3. Run bash run_server.sh to start a Jekyll server with live reload.
  4. Open http://127.0.0.1:4000 in a browser. When you modify the page sources, the server automatically rebuilds them and refreshes the page.
  5. When you have finished editing your pages, commit your changes with git and push them to your GitHub repository.

Acknowledgements

  • AcadHomepage incorporates Font Awesome, which is distributed under the terms of the SIL OFL 1.1 and MIT License.
  • AcadHomepage is influenced by the github repo mmistakes/minimal-mistakes, which is distributed under the MIT License.
  • AcadHomepage is influenced by the github repo academicpages/academicpages.github.io, which is distributed under the MIT License.


dss's Issues

How should I use your code?

I want to test a picture with your pretrained model. How should I use your code? Where is the entry point for testing a picture?
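As a starting point, a minimal inference sketch in the spirit of DSS-tutorial.ipynb could look like the following. It assumes pycaffe from the caffe_dss fork is importable and that deploy.prototxt and dss_model_released.caffemodel are in the working directory; the test image name and the output blob name 'sigmoid-fuse' are assumptions and should be checked against deploy.prototxt.

import numpy as np
import caffe

caffe.set_mode_gpu()                                   # or caffe.set_mode_cpu()
net = caffe.Net('deploy.prototxt', 'dss_model_released.caffemodel', caffe.TEST)

# Preprocess: RGB -> BGR, scale to [0, 255], subtract the mean from train_val.prototxt, HWC -> CHW.
img = caffe.io.load_image('test.jpg')                  # float RGB in [0, 1]
img = img[:, :, ::-1] * 255.0
img -= np.array([104.00699, 116.66877, 122.67892])
img = img.transpose((2, 0, 1))

net.blobs['data'].reshape(1, *img.shape)
net.blobs['data'].data[...] = img
net.forward()

# The fused saliency map is one of the sigmoid outputs; inspect net.blobs.keys()
# if 'sigmoid-fuse' is not the name used in your deploy.prototxt.
saliency = net.blobs['sigmoid-fuse'].data[0, 0]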

The ground truth in the result data on your Baidu Yun cannot reproduce your results

I downloaded your result data, code and model in order to reproduce your results, and I did reproduce them.
However, using the ECSSD ground truth included in your result data I could not reproduce your results; after switching to the original ECSSD dataset's ground truth, I could.
There are some differences between your GT and the original GT dataset, such as the data type and the files' storage size.
So could you check the GT and provide the correct one?
I found differences in SOD, PASCAL-S and ECSSD. Thank you!
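A quick way to quantify such differences is to load one mask from each source and compare dtype, shape and pixel values. This is only a sketch; the file paths are hypothetical, and NumPy and Pillow are assumed.

import numpy as np
from PIL import Image

a = np.array(Image.open('result_data/ECSSD/gt/0001.png'))     # GT shipped with the result data
b = np.array(Image.open('ECSSD/ground_truth_mask/0001.png'))  # GT from the original dataset

print(a.dtype, a.shape, np.unique(a))
print(b.dtype, b.shape, np.unique(b))
print((a != b).mean() if a.shape == b.shape else 'shapes differ')   # fraction of differing pixels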

Error when running run_saliency.py

I0321 17:23:04.430434 8648 layer_factory.hpp:77] Creating layer loss-dsn6
I0321 17:23:04.446079 8648 net.cpp:100] Creating Layer loss-dsn6
I0321 17:23:04.446079 8648 net.cpp:444] loss-dsn6 <- upscore-dsn6_crop_0_split_0
I0321 17:23:04.446079 8648 net.cpp:444] loss-dsn6 <- label_data_1_split_0
I0321 17:23:04.446079 8648 net.cpp:418] loss-dsn6 -> loss-dsn6
F0321 17:23:04.463042 8648 sigmoid_cross_entropy_loss_layer.cpp:23] Check failed: bottom[0]->count() == bottom[1]->count() (104000 vs. 1) SIGMOID_CROSS_ENTROPY_LOSS layer inputs must have the same count.
*** Check failure stack trace: ***


How can I get the code to run?

Thank you very much in advance.
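The failing check compares the element counts of the two inputs of the sigmoid cross-entropy loss (104000 vs. 1), i.e. the label blob is arriving as a single value per image rather than as a label map of the same size as the prediction, which usually points at the data layer configuration. One way to confirm what the prototxt actually declares is to parse it with the protobuf bindings; this is a diagnostic sketch and assumes pycaffe and train_val.prototxt in the working directory.

from caffe.proto import caffe_pb2
from google.protobuf import text_format

net_param = caffe_pb2.NetParameter()
with open('train_val.prototxt') as f:
    text_format.Merge(f.read(), net_param)

# The data layer should be the custom ImageLabelmapData layer and expose both 'data' and 'label' tops.
for layer in net_param.layer:
    if 'data' in layer.top:
        print(layer.name, layer.type, list(layer.top))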

Check failed: ReadProtoFromBinaryFile(param_file, param) Failed to parse NetParameter file

I am using the Caffe from your GitHub (https://github.com/Andrew-Qibin/caffe_dss). I followed the instructions in the DSS readme. When I executed net = caffe.Net('deploy.prototxt', 'dss_model_released.caffemodel', caffe.TEST) in DSS-tutorial.ipynb, it reported the following error.

I0808 20:35:52.644126 30 net.cpp:255] Network initialization done.
F0808 20:35:52.754325 30 upgrade_proto.cpp:95] Check failed: ReadProtoFromBinaryFile(param_file, param) Failed to parse NetParameter file: dss_model_released.caffemodel
*** Check failure stack trace: ***
Aborted (core dumped)

It seems that the caffemodel was not loaded successfully.
Can anyone help me solve this problem?
Thanks.
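This failure is typically a sign that the .caffemodel file on disk is not a valid binary protobuf, e.g. a truncated download or an HTML error page saved under that name. A quick integrity check is sketched below; the expected file size and checksum would have to come from the original download page.

import hashlib
import os

path = 'dss_model_released.caffemodel'
print(os.path.getsize(path), 'bytes')        # a truncated download is usually far smaller than expected
with open(path, 'rb') as f:
    data = f.read()
print(data[:16])                             # an HTML error page starts with b'<'
print(hashlib.md5(data).hexdigest())         # compare against a published checksum, if one exists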

Check failed: error == cudaSuccess (2 vs. 0) out of memory

I am getting this error at the line net.forward() during testing:
F0304 14:26:21.611594 4889 syncedmem.cpp:71] Check failed: error == cudaSuccess (2 vs. 0) out of memory
*** Check failure stack trace: ***

Process finished with exit code 134 (interrupted by signal 6: SIGABRT)

  1. How can I solve this?
  2. I understand that the default batch size in the testing phase is 1. Is my understanding right?
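Since the test-phase batch size is already 1, the usual workarounds are to run the network on the CPU or to feed a smaller input so the intermediate feature maps shrink. The sketch below follows DSS-tutorial.ipynb for the file names; the 256×256 target size and the use of caffe.io.resize_image are assumptions.

import numpy as np
import caffe

# caffe.set_mode_cpu()                          # option 1: no GPU memory at all, but much slower
caffe.set_mode_gpu()
net = caffe.Net('deploy.prototxt', 'dss_model_released.caffemodel', caffe.TEST)

img = caffe.io.load_image('test.jpg')
img = caffe.io.resize_image(img, (256, 256))    # option 2: smaller input -> smaller feature maps
img = (img[:, :, ::-1] * 255.0 - np.array([104.00699, 116.66877, 122.67892])).transpose((2, 0, 1))

net.blobs['data'].reshape(1, *img.shape)        # batch size stays 1
net.forward()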

tensorflow code

Could you release the TensorFlow code with the training part? Thanks!

Error when trying to run the demo

When I try to run cell #3 of the DSS tutorial notebook, I get the following error:

I0529 12:11:43.640518 15105 layer_factory.hpp:77] Creating layer crop
I0529 12:11:43.640524 15105 net.cpp:91] Creating Layer crop
I0529 12:11:43.640528 15105 net.cpp:425] crop <- score-dsn6-up
I0529 12:11:43.640532 15105 net.cpp:425] crop <- data_input_0_split_1
I0529 12:11:43.640537 15105 net.cpp:399] crop -> upscore-dsn6
F0529 12:11:43.640548 15105 crop_layer.cpp:68] Check failed: bottom[0]->shape(i) - crop_offset >= bottom[1]->shape(i) (1 vs. 3) invalid crop parameters in dimension: 1
*** Check failure stack trace: ***

I'm using caffe release 1.0.
How can I get the code to run?

Thank you very much in advance.
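The check fails on the channel dimension: the upsampled score map has 1 channel while the data blob has 3. Crop-layer semantics differ between Caffe versions, and DSS was written against the author's caffe_dss fork rather than stock release 1.0, so one useful first step is to confirm which build is being imported and how the Crop layers are declared. This is only a diagnostic sketch; the file name follows the tutorial.

import caffe
from caffe.proto import caffe_pb2
from google.protobuf import text_format

print(caffe.__file__)                           # should point into the caffe_dss build, not a system Caffe

net_param = caffe_pb2.NetParameter()
with open('deploy.prototxt') as f:
    text_format.Merge(f.read(), net_param)
for layer in net_param.layer:
    if layer.type == 'Crop':
        print(layer)                            # shows whether crop_param (axis / offset) is set at all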

help needed

Does DSS support Windows + Caffe? And can I know the required cuDNN version?

A question about the learning rate of pre-trained VGG

You have created an impressive, large model based on the pre-trained VGG model, but in training you set the same learning rate for the pre-trained layers and almost all of the following layers. However, saliency datasets are much smaller in scale than the ImageNet dataset, so the results may suffer from overfitting. How did you address this problem in your training?

Loss problem

I use my own Caffe build and find that the loss does not decrease much; after a period of training, NaN appears. The data layer is the same as in the prediction setup you provide, and the label input is normalized to 0-1. I notice there is a Crop layer that reshapes the output saliency map to the shape of 'data'. However, the saliency map has 1 channel while data has 3 channels, so why do the shapes need to be made the same? Is there a problem with this? Also, when I use 500×500 as the input width and height, NaN appears very quickly, while with 512×512 the loss does not decrease much. It is quite strange.

Why does the layer 'concat_dsn3' not include 'upscore-dsn4-3'?

layer {
  name: "concat-dsn3"
  bottom: "conv3-dsn3"
  bottom: "upscore-dsn6-3"
  bottom: "upscore-dsn5-3"
  top: "concat-dsn3"
  type: "Concat"
  concat_param { concat_dim: 1 }
}

@Andrew-Qibin Hi, I am re-implementing this network in Keras and have read your paper.
I remember that the concatenation in your paper is presented in a dense way, so concat-dsn3 should include a bottom like 'upscore-dsn4-3', but here it only includes upscore-dsn5-3 and upscore-dsn6-3.

Is 'upscore-dsn4-3' included in the layer 'concat_dsn3' or not?

how to compute the processing time (0.08s) of an image of size 400×300

I have read many CVPR 2017 papers related to saliency detection. Your model and NLDF (Non-Local Deep Features for Salient Object Detection) both report a processing time of 0.08s, but the models and DL frameworks (TensorFlow vs. Caffe) are different, which should cause some difference. This confuses me.
Could you upload a simple ipynb file explaining how the 0.08s is computed, or just describe your method if it is easy to implement?
Thank you very much.
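For reference, a common way to time the network itself is to average repeated forward passes after a few warm-up runs, excluding image loading and post-processing. This is only a sketch and may not be how the 0.08s in the paper was measured; file names follow DSS-tutorial.ipynb and the 400×300 input matches the question.

import time
import numpy as np
import caffe

caffe.set_mode_gpu()
net = caffe.Net('deploy.prototxt', 'dss_model_released.caffemodel', caffe.TEST)
net.blobs['data'].reshape(1, 3, 300, 400)
net.blobs['data'].data[...] = np.random.rand(1, 3, 300, 400).astype(np.float32)

for _ in range(5):                              # warm-up, so cuDNN setup is not measured
    net.forward()

runs = 50
start = time.time()
for _ in range(runs):
    net.forward()
print((time.time() - start) / runs, 'seconds per forward pass')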

Results of the paper

Can you please upload the results of the paper to a Google Drive link? I am unable to access Baidu drive.

The test performance is inconsistent with the results in the paper

Hi, Andrew,

Thank you very much for providing the code for the DSS paper.

I followed the instructions to download the code and the pretrained model. Then I used the provided DSS-tutorial.ipynb to run the test on the ECSSD dataset. The PR curve I got is even worse than DCL's.

Any idea about what happened? Thank you in advance.

memory

I have a question: how much memory is needed to run the code? I tried to apply the short connections to other network models, but I always get out-of-memory errors. I look forward to your reply.

Label image issue

Can the ground truth have pixel values other than 0 and 1 (after normalization)?
After loading the label images with your data layer, I checked the unique pixel values. For every image the values are always 0 and 1 (normalize=true), even though the label image has values other than 0 and 255.

Does your data layer support only binary label images?
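One way to separate the data layer's behaviour from the files themselves is to inspect a raw label image directly. This is a sketch; the path is hypothetical.

import numpy as np
from PIL import Image

label = np.array(Image.open('msra_b/annotation/0001.png'))
print(label.dtype, np.unique(label))    # values other than {0, 255} here would be lost
                                        # if the loader thresholds the map to {0, 1}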

Unknown layer type: ImageLabelmapData

F0919 09:43:45.324304 18941 layer_factory.hpp:81] Check failed: registry.count(type) == 1 (0 vs. 1) Unknown layer type: ImageLabelmapData
How can I fix this problem? I hope someone can help me. Thanks!
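ImageLabelmapData is a custom data layer that ships with the author's caffe_dss fork (which the DSS readme points to) rather than with stock Caffe, so this check usually fails when a stock build is being imported. A quick diagnostic sketch:

import caffe

print(caffe.__file__)                                    # should point into the caffe_dss build, not a system Caffe
print('ImageLabelmapData' in caffe.layer_type_list())    # False means the layer is not compiled into this build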

Hi, can you tell me how to get the MSRA-B dataset?

I tried to reproduce the results, but unfortunately I got lower scores when training on the MSRA10K dataset, so I want to train on the MSRA-B dataset as you did. However, I only have the ground truth of MSRA-B. Could you provide me with the MSRA-B dataset? Thank you very much!

Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: train_val.prototxt

Hello, when I was using your code on my computer, I found this problem.
When I configured Caffe and ran python run_saliency.py in the main directory, this error occurred:

I1018 16:44:03.186270 3995 solver.cpp:86] Creating training net from train_net file: train_val.prototxt
[libprotobuf ERROR google/protobuf/text_format.cc:245] Error parsing text-format caffe.NetParameter: 20:14: Message type "caffe.ImageDataParameter" has no field named "normalize".
F1018 16:44:03.186388 3995 upgrade_proto.cpp:934] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: train_val.prototxt.

I checked the file train_val.prototxt; it has the parameter 'normalize', like this:
layer {
  name: "data"
  type: "ImageLabelmapData"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  transform_param {
    mirror: true
    mean_value: 104.00699
    mean_value: 116.66877
    mean_value: 122.67892
  }
  image_data_param {
    root_folder: "/opt/dataset/saliency/msra_b/"
    source: "../../data/msra_b/train.lst"
    batch_size: 1
    shuffle: true
    normalize: true
  }
}

Any idea?

Error while executing run_saliency.py

I am using Caffe 1.0.0. I followed the instructions in the DSS readme, but when I try to execute the script run_saliency.py, I get the following error:

[libprotobuf ERROR google/protobuf/text_format.cc:245] Error parsing text-format caffe.NetParameter: 20:14: Message type "caffe.ImageDataParameter" has no field named "normalize".
F0724 17:51:04.304796 10783 upgrade_proto.cpp:88] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: train_val.prototxt
*** Check failure stack trace: ***
Aborted (core dumped)

The error occurs because of line 20 of train_val.prototxt, which has normalize: true.

Can you please help me understand why this happens? Thanks in advance.
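The protobuf error above means that the caffe.proto compiled into the installed Caffe has no 'normalize' field in ImageDataParameter; stock Caffe 1.0 does not define it, while the author's caffe_dss fork (which the DSS readme asks for) presumably does. A quick check against the imported build (sketch):

from caffe.proto import caffe_pb2

fields = [f.name for f in caffe_pb2.ImageDataParameter.DESCRIPTOR.fields]
print('normalize' in fields)    # False with stock Caffe; build and import caffe_dss instead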

Unknown layer type: ImageLabelmapData

When I train the model, I cannot resolve the problem "Unknown layer type: ImageLabelmapData". If you know a method to solve this problem, please help me. Thank you very much!
