Comments (8)
@hkkevinhf Thanks for the information. Could you please be more specific, or let us re-run your experiments? You say "the AUC-shuffled score is much lower than that reported in the paper on the UCF dataset" — which paper do you mean? It is hard to figure out the reason; we have not encountered a similar issue before.
@wenguanwang Hi, the paper in question is "Revisiting Video Saliency: A Large-scale Benchmark and a New Model".
I used the test code and model weights in 'ACL' to generate the results (the code was downloaded from
https://drive.google.com/open?id=1sW0tf9RQMO4RR7SyKhU8Kmbm4jwkFGpQ )
and evaluated the results using the evaluation code in 'ACL-evaluation.rar'.
The detailed information is below:
test code: ACL/main.py
model weight for test: ACL/ACL.h5
evaluation code: ACL-evaluation/demo_ours.m
I added one line to demo_ours.m in order to see the overall metrics on a dataset; all other files remain unchanged. The demo_ours.m I used is shown below:
```matlab
%% Demo.m
% All the codes in "code_forMetrics" are from the MIT Saliency Benchmark
% (https://github.com/cvzoya/saliency). Please refer to their webpage for more details.
% Load global parameters; set up the "ROOT_DIR" to your own data path.
clear all
METRIC_DIR = 'code_forMetrics';
addpath(genpath(METRIC_DIR));
CACHE = ['./cache/'];
Path = '/data/Paper_code/ACL/';
Datasets = 'UCF';
Metrics{1} = 'AUC_Judd';
Metrics{2} = 'similarity';
Metrics{3} = 'AUC_shuffled';
Metrics{4} = 'CC';
Metrics{5} = 'NSS';
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Results and all_results should have the same number of cells
Results{1} = 'saliency'; % our method
% Results{2} = 'NIPS08';
results = zeros(300,1);
all_results{1} = zeros(300,5);
all_results{2} = zeros(300,5);
mean_results{1} = zeros(1,5);
for k = 1:1 % indexing methods
    Results{k} % display the method name
    for j = 3:3 % indexing metrics (3 = AUC_shuffled)
        if ~exist([CACHE 'ourdataset_' Results{k} '_' Metrics{j} '.mat'], 'file')
            videos = dir([Path Datasets '/test/']);
            for i = 1:length(videos)-2 % loop over videos (skip '.' and '..')
                disp(i);
                options.SALIENCY_DIR   = [Path Datasets '/test/' videos(i+2).name '/' Results{k} '/'];
                options.GTSALIENCY_DIR = [Path Datasets '/test/' videos(i+2).name '/maps/'];
                options.GTFIXATION_DIR = [Path Datasets '/test/' videos(i+2).name '/fixation/maps/'];
                [results(i), all] = readAllFrames(options, Metrics{j});
            end
            all_results{k}(:,j) = results;
            % Added line: average the per-video scores to get the overall metric
            mean_results{k} = sum(all_results{k}) / (length(videos)-2);
            % save([CACHE Datasets '_' Results{k} '_' Metrics{j} '.mat'], 'mean_results');
        else
            load([CACHE 'ourdataset_' Results{k} '_' Metrics{j} '.mat']);
        end
    end
end
```
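For reference, a small snippet (not part of the released demo; it only reads the variables defined above) that prints the dataset-level average for each metric after the loops finish:

```matlab
% Print the overall (per-dataset) score for each metric computed above.
% 'Metrics' and 'mean_results' come from demo_ours.m; entries for metrics
% that were not run (the j = 3:3 loop only computes AUC_shuffled) stay zero.
for j = 1:length(Metrics)
    fprintf('%s: %.4f\n', Metrics{j}, mean_results{1}(j));
end
```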
@hkkevinhf Many thanks for the detailed information and the quick response. I will have my intern carefully check this issue; it will take some time. Thanks for your understanding!
@wenguanwang Thanks. Looking forward to your reply.
@hkkevinhf Could you please provide all five scores for the output saliency maps?
@wenguanwang Yes. On the UCF test set, the five scores (AUC-J, SIM, S-AUC, CC, NSS) are 0.8977, 0.4058, 0.5619, 0.5070, and 2.5413, respectively. The S-AUC scores on the Hollywood2 test set and the DHF1K validation set also seem strange, but I did not record them; if you need them, I can evaluate again.
@hkkevinhf We rechecked our evaluation code and found that the S-AUC inconsistency is caused by the sampling strategy for the reference fixation map: the released code samples reference fixations only from within the same video. This only affects the released evaluation code; the evaluation code on the server is still the correct version, so no need to worry. We have uploaded an updated version in "code_for_Metrics.zip". Note that S-AUC will show some variation due to the sampling strategy. Many thanks for the reminder.
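To make the fix concrete, here is a minimal sketch of cross-video reference sampling for S-AUC. It assumes the directory layout from demo_ours.m above and binary PNG fixation maps; `buildOtherMap`, `curIdx`, and `nSamples` are illustrative names, not part of the released code, while `AUC_shuffled` is the MIT benchmark function in code_forMetrics:

```matlab
% Sketch: build the "other" (reference) fixation map for AUC_shuffled by
% pooling fixations from videos OTHER than the one being evaluated,
% instead of sampling only within the same video.
function otherMap = buildOtherMap(Path, Datasets, videos, curIdx, nSamples)
    otherMap = [];
    candidates = setdiff(3:length(videos), curIdx);  % skip '.', '..' and the current video
    pick = candidates(randperm(length(candidates), min(nSamples, length(candidates))));
    for v = pick
        fixDir = [Path Datasets '/test/' videos(v).name '/fixation/maps/'];
        frames = dir([fixDir '*.png']);
        f = frames(randi(length(frames)));           % one random frame per sampled video
        fix = imread([fixDir f.name]) > 0;           % binarize the fixation map
        if isempty(otherMap)
            otherMap = fix;
        else
            otherMap = otherMap | fix;               % union of fixations (assumes equal frame sizes)
        end
    end
end
```

The resulting map would then be passed as the third argument, e.g. `score = AUC_shuffled(salMap, fixMap, double(otherMap));`. If frame resolutions differ across videos (as they do in UCF sports), the sampled maps would need resizing to a common size first.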
@wenguanwang Received. Thanks for your effort and the kind reply.