
Comments (8)

wenguanwang commented on May 29, 2024

@hkkevinhf Thanks for the information. Could you please be more specific, or give us enough detail to re-run your experiments? Regarding "AUC-shuffled score is much lower than that reported in the paper on UCF dataset": which paper do you mean? It is hard to figure out the reason; we have not encountered a similar issue before.


hkkevinhf commented on May 29, 2024

@wenguanwang Hi, the paper in question is "Revisiting Video Saliency: A Large-scale Benchmark and a New Model".
I used the test code and model weights in 'ACL' to generate the results (the code was downloaded from
https://drive.google.com/open?id=1sW0tf9RQMO4RR7SyKhU8Kmbm4jwkFGpQ ).
I evaluated the results using the evaluation code in 'ACL-evaluation.rar'.

The detailed information is below:
test code: ACL/main.py
model weight for test: ACL/ACL.h5
evaluation code: ACL-evaluation/demo_ours.m

I added a line to demo_ours.m in order to see the overall metrics on a dataset; all other files remain unchanged. The demo_ours.m I used is shown below:

```matlab
%% Demo.m
% All the code in "code_forMetrics" is from the MIT Saliency Benchmark
% (https://github.com/cvzoya/saliency). Please refer to their webpage for more details.

% Load global parameters; set up the path below to point to your own data.
clear all
METRIC_DIR = 'code_forMetrics';
addpath(genpath(METRIC_DIR));

CACHE = ['./cache/'];
Path = '/data/Paper_code/ACL/';
Datasets = 'UCF';

Metrics{1} = 'AUC_Judd';
Metrics{2} = 'similarity';
Metrics{3} = 'AUC_shuffled';
Metrics{4} = 'CC';
Metrics{5} = 'NSS';

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Results and all_results should have the same number of cells
Results{1} = 'saliency'; % our method
% Results{2} = 'NIPS08';

results = zeros(300,1);
all_results{1} = zeros(300,5);
all_results{2} = zeros(300,5);

mean_results{1} = zeros(1,5);

for k = 1:1 % indexing methods
    Results{k}
    for j = 3:3 % indexing metrics

        if ~exist([CACHE 'ourdataset_' Results{k} '_' Metrics{j} '.mat'], 'file')

            videos = dir([Path Datasets '/test/']);

            for i = 1:length(videos)-2 % loop videos (the first two dir entries are '.' and '..')
                disp(i);
                options.SALIENCY_DIR   = [Path Datasets '/test/' videos(i+2).name '/' Results{k} '/'];
                options.GTSALIENCY_DIR = [Path Datasets '/test/' videos(i+2).name '/maps/'];
                options.GTFIXATION_DIR = [Path Datasets '/test/' videos(i+2).name '/fixation/maps/'];
                [results(i), all] = readAllFrames(options, Metrics{j});
            end

            all_results{k}(:,j) = results;

            mean_results{k} = sum(all_results{k}) / (length(videos)-2);

            % save([CACHE Datasets '_' Results{k} '_' Metrics{j} '.mat'], 'mean_results');
        else

            load([CACHE 'ourdataset_' Results{k} '_' Metrics{j} '.mat']);

        end

    end

end
```
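For reference, once the loops finish, the dataset-level means can be displayed with something like the following (illustrative only, not part of the script above; it assumes mean_results{1} has been filled as shown):

```matlab
% Columns of mean_results{1} follow Metrics{1..5}; with j = 3:3 above,
% only the AUC_shuffled column is actually populated.
disp(Metrics);
disp(mean_results{1});
```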


wenguanwang commented on May 29, 2024

@hkkevinhf Many thanks for your detailed information and quick response. I will have my intern check this issue carefully; it will take some time. Thanks for your understanding!


hkkevinhf commented on May 29, 2024

@wenguanwang Thanks. Looking forward to your reply.


wenguanwang commented on May 29, 2024

@hkkevinhf Could you please provide all five scores for the output saliency maps?


hkkevinhf commented on May 29, 2024

@wenguanwang Yes. For the UCF test set, the five scores (AUC-J, SIM, S-AUC, CC, NSS) are 0.8977, 0.4058, 0.5619, 0.5070, and 2.5413, respectively. The S-AUC scores for the Hollywood2 test set and the DHF1K validation set also seem strange, but I did not record them. If needed, I will evaluate them again.


wenguanwang commented on May 29, 2024

@hkkevinhf, we rechecked our evaluation code and found that the S-AUC inconsistency is caused by the sampling strategy for the reference fixation map (it used only the fixations of the same video). This only affects the released evaluation code; the evaluation code on the server is still the correct version, so there is nothing to worry about. We have uploaded an updated version in "code_for_Metrics.zip". Note that S-AUC will show some variation due to the sampling strategy. Many thanks for the reminder.
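To make the sampling difference concrete, here is a minimal sketch (not the benchmark code; the function name, the fixationLists cell array, curIdx, and nSamples are all illustrative assumptions) of building the shuffled-AUC reference map from fixations pooled across other videos instead of only the current one:

```matlab
% Hypothetical sketch: build the "other" fixation map used as negatives in
% shuffled AUC by pooling fixations from videos other than the current one.
% Assumes fixationLists is a cell array of binary fixation maps, all of the
% same resolution, and curIdx indexes the video being evaluated.
function otherMap = buildShuffledReference(fixationLists, curIdx, nSamples)
    otherMap  = zeros(size(fixationLists{curIdx}));
    otherIdx  = setdiff(1:numel(fixationLists), curIdx);  % exclude the current video
    pickIdx   = otherIdx(randperm(numel(otherIdx), min(nSamples, numel(otherIdx))));
    for v = pickIdx
        otherMap = otherMap | fixationLists{v};           % union of other videos' fixations
    end
    otherMap = double(otherMap);
end
```

Because the reference fixations are sampled at random, repeated evaluations will give slightly different S-AUC values, which is the variation mentioned above.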


hkkevinhf commented on May 29, 2024

@wenguanwang, received. Thanks for your effort and kind reply.

