
crowdcount-stackpool's People

Contributors

siyuhuang


crowdcount-stackpool's Issues

What does Figure 2 mean?

I have read your paper, but I don't understand what Figure 2 is trying to explain or what the colors (gray, blue, and orange) represent. Could you give me some pointers? Thank you!

Questions about patches

  1. Your paper says that 9 patches are randomly cropped from each training image, with every patch being half the size of the original image. However, after running create_training_set_shtech.m, the patches' height and width are only about a quarter of the original image's.
  2. wn2 = w/8;  hn2 = h/8;
     wn2 = 8 * floor(wn2/8);
     hn2 = 8 * floor(hn2/8);
     annPoints = image_info{1}.location;
     if (w <= 2*wn2)
         im = imresize(im, [h, 2*wn2+1]);
         annPoints(:,1) = annPoints(:,1) * 2*wn2/w;
     end
     if (h <= 2*hn2)
         im = imresize(im, [2*hn2+1, w]);
         annPoints(:,2) = annPoints(:,2) * 2*hn2/h;
     end

     What is this code trying to do? The condition w <= 2*wn2 is equivalent to w <= 16*floor(w/64), which only holds when w = 0 or w <= -16, so it is never met for a real image width (see the sketch after this list).
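To make the arithmetic above concrete, here is a small standalone sketch (illustration only, not part of the repository; Python standing in for the MATLAB) that evaluates the condition for every plausible positive width and confirms it never triggers:

import math

def condition_holds(w):                     # illustrative helper, not repo code
    wn2 = 8 * math.floor(w / 64)            # mirrors wn2 = w/8; wn2 = 8*floor(wn2/8)
    return w <= 2 * wn2

hits = [w for w in range(1, 5000) if condition_holds(w)]
print(hits)                                 # -> [] : the resize branch never runs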

The network spends too much time on some images

When I test my trained model on the ShanghaiTech Part A test set, 92 of the images each take more than 1 s, while 61 images each take less than 0.1 s. My GPU is a GTX 1070.
My test method is:

import time
import torch

torch.cuda.synchronize()          # wait for any earlier GPU work to finish
start = time.time()
density_map = net(im_data)                      # forward pass
density_map = density_map.data.cpu().numpy()    # copy the result to the host
torch.cuda.synchronize()          # wait for the forward pass to complete
end = time.time()

Why does this happen, and how can I solve it? Thank you!
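A minimal sketch of a more careful timing loop, assuming net is the trained model and loader yields one test-image tensor at a time (both names, and the helper timed_inference, are placeholders rather than the repository's API). It skips a few warm-up iterations so one-off CUDA/cuDNN initialization is not counted, and records each input's size next to its timing, which makes it easy to check whether the slow cases are simply the larger ShanghaiTech Part A images:

import time
import torch

def timed_inference(net, loader, warmup=5):   # illustrative helper
    net.eval()
    timings = []
    with torch.no_grad():
        for i, im_data in enumerate(loader):
            im_data = im_data.cuda()
            torch.cuda.synchronize()          # start from an idle GPU
            start = time.time()
            density_map = net(im_data)        # forward pass being timed
            torch.cuda.synchronize()          # wait until the GPU is done
            elapsed = time.time() - start
            if i >= warmup:                   # ignore warm-up iterations
                timings.append((tuple(im_data.shape[-2:]), elapsed))
    return timings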

some questions about training

When I run train.py, it complains that train_path is missing. I'm new to this area, so could you help me train on the dataset?
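The usual fix is to point the data-path variables near the top of train.py at the prepared ShanghaiTech patches. Only train_path is mentioned above; the other variable names and the directory layout in this sketch are assumptions that need to be checked against the actual script:

# Hypothetical path setup for train.py; only `train_path` appears in the issue,
# the other names and the directory layout are assumptions.
train_path    = './data/formatted_trainval/shanghaitech_part_A_patches_9/train'
train_gt_path = './data/formatted_trainval/shanghaitech_part_A_patches_9/train_den'
val_path      = './data/formatted_trainval/shanghaitech_part_A_patches_9/val'
val_gt_path   = './data/formatted_trainval/shanghaitech_part_A_patches_9/val_den'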

about the kernel

Hello, is the ground-truth density map generated with a geometry-adaptive Gaussian kernel?
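For reference, here is a minimal sketch (illustration only, not the repository's code) of what a geometry-adaptive Gaussian kernel usually means in crowd counting, following the MCNN-style recipe: each head annotation is blurred with a sigma proportional to its average distance to its k nearest neighbours. points is assumed to be an (N, 2) array of (x, y) head coordinates:

import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.spatial import KDTree

def geometry_adaptive_density(points, height, width, k=3, beta=0.3):
    # illustrative helper: points is an (N, 2) array of (x, y) head positions
    density = np.zeros((height, width), dtype=np.float32)
    n = len(points)
    if n == 0:
        return density
    k = min(k, n - 1)                       # cannot use more neighbours than exist
    tree = KDTree(points)
    dists, _ = tree.query(points, k=k + 1)  # first column is the point itself
    dists = np.atleast_2d(dists)
    for (x, y), d in zip(points, dists):
        delta = np.zeros_like(density)
        delta[min(int(y), height - 1), min(int(x), width - 1)] = 1.0
        # sigma proportional to the mean neighbour distance; fixed if the head is alone
        sigma = beta * float(np.mean(d[1:])) if k > 0 else 15.0
        density += gaussian_filter(delta, sigma)
    return density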

the count difference between the gt density map before and after downsampling

Hi,
Could you share how you dealt with the difference in counts between the GT density map before and after downsampling?
Specifically, when I trained the model, I found that the number of people (ground truth) computed from the density map before downsampling differs from the count computed after downsampling. So, if I train with the downsampled GT map, should the ground-truth count used in the final MSE be the sum of the downsampled map or the original GT count?
Thank you.
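One common way to avoid this mismatch (a sketch under the assumption of an integer downsampling factor s; not necessarily what this repository does) is a sum-preserving downsample: average the density map over s x s blocks and multiply by s*s, so the total count stays the same:

import numpy as np

def downsample_preserving_count(density, s=4):   # illustrative helper
    h, w = density.shape
    h, w = h - h % s, w - w % s                  # crop so both sides divide by s
    blocks = density[:h, :w].reshape(h // s, s, w // s, s)
    return blocks.mean(axis=(1, 3)) * s * s      # rescale so the sum is preserved

gt = np.random.rand(480, 640).astype(np.float32)
print(gt.sum(), downsample_preserving_count(gt, s=4).sum())   # counts match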
