
ffavod's People

Contributors

hu64

ffavod's Issues

Evaluation index problem

First of all, thank you for your help; I have solved the previous problem.
However, a new problem has emerged. I trained CenterNet as my baseline on this dataset, evaluating with the COCO metrics, but the metrics behave strangely during training: the loss decreases, yet the accuracy on the test set gets worse as training progresses, which is far from the results in your paper. I did not use the exact CenterNet code you used, but I am confident my network itself is correct, since it produces normal results on the COCO dataset.
I used the dataset's ignore.txt file to preprocess the data, because I suspect that unlabeled instances may destabilize training. How did you handle these ignore regions?
At the moment I cannot obtain a reasonable baseline metric, which makes it difficult to continue my work. Do you have any suggestions?
For small objects, did you make improvements to the official code (it does not seem to match the paper)?
Which metric did you use during training, VOC or COCO?

Originally posted by @Bin-ze in #2 (comment)
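One common way to handle unlabeled ("ignore") regions, which may differ from what the repository's authors actually did, is to drop detections whose centers fall inside an ignore box before evaluation, so the model is not penalized for finding unannotated objects. A minimal sketch, assuming boxes are stored as `[x, y, w, h]`:

```python
def filter_ignored(detections, ignore_boxes):
    """Drop detections whose center falls inside any ignore region.

    detections:   list of [x, y, w, h, score]
    ignore_boxes: list of [x, y, w, h] regions with no annotations
    (Box format is an assumption; adapt to the dataset's convention.)
    """
    kept = []
    for x, y, w, h, score in detections:
        cx, cy = x + w / 2, y + h / 2  # detection center point
        inside = any(
            ix <= cx <= ix + iw and iy <= cy <= iy + ih
            for ix, iy, iw, ih in ignore_boxes
        )
        if not inside:
            kept.append([x, y, w, h, score])
    return kept
```

The same masking can also be applied on the training side, e.g. by zeroing the loss inside ignore regions, which is a typical remedy when unlabeled instances destabilize training.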

Questions about evaluation indicators

I downloaded the UAVDT dataset and its evaluation tool, but I could not figure out how to use the tool to generate test results. I searched extensively and found no similar tutorials online, and the original paper does not list the author's contact information.
However, I really need the same evaluation criteria as the published papers in order to assess the quality of my network. Since you published your paper using this dataset, could you send me the evaluation tools you used? I would like to use them to evaluate my own network.
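While the official tool is unavailable, a VOC-2010-style all-point interpolated AP can be computed directly from sorted precision/recall arrays. This is a generic sketch of that standard formula, not the UAVDT toolkit's exact implementation:

```python
import numpy as np

def voc_ap(recall, precision):
    """All-point interpolated average precision (VOC 2010+ style).

    recall, precision: 1-D arrays over detections sorted by
    descending confidence.
    """
    # pad the curve at both ends
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    # make precision monotonically non-increasing from right to left
    for i in range(len(mpre) - 2, -1, -1):
        mpre[i] = max(mpre[i], mpre[i + 1])
    # sum the rectangular areas where recall changes
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))
```

The result still depends on the matching rule (IoU threshold, handling of ignore regions), so numbers computed this way are only comparable to the paper's if those settings match.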

Can't reproduce the results

Hi,
I am having a hard time reproducing the results reported in your repo/paper using the provided pre-trained models. On the UAVDT test set (using the same split you shared, /val-1-on-30.json), I evaluated the shared pre-trained model (spotnet2_vid_uavdt.pth), but I was not able to obtain the same mAP as reported in the paper.


Is there anything else I could do to reproduce the results?
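One frequent source of mAP mismatches is the IoU threshold and box convention used for matching: COCO averages AP over IoU 0.5:0.95, while UAVDT-style results are often reported at a single threshold. A quick sanity check of the IoU computation itself, assuming `[x, y, w, h]` boxes:

```python
def iou(a, b):
    """Intersection-over-union of two [x, y, w, h] boxes."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    # overlap extents, clamped at zero when boxes are disjoint
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0
```

If the evaluator assumes `[x1, y1, x2, y2]` boxes while the result file uses `[x, y, w, h]` (or vice versa), reported mAP can collapse even with a correct model, so this is worth verifying before suspecting the checkpoint.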
