Comments (26)
Has anybody solved this problem? My loss_l drops, but loss_c won't.
from m2det.
Good question!
- You can follow data/voc0712.py to customize your own dataloader.
- It's OK to finetune from the published model, but you have to completely align the model size before loading the weights, which requires modifying some code in utils/core.py.
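The first bullet above can be sketched roughly as follows. This is a minimal VOC-style dataset, not the repo's actual data/voc0712.py; the class list, directory layout, and image backend are assumptions to adapt to your data. Note that index 0 is reserved for background, which matters later in this thread.

```python
import os
import xml.etree.ElementTree as ET

import numpy as np
from torch.utils.data import Dataset

# Hypothetical class list for a 6-class dataset: index 0 is reserved for the
# background class, so real objects get labels 1..6.
CLASSES = ('__background__', 'car', 'bus', 'truck', 'person', 'bike', 'sign')

def parse_voc_xml(xml_path):
    """Parse one VOC-style annotation file into an (n, 5) float32 array of
    [xmin, ymin, xmax, ymax, label]."""
    boxes = []
    for obj in ET.parse(xml_path).getroot().iter('object'):
        name = obj.find('name').text.strip()
        bb = obj.find('bndbox')
        coords = [float(bb.find(tag).text)
                  for tag in ('xmin', 'ymin', 'xmax', 'ymax')]
        boxes.append(coords + [CLASSES.index(name)])
    return np.array(boxes, dtype=np.float32)

class VOCStyleDataset(Dataset):
    """Minimal sketch: pairs JPEGImages/<id>.jpg with Annotations/<id>.xml,
    mirroring the interface idea of the repo's VOC loader."""

    def __init__(self, root, image_ids, transform=None):
        self.root = root
        self.image_ids = list(image_ids)
        self.transform = transform

    def __len__(self):
        return len(self.image_ids)

    def __getitem__(self, idx):
        img_id = self.image_ids[idx]
        target = parse_voc_xml(
            os.path.join(self.root, 'Annotations', img_id + '.xml'))
        from PIL import Image  # any image backend works here
        img = np.asarray(Image.open(
            os.path.join(self.root, 'JPEGImages', img_id + '.jpg')).convert('RGB'))
        if self.transform is not None:
            img, target = self.transform(img, target)
        return img, target
```

The `transform` hook is where the repo's preproc/augmentation would plug in.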
Hi, thanks for sharing your code. I've fine-tuned your published model on my own dataset. I set the dataset up in VOC2007 format, set num_class = 7 instead of 81, and I think the relevant parameters are set properly.
But the result is very bad! I tested on your images and my own, and there are many bounding boxes (more than 1K). I'm confused. Could you tell me what the problem is?
I met the same problem; it's very confusing.
Facing the exact same issue.
What do your losses look like during training? Did you check the results of test.py on some test data that you have?
In my experiment, the console prints the losses by default. The training settings are the same as the project's defaults (I didn't change them). After training, loss_l is about 3.237, and loss_c dropped from 11 to about 3. The result of running test.py on my test data is also very bad (too many bounding boxes)!
Well, it seems that your training still hasn't converged, as those values are pretty high. In my case, I have loss_l = 1.28 and loss_c = 0.4, but still get lots of FP bounding boxes.
Hi, sorry for the late reply.
@Roujack, your loss values with only 7 categories don't look right. For example, the stable VOC losses (with 20 categories) are about loss_l: 1.2-1.4 and loss_c: 0.3-0.5. @dshahrokhian may have a more stable training process.
As for the many FP bboxes, I suggest you:
- Check the GT labels: they should be 1 to k, not 0 to k-1, because there is a BG class at index 0.
- Visualize the training images. If they look the same as the val images, check the pre-processing or post-processing. If they look much better than the val images, check whether training has overfit. Since I don't know the scale of your dataset, the training process may need to be tuned.
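The first check above can be automated in a few lines. Here `targets` is assumed to be a list of (n, 5) arrays of [x1, y1, x2, y2, label], as produced by a VOC-style loader; adapt the column index if your layout differs.

```python
import numpy as np

def check_gt_labels(targets, num_real_classes):
    """Verify every ground-truth label falls in 1..num_real_classes
    (0 is reserved for the background class)."""
    labels = np.concatenate([t[:, 4] for t in targets if len(t)])
    if labels.min() < 1:
        raise ValueError("found label 0: labels must start at 1 (0 is BG)")
    if labels.max() > num_real_classes:
        raise ValueError("found label %d > num_real_classes=%d"
                         % (labels.max(), num_real_classes))
    return True
```

Running this once over the whole training set catches the off-by-one labeling that produces broken classification losses.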
@qijiezhao, thanks for sharing your great work on object detection.
After 12 epochs on COCO, I get the following results:
About how many epochs did you train your weights for?
Refer to this: https://github.com/qijiezhao/M2Det/blob/master/configs/m2det512_vgg.py#L27
@qijiezhao The problem is still unresolved.
Here are some screenshots.
Number-of-classes setting:
Loss over 10 epochs (the dataset has 6464 images); it seems normal now:
Test on a training image (very bad!):
@Roujack Are you using pytorch==0.4.1 for both training and testing? That partially solved the problem for me; I'm still trying to figure out how to improve the results further.
@dshahrokhian Yes, my pytorch version is 0.4.1.
I see that you have a lot of bounding boxes with low confidences. Did you try increasing the threshold in demo.py's draw_detection function? Something like 0.9.
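The suggestion above amounts to filtering detections by confidence before drawing them. A sketch follows; the (n, 6) [x1, y1, x2, y2, score, cls] layout is an assumption, and the repo's draw_detection may order columns differently.

```python
import numpy as np

def filter_by_score(dets, thresh=0.9):
    """Keep only detections whose confidence is at least `thresh`.
    `dets` is an (n, 6) array of [x1, y1, x2, y2, score, cls]."""
    dets = np.asarray(dets, dtype=np.float32)
    if dets.size == 0:
        return dets.reshape(0, 6)
    return dets[dets[:, 4] >= thresh]
```

Raising the threshold only hides low-confidence boxes; it won't fix wrong labels or badly located boxes, which point at a training problem.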
@dshahrokhian Yeah. After I increased the score threshold to 0.9 in demo.py's draw_detection function and set num_per_class = 5 in configs/m2det512_vgg.py, the number of bounding boxes decreased.
But the bounding boxes don't locate the objects well, and the classification labels are wrong:
I trained on my customized dataset for 10 epochs, with loss_l = 1.2 and loss_c = 0.3.
What do you think I can do to improve the detector's behaviour?
I met the same issue.
Hi, when fine-tuning I got loss_l = 0.6 and loss_c = 1.2; the loss_c is large.
I have changed the class-related code as in your screenshots. Is there anything else that should be changed?
Thanks @Roujack @qijiezhao
Hi @dshahrokhian, I got loss_l = 0.63 and loss_c = 0.8. On VOC0712 I can get a good result, with mAP reaching 82%, but when I train on my own dataset (14000 samples), the test result is very bad. Do you think it is overfitting? Any suggestions?
I ran 160 epochs on voc0712 and only got 65% mAP. I only changed the batch size, num_class, and VOC_CLASSES. I'd like to know exactly how you trained up to 82%.
Did you use the COCO pretrained weights?
Any progress here?
@qijiezhao I see this:
"It's OK to finetune from the published model, but you have to completely align the model size before loading the weight, you have to modify some code in utils/core.py."
How should the size be aligned, and which code in utils/core.py needs to be modified? Could you please help me? Thanks very much.
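One common way to "align the model size" when num_classes differs (an assumption about what the author means here, not the repo's actual utils/core.py code) is to load only the checkpoint tensors whose names and shapes match the new model, leaving the class-prediction heads randomly initialized:

```python
import torch

def load_matching_weights(model, ckpt_path):
    """Load checkpoint tensors into `model` only where names and shapes match;
    return the sorted names that were skipped (typically the class heads)."""
    state = torch.load(ckpt_path, map_location='cpu')
    own = model.state_dict()
    kept = {k: v for k, v in state.items()
            if k in own and v.shape == own[k].shape}
    own.update(kept)
    model.load_state_dict(own)
    return sorted(set(state) - set(kept))
```

Inspecting the returned list is a useful sanity check: with a changed num_classes it should contain only the classification-head layers.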
Hello, I can only reach 72.5% mAP on the pascal voc2007 dataset. How did you tune it?
I have the same issue with too many bounding boxes :(
@zhulei1228 I remember somebody mentioned weighting loss_l and loss_c. Maybe you should give it a try?
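The weighting idea mentioned above is just scaling one loss term before summing: SSD-style detectors optimize loss = loss_l + alpha * loss_c. A sketch, where the default alpha = 1.0 is an assumption rather than this repo's value:

```python
def weighted_detection_loss(loss_l, loss_c, alpha=1.0):
    """Combine localization and classification losses with a tunable weight.
    Raise alpha if loss_c stalls; lower it if classification dominates."""
    return loss_l + alpha * loss_c
```

Both plain floats and torch tensors work here, so it can drop straight into a training step.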
Thanks for your sharing!
I want to know how to prepare custom train/val data. Is the format
img_path x_min,y_min,x_max,y_max,cls ?
Besides, how can I finetune from your published model on a custom dataset, or am I better off training from scratch?
Did you modify the model for custom data? Does it work fine?
Thank you!
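Assuming the one-line-per-image format described above (a guess from the question, not a confirmed spec for this repo), parsing it is straightforward:

```python
import numpy as np

def parse_annotation_line(line):
    """Split 'img_path x1,y1,x2,y2,cls x1,y1,x2,y2,cls ...' into the image
    path and an (n, 5) float32 array of boxes."""
    img_path, *box_strs = line.strip().split()
    boxes = np.array([[float(v) for v in b.split(',')] for b in box_strs],
                     dtype=np.float32)
    return img_path, boxes
```

A function like this would slot into a custom dataloader's annotation-loading step.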