Comments (6)
Hi,
Actually, you only need to change "weights", the "filepath" where you saved your trained model. Other parameters like "num_classes" and "class_names" depend on your dataset. Besides, some post-processing parameters like "score_thr", "max_per_img" and "iou_thr" in "infer_engine" depend on your needs.
from vedadet.
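To make the advice above concrete, here is a hypothetical sketch of the fields you would edit in a vedadet-style infer config. The key names follow the comment ("weights", "num_classes", "class_names", "score_thr", "max_per_img", "iou_thr", "infer_engine"), but the exact nesting and values are assumptions and may differ in your version of the repo:

```python
# Dataset-dependent settings (adjust to YOUR dataset)
num_classes = 1
class_names = ('face',)

# Assumed layout of an infer config; check your repo's actual config file.
infer_engine = dict(
    meta=dict(
        # "weights": the filepath where you saved your trained model
        weights='workdir/tinaface/epoch_30_weights.pth',
    ),
    # Post-processing knobs mentioned above; tune to your needs
    test_cfg=dict(
        score_thr=0.4,            # drop detections below this confidence
        max_per_img=300,          # keep at most this many boxes per image
        nms=dict(iou_thr=0.45),   # NMS overlap threshold
    ),
)
```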
Thank you for your reply.
However, when I change 'configs/infer/retinanet/retinanet.py' to 'configs/infer/tinaface/tinaface.py', I can run infer.py successfully. Does this command need to be adjusted? [CUDA_VISIBLE_DEVICES="0" python tools/infer.py configs/infer/retinanet/retinanet.py image_path] Thank you.
from vedadet.
Do you mean that you get an error when running "CUDA_VISIBLE_DEVICES="0" python tools/infer.py configs/infer/retinanet/retinanet.py image_path"? Please provide the error info.
from vedadet.
Some of them are as follows:
The model and loaded state dict do not match exactly
size mismatch for bbox_head.retina_cls.weight: copying a param with shape torch.Size([3, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([9, 256, 3, 3]).
size mismatch for bbox_head.retina_cls.bias: copying a param with shape torch.Size([3]) from checkpoint, the shape in current model is torch.Size([9]).
size mismatch for bbox_head.retina_reg.weight: copying a param with shape torch.Size([12, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([36, 256, 3, 3]).
size mismatch for bbox_head.retina_reg.bias: copying a param with shape torch.Size([12]) from checkpoint, the shape in current model is torch.Size([36]).
unexpected key in source state_dict: backbone.gn1.weight, backbone.gn1.bias, backbone.layer1.0.gn1.weight, backbone.layer1.0.gn1.bias, backbone.layer1.0.gn2.weight, backbone.layer1.0.gn2.bias, backbone.layer1.0.gn3.weight, backbone.layer1.0.gn3.bias, backbone.layer1.1.gn1.weight, backbone.layer1.1.gn1.bias, backbone.layer1.1.gn2.weight, backbone.layer1.1.gn2.bias, backbone.layer1.1.gn3.weight, backbone.layer1.1.gn3.bias, backbone.layer1.2.gn1.weight, backbone.layer1.2.gn1.bias, backbone.layer1.2.gn2.weight, backbone.layer1.2.gn2.bias, backbone.layer1.2.gn3.weight, backbone.layer1.2.gn3.bias, backbone.layer2.0.gn1.weight, backbone.layer2.0.gn1.bias, backbone.layer2.0.gn2.weight, backbone.layer2.0.gn2.bias, backbone.layer2.0.gn3.weight, backbone.layer2.0.gn3.bias, backbone.layer2.1.gn1.weight, backbone.layer2.1.gn1.bias, backbone.layer2.1.gn2.weight, backbone.layer2.1.gn2.bias, backbone.layer2.1.gn3.weight, backbone.layer2.1.gn3.bias, backbone.layer2.2.gn1.weight, backbone.layer2.2.gn1.bias, backbone.layer2.2.gn2.weight, backbone.layer2.2.gn2.bias, backbone.layer2.2.gn3.weight, backbone.layer2.2.gn3.bias, backbone.layer2.3.gn1.weight, backbone.layer2.3.gn1.bias, backbone.layer2.3.gn2.weight, backbone.layer2.3.gn2.bias, backbone.layer2.3.gn3.weight, backbone.layer2.3.gn3.bias, backbone.layer3.0.gn1.weight, backbone.layer3.0.gn1.bias, backbone.layer3.0.gn2.weight, backbone.layer3.0.gn2.bias, backbone.layer3.0.gn3.weight, backbone.layer3.0.gn3.bias, backbone.layer3.0.conv2.conv_offset.weight, backbone.layer3.0.conv2.conv_offset.bias, backbone.layer3.1.gn1.weight, backbone.layer3.1.gn1.bias, backbone.layer3.1.gn2.weight, backbone.layer3.1.gn2.bias, backbone.layer3.1.gn3.weight, backbone.layer3.1.gn3.bias, backbone.layer3.1.conv2.conv_offset.weight, backbone.layer3.1.conv2.conv_offset.bias, backbone.layer3.2.gn1.weight, backbone.layer3.2.gn1.bias, backbone.layer3.2.gn2.weight, backbone.layer3.2.gn2.bias, backbone.layer3.2.gn3.weight, backbone.layer3.2.gn3.bias, 
backbone.layer3.2.conv2.conv_offset.weight, backbone.layer3.2.conv2.conv_offset.bias, backbone.layer3.3.gn1.weight, backbone.layer3.3.gn1.bias, backbone.layer3.3.gn2.weight, backbone.layer3.3.gn2.bias, backbone.layer3.3.gn3.weight, backbone.layer3.3.gn3.bias, backbone.layer3.3.conv2.conv_offset.weight, backbone.layer3.3.conv2.conv_offset.bias, backbone.layer3.4.gn1.weight, backbone.layer3.4.gn1.bias, backbone.layer3.4.gn2.weight, backbone.layer3.4.gn2.bias, backbone.layer3.4.gn3.weight, backbone.layer3.4.gn3.bias, backbone.layer3.4.conv2.conv_offset.weight, backbone.layer3.4.conv2.conv_offset.bias, backbone.layer3.5.gn1.weight, backbone.layer3.5.gn1.bias, backbone.layer3.5.gn2.weight, backbone.layer3.5.gn2.bias, backbone.layer3.5.gn3.weight, backbone.layer3.5.gn3.bias, backbone.layer3.5.conv2.conv_offset.weight, backbone.layer3.5.conv2.conv_offset.bias, backbone.layer4.0.gn1.weight, backbone.layer4.0.gn1.bias, backbone.layer4.0.gn2.weight, backbone.layer4.0.gn2.bias, backbone.layer4.0.gn3.weight, backbone.layer4.0.gn3.bias, backbone.layer4.0.conv2.conv_offset.weight, backbone.layer4.0.conv2.conv_offset.bias, backbone.layer4.1.gn1.weight, backbone.layer4.1.gn1.bias, backbone.layer4.1.gn2.weight, backbone.layer4.1.gn2.bias, backbone.layer4.1.gn3.weight, backbone.layer4.1.gn3.bias, backbone.layer4.1.conv2.conv_offset.weight, backbone.layer4.1.conv2.conv_offset.bias, backbone.layer4.2.gn1.weight, backbone.layer4.2.gn1.bias, backbone.layer4.2.gn2.weight, backbone.layer4.2.gn2.bias, backbone.layer4.2.gn3.weight, backbone.layer4.2.gn3.bias, backbone.layer4.2.conv2.conv_offset.weight, backbone.layer4.2.conv2.conv_offset.bias, neck.0.lateral_convs.0.conv.weight, neck.0.lateral_convs.0.gn.weight, neck.0.lateral_convs.0.gn.bias, neck.0.lateral_convs.1.conv.weight, neck.0.lateral_convs.1.gn.weight, neck.0.lateral_convs.1.gn.bias, neck.0.lateral_convs.2.conv.weight, neck.0.lateral_convs.2.gn.weight, neck.0.lateral_convs.2.gn.bias, neck.0.lateral_convs.3.conv.weight, 
neck.0.lateral_convs.3.gn.weight, neck.0.lateral_convs.3.gn.bias, neck.0.fpn_convs.0.conv.weight, neck.0.fpn_convs.0.gn.weight, neck.0.fpn_convs.0.gn.bias, neck.0.fpn_convs.1.conv.weight, neck.0.fpn_convs.1.gn.weight, neck.0.fpn_convs.1.gn.bias, neck.0.fpn_convs.2.conv.weight, neck.0.fpn_convs.2.gn.weight, neck.0.fpn_convs.2.gn.bias, neck.0.fpn_convs.3.conv.weight, neck.0.fpn_convs.3.gn.weight, neck.0.fpn_convs.3.gn.bias, neck.0.fpn_convs.4.conv.weight, neck.0.fpn_convs.4.gn.weight, neck.0.fpn_convs.4.gn.bias, neck.0.fpn_convs.5.conv.weight, neck.0.fpn_convs.5.gn.weight, neck.0.fpn_convs.5.gn.bias, neck.1.level_convs.0.0.conv.weight, neck.1.level_convs.0.0.gn.weight, neck.1.level_convs.0.0.gn.bias, neck.1.level_convs.0.1.conv.weight, neck.1.level_convs.0.1.gn.weight, neck.1.level_convs.0.1.gn.bias, neck.1.level_convs.0.2.conv.weight, neck.1.level_convs.0.2.gn.weight, neck.1.level_convs.0.2.gn.bias, neck.1.level_convs.0.3.conv.weight, neck.1.level_convs.0.3.gn.weight, neck.1.level_convs.0.3.gn.bias, neck.1.level_convs.0.4.conv.weight, neck.1.level_convs.0.4.gn.weight, neck.1.level_convs.0.4.gn.bias, bbox_head.retina_iou.weight, bbox_head.retina_iou.bias, bbox_head.cls_convs.0.gn.weight, bbox_head.cls_convs.0.gn.bias, bbox_head.cls_convs.1.gn.weight, bbox_head.cls_convs.1.gn.bias, bbox_head.cls_convs.2.gn.weight, bbox_head.cls_convs.2.gn.bias, bbox_head.cls_convs.3.gn.weight, bbox_head.cls_convs.3.gn.bias, bbox_head.reg_convs.0.gn.weight, bbox_head.reg_convs.0.gn.bias, bbox_head.reg_convs.1.gn.weight, bbox_head.reg_convs.1.gn.bias, bbox_head.reg_convs.2.gn.weight, bbox_head.reg_convs.2.gn.bias, bbox_head.reg_convs.3.gn.weight, bbox_head.reg_convs.3.gn.bias
from vedadet.
A trained TinaFace model cannot be loaded into RetinaNet, because they are different models. If you want to use a model for inference, you should first train that corresponding model.
from vedadet.
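The "size mismatch" numbers in the log above follow directly from how a RetinaNet-style head is shaped: the classification conv outputs num_anchors * num_classes channels and the regression conv outputs num_anchors * 4. A minimal sketch, assuming TinaFace uses 3 anchors per location and this RetinaNet config uses 9, both with a single class:

```python
def head_channels(num_anchors, num_classes):
    """Output channels of a RetinaNet-style detection head:
    (classification channels, box-regression channels)."""
    return num_anchors * num_classes, num_anchors * 4

# Checkpoint (TinaFace, assumed 3 anchors, 1 class): cls=3, reg=12
print(head_channels(num_anchors=3, num_classes=1))  # (3, 12)

# Current model (RetinaNet config, assumed 9 anchors, 1 class): cls=9, reg=36
print(head_channels(num_anchors=9, num_classes=1))  # (9, 36)
```

These match the shapes reported for retina_cls ([3, ...] vs [9, ...]) and retina_reg ([12, ...] vs [36, ...]), which is why the TinaFace checkpoint cannot be loaded into the RetinaNet head.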
OK, I get it. Is there any solution to the false face detections?
from vedadet.
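One common knob for reducing false positives is the "score_thr" post-processing parameter mentioned earlier in this thread: raising it keeps only higher-confidence detections. A minimal, library-free sketch of the idea, assuming detections are (x1, y1, x2, y2, score) tuples:

```python
def filter_detections(dets, score_thr=0.6):
    """Keep only detections whose confidence score is at least score_thr.

    dets: list of (x1, y1, x2, y2, score) tuples.
    Raising score_thr trades recall for fewer false positives.
    """
    return [d for d in dets if d[4] >= score_thr]

dets = [(10, 10, 50, 50, 0.92),  # confident face
        (5, 5, 20, 20, 0.31)]    # low-confidence candidate, likely spurious
print(filter_detections(dets))   # only the 0.92 detection survives
```

In practice you would set this threshold in the config's "infer_engine" section rather than filtering by hand; the right value depends on your dataset and tolerance for missed faces.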
Related Issues (20)
- How to see an infer DEMO? HOT 2
- What's the difference between Inception Module of TinaFace and Context Module of RetinaFace
- RuntimeError: cuda runtime error (98) : invalid device function HOT 2
- error: command '/usr/bin/nvcc' failed with exit status 1 HOT 2
- RuntimeError: CUDA out of memory.
- jit model
- Why not code directly based on mmdet?
- for the people error occured about "pip install -v -e ." HOT 4
- pip install error on windows
- how to change batch_size in vedadet? HOT 1
- AttributeError: 'ConfigDict' object has no attribute 'val_engine'
- what is the format of input data HOT 1
- Raspberry Pi
- Missing THC/THC.h HOT 3
- TinaFace for CocoDataset
- TTA
- There is no val.xml in WiderFace Anno.
- mAP = 0 after training HOT 2
- Cannot set parameter in SnapshotHook HOT 1
- Where to config save place for the finetuned model ?