
yolov7-face's Introduction

yolov7-face

2023.04 yolov8-face (🔥🔥🔥↑)

New features

  • Dynamic keypoints
  • WingLoss
  • Efficient backbones
  • EIOU and SIOU
Method          Test Size   Easy   Medium   Hard   FLOPs (B) @640   Google   Baidu
yolov7-lite-t   640         88.7   85.2     71.5   0.8              google   gsmn
yolov7-lite-s   640         92.7   89.9     78.5   3.0              google   3sp4
yolov7-tiny     640         94.7   92.6     82.1   13.2             google   aujs
yolov7s         640         94.8   93.1     85.2   16.8             google   w72z
yolov7          640         96.9   95.5     88.0   103.4            google   jrj6
yolov7+TTA      640         97.2   95.8     87.7   103.4            google   jrj6
yolov7-w6       960         96.4   95.0     88.3   89.0             google   -
yolov7-w6+TTA   1280        96.9   95.8     90.4   89.0             google   -

Dataset

WiderFace

yolov7-face-label

Test

QQ Group


Demo

References

yolov7-face's People

Contributors

cansik, derronqi, dsawll2, ginoknodel, kartikeyporwal, wallzfe


yolov7-face's Issues

Missing file wider_val.txt

Hello,
I want to try running test_widerface.py with all default arguments, but the file wider_val.txt is missing; I downloaded WiderFace and it is not included there either. Do I need to generate this file myself? From a quick look at the code, is it the validation file list?
FileNotFoundError: [Errno 2] No such file or directory: 'data/widerface/widerface/val/wider_val.txt'

Thank you very much!
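
A minimal sketch that may help while waiting for an answer: wider_val.txt appears to be just the validation image list, so it can be regenerated from the downloaded WiderFace val set. This assumes the usual per-event subfolder layout and one relative path per line; the folder paths are illustrative, not the repo's exact layout.

import os

val_images = 'data/widerface/val/images'
out_txt = 'data/widerface/val/wider_val.txt'

with open(out_txt, 'w') as f:
    for event in sorted(os.listdir(val_images)):            # e.g. "0--Parade"
        event_dir = os.path.join(val_images, event)
        if not os.path.isdir(event_dir):
            continue
        for name in sorted(os.listdir(event_dir)):
            if name.lower().endswith('.jpg'):
                f.write(f'./{event}/{name}\n')               # one relative path per line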

LabelMe objects and points to YOLO format for custom training

Hi, I'm trying to generate my own dataset using Labelme to annotate the images, but I can't understand how to transform the annotations into this format. Could you help me with this?

Current file:

{
  "version": "5.0.5",
  "flags": {},
  "shapes": [
    {
      "label": "id",
      "points": [
        [
          38.45454545454544,
          469.45454545454544
        ],
        [
          1449.818181818182,
          1405.818181818182
        ]
      ],
      "group_id": null,
      "shape_type": "rectangle",
      "flags": {}
    },
    {
      "label": "point",
      "points": [
        [
          61.181818181818244,
          553.5454545454545
        ]
      ],
      "group_id": null,
      "shape_type": "point",
      "flags": {}
    },
    {
      "label": "point",
      "points": [
        [
          1402.090909090909,
          496.7272727272727
        ]
      ],
      "group_id": null,
      "shape_type": "point",
      "flags": {}
    },
    {
      "label": "point",
      "points": [
        [
          1429.3636363636363,
          1317.1818181818182
        ]
      ],
      "group_id": null,
      "shape_type": "point",
      "flags": {}
    },
    {
      "label": "point",
      "points": [
        [
          99.81818181818187,
          1378.5454545454547
        ]
      ],
      "group_id": null,
      "shape_type": "point",
      "flags": {}
    }
  ],

The current file contains the document's four corner points and the document's bounding rectangle...
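
A hedged sketch (not the repo's official converter): turn one Labelme JSON like the file above into a single YOLO-pose style label line "cls cx cy w h kx1 ky1 v1 kx2 ky2 v2 ...", with all values normalized to [0, 1]. The (x, y, visibility) triplet per keypoint is an assumption based on the 3*nkpt outputs of this repo's IKeypoint head and the visibility flag discussed in another issue below.

import json

def labelme_to_yolo(json_path, img_w, img_h, cls_id=0):
    with open(json_path) as f:
        data = json.load(f)
    box, kpts = None, []
    for shape in data['shapes']:
        if shape['shape_type'] == 'rectangle':
            (x1, y1), (x2, y2) = shape['points']
            box = (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))
        elif shape['shape_type'] == 'point':
            kpts.append(shape['points'][0])
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2 / img_w, (y1 + y2) / 2 / img_h
    w, h = (x2 - x1) / img_w, (y2 - y1) / img_h
    parts = [str(cls_id)] + [f'{v:.6f}' for v in (cx, cy, w, h)]
    for kx, ky in kpts:
        parts += [f'{kx / img_w:.6f}', f'{ky / img_h:.6f}', '1']  # 1 = visible (assumed convention)
    return ' '.join(parts)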

Error during training

not enough values to unpack (expected 3, got 0)
(attached WeChat screenshot: 微信图片_20221209210554)
Why does this problem occur? Could someone please explain?

Question about placing the data and label directories

After downloading, my widerface directory looks like the screenshot below.
(screenshot of the widerface directory layout)

The label directory provided in the readme looks like this:
(screenshot of the provided label directory)
Running the code, I found that the label file name and path are built from the image file name.

(screenshot of the relevant code)

It seems I need to manually place the labels into the corresponding widerface subfolders, or modify the code. Is that correct? Or is there a problem with my configuration file? I am using widerface.yaml as the config.
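
A hedged sketch of the manual placement: the dataloader typically derives each label path from the image path (swapping "images" for "labels"), so the flat label files can be copied into per-event subfolders that mirror the image tree. The folder names below are assumptions about the local layout, not the repo's documented structure.

import os
import shutil

images_root = 'data/widerface/train/images'          # <event>/<name>.jpg
flat_labels = 'downloads/yolov7-face-label/train'    # <name>.txt, all in one folder
labels_root = 'data/widerface/train/labels'

for event in os.listdir(images_root):
    for img_name in os.listdir(os.path.join(images_root, event)):
        txt = os.path.splitext(img_name)[0] + '.txt'
        src = os.path.join(flat_labels, txt)
        if os.path.exists(src):
            dst_dir = os.path.join(labels_root, event)
            os.makedirs(dst_dir, exist_ok=True)
            shutil.copy(src, os.path.join(dst_dir, txt))   # label mirrors the image subfolder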

convert to ncnn?

Hello, I have managed to train my model, but when I convert it to ncnn it is different from the example provided in the cpp folder.
I have tried converting from ONNX to ncnn and with pnnx, but the order and the outputs of the model changed. I used the script in the models folder and also the following implementation.

Script from models/export.py and convert to pnnx:
./pnnx model.pt inputshape=[1,3,640,640]

Output:

pnnxparam = best.torchscript.pnnx.param
pnnxbin = best.torchscript.pnnx.bin
pnnxpy = best.torchscript_pnnx.py
pnnxonnx = best.torchscript.pnnx.onnx
ncnnparam = best.torchscript.ncnn.param
ncnnbin = best.torchscript.ncnn.bin
ncnnpy = best.torchscript_ncnn.py
optlevel = 2
device = cpu
inputshape = [1,3,640,640]f32
inputshape2 = 
customop = 
moduleop = 
############# pass_level0
(pass_level0 printed a long list of intermediate tensor names, omitted here)
----------------

foldable_constant 490
foldable_constant 499
foldable_constant 603
foldable_constant 612
foldable_constant 713
foldable_constant 722
############# pass_level1
unknown Parameter value kind prim::Constant                              (repeated 103×)
unknown Parameter value kind prim::Constant of TensorType, t.dim = 4     (×6, interleaved)
no attribute value                                                       (×2)
############# pass_level2
############# pass_level3
############# pass_level4
custom op = prepacked::conv2d_clamp_run
############# pass_level5
############# pass_ncnn
arg_1  0    (repeated 94×)
ignore F.max_pool2d F.max_pool2d_35 param ceil_mode=False
ignore F.max_pool2d F.max_pool2d_35 param dilation=None
ignore F.max_pool2d F.max_pool2d_35 param kernel_size=(3,3)
ignore F.max_pool2d F.max_pool2d_35 param padding=(1,1)
ignore F.max_pool2d F.max_pool2d_35 param return_indices=False
ignore F.max_pool2d F.max_pool2d_35 param stride=(2,2)

And with the following custom code:

import torch
from models.experimental import attempt_load

# Load model
model = attempt_load(weights, map_location=device)  # load FP32 model
model.eval()
example = torch.rand(1, 3, 640, 640).to(device)
traced_script_module = torch.jit.trace(model, example)  # trace to TorchScript
traced_script_module.save("model.pt")

Output pnnx

pnnxparam = model.pnnx.param
pnnxbin = model.pnnx.bin
pnnxpy = model_pnnx.py
pnnxonnx = model.pnnx.onnx
ncnnparam = model.ncnn.param
ncnnbin = model.ncnn.bin
ncnnpy = model_ncnn.py
optlevel = 2
device = cpu
inputshape = [1,3,640,640]f32
inputshape2 = 
customop = 
moduleop = 
############# pass_level0
inline module = models.common.ADD
inline module = models.common.Concat
inline module = models.common.Conv
inline module = models.common.DWConvblock
inline module = models.common.ImplicitA
inline module = models.common.ImplicitM
inline module = models.common.Shuffle_Block
inline module = models.common.conv_bn_relu_maxpool
inline module = models.yolo.IKeypoint
inline module = models.common.ADD
inline module = models.common.Concat
inline module = models.common.Conv
inline module = models.common.DWConvblock
inline module = models.common.ImplicitA
inline module = models.common.ImplicitM
inline module = models.common.Shuffle_Block
inline module = models.common.conv_bn_relu_maxpool
inline module = models.yolo.IKeypoint
(pass_level0 printed a long list of intermediate tensor names, omitted here)
----------------

foldable_constant 243
foldable_constant 294
foldable_constant 345
foldable_constant 181
foldable_constant 396
foldable_constant 1439
foldable_constant 462
foldable_constant 513
foldable_constant 564
foldable_constant 615
foldable_constant 666
foldable_constant 717
foldable_constant 768
foldable_constant 819
foldable_constant 878
foldable_constant implicit.2
foldable_constant implicit.4
foldable_constant 1147
foldable_constant implicit.6
foldable_constant implicit.8
foldable_constant 1441
foldable_constant 1442
foldable_constant 1448
foldable_constant 1450
foldable_constant 1276
foldable_constant implicit.10
foldable_constant implicit.1
foldable_constant 1405
foldable_constant 1452
foldable_constant 1458
foldable_constant 1460
foldable_constant 1462
foldable_constant 1468
foldable_constant 1470
foldable_constant 1472
foldable_constant 1477
foldable_constant 1479
foldable_constant 1480
foldable_constant 1486
foldable_constant 1488
foldable_constant 1490
foldable_constant 1496
foldable_constant 1498
foldable_constant 1500
foldable_constant 1506
foldable_constant 1508
foldable_constant 1510
foldable_constant 1516
foldable_constant 1518
foldable_constant 1520
foldable_constant 1526
foldable_constant 1528
foldable_constant 1530
foldable_constant 1536
foldable_constant 1538
foldable_constant 1540
foldable_constant 1546
foldable_constant 1548
foldable_constant 1550
foldable_constant 1555
foldable_constant 1557
foldable_constant 1558
foldable_constant 1564
foldable_constant 1566
foldable_constant 1568
foldable_constant 1599
foldable_constant 1706
foldable_constant 1818
############# pass_level1
unknown Parameter value kind prim::Constant of TensorType, t.dim = 4     (repeated 56×)
no attribute value                                                       (×2)
unknown Parameter value kind prim::Constant of TensorType, t.dim = 5     (×9, interleaved)
unknown Parameter value kind prim::Constant of TensorType, t.dim = 1     (×9, interleaved)
############# pass_level2
############# pass_level3
############# pass_level4
############# pass_level5
############# pass_ncnn
select along batch axis 0 is not supported
select along batch axis 0 is not supported
select along batch axis 0 is not supported
ignore Crop select_0 param dim=0
ignore Crop select_0 param index=0
ignore Crop select_1 param dim=0
ignore Crop select_1 param index=1
ignore Crop select_2 param dim=0
ignore Crop select_2 param index=2

Using onnx to ncnn:

Some errors occurs when converting, but sometimes you can still download the model and [fix manually](https://zhuanlan.zhihu.com/p/93017149). Error messages are following:

Shape not supported yet!
Gather not supported yet!
# axis=0
Unknown data type 0
Unknown data type 0
Unknown data type 0
(the six lines above are repeated 11×)
Shape not supported yet!    (×6)

What do I need to modify in order to convert the model and use it with the provided example?

I already tested with FeiGeChuanShu/ncnn_Android_face#3 (comment), but it does not work.

Anchor settings

Hi @derronqi, could you please advise how to set the anchors in yolov7-tiny-face.yaml according to the keypoints?
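
A hedged note: in YOLO-style detectors anchors are normally derived from box widths and heights rather than keypoints. If this fork keeps the upstream autoanchor utilities (an assumption; the function name is carried over from the YOLOv5/YOLOv7 code base), anchors for yolov7-tiny-face.yaml could be recomputed like this and pasted into the yaml's anchors section.

from utils.autoanchor import kmean_anchors  # assumed to exist in this fork

# k-means anchors from the dataset's box statistics; 9 anchors, 3 per detection layer
anchors = kmean_anchors(dataset='data/widerface.yaml', n=9, img_size=640, thr=4.0, gen=1000)
print(anchors)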

Convert model to tflite int8

I'm trying to convert the yolov7s-face.pt model to TFLite int8, but I got this:

RuntimeError: tensorflow/lite/kernels/conv.cc:357 input_channel % filter_input_channel != 0 (1 != 0)Node number 2 (CONV_2D) failed to prepare.

RuntimeError                              Traceback (most recent call last)
[<ipython-input-8-d66c99050c84>](https://localhost:8080/#) in <module>
     61 converter.experimental_new_quantizer = True
     62 
---> 63 tflite_model = converter.convert()
     64 open("/content/best-int8.tflite", "wb").write(tflite_model)
     65 #rep_data_gen(dataset_path)

11 frames
[/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/optimize/calibrator.py](https://localhost:8080/#) in _feed_tensors(self, dataset_gen, resize_input)
    127                                      signature_key)
    128           else:
--> 129             self._calibrator.Prepare([list(s.shape) for s in input_array])
    130         else:
    131           if signature_key is not None:

RuntimeError: tensorflow/lite/kernels/conv.cc:357 input_channel % filter_input_channel != 0 (1 != 0)Node number 2 (CONV_2D) failed to prepare.

I converted to ONNX using export.py from the yolov7-face repository, and then generated the TensorFlow SavedModel like this:

import onnx
from onnx_tf.backend import prepare
import torch
import tensorflow as tf

outpb = "/content/yolov7s-face-pb"
input_names=['input']
output_names=['output']
batch_size = 1
img_size = [640, 640]

img = torch.zeros(batch_size, 3, *img_size)  # dummy (1, 3, 640, 640) tensor; not actually used below

onnx_model = onnx.load("/content/yolov7s-face.onnx")  # load onnx model
tf_rep = prepare(onnx_model)  # prepare tf representation
tf_rep.export_graph(outpb)  # export the model

model = tf.keras.models.load_model(outpb)

Finally, following the tensorflow documentation:

import cv2
import os
import numpy as np
import tensorflow as tf

NORM_H =640
NORM_W =640
BATCH_SIZE =1
tf_model_path = '/content/yolov7s-face-pb'
dataset_path ='/content/Dataset_face/train'

class BatchGenerator():
    def __init__(self, dataset_path, input_imgsz ,batch_size):
        self.dataset_path=dataset_path
        self.input_imgsz=input_imgsz
        self.batch_size=batch_size
        
    def __call__(self):
        a = []
        anns = [os.path.join(self.dataset_path ,i) for i in os.listdir(self.dataset_path) if i.endswith('.jpg')]
        for i in range(160):
            file_name = anns[i]
            img = cv2.imread(file_name)
            img = cv2.resize(img, (self.input_imgsz, self.input_imgsz))
            img = img / 255.0
            img = img.astype(np.float32, copy=False)
            a.append(img)
        a = np.array(a)
        print(a.shape) # a is np array of 160 3D images
        img = tf.data.Dataset.from_tensor_slices(a).batch(1)
        for i in img.take(BATCH_SIZE):
            print(i)
            yield [i] 

converter = tf.lite.TFLiteConverter.from_saved_model(tf_model_path)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
classe_dataset =BatchGenerator(dataset_path, NORM_H, BATCH_SIZE)
converter.representative_dataset = classe_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8  # or tf.int8
converter.inference_output_type = tf.uint8  # or tf.int8
converter.experimental_new_quantizer = True

tflite_model = converter.convert()
open("/content/best-int8.tflite", "wb").write(tflite_model)
#rep_data_gen(dataset_path)

By the way, I can export TFLite fp32 normally.
Any idea what is going on?
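
A hedged guess, not a confirmed fix: onnx-tf normally keeps the ONNX NCHW input layout in the exported SavedModel, so the representative dataset may need to yield (1, 3, 640, 640) tensors instead of (1, 640, 640, 3). Whether this is the actual cause of the CONV_2D prepare error above is an assumption.

import numpy as np

def to_nchw_batch(img_hwc):
    """float32 HxWx3 image in [0, 1] -> (1, 3, H, W) batch for a channels-first SavedModel."""
    return np.expand_dims(np.transpose(img_hwc, (2, 0, 1)), axis=0)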

How to achieve your performance

Hi, I've trained your yolov7-tiny model from scratch with your default config. However, my best result on the WiderFace hard set is only 81.2. What should I do?

How to train with custom dataset by using the pretrained model?

Hello,
I would like to ask a few questions about this github repo.

  1. What do I need to do to train using the pretrained model?
  2. How can I create my own custom dataset other than Wider-Face? Is there an annotation tool you recommend that does annotation in the same format?
  3. What should I do to train with the dataset I created?
  4. How can I convert the model to onnx after training?

Negative samples

Is it possible to use negative samples during training?
Looks like label -1 doesn't work for this purpose.

yolov5 vs yolov7 faces - performance improvement?

Thanks for porting the face detector to yolov7.

So far, it is not clear to me whether the yolov7 model performs better than the yolov5 one for faces. What is your view?

What is the best performance you have achieved so far with yolov7 on the WiderFace dataset, and at how many GFLOPs?

Training with 2 keypoints

Hello, I want to use this to detect fish with two keypoints: the head point and the centroid point. Which parts of the code need to be changed? Do the model head, the loss, and the datasets all need to be modified?
I only changed the datasets code and trained, and found the head-point predictions were wrong. Do the keypoint encoding/decoding and the loss also need to be adjusted?

Preprocessing custom Dataset

Hello,

Really great work.

I was wondering how to process a custom dataset in order to use this project. Since the label files provided have 20 columns instead of 15, I am wondering what each column stands for (I think the 1st is the class and the next 4 are the normalized box values, but I can't figure out the rest, especially as there are values > 1, e.g. 2, in some columns).

A preprocessing script for a custom dataset or an explanation of the meaning of each of the 20 columns would be much appreciated.

God bless!
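
A hedged reading, not an official answer: based on the IKeypoint head quoted in the next issue (no_det = nc + 5, no_kpt = 3 * nkpt with nkpt = 5), the 20 columns plausibly decode as class, box, and five (x, y, visibility) triplets, as in the sketch below.

def parse_label_line(line):
    vals = [float(v) for v in line.split()]
    cls_id = int(vals[0])
    box = vals[1:5]                                          # normalized cx, cy, w, h
    kpts = [tuple(vals[i:i + 3]) for i in range(5, 20, 3)]   # 5 x (x, y, visibility) -- assumed layout
    return cls_id, box, kpts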

About IKeypoint head issue

Hi @derronqi, thank you for sharing such a great project. I am confused by this line
'self.no_kpt = 3*self.nkpt ## number of outputs per anchor for keypoints' in the yolo.py file.

nkpt represents the number of face keypoints, and every keypoint is represented by (x, y), so no_kpt should equal 2*nkpt. Why did you write 3*self.nkpt? Would you mind explaining it? Thank you.

class IKeypoint(nn.Module):
    stride = None  # strides computed during build
    export = False  # onnx export

    def __init__(self, nc=80, anchors=(), nkpt=5, ch=(), inplace=True, dw_conv_kpt=False):  # detection layer
        super(IKeypoint, self).__init__()
        self.nc = nc  # number of classes
        self.nkpt = nkpt
        self.dw_conv_kpt = dw_conv_kpt
        self.no_det = (nc + 5)  # number of outputs per anchor for box and class: x, y, w, h, confidence, class1 conf, class2 conf ...
        self.no_kpt = 3*self.nkpt  # number of outputs per anchor for keypoints
        self.no = self.no_det + self.no_kpt
        self.nl = len(anchors)  # number of detection layers
        self.na = len(anchors[0]) // 2  # number of anchors
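
One hedged reading, not the author's own explanation: the third value per keypoint is most likely a per-keypoint confidence/visibility score, so each keypoint carries (x, y, score) rather than just (x, y). This is consistent with the stride-3 keypoint-score slice ps[:, 8::3] quoted in a later issue (for nc == 1); the sketch below illustrates that layout.

def split_prediction(p, nc=1, nkpt=5):
    det = p[:5 + nc]                    # x, y, w, h, objectness, class scores
    kpt = p[5 + nc:].reshape(nkpt, 3)   # per keypoint: x, y, score (assumed layout)
    return det, kpt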

Pretrained model

Hello, where can I obtain the pretrained weights you used when training yolov7s-face?

Multiple keypoints

How do I modify the code to detect a different number of keypoints?

Cpp Implementation

Has anyone implemented the ONNX model or TRT model using C++? The link given in the readme about cpp is not working for me. An OpenCV DNN implementation would also be useful for me.

export to onnx model

Why is kpt_label 4 in models/export.py lines 74 and 75? Shouldn't it be 5 for five face keypoints?

nms = models.common.NMS(conf=0.01, kpt_label=4)
nms_export = models.common.NMS_Export(conf=0.01, kpt_label=4)

dynamic key points

Hello Sirs:
Thank you for your great work, but I found that dynamic keypoint detection is not actually implemented; there are many bugs when the number of keypoints is changed.

Best regards!

pt convert to onnx, onnx convert to ncnn

1. For the pt-to-onnx conversion, my command is:
python export.py --weights weights/yolov7-face.pt --img 640 --batch 1 --grid --simplify
The exported model contains "Slice" structures.
2. Because of the "Slice" structures in the model from step 1, converting to ncnn keeps failing.
May I ask what your onnx export command looks like?

Asking for help

Hello, when I run test_widerface.py on a .pt file trained with v7-e6 or d6,
I get an error. What could be the reason? I will post the error screenshot later.

Speed

Why is yolov7-face inference slower than yolov5-face?

Performance for Mask Images

Hello. I am currently using a TensorRT-converted version of the yolov7s-face model.
Detection performance on mask-wearing images is good for distant faces, but for a frontal, close-up mask-wearing image (a composition like an ID photo) the score is only about 0.5. Is there a reason for this?
Thank you

Retraining of Yolov7 Model on Given Dataset

@derronqi Thank you for the very good work. I am retraining your model with the given dataset. I am running the following command:

!python3 train.py --device 0 --data data/widerface.yaml --img 640 640 --cfg cfg/yolov7-face.yaml --weights yolov7-w6.pt --cache-images --hyp data/hyp.face.yaml --batch-size 8

But I am getting the following logs:

     0/299     7.46G    0.1189   0.05726         0   0.05056  0.008458    0.2352        49       640         0         0         0         0         0  0.006643         0
     1/299     7.43G    0.1111   0.04434         0   0.03364  0.007088    0.1962        40       640         0         0         0         0         0  0.007927         0
     2/299     7.43G   0.08521   0.03891         0   0.01996  0.005892      0.15       107       640         0         0         0         0         0   0.01211         0
     3/299     7.43G   0.07363    0.0367         0   0.01529   0.00532    0.1309        50       640         0         0         0         0         0   0.01354         0
     4/299     7.43G   0.06879   0.03564         0   0.01332  0.004998    0.1227       115       640         0         0         0         0         0   0.01686         0
     5/299     7.43G   0.06605   0.03398         0   0.01222  0.004732     0.117       184       640         0         0         0         0         0   0.01902         0
     6/299     7.43G   0.06441   0.03353         0   0.01157  0.004533     0.114        26       640         0         0         0         0         0   0.01897         0
     7/299     7.43G    0.0632    0.0342         0   0.01113  0.004326    0.1129       108       640         0         0         0         0         0   0.01888         0
     8/299     7.43G   0.06179   0.03387         0   0.01074  0.004169    0.1106       177       640         0         0         0         0         0   0.01998         0
     9/299     7.43G   0.06074   0.03323         0   0.01038  0.004025    0.1084        66       640         0         0         0         0         0   0.02078         0
    10/299     7.43G   0.06036   0.03261         0   0.01021  0.003909    0.1071        69       640         0         0         0         0         0   0.01992         0
    11/299     7.43G   0.05991   0.03347         0  0.009964  0.003877    0.1072        64       640         0         0         0         0         0   0.02125         0
    12/299     7.43G   0.05891   0.03231         0  0.009774  0.003759    0.1048        73       640         0         0         0         0         0   0.02161         0
    13/299     7.43G   0.05861    0.0327         0  0.009598   0.00373    0.1046        73       640         0         0         0         0         0   0.02158         0
    14/299     7.43G   0.05811   0.03283         0  0.009474   0.00368    0.1041       106       640         0         0         0         0         0   0.02172         0
    15/299     7.43G   0.05784   0.03212         0  0.009277  0.003649    0.1029        58       640         0         0         0         0         0   0.02172         0
    16/299     7.43G   0.05803   0.03266         0  0.009239  0.003596    0.1035        52       640         0         0         0         0         0    0.0223         0
    17/299     7.43G   0.05772   0.03241         0  0.009141   0.00359    0.1029        71       640         0         0         0         0         0   0.02173         0
    18/299     7.43G   0.05722   0.03202         0  0.008997  0.003549    0.1018        75       640         0         0         0         0         0   0.02224         0
    19/299     7.43G   0.05693   0.03212         0  0.008923  0.003521    0.1015        22       640         0         0         0         0         0   0.02232         0
    20/299     7.43G   0.05669   0.03197         0  0.008885  0.003536    0.1011        47       640         0         0         0         0         0   0.02232         0
    21/299     7.43G   0.05651   0.03232         0  0.008761  0.003488    0.1011        79       640         0         0         0         0         0   0.02256         0
    22/299     7.43G   0.05688   0.03258         0  0.008749  0.003523    0.1017        42       640         0         0         0         0         0    0.0224         0


Accuracy is not improving after 22 epochs. Can you provide a README describing how to retrain your network on the given dataset? Thanks.

run test_widerface.py failed

Traceback (most recent call last):
  File "test_widerface.py", line 134, in <module>
    detect(opt=opt)
  File "test_widerface.py", line 43, in detect
    model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.parameters())))  # run once
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/liaojiaqi/yolov7-face/models/yolo.py", line 362, in forward
    return self.forward_once(x, profile)  # single-scale inference, train
  File "/liaojiaqi/yolov7-face/models/yolo.py", line 396, in forward_once
    x = m(x)  # run
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/liaojiaqi/yolov7-face/models/yolo.py", line 65, in forward
    if self.nkpt is None or self.nkpt==0:
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1178, in __getattr__
    type(self).__name__, name))
AttributeError: 'Detect' object has no attribute 'nkpt'

Hello!
I simply ran python test_widerface.py and it failed. Could you please tell me how to fix this? Which script should I use to run inference on some images?
Thank you.

A small bug in test.py

if len(wandb_images) < log_imgs and wandb_logger.current_epoch > 0:  # Check for test operation
    if wandb_logger.current_epoch % wandb_logger.bbox_interval == 0:
        box_data = [{"position": {"minX": xyxy[0], "minY": xyxy[1], "maxX": xyxy[2], "maxY": xyxy[3]},
                     "class_id": int(cls),
                     "box_caption": "%s %.3f" % (names[cls], conf),
                     "scores": {"class_score": conf},
                     "domain": "pixel"} for *xyxy, conf, cls in pred.tolist()]
        print(box_data)
        boxes = {"predictions": {"box_data": box_data, "class_labels": names}}  # inference-space
        wandb_images.append(wandb_logger.wandb.Image(img[si], boxes=boxes, caption=path.name))

Hello author, in `for *xyxy, conf, cls in pred.tolist()` here, pred.tolist() should be pred[:, :6].tolist(), don't you think? One more question: each landmark in the training set has three attributes. Is the third one 1 for visible and 0 for not visible? Say I do three-point detection and one of the points is not visible; should the third attributes of the landmarks then be set to 1, 1, 0? Looking forward to your patient answer.
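
A sketch of the one-line change the reporter proposes: keep only the first six columns (box, confidence, class) when building the W&B box data.

# Reporter's suggested fix (sketch): slice pred to its first six columns
# (x1, y1, x2, y2, conf, cls) so the keypoint values are not unpacked.
box_data = [{"position": {"minX": xyxy[0], "minY": xyxy[1], "maxX": xyxy[2], "maxY": xyxy[3]},
             "class_id": int(cls),
             "box_caption": "%s %.3f" % (names[cls], conf),
             "scores": {"class_score": conf},
             "domain": "pixel"} for *xyxy, conf, cls in pred[:, :6].tolist()]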

Changing the yolov7-face network structure

I am modifying the yolov7-face backbone and want to replace it with MobileNet, but I get an error:
File "H:\pycharmproject\yolov7-face-main\models\yolo.py", line 125, in __init__
    self.no = self.no_det + self.no_kpt
TypeError: unsupported operand type(s) for +: 'int' and 'list'
Strangely, when I change the attention mechanism anywhere there is no error; the error only appears when I replace the backbone. I hope someone can point me in the right direction.

utils/loss.py does not seem to support multi-class problems

Hello! I noticed this expression around line 170 of utils/loss.py: pkpt_score = ps[:, 8::3]
It seems the multi-class case is not handled? When the number of classes is greater than 1, the keypoint confidences should start not at 8 but at 7 + num_class.
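
A hedged sketch of the generalization the reporter suggests (num_classes is assumed to be available in scope; this is not a change the author has confirmed):

# The keypoint scores begin right after the 4 box values, 1 objectness value and
# num_classes class scores, i.e. at index 5 + num_classes + 2, which reduces to 8
# when num_classes == 1.
kpt_score_start = 5 + num_classes + 2
pkpt_score = ps[:, kpt_score_start::3]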

Onnx File

Could you please share the ONNX files? The link you gave in the readme redirects me to pan.baidu, but I cannot access Baidu. If you share the ONNX versions of the models on Google Drive, I would appreciate it. Thanks in advance.

Please fix and provide an example of usage via torch hub

It would be nice to use this project as a library via torch hub, as described here:
https://docs.ultralytics.com/tutorials/pytorch-hub/

There are torch hub files in the project, but the implied example is not working:

#!/bin/python

import torch

# Model
model = torch.hub.load('derronqi/yolov7-face', 'yolov7', pretrained=True)

# Images
imgs = ['https://ultralytics.com/images/zidane.jpg']  # batch of images

# Inference
results = model(imgs)

# Results
results.print()
results.show()

results.xyxy[0]  # img1 predictions (tensor)
results.pandas().xyxy[0]  # img1 predictions (pandas)
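
Until the hub entry point is fixed, a hedged workaround is to load a checkpoint directly with the repo's own loader from a local clone (the weight path below is illustrative):

import torch
from models.experimental import attempt_load

# Load the face-detection weights directly instead of going through torch.hub
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = attempt_load('yolov7-face.pt', map_location=device)  # illustrative path
model.eval()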
