
license-plate-detector's Introduction

License Plate Detection with Yolov5

More than a year has passed since the last update of our license plate detection model, and many fast, accurate detectors have been proposed in the meantime. We have now trained a plate detection model with the YOLOv5 object detection algorithm; in our tests both the detection quality and the applicability are noticeably better, and the range of supported models is broader.

Our open-source detection algorithm has gone through several iterations, and the earlier versions have gradually been retired for the sake of detection speed and accuracy. We started with plate detection based on LBP and Haar features (those interested can refer to the train-detector repository, https://github.com/openalpr/train-detector), and later moved to deep-learning approaches, including a MobileNet-SSD based detector (https://gitee.com/zeusees/Mobilenet-SSD-License-Plate-Detection) and a RetinaFace-based detector (https://gitee.com/zeusees/license-plate-detector). Please use the new model for testing whenever possible.

This version of the detection model was trained on a combination of the CCPD dataset and our own data, which allows it to support more plate types.

PyTorch Model Test

Clone and install
  1. git clone https://github.com/zeusees/License-Plate-Detector.git

  2. PyTorch 1.7.0

  3. Python 3.8

  4. python detect_plate.py (a minimal programmatic sketch follows below)
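
If you would rather call the detector from your own Python code than run the detect_plate.py entry point above, a minimal sketch is shown below. It reuses the load_model and detect_one helpers that detect_plate.py itself uses (both names appear in the code quoted in the issues further down); treat the exact import path and the file names as assumptions.

    import torch
    # load_model and detect_one are the helper names used inside this repo's detect_plate.py;
    # importing them like this is an assumption about how the script is organized.
    from detect_plate import load_model, detect_one

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = load_model('weights/plate_det_model.pt', device)   # loads the FP32 YOLOv5 checkpoint
    detect_one(model, 'data/images/test.jpg', device)          # detect plates in one image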

NCNN Model Test (C++)

Source Code Compile
  1. cd Prj-ncnn

  2. cmake .

  3. make

Supported Plate Types

  • Blue single-row plates
  • Yellow single-row plates
  • Green new-energy plates and civil aviation plates
  • Black single-row plates
  • White police, military, and armed-police plates
  • Yellow double-row plates
  • Green agricultural vehicle plates
  • White double-row military plates

Test Results

References

license-plate-detector's People

Contributors

jinkham, szad670401


license-plate-detector's Issues

Model parameters

Hello, I see that the License-Plate-Detector/weights/yolov5s.pt model has about 13,918k parameters. Does that mean inference is very fast? My dataset currently labels each target with the coordinates of its four corner points in clockwise order; what changes might I need to make?

Hello

I would also like to train it on my own dataset, if possible, to build another model.

input size

When you train or test, do you only change the input size, or are other changes needed?

How do I convert the model to ONNX?

Hello, I have been studying this on my own and modified the code myself, but unfortunately it keeps raising errors. I changed the main block of detect_plate.py as follows:

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', nargs='+', type=str, default='runs/train/exp/weights/last.pt', help='model.pt path(s)')
    parser.add_argument('--image', type=str, default='data/images/test.jpg', help='source')  # file/folder, 0 for webcam
    parser.add_argument('--img-size', type=int, default=640, help='inference size (pixels)')
    opt = parser.parse_args()
    print(opt)
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    # print(opt.weights)
    model = torch.load(opt.weights[0], map_location=device)['model'].float().fuse().eval()
    # model = torch.load(opt.weights)

    batch_size = 1                              # batch size
    input_shape = (3, 800, 800)                 # input shape

    # set the model to inference mode
    # torch_model.eval()

    x = torch.randn(batch_size,*input_shape)    # generate a dummy input tensor
    export_onnx_file = "test.onnx"              # target ONNX file name
    torch.onnx.export(model,
                        x,
                        export_onnx_file,
                        opset_version=10,
                        do_constant_folding=True,   # whether to apply constant folding
                        input_names=["input"],      # input name
                        output_names=["output"])

    quit()
    for filename in f:
        print(filename)
        detect_one(model, filename, device)

However, it fails with the following error:

Namespace(image='data/images/test.jpg', img_size=640, weights=['weights/plate_det_model.pt'])
Fusing layers... 
[W NNPACK.cpp:80] Could not initialize NNPACK! Reason: Unsupported hardware.
/home/luke/Downloads/License-Plate-Detector/models/yolo.py:62: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if self.grid[i].shape[2:4] != x[i].shape[2:4]:
Traceback (most recent call last):
  File "detect_plate.py", line 185, in <module>
    torch.onnx.export(model,
  File "/home/luke/miniconda3/lib/python3.8/site-packages/torch/onnx/__init__.py", line 225, in export
    return utils.export(model, args, f, export_params, verbose, training,
  File "/home/luke/miniconda3/lib/python3.8/site-packages/torch/onnx/utils.py", line 85, in export
    _export(model, args, f, export_params, verbose, training, input_names, output_names,
  File "/home/luke/miniconda3/lib/python3.8/site-packages/torch/onnx/utils.py", line 632, in _export
    _model_to_graph(model, args, verbose, input_names,
  File "/home/luke/miniconda3/lib/python3.8/site-packages/torch/onnx/utils.py", line 417, in _model_to_graph
    graph = _optimize_graph(graph, operator_export_type,
  File "/home/luke/miniconda3/lib/python3.8/site-packages/torch/onnx/utils.py", line 168, in _optimize_graph
    torch._C._jit_pass_onnx_prepare_inplace_ops_for_onnx(graph)
RuntimeError: 

aten::view(Tensor(a) self, int[] size) -> (Tensor(a)):
Expected a value of type 'Tensor' for argument 'self' but instead found type 'None'.

Could you give me some pointers?
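
For readers who hit the same trace error, here is a hedged sketch of the export pattern commonly used with YOLOv5-family models. This is not this repository's documented export path; the `export` flag on the detection head (guarded with hasattr), the opset version, and the 640x640 input size are all assumptions.

    import torch

    # assumes `model` was loaded as in the snippet above:
    # model = torch.load(weights, map_location='cpu')['model'].float().fuse().eval()
    det_head = model.model[-1]            # the Detect layer in YOLOv5-style models
    if hasattr(det_head, 'export'):       # some YOLOv5 heads expose an export flag that
        det_head.export = True            # returns raw feature maps; no-op if absent here

    model = model.to('cpu').eval()
    dummy = torch.randn(1, 3, 640, 640)   # dummy input; match the size you trained with
    with torch.no_grad():
        _ = model(dummy)                  # warm-up pass so grids/buffers are initialized

    torch.onnx.export(model, dummy, "plate_det.onnx",
                      opset_version=11,
                      do_constant_folding=True,
                      input_names=["input"],
                      output_names=["output"])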

Where is the file "lpd.caffemodel"?

There is a line in "demo_license.py" (line 12): caffe_model='lpd.caffemodel',
but I can't find the file "lpd.caffemodel"; there is only a file called "lpr.caffemodel". Are these two files the same, just with a misspelled name?

NCNN model accuracy is much lower than the PyTorch model

The PyTorch version detects most plates, but the NCNN version is not as good. I tried different thresholds and still could not match the PyTorch results. Was the conversion done via PyTorch -> ONNX -> NCNN?

Linker errors when building on a Raspberry Pi

[100%] Linking CXX executable LPDetector
/usr/bin/ld: CMakeFiles/LPDetector.dir/LPDetector.cpp.o: in function `Detector::Detector()':
LPDetector.cpp:(.text+0x5c): undefined reference to `ncnn::Net::Net()'
/usr/bin/ld: CMakeFiles/LPDetector.dir/LPDetector.cpp.o: in function `Detector::Detector(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool)':
LPDetector.cpp:(.text+0xfc): undefined reference to `ncnn::Net::Net()'
/usr/bin/ld: CMakeFiles/LPDetector.dir/LPDetector.cpp.o: in function `Detector::Init(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)':
LPDetector.cpp:(.text+0x178): undefined reference to `ncnn::Net::load_param(char const*)'
/usr/bin/ld: LPDetector.cpp:(.text+0x1a0): undefined reference to `ncnn::Net::load_model(char const*)'
/usr/bin/ld: CMakeFiles/LPDetector.dir/LPDetector.cpp.o: in function `Detector::Detect(cv::Mat&, std::vector<bbox, std::allocator >&)':
LPDetector.cpp:(.text+0x23c): undefined reference to `ncnn::Mat::from_pixels_resize(unsigned char const*, int, int, int, int, int, ncnn::Allocator*)'
/usr/bin/ld: LPDetector.cpp:(.text+0x254): undefined reference to `ncnn::Mat::substract_mean_normalize(float const*, float const*)'
/usr/bin/ld: LPDetector.cpp:(.text+0x2c4): undefined reference to `ncnn::Net::create_extractor() const'
/usr/bin/ld: LPDetector.cpp:(.text+0x2d4): undefined reference to `ncnn::Extractor::set_light_mode(bool)'
/usr/bin/ld: LPDetector.cpp:(.text+0x2e4): undefined reference to `ncnn::Extractor::set_num_threads(int)'
/usr/bin/ld: LPDetector.cpp:(.text+0x2f8): undefined reference to `ncnn::Extractor::input(int, ncnn::Mat const&)'
/usr/bin/ld: LPDetector.cpp:(.text+0x330): undefined reference to `ncnn::Extractor::extract(char const*, ncnn::Mat&)'
/usr/bin/ld: LPDetector.cpp:(.text+0x344): undefined reference to `ncnn::Extractor::extract(char const*, ncnn::Mat&)'
/usr/bin/ld: LPDetector.cpp:(.text+0x358): undefined reference to `ncnn::Extractor::extract(char const*, ncnn::Mat&)'
/usr/bin/ld: CMakeFiles/LPDetector.dir/LPDetector.cpp.o: in function `Detector::Release()':
LPDetector.cpp:(.text._ZN8Detector7ReleaseEv[_ZN8Detector7ReleaseEv]+0x34): undefined reference to `ncnn::Net::~Net()'
collect2: error: ld returned 1 exit status
make[2]: *** [CMakeFiles/LPDetector.dir/build.make:106: LPDetector] Error 1
make[1]: *** [CMakeFiles/Makefile2:76: CMakeFiles/LPDetector.dir/all] Error 2
make: *** [Makefile:84: all] Error 2

non_max_suppression_face: does this function not exist?

Line 14 of test.py:
from utils.general import coco80_to_coco91_class, check_dataset, check_file, check_img_size, box_iou, \
    non_max_suppression, scale_coords, xyxy2xywh, xywh2xyxy, set_logging, increment_path, non_max_suppression_face
But I can't find this function anywhere in the utils folder.

Getting zero-valued output when I use the 'lpr' model

Hi, I don't have a Caffe environment, so I used opencv.dnn to load the lpr caffemodel, but the output of forward inference is all zeros (out = net.forward()). However, the second-to-last layers (mbox_loc, mbox_priorbox, mbox_conf_flattern) do have non-zero values. I have no idea what is going wrong; could you do me a favor and guide me to run this project successfully? Thanks!
BTW, my PC environment: Anaconda3 with Python 3.7, OpenCV 4.1.1

Has the Prj-ncnn version been removed?

  cd Prj-ncnn

  cmake .

  make

I see the tutorial still includes this part, but it can no longer be found in the code. Is the NCNN version no longer maintained?

Is it possible to output only the coordinates of the four corner points?

Hello! I'm a sophomore, and on my school's RoboMaster team I'm responsible for detecting armor plates with neural networks. With the YOLO family, the final bbox does not fit the armor plate outline well, which causes large errors later when we do pose estimation with PnP. A keypoint-based approach that returns the four corner points of the armor plate should work much better, and it looks like your project can do this. What would I need to change? Is modifying the loss and the head enough?

Multi-class model?

Isn't this model multi-class? Is it possible to train a multi-class plate model covering blue plates, embassy plates, police plates, and so on?

Dataset

Could you share the dataset? It is mainly for experiments.

Running test during training fails when training with multiple classes

Starting training for 300 epochs...

 Epoch   gpu_mem       box       obj       cls  landmark     total   targets  img_size
 0/299     3.91G     0.059   0.01828   0.02349      0.12    0.2208         6       640: 100%|█████████████████████████████████████████████████████████████████████| 95/95 [00:52<00:00,  1.82it/s]

 Epoch   gpu_mem       box       obj       cls  landmark     total   targets  img_size
 1/299     3.92G   0.05078   0.01312   0.01976   0.09381    0.1775         2       640: 100%|█████████████████████████████████████████████████████████████████████| 95/95 [00:50<00:00,  1.89it/s]

 Epoch   gpu_mem       box       obj       cls  landmark     total   targets  img_size
 2/299     3.92G   0.05082   0.01197   0.01861   0.09338    0.1748         2       640: 100%|█████████████████████████████████████████████████████████████████████| 95/95 [00:48<00:00,  1.96it/s]
           Class      Images     Targets           P           R      [email protected]  [email protected]:.95:   0%|                                                                            | 0/10 [00:00<?, ?it/s]

Traceback (most recent call last):
  File "train.py", line 516, in
    train(hyp, opt, device, tb_writer, wandb)
  File "train.py", line 349, in train
    log_imgs=opt.log_imgs if wandb else 0)
  File "/home/wyw/License-Plate-Detector/test.py", line 121, in test
    output = non_max_suppression_plate(inf_out, conf_thres=conf_thres, iou_thres=iou_thres, labels=lb)
  File "/home/wyw/License-Plate-Detector/utils/general.py", line 424, in non_max_suppression_plate
    x = torch.cat((box[i], x[i, j + 13, None], x[:, 5:13] ,j[:, None].float()), 1)
RuntimeError: Sizes of tensors must match except in dimension 0. Got 2016 and 3248 (The offending index is 2)

Following the `PyTorch Model Test` setup, running `python detect_plate.py` raises an error

1. Running it directly after building the source raises an error (reported as a screenshot, not reproduced here).

2. After fixing the file path, running it raises the following error:

D:\InstallationDir\anaconda\python.exe E:/lxx/workProjects/uniappHyperLPR/License-Plate-Detector/detect_plate.py
Namespace(weights='runs/train/exp/weights/last.pt', image='data/images/test.jpg', img_size=640)
Traceback (most recent call last):
  File "D:\InstallationDir\anaconda\lib\site-packages\urllib3\connectionpool.py", line 700, in urlopen
    self._prepare_proxy(conn)
  File "D:\InstallationDir\anaconda\lib\site-packages\urllib3\connectionpool.py", line 994, in _prepare_proxy
    conn.connect()
  File "D:\InstallationDir\anaconda\lib\site-packages\urllib3\connection.py", line 364, in connect
    conn = self._connect_tls_proxy(hostname, conn)
  File "D:\InstallationDir\anaconda\lib\site-packages\urllib3\connection.py", line 501, in _connect_tls_proxy
    socket = ssl_wrap_socket(
  File "D:\InstallationDir\anaconda\lib\site-packages\urllib3\util\ssl_.py", line 453, in ssl_wrap_socket
    ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls)
  File "D:\InstallationDir\anaconda\lib\site-packages\urllib3\util\ssl_.py", line 495, in _ssl_wrap_socket_impl
    return ssl_context.wrap_socket(sock)
  File "D:\InstallationDir\anaconda\lib\ssl.py", line 500, in wrap_socket
    return self.sslsocket_class._create(
  File "D:\InstallationDir\anaconda\lib\ssl.py", line 1040, in _create
    self.do_handshake()
  File "D:\InstallationDir\anaconda\lib\ssl.py", line 1309, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:1129)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\InstallationDir\anaconda\lib\site-packages\requests\adapters.py", line 440, in send
    resp = conn.urlopen(
  File "D:\InstallationDir\anaconda\lib\site-packages\urllib3\connectionpool.py", line 785, in urlopen
    retries = retries.increment(
  File "D:\InstallationDir\anaconda\lib\site-packages\urllib3\util\retry.py", line 592, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='api.github.com', port=443): Max retries exceeded with url: /repos/ultralytics/yolov5/releases/latest (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)')))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:\lxx\workProjects\uniappHyperLPR\License-Plate-Detector\utils\google_utils.py", line 25, in attempt_download
    response = requests.get(f'https://api.github.com/repos/{repo}/releases/latest').json()  # github api
  File "D:\InstallationDir\anaconda\lib\site-packages\requests\api.py", line 75, in get
    return request('get', url, params=params, **kwargs)
  File "D:\InstallationDir\anaconda\lib\site-packages\requests\api.py", line 61, in request
    return session.request(method=method, url=url, **kwargs)
  File "D:\InstallationDir\anaconda\lib\site-packages\requests\sessions.py", line 529, in request
    resp = self.send(prep, **send_kwargs)
  File "D:\InstallationDir\anaconda\lib\site-packages\requests\sessions.py", line 645, in send
    r = adapter.send(request, **kwargs)
  File "D:\InstallationDir\anaconda\lib\site-packages\requests\adapters.py", line 517, in send
    raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='api.github.com', port=443): Max retries exceeded with url: /repos/ultralytics/yolov5/releases/latest (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)')))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:\lxx\workProjects\uniappHyperLPR\License-Plate-Detector\detect_plate.py", line 172, in <module>
    model = load_model(opt.weights, device)
  File "E:\lxx\workProjects\uniappHyperLPR\License-Plate-Detector\detect_plate.py", line 25, in load_model
    model = attempt_load(weights, map_location=device)  # load FP32 model
  File "E:\lxx\workProjects\uniappHyperLPR\License-Plate-Detector\models\experimental.py", line 117, in attempt_load
    attempt_download(w)
  File "E:\lxx\workProjects\uniappHyperLPR\License-Plate-Detector\utils\google_utils.py", line 30, in attempt_download
    tag = subprocess.check_output('git tag', shell=True).decode('utf-8').split('\n')[-2]
IndexError: list index out of range

Process finished with exit code 1
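
For readers who hit the same failure: the traceback shows attempt_download() querying the GitHub releases API, which fails behind this proxy. One workaround, sketched below under the assumption that weights/plate_det_model.pt already exists locally, is to load the checkpoint directly with torch.load (the same call quoted in the ONNX issue above) so that no network request is made.

    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    # load the local checkpoint directly instead of going through attempt_load(),
    # which calls attempt_download() and contacts api.github.com
    ckpt = torch.load('weights/plate_det_model.pt', map_location=device)
    model = ckpt['model'].float().fuse().eval()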

You support many plate types; could you share your data collection approach?

I see that you support the following plate types:
Blue single-row plates
Yellow single-row plates
Green new-energy plates and civil aviation plates
Black single-row plates
White police, military, and armed-police plates
Yellow double-row plates
Green agricultural vehicle plates
White double-row military plates

Searching directly for terms like "white police plate" or "white military plate", it is hard to collect even a few dozen images, let alone enough for training. Could you share how you collected this data? Are there any tricks, such as GAN-generated data?

Training error!

Traceback (most recent call last):
  File "train.py", line 516, in
    train(hyp, opt, device, tb_writer, wandb)
  File "train.py", line 191, in train
    image_weights=opt.image_weights)
  File "/home/wyw/yolov5-License-Plate-Detector/utils/face_datasets.py", line 70, in create_dataloader
    image_weights=image_weights,
  File "/home/wyw/yolov5-License-Plate-Detector/utils/face_datasets.py", line 164, in init
    labels, shapes = zip(*cache.values())
ValueError: not enough values to unpack (expected 2, got 0)

What version of Caffe can build the Mobilenet-SSD model?

Hi, I have been doing research on license plate recognition recently. I downloaded the program, but I couldn't find a Caffe version that can compile the Mobilenet-SSD used in this project. Which version of Caffe did you use?

final loss

Can you share the final loss values? Mine seem large, so a reference would help, e.g. loc: 0.xxx, cla: 0.xx, landms: 0.xx, or loc: 0.00x, cla: 0.00x, landms: 0.00x.
