
qreader's Introduction

QReader

QReader is a robust and straightforward solution for reading difficult and tricky QR codes within images in Python. Powered by a YOLOv8 model.

Behind the scenes, the library is composed of two main building blocks: a YOLOv8 QR Detector model trained to detect and segment QR codes (also offered as a stand-alone package), and the Pyzbar QR Decoder. Using the information extracted by this QR Detector, QReader transparently applies, on top of Pyzbar, different image preprocessing techniques that maximize the decoding rate on difficult images.

Installation

To install QReader, simply run:

pip install qreader

You may need to install some additional pyzbar dependencies:

On Windows:

In rare cases, you may see an ImportError related to libzbar-64.dll. If that happens, install vcredist_x64.exe from the Visual C++ Redistributable Packages for Visual Studio 2013.

On Linux:

sudo apt-get install libzbar0

On Mac OS X:

brew install zbar

To install the QReader package locally from source, run:

python -m pip install --editable .

NOTE: If you're running QReader on a server with very limited resources, you may want to install the CPU version of PyTorch before installing QReader. To do so, run: pip install torch --no-cache-dir (thanks to @cjwalther for his advice).

Usage


QReader is a very simple and straight-forward library. For most use cases, you'll only need to call detect_and_decode:

from qreader import QReader
import cv2


# Create a QReader instance
qreader = QReader()

# Get the image that contains the QR code
image = cv2.cvtColor(cv2.imread("path/to/image.png"), cv2.COLOR_BGR2RGB)

# Use the detect_and_decode function to get the decoded QR data
decoded_text = qreader.detect_and_decode(image=image)

detect_and_decode will return a tuple containing the decoded string of every QR found in the image.

NOTE: Some entries can be None; this happens when a QR code has been detected but couldn't be decoded.
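
For instance, a minimal sketch of keeping only the successfully decoded entries (reusing the qreader instance and image from the snippet above):

# Keep only the QR codes that were both detected and decoded
decoded_text = qreader.detect_and_decode(image=image)
valid_results = tuple(text for text in decoded_text if text is not None)
print(f"Decoded {len(valid_results)} of {len(decoded_text)} detected QR codes")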

API Reference

QReader(model_size = 's', min_confidence = 0.5, reencode_to = 'shift-jis', weights_folder = None)

This is the main class of the library. Instantiate it only once, to avoid reloading the model every time you need to detect a QR code. A sketch with non-default parameters follows the parameter list below.

  • model_size: str. The size of the model to use. It can be 'n' (nano), 's' (small), 'm' (medium) or 'l' (large). Larger models could be more accurate but slower. Recommended: 's' (#37). Default: 's'.
  • min_confidence: float. The minimum confidence of the QR detection to be considered valid. Values closer to 0.0 can get more False Positives, while values closer to 1.0 can lose difficult QRs. Default (and recommended): 0.5.
  • reencode_to: str | None. The encoding to reencode the utf-8 decoded QR string. If None, it won't re-encode. If you find some characters being decoded incorrectly, try to set a Code Page that matches your specific charset. Recommendations that have been found useful:
    • 'shift-jis' for Germanic languages
    • 'cp65001' for Asian languages (Thanks to @nguyen-viet-hung for the suggestion)
  • weights_folder: str | None. Folder where the detection model will be downloaded. If None, it will be downloaded to the default qrdet package internal folder, making sure that it gets correctly removed when uninstalling. You may need to change it when working in environments like AWS Lambda, where only the /tmp folder is writable (see #21). Default: None (<qrdet_package>/.model).
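
As an illustration, a hedged sketch of a non-default instantiation; the parameter values below are arbitrary examples rather than recommendations, and the weights path is hypothetical:

from qreader import QReader

# Larger model, stricter confidence threshold, no re-encoding step and a
# custom (hypothetical) weights folder, useful in read-only environments
# such as AWS Lambda where only /tmp is writable.
qreader = QReader(
    model_size='l',
    min_confidence=0.7,
    reencode_to=None,
    weights_folder='/tmp/qreader_weights',
)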

QReader.detect_and_decode(image, return_detections = False, is_bgr = False)

This method decodes the QR codes in the given image and returns the decoded strings (or None for codes that were detected but couldn't be decoded).

  • image: np.ndarray. The image to be read. It is expected to be RGB or BGR (uint8). Format (HxWx3).

  • return_detections: bool. If True, it will return the full detection results together with the decoded QRs. If False, it will return only the decoded content of the QR codes.

  • is_bgr: bool. If True, the received image is expected to be BGR instead of RGB. Default: False.

  • Returns: tuple[str | None] | tuple[tuple[str | None, dict[str, np.ndarray | float | tuple[float | int, float | int]]]]: A tuple with the decoded content of every detected QR code. If return_detections is False, the output will look like: ('Decoded QR 1', 'Decoded QR 2', None, 'Decoded QR 4', ...). If return_detections is True, it will look like: (('Decoded QR 1', {'bbox_xyxy': (x1_1, y1_1, x2_1, y2_1), 'confidence': conf_1}), ('Decoded QR 2', {'bbox_xyxy': (x1_2, y1_2, x2_2, y2_2), 'confidence': conf_2, ...}), ...). See QReader.detect() for more information about the detections format; a short unpacking sketch follows below.
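
A short sketch of unpacking the return_detections=True output, following the tuple structure described above (reusing the qreader instance and RGB image from the first example):

# Each element pairs the decoded text (or None) with its detection dict
results = qreader.detect_and_decode(image=image, return_detections=True)
for decoded_text, detection in results:
    x1, y1, x2, y2 = detection['bbox_xyxy']
    print(f"QR at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}) "
          f"with confidence {detection['confidence']:.2f}: {decoded_text}")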

QReader.detect(image, is_bgr = False)

This method detects the QR codes in the image and returns a tuple of dictionaries with all the detection information.

  • image: np.ndarray. The image to be read. It is expected to be RGB or BGR (uint8). Format (HxWx3).

  • is_bgr: bool. If True, the received image is expected to be BGR instead of RGB. Default: False.

  • Returns: tuple[dict[str, np.ndarray|float|tuple[float|int, float|int]]]. A tuple of dictionaries containing all the information of every detection. Contains the following keys.

  • confidence: Detection confidence. Type: float.
  • bbox_xyxy: Bounding box. Type: np.ndarray of shape (4). Form: [x1, y1, x2, y2].
  • cxcy: Center of the bounding box. Type: tuple[float, float]. Form: (x, y).
  • wh: Width and height of the bounding box. Type: tuple[float, float]. Form: (w, h).
  • polygon_xy: Precise polygon that segments the QR. Type: np.ndarray of shape (N, 2). Form: [[x1, y1], [x2, y2], ...].
  • quad_xy: Four-corner polygon that segments the QR. Type: np.ndarray of shape (4, 2). Form: [[x1, y1], ..., [x4, y4]].
  • padded_quad_xy: quad_xy padded to fully cover polygon_xy. Type: np.ndarray of shape (4, 2). Form: [[x1, y1], ..., [x4, y4]].
  • image_shape: Shape of the input image. Type: tuple[int, int]. Form: (h, w).

NOTE:

  • All np.ndarray values are of type np.float32
  • All keys (except confidence and image_shape) have a normalized ('n') version. For example, bbox_xyxy represents the bbox of the QR in image coordinates [[0., im_w], [0., im_h]], while bbox_xyxyn contains the same bounding box in normalized coordinates [0., 1.].
  • bbox_xyxy[n] and polygon_xy[n] are clipped to image_shape, so you can use them for indexing without further adjustment.

NOTE: Is this the only method you will need? Take a look at QRDet.
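
For illustration, a minimal sketch of consuming the detect() output with the keys listed above; cropping the detected region is just one possible use, not part of the API:

import cv2
from qreader import QReader

qreader = QReader()
image = cv2.cvtColor(cv2.imread("path/to/image.png"), cv2.COLOR_BGR2RGB)

for detection in qreader.detect(image=image):
    # bbox_xyxy is clipped to the image shape, so it is safe for indexing
    x1, y1, x2, y2 = detection['bbox_xyxy'].astype(int)
    cropped_qr = image[y1:y2, x1:x2]
    print(f"QR detected with confidence {detection['confidence']:.2f} "
          f"at bbox ({x1}, {y1}, {x2}, {y2}), crop shape {cropped_qr.shape}")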

QReader.decode(image, detection_result)

This method decodes a single QR code on the given image, described by a detection result.

Internally, this method runs the pyzbar decoder, using the information from the detection_result to apply different image preprocessing techniques that heavily increase the decoding rate. A sketch of the detect/decode flow follows the parameter list below.

  • image: np.ndarray. NumPy Array with the image that contains the QR to decode. The image is expected to be in uint8 format [HxWxC], RGB.

  • detection_result: dict[str, np.ndarray|float|tuple[float|int, float|int]]. One of the detection dicts returned by the detect method. Note that QReader.detect() returns a tuple of these dicts; this method expects just one of them.

  • Returns: str | None. The decoded content of the QR code or None if it couldn't be read.
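
A sketch of the two-step detect/decode flow that this method enables, reusing the qreader instance and RGB image from the earlier examples:

# Detect first, then decode each detection individually
detections = qreader.detect(image=image)
for detection in detections:
    decoded = qreader.decode(image=image, detection_result=detection)
    if decoded is None:
        print(f"Detected a QR (confidence {detection['confidence']:.2f}) but couldn't decode it")
    else:
        print(f"Decoded: {decoded}")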

Usage Tests

Two sample images: at left, an image taken with a mobile phone; at right, a 64x64 QR code pasted over a drawing.

The following code will try to decode these images containing QRs with QReader, pyzbar and OpenCV.

from qreader import QReader
from cv2 import QRCodeDetector, imread
from pyzbar.pyzbar import decode

# Initialize the three tested readers (QReader, OpenCV and pyzbar)
qreader_reader, cv2_reader, pyzbar_reader = QReader(), QRCodeDetector(), decode

for img_path in ('test_mobile.jpeg', 'test_draw_64x64.jpeg'):
    # Read the image
    img = imread(img_path)

    # Try to decode the QR code with the three readers
    qreader_out = qreader_reader.detect_and_decode(image=img)
    cv2_out = cv2_reader.detectAndDecode(img=img)[0]
    pyzbar_out = pyzbar_reader(image=img)
    # Read the content of the pyzbar output (double decoding will save you from a lot of wrongly decoded characters)
    pyzbar_out = tuple(out.data.decode('utf-8').encode('shift-jis').decode('utf-8') for out in pyzbar_out)

    # Print the results
    print(f"Image: {img_path} -> QReader: {qreader_out}. OpenCV: {cv2_out}. pyzbar: {pyzbar_out}.")

The output of the previous code is:

Image: test_mobile.jpeg -> QReader: ('https://github.com/Eric-Canas/QReader'). OpenCV: . pyzbar: ().
Image: test_draw_64x64.jpeg -> QReader: ('https://github.com/Eric-Canas/QReader'). OpenCV: . pyzbar: ().

Note that QReader internally uses pyzbar as its decoder. The improved detection-decoding rate that QReader achieves comes from the combination of different image preprocessing techniques and the YOLOv8-based QR detector, which is able to detect QR codes under harder conditions than classical Computer Vision methods.

Running tests

The tests can be launched via pytest. Make sure you install the package with its test dependencies first:

python -m pip install --editable ".[test]"

Then, you can run the tests with

python -m pytest tests/

Benchmark

Rotation Test


Maximum rotation (in degrees) at which each method can still decode the QR code:

  • Pyzbar: 17º
  • OpenCV: 46º
  • QReader: 79º

qreader's People

Contributors

eric-canas, jandom, michaelcurrie, scito


qreader's Issues

Additional Transformation to Increase QR Detection Rate

Hi,

This is very cool work. I'd like to suggest a few additional transformations that you could add to increase the QR decoding rate.

I used your code, and it failed to decode after detecting the QR area. Below are some good transformations that I empirically validated (with dozens of QR codes) that helped increase decoding rate. You can consider adding them to the decode function in https://github.com/Eric-Canas/QReader/blob/main/qreader.py. If you think they add too much overhead, feel free to add a switch for them (i.e., only turn them on upon setting a flag).

# _SHARPEN_KERNEL refers to the sharpening kernel already defined in qreader.py
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
advanced_gray = cv2.cvtColor(cv2.filter2D(src=image, ddepth=-1, kernel=_SHARPEN_KERNEL), cv2.COLOR_RGB2GRAY)
_, thresholded_gray = cv2.threshold(gray, thresh=0, maxval=255, type=cv2.THRESH_BINARY + cv2.THRESH_OTSU)
_, thresholded_advanced_gray = cv2.threshold(advanced_gray, thresh=0, maxval=255, type=cv2.THRESH_BINARY + cv2.THRESH_OTSU)

blurred1 = cv2.GaussianBlur(advanced_gray, (3, 3), 0)
blurred2 = cv2.GaussianBlur(gray, (5, 5), 0)
blurred3 = cv2.GaussianBlur(gray, (7, 7), 0)

Also, running decode on all of the variations helped.
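
As a rough sketch of that idea outside of QReader, each variation could be fed to pyzbar until one of them decodes (variable names reuse the snippet above; this is not the library's implementation):

from pyzbar.pyzbar import decode

# Try pyzbar on every preprocessed variation and keep the first successful read
variations = (gray, advanced_gray, thresholded_gray, thresholded_advanced_gray,
              blurred1, blurred2, blurred3)
decoded = next((d.data.decode('utf-8') for variation in variations for d in decode(variation)), None)
print(decoded)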

Error decode UTF-8 character 'â'

I have a problem when trying to decode a QR image: the result doesn't match the data I originally encoded with qrcode.
This is my code:

from qreader import QReader
import cv2
import qrcode

image_path = "my_image.png"
data = 'â'
print(f'data = {data}')

# Generate a QR code containing the data and save it to disk
img = qrcode.make(data)
img.save(image_path)

# Read it back and decode it with QReader
qreader = QReader()
img = cv2.imread(image_path)
result = qreader.detect_and_decode(image=img)
print(f"result = {result[0]}")

Bar code support

Hi Eric,

I have used your package for a small test project, and I have massive respect as it worked for me brilliantly.
I'd like to request support for detecting and decoding barcodes, as QReader currently doesn't. The pyzbar decoder inherently supports decoding barcodes.
I'm assuming the thought has crossed your mind as well, right?

TypeError: slice indices must be integers or None or have an __index__ method

Hello.

Always getting the following error:

in __deep_detect_and_decode_with_resize
cropped_img = image[y1:y2, x1:x2]
TypeError: slice indices must be integers or None or have an __index__ method

with the following code:

from qreader import QReader
import cv2


qreader = QReader()

image = cv2.imread("test_mobile.jpeg")

decoded_text = qreader.detect_and_decode(image=image)

If I turn off deep_search, the error goes away.

double free or corruption (!prev)

I get this error "double free or corruption (!prev)" when reading some images converted from PDF and opened with cv2.

def read_qr_page(page_path, fix):
    try:
        qreader_reader = QReader()
        page = cv2.imread(page_path)
        cv2.imwrite('logs/page.png', page)
        decoded_objects = qreader_reader.detect_and_decode(image=page)
        
        (...)

Segmentation seems to be broken

Here are the results I got for model_size = 'l' and confidence level 0.8; the results for model_size = 's' are identical.
Where is my mistake? It decoded nothing.

import cv2
from qreader import QReader
import os
from tqdm.auto import tqdm
detector = QReader(model_size='l', min_confidence=0.8)
path = "decode_video-frames"
seg_path = "seg_video-frames_model_l"
files = os.listdir(path)
result = []
for i, f in tqdm(enumerate(files)): 
    image = cv2.imread(filename=os.path.join(path, f))
    detections = detector.detect(image=image, is_bgr=True)
    # Draw the detections
    for detection in detections:
        x1, y1, x2, y2 = detection['bbox_xyxy']
        x1, y1, x2, y2 = int(x1), int(x2), int(y1), int(y2)
        confidence = detection['confidence']
        if confidence >= 0.8:
            result.append((os.path.join(seg_path, f"seg_{i}.png"), detector.detect_and_decode(image=img)))
        cv2.rectangle(image, (x1, y1), (x2, y2), color=(0, 255, 0), thickness=2)
        cv2.putText(image, f'{confidence:.2f}', (x1, y1 - 10), fontFace=cv2.FONT_HERSHEY_SIMPLEX,
                    fontScale=1, color=(0, 255, 0), thickness=2)
    # Save the results
    cv2.imwrite(filename=os.path.join(seg_path, f"seg_{i}.png"), img=image)

The second dimension of the result list is empty.

FileNotFoundError: .yolov7_qrdetdo7bgp8r.tmp

Traceback (most recent call last):
  File "/home/shubankar/Desktop/barcode/undetected.py", line 4, in <module>
    qreader=QReader()
  File "/home/shubankar/Desktop/envdori/lib/python3.10/site-packages/qreader.py", line 26, in __init__
    self.detector = QRDetector()
  File "/home/shubankar/Desktop/envdori/lib/python3.10/site-packages/qrdet.py", line 27, in __init__
    self.model = Yolov7Detector(weights=_WEIGHTS_PATH, img_size=(640, 640), agnostic_nms=True, traced=False)
  File "/home/shubankar/Desktop/envdori/lib/python3.10/site-packages/yolov7_package/model_utils.py", line 147, in __init__
    self.model = attempt_load([weights], map_location=self.device)  # load FP32 model
  File "/home/shubankar/Desktop/envdori/lib/python3.10/site-packages/yolov7_package/models/experimental.py", line 241, in attempt_load
    attempt_download(w)
  File "/home/shubankar/Desktop/envdori/lib/python3.10/site-packages/yolov7_package/utils/google_utils.py", line 26, in attempt_download
    wget.download(f'https://github.com/WongKinYiu/yolov7/releases/download/v0.1/{file.name}', out=str(file.parent))
  File "/home/shubankar/Desktop/envdori/lib/python3.10/site-packages/wget.py", line 506, in download
    (fd, tmpfile) = tempfile.mkstemp(".tmp", prefix=prefix, dir=".")
  File "/usr/lib/python3.10/tempfile.py", line 480, in mkstemp
    return _mkstemp_inner(dir, prefix, suffix, flags, output_type)
  File "/usr/lib/python3.10/tempfile.py", line 395, in _mkstemp_inner
    fd = _os.open(file, flags, 0o600)
FileNotFoundError: [Errno 2] No such file or directory: '/home/shubankar/desktop/envdori/lib/python3.10/site-packages/.yolov7_qrdetdo7bgp8r.tmp'

Error loading Yolov7 weight

I have installed QReader using Python 3.8 on Ubuntu 20.04. When I run the test script using the example provided, I get a runtime error -
FileNotFoundError: [Errno 2] No such file or directory: '/test-example/venv/lib/python3.8/site-packages/.yolov7_qrdetl0nxf.tml'. Stack trace shows when attempt_load is called in model_utils.py which then calls attempt_download(w) in experimental.py which calls wget.download in google_utils.py.

Please help, as this works really well on my development machine (Windows 11 with Python 3.10) and does a neat job detecting QR codes that OpenCV and pyzbar fail to detect.

Thanks!

TypeError due to exchanged bbox, found

Exception has occurred: TypeError
cannot unpack non-iterable bool object
  File "QReader/qreader.py", line 71, in decode
    x1, y1, x2, y2 = bbox
  File "QReader/qreader.py", line 105, in detect_and_decode
    return self.decode(image=image, bbox=bbox)
  File "QReader/main_test.py", line 14, in <module>
    qreader_out = qreader_reader.detect_and_decode(img, False)

Reproduce with:

from qreader import QReader
import cv2
from pyzbar.pyzbar import decode

if __name__ == '__main__':
    # Initialize the three tested readers (QReader, OpenCV and pyzbar)
    qreader_reader, cv2_reader, pyzbar_reader = QReader(), cv2.QRCodeDetector(), decode

    for img_path in (['documentation/resources/logo_square.png']):
        # Read the image
        img = cv2.imread(img_path)

        # Try to decode the QR code with the three readers
        qreader_out = qreader_reader.detect_and_decode(img, False)
        cv2_out = cv2_reader.detectAndDecode(img=img)[0]
        pyzbar_out = pyzbar_reader(image=img)
        # Read the content of the pyzbar output
        pyzbar_out = pyzbar_out[0].data.decode('utf-8') if len(pyzbar_out) > 0 else ""

        # Print the results
        print(f"Image: {img_path} -> QReader: {qreader_out}. OpenCV: {cv2_out}. pyzbar: {pyzbar_out}.")

able to detect but not decode this qr code, but both iOS and Android can

Hello,

I am working on detecting and decoding challenging, distorted QR codes. In the particular example attached below, QReader is able to detect but not decode the QR code, which is not really surprising since the QR code is low-quality and distorted.

However, both iPhone and Android cameras are able to detect and decode this example if I display it on a computer monitor. I'm wondering if there is additional preprocessing you have in mind that may help in this case?

Below is the sample code I used:

from qreader import QReader
import cv2


file_path = 'qrcode_test.png'
# Create a QReader instance
qreader = QReader()

# Get the image that contains the QR code
image = cv2.cvtColor(cv2.imread(file_path), cv2.COLOR_BGR2RGB)

# Use the detect_and_decode function to get the decoded QR data
decoded_text = qreader.detect_and_decode(image=image)
print(decoded_text)


Downloading weights library in real time

This is an issue when using AWS Lambda, as it doesn't allow write permissions on '/var/lang/lib/python3.11/site-packages/qrdet/.model'. Can we change the download directory to a custom path? For example, I'd like to change it to /tmp.
[Errno 30] Read-only file system: '/var/lang/lib/python3.11/site-packages/qrdet/.model'

Unable to use pyinstaller with QReader

Hi, I found QReader to be the best solution to catch hard to read QR codes, now I can't live without it.
The problem is that I tried to use pyinstaller with it, but without success. I use Python 3.9 on Windows 10. My example code is:

main.py


import cv2
import qreader
imgread = cv2.imread('imgqr.png')
det = qreader.QReader().detect_and_decode(image=imgread)
print(det)

It works with no issues inside PyCharm, but when I try to run the exe made with pyinstaller I get this:

C:\Users\Filip\PycharmProjects\testpyinstallqr\dist\main>main.exe
torchvision\io\image.py:13: UserWarning: Failed to load image Python extension:
torch\_jit_internal.py:839: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x000001EEE768F550>.
  warnings.warn(
torch\_jit_internal.py:839: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x000001EEE768F790>.
  warnings.warn(
Downloading weights...: 100%|████████████████████████████████| 74.8M/74.8M [00:08<00:00, 8.94MiB/s]
Traceback (most recent call last):
  File "main.py", line 14, in <module>
  File "qreader.py", line 26, in __init__
  File "qrdet.py", line 27, in __init__
  File "yolov7_package\model_utils.py", line 147, in __init__
  File "yolov7_package\models\experimental.py", line 242, in attempt_load
  File "torch\serialization.py", line 789, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "torch\serialization.py", line 1131, in _load
    result = unpickler.load()
  File "torch\serialization.py", line 1124, in find_class
    return super().find_class(mod_name, name)
ModuleNotFoundError: No module named 'models'
[1940] Failed to execute script 'main' due to unhandled exception!

I'm a beginner so every help would be greatly appreciated!

Nice work

@Eric-Canas Hello, this is excellent work. I understand that you didn't use barcode data during training, which is not very helpful for my current project. My project involves detecting and decoding barcodes, so I was wondering if you could share your training code. I would greatly appreciate it! My email is [email protected].

Error executing example

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/user/.local/lib/python3.8/site-packages/qreader.py", line 100, in detect_and_decode
    bboxes = self.detect(image=image)
  File "/home/user/.local/lib/python3.8/site-packages/qreader.py", line 39, in detect
    detections = self.detector.detect(image=image, return_confidences=True, as_float=False)
  File "/home/user/.local/lib/python3.8/site-packages/qrdet.py", line 50, in detect
    dets = self.model.detect(image)
  File "/home/user/.local/lib/python3.8/site-packages/yolov7_package/model_utils.py", line 187, in detect
    img_inf = cv2.resize(img, self.img_size)
SystemError: new style getargs format but argument is not a tuple

I have this error pop up when I execute the example. I'm using torch 2.0 and the yolov7 package that was installed by default.
Any ideas what's going on here?

No detection of orientation ?

Hello,

Thanks for this great software, it is really powerful!

I would like to get the orientation of the detected QR codes. I know pyzbar has an 'orientation' property that you can retrieve ('UP', 'LEFT', ...), but I don't see this in the output of QReader. Is there any way to get this information? I have the impression that the bounding box is always given starting with the lowest x and y coordinates.

Micro QR support

I have tried running it on micro QRs but couldn't get it to decode.
Are you planning to add micro QR / ArUco / AprilTags support anytime soon?
Micro_QR_Example

detector = QReader()
img = cv2.cvtColor(cv2.imread('Micro_QR_Example.png'), cv2.COLOR_BGR2RGB)
QRs = detector.detect_and_decode(image=img)
for QR in QRs:
    print(QR)

returns None

creating QReader instance produces debugging messages on sdout

This code:

import qreader
qr = qreader.QReader()

produces this output:

Fusing layers...
IDetect.fuse

In a production environment these messages can be really annoying since they don't actually follow the proper logging paradigm, so they'll stick out like a sore thumb. To suppress them I have to do this:

import sys

class StdoutSuppressor:
    def __enter__(self):
        self.stdout = sys.stdout
        sys.stdout = self

    def __exit__(self, exception_type, value, traceback):
        sys.stdout = self.stdout
        if exception_type is not None:
            # Do normal exception handling
            raise Exception(f"Got exception: {exception_type} {value} {traceback}")

    def write(self, x):
        pass

import qreader
with StdoutSuppressor():
    qr = qreader.QReader()

It would be better if these statements came from a logging DEBUG handler, which could be disabled via:

import logging
logging.getLogger("qreader").setLevel(logging.INFO)

Thanks!
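
For reference, a shorter suppression sketch using only the standard library (contextlib.redirect_stdout), assuming the messages are indeed written to stdout:

import contextlib
import io

import qreader

# Discard anything printed to stdout while the model is being loaded
with contextlib.redirect_stdout(io.StringIO()):
    qr = qreader.QReader()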

Improvements in the code

In the latest version, the QReader.decode function has changed: a detection_result is now required. Could it be made to work as before, taking only the image as input? I have a linear algorithm for finding squares that may contain a QR code, and it runs faster than YOLOv8 without a GPU.

ImportError due to possible circular import

    from qreader import QReader
ImportError: cannot import name 'QReader' from partially initialized module 'qreader' (most likely due to a circular import)

I am using the example code on an example image:

from qreader import QReader
import cv2


# Create a QReader instance
qreader = QReader()

# Get the image that contains the QR code
image = cv2.cvtColor(cv2.imread("./image.png"), cv2.COLOR_BGR2RGB)

# Use the detect_and_decode function to get the decoded QR data
decoded_text = qreader.detect_and_decode(image=image)

Problem with loading weights

Using the example code from the readme.md on my workstation has shown that QReader succeeds where other Python code fails (congrats on that!). However, when moving the same small script into a venv on the server where it is meant to be included in a Django project (and where Django is therefore installed), it fails by suddenly landing in Django-land (after line 1124 in serialization.py), although Django is not invoked in the example code.

This is confusing and kind of above my pay grade...

Traceback (most recent call last):
  File "/webapps/roggu/roggu/rogguapp/q1.py", line 15, in <module>
    detector = QReader()
  File "/webapps/roggu/lib/python3.10/site-packages/qreader.py", line 26, in __init__
    self.detector = QRDetector()
  File "/webapps/roggu/lib/python3.10/site-packages/qrdet.py", line 28, in __init__
    self.model = Yolov7Detector(weights=_WEIGHTS_PATH, img_size=None, agnostic_nms=True, traced=False)
  File "/webapps/roggu/lib/python3.10/site-packages/yolov7_package/model_utils.py", line 147, in __init__
    self.model = attempt_load([weights], map_location=self.device)  # load FP32 model
  File "/webapps/roggu/lib/python3.10/site-packages/yolov7_package/models/experimental.py", line 242, in attempt_load
    ckpt = torch.load(w, map_location=map_location)  # load
  File "/webapps/roggu/lib/python3.10/site-packages/torch/serialization.py", line 789, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "/webapps/roggu/lib/python3.10/site-packages/torch/serialization.py", line 1131, in _load
    result = unpickler.load()
  File "/webapps/roggu/lib/python3.10/site-packages/torch/serialization.py", line 1124, in find_class
    return super().find_class(mod_name, name)
  File "/webapps/roggu/roggu/rogguapp/models.py", line 3, in <module>
    class Beleg(models.Model):
  File "/webapps/roggu/lib/python3.10/site-packages/django/db/models/base.py", line 127, in __new__
    app_config = apps.get_containing_app_config(module)
  File "/webapps/roggu/lib/python3.10/site-packages/django/apps/registry.py", line 260, in get_containing_app_config
    self.check_apps_ready()
  File "/webapps/roggu/lib/python3.10/site-packages/django/apps/registry.py", line 137, in check_apps_ready
    settings.INSTALLED_APPS
  File "/webapps/roggu/lib/python3.10/site-packages/django/conf/__init__.py", line 92, in __getattr__
    self._setup(name)
  File "/webapps/roggu/lib/python3.10/site-packages/django/conf/__init__.py", line 72, in _setup
    raise ImproperlyConfigured(
django.core.exceptions.ImproperlyConfigured: Requested setting INSTALLED_APPS, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
