
object-detection-api's Introduction

Yolov3 Object Detection with Flask and Tensorflow 2.0 (APIs and Detections)

Yolov3 is an algorithm that uses deep convolutional neural networks to perform object detection. This repository implements Yolov3 using TensorFlow 2.0 and creates two easy-to-use APIs that you can integrate into web or mobile applications.


Getting started

Conda (Recommended)

# Tensorflow CPU
conda env create -f conda-cpu.yml
conda activate yolov3-cpu

# Tensorflow GPU
conda env create -f conda-gpu.yml
conda activate yolov3-gpu

Pip

# TensorFlow CPU
pip install -r requirements.txt

# TensorFlow GPU
pip install -r requirements-gpu.txt

Nvidia Driver (For GPU, if you haven't set it up already)

# Ubuntu 18.04
sudo apt-add-repository -r ppa:graphics-drivers/ppa
sudo apt install nvidia-driver-430
# Windows/Other
https://www.nvidia.com/Download/index.aspx

Downloading official pretrained weights

For Linux: download the official YOLOv3 weights pretrained on the COCO dataset.

# yolov3
wget https://pjreddie.com/media/files/yolov3.weights -O weights/yolov3.weights

# yolov3-tiny
wget https://pjreddie.com/media/files/yolov3-tiny.weights -O weights/yolov3-tiny.weights

For Windows: download yolov3.weights and yolov3-tiny.weights from the same pjreddie.com links shown above, then save them to the weights folder.

Using Custom trained weights

Learn how to train custom YOLOv3 weights here: https://www.youtube.com/watch?v=zJDUhGL26iU

Add your custom weights file to the weights folder and your custom .names file to the data/labels folder.

Saving your yolov3 weights as a TensorFlow model

Load the weights using the load_weights.py script. This converts the yolov3 weights into TensorFlow checkpoint files (.tf).

# yolov3
python load_weights.py

# yolov3-tiny
python load_weights.py --weights ./weights/yolov3-tiny.weights --output ./weights/yolov3-tiny.tf --tiny

After executing one of the above lines, you should see .tf files in your weights folder.
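If you want to sanity-check the conversion from Python, here is a minimal sketch (an illustration only, assuming TensorFlow 2.0, the default './weights/yolov3.tf' output path, and the 80 COCO classes) that builds the model, loads the checkpoint the same way the repo's scripts do, and runs a dummy image through it:

# Minimal sanity check that the converted checkpoint loads (sketch only;
# assumes the default output path and 80 classes).
import numpy as np
from yolov3_tf2.models import YoloV3

yolo = YoloV3(classes=80)
yolo.load_weights('./weights/yolov3.tf').expect_partial()

# Run a random 416x416 image through the network to confirm it executes.
dummy = np.random.random((1, 416, 416, 3)).astype(np.float32)
boxes, scores, classes, nums = yolo.predict(dummy)
print('boxes returned for dummy image:', int(nums[0]))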

Running the Flask App and Using the APIs

Now you can run a Flask application that exposes two object detection APIs, letting you get detections through REST endpoints.

If you used custom weights and classes, you may need to adjust one or two of the following lines within the app.py file before running it.
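For reference, the settings in question look roughly like the sketch below (the variable names and values are illustrative; check your own copy of app.py for the exact ones):

# Sketch of the app.py values that typically need adjusting for a custom model
# (names and values are illustrative, not the literal file contents).
classes_path = './data/labels/coco.names'   # point at your custom .names file
weights_path = './weights/yolov3.tf'        # point at your converted custom weights
num_classes = 80                            # set to the class count of your model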

You may also want to configure the IoU threshold (how much two detections of the same class must overlap before being counted as one detection), the confidence threshold (the minimum confidence for a detected class to count as a detection), or the maximum number of detections per image. All three can be adjusted within the yolov3-tf2/models.py file.
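These tunables typically appear near the top of models.py; a rough sketch of what to look for (names and values are illustrative, and depending on the version of the yolov3-tf2 code they may be plain constants or absl flags):

# Sketch of the detection tunables in yolov3-tf2/models.py (illustrative).
yolo_max_boxes = 100        # maximum number of detections per image
yolo_iou_threshold = 0.5    # IoU threshold used during non-max suppression
yolo_score_threshold = 0.5  # minimum confidence for a detection to count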

Initialize and run the Flask app on port 5000 of your local machine by running the following command from the root directory of this repo in a command prompt or shell.

python app.py

You should see the following appear in the command prompt if the app is running successfully.

While app.py is running, the first available API is a POST route at /detections on port 5000 of localhost. This endpoint takes one or more images as input and returns a JSON response with all the detections found within each image (the classes found within the images and their associated confidences).
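You can also exercise the endpoint from Python; here is a minimal client sketch (assuming the requests library is installed and the app is running locally on port 5000):

# Minimal client for the /detections endpoint (sketch).
import requests

url = 'http://localhost:5000/detections'
with open('data/images/dog.jpg', 'rb') as f:
    response = requests.post(url, files={'images': f})

print(response.json())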

You can test the APIs using Postman or through curl commands (both work fine); you may have to download them if you don't already have them.

Accessing Detections API with Postman (RECOMMENDED)

Access the /detections API through Postman by doing the following. Note that the request body must use "form-data" with a key named "images" whose type is set to File. When uploading files, hold the CTRL key and click to choose multiple photos.

The response should look similar to the following.
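The exact fields depend on app.py, but the JSON has roughly this shape (illustrative values, not real output):

{
    "response": [
        {
            "image": "dog.jpg",
            "detections": [
                {"class": "dog", "confidence": 0.99},
                {"class": "bicycle", "confidence": 0.98}
            ]
        }
    ]
}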

Accessing Detections API with Curl

To access and test the API through curl, open a second command prompt or shell (you may have to run it as Administrator). Then cd to the root folder of this repository (Object-Detection-API) and run the following command.

curl.exe -X POST -F images=@data/images/dog.jpg "http://localhost:5000/detections"

The JSON response should be printed to the command prompt if it worked successfully.

While app.py is running, the second available API is a POST route at /image on port 5000 of localhost. This endpoint takes a single image as input and returns a string-encoded image as the response, with all the detections drawn on it.
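As with /detections, you can call this endpoint from Python; a minimal sketch (again assuming requests is installed and the app is running locally) that saves the annotated image to disk:

# Minimal client for the /image endpoint (sketch).
import requests

url = 'http://localhost:5000/image'
with open('data/images/dog.jpg', 'rb') as f:
    response = requests.post(url, files={'images': f})

# The response body is the encoded image; write it straight to a file.
with open('test.png', 'wb') as out:
    out.write(response.content)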

Accessing Image API with Postman (RECOMMENDED)

Access the /image API through Postman by configuring the following.

The uploaded image should be returned with the detections drawn on it.

Accessing Image API with Curl

To access and test the API through curl, open a second command prompt or shell (you may have to run it as Administrator). Then cd to the root folder of this repository (Object-Detection-API) and run the following command.

curl.exe -X POST -F images=@data/images/dog.jpg "http://localhost:5000/image" --output test.png

This saves the returned image to the current folder as test.png (the string-encoded image cannot be printed to the command prompt).

NOTE: As a backup, both APIs save the images with the detections drawn on top to the /detections folder upon each API request.

These are the two APIs I have currently created for YOLOv3 object detection, and I hope you find them useful. Feel free to integrate them into your applications as needed.

Running just the TensorFlow model

The TensorFlow model can also be run directly, without the APIs, using the detect.py script.

Don't forget to set the IoU (Intersection over Union) and confidence thresholds within your yolov3-tf2/models.py file.

Usage examples

Let's run an example or two using sample images found within the data/images folder.

# yolov3
python detect.py --images "data/images/dog.jpg, data/images/office.jpg"

# yolov3-tiny
python detect.py --weights ./weights/yolov3-tiny.tf --tiny --images "data/images/dog.jpg"

# webcam
python detect_video.py --video 0

# video file
python detect_video.py --video data/video/paris.mp4 --weights ./weights/yolov3-tiny.tf --tiny

# video file with output saved (can save webcam like this too)
python detect_video.py --video path_to_file.mp4 --output ./detections/output.avi
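Saving video relies on OpenCV's VideoWriter, so the codec (--output_format) must match the container you ask for, e.g. XVID for .avi. A rough sketch of the relevant logic (illustrative, not the literal detect_video.py code):

# Sketch of how video output is typically written with OpenCV (illustrative).
import cv2

cap = cv2.VideoCapture('path_to_file.mp4')
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = int(cap.get(cv2.CAP_PROP_FPS))

# The FourCC codec must suit the output container: XVID pairs with .avi.
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('./detections/output.avi', fourcc, fps, (width, height))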

You can then find the detections in the detections folder. Running the first command above saves two annotated images, detection1.jpg and detection2.jpg; a video example is also included among the demos.

Command Line Args Reference

load_weights.py:
  --output: path to output
    (default: './weights/yolov3.tf')
  --[no]tiny: yolov3 or yolov3-tiny
    (default: 'false')
  --weights: path to weights file
    (default: './weights/yolov3.weights')
  --num_classes: number of classes in the model
    (default: '80')
    (an integer)

detect.py:
  --classes: path to classes file
    (default: './data/labels/coco.names')
  --images: path to input images as a string with images separated by ","
    (default: 'data/images/dog.jpg')
  --output: path to output folder
    (default: './detections/')
  --[no]tiny: yolov3 or yolov3-tiny
    (default: 'false')
  --weights: path to weights file
    (default: './weights/yolov3.tf')
  --num_classes: number of classes in the model
    (default: '80')
    (an integer)

detect_video.py:
  --classes: path to classes file
    (default: './data/labels/coco.names')
  --video: path to input video (use 0 for webcam)
    (default: './data/video/paris.mp4')
  --output: path to output video (remember to set right codec for given format. e.g. XVID for .avi)
    (default: None)
  --output_format: codec used in VideoWriter when saving video to file
    (default: 'XVID')
  --[no]tiny: yolov3 or yolov3-tiny
    (default: 'false')
  --weights: path to weights file
    (default: './weights/yolov3.tf')
  --num_classes: number of classes in the model
    (default: '80')
    (an integer)
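As a worked example of combining these flags, converting and running a hypothetical custom model might look like this (the file names and the 31-class count are illustrative, not part of the repo):

# convert hypothetical custom weights trained on 31 classes
python load_weights.py --weights ./weights/custom.weights --output ./weights/custom.tf --num_classes 31

# detect with the converted model and its matching .names file
python detect.py --weights ./weights/custom.tf --classes ./data/labels/custom.names --num_classes 31 --images "data/images/dog.jpg"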



object-detection-api's Issues

Very weak results with Tiny YOLOv3 weights

Hello!
I got very bad detection results for the tiny YOLO model; it feels like the model is highly underfitted.
I used the detect.py file and changed lines 14-16 to use the tiny version, like:

flags.DEFINE_string('weights', './weights/yolov3-tiny.tf',
                    'path to weights file')
flags.DEFINE_boolean('tiny', True, 'yolov3 or yolov3-tiny')

The model weights were downloaded via the links in the README file and were converted to TensorFlow format without any errors.

Examples with YOLOv3 and Tiny YOLOv3 (attached comparison images): dog_yolo / dog_tiny_yolo, crowd_yolo / crowd_tiny_yolo, smartphones_yolo / smartphones_tiny_yolo

Error Message When Loading My Weights

I was able to follow the entire video, but I get the error message below when running the following command (using Anaconda, as suggested):

(yolov3-gpu) C:\Users\rob26\Desktop\Object-Detection-API>python load_weights.py
2020-03-22 14:12:34.123439: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
2020-03-22 14:12:36.164552: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2020-03-22 14:12:36.190347: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: GeForce GTX 1660 major: 7 minor: 5 memoryClockRate(GHz): 1.83
pciBusID: 0000:01:00.0
2020-03-22 14:12:36.196564: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2020-03-22 14:12:36.201758: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2020-03-22 14:12:36.210883: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2020-03-22 14:12:36.216014: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: GeForce GTX 1660 major: 7 minor: 5 memoryClockRate(GHz): 1.83
pciBusID: 0000:01:00.0
2020-03-22 14:12:36.221990: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2020-03-22 14:12:36.226487: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2020-03-22 14:12:36.784693: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-03-22 14:12:36.788336: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]      0
2020-03-22 14:12:36.791547: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0:   N
2020-03-22 14:12:36.796796: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4630 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1660, pci bus id: 0000:01:00.0, compute capability: 7.5)
Model: "yolov3"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input (InputLayer)              [(None, None, None,  0
__________________________________________________________________________________________________
yolo_darknet (Model)            ((None, None, None,  40620640    input[0][0]
__________________________________________________________________________________________________
yolo_conv_0 (Model)             (None, None, None, 5 11024384    yolo_darknet[1][2]
__________________________________________________________________________________________________
yolo_conv_1 (Model)             (None, None, None, 2 2957312     yolo_conv_0[1][0]
                                                                 yolo_darknet[1][1]
__________________________________________________________________________________________________
yolo_conv_2 (Model)             (None, None, None, 1 741376      yolo_conv_1[1][0]
                                                                 yolo_darknet[1][0]
__________________________________________________________________________________________________
yolo_output_0 (Model)           (None, None, None, 3 4984063     yolo_conv_0[1][0]
__________________________________________________________________________________________________
yolo_output_1 (Model)           (None, None, None, 3 1312511     yolo_conv_1[1][0]
__________________________________________________________________________________________________
yolo_output_2 (Model)           (None, None, None, 3 361471      yolo_conv_2[1][0]
__________________________________________________________________________________________________
yolo_boxes_0 (Lambda)           ((None, None, None,  0           yolo_output_0[1][0]
__________________________________________________________________________________________________
yolo_boxes_1 (Lambda)           ((None, None, None,  0           yolo_output_1[1][0]
__________________________________________________________________________________________________
yolo_boxes_2 (Lambda)           ((None, None, None,  0           yolo_output_2[1][0]
__________________________________________________________________________________________________
yolo_nms (Lambda)               ((None, 100, 4), (No 0           yolo_boxes_0[0][0]
                                                                 yolo_boxes_0[0][1]
                                                                 yolo_boxes_0[0][2]
                                                                 yolo_boxes_1[0][0]
                                                                 yolo_boxes_1[0][1]
                                                                 yolo_boxes_1[0][2]
                                                                 yolo_boxes_2[0][0]
                                                                 yolo_boxes_2[0][1]
                                                                 yolo_boxes_2[0][2]
==================================================================================================
Total params: 62,001,757
Trainable params: 61,949,149
Non-trainable params: 52,608
__________________________________________________________________________________________________
I0322 14:12:41.026230  8916 load_weights.py:19] model created
I0322 14:12:41.028251  8916 utils.py:47] yolo_darknet/conv2d bn
I0322 14:12:41.031241  8916 utils.py:47] yolo_darknet/conv2d_1 bn
I0322 14:12:41.033211  8916 utils.py:47] yolo_darknet/conv2d_2 bn
I0322 14:12:41.036204  8916 utils.py:47] yolo_darknet/conv2d_3 bn
I0322 14:12:41.039179  8916 utils.py:47] yolo_darknet/conv2d_4 bn
I0322 14:12:41.042149  8916 utils.py:47] yolo_darknet/conv2d_5 bn
I0322 14:12:41.045166  8916 utils.py:47] yolo_darknet/conv2d_6 bn
I0322 14:12:41.047483  8916 utils.py:47] yolo_darknet/conv2d_7 bn
I0322 14:12:41.050596  8916 utils.py:47] yolo_darknet/conv2d_8 bn
I0322 14:12:41.052566  8916 utils.py:47] yolo_darknet/conv2d_9 bn
I0322 14:12:41.057577  8916 utils.py:47] yolo_darknet/conv2d_10 bn
I0322 14:12:41.060544  8916 utils.py:47] yolo_darknet/conv2d_11 bn
I0322 14:12:41.065555  8916 utils.py:47] yolo_darknet/conv2d_12 bn
I0322 14:12:41.067526  8916 utils.py:47] yolo_darknet/conv2d_13 bn
I0322 14:12:41.071515  8916 utils.py:47] yolo_darknet/conv2d_14 bn
I0322 14:12:41.074537  8916 utils.py:47] yolo_darknet/conv2d_15 bn
I0322 14:12:41.079493  8916 utils.py:47] yolo_darknet/conv2d_16 bn
I0322 14:12:41.082505  8916 utils.py:47] yolo_darknet/conv2d_17 bn
I0322 14:12:41.086475  8916 utils.py:47] yolo_darknet/conv2d_18 bn
I0322 14:12:41.089491  8916 utils.py:47] yolo_darknet/conv2d_19 bn
I0322 14:12:41.094455  8916 utils.py:47] yolo_darknet/conv2d_20 bn
I0322 14:12:41.097445  8916 utils.py:47] yolo_darknet/conv2d_21 bn
I0322 14:12:41.101435  8916 utils.py:47] yolo_darknet/conv2d_22 bn
I0322 14:12:41.104452  8916 utils.py:47] yolo_darknet/conv2d_23 bn
I0322 14:12:41.109441  8916 utils.py:47] yolo_darknet/conv2d_24 bn
I0322 14:12:41.112406  8916 utils.py:47] yolo_darknet/conv2d_25 bn
I0322 14:12:41.116420  8916 utils.py:47] yolo_darknet/conv2d_26 bn
I0322 14:12:41.129360  8916 utils.py:47] yolo_darknet/conv2d_27 bn
I0322 14:12:41.132377  8916 utils.py:47] yolo_darknet/conv2d_28 bn
I0322 14:12:41.144320  8916 utils.py:47] yolo_darknet/conv2d_29 bn
I0322 14:12:41.148334  8916 utils.py:47] yolo_darknet/conv2d_30 bn
I0322 14:12:41.160302  8916 utils.py:47] yolo_darknet/conv2d_31 bn
I0322 14:12:41.163483  8916 utils.py:47] yolo_darknet/conv2d_32 bn
I0322 14:12:41.175453  8916 utils.py:47] yolo_darknet/conv2d_33 bn
I0322 14:12:41.178956  8916 utils.py:47] yolo_darknet/conv2d_34 bn
I0322 14:12:41.189929  8916 utils.py:47] yolo_darknet/conv2d_35 bn
I0322 14:12:41.194916  8916 utils.py:47] yolo_darknet/conv2d_36 bn
I0322 14:12:41.205785  8916 utils.py:47] yolo_darknet/conv2d_37 bn
I0322 14:12:41.209749  8916 utils.py:47] yolo_darknet/conv2d_38 bn
I0322 14:12:41.220768  8916 utils.py:47] yolo_darknet/conv2d_39 bn
I0322 14:12:41.224793  8916 utils.py:47] yolo_darknet/conv2d_40 bn
I0322 14:12:41.236654  8916 utils.py:47] yolo_darknet/conv2d_41 bn
I0322 14:12:41.239645  8916 utils.py:47] yolo_darknet/conv2d_42 bn
I0322 14:12:41.251588  8916 utils.py:47] yolo_darknet/conv2d_43 bn
I0322 14:12:41.305469  8916 utils.py:47] yolo_darknet/conv2d_44 bn
I0322 14:12:41.312425  8916 utils.py:47] yolo_darknet/conv2d_45 bn
I0322 14:12:41.363314  8916 utils.py:47] yolo_darknet/conv2d_46 bn
I0322 14:12:41.370305  8916 utils.py:47] yolo_darknet/conv2d_47 bn
I0322 14:12:41.421196  8916 utils.py:47] yolo_darknet/conv2d_48 bn
I0322 14:12:41.428506  8916 utils.py:47] yolo_darknet/conv2d_49 bn
I0322 14:12:41.480397  8916 utils.py:47] yolo_darknet/conv2d_50 bn
I0322 14:12:41.488376  8916 utils.py:47] yolo_darknet/conv2d_51 bn
I0322 14:12:41.538240  8916 utils.py:47] yolo_conv_0/conv2d_52 bn
I0322 14:12:41.545222  8916 utils.py:47] yolo_conv_0/conv2d_53 bn
I0322 14:12:41.597058  8916 utils.py:47] yolo_conv_0/conv2d_54 bn
I0322 14:12:41.603073  8916 utils.py:47] yolo_conv_0/conv2d_55 bn
I0322 14:12:41.655930  8916 utils.py:47] yolo_conv_0/conv2d_56 bn
I0322 14:12:41.663909  8916 utils.py:47] yolo_output_0/conv2d_57 bn
I0322 14:12:41.713775  8916 utils.py:47] yolo_output_0/conv2d_58 bias
I0322 14:12:41.717764  8916 utils.py:47] yolo_conv_1/conv2d_59 bn
I0322 14:12:41.719759  8916 utils.py:47] yolo_conv_1/conv2d_60 bn
I0322 14:12:41.722752  8916 utils.py:47] yolo_conv_1/conv2d_61 bn
I0322 14:12:41.732725  8916 utils.py:47] yolo_conv_1/conv2d_62 bn
I0322 14:12:41.735716  8916 utils.py:47] yolo_conv_1/conv2d_63 bn
I0322 14:12:41.746687  8916 utils.py:47] yolo_conv_1/conv2d_64 bn
I0322 14:12:41.749615  8916 utils.py:47] yolo_output_1/conv2d_65 bn
I0322 14:12:41.760585  8916 utils.py:47] yolo_output_1/conv2d_66 bias
I0322 14:12:41.762580  8916 utils.py:47] yolo_conv_2/conv2d_67 bn
I0322 14:12:41.764575  8916 utils.py:47] yolo_conv_2/conv2d_68 bn
I0322 14:12:41.766757  8916 utils.py:47] yolo_conv_2/conv2d_69 bn
I0322 14:12:41.769778  8916 utils.py:47] yolo_conv_2/conv2d_70 bn
I0322 14:12:41.771775  8916 utils.py:47] yolo_conv_2/conv2d_71 bn
I0322 14:12:41.777057  8916 utils.py:47] yolo_conv_2/conv2d_72 bn
I0322 14:12:41.779037  8916 utils.py:47] yolo_output_2/conv2d_73 bn
I0322 14:12:41.782189  8916 utils.py:47] yolo_output_2/conv2d_74 bias
I0322 14:12:41.783190  8916 load_weights.py:22] weights loaded
2020-03-22 14:12:41.800300: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-03-22 14:12:43.102478: W tensorflow/stream_executor/cuda/redzone_allocator.cc:312] Internal: Invoking ptxas not supported on Windows
Relying on driver to perform ptx compilation. This message will be only logged once.
2020-03-22 14:12:43.215831: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cublas64_100.dll'; dlerror: cublas64_100.dll not found
2020-03-22 14:12:43.220370: E tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_INTERNAL_ERROR
2020-03-22 14:12:43.224648: W tensorflow/stream_executor/stream.cc:1919] attempting to perform BLAS operation using StreamExecutor without BLAS support
Traceback (most recent call last):
  File "load_weights.py", line 34, in <module>
    app.run(main)
  File "C:\Users\rob26\anaconda3\envs\yolov3-gpu\lib\site-packages\absl\app.py", line 299, in run
    _run_main(main, args)
  File "C:\Users\rob26\anaconda3\envs\yolov3-gpu\lib\site-packages\absl\app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "load_weights.py", line 25, in main
    output = yolo(img)
  File "C:\Users\rob26\anaconda3\envs\yolov3-gpu\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 891, in __call__
    outputs = self.call(cast_inputs, *args, **kwargs)
  File "C:\Users\rob26\anaconda3\envs\yolov3-gpu\lib\site-packages\tensorflow_core\python\keras\engine\network.py", line 708, in call
    convert_kwargs_to_constants=base_layer_utils.call_context().saving)
  File "C:\Users\rob26\anaconda3\envs\yolov3-gpu\lib\site-packages\tensorflow_core\python\keras\engine\network.py", line 860, in _run_internal_graph
    output_tensors = layer(computed_tensors, **kwargs)
  File "C:\Users\rob26\anaconda3\envs\yolov3-gpu\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 891, in __call__
    outputs = self.call(cast_inputs, *args, **kwargs)
  File "C:\Users\rob26\anaconda3\envs\yolov3-gpu\lib\site-packages\tensorflow_core\python\keras\engine\network.py", line 708, in call
    convert_kwargs_to_constants=base_layer_utils.call_context().saving)
  File "C:\Users\rob26\anaconda3\envs\yolov3-gpu\lib\site-packages\tensorflow_core\python\keras\engine\network.py", line 860, in _run_internal_graph
    output_tensors = layer(computed_tensors, **kwargs)
  File "C:\Users\rob26\anaconda3\envs\yolov3-gpu\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 891, in __call__
    outputs = self.call(cast_inputs, *args, **kwargs)
  File "C:\Users\rob26\anaconda3\envs\yolov3-gpu\lib\site-packages\tensorflow_core\python\keras\layers\convolutional.py", line 197, in call
    outputs = self._convolution_op(inputs, self.kernel)
  File "C:\Users\rob26\anaconda3\envs\yolov3-gpu\lib\site-packages\tensorflow_core\python\ops\nn_ops.py", line 1134, in __call__
    return self.conv_op(inp, filter)
  File "C:\Users\rob26\anaconda3\envs\yolov3-gpu\lib\site-packages\tensorflow_core\python\ops\nn_ops.py", line 639, in __call__
    return self.call(inp, filter)
  File "C:\Users\rob26\anaconda3\envs\yolov3-gpu\lib\site-packages\tensorflow_core\python\ops\nn_ops.py", line 238, in __call__
    name=self.name)
  File "C:\Users\rob26\anaconda3\envs\yolov3-gpu\lib\site-packages\tensorflow_core\python\ops\nn_ops.py", line 2010, in conv2d
    name=name)
  File "C:\Users\rob26\anaconda3\envs\yolov3-gpu\lib\site-packages\tensorflow_core\python\ops\gen_nn_ops.py", line 1031, in conv2d
    data_format=data_format, dilations=dilations, name=name, ctx=_ctx)
  File "C:\Users\rob26\anaconda3\envs\yolov3-gpu\lib\site-packages\tensorflow_core\python\ops\gen_nn_ops.py", line 1130, in conv2d_eager_fallback
    ctx=_ctx, name=name)
  File "C:\Users\rob26\anaconda3\envs\yolov3-gpu\lib\site-packages\tensorflow_core\python\eager\execute.py", line 67, in quick_execute
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InternalError: Blas SGEMM launch failed : m=25600, n=32, k=64 [Op:Conv2D]

ValueError: When using data tensors as input to a model, you should specify the `steps` argument.

I entered this command in Git Bash and saw these errors. I didn't touch anything; I just followed the steps in your YouTube video:
https://www.youtube.com/watch?v=p44G9_xCM4I

$ python detect_video.py --video 'data/video/video.mp4' --output 'data/video/output.avi'
C:\Users\USER\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\framework\dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
C:\Users\USER\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\framework\dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
C:\Users\USER\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\framework\dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
C:\Users\USER\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\framework\dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
C:\Users\USER\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\framework\dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
C:\Users\USER\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\framework\dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
WARNING:tensorflow:From C:\Users\USER\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\ops\init_ops.py:1251: calling VarianceScaling.init (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
W0403 02:33:27.644036 21832 deprecation.py:506] From C:\Users\USER\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\ops\init_ops.py:1251: calling VarianceScaling.init (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
2020-04-03 02:33:39.667521: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
I0403 02:33:47.461905 21832 detect_video.py:36] weights loaded
I0403 02:33:47.462905 21832 detect_video.py:39] classes loaded
Traceback (most recent call last):
  File "detect_video.py", line 94, in <module>
    app.run(main)
  File "C:\Users\USER\.conda\envs\yolov3-cpu\lib\site-packages\absl\app.py", line 299, in run
    _run_main(main, args)
  File "C:\Users\USER\.conda\envs\yolov3-cpu\lib\site-packages\absl\app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "detect_video.py", line 76, in main
    boxes, scores, classes, nums = yolo.predict(img_in)
  File "C:\Users\USER\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\keras\engine\training.py", line 1060, in predict
    x, check_steps=True, steps_name='steps', steps=steps)
  File "C:\Users\USER\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\keras\engine\training.py", line 2509, in _standardize_user_data
    training_utils.check_steps_argument(x, steps, steps_name)
  File "C:\Users\USER\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\keras\engine\training_utils.py", line 990, in check_steps_argument
    input_type=input_type_str, steps_name=steps_name))
ValueError: When using data tensors as input to a model, you should specify the steps argument.
(yolov3-cpu)

================================================================

USER@DESKTOP-GT9682B MINGW64 ~/Desktop/학교/프로젝트/yolo/AIGuys_yolo/Object-Detection-API (master)
$ python detect_video.py --video 0 --output 'data/video/output.avi'
C:\Users\USER\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\framework\dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
C:\Users\USER\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\framework\dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
C:\Users\USER\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\framework\dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
C:\Users\USER\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\framework\dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
C:\Users\USER\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\framework\dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
C:\Users\USER\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\framework\dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
WARNING:tensorflow:From C:\Users\USER\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\ops\init_ops.py:1251: calling VarianceScaling.init (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
W0403 02:40:38.728249 15728 deprecation.py:506] From C:\Users\USER\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\ops\init_ops.py:1251: calling VarianceScaling.init (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
2020-04-03 02:40:47.488561: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
I0403 02:40:55.117871 15728 detect_video.py:36] weights loaded
I0403 02:40:55.119870 15728 detect_video.py:39] classes loaded
Traceback (most recent call last):
  File "detect_video.py", line 94, in <module>
    app.run(main)
  File "C:\Users\USER\.conda\envs\yolov3-cpu\lib\site-packages\absl\app.py", line 299, in run
    _run_main(main, args)
  File "C:\Users\USER\.conda\envs\yolov3-cpu\lib\site-packages\absl\app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "detect_video.py", line 76, in main
    boxes, scores, classes, nums = yolo.predict(img_in)
  File "C:\Users\USER\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\keras\engine\training.py", line 1060, in predict
    x, check_steps=True, steps_name='steps', steps=steps)
  File "C:\Users\USER\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\keras\engine\training.py", line 2509, in _standardize_user_data
    training_utils.check_steps_argument(x, steps, steps_name)
  File "C:\Users\USER\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\keras\engine\training_utils.py", line 990, in check_steps_argument
    input_type=input_type_str, steps_name=steps_name))
ValueError: When using data tensors as input to a model, you should specify the steps argument.
(yolov3-cpu)

What should I do?

Unable to convert files to tf but no error is displayed

After I run python load_weights.py, this is the output:
2020-06-09 21:12:35.418124: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
but it stops after a couple of seconds and no .tf file is present afterwards.

ImportError: No module named yolov3_tf2.models

Hey guys, I'm trying to convert my custom weights files to TensorFlow format but keep running into the same error.
I've renamed my files to yolov3.weights and placed them in the same folder as the official pretrained weights. I've also changed the coco.names file accordingly.
I have also changed the number of classes to match in app.py, detect_video.py, and detect.py.
Please let me know what I'm doing wrong!

(yolov3-cpu) MacBook-Pro-Milan-2:Object-Detection-API Milan$ python load_weights.py
Traceback (most recent call last):
  File "load_weights.py", line 4, in <module>
    from yolov3_tf2.models import YoloV3, YoloV3Tiny
ImportError: No module named yolov3_tf2.models

Milan

DuplicateFlagError: The flag 'classes' is defined twice. First from /usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py, Second from /usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py. Description from first occurrence: path to classes file

I was running this application in Colab. When I ran the load_weights.py file, it showed FATAL Flags parsing error: Unknown command line flag 'f'. But the code ran, so I moved forward and ran detect_video.py with my weights file as /content/yolov3.weights and my classes file as /content/Object-Detection-API/data/labels/coco.names. It then showed: DuplicateFlagError: The flag 'classes' is defined twice. First from /usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py, Second from /usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py. Description from the first occurrence: path to classes file
My Colab notebook: https://colab.research.google.com/drive/1bESeiPhNPbRGO7xWDK0X6uzlo3gu6p3C#scrollTo=mGWhScK0jAeQ

What happened to the generated label .txt files?

After running the convert_annotations.py file, I got .txt files for each of the images indicating their labels and annotations. But shouldn't I list those files in the obj.data file? How did my model get the labels?

ValueError: cannot reshape array of size 324670 into shape (512,256,3,3)

Hi,
when I run python load_weights.py --weights ./weights/yolov3-tiny.weights --output ./weights/yolov3-tiny.tf --tiny
on my Jetson Nano (JetPack 4.3, TensorFlow 2.1.0), an error occurs.

Error:

ValueError: cannot reshape array of size 324670 into shape (512,256,3,3)

Can somebody help me with that?
Best regards, and thanks in advance.

Below you will find the complete output after running the python file:

(aiguyyolotest1) christopher@ccz:~/aiguyyolotest1/Object-Detection-API$ python load_weights.py --weights ./weights/yolov3-tiny.weights --output ./weights/yolov3-tiny.tf --tiny
2020-09-18 08:33:58.451661: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-09-18 08:34:03.677829: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer.so.6
2020-09-18 08:34:03.724180: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer_plugin.so.6
2020-09-18 08:34:18.304032: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-09-18 08:34:18.352799: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-09-18 08:34:18.353007: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:00:00.0 name: NVIDIA Tegra X1 computeCapability: 5.3
coreClock: 0.9216GHz coreCount: 1 deviceMemorySize: 3.87GiB deviceMemoryBandwidth: 23.84GiB/s
2020-09-18 08:34:18.353174: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-09-18 08:34:18.353300: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-09-18 08:34:18.435096: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-09-18 08:34:18.543380: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-09-18 08:34:18.661994: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-09-18 08:34:18.731548: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-09-18 08:34:18.732000: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-09-18 08:34:18.733151: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-09-18 08:34:18.733627: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-09-18 08:34:18.733735: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-09-18 08:34:18.843884: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2020-09-18 08:34:18.844908: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x3ce9c5e0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-09-18 08:34:18.844972: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-09-18 08:34:18.945611: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-09-18 08:34:18.945924: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x3ce01a20 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-09-18 08:34:18.945975: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): NVIDIA Tegra X1, Compute Capability 5.3
2020-09-18 08:34:18.946674: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-09-18 08:34:18.946802: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:00:00.0 name: NVIDIA Tegra X1 computeCapability: 5.3
coreClock: 0.9216GHz coreCount: 1 deviceMemorySize: 3.87GiB deviceMemoryBandwidth: 23.84GiB/s
2020-09-18 08:34:18.946871: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-09-18 08:34:18.946920: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-09-18 08:34:18.947109: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-09-18 08:34:18.947211: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-09-18 08:34:18.947296: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-09-18 08:34:18.947378: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-09-18 08:34:18.947418: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-09-18 08:34:18.947742: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-09-18 08:34:18.948078: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-09-18 08:34:18.948156: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-09-18 08:34:18.948256: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-09-18 08:34:31.465761: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-09-18 08:34:31.465973: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 0
2020-09-18 08:34:31.466015: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0: N
2020-09-18 08:34:31.482722: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-09-18 08:34:31.483740: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-09-18 08:34:31.489128: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 279 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
Model: "yolov3_tiny"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input (InputLayer)              [(None, None, None,  0
__________________________________________________________________________________________________
yolo_darknet (Model)            ((None, None, None,  6298480     input[0][0]
__________________________________________________________________________________________________
yolo_conv_0 (Model)             (None, None, None, 2 263168      yolo_darknet[1][1]
__________________________________________________________________________________________________
yolo_conv_1 (Model)             (None, None, None, 3 33280       yolo_conv_0[1][0]
                                                                 yolo_darknet[1][0]
__________________________________________________________________________________________________
yolo_output_0 (Model)           (None, None, None, 3 1312511     yolo_conv_0[1][0]
__________________________________________________________________________________________________
yolo_output_1 (Model)           (None, None, None, 3 951295      yolo_conv_1[1][0]
__________________________________________________________________________________________________
yolo_boxes_0 (Lambda)           ((None, None, None,  0           yolo_output_0[1][0]
__________________________________________________________________________________________________
yolo_boxes_1 (Lambda)           ((None, None, None,  0           yolo_output_1[1][0]
__________________________________________________________________________________________________
yolo_nms (Lambda)               ((None, 100, 4), (No 0           yolo_boxes_0[0][0]
                                                                 yolo_boxes_0[0][1]
                                                                 yolo_boxes_0[0][2]
                                                                 yolo_boxes_1[0][0]
                                                                 yolo_boxes_1[0][1]
                                                                 yolo_boxes_1[0][2]
==================================================================================================
Total params: 8,858,734
Trainable params: 8,852,366
Non-trainable params: 6,368
__________________________________________________________________________________________________


I0918 08:34:40.257712 547578310672 load_weights.py:19] model created
I0918 08:34:40.266925 547578310672 utils.py:47] yolo_darknet/conv2d bn
I0918 08:34:40.282309 547578310672 utils.py:47] yolo_darknet/conv2d_1 bn
I0918 08:34:40.295813 547578310672 utils.py:47] yolo_darknet/conv2d_2 bn
I0918 08:34:40.312008 547578310672 utils.py:47] yolo_darknet/conv2d_3 bn
I0918 08:34:40.321672 547578310672 utils.py:47] yolo_darknet/conv2d_4 bn
I0918 08:34:40.345241 547578310672 utils.py:47] yolo_darknet/conv2d_5 bn
Traceback (most recent call last):
  File "load_weights.py", line 34, in <module>
    app.run(main)
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "load_weights.py", line 21, in main
    load_darknet_weights(yolo, FLAGS.weights, FLAGS.tiny)
  File "/home/christopher/aiguyyolotest1/Object-Detection-API/yolov3_tf2/utils.py", line 68, in load_darknet_weights
    conv_shape).transpose([2, 3, 1, 0])
ValueError: cannot reshape array of size 324670 into shape (512,256,3,3)

ValueError: cannot reshape array of size 4607 into shape (18,256,1,1)

I am trying to convert this yolov3 custom model: https://drive.google.com/drive/folders/17jysPykGMkNw66lDMd0kryybCvGOesKi?usp=sharing into tensorflow format

in the load_weights.py I changed line 10 to:
flags.DEFINE_integer('num_classes', 1, 'number of classes in the model')

and in the ./data/labels/coco.names file I changed the contents to only "mice" (it detects mice for video analysis purposes).

However, it begins converting and then produces the error in the title of this issue.

If someone could help, that would be much appreciated.

ModuleNotFoundError

I have NVIDIA driver version 384.130, CUDA 9.0, and cuDNN 7.6.4.
When I tried python app.py, an error occurred: "tensorflow.python.framework.errors_impl.InternalError: cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version."
So I downgraded tensorflow-gpu from version 2.1 to 1.14.
I ran python app.py again, and then another error occurred: "ModuleNotFoundError: No module named 'tensorflow.keras'".
So I installed TensorFlow 2.1 again.
How can I fix it? :(
Thanks.

API for YOLOv4

@theAIGuysCode

Kindly make a repo for YOLOv4 deployment as well; the same/similar approach is not working for it.

How shall I make an API for YOLOv4?

Production server error

    if tiny:
        yolo = YoloV3Tiny(classes=num_classes)
    else:
        yolo = YoloV3(classes=num_classes)

    yolo.load_weights(weights_path).expect_partial()
    print('weights loaded')

    class_names = [c.strip() for c in open(classes_path).readlines()]
    print('classes loaded')

This code throws an error on server startup: "Truncated or oversized response headers received from daemon process" on the production server.

ValueError: 'images' contains no shape.

I have pip-installed requirements-gpu.txt and am trying to run custom object detection with YOLOv3. I followed all the instructions, and after uploading an image through Postman I get the error in the title: ValueError: 'images' contains no shape.

Unable to convert weights file into tf

My custom weights file is junglecamp0.6.weights, based on yolov3 (not yolov3-tiny), with 31 classes.

When I run the command python load_weights.py --weights ./weights/junglecamp0.6.weights --output ./weights/junglecamp0.6.tf, it gives me the following error message:

ValueError: cannot reshape array of size 76070 into shape (256,128,3,3)
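One plausible cause (an assumption, not confirmed in this thread): load_weights.py builds the network with its default of 80 classes, so a 31-class weight file no longer lines up with the layer shapes. Passing the documented --num_classes flag so the converter matches the model may resolve the mismatch:

# hypothetical fix: tell the converter the custom class count
python load_weights.py --weights ./weights/junglecamp0.6.weights --output ./weights/junglecamp0.6.tf --num_classes 31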

How to add a method when an object is detected?

Good morning,

I'm stuck because I don't know how to add a Python method to YOLOv3 or YOLOv4.

My goal is to trigger an alarm (sending an e-mail, for example) when an object is detected. For example: when my YOLOv3 program detects an object, it should send an alert e-mail or a message (SMS).
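A minimal sketch of one way to do this (illustrative only: it assumes you hook it into the loop where detect.py exposes boxes, scores, classes, nums and class_names, and the SMTP details are placeholders):

# Sketch: email an alert when a target class is detected (placeholders throughout).
import smtplib
from email.message import EmailMessage

def send_alert(detected_class, confidence):
    msg = EmailMessage()
    msg['Subject'] = 'Alert: {} detected ({:.0%})'.format(detected_class, confidence)
    msg['From'] = 'alerts@example.com'                  # placeholder sender
    msg['To'] = 'you@example.com'                       # placeholder recipient
    msg.set_content('YOLOv3 detected a {}.'.format(detected_class))
    with smtplib.SMTP('smtp.example.com') as server:    # placeholder SMTP host
        server.send_message(msg)

# Inside the detection loop, after predict():
# for i in range(int(nums[0])):
#     name = class_names[int(classes[0][i])]
#     if name == 'person':                              # class to alarm on
#         send_alert(name, float(scores[0][i]))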

Detect video error

I've run everything correctly and it worked for a while. Then this happened:

Traceback (most recent call last):
  File "C:\Users\finnx\anaconda3\envs\yolov3-gpu\lib\site-packages\tensorflow_core\python\training\py_checkpoint_reader.py", line 95, in NewCheckpointReader
    return CheckpointReader(compat.as_bytes(filepattern))
RuntimeError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for ./weights/yolov3.tf

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "detect_video.py", line 94, in <module>
    app.run(main)
  File "C:\Users\finnx\anaconda3\envs\yolov3-gpu\lib\site-packages\absl\app.py", line 303, in run
    _run_main(main, args)
  File "C:\Users\finnx\anaconda3\envs\yolov3-gpu\lib\site-packages\absl\app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "detect_video.py", line 35, in main
    yolo.load_weights(FLAGS.weights)
  File "C:\Users\finnx\anaconda3\envs\yolov3-gpu\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 234, in load_weights
    return super(Model, self).load_weights(filepath, by_name, skip_mismatch)
  File "C:\Users\finnx\anaconda3\envs\yolov3-gpu\lib\site-packages\tensorflow_core\python\keras\engine\network.py", line 1187, in load_weights
    py_checkpoint_reader.NewCheckpointReader(filepath)
  File "C:\Users\finnx\anaconda3\envs\yolov3-gpu\lib\site-packages\tensorflow_core\python\training\py_checkpoint_reader.py", line 99, in NewCheckpointReader
    error_translator(e)
  File "C:\Users\finnx\anaconda3\envs\yolov3-gpu\lib\site-packages\tensorflow_core\python\training\py_checkpoint_reader.py", line 35, in error_translator
    raise errors_impl.NotFoundError(None, None, error_message)
tensorflow.python.framework.errors_impl.NotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for ./weights/yolov3.tf

Please help!

Production Server Error

Dear Author,
I was running the Object Detection API code on Google Colab, but when I run app.py it gives me this error at the production WSGI server (http://0.0.0.0:5000/):

Error: The webpage at http://0.0.0.0:5000/ might be temporarily down or it may have moved permanently to a new web address.

Kindly suggest steps to solve this issue.

Error when I run "app.py": CUDA error. I'm in a hurry!

Hi, I'm trying the object-detection-API.
I ran all the code; finally I ran app.py, but an error occurred:

failed to allocate 120.19M (126025728 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2020-06-14 01:30:23.044898: F tensorflow/stream_executor/cuda/cuda_driver.cc:175] Check failed: err == cudaSuccess || err == cudaErrorInvalidValue Unexpected CUDA error: out of memory

This is the result when I run nvidia-smi:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.130 Driver Version: 384.130 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce 840M Off | 00000000:0A:00.0 Off | N/A |
| N/A 39C P5 N/A / N/A | 249MiB / 2002MiB | 0% Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1065 G /usr/lib/xorg/Xorg 119MiB |
| 0 1822 G compiz 79MiB |
| 0 2280 G /opt/teamviewer/tv_bin/TeamViewer 9MiB |
| 0 4531 G ...AAAAAAAAAAAACAAAAAAAAAA= --shared-files 39MiB |
+-----------------------------------------------------------------------------+

I need to fix it by Monday. Please help!
Thanks
