yehengchen / Object-Detection-and-Tracking
Object Detection and Multi-Object Tracking
License: MIT License
Why does this error occur?
if len(pts[track.track_id]) != None:
UnboundLocalError: local variable 'track' referenced before assignment
Did you train your own vehicle model when using Deep SORT? Could you share it? Thanks.
Also, can the Deep SORT model be converted to a weights file?
Hi, thanks for your great project.
In your "YOLOv3 + Deep_SORT" project, I noticed that your first demo can track 'car' and 'person' at the same time (the first video here). I'd really like to know how to do that; would you mind sharing that code?
Hi,
I ran the deep_sort_yolov3 code with: python3 main.py -c person -i data/cam1.avi
I got the following error, which seems to be caused by a shape mismatch.
Do you have any suggestions for how to fix this?
And can you provide a testing video?
Thank you.
InvalidArgumentError (see above for traceback): Input to reshape is a tensor with 312375 values, but the requested shape requires a multiple of 40425 [[Node: Reshape_3 = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](conv2d_59/BiasAdd, Reshape_3/shape)]]
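The numbers in this error actually encode the mismatch. Assuming the standard YOLOv3 output layout of grid * grid * 3 * (5 + num_classes) values per scale (the exact grid depends on the input size), 312375 corresponds to 80 classes while the requested 40425 corresponds to 6 classes on the same grid: the converted .h5/.cfg was built for a different class count than the weights. A sketch of the arithmetic:

```python
# Hypothetical helper: total float values in one YOLO output head for a
# square grid, with the usual 3 anchors per scale.
def yolo_output_values(grid, num_classes, num_anchors=3):
    return grid * grid * num_anchors * (5 + num_classes)

# 312375 values delivered vs. a required multiple of 40425 on a 35x35 grid:
assert yolo_output_values(35, 80) == 312375   # COCO weights: 80 classes
assert yolo_output_values(35, 6) == 40425     # model converted for 6 classes
# So the .h5/.cfg expects 6 classes while the weights carry 80; re-convert
# with a .cfg and classes file that match the weights being loaded.
```

The same arithmetic explains similar "multiple of N" reshape errors at other grid sizes.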
Is there any function to know if a vehicle is static or moving?
I could not find good video for testing.
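Neither SORT nor Deep SORT exposes a static/moving flag directly, but each track's recent centroids are enough for a heuristic. A minimal sketch, where the window size and pixel threshold are arbitrary assumptions:

```python
from collections import deque

def is_moving(history, min_disp_px=5.0):
    """Heuristic: a track is 'moving' if its centroid drifted more than
    min_disp_px between the oldest and newest positions in the window."""
    if len(history) < 2:
        return False
    (x0, y0), (x1, y1) = history[0], history[-1]
    return ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 > min_disp_px

# keep e.g. the last 30 centroids per track_id
hist = deque(maxlen=30)
for x in range(10):
    hist.append((x * 2.0, 0.0))   # a car drifting right by 2 px per frame
print(is_moving(hist))  # True
```

Alternatively, Deep SORT's Kalman state (track.mean) carries velocity components that could be thresholded instead, though that is also an approximation in image coordinates, not real-world speed.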
When running the yolov3+sort project, no video is output; the runtime log is as follows:
ascend@ubuntu:~/ascend/Demo_Yolov3_Sort/yolov3_sort$ python3 main.py --input input/TownCentreXVID.avi --output output/TownCentreXVID_Result.avi --yolo yolo-obj
/home/ascend/.local/lib/python3.5/site-packages/sklearn/utils/linear_assignment_.py:22: FutureWarning: The linear_assignment_ module is deprecated in 0.21 and will be removed from 0.23. Use scipy.optimize.linear_sum_assignment instead.
FutureWarning)
[INFO] loading YOLO from disk...
[INFO] 7500 total frames in video
[ERROR:0] global /io/opencv/modules/videoio/src/cap.cpp (415) open VIDEOIO(CV_IMAGES): raised OpenCV exception:
OpenCV(4.2.0) /io/opencv/modules/videoio/src/cap_images.cpp:253: error: (-5:Bad argument) CAP_IMAGES: can't find starting number (in the name of file): output/TownCentreXVID_Result.avi in function 'icvExtractPattern'
[INFO] single frame took 0.5864 seconds
[INFO] estimated total time to finish: 4398.2953
/home/ascend/.local/lib/python3.5/site-packages/sklearn/utils/linear_assignment_.py:128: FutureWarning: The linear_assignment function is deprecated in 0.21 and will be removed from 0.23. Use scipy.optimize.linear_sum_assignment instead.
FutureWarning)
please give me some advice. Thanks.
Hello, I saw your latest Bilibili upload where vehicles are tracked and counted. What I'm confused about is: do I only need to retrain a vehicle yolov3.weights and then change predicted_class != 'car'? Does https://github.com/nwojke/cosine_metric_learning also need to be trained?
After testing my own model with this code, the bounding boxes looked odd. For example, whenever it detects a car, the box looks like this:
Is there any way to fix how the boxes are generated, so that each box tightly surrounds the object?
Another thing I would like to ask about:
How do I display a separate counter for object 'car', and another one for object 'tree'?
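For the separate counters, one hedged sketch: keep a Counter keyed by predicted class and count each track_id only once. (The class names follow the question; the stock COCO model has no 'tree' class, so 'tree' implies a custom-trained model.)

```python
from collections import Counter

counts = Counter()
seen_ids = set()

# (predicted_class, track_id) events, e.g. tracks crossing the counting line
crossed = [("car", 1), ("car", 2), ("tree", 3)]

for cls, tid in crossed:
    if tid not in seen_ids:       # count each track only once
        seen_ids.add(tid)
        counts[cls] += 1

print(counts["car"], counts["tree"])  # 2 1
```

Each per-class total can then be drawn with its own cv2.putText call at a different y offset.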
Hi yehengchen,
Can you share file 'model_data/market1501.pb'?
Best regards,
PeterPham
How can I track cars and pedestrians at the same time, exactly like in your demo GIFs?
Thank you for your reply.
Hello:
For deep_sort_yolov3, I'm using a 2080 Ti with an E3-1225 [email protected].
On real videos I only get around 10-odd FPS. What performance do you get?
Also, can the model be swapped for yolov3-tiny?
Thanks.
YOLOv3 alone worked well for me, but I need more FPS.
I want to use it with deep_sort_yolov3.
The conversion went fine, but when I run main.py the error below appears.
Traceback (most recent call last):
File "main.py", line 247, in
main(YOLO())
File "/home/user/object/OneStage/yolo/deep_sort_yolov3/yolo.py", line 54, in init
self.boxes, self.scores, self.classes = self.generate()
File "/home/user/object/OneStage/yolo/deep_sort_yolov3/yolo.py", line 94, in generate
score_threshold=self.score, iou_threshold=self.iou)
File "/home/user/object/OneStage/yolo/deep_sort_yolov3/yolo3/model.py", line 169, in yolo_eval
_boxes, _box_scores = yolo_boxes_and_scores(yolo_outputs[l],
IndexError: list index out of range
Could you help me, please?
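One common cause of this IndexError, sketched below: yolo_eval indexes yolo_outputs[l] for l in range(len(anchors)//3), so a model with fewer output heads than the anchors file implies (for example, a converted yolov3-tiny .h5 with 2 heads used together with a 9-anchor yolo_anchors.txt) runs off the end of the output list. This is an inference from the traceback, not a confirmed diagnosis:

```python
# Consistency check between the loaded model and the anchors file.
def heads_match(num_model_outputs, num_anchors):
    """YOLOv3 groups anchors 3 per output scale."""
    return num_model_outputs == num_anchors // 3

assert heads_match(3, 9)          # full YOLOv3: 3 heads, 9 anchors
assert not heads_match(2, 9)      # tiny model + full anchor file -> IndexError
```

If the model is tiny-YOLO, pointing yolo.py at a 6-anchor tiny anchors file (and the matching .cfg during conversion) should line the two up.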
Where is the demo_path.py file that draws the trajectory lines?
I have already followed these steps:
Download the code to your computer.
Download [yolov3.weights] and place it in yolov3_sort/yolo-obj/
[yolov3_sort/main.py] Change the path to labelsPath / weightsPath / configPath.
Run the yolov3 counter:
$ python3 main.py --input input/test.mp4 --output output/test.avi --yolo yolo-obj
But in the end, the images in the output folder have no bounding boxes and no counts.
My paths:
labelsPath = os.path.sep.join([args["yolo"],"yolo3_object.names"])
weightsPath = os.path.sep.join([args["yolo"],"yolov3.weights"])
configPath = os.path.sep.join([args["yolo"],"yolov3_1.cfg"])
Is this a path problem? Thanks!
I've got it working perfectly on Win10/7, but I'm puzzled about how to save the detection video. Please help!
I also tried other methods. I guess it would be difficult to get it working outside China. Does anyone have a Google Drive or Mega link for the same file? The file name is yolo-spp.h5 and the Baidu extraction code is t13k.
I'm a complete beginner and recently wanted to build a pedestrian detection project. To do pedestrian detection with YOLO like this project, what are the concrete steps? Is it enough to modify the configuration so the model only detects pedestrians? Does the model itself need changes? Is something like transfer learning needed?
Traceback (most recent call last):
File "main.py", line 172, in
main(YOLO())
File "/home/xbot/Documents/Object-Detection-and-Tracking-master/OneStage/yolo/deep_sort_yolov3/yolo.py", line 51, in init
self.boxes, self.scores, self.classes = self.generate()
File "/home/xbot/Documents/Object-Detection-and-Tracking-master/OneStage/yolo/deep_sort_yolov3/yolo.py", line 89, in generate
boxes, scores, classes = yolo_eval(self.yolo_model.output, self.anchors,
NameError: global name 'yolo_eval' is not defined
Hello! I'm using deep_sort_yolov3 and got the error above when running main. What should I do?
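This NameError usually means yolo.py never imported yolo_eval; in the upstream keras-yolo3 codebase it comes from yolo3/model.py via `from yolo3.model import yolo_eval`. Also, the wording "global name ... is not defined" is Python 2's, so the script may be running under python2 instead of python3. A generic sketch for checking whether a module exposes a symbol (module and symbol names here are just illustrations):

```python
import importlib
import importlib.util

def has_symbol(module_name, symbol):
    """True if module_name imports cleanly and defines symbol."""
    if importlib.util.find_spec(module_name) is None:
        return False
    return hasattr(importlib.import_module(module_name), symbol)

# Inside the repo one could check has_symbol("yolo3.model", "yolo_eval").
print(has_symbol("math", "sqrt"))  # True
```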
How does yolov3_sort confirm that the detections are pedestrians?
python convert.py yolov3.cfg yolov3.weights model_data/yolo6.h5
Using TensorFlow backend.
Loading weights.
Weights Header: 0 2 0 [32013312]
Parsing Darknet config.
aaa <_io.BytesIO object at 0x7f5a9a4f5308>
Traceback (most recent call last):
File "convert.py", line 242, in
_main(parser.parse_args())
File "convert.py", line 81, in _main
cfg_parser.read_file(unique_config_file)
File "/opt/anaconda3/lib/python3.7/configparser.py", line 717, in read_file
self._read(f, source)
File "/opt/anaconda3/lib/python3.7/configparser.py", line 1030, in _read
if line.strip().startswith(prefix):
TypeError: startswith first arg must be bytes or a tuple of bytes, not str
As in the title. Thanks for any reply.
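The TypeError comes from handing configparser.read_file() a bytes buffer (note the `aaa <_io.BytesIO ...>` debug print): configparser only reads text. A minimal sketch of the fix, using a tiny stand-in config instead of the real yolov3.cfg:

```python
import configparser
import io

# The buffer convert.py built; decoding it to text fixes the TypeError.
raw = io.BytesIO(b"[net]\nwidth=416\nheight=416\n")
text = io.StringIO(raw.getvalue().decode("utf-8"))

cfg = configparser.ConfigParser(strict=False)
cfg.read_file(text)
print(cfg["net"]["width"])  # 416
```

Note the real yolov3.cfg repeats section names, which is why convert.py rewrites them to unique names before parsing; up-to-date copies of convert.py already build that rewritten config as a text (StringIO) stream rather than BytesIO.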
I see that this function is implemented in SORT but not in Deep SORT
Feature: a feature vector that describes the object contained in this image. But what does this parameter represent? What is the meaning of the numbers in the feature array?
For example: feature= [ 0.13135917 0.16252236 -0.02983991 0.1078044 0.18693653 0.222241
-0.07257525 -0.07441376 0.08326894 0.0541355 0.0026187 -0.07955563
-0.05495477 -0.05914702 -0.12457883 -0.04439378 -0.09501398 -0.12134293
0.02070528 -0.10250711 -0.02945639 -0.06187392 -0.11262443 -0.1067707
-0.12805346 -0.03683051 -0.01223208 -0.04542194 0.04128917 -0.09476566
0.13891572 -0.03608855 -0.08671472 0.17196652 -0.11042515 -0.09581856
-0.04645975 0.08574935 -0.09971982 -0.1007655 -0.11324123 0.09481143
-0.13796814 0.02062852 -0.03129434 0.07220582 -0.06094034 0.0997282
0.09703507 -0.11823436 -0.13719977 -0.1204574 0.0124543 -0.09292966
-0.03511186 -0.11681295 -0.06900121 -0.12038399 0.03251497 0.03251189
-0.0476766 -0.08023439 -0.00856623 0.06366589 -0.01062125 -0.07592105
-0.11542226 -0.01129758 -0.09487565 0.1098093 -0.03944121 0.15487668
0.06516567 -0.1064471 -0.11643744 -0.02732632 -0.11259694 0.00147146
0.04236715 -0.05228471 -0.07022281 -0.01324428 -0.10091629 -0.10923474
-0.04395907 -0.09888252 -0.08017209 -0.08089618 0.00113982 0.21809936
-0.08897579 -0.112206 0.0176195 -0.11458117 -0.00677603 0.0769835
-0.08030172 -0.01627187 0.02889497 -0.0008515 0.06290624 0.01271521
0.09568803 0.01536036 0.05172337 -0.09787396 -0.10839746 -0.069438
0.06256402 -0.0726283 -0.0296073 -0.10417195 -0.0954337 -0.09770833
0.0043749 -0.11574724 -0.00393125 0.05078177 -0.08470584 -0.12741965
-0.11512636 -0.0954428 0.01246825 0.06667607 -0.01297721 -0.14558725
-0.01562111 0.00888595]
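For what it's worth, in Deep SORT these numbers are a 128-dimensional appearance embedding produced by the re-identification network (the model_data/*.pb file); no single component has a standalone meaning. Detections and tracks are matched partly by cosine distance between whole vectors, which can be sketched as:

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity; small values mean similar appearance."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

v = [0.1, 0.2, 0.3]            # toy 3-d stand-ins for the 128-d features
w = [-0.3, 0.0, 0.1]
same = cosine_distance(v, v)   # ~0: identical appearance
diff = cosine_distance(v, w)
print(diff > same)  # True
```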
Traceback (most recent call last):
File "main.py", line 172, in
main(YOLO())
File "main.py", line 158, in main
if len(pts[track.track_id]) != None:
UnboundLocalError: local variable 'track' referenced before assignment
Why does this happen?
I'm using Deep SORT with YOLOv3.
Hi,
1. Is it possible to track more than one class at the same time? For example, my model is trained on 6 classes; how can I track all of them at once?
I am able to track one class successfully, but not all of them.
I have tried this method:
if predicted_class != 'person' and predicted_class != 'car':
    print(predicted_class)
    continue
but it still tracks only the default class.
2. If we track more than one class, will it affect the speed?
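A sketch of the multi-class filter: instead of chaining != comparisons, keep a whitelist set. The stock pipeline also discards the class label after detection, so tracking several classes means carrying predicted_class alongside each box; how that threads through main.py is an assumption here, not the repo's confirmed code:

```python
WANTED = {"person", "car"}   # classes to keep

# (predicted_class, box) pairs, stand-ins for the detector's per-frame output
detections = [("person", (10, 10, 50, 80)),
              ("car", (100, 40, 60, 30)),
              ("dog", (5, 5, 20, 20))]

kept = [(cls, box) for cls, box in detections if cls in WANTED]
print([cls for cls, _ in kept])  # ['person', 'car']
```

On speed: the tracker's cost grows with the number of detections per frame, so keeping more classes slows it down roughly in proportion to the extra boxes, while the YOLO forward pass itself costs the same either way.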
Hello @yehengchen, I am running the YOLOv3 + Deep_SORT
example, but I can't find the files required to run it. How do I create them, or can you provide them? That would be a great help.
deep_sort_yolov3/counter.txt
deep_sort_yolov3/num.txt
model_data/market1501.pb
What is the command for training the yolov3-spp network?
What is the meaning of the numbers in the array feature? What does that represent?
hi
The YOLOv3 + Deep_SORT readme says:
2. Download [yolov3.weights] and place it in yolov3_sort/yolo-obj/
but at run time model_data/yolo_cc_0612.h5 is missing.
Is model_data/yolo_cc_0612.h5 just the downloaded yolov3.weights converted and placed in deep_sort_yolov3/model_data/,
or did you train your own weights separately?
The error at runtime:
tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: Input to reshape is a tensor with 43095 values, but the requested shape requires a multiple of 5577
[[{{node Reshape_3}}]]
[[concat_12/_2923]]
(1) Invalid argument: Input to reshape is a tensor with 43095 values, but the requested shape requires a multiple of 5577
[[{{node Reshape_3}}]]
I thought it was the video, but resizing to 640*480 didn't fix it.
The code location seems to be around line 116 in my copy of yolo.py, at the line starting with out_boxes, out_scores.
hi
My environment is
rtx2070, cuda10, cudnn7.4, tensorflow-gpu 1.14,
but it doesn't work.
It raises:
Failed precondition: Attempting to use uninitialized value batch_normalization_61/beta
May I ask what your environment is, or has anyone run this on an RTX card or tensorflow-gpu 1.14?
Both the .h5 and the .pb have been re-converted.
How can I measure the respective processing times in the YOLO + Deep SORT project?
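If the question is about per-stage processing time, a generic sketch with time.perf_counter works inside the frame loop; the "detector" and "tracker" callables below are placeholders, not the repo's actual API:

```python
import time

def timed(fn, *args):
    """Run fn(*args) and return (result, elapsed_seconds)."""
    t0 = time.perf_counter()
    out = fn(*args)
    return out, time.perf_counter() - t0

# stand-ins: a "detector" that doubles values and a "tracker" that sorts
_, det_s = timed(lambda frame: [b * 2 for b in frame], [1, 2, 3])
_, trk_s = timed(sorted, [3, 1, 2])
print(det_s >= 0.0 and trk_s >= 0.0)  # True
```

Wrapping the YOLO forward pass and tracker.update() separately this way shows which stage dominates the per-frame time.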
Er...
if intersect(p0, p1, line[0], line[1]) and LABELS[classIDs[i]] == 'car':
    counter += 1
I made this change hoping to count only vehicles, but with
text = "{}: {:.4f}".format(LABELS[classIDs[i]], confidences[i])
cv2.putText(frame, text, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)
when testing on my own video, a person riding a bike is recognized as a car or even a traffic light.
What could be the reason for this?
As mentioned above, I would like to use my own training set. Can you provide training code for detection and tracking?
Hello:
On Win10, I run python main.py --input input/test.mp4 --output output/test.avi --yolo yolo-obj
and get the following output:
[INFO] loading YOLO from disk...
[INFO] 1283 total frames in video
[INFO] single frame took 1.0242 seconds
[INFO] estimated total time to finish: 1314.0222
The CPU gets close to 100%, but no video window appears and the GPU does no work.
Do you know what might be causing this?
Thanks for sharing. Is there a project for real-time detection on high-resolution video?