Comments (14)
Sure. The same code can be applied to the person scenario; you just need to change the object detector so that it outputs person locations.
from 2018aicity_teamuw.
Thank you for the response. But I believe I would need to retrain the whole system (detector, ReID, and tracking) on person-specific datasets, right?
And since, as you mentioned, I need to change the object detector to output person locations, does that mean I have to modify detection/tools/infer_simple_txt.py, given that it is the script called from run.sh?
The provided pre-trained models for YOLOv2 cannot be used to detect people. You can use models pre-trained on ImageNet or MS COCO instead. All you need to do is extract the detected people from the results. We suggest trying more advanced object detectors such as YOLOv3 or Faster R-CNN; the pre-trained models provided with them should be accurate enough. The ReID and tracking parts do not depend on the object type.
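As a concrete illustration of "extract the detected people from the results": with a COCO-trained detector, this reduces to filtering detections by class label and confidence. The dictionary layout below is a made-up example for illustration, not the format this repository actually emits; adapt it to your detector's output.

```python
# Sketch: keep only "person" detections from a generic detector output.
# The dict format (class/bbox/score) is an assumption for illustration.

def filter_person_detections(detections, score_thresh=0.5):
    """Keep detections labeled 'person' above a confidence threshold."""
    return [d for d in detections
            if d["class"] == "person" and d["score"] >= score_thresh]

raw = [
    {"class": "car",    "bbox": (10, 10, 50, 30), "score": 0.9},
    {"class": "person", "bbox": (60, 20, 20, 60), "score": 0.8},
    {"class": "person", "bbox": (90, 25, 18, 55), "score": 0.3},
]
people = filter_person_detections(raw)
print(len(people))  # 1 detection survives the class + score filter
```

The downstream ReID and tracking stages then consume these filtered boxes exactly as they would consume vehicle boxes.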
Okay, great. So if I am able to change the detector to output person detections, I don't need to make further changes to the ReID and tracker, as they take their input from the detection stage, right?
Any pointers on where I have to make these changes? A more detailed reference would be appreciated.
Also, I believe I can use this code for real-time tracking, right?
For object detection, there is not much you need to change: just use the pre-trained models to generate detection results and extract the person objects from them. For ReID, we used transfer learning, i.e., the pre-trained model is only used to extract features, so no training is needed; however, we found that metric learning leads to better performance. You can refer to our latest paper at CVPR 2019 about the CityFlow dataset. We also have a better single-camera tracker that you can find here: https://github.com/ipl-uw/2019-CVPR-AIC-Track-1-UWIPL.
Since our code has been divided into separate components, you may need to integrate them into a standalone pipeline for real-time tracking.
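Since the ReID step uses a pre-trained model purely as a feature extractor, matching two detections reduces to comparing their embeddings, for instance by cosine similarity. A minimal NumPy sketch with toy placeholder vectors (real ReID embeddings would of course come from the network, not be hand-written):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(query, gallery):
    """Return (index, similarity) of the gallery embedding closest to query."""
    sims = [cosine_similarity(query, g) for g in gallery]
    return int(np.argmax(sims)), max(sims)

# Toy example: gallery entry 1 is an exact copy of the query, so it wins.
query = np.array([1.0, 0.0, 1.0])
gallery = [np.array([0.0, 1.0, 0.0]),
           np.array([1.0, 0.0, 1.0]),
           np.array([0.5, 0.5, 0.0])]
idx, sim = best_match(query, gallery)
print(idx, round(sim, 3))  # 1 1.0
```

With metric learning, the distance function itself is trained so that embeddings of the same identity score higher than those of different identities, but the matching logic stays the same.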
Can you please point me to the actual repository for the multi-camera vehicle tracking code? The Track 3/1_Multi-Camera Vehicle Tracking and Re-identification folder does not contain any code, only a Readme.md.
You can find the link to all the repositories we used here: https://github.com/zhengthomastang/2018AICity_TeamUW/tree/master/Track3
The main repository is this one: https://github.com/AlexXiao95/Multi-camera-Vehicle-Tracking-and-Reidentification
Okay, thank you so much for the response. One more thing: is it possible to train and run this on CPU only, without any GPU support?
Yes, it is possible to extract features with CPU only. You can also try more advanced pre-trained models in PyTorch, which are probably easier to run for inference on CPU.
@KunalArora Have you been able to get the multi-camera tracker to work? I am working on a similar problem and want to know what modifications are needed to get it working ASAP (apart from changing the backend detector).
Is this the result of your demo? If yes, can you please confirm whether I can extend it to multiple targets? The demo shows only one target vehicle being tracked across multiple cameras. Can you also comment on the FPS you are getting?
Yes. The demo was generated using the code in this repository, and it can be extended to multiple targets. For the 2018 challenge we only selected the targets with the highest confidence, because there were too many false positives. We didn't compute the FPS because the pipeline was broken down into multiple modules; further work is still needed to combine them into an end-to-end framework.
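The "highest confidence only" strategy mentioned above can be sketched as a simple per-frame filter over the detection stream. The detection record layout here is hypothetical, chosen just to make the idea concrete:

```python
from collections import defaultdict

def top_k_per_frame(detections, k=1):
    """Keep only the k highest-confidence detections in each frame,
    mirroring the 'highest confidence only' strategy described above."""
    by_frame = defaultdict(list)
    for det in detections:
        by_frame[det["frame"]].append(det)
    kept = []
    for dets in by_frame.values():
        dets.sort(key=lambda d: d["score"], reverse=True)
        kept.extend(dets[:k])
    return kept

dets = [
    {"frame": 0, "id": "a", "score": 0.9},
    {"frame": 0, "id": "b", "score": 0.4},  # likely a false positive
    {"frame": 1, "id": "c", "score": 0.7},
]
print([d["id"] for d in top_k_per_frame(dets)])  # ['a', 'c']
```

Raising k trades fewer missed targets for more false positives, which is the trade-off the answer above alludes to.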
@haroonrashid235
I am still working on making this work for people and on developing the end-to-end pipeline from detection to tracking.
@zhengthomastang
I would really appreciate your help in figuring out how to develop the end-to-end pipeline. A general guideline or idea would help a lot.
@KunalArora You can refer to my paper to get an idea of the workflow of multi-target multi-camera (MTMC) tracking: https://zhengthomastang.github.io/publications/CityFlow/
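At a high level, the MTMC workflow described in that paper runs per-camera tracking first, aggregates a feature per tracklet, and then associates tracklets across cameras by appearance similarity. A hedged sketch of just the cross-camera association step, using hand-written placeholder embeddings and greedy matching (real systems typically use optimal assignment, e.g. the Hungarian algorithm, plus spatio-temporal constraints):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_across_cameras(tracklets_a, tracklets_b, thresh=0.7):
    """Greedily pair tracklets from two cameras by embedding similarity.
    Each tracklet is (track_id, mean_feature). This stands in for the
    real association step, which would use optimal assignment."""
    pairs = []
    used_b = set()
    for id_a, feat_a in tracklets_a:
        scored = [(cosine(feat_a, f), id_b) for id_b, f in tracklets_b
                  if id_b not in used_b]
        if not scored:
            continue
        sim, id_b = max(scored)
        if sim >= thresh:  # only link sufficiently similar tracklets
            pairs.append((id_a, id_b))
            used_b.add(id_b)
    return pairs

cam1 = [("c1_t1", np.array([1.0, 0.0])), ("c1_t2", np.array([0.0, 1.0]))]
cam2 = [("c2_t5", np.array([0.9, 0.1])), ("c2_t6", np.array([0.1, 0.9]))]
print(match_across_cameras(cam1, cam2))
```

Matched pairs share a global identity, which is what lets a single target be followed across camera views as in the demo.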