- This implementation is based on the official AlphaPose pose estimation algorithm.
- It is an "AlphaPose" & "XGBOOST" based "Suspicious-Activity-Detection-Using-Pose-Estimation" project.
- The purpose of this project is to build a system that can detect whether someone is trying to climb a house compound wall, a fence, or a gate, or is attempting some other suspicious activity.
- The model detects these activities accurately & helps to prevent them by giving real-time feedback.
- Download the object detection model manually: the yolov3-spp.weights file from the following Drive link
- https://drive.google.com/file/d/1h2g_wQ270_pckpRCHJb9K78uDf-2PsPd/view?usp=sharing
- Download the weights file and place it into the "detector/yolo/data/" folder.
- For pose tracking, download the object tracking model manually: "JDE-1088x608-uncertainty" from the following Drive link
- https://drive.google.com/file/d/1oeK1aj9t7pTi1u70nSIwx0qNVWvEvRrf/view?usp=sharing
- Download the file and place it into the "detector/tracker/data/" folder.
- Download the " fast.res50.pth " file from following Drive Link
- https://drive.google.com/file/d/1WrvycZnVWwltSa6cjeTznEFOyNAwHEZu/view?usp=sharing
- Download the file and place it into the "pretrained_models/" folder.
- Python 3.5+
- Cython
- PyTorch 1.1+
- torchvision 0.3.0+
- Linux
- GCC<6.0, check facebookresearch/maskrcnn-benchmark#25
- Install PyTorch :-
$ !pip3 install torch==1.1.0 torchvision==0.3.0
- Git Clone :-
$ !git clone https://github.com/akshaykadam771/Suspicious-Activity-Detection-Using-Pose-Estimation.git
- Install :-
$ !export PATH=/usr/local/cuda/bin/:$PATH
$ !export LD_LIBRARY_PATH=/usr/local/cuda/lib64/:$LD_LIBRARY_PATH
$ !pip install cython
$ !sudo apt-get install libyaml-dev
$ !python setup.py build develop --user
$ !python -m pip install Pillow==6.2.1
$ !pip install -U PyYAML
- Testing with Images (put test images in AlphaPose/examples/demo/) :-
$ !python scripts/demo_inference.py --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml --checkpoint pretrained_models/fast_res50_256x192.pth --indir examples/demo/ --save_img
- Output images & a JSON file will be saved by default in the AlphaPose/examples/res folder.
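The saved JSON file lists the detected keypoints per person. As a minimal sketch, the snippet below flattens one result entry into the 34 (x, y) values that the classifier expects. It assumes AlphaPose's COCO-style output layout (a `keypoints` field holding 17 flat [x, y, score] triples); check your own output file if your version differs.

```python
import json

def flatten_keypoints(result):
    """Extract (x, y) pairs from one AlphaPose result entry,
    dropping the per-keypoint confidence score.
    Assumes COCO-style output: 17 keypoints stored as a flat
    [x1, y1, c1, x2, y2, c2, ...] list of 51 numbers."""
    kp = result["keypoints"]
    xy = []
    for i in range(0, len(kp), 3):
        xy.extend([kp[i], kp[i + 1]])  # keep x and y, skip the score
    return xy

# Tiny synthetic entry in the assumed output format (not real data)
sample = {
    "image_id": "demo.jpg",
    "category_id": 1,
    "keypoints": [float(v) for v in range(51)],
    "score": 2.9,
}
features = flatten_keypoints(sample)
print(len(features))  # 17 keypoints x 2 coordinates = 34
```

In practice you would `json.load` the whole results file and run this per detected person.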
- Testing with Videos :-
$ !python scripts/demo_inference.py --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml --checkpoint pretrained_models/fast_res50_256x192.pth --video examples/video/demo5.mp4 --outdir examples/res --save_video --gpus 0
- If you get a memory error during video testing, you can add the --sp argument to the command, which enables single processing :-
$ !python scripts/demo_inference.py --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml --checkpoint pretrained_models/fast_res50_256x192.pth --video examples/video/demo5.mp4 --outdir examples/res --save_video --gpus 0 --sp
- Drive Link :- https://drive.google.com/file/d/1sTJkWBmuE6iBi_mCAs1DJ-KR6MnoZD7-/view?usp=sharing
- This CSV file contains the 17 keypoints of the human body (17x2 = 34 columns in total, one x and one y per keypoint) for each individual person while performing these 2 activities :- 1) Climbing 2) Standing
- Action :- 0 = Climbing & 1 = Standing