gajuuzz / Human-Falling-Detect-Tracks
AlphaPose + ST-GCN + SORT.
Traceback (most recent call last):
File "D:/installedProjects/PyCharm_Projects/Human-Falling-Detect-Tracks/main.py", line 12, in <module>
from PoseEstimateLoader import SPPE_FastPose
File "D:\installedProjects\PyCharm_Projects\Human-Falling-Detect-Tracks\PoseEstimateLoader.py", line 5, in <module>
from SPPE.src.main_fast_inference import InferenNet_fast, InferenNet_fastRes50
ImportError: cannot import name 'InferenNet_fastRes50'
FutureWarning: Pass display_labels=['none', 'throw rubbish'] as keyword args. From version 1.0 (renaming of 0.25) passing these as positional arguments will result in an error
"will result in an error", FutureWarning)
The full error is:
Traceback (most recent call last):
File "E:/Human-Falling-Detect-Tracks-master/main.py", line 106, in <module>
frame = cam.getitem()
File "E:\Human-Falling-Detect-Tracks-master\CameraLoader.py", line 58, in getitem
frame = self.frame.copy()
AttributeError: 'NoneType' object has no attribute 'copy'
How do I fix this error? Any help would be gladly appreciated.
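A likely cause (an assumption, since the traceback points into CameraLoader.py): the capture source never produced a frame, so self.frame is still None when it gets copied. A minimal defensive sketch with a stand-in class, not the repo's actual CameraLoader:

```python
# Sketch of the failure mode: if the video path or camera index passed with
# -C is wrong, the reader thread never stores a frame, so self.frame stays
# None and self.frame.copy() raises AttributeError.
class SafeFrameHolder:
    def __init__(self):
        self.frame = None  # set only after a successful read

    def store(self, ok, frame):
        # mirrors cv2.VideoCapture.read(): keep the frame only when ok is True
        if ok and frame is not None:
            self.frame = frame

    def getitem(self):
        if self.frame is None:
            raise RuntimeError("No frame grabbed yet; check the camera index "
                               "or video path passed with -C.")
        return self.frame.copy()

holder = SafeFrameHolder()
holder.store(True, [1, 2, 3])  # a list stands in for a numpy image here
print(holder.getitem())        # [1, 2, 3]
```

In practice the first thing to verify is that the source actually opens (for OpenCV, that `cv2.VideoCapture(src).isOpened()` is True) before the main loop starts.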
When I run the train.py file, I get this error. How can I solve it?
Hi! Thanks for your great repo.
When I use a camera for fall detection, the accuracy is a little low. How can I increase it? Should I retrain the pose model or the ST-GCN model?
Is there a way to export this to ONNX format?
Hi! Awesome work! However, I noticed you use BCELoss as the loss in train.py. Can you explain why BCE rather than CrossEntropyLoss? Thanks.
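One plausible reason (an assumption, not the author's stated rationale): with sigmoid outputs and one-hot targets, BCELoss penalizes every class score independently, explicitly pushing wrong classes toward zero, while cross-entropy only scores the true class through the normalized probabilities. A small numeric sketch of the two losses:

```python
import math

def bce(probs, targets):
    # element-wise binary cross-entropy averaged over classes (like nn.BCELoss):
    # every class contributes, including the wrong ones being pushed toward 0
    return -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
                for p, t in zip(probs, targets)) / len(probs)

def ce(probs, targets):
    # categorical cross-entropy on already-normalized probabilities:
    # only the true class contributes
    return -sum(t * math.log(p) for p, t in zip(probs, targets))

probs = [0.8, 0.1, 0.1]      # hypothetical per-class scores for one sample
onehot = [1.0, 0.0, 0.0]
print(round(bce(probs, onehot), 4))  # 0.1446
print(round(ce(probs, onehot), 4))   # 0.2231
```

Note the BCE value also depends on the 0.1 scores of the wrong classes, whereas the CE value depends only on the 0.8 of the true class.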
torch.save(model.state_dict(), PATH)
torch.save(model_object, PATH)
When I change from saving only the model parameters to saving both the parameters and the structure, this error is reported when I run train.py.
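For context (an assumption, since the actual error text is not shown): torch.save(model_object, PATH) pickles the model's class by its module path, so loading breaks whenever that class is not importable under the same name, while a state_dict is just plain tensors. A pickle-only sketch of the safer pattern, using a stand-in class instead of a real nn.Module:

```python
import io
import pickle

class TinyModel:
    # stand-in for an nn.Module; real code would subclass torch.nn.Module
    def __init__(self):
        self.weights = {"fc.weight": [0.5], "fc.bias": [0.1]}

    def state_dict(self):
        return dict(self.weights)

    def load_state_dict(self, state):
        self.weights = dict(state)

# Recommended pattern, analogous to torch.save(model.state_dict(), PATH):
# persist only the parameters.
buf = io.BytesIO()
pickle.dump(TinyModel().state_dict(), buf)

# Loading the parameters needs no class definition at all...
buf.seek(0)
params = pickle.load(buf)

# ...and the structure is rebuilt in code, then filled from the saved state.
model = TinyModel()
model.load_state_dict(params)
```

Saving the whole object only round-trips cleanly when the loading script can import exactly the same class from exactly the same module path.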
Hi guys, how do I use the --save_out argument in main.py?
from pose_utils import motions_map
ImportError: cannot import name 'motions_map'
I can't play the video it generated.
Great repo. I ran some tests and found that NaN may appear in the code, specifically in the bbox calculation and the IoU function.
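A hedged sketch of an IoU that cannot emit NaN (a hypothetical helper, not the repo's actual function): clamp negative overlaps and areas to zero, and add a small epsilon to the denominator so degenerate zero-area boxes divide safely instead of producing 0/0:

```python
def iou(b1, b2, eps=1e-8):
    # boxes as (x1, y1, x2, y2); clamp the overlap to zero when the boxes
    # are disjoint, and pad the denominator so zero-area boxes yield 0.0
    # instead of NaN.
    ix = max(0.0, min(b1[2], b2[2]) - max(b1[0], b2[0]))
    iy = max(0.0, min(b1[3], b2[3]) - max(b1[1], b2[1]))
    inter = ix * iy
    a1 = max(0.0, b1[2] - b1[0]) * max(0.0, b1[3] - b1[1])
    a2 = max(0.0, b2[2] - b2[0]) * max(0.0, b2[3] - b2[1])
    return inter / (a1 + a2 - inter + eps)

print(round(iou((0, 0, 2, 2), (1, 1, 3, 3)), 4))  # 0.1429
print(iou((0, 0, 0, 0), (0, 0, 0, 0)))            # 0.0, not NaN
```

The same guards (clamping width/height, epsilon in divisions) apply to the bbox calculation side as well.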
Hi, thanks for this wonderful repo. I have a problem when using a web camera: after running 'python main.py -C 0', no bounding boxes are displayed and nothing is detected. How can I fix it? Thanks in advance.
Hi!
I want to ask: is the fast_res50_256x192.pth weight file trained with AlphaPose? Is tsstg-model.pth trained with ST-GCN or MMSkeleton, and is the best_model.pth weight file trained with YOLOv3? I ask because I want to train my own weight files. Thanks!
Hi GajuuzZ, I really appreciate your dataset code.
I have a question about the seq_label_smoothing code.
Am I right that sequential action frames influence model training?
For example, if I want to train on a fight action, will it be trained on the action frames from before to after the fists are extended?
Thank you in advance for your reply.
Regarding the strategy: Spatial Configuration.
Here "the average coordinate of all joints in the skeleton at a frame is treated as its gravity center" (this is the original sentence in the paper), but the code defines the gravity center as 'self.center = 13'.
When I run this project on my computer, I don't think the result is good. Will this setup affect the final accuracy?
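For reference, the paper's definition can be computed directly (a sketch; whether the fixed joint 13 approximates it depends on the keypoint layout, which I'm assuming places a hip/torso joint at that index):

```python
def gravity_center(joints):
    # joints: list of (x, y) coordinates for one frame; the paper's gravity
    # center is the mean over all joints, not a single fixed joint index
    n = len(joints)
    return (sum(x for x, _ in joints) / n,
            sum(y for _, y in joints) / n)

square = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]
print(gravity_center(square))  # (1.0, 1.0)
```

Using a central body joint instead of the true mean is a cheaper approximation; the two only diverge when the pose is strongly asymmetric, e.g. with outstretched limbs.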
change "inp_h, inp_w" into "resnet50".
Originally posted by @liqian180501 in #32 (comment)
How do I change it? And when I did that, I got: AssertionError: 256 backbone is not support yet!
from Visualizer import plot_graphs, plot_confusion_metrix
ModuleNotFoundError: No module named 'Visualizer'
The official YOLOv3-tiny trained on the COCO dataset often fails to detect people who have fallen, so I want to know how you trained yours. Did you use any data augmentation? Thanks!
Your AlphaPose outputs 17 keypoints, but ST-GCN training needs 18 keypoints (as in OpenPose).
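One common bridge between the two layouts (a sketch based on the usual convention, not necessarily this repo's exact mapping): synthesize the missing OpenPose "neck" joint as the midpoint of the two COCO shoulders, which sit at indices 5 and 6 in the COCO keypoint order:

```python
def coco17_to_18(keypoints):
    # keypoints: 17 (x, y) pairs in COCO order; append a synthetic neck
    # joint (shoulder midpoint) to reach the 18-joint OpenPose layout.
    assert len(keypoints) == 17
    l_sh, r_sh = keypoints[5], keypoints[6]
    neck = ((l_sh[0] + r_sh[0]) / 2.0, (l_sh[1] + r_sh[1]) / 2.0)
    return keypoints + [neck]

pts = [(float(i), float(i)) for i in range(17)]
out = coco17_to_18(pts)
print(len(out), out[-1])  # 18 (5.5, 5.5)
```

Any remaining index differences between the two orderings would still need a permutation, so check the exact joint order your ST-GCN graph expects.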
When I load YOLOv3, I get this error:
magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, 'v'.
I have tried torch 1.4, 1.5, and 1.6, but got the same error.
Can you give me some help? Thank you.
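A frequent cause of "invalid load key, 'v'" (an assumption about this particular case): the checkpoint was downloaded as a Git-LFS pointer, a small text file beginning with the ASCII word "version", instead of the real binary weights, so pickle trips over the letter 'v'. A quick sanity check on the file's first bytes (hypothetical helper):

```python
def looks_like_lfs_pointer(first_bytes):
    # A Git-LFS pointer file starts with b"version https://git-lfs...",
    # while a real torch checkpoint starts with a pickle or zip magic.
    return first_bytes.startswith(b"version")

# Usage sketch: check open(path, "rb").read(16); if this returns True,
# re-download the actual weight file (e.g. from the release page) rather
# than the pointer served by a plain repo clone.
print(looks_like_lfs_pointer(b"version https://"))  # True
print(looks_like_lfs_pointer(b"PK\x03\x04"))        # False
```

If the file passes this check, the mismatch is more likely a corrupted or HTML-page download, which the file size usually reveals.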
I get this error when I run train.py. Please help me.
Hi,
I am very interested in your work and want to get it running, but it seems the dataset link is invalid. Could you provide a link to share the dataset?
Dear friend, I want to train this ST-GCN on my own dataset, but I cannot use Data/create_dataset_2.py because I cannot create annot_folder = '../Data/falldata/Home/Annotation_files' # bounding box annotation for each frame. My question is: how can I create these files? Are they generated by the YOLO model (using Detection/Models), or should I find another method? If you know, please give me some advice.
Please, this project is very important to me.
It seems you use SORT for tracking; are there any references for that? Thanks!
I want the data used to train ST-GCN, such as:
'../Data/Coffee_room_new-set(labelXscrw).pkl',
'../Data/Home_new-set(labelXscrw).pkl'
Could you share your data with me? I don't know how to label it; alternatively, could I have a look at your data file template?
Thanks very much. If that's OK, please send it to [email protected].
First, thank you for your sharing. I have questions about running train.py.
data_files = ['../Data/Coffee_room_new-set(labelXscrw).pkl',
'../Data/Home_new-set(labelXscrw).pkl']
I'm wondering what the difference between the two pkl files is. What does 'Coffee' mean? Are the two pkl files created from one video or from different videos?
I can already run the create_dataset scripts 1-3.
I would really appreciate any advice you can give on running train.py.
Hi! Because the dataset link http://le2i.cnrs.fr/Fall-detection-Dataset?lang=en is dead, how can I make my own dataset? I would appreciate any advice you can give.
Hi , I have a problem confused me.
```
Loading pose model from ./Models/sppe/fast_res50_256x192.pth
Process on: video (1).avi
Process on: video (10).avi
Traceback (most recent call last):
File "/home/wangliu/.local/lib/python3.6/site-packages/pandas/core/indexes/base.py", line 2898, in get_loc
return self._engine.get_loc(casted_key)
File "pandas/_libs/index.pyx", line 70, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/index.pyx", line 101, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/hashtable_class_helper.pxi", line 1675, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas/_libs/hashtable_class_helper.pxi", line 1683, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'video'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "create_dataset_2.py", line 142, in <module>
frames_label = annot[annot['video'] == vid].reset_index(drop=True)
File "/home/wangliu/.local/lib/python3.6/site-packages/pandas/core/frame.py", line 2906, in __getitem__
indexer = self.columns.get_loc(key)
File "/home/wangliu/.local/lib/python3.6/site-packages/pandas/core/indexes/base.py", line 2900, in get_loc
raise KeyError(key) from err
KeyError: 'video'
```
When I run the Python file, this error sometimes occurs, so I debugged it and found that the first video processes normally but the second video does not work.
I'd appreciate any information someone could give me.
xs@xs:~/Human-Falling-Detect-Tracks/Data$ python3 create_dataset_2.py
Loading pose model from ../Models/sppe/fast_res50_256x192.pth
Process on: video (1).avi
Traceback (most recent call last):
File "create_dataset_2.py", line 93, in <module>
torch.tensor([[1.0]]))
File "../PoseEstimateLoader.py", line 38, in predict
result = pose_nms(bboxs, bboxs_scores, xy_img, scores)
File "../pPose_nms.py", line 102, in pose_nms
preds_pick[j], ori_pose_preds[merge_id], ori_pose_scores[merge_id], ref_dists[pick[j]])
File "../pPose_nms.py", line 226, in p_merge_fast
mask = (dist <= ref_dist)
RuntimeError: Expected object of scalar type Float but got scalar type Long for argument #2 'other'
Hello, if I want to retrain the model on different types of actions, can you provide the training files? In addition, could you describe the label format required for the training dataset?
Hi,
Sorry, this is my first time trying this. I want to test your program using "python main.py $office_demo.avi", but it fails to run and shows the following:
usage: main.py [-h] [-C CAMERA] [--detection_input_size DETECTION_INPUT_SIZE]
[--pose_input_size POSE_INPUT_SIZE]
[--pose_backbone POSE_BACKBONE] [--show_detected]
[--show_skeleton] [--save_out SAVE_OUT] [--device DEVICE]
main.py: error: unrecognized arguments: .avi
Could you guide me on how to run it? Thanks!
Wilson
What's the difference between these two pkl files, please?
When generating the training data for the TSSTG, there is no '../Data/falldata/Home/Annotation_files'. Can you share it? Thanks.
Thank you for this wonderful repo. I found that YOLO sometimes loses track of people in my implementation. Could you please tell me which dataset you used when you trained YOLO?
I do not have test.txt in Annotation_files:
Data/falldata/Home/Annotation_files/test.txt
Can you give me the dataset?