fokhruli / stgcn-rehab

This repository provides training and evaluation code for the paper "Graph Convolutional Networks for Assessment of Physical Rehabilitation Exercises" (accepted in IEEE TNSRE).

Home Page: https://fokhruli.github.io/STGCN-rehab/

License: MIT License

Language: Python 100.00%

Topics: tensorflow, deep-learning, assessment-project, graph-convolutional-networks

stgcn-rehab's Introduction

Graph Convolutional Networks for Assessment of Physical Rehabilitation Exercises

This code is the official implementation of the following works (train + eval):


Figure 1: Overview of existing methods vs. the proposed method. (a) The existing deep learning method applies a CNN to the grid structure of stacked skeleton (body-joint) data. It performs consistently only with fixed-length input and ignores the spatio-temporal topological structure arising from interactions among neighboring joints. (b) Our proposed method employs an STGCN to address the issues mentioned above. We extend the STGCN with an LSTM to extract rich spatio-temporal features and attend to different body joints (as illustrated by the colored joints) based on their role in the given exercise. This enables our method to guide users toward better assessment scores.


Figure 2: GCN-based end-to-end models using (a-b) the vanilla STGCN and (c-d) the extended STGCN for rehabilitation exercise assessment. 'TC', ⊕ and ⊙ denote temporal convolution, concatenation and element-wise multiplication, respectively. (b) and (d) illustrate the detailed components of the green STGCN block of (a) and (c), respectively.
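The ⊕ and ⊙ operations in Figure 2 can be illustrated on toy arrays (a NumPy sketch; in the actual model they are applied to feature tensors inside the network):

```python
import numpy as np

# Two toy feature maps (e.g. outputs of two branches: 2 joints, 2 channels each).
a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[5.0, 6.0], [7.0, 8.0]])

# ⊕ : concatenation along the feature axis
concat = np.concatenate([a, b], axis=-1)   # shape (2, 4)

# ⊙ : element-wise (Hadamard) multiplication, e.g. applying attention weights
hadamard = a * b                           # shape (2, 2)
```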

Data Preparation

We experimented on two skeleton-based rehabilitation datasets: KIMORE and UI-PRMD. Before training and testing, for convenient and fast data loading, the datasets should be converted to the proper format. Please download the pre-processed data from GoogleDrive and extract the files with:

cd st-gcn
unzip <path to Dataset.zip>

Requirements

Files

  • train.py : performs training for physical rehabilitation exercise assessment.
  • data_preprocessing.py : preprocesses the data collected from the dataset. Some preprocessing is mandatory before feeding the data to the network.
  • graph.py : generates the skeleton graph from the given data.
  • stgcn_lstm.py : builds the proposed ST-GCN model.
  • demo.py : performs a demo inference for a given sample.
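As a rough illustration of what graph.py produces, here is how a skeleton adjacency matrix is typically built and normalized for a GCN. The joint count and edge list below are made up for the sketch; the repository's graph follows the datasets' actual skeleton layout:

```python
import numpy as np

# Illustrative 5-joint chain (hip -> spine -> neck -> head, spine -> shoulder);
# the real edge list in graph.py follows the dataset skeleton and will differ.
edges = [(0, 1), (1, 2), (2, 3), (1, 4)]
num_joints = 5

A = np.zeros((num_joints, num_joints))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0          # undirected skeleton edges
A += np.eye(num_joints)              # self-loops, as is standard for GCNs

# Symmetric normalization D^{-1/2} A D^{-1/2} used by graph convolutions
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_norm = D_inv_sqrt @ A @ D_inv_sqrt
```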

Running instructions

You can use the following commands to run the demo.

python demo.py [--skeleton data ${PATH_TO_DATA}] [--label ${PATH_TO_Label}]

# Alternative way
python demo.py

The output is the predicted label for the demo exercise.

To train the model, you first have to download the dataset from the link above. The data and labels of an exercise have to be inside one folder. Then run train.py to train the model. You can change the optimizer, learning rate and other parameters by editing train.py. The total number of training epochs is 2000; the learning rate is initialized to 0.0001. You can train the model with the following command.

python train.py --ex Kimore_ex5 --epoch 2000 --batch_size 10
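Schematically, the training setup described above (Adam optimizer, learning rate 0.0001, MSE-style regression to a score) looks like the following Keras sketch. The LSTM/Dense stub and input shape are placeholders, not the actual ST-GCN + LSTM model built in stgcn_lstm.py:

```python
import tensorflow as tf

# Stand-in for the ST-GCN + LSTM model from stgcn_lstm.py.
# Input shape (frames, features) is illustrative only.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(100, 75)),
    tf.keras.layers.Dense(1, activation="linear"),  # regression head for the score
])

# Hyperparameters stated in the README: Adam, learning rate = 0.0001
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="mse")

# model.fit(train_x, train_y, epochs=2000, batch_size=10)  # as in train.py
```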

Notes on experiment

(Figure: guidance visualization; see the project page for the image.)

Citation

If you find our research helpful, please consider citing this work:

@ARTICLE{deb-2022-graph,
  author={Deb, Swakshar and Islam, Md Fokhrul and Rahman, Shafin and Rahman, Sejuti},
  journal={IEEE Transactions on Neural Systems and Rehabilitation Engineering}, 
  title={Graph Convolutional Networks for Assessment of Physical Rehabilitation Exercises}, 
  year={2022},
  volume={30},
  number={},
  pages={410-419},
  doi={10.1109/TNSRE.2022.3150392}}

Acknowledgment

We thank the authors and contributors of the original GCN implementation.

Contact

For any questions, feel free to contact:

Swakshar Deb     : [email protected]
Md Fokhrul Islam : [email protected]


stgcn-rehab's Issues

Different data splits lead to different results

Dear fokhruli:
Thanks for your code; it really is an excellent project. But when I split the dataset Kimore_ex5\TrainX.csv differently and train the ST-GCN model, I get different results. I think this is because of a lack of data: for example, there are only 4 motions labeled 12.66666666, and if I put them all into the test set, the training result is bad. I don't know if I am right, so I am looking forward to your reply. Have a good day!

What are the specific requirements for train.py?

I've installed the libraries mentioned in the requirements. But when I run train.py, the speed is too slow to endure. Specifically, it needs around 60 s to finish an epoch, so it requires more than 12 h to complete one training session. I checked the GPU and found that GPU utilization is around 20%, which is relatively stable. (Fig. below)

(screenshot: GPU status)

I'm wondering whether you have experienced this, and if so, how did you cope with it? If not, could you tell me the specific requirements for train.py, e.g. the versions of CUDA and cuDNN, the number of GPUs, the CPU, and any other specific computer configuration needed?

Thank you!
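Low GPU utilization often points to an input-pipeline or CPU bottleneck rather than the model itself. A generic first step (not specific to this repository) is to confirm that TensorFlow actually sees the GPU:

```python
import tensorflow as tf

# List GPUs visible to TensorFlow and check whether this build has CUDA support.
gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus)
print("Built with CUDA support:", tf.test.is_built_with_cuda())
```

If the list is empty, the CUDA/cuDNN versions do not match the installed TensorFlow build and training falls back to the CPU.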

General questions

  • How is the data preprocessing for the KIMORE dataset done? I would like to know so we can adapt it for other data if possible.
  • I noticed that data_preprocessing.py imports Train_X.csv and Train_Y.csv; is it possible to share the link to the processed-data.zip?
  • The GoogleDrive link on GitHub is not working.
  • For X_train, do you only use the joint positions?
  • In the KIMORE dataset there are 5 exercises; does this mean that there is a separately trained GCN model for each exercise (5 models for 5 exercises in total)?
  • May I know what the pretrained model (best_model.hdf5) provided on GitHub was trained on? (All the instances in the KIMORE dataset for one exercise?)

Why use standard normalization on the y_label (score) and a linear activation in the final dense layer?

I am puzzled by your use of standard normalization on the y_label (score), as well as the linear activation function in the final dense layer (as shown in the picture below). Given that this is a score prediction problem, and scores are limited to [0, 1] on the UI-PRMD dataset or [0, 50] on the KIMORE dataset, it is possible that the final predicted score could exceed these limits. Is there a reason for this implementation that I may have missed?

(code screenshots: final Dense layer and standard normalization)
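The concern above can be reproduced with a small sketch (made-up score values; the repository's exact normalization code may differ): de-normalizing a linear-head output can indeed exceed the valid score range unless it is clipped afterwards.

```python
import numpy as np

# Toy scores in the KIMORE range [0, 50] (made-up values).
y = np.array([10.0, 25.0, 40.0, 50.0])
mu, sigma = y.mean(), y.std()
y_norm = (y - mu) / sigma          # what the network is trained to regress

# A raw output of the linear head can be any real number...
pred_norm = 2.5
pred = pred_norm * sigma + mu      # de-normalized prediction, unbounded

# ...so clipping to the valid score range is needed after de-normalization.
pred_clipped = float(np.clip(pred, 0.0, 50.0))
```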

self_attention

I want to know where the self-attention was added. Is the self-attention matrix computed using the bias matrix?
