
Gesture Recognition by Instantaneous Surface EMG Images

This repo contains the code for the experiments in the paper: Weidong Geng, Yu Du, Wenguang Jin, Wentao Wei, Yu Hu, Jiajun Li. "Gesture recognition by instantaneous surface EMG images." Scientific Reports 6 (2016).

Please see http://zju-capg.org/myo for details.

Requirements

  • A CUDA compatible GPU
  • Ubuntu 14.04 or any other Linux/Unix that can run Docker
  • Docker
  • Nvidia Docker

Quick Usage

The following commands will (1) pull the Docker image (see docker/Dockerfile for details); (2) train ConvNets on the training sets of NinaPro DB1, CapgMyo DB-a and CSL-HDEMG, respectively; and (3) test the trained ConvNets on the test sets.

Navigate to the downloaded copy of this repo, then run the following commands to use the images from Docker Hub:

mkdir .cache
# put NinaPro DB1 in .cache/ninapro-db1
# put CapgMyo DB-a in .cache/dba
# put CSL-HDEMG in .cache/csl
docker pull lif3line/sigr:latest
sudo scripts/trainsrep.sh
sudo scripts/testsrep.sh

Building from Source

Alternatively, you can rebuild the images from the Dockerfiles in this repo:

mkdir .cache
# put NinaPro DB1 in .cache/ninapro-db1
# put CapgMyo DB-a in .cache/dba
# put CSL-HDEMG in .cache/csl
cd docker/mxnet
sudo docker build .
# Note #ID of image on completion
cd ..

Edit the Dockerfile in this directory, replacing FROM lif3line/mxnet:latest with FROM #ID:latest, then build:

sudo docker build .
# Note #ID of this new image
cd ..
cd scripts

Edit runsrep, replacing lif3line/sigr:latest with #ID:latest, then run:

cd ..
sudo scripts/trainsrep.sh
sudo scripts/testsrep.sh

Notes

Training on NinaPro and CapgMyo will take 1 to 2 hours depending on your GPU; training on CSL-HDEMG will take several days. You can accelerate training and testing by distributing different folds across different GPUs with the gpu parameter.
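A round-robin assignment of folds to GPUs can be sketched as below. This is only an illustration of the idea: the script name and the --fold/--gpu parameter spellings are assumptions, not the repo's actual CLI, so adapt them to the real scripts.

```python
# Hypothetical sketch: assign cross-validation folds to GPUs round-robin
# and emit one training command per fold. The "--fold" and "--gpu"
# parameter names are assumptions for illustration only.

def fold_commands(n_folds, n_gpus, script="scripts/trainsrep.sh"):
    """Return one shell command per fold, cycling over the available GPUs."""
    return [
        f"{script} --fold {fold} --gpu {fold % n_gpus}"
        for fold in range(n_folds)
    ]

for cmd in fold_commands(n_folds=4, n_gpus=2):
    print(cmd)
```

Launching each printed command in its own shell keeps every GPU busy with a different fold instead of running the folds sequentially.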

NinaPro DB1 should be segmented according to the gesture labels and stored in Matlab format as follows: .cache/ninapro-db1/data/sss/ggg/sss_ggg_ttt.mat contains a field data (frames x channels) representing trial ttt of gesture ggg of subject sss. All indices are zero-based, and gesture 0 is the rest posture. For example, .cache/ninapro-db1/data/000/001/000_001_000.mat is the 0th trial of the 1st gesture of the 0th subject, and .cache/ninapro-db1/data/002/003/002_003_004.mat is the 4th trial of the 3rd gesture of the 2nd subject. You can download the prepared dataset from http://zju-capg.org/myo/data/ninapro-db1.zip or prepare it yourself.
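The layout above can be captured in a small helper that builds the expected path from zero-based subject, gesture and trial indices (a sketch only; actually reading the data field from a .mat file would additionally need something like scipy.io.loadmat, not shown here):

```python
def ninapro_db1_path(subject, gesture, trial, root=".cache/ninapro-db1/data"):
    """Build the expected path sss/ggg/sss_ggg_ttt.mat under root,
    with zero-padded, three-digit, zero-based indices."""
    s, g, t = f"{subject:03d}", f"{gesture:03d}", f"{trial:03d}"
    return f"{root}/{s}/{g}/{s}_{g}_{t}.mat"

print(ninapro_db1_path(2, 3, 4))
# -> .cache/ninapro-db1/data/002/003/002_003_004.mat
```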

License

Licensed under a GPL v3.0 license.

Misc

Thanks to the DMLC team for their great MXNet!
