Combined-Gesture-Recognition-using-Wearable-Devices-with-Inertial-Measurement-Units

Introduction:


Several basic motions are defined in advance, and different combined gestures are composed from these basic motions. In theory, each basic gesture only needs to be performed once.
A combined gesture is segmented into several basic gestures, which are then recognized individually.
Gestures are segmented using a threshold together with the angle.
Recognition uses DTW (Dynamic Time Warping).

* Segmentation method: with deep learning we would likely need ground-truth time points for where each combined gesture should be cut (hard to obtain, though a camera/webcam could provide them), and the computational cost would also be higher. The basic gestures we define are simple, so we segment with a threshold-based rule plus the angle.
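A minimal sketch of the threshold-based part of the segmentation (the function name, threshold, and gap length are illustrative assumptions, not the values used in this repo; the angle cut is not shown):

```python
import numpy as np

def segment_by_threshold(gyro_norm, rest_threshold=0.3, min_gap=10):
    """Split a stream of gyroscope magnitudes into motion segments.

    gyro_norm      : 1-D array of |gyro| per sample
    rest_threshold : below this the wrist is treated as resting (illustrative value)
    min_gap        : number of resting samples that ends a basic gesture
    Returns a list of (start_index, end_index) pairs, one per basic gesture.
    """
    active = np.asarray(gyro_norm) > rest_threshold  # True while the hand is moving
    segments, start, quiet = [], None, 0
    for i, moving in enumerate(active):
        if moving:
            if start is None:
                start = i                            # a new basic gesture begins
            quiet = 0
        elif start is not None:
            quiet += 1
            if quiet >= min_gap:                     # pause long enough: close the segment
                segments.append((start, i - quiet))
                start, quiet = None, 0
    if start is not None:                            # stream ended mid-gesture
        segments.append((start, len(active) - 1))
    return segments
```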

* Recognition could be swapped for another ML model. DTW is used because it is well suited to comparing sequence data of different lengths; other ML models would likely require collecting more data, i.e. performing each basic gesture many more times.
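For reference, a generic DTW distance (not the exact implementation in this repo) shows why templates and test segments of different lengths can be compared directly:

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic O(n*m) dynamic time warping distance between two sequences
    of feature vectors (e.g. accelerometer / gyroscope samples)."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(np.asarray(seq_a[i - 1]) - np.asarray(seq_b[j - 1]))
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    return cost[n, m]

def classify(segment, patterns):
    """Pick the stored basic-gesture pattern with the smallest warping distance."""
    return min(patterns, key=lambda name: dtw_distance(segment, patterns[name]))
```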

Requirement:


*** Runs on Python 2.7
sudo apt-get install python-xlib
pip install pynput==1.2
sudo apt-get install python-pip libglib2.0-dev
sudo pip install bluepy
sudo apt-get install libv4l-dev
pip install v4l2capture
pip install Pillow
sudo apt-get install python-imaging
sudo apt-get install python-opencv
sudo apt-get install python-qt4
pip install pyqtgraph

File Description:


CGRrealtime.py : Recognizes gestures in real time. Loads the DTW pattern: JayDTW.dat

QTRealLine.py : Draws real-time curves with PyQt & pyqtgraph for accelerometer data, gyroscope data, and Euler angles (yaw, roll, pitch)

QTRealTimeScatter.py : Plots the scatter data to check whether the magnetometer is calibrated successfully

QTwebcam.py : Draws the real-time curve and captures video from the webcam

openglEuler.py : Combines OpenGL and PyQt. Can demo the gimbal lock problem.
It can show data transformed from the sensor frame into the global frame by multiplying with the rotation matrix

Mahony.py : a Python library for the Madgwick and Mahony filters, rephrased from the C code by x-io Technologies (a simplified sketch of the update step follows this list)

*** Maybe I will write a document to elaborate on Madgwick and the math of rotation.
The template for real-time drawing can be found in the other GitHub repository
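As a rough illustration (not the exact code in Mahony.py, which follows the x-io Technologies implementation), an IMU-only, proportional-only Mahony-style update and the sensor-to-global rotation could look like this; the gain kp and the handling of the integral term are simplified:

```python
import numpy as np

def mahony_update_imu(q, gyro, acc, dt, kp=1.0):
    """One proportional-only Mahony update (the full filter also has an integral gain Ki).
    q    : orientation quaternion [w, x, y, z], sensor frame -> global frame
    gyro : angular rate in rad/s, acc : accelerometer reading (any consistent unit)
    Returns the updated, normalized quaternion."""
    w, x, y, z = q
    a = np.asarray(acc, dtype=float)
    gyro = np.asarray(gyro, dtype=float)
    norm = np.linalg.norm(a)
    if norm > 0:
        a /= norm
        # Gravity direction predicted by the current orientation estimate
        v = np.array([2 * (x * z - w * y),
                      2 * (w * x + y * z),
                      w * w - x * x - y * y + z * z])
        e = np.cross(a, v)          # error between measured and predicted gravity
        gyro = gyro + kp * e        # proportional feedback onto the gyro rates
    gx, gy, gz = gyro
    # Quaternion derivative: q_dot = 0.5 * q (quaternion product) [0, gx, gy, gz]
    q_dot = 0.5 * np.array([-x * gx - y * gy - z * gz,
                             w * gx + y * gz - z * gy,
                             w * gy - x * gz + z * gx,
                             w * gz + x * gy - y * gx])
    q = np.asarray(q, dtype=float) + q_dot * dt
    return q / np.linalg.norm(q)

def sensor_to_global(q, vec):
    """Rotate a sensor-frame vector into the global frame using the rotation
    matrix built from the quaternion (the transform openglEuler.py visualizes)."""
    w, x, y, z = q
    R = np.array([[1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
                  [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
                  [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)]])
    return np.dot(R, np.asarray(vec, dtype=float))
```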

Offline:


Offline/Jay/ : saves all motion data


QTofflineRealLine.py : Creates the DTW pattern and tests the accuracy on the motion data


Issue:


For some unknown reason the webcam cannot be started successfully.
Maybe we can try the GitHub project python-v4l2capture instead of installing the v4l2capture package with pip: not tried yet.
Example code: http://www.morethantechnical.com/2016/03/04/python-opencv-capturing-from-a-v4l2-device/
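A possible workaround (also not tried here) is to bypass v4l2capture entirely and read the V4L2 device through OpenCV's own capture interface, in the spirit of the example code linked above:

```python
import cv2

cap = cv2.VideoCapture(0)            # /dev/video0 through OpenCV's capture backend
if not cap.isOpened():
    raise RuntimeError("cannot open the webcam")

while True:
    ok, frame = cap.read()           # frame is a BGR numpy array
    if not ok:
        break
    cv2.imshow("webcam", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):   # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```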

Some ideas of what I want to do:


Use a camera to segment continuous motion and mark the time points of each cut, then use those time points as ground truth.
Take the sensor data as input and train an LSTM, so that the LSTM can segment continuous motion from the sensor data alone.

Difficulty: how to synchronize the sensor data with the video.
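A rough sketch of that idea (hypothetical, not implemented in this repo; layer sizes and window length are illustrative), using tf.keras: an LSTM that labels every sensor sample as "inside a gesture" or "resting", trained against the camera-derived cut points:

```python
import tensorflow as tf

# X : (num_windows, timesteps, channels) raw IMU windows (acc + gyro = 6 channels)
# y : (num_windows, timesteps, 1) per-sample labels from the camera ground truth,
#     1 while a gesture is being performed and 0 while resting.
timesteps, channels = 200, 6

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(timesteps, channels)),
    tf.keras.layers.LSTM(64, return_sequences=True),          # one output per timestep
    tf.keras.layers.TimeDistributed(
        tf.keras.layers.Dense(1, activation="sigmoid")),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X, y, epochs=30, batch_size=16)
# At inference time the rising/falling edges of the predicted 0/1 sequence give the
# segmentation time points, replacing the threshold/angle cut.
```

This also makes the synchronization issue above concrete: the per-sample labels are only valid if the sensor timestamps and the video timestamps are aligned.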
