
emotionchallenge's Introduction

Citation

If you use these models or code in your research, please cite:

@inproceedings{guo2017multi,
  title={Multi-modality Network with Visual and Geometrical Information for Micro Emotion Recognition},
  author={Guo, Jianzhu and Zhou, Shuai and Wu, Jinlin and Wan, Jun and Zhu, Xiangyu and Lei, Zhen and Li, Stan Z},
  booktitle={Automatic Face \& Gesture Recognition (FG 2017), 2017 12th IEEE International Conference on},
  pages={814--819},
  year={2017},
  organization={IEEE}
}

@article{guo2018dominant,
  title={Dominant and Complementary Emotion Recognition from Still Images of Faces},
  author={Guo, Jianzhu and Lei, Zhen and Wan, Jun and Avots, Egils and Hajarolasvadi, Noushin and Knyazev, Boris and Kuharenko, Artem and Jacques, Julio CS and Bar{\'o}, Xavier and Demirel, Hasan and others},
  journal={IEEE Access},
  year={2018},
  publisher={IEEE}
}

For final evaluation

Submission

First, generate the cropped and aligned data for the test challenge dataset. Change to the crop_align directory and run

python landmark.py

The cropped and aligned 224x224 test images (final evaluation phase data) will be placed in $ROOT/data/face_224.

Then change to the cnn directory and run

python extract.py

It loads the preprocessed data and the Caffe model, and writes the predicted labels for the test data to predictions.txt and predictions.zip. All submission details are handled by the script.

Some directory paths in this repo may be confusing, so be careful; contact me if any questions occur.

The trained Caffe model is only an experimental model; it may not achieve the best performance in this challenge.

Finally, upload predictions.zip in the submission window.

Introduction

We use Dlib for face and landmark detection, use the landmarks for face cropping and alignment, and then use Caffe to train a CNN on the landmarks and cropped images for the facial expression recognition task.

Pipeline

Preprocess

First, run landmark.py to get landmarks for all original images, then build the crop_align binary and run crop_align.py to produce all the 224x224 images.

Build crop_align

cd crop_align
mkdir build
cd build
cmake ..
make

All the preprocessed data except the images are in the data directory.

Training

Change to the cnn directory and run prepare_data.py to prepare the training, validation and test data. Then run train_val.sh to start training.

Extract(Test)

Run extract.py to generate the results; the input is the test images and their landmark offset info.
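The landmark file read by the extract step appears to hold, per line, an image path followed by 136 landmark values (68 x/y pairs). A minimal parsing sketch under that assumption; `parse_record` is a hypothetical helper, not the repo's API:

```python
def parse_record(line):
    # One line of test_ld.txt (assumed format): an image path followed by
    # 136 landmark values, i.e. 68 (x, y) pairs.
    fields = line.split()
    path = fields[0]
    landmark = [float(v) for v in fields[1:]]
    assert len(landmark) == 136, "expected 68 (x, y) landmark pairs"
    return path, landmark

# toy usage with a fabricated line in the assumed format
line = "../data/face_224/104/Sample_104_001.JPG " + " ".join(["0.0"] * 136)
path, landmark = parse_record(line)
```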

Method

We use the landmark offsets together with the image to do this task. In detail, the landmark offset is computed by subtracting each identity's mean landmark from the 224x224 image's landmark, and we concatenate this feature with the modified AlexNet's last output feature. We also change the softmax loss to a hinge loss, which gives a slightly better result.
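The offset-and-concatenate step described above can be sketched as follows; `mean_landmark` (the per-identity mean landmark on the 224x224 crops), `cnn_feat` (the last output feature of the modified AlexNet), and the 4096 feature width are stand-in assumptions, not the repo's actual variables:

```python
import numpy as np

def offset_feature(landmark_224, mean_landmark, cnn_feat):
    # Landmark offset: the 224x224 image's landmarks minus the identity's
    # mean landmarks, as a flat (136,) vector of 68 (x, y) pairs.
    offset = landmark_224 - mean_landmark
    # Concatenate the offset with the CNN's last output feature; the fused
    # vector is what the final (hinge-loss) classifier sees.
    return np.concatenate([cnn_feat, offset])

# toy usage with random stand-ins for real data
landmark = np.random.rand(136) * 224
mean_lm = np.random.rand(136) * 224
feat = np.random.rand(4096)          # assumed feature width, for illustration
fused = offset_feature(landmark, mean_lm, feat)
```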

More details are in fact_sheets.tex.

emotionchallenge's People

Contributors

cleardusk


emotionchallenge's Issues

FG2020 Compound Emotion Challenge participant

I am a participant in the FG2020 Compound Emotion challenge. Do you have the paper template from the previous competition? Also, the unzip password for the final test set has not been released, so could you tell me what the workflow of the final test phase was in the previous competition?

SyntaxError: invalid syntax:: caffe train -solver solver.prototxt -gpu 3 2>&1 | tee final_03051346.log

So, I have a couple of questions and I really appreciate if you could help:

  1. Do you have pretrained model in which I could simply feed in an image and get the emotion out of your system? if that is the case where should I feed in the image?

  2. How are you doing the training process? I don't get where the path to the training dataset is. It is not mentioned in your readme, nor is it mentioned which dataset you are using for training. Can you please share that information?

Thanks a lot,
Mona

Here are all the steps I took, but they don't really make sense to me because I don't know how to feed in a test image or how to set the path to the training images:

[jalal@goku cs585]$ cd EmotionChallenge/
[jalal@goku EmotionChallenge]$ ls
cnn  crop_align  data  fact_sheets.pdf  fact_sheets.tex  models  readme.md
[jalal@goku data]$ file train_data
train_data: broken symbolic link to `/data/gjz/Training'
[jalal@goku data]$ cd ..
[jalal@goku EmotionChallenge]$ cd crop_align/
[jalal@goku crop_align]$ mkdir build
[jalal@goku crop_align]$ cd build/
[jalal@goku build]$ cmake ..
-- The C compiler identification is GNU 4.8.5
-- The CXX compiler identification is GNU 4.8.5
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- OpenCV:         /scratch/sjn-p2/anaconda/anaconda2/include/opencv/scratch/sjn-p2/anaconda/anaconda2/include
-- Configuring done
-- Generating done
-- Build files have been written to: /scratch/mona/download/cs585/EmotionChallenge/crop_align/build
[jalal@goku build]$ make
Scanning dependencies of target crop_align
[ 33%] Building CXX object CMakeFiles/crop_align.dir/crop_align.cpp.o
[ 66%] Building CXX object CMakeFiles/crop_align.dir/imtransform.cpp.o
[100%] Linking CXX executable crop_align
[100%] Built target crop_align
[jalal@goku build]$ cd ..
[jalal@goku crop_align]$ ls
build  CMakeLists.txt  crop_align.cpp  crop_align.py  imtransform.cpp  imtransform.h  __init__.py  landmark.py  util.py
[jalal@goku crop_align]$ cd ..
[jalal@goku EmotionChallenge]$ cd cnn
[jalal@goku cnn]$ ls
data_layer.py  deploy.prototxt  extract.py  landmark_224.py  net.prototxt  prepare_data.py  solver.prototxt  train_val.sh  util.py
[jalal@goku cnn]$ python prepare_data.py 
[jalal@goku cnn]$ python train_val.sh 
  File "train_val.sh", line 3
    caffe train -solver solver.prototxt -gpu 3 2>&1 | tee final_03051346.log
              ^
SyntaxError: invalid syntax

Additionally, I am not sure what the line of code that is causing the error is trying to do, or how to fix it.
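For what it's worth, the traceback above is what happens when a shell script is fed to the Python interpreter: `python train_val.sh` asks Python to parse shell syntax. A small illustration of why that line cannot parse as Python (running the script with a shell instead, e.g. `bash train_val.sh`, avoids this):

```python
# The first command in train_val.sh is shell syntax, not Python:
line = "caffe train -solver solver.prototxt -gpu 3 2>&1 | tee final_03051346.log"
try:
    # This is effectively what `python train_val.sh` does with the file's contents.
    compile(line, "train_val.sh", "exec")
    parsed = True
except SyntaxError:
    parsed = False   # Python rejects it, matching the traceback above
```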

TypeError: object of type 'map' has no len()

Using Python 2.7 I get this error:

(py2emotion) [jalal@goku crop_align]$ /scratch/sjn-p2/anaconda/anaconda2/bin/python landmark.py
Traceback (most recent call last):
  File "landmark.py", line 7, in <module>
    import dlib
  File "/scratch/sjn-p2/anaconda/anaconda2/lib/python2.7/site-packages/dlib/__init__.py", line 1, in <module>
    from .dlib import *
ImportError: libcudnn.so.6: cannot open shared object file: No such file or directory
(py2emotion) [jalal@goku crop_align]$ which conda
/scratch/anaconda3/envs/py2emotion/bin/conda
(py2emotion) [jalal@goku crop_align]$ source deactivate py2emotion
[jalal@goku crop_align]$ /scratch/sjn-p2/anaconda/anaconda2/bin/python landmark.py
Traceback (most recent call last):
  File "landmark.py", line 7, in <module>
    import dlib
  File "/scratch/sjn-p2/anaconda/anaconda2/lib/python2.7/site-packages/dlib/__init__.py", line 1, in <module>
    from .dlib import *
ImportError: libcudnn.so.6: cannot open shared object file: No such file or directory
[jalal@goku crop_align]$ /scratch/sjn-p2/anaconda/anaconda2/bin/conda install cudnn=6.0
Fetching package metadata .............
Solving package specifications: .

UnsatisfiableError: The following specifications were found to be in conflict:
  - caffe-gpu -> cudnn ==5.1
  - cudnn 6.0*
Use "conda info <package>" to see the dependencies for each package.

[jalal@goku crop_align]$

Using Python 3, I get a different error: python landmark.py runs without error, but the command following it fails:

[jalal@goku crop_align]$ python landmark.py
[jalal@goku crop_align]$ cd ..
[jalal@goku EmotionChallenge]$ ls
cnn  crop_align  data  fact_sheets.pdf  fact_sheets.tex  models  readme.md
[jalal@goku EmotionChallenge]$ cd cnn/
[jalal@goku cnn]$ ls
data_layer.py  deploy.prototxt  extract.py  landmark_224.py  net.prototxt  prepare_data.py  __pycache__  solver.prototxt  train_val.sh  util.py
[jalal@goku cnn]$ python extract.py
Traceback (most recent call last):
  File "extract.py", line 78, in <module>
    submit()
  File "extract.py", line 74, in submit
    extract('../models/final.caffemodel')
  File "extract.py", line 58, in extract
    assert len(landmark) == 136
TypeError: object of type 'map' has no len()

I would appreciate help solving, preferably, the error under Python 3, or otherwise the one under Python 2.7. Thanks.
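The `TypeError` above is a Python 2 vs 3 difference: in Python 3, `map()` returns a lazy iterator with no `len()`. A minimal illustration of the failure and the usual fix (materializing the map into a list before measuring it):

```python
landmark_strs = ["1.0", "2.0", "3.0"]

# Python 3: map() returns an iterator, so len() raises TypeError,
# which is exactly the error in the traceback above.
lazy = map(float, landmark_strs)
try:
    len(lazy)
    has_len = True
except TypeError:
    has_len = False

# Fix: wrap the map in list() before taking its length.
landmark = list(map(float, landmark_strs))
```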

crop and align doesn't work

So, I get this error for Python 3:

[jalal@goku cnn]$ python extract.py
Traceback (most recent call last):
  File "extract.py", line 78, in <module>
    submit()
  File "extract.py", line 74, in submit
    extract('../models/final.caffemodel')
  File "extract.py", line 61, in extract
    pred = classifier.predict(img, landmark)
  File "extract.py", line 32, in predict
    self.net.blobs['data'].data[0, ...] = image_array.transpose((2, 0, 1))
AttributeError: 'NoneType' object has no attribute 'transpose'

Looking at the code:

def extract(model_file):
    classifier = Classifier(model_file=model_file,
                            deploy_file='deploy.prototxt')
    records = open('../data/test_ld.txt').read().strip().split('\n')
    # print(records[:1])
    predicts = []
    for rec in records:
        rec = rec.split()
        fp = rec[0]
        # print(fp)
        landmark = rec[1:]
        landmark = map(float, landmark)
        # print(landmark)
        assert len(list(landmark)) == 136

It is looking at this file:
../data/test_ld.txt

Looking at this file, I have entries like
../data/face_224/104/Sample_104_001.JPG
so I expect to see a folder named 104 inside the face_224 folder. However, there is no such folder, only a readme.md file which says "224x224 size image stores here".

[jalal@goku data]$ cd face_224/
[jalal@goku face_224]$ ls
readme.md

So I am confused about what has gone wrong and how to fix it.
Any suggestion is really appreciated. I had no problem with the first step which was done in the crop_align folder:

[jalal@goku crop_align]$ python landmark.py
[jalal@goku crop_align]$

From what I understand, the crop didn't work: you state that the cropped images should end up in the data/face_224 folder, but there's nothing in that folder.

Any help is really appreciated.
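In cases like this, where an imread-style loader silently returns `None` for a missing image and the crash only surfaces later as a `NoneType` error, a guarded loader fails fast at the real cause. A sketch under the assumption that the loader (e.g. `cv2.imread`) returns `None` on failure; `load_image_checked` is a hypothetical helper, not part of the repo:

```python
import os

def load_image_checked(path, loader):
    # imread-style loaders return None instead of raising when a file is
    # missing or unreadable; check explicitly so the error points at the cause.
    if not os.path.exists(path):
        raise FileNotFoundError(
            "image not found: %s (did crop_align write into data/face_224?)" % path)
    img = loader(path)
    if img is None:
        raise IOError("loader could not decode image: %s" % path)
    return img
```

Used as, say, `img = load_image_checked(fp, cv2.imread)` before calling `classifier.predict`, this turns the late `AttributeError` into an immediate, descriptive error about the missing crop output.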
