

lunzueta commented on July 30, 2024

Hi @ktak199, I just did the same thing today and also saw that the error was much higher than in the case of FLIC. I guess we should check in more detail which parameters the original paper uses (https://arxiv.org/pdf/1312.4659v3.pdf). I'm now training on MPII with the same default parameters as for FLIC to see what happens, and I'll come back to training/testing on LSP afterwards. I'll tell you if I get better results after tuning the parameters; please let me know if you have better luck tuning them too.


kazunaritakeichi commented on July 30, 2024

Hi @lunzueta, thank you! OK, I'll also try it and let you know!


lunzueta commented on July 30, 2024

Hi @ktak199. This time I tested with MPII and the default parameters. The tests have less error in general than in the case of LSP, but the results are still quite poor compared to FLIC. So I guess that in both cases dataset-specific parameters should be used. During training I observed that in both cases the model tended to overfit quite quickly.


kazunaritakeichi commented on July 30, 2024

Hi @lunzueta. One way to fight overfitting may be tuning the dropout parameters: http://stats.stackexchange.com/questions/109976/in-convolutional-neural-networks-how-to-prevent-the-overfitting


lunzueta commented on July 30, 2024

@ktak199 Dropout is already included in the implementation, with the same ratio mentioned in the paper:
h = F.dropout(F.relu(self.fc6(h)), train=self.train, ratio=0.6)
h = F.dropout(F.relu(self.fc7(h)), train=self.train, ratio=0.6)
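For anyone who wants to experiment with stronger regularization, the ratio is the only knob to turn here. A minimal sketch of the change, against the same old-API Chainer calls; the 0.9 value is just an illustrative guess, not something taken from the paper:

    # Hypothetical tweak: raise the dropout ratio on both fully connected layers.
    h = F.dropout(F.relu(self.fc6(h)), train=self.train, ratio=0.9)
    h = F.dropout(F.relu(self.fc7(h)), train=self.train, ratio=0.9)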

Now I'm training on LSP with the following parameter changes:

  • crop_pad_sup=1.0 -> I think this is the σ parameter mentioned in the paper, which is set to 1 for LSP (see the padding sketch at the end of this comment)
  • lcn=0 -> in other contexts I found that this kind of local contrast normalization made things worse rather than better, so I'm deactivating it to see what happens
  • lr=0.0001 -> in the paper they say this is the most important parameter to tune; I'm changing the value in a similar way to what I've done in other contexts, to see what happens

For now, this training gives the following plot:
[plot: log.png]
It looks a bit better than the previous training, but not good enough yet...
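As a rough illustration of the σ-style padding mentioned in the first bullet: the exact semantics of crop_pad_inf/crop_pad_sup in this repo are my assumption (enlarge the tight joint bounding box by a factor before cropping), so treat this as a guess, not the repo's actual code:

    import numpy as np

    def padded_crop(image, joints, pad=1.5):
        """Crop around the tight joint bounding box, enlarged by `pad` (a sketch)."""
        x0, y0 = joints.min(axis=0)            # joints is an (N, 2) array of (x, y)
        x1, y1 = joints.max(axis=0)
        cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        w, h = (x1 - x0) * pad, (y1 - y0) * pad
        left, top = int(max(cx - w / 2, 0)), int(max(cy - h / 2, 0))
        right = int(min(cx + w / 2, image.shape[1]))   # clamp to image width
        bottom = int(min(cy + h / 2, image.shape[0]))  # clamp to image height
        crop = image[top:bottom, left:right]
        return crop, joints - np.array([left, top])    # shift joints into crop coords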


kazunaritakeichi commented on July 30, 2024

@lunzueta
The σ parameter is set to 1.0 for FLIC and 2.0 for LSP in the paper, and the lr parameter is set to 0.0005 for both datasets, isn't it?
I don't yet know whether lcn should be 0 or 1.

I'm testing with the following parameter:
cropping=0 -> "For LSP we use the full image as initial bounding box since the humans are relatively tightly cropped by design."


lunzueta commented on July 30, 2024

Hi @ktak199. Yes, you are right about σ; I said it wrong. I've continued doing more tests changing the parameters (crop vs. no-crop, local contrast vs. no local contrast, etc.), but I'm not getting, let's say, "normal" results with LSP. The result I normally get in the tests is a very small predicted pose (compared to the actual body size) in the middle of the image. I'm a bit stuck with this too. I'm now trying the same training with the caffe branch instead of the master branch, to see whether the problem is related to the deep learning framework. I'll let you know. Good luck with your tests too; I hope we can get something closer to the expected results.


yutuofish2 commented on July 30, 2024

Hi @lunzueta
I am running on MPII with the dropout ratio set to 0.9; the other parameters are left at their defaults. The test loss has now started to converge, but it is still quite high.

[plots: training and test loss curves]


kazunaritakeichi commented on July 30, 2024

@lunzueta
This is log.png (with cropping=0).
The test loss is increasing...
[plot: log.png]


lunzueta commented on July 30, 2024

Thanks for sharing this @yutuofish2. I see you are training for more than 600 epochs. I wonder what a good number would be, but your training looks much better than what I was getting.


yutuofish2 commented on July 30, 2024

@ktak199
You would need to modify the fliplr() function in transform.py. The authors fixed this problem about 10 hours ago, but it seems there are still some bugs ...
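For context, a horizontal flip of pose data has two steps that are easy to get wrong, which is exactly where bugs like this tend to hide. A minimal sketch (the symmetric pair indices are hypothetical; the repo's transform.py defines the real ordering):

    import numpy as np

    # Hypothetical left/right joint pairs; the dataset's real ordering differs.
    SYMMETRIC_PAIRS = [(0, 5), (1, 4), (2, 3)]

    def fliplr(image, joints):
        """Flip an (H, W, C) image and its (N, 2) array of (x, y) joints."""
        flipped = image[:, ::-1, :].copy()                 # mirror along the width axis
        joints = joints.copy()
        joints[:, 0] = image.shape[1] - 1 - joints[:, 0]   # mirror the x coordinates
        for left, right in SYMMETRIC_PAIRS:                # swap left/right joint labels
            joints[[left, right]] = joints[[right, left]]
        return flipped, joints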


lunzueta commented on July 30, 2024

This time I trained a model on LSP, changing only the optimizer to 'MomentumSGD' and keeping the rest of the parameters the same. I got the following results, which still aren't good enough:
[plot: log.png]
Good to know that there have been some new fixes in the code; I'll try them next. Thanks for that @mitmul!
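For anyone reproducing this, the optimizer switch itself is small in (old-API) Chainer. A sketch; lr=0.0005 is the paper value quoted earlier in this thread, while momentum=0.9 is a common default and my assumption:

    from chainer import optimizers

    optimizer = optimizers.MomentumSGD(lr=0.0005, momentum=0.9)
    optimizer.setup(model)  # `model` is the AlexNet-style Chainer model from this repo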


kazunaritakeichi commented on July 30, 2024

I tried the newer version (shell/train_lsp.sh).
Below is the result:
[plot: log.png]


lunzueta commented on July 30, 2024

@ktak199 I was doing the same thing, but I was still at epoch 200, and I'm getting a similar plot:
[plot: log.png]
So, what do you think might be happening? Maybe it's too early and we should wait until epoch 1000? Just in case, I'm meanwhile training with FLIC again on another PC to see whether it still trains as before.


mitmul commented on July 30, 2024

Sorry for the inconvenience; there seem to be some fatal bugs, probably in the data processing part. I'm trying to find them now and will post an update once I've fixed them and confirmed that training works correctly. In the meantime, please wait, or try to find the bugs and send PRs. Thanks.


lunzueta commented on July 30, 2024

Thank you very much for taking care of this issue @mitmul. I'm learning a lot from all this :-)


kazunaritakeichi commented on July 30, 2024

Thank you so much @mitmul!
I'll study the paper and the code so that I can contribute.


lunzueta commented on July 30, 2024

Could the problem, in the case of LSP, be that some joint positions have negative values (meaning they are occluded) and these throw the training off? I ask because I've retrained with FLIC for a few epochs and it seemed to converge normally. The only difference I see is those negative values.
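If the negative coordinates are the culprit, one common fix is to mask occluded joints out of the loss rather than regressing to their sentinel values. A minimal numpy sketch of the idea, assuming an (N, 2) array of (x, y) targets where negative values mark occlusion:

    import numpy as np

    def masked_mse(pred, target):
        """MSE over visible joints only; occluded joints (negative coords) are skipped."""
        visible = (target >= 0).all(axis=1)   # both coordinates must be non-negative
        if not visible.any():
            return 0.0
        diff = pred[visible] - target[visible]
        return float((diff ** 2).mean())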


lunzueta commented on July 30, 2024

Well... I started a new training run on MPII, which has all the body joint positions annotated in the image, and after about 130 epochs I got this plot, which has a strange outlier and doesn't seem to converge:
[plot: log.png]
And results like these, which are always the same pose:
[image: test_130_tiled_pred — tiled pose predictions at epoch 130]
So, certainly, I guess we should review in detail how the data is processed.
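One cheap way to review the data processing is to draw the ground-truth joints back onto the exact crops the network sees; if the dots don't land on the limbs, the bug is in the transforms, not the model. A minimal matplotlib sketch, assuming (N, 2) arrays of (x, y) joints:

    import matplotlib.pyplot as plt

    def show_sample(image, joints, out_path='sample_check.png'):
        """Overlay ground-truth joints on a (possibly transformed) training crop."""
        plt.imshow(image)
        plt.scatter(joints[:, 0], joints[:, 1], c='r', s=20)
        for i, (x, y) in enumerate(joints):
            plt.annotate(str(i), (x, y), color='yellow', fontsize=8)  # joint indices
        plt.axis('off')
        plt.savefig(out_path, bbox_inches='tight')
        plt.close()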


kazunaritakeichi commented on July 30, 2024

I tried with the FLIC dataset.
I got a result similar to your MPII one, @lunzueta.


lunzueta commented on July 30, 2024

Hi guys. Based on the code in the caffe branch, I've done some tests with MPII (I attach the Caffe net and solver files I used), and after training for a few hundred epochs it seems to give responses that make more sense (not always the same mean pose shown above). To generate the LMDB data I used the same functions provided in this code (cropping, etc.), but without applying the local contrast normalization (which I couldn't reproduce in Caffe), so I don't think the failure is there. The AlexNet architecture defined in Chainer format also seems to be correct. So, taking this into account, where could the failure be? (I still couldn't find it.)

[attachment: deeppose.zip]
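For reference, this is roughly the LMDB-writing pattern I would expect on the Caffe side. A sketch only: the database names, key format, and the use of float_data for the joint targets are my assumptions, not necessarily what the attached files do:

    import lmdb
    import numpy as np
    from caffe.proto import caffe_pb2

    # Caffe's Data layer reads one LMDB per blob, so images and joint targets
    # go into two databases that share the same keys (an assumption about the setup).
    img_env = lmdb.open('mpii_train_images_lmdb', map_size=1 << 33)
    lbl_env = lmdb.open('mpii_train_joints_lmdb', map_size=1 << 30)

    def write_sample(index, image, joints):
        key = b'%08d' % index

        img_datum = caffe_pb2.Datum()
        img_datum.channels, img_datum.height, img_datum.width = 3, image.shape[0], image.shape[1]
        img_datum.data = image.transpose(2, 0, 1).tobytes()  # HWC uint8 -> CHW bytes
        with img_env.begin(write=True) as txn:
            txn.put(key, img_datum.SerializeToString())

        lbl_datum = caffe_pb2.Datum()
        lbl_datum.channels, lbl_datum.height, lbl_datum.width = int(joints.size), 1, 1
        lbl_datum.float_data.extend(joints.astype(np.float32).ravel())  # regression target
        with lbl_env.begin(write=True) as txn:
            txn.put(key, lbl_datum.SerializeToString())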


aspenlin commented on July 30, 2024

Hi @lunzueta @yutuofish2, may I ask which Python script you use to plot the images with the joint positions drawn on them? The only one I can find is evaluate_flic.py, but it still doesn't seem right.

