Comments (11)
I checked the result and there is no problem on my end. Can you tell me what you changed before running my code?
from tf-simplehumanpose.
I have made no changes to the repository. I tried it on two different computers after cloning the GitHub repo and setting it up with your models, and I am getting similar results on both.
Moreover, when I compare my results with the results you have uploaded, they are very different.
Are you evaluating with the model you are offering for download?
Thanks.
from tf-simplehumanpose.
I am using 'snapshot_140.ckpt' downloaded from https://cv.snu.ac.kr/research/TF-SimpleHumanPose/COCO/model/256x192_resnet50.zip
I am using the COCO val2017 dataset for validation. It contains 5000 images.
I am using human_detection.json downloaded from https://cv.snu.ac.kr/research/TF-SimpleHumanPose/COCO/det_result/human_detection_val2017.json
No flags have been changed; I am running the unmodified code from the GitHub repository.
Furthermore, the PoseTrack and COCO models downloaded from your web page appear to be identical.
https://cv.snu.ac.kr/research/TF-SimpleHumanPose/PoseTrack/model/256x192_resnet50.zip
https://cv.snu.ac.kr/research/TF-SimpleHumanPose/COCO/model/256x192_resnet50.zip
Both of these files unzip to 'snapshot_140.ckpt' and give identical results.
I wonder if you have somehow uploaded the wrong models to your web site?
The pixel coordinates produced by these models are definitely different from your results.json file downloaded from: https://cv.snu.ac.kr/research/TF-SimpleHumanPose/COCO/pose_result/person_keypoints_256x192_resnet50_val2017_results.json
from tf-simplehumanpose.
I double-checked, and the provided model and pose result give the following:
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.703
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.886
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.778
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.670
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.769
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.762
Average Recall (AR) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.930
Average Recall (AR) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.830
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.719
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.824
Did you place the model in the correct location? It should be at $POSE_ROOT/output/model_dump/COCO/snapshot_140.
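As a quick sanity check on placement, something like the sketch below can confirm the checkpoint files are where the loader expects them. This is a hypothetical helper, not part of the repo; it assumes the usual TensorFlow 1.x checkpoint naming (several files sharing the `snapshot_140.ckpt` prefix) and reads `POSE_ROOT` from the environment:

```python
import glob
import os

# Sketch only: confirm the checkpoint sits where the repo's loader expects it.
# POSE_ROOT defaults to the current directory here; set it to your real root.
pose_root = os.environ.get('POSE_ROOT', '.')
ckpt_prefix = os.path.join(pose_root, 'output', 'model_dump', 'COCO',
                           'snapshot_140.ckpt')

# A TF1 checkpoint is several files sharing one prefix (.index, .meta,
# .data-*), so glob on the prefix rather than testing a single path.
found = glob.glob(ckpt_prefix + '*')
if found:
    print('checkpoint found:', sorted(os.path.basename(f) for f in found))
else:
    print('no checkpoint under', ckpt_prefix)
```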
from tf-simplehumanpose.
I am having the same problem.
I know I am using the correct validation dataset because I get mAP 0.78 with the tf-CPN code, which is quite similar to yours.
I am fairly sure that your code is correct, but I suspect there is a problem with the 'snapshot_140.ckpt' file (created 7th Jan).
I have compared your result.json file with mine, and the model is definitely finding the joints in different places for the same detection boxes on the same image.
Also, the two models on your site (PoseTrack/model/256x192_resnet50.zip and COCO/model/256x192_resnet50.zip) are identical, so at least one of these files must contain the wrong weights.
from tf-simplehumanpose.
@jinkos Hi,
I checked again: the result of the provided model and the provided pose result are slightly different. Below is the provided pose result.
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.704
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.886
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.778
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.670
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.769
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.762
Average Recall (AR) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.930
Average Recall (AR) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.830
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.719
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.824
I have updated the uploaded pose result to the one above.
Also, why do you think those two files are identical? They have different last-modified times, and
diff -q file1 file2 > /dev/null && echo 'equal' || echo 'different'
prints 'different'.
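Byte-level `diff` aside, hashing the two checkpoints makes the comparison unambiguous and easy to report. A self-contained sketch (the demo writes two small scratch files; the real checkpoint paths are placeholders you would substitute):

```python
import hashlib
import os
import tempfile

def sha256_of(path, chunk=1 << 20):
    """Stream a file through SHA-256 so a large checkpoint need not fit in RAM."""
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for block in iter(lambda: f.read(chunk), b''):
            h.update(block)
    return h.hexdigest()

# Demo on two small scratch files; in practice point these paths at the two
# snapshot_140.ckpt data files unzipped from the COCO and PoseTrack archives.
d = tempfile.mkdtemp()
for name, payload in (('a.ckpt', b'weights-A'), ('b.ckpt', b'weights-B')):
    with open(os.path.join(d, name), 'wb') as f:
        f.write(payload)

same = sha256_of(os.path.join(d, 'a.ckpt')) == sha256_of(os.path.join(d, 'b.ckpt'))
print('equal' if same else 'different')   # -> different
```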
Also, the pose result of the provided PoseTrack model on the COCO dataset is as follows.
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.277
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.666
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.170
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.290
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.285
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.341
Average Recall (AR) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.726
Average Recall (AR) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.271
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.324
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.365
As you can see, they are not the same file.
from tf-simplehumanpose.
I checked the provided model for the COCO dataset again, and it gives
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.704
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.886
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.778
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.670
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.769
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.762
Average Recall (AR) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.930
Average Recall (AR) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.830
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.719
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.824
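For context on why these AP/AR numbers are so sensitive: COCO's keypoint evaluation matches detections to ground truth by Object Keypoint Similarity (OKS), so even small per-joint coordinate shifts lower OKS and can move AP in the third decimal, consistent with the 0.703 vs 0.704 difference discussed above. A minimal OKS computation as a sketch; the per-keypoint sigmas are the standard COCO constants reproduced from memory, and the sample coordinates are made up:

```python
import math

# Standard COCO per-keypoint sigmas (17 keypoints); values reproduced from
# the pycocotools source from memory, so treat them as assumed.
SIGMAS = [.026, .025, .025, .035, .035, .079, .079,
          .072, .072, .062, .062, .107, .107,
          .087, .087, .089, .089]

def oks(gt_xy, dt_xy, visible, area):
    """Object Keypoint Similarity between one ground truth and one detection.

    gt_xy, dt_xy: lists of 17 (x, y) keypoint coordinates
    visible:      list of 17 bools marking labeled GT keypoints
    area:         GT object area (the scale term in the OKS formula)
    """
    sims = []
    for (gx, gy), (dx, dy), s, v in zip(gt_xy, dt_xy, SIGMAS, visible):
        if not v:
            continue
        d2 = (gx - dx) ** 2 + (gy - dy) ** 2     # squared pixel distance
        k2 = (2.0 * s) ** 2                      # per-joint falloff constant
        sims.append(math.exp(-d2 / (2.0 * area * k2)))
    return sum(sims) / len(sims)                 # average over labeled joints

# A prediction identical to the ground truth scores OKS = 1.0
gt = [(10.0 * i, 5.0 * i) for i in range(17)]
print(oks(gt, gt, [True] * 17, area=4000.0))   # -> 1.0
```

Averaging AP over OKS thresholds 0.50:0.95 (as in the tables above) is what makes the headline number react to millimetre-scale joint displacement.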
from tf-simplehumanpose.
@mks0601,
I am definitely getting the exact same results as your PoseTrack model. This sounds like it might be a mirroring problem. I'll re-download everything.
Thank you for your patience.
from tf-simplehumanpose.
Hope you solve this issue :)
from tf-simplehumanpose.
Bingo - I am getting all the right results.
Can I respectfully request that you give your different models different names? If this was my mistake, then I apologise, but it almost certainly would not have happened if the models were named differently.
Yesterday, I very deliberately downloaded the different weights multiple times.
But today it works and I am happy - you have written a great implementation - thank you.
from tf-simplehumanpose.
Good to hear. OK, I'll change the name of the zip file for each dataset. Closing this issue.
from tf-simplehumanpose.