litepose's Issues

Error on unpickling in the eval script

Hello, I am trying to run evaluation on the COCO-L model from your released results. I use the command:
`python valid.py --cfg experiments/coco/mobilenet/mobile.yaml --superconfig mobile_configs/search-L.json TEST.MODEL_FILE /workplace/efsdrive/users/sgattupa/pose/litepose/models/pose/result_coco_L/data.pkl`

I have installed all the requirements, but I get the error below:
File "/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/serialization.py", line 920, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: A load persistent id instruction was encountered,
but no persistent_load function was specified.

Let me know if you have any suggestions. Thanks! I would love to try these models.
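A note for readers hitting the same unpickling error: data.pkl looks like a file extracted from inside a PyTorch zip-format checkpoint, and it cannot be loaded on its own. Pointing TEST.MODEL_FILE (or torch.load) at the original, unextracted checkpoint usually resolves it. A minimal sketch, assuming the released file is an ordinary torch checkpoint (the path below is hypothetical):

import torch

# Load the original checkpoint file, not a data.pkl extracted from inside it;
# map_location='cpu' avoids requiring a GPU just to inspect it.
state = torch.load('models/pose/LitePose-Auto-L.pth.tar', map_location='cpu')
print(type(state))  # typically a state_dict (OrderedDict) or a wrapper dict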

Training on a custom dataset

I am trying to train on my own dataset in COCO format, but I am not able to train successfully.
If you have any procedures for training on a custom dataset, I would appreciate it if you could share them.

Also, I would like to use the publicly released models as pretrained weights and retrain on a custom dataset.
Is it possible to retrain with a custom dataset?

Thank you,
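In case a concrete starting point helps other readers: for a dataset already in COCO format, one plausible approach (an assumption based on similar HigherHRNet-style configs, not documented behavior of this repo; check experiments/coco/mobilenet/mobile.yaml for the exact key names) is to override the dataset paths on the command line:

python dist_train.py --cfg experiments/coco/mobilenet/mobile.yaml \
    --superconfig mobile_configs/search-L.json \
    DATASET.ROOT data/my_dataset \
    DATASET.TRAIN my_train DATASET.TEST my_val

Setting PRETRAINED in the yaml to one of the released models would then cover the retraining part of the question.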

How to fine-tune the model on my custom dataset with the same keypoints

I would like to use the pretrained weights to initialize the model, then fine-tune on my custom dataset with the same keypoints.
I assume I would need to do the following:

  1. Set the configuration: PRETRAINED: 'premodels/LitePose-Auto-L.pth.tar' in mobile.yaml.
  2. Set the LR: 0.0004 in mobile.yaml.
  3. Use the following script:
    python dist_train.py --cfg experiments/coco/mobilenet/mobile.yaml --superconfig mobile_configs/search-L.json DATASET.INPUT_SIZE 512 DATASET.OUTPUT_SIZE "[128, 256]"

If you have any procedures for fine-tuning on a custom dataset, I would appreciate it.
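For reference, a minimal sketch of step 1, assuming the released checkpoint is a plain state_dict (or a dict wrapping one) and that the model has already been built the way valid.py builds it; the names below are illustrative:

import torch

# model = ...  # build the LitePose model exactly as valid.py does
checkpoint = torch.load('premodels/LitePose-Auto-L.pth.tar', map_location='cpu')
state_dict = checkpoint.get('state_dict', checkpoint)  # unwrap if wrapped
# strict=False tolerates head-shape mismatches if the keypoint count differs
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print('missing:', missing, 'unexpected:', unexpected)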

Test inference on video

Hey, thank you for sharing your work; it looks really promising.
I want to test the model with a webcam and custom videos. Which file should I use for inference?

Thank you.
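Not an official answer, but the repo only ships valid.py for dataset evaluation, so a webcam or video test needs a small driver script. A rough sketch with OpenCV, assuming the model is already built and its weights loaded; the preprocessing here is a generic guess, not the repo's exact pipeline:

import cv2
import torch

cap = cv2.VideoCapture(0)  # 0 = webcam; pass a file path for a custom video
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Generic preprocessing guess: resize to the training input size,
    # BGR -> RGB, scale to [0, 1], NCHW layout; the real pipeline may normalize.
    img = cv2.cvtColor(cv2.resize(frame, (512, 512)), cv2.COLOR_BGR2RGB)
    inp = torch.from_numpy(img).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        outputs = model(inp)  # decode and group as valid.py does to get keypoints
    cv2.imshow('litepose', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()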

The app does not work

  1. The app crashes as soon as it launches.
  2. I notice that the output dimension of the tflite model in the app is [1, 2]. But shouldn't the output be a heatmap?
    Thanks for your great work. ^^
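On point 2, one way to check what the bundled .tflite actually returns is to list its outputs with the TFLite interpreter; a [1, 2] tensor may be only one of several output heads. A quick sketch (the model path is hypothetical):

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='litepose.tflite')
interpreter.allocate_tensors()
# A heatmap output would normally be 4-D, e.g. [1, H, W, num_joints];
# print every output in case [1, 2] is just one head of the model.
for detail in interpreter.get_output_details():
    print(detail['name'], detail['shape'])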

No module named 'scheduler'

In dist_train.py, line 46:
from scheduler import WarmupMultiStepLR

python3
Python 3.8.10 (default, Jun 22 2022, 20:18:18)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.

>>> from scheduler import WarmupMultiStepLR
Traceback (most recent call last):
File "", line 1, in
ModuleNotFoundError: No module named 'scheduler'

Where does this scheduler package come from?
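The import suggests a local scheduler.py that may be missing from the checkout or simply not on sys.path (running dist_train.py from the repo root usually matters). If the file is genuinely absent, WarmupMultiStepLR is a well-known scheduler from the maskrcnn-benchmark lineage; a minimal stand-in sketch, not this repo's actual code:

from bisect import bisect_right
from torch.optim.lr_scheduler import _LRScheduler

class WarmupMultiStepLR(_LRScheduler):
    """MultiStepLR with a linear warmup phase; step() is called per iteration."""
    def __init__(self, optimizer, milestones, gamma=0.1,
                 warmup_factor=1.0 / 3, warmup_iters=500, last_epoch=-1):
        self.milestones = sorted(milestones)
        self.gamma = gamma
        self.warmup_factor = warmup_factor
        self.warmup_iters = warmup_iters
        super().__init__(optimizer, last_epoch)

    def get_lr(self):
        warmup = 1.0
        if self.last_epoch < self.warmup_iters:
            alpha = self.last_epoch / self.warmup_iters
            warmup = self.warmup_factor * (1 - alpha) + alpha  # linear ramp to 1
        return [base_lr * warmup *
                self.gamma ** bisect_right(self.milestones, self.last_epoch)
                for base_lr in self.base_lrs]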

Jetson Nano inference is slow

Hello, I have recently been using LitePose with COCO and have run into a few issues, so please help me when you have time.
I see that your reported LitePose-Auto-M result is 144 ms latency,
but with LitePose-Auto-S I measured 300 ms latency, which is far too slow. Do you know why it is this slow?
I think it might be faster if I apply a TVM conversion.
Please help me figure out how to solve this problem.
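Not from the authors, but since a TVM conversion was mentioned: a rough sketch of compiling a traced PyTorch model with TVM Relay. The shapes and target are illustrative; on a Jetson Nano you would build for its CUDA/ARM target rather than plain llvm:

import torch
import tvm
from tvm import relay

# model = ...  # the LitePose model in eval mode
example = torch.randn(1, 3, 512, 512)
scripted = torch.jit.trace(model, example)

mod, params = relay.frontend.from_pytorch(scripted, [('input', (1, 3, 512, 512))])
target = 'llvm'  # e.g. 'cuda' or an aarch64 llvm triple when building for the Nano
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)
lib.export_library('litepose_tvm.so')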

Questions about inference time

Does your reported inference time measure only the model's feedforward pass without the flip test, excluding data loading time (reading and transforming images)?
Or does it include the time for image loading and conversion?
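I cannot answer for the authors, but when comparing against reported numbers it helps to time the pure forward pass the same way. A common pattern (a sketch; assumes a GPU and an input tensor already prepared):

import time
import torch

# model = ...; inp = preprocessed input tensor already on the GPU
with torch.no_grad():
    for _ in range(10):           # warmup iterations
        model(inp)
    torch.cuda.synchronize()      # flush queued async GPU work before timing
    start = time.perf_counter()
    for _ in range(100):
        model(inp)
    torch.cuda.synchronize()
print('forward only: %.1f ms' % ((time.perf_counter() - start) / 100 * 1000))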

missing 1 required argument "num_spatial_dims"

Hi, when I run the first command to train the supermobile model, there is an error as shown in the screenshot below. Maybe it is caused by a wrong torch version; mine is 1.12. Could you please have a check?
[screenshot of the error]
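Not verified against this repo, but the error matches a known PyTorch change: in torch 1.12, _ConvTransposeNd._output_padding gained a required num_spatial_dims argument. Downgrading torch (e.g. to 1.7-1.10) should work; alternatively, if the repo's deconv layer calls _output_padding directly, patching the call site along these lines may help (a sketch; the exact call site here is an assumption):

# Old-style call that breaks on torch >= 1.12:
# output_padding = self._output_padding(
#     input, output_size, self.stride, self.padding, self.kernel_size)
# Patched call for torch >= 1.12 (2 spatial dims for a 2-D deconv):
output_padding = self._output_padding(
    input, output_size, self.stride, self.padding, self.kernel_size,
    num_spatial_dims=2)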

nano_demo inference gets an error: ../nano_demo/checkpoints/lite_pose_nano/lib0.o: error adding symbols: File in wrong format

Hi.
I tried to run nano_demo/start.py following the steps at this link (https://github.com/mit-han-lab/litepose/tree/main/nano_demo).

But, I got this error:

Traceback (most recent call last):
File "start.py", line 76, in
executor, gmod, device = get_model_executor()
File "/workspace/litepose/nano_demo/core/init.py", line 94, in get_model_executor
lib = tvm.runtime.load_module('/workspace/litepose/nano_demo/checkpoints/lite_pose_nano.tar')
File "/anaconda3/envs/litepose/lib/python3.8/site-packages/tvm/runtime/module.py", line 610, in load_module
_cc.create_shared(path + ".so", files)
File "/anaconda3/envs/litepose/lib/python3.8/site-packages/tvm/contrib/cc.py", line 79, in create_shared
_linux_compile(output, objects, options, cc, compile_shared=True)
File "/anaconda3/envs/litepose/lib/python3.8/site-packages/tvm/contrib/cc.py", line 247, in _linux_compile
raise RuntimeError(msg)
RuntimeError: Compilation error:
/usr/bin/ld: /workspace/litepose/nano_demo/checkpoints/lite_pose_nano/lib0.o: Relocations in generic ELF (EM: 183)
(the line above is repeated five times in the log)
/workspace/litepose/nano_demo/checkpoints/lite_pose_nano/lib0.o: error adding symbols: File in wrong format
collect2: error: ld returned 1 exit status

Command line: /usr/bin/g++ -shared -fPIC -o /workspace/litepose/nano_demo/checkpoints/lite_pose_nano.tar.so /workspace/litepose/nano_demo/checkpoints/lite_pose_nano/lib0.o /workspace/litepose/nano_demo/checkpoints/lite_pose_nano/devc.o

When I run nano_demo/start.py, a folder (nano_demo/checkpoints/lite_pose_nano/) is generated.
However, the files in that folder (devc.o, lib0.o) seem to be causing the errors.
What can I do?

(P.S. I'm using a conda virtual environment. Does a C++ toolchain have to be installed inside it?)

It may be a basic question, but I need help.

Help me...

Thank you!
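For anyone hitting this: EM 183 is the ELF machine type for AArch64, so lib0.o was compiled for a Jetson-class ARM CPU, and the x86 g++ on the host cannot link it; that is the "File in wrong format" message. Running start.py on the Jetson Nano itself should avoid the problem. Alternatively, when exporting the module on an x86 host, a cross linker can be supplied (a sketch, assuming aarch64-linux-gnu-g++ is installed and lib came from relay.build):

from tvm.contrib import cc

# Link the compiled module with an AArch64 cross compiler instead of the
# host /usr/bin/g++, then copy the .so to the Jetson and load it there.
lib.export_library('lite_pose_nano.so',
                   cc.cross_compiler('aarch64-linux-gnu-g++'))
# On the Jetson: gmod = tvm.runtime.load_module('lite_pose_nano.so')

Linking does need a C++ toolchain visible from the environment, conda or otherwise.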

Code

When will the source code be released?

mismatch between different persons

Hi author,

Thanks for sharing your great work. I found an issue during inference on some videos: keypoints are sometimes mismatched between different persons. For instance, a keypoint of one person is connected to a keypoint of a different person. This issue occurs with the XS/S/M pretrained models. Any comments or suggestions? I appreciate your feedback.

Comparison with Lite-HRNet?

Does this paper compare with "Lite-HRNet: A Lightweight High-Resolution Network" (CVPR 2021) in performance and speed?

Issue with unpacking pretrained models

I tried to unpack one of the pretrained models that you provide:

tar -xf LitePose-Auto-XS.pth.tar

However, it gives me an error:

tar: This does not look like a tar archive
tar: Skipping to next header
tar: Exiting with failure status due to previous errors

What is the correct way to unpack the above-mentioned file? (I'm using Ubuntu.)
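For what it's worth, the .pth.tar suffix on these files appears to be just a naming convention: the file is a PyTorch checkpoint, not a tar archive, which is why tar rejects it. It should be loaded with torch instead (a minimal sketch):

import torch

# .pth.tar is a PyTorch checkpoint despite the extension; don't untar it.
state = torch.load('LitePose-Auto-XS.pth.tar', map_location='cpu')
print(list(state.keys())[:5] if isinstance(state, dict) else type(state))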

Jetson Nano inference speed does not match

Hello,
I tested your COCO and CrowdPose pth.tar files using litepose/valid.py.

In my experiments, when using the COCO-trained LitePose-Auto-S, the inference speed was 2 FPS.

Are there ways to speed up inference on the Jetson Nano?

Or did I miss something (like converting the torch models to TVM)?

When I tested litepose/nano_demo/start.py using the weight file <lite_pose_nano.tar>, the FPS was almost 7.

Help: crowdposetools missing, and the once-for-all lib

Hi there,
I am trying to clone the GitHub repo and run inference using your model. It seems one module is missing, which causes the error 'No module named 'crowdposetools''. I found some links like 'HRNet/HigherHRNet-Human-Pose-Estimation#26' but could not solve the problem.
I run the model in Google Colab; can you guide me on how to solve this problem in that environment?
In addition, you use the once-for-all library; would you explain which functions use it?

Thank you for sharing this professional code, and for your guidance in advance.
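Not an authoritative answer, but crowdposetools is not part of this repo; it normally comes from building the upstream CrowdPose API, much like cocoapi. A sequence along these lines has worked for people in Colab (the repo layout below is from the upstream project and may change):

!git clone https://github.com/Jeff-sjtu/CrowdPose.git
%cd CrowdPose/crowdpose-api/PythonAPI
!make install   # builds and installs the crowdposetools C extension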

How to get the LitePose-Auto-L model on the COCO dataset?

Is the LitePose-Auto-L COCO model (mAP 62.5) obtained through normal training or super-net training?

To train the network with the search-L architecture on the COCO dataset, I use the following script:
python dist_train.py --cfg experiments/coco/mobilenet/mobile.yaml --superconfig mobile_configs/search-L.json

Comparing the provided LitePose-Auto-L COCO model (mAP 62.5) with my model after training, my model performs considerably worse.

Pretrained model cannot be downloaded

Thanks for your great work. I have tried to download the checkpoint of LitePose-XS trained on CrowdPose, but it fails; it seems something is wrong with the file on the cloud.

LitePose-Auto-XS COCO model

Hi, could you please provide the LitePose-Auto-XS model for the COCO dataset? The LitePose-Auto-XS CrowdPose model is much faster than the LitePose-Auto-S CrowdPose model. I wonder how the speed of a LitePose-Auto-XS COCO model would compare with the LitePose-Auto-S COCO model, and what its accuracy would be. Thank you.

Typo in `do_train`

Hi,
it's so late but congrats on great work!

In the below,

for idx in range(cfg.LOSS.NUM_STAGES):
    writer.add_scalar(
        'train_stage{}_heatmaps_loss'.format(i),
        heatmaps_loss_meter[idx].val,
        global_steps
    )
    writer.add_scalar(
        'train_stage{}_push_loss'.format(idx),
        push_loss_meter[idx].val,
        global_steps
    )
    writer.add_scalar(
        'train_stage{}_pull_loss'.format(idx),
        pull_loss_meter[idx].val,
        global_steps
    )

The i in 'train_stage{}_heatmaps_loss'.format(i) seems like it should be changed to idx.

Or, just in case, is there any specific reason for logging it this way?

Really appreciate any help you can provide :)

How to reproduce the XS results on COCO?

Hi~

I want to reproduce your XS results on COCO and then run some improvement experiments.
I tried the normal training described in the README, like this:

python dist_train.py --cfg experiments/coco/mobilenet/mobile.yaml --superconfig mobile_configs/search-XS.json

python valid.py --cfg experiments/coco/mobilenet/mobile.yaml --superconfig mobile_configs/search-XS.json TEST.MODEL_FILE output/coco_kpt/pose_mobilenet/mobile/model_best.pth.tar

I got only 0.332 mAP, whereas your paper reports 40.6.

Did I miss something?
Or should I try the super-net training and then do the weight transfer?

Thx.

Help: testing my own image

Hello, I have tested on the COCO test images, but what should I do if I want to test my own image and get its keypoints to see the result?

Group.py

Hello, I would like to ask: what is the function of group.py in the core folder? I haven't quite understood it. Could you give me some help, please?
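Not the authors, but for context: in bottom-up pose estimators of this family (associative embedding, as in HigherHRNet), group.py typically takes the per-joint heatmap peaks plus a tag embedding for each peak and groups joints into person instances by tag similarity. A toy sketch of the core idea, not the repo's actual code:

import numpy as np

def group_by_tags(peaks, tag_threshold=1.0):
    """peaks: iterable of (joint_id, x, y, score, tag) over all detections.
    Greedily assigns each joint to the person whose mean tag is closest."""
    persons = []  # each: {'tags': [...], 'joints': {joint_id: (x, y, score)}}
    for joint_id, x, y, score, tag in sorted(peaks, key=lambda p: -p[3]):
        best, best_dist = None, tag_threshold
        for person in persons:
            if joint_id in person['joints']:
                continue  # one instance of each joint type per person
            dist = abs(np.mean(person['tags']) - tag)
            if dist < best_dist:
                best, best_dist = person, dist
        if best is None:
            best = {'tags': [], 'joints': {}}
            persons.append(best)
        best['tags'].append(tag)
        best['joints'][joint_id] = (x, y, score)
    return persons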
