
epipolar-transformers's Introduction

Epipolar Transformers


GitHub: https://github.com/yihui-he/epipolar-transformers (CVPR 2020)

Epipolar Transformers

Yihui He, Rui Yan, Katerina Fragkiadaki, Shoou-I Yu (Carnegie Mellon University, Facebook Reality Labs)

CVPR 2020, CVPR workshop Best Paper Award

Oral presentation and human pose demo videos (playlist):

https://www.youtube.com/embed/nfb0kfVWjcs

https://www.youtube.com/embed/ig5c-qTaYkg

Models

| config | MPJPE (mm) | model & log |
| --- | --- | --- |
| configs/benchmark/keypoint_h36m.yaml | 45.3 | [outs.benchmark.keypoint_h36m_afterfix.zip](https://github.com/yihui-he/epipolar-transformers/releases/download/outputs/outs.benchmark.keypoint_h36m_afterfix.zip) |
| configs/epipolar/keypoint_h36m_zresidual_fixed.yaml | 33.1 | [outs.epipolar.keypoint_h36m_fixed.zip](https://github.com/yihui-he/epipolar-transformers/releases/download/outputs/outs.epipolar.keypoint_h36m_fixed.zip) |
| configs/epipolar/keypoint_h36m_zresidual_aug.yaml | 30.4 | [outs.epipolar.keypoint_h36m_fixed_aug.zip](https://github.com/yihui-he/epipolar-transformers/releases/download/outputs/outs.epipolar.keypoint_h36m_fixed_aug.zip) |
| configs/epipolar/keypoint_h36m_resnet152_384_pretrained_8gpu.yaml | 19 | |
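To try a released model, download the corresponding zip and extract it so its contents live under outs/. A minimal Python sketch (the URL is the 33.1 mm model from the table above; the archive layout is an assumption, adjust if the members are not already prefixed with outs/):

```python
import os
import urllib.request
import zipfile

# Download a released model+log archive (URL from the Models table) and
# extract it; the configs and visualization steps expect results under outs/.
url = ("https://github.com/yihui-he/epipolar-transformers/releases/"
       "download/outputs/outs.epipolar.keypoint_h36m_fixed.zip")
os.makedirs("outs", exist_ok=True)
zip_path = "outs/model.zip"
urllib.request.urlretrieve(url, zip_path)
with zipfile.ZipFile(zip_path) as zf:
    # Assumption: member paths start with outs/; otherwise extract into outs/.
    zf.extractall(".")
```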

We also provide 2D-to-3D lifting network implementations for two related papers (see the configs under configs/lifting/).

Setup

Requirements

Python 3, PyTorch >= 1.2 and < 1.4

pip install -r requirements.txt
conda install pytorch cudatoolkit=10.0 -c pytorch
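A quick way to check that the installed version satisfies the constraint (plain PyTorch, nothing project-specific):

```python
import torch

# The repo targets PyTorch >= 1.2 and < 1.4; newer versions may not work.
major, minor = (int(v) for v in torch.__version__.split(".")[:2])
assert (major, minor) in ((1, 2), (1, 3)), f"unsupported torch {torch.__version__}"
print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
```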

Pretrained weights download

mkdir outs
cd datasets/
bash get_pretrained_models.sh

Please follow the instructions in datasets/README.md to prepare the dataset.

Training

python main.py --cfg path/to/config
tensorboard --logdir outs/

Testing

Testing with latest checkpoints

python main.py --cfg configs/xxx.yaml DOTRAIN False

Testing with weights

python main.py --cfg configs/xxx.yaml DOTRAIN False WEIGHTS xxx.pth
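The trailing KEY VALUE pairs (DOTRAIN False, WEIGHTS xxx.pth, and the VIS.* flags used below) are merged on top of the YAML config on the command line. This matches the yacs opts convention; a minimal sketch of that mechanism, for illustration only (not the repo's actual code):

```python
from yacs.config import CfgNode as CN

# Build a tiny config and override it from a flat KEY VALUE list, the same
# shape as the extra arguments passed to main.py above.
cfg = CN()
cfg.DOTRAIN = True
cfg.WEIGHTS = ""
cfg.merge_from_list(["DOTRAIN", "False", "WEIGHTS", "xxx.pth"])
print(cfg.DOTRAIN, cfg.WEIGHTS)  # -> False xxx.pth
```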

Visualization

Epipolar Transformers Visualization

https://raw.githubusercontent.com/yihui-he/epipolar-transformers/master/assets/et_vis.png

  • Download the output pkls for non-augmented models and extract under outs/
  • Make sure outs/epipolar/keypoint_h36m_fixed/visualizations/h36m/output_1.pkl exists.
  • Use [scripts/vis_hm36_score.ipynb](https://github.com/yihui-he/epipolar-transformers/blob/master/scripts/vis_hm36_score.ipynb)
    • To select a point, click on the reference view (upper left); the source view is then shown with the corresponding epipolar line, and the peaks of the different feature matchings are shown at the bottom left.
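For reference, the epipolar line drawn in the source view is the image of the ray through the clicked reference-view pixel. A minimal NumPy sketch with hypothetical camera matrices (an illustration of the standard geometry, not the repo's code):

```python
import numpy as np

def fundamental_from_projections(P_ref, P_src):
    """F = [e']_x P_src P_ref^+, with e' the epipole (the reference camera
    center projected into the source view)."""
    _, _, vt = np.linalg.svd(P_ref)
    C = vt[-1]                      # camera center: null vector of P_ref
    e = P_src @ C                   # epipole in the source view
    e_x = np.array([[0, -e[2], e[1]],
                    [e[2], 0, -e[0]],
                    [-e[1], e[0], 0]])
    return e_x @ P_src @ np.linalg.pinv(P_ref)

# Hypothetical cameras for illustration only.
P_ref = np.hstack([np.eye(3), np.zeros((3, 1))])
P_src = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
F = fundamental_from_projections(P_ref, P_src)
x_ref = np.array([100.0, 120.0, 1.0])   # clicked pixel, homogeneous
line = F @ x_ref                         # a*x + b*y + c = 0 in the source view
print(line)
```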

Human 3.6M input visualization

https://raw.githubusercontent.com/yihui-he/epipolar-transformers/master/assets/h36m_vis.png

python main.py --cfg configs/epipolar/keypoint_h36m.yaml DOTRAIN False DOTEST False EPIPOLAR.VIS True  VIS.H36M True SOLVER.IMS_PER_BATCH 1
python main.py --cfg configs/epipolar/keypoint_h36m.yaml DOTRAIN False DOTEST False VIS.MULTIVIEWH36M True EPIPOLAR.VIS True SOLVER.IMS_PER_BATCH 1

Human 3.6M prediction visualization

https://www.youtube.com/embed/ig5c-qTaYkg

# generate images
python main.py --cfg configs/epipolar/keypoint_h36m_zresidual_fixed.yaml DOTRAIN False DOTEST True VIS.VIDEO True DATASETS.H36M.TEST_SAMPLE 2
# generate images
python main.py --cfg configs/benchmark/keypoint_h36m.yaml DOTRAIN False DOTEST True VIS.VIDEO True DATASETS.H36M.TEST_SAMPLE 2
# use https://github.com/yihui-he/multiview-human-pose-estimation-pytorch to generate images for ICCV 19
python run/pose2d/valid.py --cfg experiments-local/mixed/resnet50/256_fusion.yaml 
# set test batch size to 1 and PRINT_FREQ to 2
# generate video
python scripts/video.py --src outs/epipolar/keypoint_h36m_fixed/video/multiview_h36m_val/
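If scripts/video.py does not fit your setup, stitching the generated frames into a video is straightforward. A minimal OpenCV sketch (the frame glob pattern and fps are assumptions; adjust to the actual file names):

```python
import glob
import cv2

# Stitch generated frame images into an mp4.
frames = sorted(glob.glob(
    "outs/epipolar/keypoint_h36m_fixed/video/multiview_h36m_val/*.png"))
h, w = cv2.imread(frames[0]).shape[:2]
writer = cv2.VideoWriter("pred.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 25, (w, h))
for f in frames:
    writer.write(cv2.imread(f))
writer.release()
```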

Citing Epipolar Transformers

If you find Epipolar Transformers helps your research, please cite the paper:

@inproceedings{epipolartransformers,
  title={Epipolar Transformers},
  author={He, Yihui and Yan, Rui and Fragkiadaki, Katerina and Yu, Shoou-I},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={7779--7788},
  year={2020}
}

FAQ

Please create a new issue:

Issues · yihui-he/epipolar-transformers: https://github.com/yihui-he/epipolar-transformers/issues

epipolar-transformers's People

Contributors

ethanhe42 · yre


epipolar-transformers's Issues

Training HyperParameters for Epipolar Transformer

Hi, thank you for sharing such an awesome project. I am a little confused about your training hyperparameters.
Based on your README, we should use keypoint_h36m_zresidual_fixed.yaml to train the epipolar transformer and reach 33.1 mm MPJPE. In that file, the max epoch is just 4. However, the paper says "The networks are trained for 20 epochs with batch size 16 and Adam optimizer [18]. Learning rate decays were set at 10 and 15 epochs." Which one is correct?

BTW, do I need to first train the simple baseline on H36M with:
python main.py --cfg configs/benchmark/keypoint_h36m.yaml
and then train the epipolar transformer for only 4 epochs with:
python main.py --cfg configs/epipolar/keypoint_h36m_zresidual_fixed.yaml
Or can I train it directly from scratch and reach your reported performance with:
python main.py --cfg configs/epipolar/keypoint_h36m_zresidual_fixed.yaml

Look forward to your reply, thanks!

Can you report the inference speed of the algorithm (in terms of fps)?

Thanks for the great work! In your results you provide the number of parameters and MACs; however, it is unclear how this translates into the inference speed (fps) of your algorithm. Can you report some numbers? Also, have you compared it with methods other than cross-view fusion?

H36M dataset permission

copied from email:

thanks for making your work available.

  1. I'd like to evaluate, but I do not have access to the H36M dataset (I've tried signing up but haven't been sent a link, since I do not have an academic background).
  2. Is it possible to test without it?

AttributeError: 'str' object has no attribute 'keys' during execution

Hello,
I really liked your work and would like to test the network!

Thank you for making it available.
I had a small problem which I think might be a bug: everything loads fine at the beginning, then it crashes saying that some string isn't a dictionary.
Here is an output from my terminal.

I have one more question: where are the test files stored? If I want to input my own picture, where do I put it, and where do I tell your algorithm the path to it?

Thank you very much in advance for your answers!

Best regards,

```
python main.py --cfg configs/epipolar/keypoint_h36m_resnet152_384_pretrained_8gpu.yaml DOTRAIN False WEIGHTS outs/pose_resnet_4.5_pixels_human36m.pth
2020-06-01 01:04:41,493 kp INFO: Namespace(cfg='configs/epipolar/keypoint_h36m_resnet152_384_pretrained_8gpu.yaml', opts=['DOTRAIN', 'False', 'WEIGHTS', 'outs/pose_resnet_4.5_pixels_human36m.pth'])
2020-06-01 01:04:41,493 kp INFO: Loaded configuration file configs/epipolar/keypoint_h36m_resnet152_384_pretrained_8gpu.yaml
2020-06-01 01:04:41,493 kp INFO: 
DATASETS:
    TRAIN: ('multiview_h36m_train',)
    #TRAIN: ('h36m_val',)
    TEST: ('multiview_h36m_val', )
    TASK: multiview_keypoint
    DATA_FORMAT: jpg
    IMAGE_SIZE: (384, 384)
    IMAGE_RESIZE: 1. #3.90625 #1000. / 256 
    PREDICT_RESIZE: 1. 
    H36M:
        TRAIN_SAMPLE: 0
        MAPPING: False
DATALOADER:
    NUM_WORKERS: 20
BACKBONE:
    ENABLED: True
    BODY: epipolarposeR-152
    PRETRAINED: True
    PRETRAINED_WEIGHTS: datasets/pose_resnet_4.5_pixels_human36m.pth
    DOWNSAMPLE: 4
    SYNC_BN: True
SOLVER:
    OPTIMIZER: adam
    BASE_LR: 0.001
    STEPS: (2, 3)
    MAX_EPOCHS: 4
    IMS_PER_BATCH: 32
    CHECKPOINT_PERIOD: 1
EPIPOLAR:
    TOPK: 1
    MERGE: late
    SHARE_WEIGHTS: True 
    ATTENTION: avg
    PARAMETERIZED: ('z',)
    PRETRAINED: False
    ZRESIDUAL: True
    SAMPLESIZE: 64
    USE_CORRECT_NORMALIZE: True
TEST:
    IMS_PER_BATCH: 1
KEYPOINT:
    HEATMAP_SIZE: (96, 96)
    SIGMA: 12.
    NUM_PTS: 17
    TRIANGULATION: epipolar
    CONF_THRES: .85
    RANSAC_THRES: 35
    LOSS: joint
    LOSS_PER_JOINT: False
VIS:
    MULTIVIEW: True
OUTPUT_DIR: outs/epipolar/dgx/keypoint_h36m_resnet152_384_8gpu
EVAL_FREQ: 1

2020-06-01 01:04:41,493 kp INFO: Running with config:
BACKBONE:
  BN_MOMENTUM: 0.1
  BODY: epipolarposeR-152
  DOWNSAMPLE: 4
  ENABLED: True
  PRETRAINED: True
  PRETRAINED_WEIGHTS: datasets/pose_resnet_4.5_pixels_human36m.pth
  SYNC_BN: True
DATALOADER:
  BENCHMARK: False
  NUM_WORKERS: 20
  PIN_MEMORY: True
DATASETS:
  CAMERAS: ()
  COMPLETENESS: 1.0
  CROP_AFTER_RESIZE: False
  CROP_SIZE: (512, 320)
  DATA_FORMAT: jpg
  H36M:
    FILTER_DAMAGE: True
    MAPPING: False
    REAL3D: True
    TEST_SAMPLE: 64
    TRAIN_SAMPLE: 0
  IMAGE_RESIZE: 1.0
  IMAGE_SIZE: (384, 384)
  INCLUDE_GREY_IMGS: True
  PREDICT_RESIZE: 1.0
  ROT_FACTOR: 0
  SCALE_FACTOR: 0.0
  TASK: multiview_keypoint
  TEST: ('multiview_h36m_val',)
  TRAIN: ('multiview_h36m_train',)
  WRIST_COORD: False
DEVICE: cuda
DOTEST: True
DOTRAIN: False
EPIPOLAR:
  ATTENTION: avg
  BOTTLENECK: 1
  FIND_CORR: feature
  MERGE: late
  MULTITEST: False
  OTHER_GRAD: ('other1', 'other2')
  OTHER_ONLY: False
  PARAMETERIZED: ('z',)
  POOLING: False
  PRETRAINED: False
  PRIOR: False
  PRIORMUL: False
  REPROJECT_LOSS_WEIGHT: 0.0
  SAMPLESIZE: 64
  SHARE_WEIGHTS: True
  SIMILARITY: dot
  SIM_LOSS_WEIGHT: 0.0
  SOFTMAXBETA: True
  SOFTMAXSCALE: 0.125
  SOFTMAX_ENABLED: True
  TOPK: 1
  TOPK_RANGE: (1, 2)
  USE_CORRECT_NORMALIZE: True
  VIS: False
  WARPEDHEATMAP: False
  ZRESIDUAL: True
EVAL_FREQ: 1
FOLDER_NAME: outs/epipolar/dgx/keypoint_h36m_resnet152_384_8gpu/01-Jun-at-01-04-41
KEYPOINT:
  CONF_THRES: 0.85
  ENABLED: False
  HEATMAP_SIZE: (96, 96)
  LOSS: joint
  LOSS_PER_JOINT: False
  NFEATS: 256
  NUM_CAM: 0
  NUM_PTS: 17
  RANSAC_THRES: 35
  ROOTIDX: 0
  SIGMA: 12.0
  TRIANGULATION: epipolar
LIFTING:
  AVELOSS_KP: False
  CROP_SIZE: 256
  ENABLED: False
  FLIP_ON: False
  IMAGE_SIZE: 320
  MULTIVIEW_MEDIUM: True
  MULTIVIEW_UPPERBOUND: False
  VIEW_ON: False
LOG_FREQ: 100
OUTPUT_DIR: outs/epipolar/dgx/keypoint_h36m_resnet152_384_8gpu
PICT_STRUCT:
  DEBUG: False
  FIRST_NBINS: 16
  GRID_SIZE: 2000
  LIMB_LENGTH_TOLERANCE: 150
  PAIRWISE_FILE: datasets/h36m/pairwise.pkl
  RECUR_DEPTH: 10
  RECUR_NBINS: 2
  SHOW_CROPIMG: False
  SHOW_HEATIMG: False
  SHOW_ORIIMG: False
  TEST_PAIRWISE: False
SEED: 0
SOLVER:
  BASE_LR: 0.001
  BATCH_MUL: 1
  CHECKPOINT_PERIOD: 1
  FINETUNE: False
  FINETUNE_FREEZE: True
  GAMMA: 0.1
  IMS_PER_BATCH: 32
  MAX_EPOCHS: 4
  MOMENTUM: 0.9
  OPTIMIZER: adam
  SCHEDULER: multistep
  STEPS: (2, 3)
  WEIGHT_DECAY: 0.0
TENSORBOARD:
  COMMENT: 
  USE: True
TEST:
  EPEMEAN_MAX_DIST: 150
  IMS_PER_BATCH: 1
  MAX_TH: 20
  PCK: True
  RECOMPUTE_BN: False
  THRESHOLDS: (1, 2, 5, 10, 20, 30, 40, 50, 60, 80, 100)
  TRAIN_BN: False
VIS:
  AUC: False
  CURSOR: False
  DOVIS: True
  EPIPOLAR_LINE: False
  FLOPS: False
  H36M: False
  MULTIVIEW: True
  MULTIVIEWH36M: False
  POINTCLOUD: False
  SAVE_PRED: False
  SAVE_PRED_FREQ: 100
  SAVE_PRED_LIMIT: -1
  SAVE_PRED_NAME: predictions.pth
  VIDEO: False
  VIDEO_GT: False
WEIGHTS: outs/pose_resnet_4.5_pixels_human36m.pth
WEIGHTS_ALLOW_DIFF_PREFIX: False
WEIGHTS_LOAD_OPT: True
WEIGHTS_PREFIX: module.
WEIGHTS_PREFIX_REPLACE: 
2020-06-01 01:04:41,847 resnet INFO: => loading pretrained model from web
2020-06-01 01:04:41,847 resnet INFO: => init deconv weights from normal distribution
2020-06-01 01:04:41,848 resnet INFO: => init 0.weight as normal(0, 0.001)
2020-06-01 01:04:41,848 resnet INFO: => init 0.bias as 0
2020-06-01 01:04:41,894 resnet INFO: => init 1.weight as 1
2020-06-01 01:04:41,894 resnet INFO: => init 1.bias as 0
2020-06-01 01:04:41,894 resnet INFO: => init 3.weight as normal(0, 0.001)
2020-06-01 01:04:41,894 resnet INFO: => init 3.bias as 0
2020-06-01 01:04:41,900 resnet INFO: => init 4.weight as 1
2020-06-01 01:04:41,900 resnet INFO: => init 4.bias as 0
2020-06-01 01:04:41,900 resnet INFO: => init 6.weight as normal(0, 0.001)
2020-06-01 01:04:41,900 resnet INFO: => init 6.bias as 0
2020-06-01 01:04:41,906 resnet INFO: => init 7.weight as 1
2020-06-01 01:04:41,906 resnet INFO: => init 7.bias as 0
2020-06-01 01:04:41,906 resnet INFO: => init final conv weights from normal distribution
2020-06-01 01:04:41,906 resnet INFO: => init 8.weight as normal(0, 0.001)
2020-06-01 01:04:41,906 resnet INFO: => init 8.bias as 0
Traceback (most recent call last):
  File "main.py", line 75, in <module>
    main()
  File "main.py", line 68, in main
    test(cfg)
  File "/home/USER/Bureau/posEstimation/epipolar-transformers/engine/tester.py", line 31, in test
    model = Modelbuilder(cfg)
  File "/home/USER/Bureau/posEstimation/epipolar-transformers/modeling/model.py", line 34, in __init__
    self.reference = registry.BACKBONES[cfg.BACKBONE.BODY](cfg)
  File "/home/USER/Bureau/posEstimation/epipolar-transformers/modeling/backbones/resnet.py", line 515, in get_pose_net
    model.init_weights(cfg.BACKBONE.PRETRAINED_WEIGHTS)
  File "/home/USER/Bureau/posEstimation/epipolar-transformers/modeling/backbones/resnet.py", line 471, in init_weights
    load_state_dict(self, pretrained_state_dict, strict=False, ignored_layers=['final_layer.bias', 'final_layer.weight'], prefix=cfg.WEIGHTS_PREFIX, prefix_replace=cfg.WEIGHTS_PREFIX_REPLACE)
  File "/home/USER/Bureau/posEstimation/epipolar-transformers/utils/model_serialization.py", line 80, in load_state_dict
    if 'model' in loaded_state_dict.keys():
AttributeError: 'str' object has no attribute 'keys'

```
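Not an official fix, but a quick way to see what a checkpoint file actually contains; the loader above expects a dict, and the traceback suggests it received a plain string:

```python
import torch

# Inspect a checkpoint: is it a raw state_dict, a wrapped dict such as
# {'model': ...}, or something else entirely?
ckpt = torch.load("outs/pose_resnet_4.5_pixels_human36m.pth", map_location="cpu")
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])
```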

How to test on our own dataset?

Thanks a lot for the great work and for open-sourcing this extensive code repo. Could you please give a few pointers on how to test a pretrained model on our own custom dataset?

Have you tried more views?

I mean, if you used 3 or 4 views in the epipolar sampling, e.g. fusing features one more time with a third view, you might get better results.
I'd appreciate a reply.

Question about number of views

Hi, thank you for sharing such amazing work. I have some questions about the number of views, and I sincerely hope you can help me with them.
1. How can I adjust your code to obtain the results in Figure 7? It seems that you only use one source view in both training and testing, since TOPK is always 1, so I am curious how to obtain the results with only 2 views.

2. Based on Table 6, ResNet-50 at 256×256 + triangulation already achieves 48.7 mm. However, based on Figure 7, Cross-view + triangulate [28] achieves more than 70 mm with two views, which is even worse than the baseline without any fusion. This seems a little strange; could you please share some details?

Look forward to your reply, thanks in advance!
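For context, the "triangulate" baselines discussed here are linear (DLT) triangulation of per-view 2D detections. A self-contained sketch of that idea (the generic computer-vision recipe, not the repo's exact implementation):

```python
import numpy as np

def triangulate(Ps, uvs):
    """DLT triangulation: each view with projection matrix P and detection
    (u, v) contributes two rows; the 3D point is the null vector of A."""
    rows = []
    for P, (u, v) in zip(Ps, uvs):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]
    return X[:3] / X[3]

# Two toy cameras and a known 3D point, to show the round trip.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X = np.array([0.2, -0.1, 4.0, 1.0])
uv1 = (P1 @ X)[:2] / (P1 @ X)[2]
uv2 = (P2 @ X)[:2] / (P2 @ X)[2]
print(triangulate([P1, P2], [uv1, uv2]))  # ~ [0.2, -0.1, 4.0]
```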

Questions about the number of views

Hi, thanks for your work! When I run your code with four views, the results look good, but when I run it with two views they look bad. I am confused; could you give me some advice?

Test on 4 Views
[screenshot omitted]

Test on 2 Views
[screenshot omitted]

Question about the extrinsic matrix of human3.6M

Thanks for your code! I have just started working with the Human3.6M dataset. I noticed that the R and t camera parameters in the dataset change across actors; when I computed the extrinsics between different cameras, they also changed between actors. Is this because the positions of the four cameras actually changed during data collection, or have I misunderstood something?
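For anyone double-checking this, the relative pose between two cameras does not depend on the world frame, so it is a good quantity to compare across actors. A small NumPy sketch (assuming the world-to-camera convention x_cam = R x_world + t; the dataset's convention should be verified):

```python
import numpy as np

def relative_pose(R1, t1, R2, t2):
    """Transform from camera 1's frame to camera 2's frame: x2 = R_21 x1 + t_21.
    If the rig is fixed, R_21 and t_21 should be (nearly) identical across
    actors even if the per-actor world frames differ."""
    R_21 = R2 @ R1.T
    t_21 = t2 - R_21 @ t1
    return R_21, t_21

# Toy example: camera 2 is one unit to the left of camera 1.
R1, t1 = np.eye(3), np.zeros(3)
R2, t2 = np.eye(3), np.array([1.0, 0.0, 0.0])
print(relative_pose(R1, t1, R2, t2))
```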

pretrained models missing

All of the downloads fail with the same message:

bash ../datasets/get_pretrained_models.sh
--2020-05-12 14:05:34--  https://github.com/yihui-he/epipolar-transformer/releases/download/outputs/resnet50-19c8e357.pth
Resolving github.com (github.com)... 140.82.118.4
Connecting to github.com (github.com)|140.82.118.4|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2020-05-12 14:05:44 ERROR 404: Not Found.

zipreader in undistort_h36m.py

In undistort_h36m.py, after processing with the H36M-Toolbox, I don't get images.zip@. Can I change "zipreader.imread" to "cv2.imread"?

Code of Cross View Fusion

Hello. Thanks for your work.
I saw the comparison between your algorithm and the one in "Cross View Fusion for 3D Human Pose Estimation". However, its official implementation doesn't contain the training code. Will you release this part?

Results of InterHand2.6M

Hi,

Thanks for your awesome work. I noticed the InterHand2.6M dataset used in the paper has 257K 2D hands in the training set; however, the latest released InterHand2.6M has 528K manually labeled 2D hands for training. So I guess your reported results on InterHand2.6M are based on a subset of the latest release. Is that correct? If so, could you please update the results based on the publicly released version?

Thanks a lot.

Question about the number of views and MPJPE

Hi.
I think the MPJPE would improve if you used all four views of H36M, but you use just one other view (keypoint_h36m_resnet152_384_pretrained_8gpu.yaml).

Could you explain why you didn't use 4 views?

Train on RHD dataset error

Thanks for your great work! But when I tried training on the RHD dataset, I got the following error:
[screenshot omitted]

I run it with python main.py --cfg configs/lifting/lifting_direct.yaml.
Could you give me some advice?

Loading pretrain model

Hi. I am trying to load your provided pre-trained models "resnet50-19c8e357.pth" and "pose_resnet_4.5_pixels_human36m.pth" for testing, but it fails:
2021-07-08 10:47:24,038 checkpointer INFO: Loading checkpoint from datasets/resnet50-19c8e357.pth
Traceback (most recent call last):
  File "main.py", line 75, in <module>
    main()
  File "main.py", line 68, in main
    test(cfg)
  File "/media/hkuit155/Windows/research/epipolar-transformers/engine/tester.py", line 34, in test
    _ = checkpointer.load(cfg.WEIGHTS)
  File "/media/hkuit155/Windows/research/epipolar-transformers/utils/checkpoint.py", line 63, in load
    self._load_model(checkpoint, prefix=prefix, prefix_replace=prefix_replace)
  File "/media/hkuit155/Windows/research/epipolar-transformers/utils/checkpoint.py", line 102, in _load_model
    load_state_dict(self.model, checkpoint.pop("model"), prefix=prefix, prefix_replace=prefix_replace)
KeyError: 'model'

I wonder what's happening, and what are their corresponding config files?
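One hedged workaround, assuming those .pth files are raw state_dicts while utils/checkpoint.py expects a wrapped {'model': ...} dict (an assumption based on the checkpoint.pop("model") line in the traceback above): re-save them in the wrapped format.

```python
import torch

# Wrap a raw state_dict the way the Checkpointer appears to expect.
path = "datasets/resnet50-19c8e357.pth"
sd = torch.load(path, map_location="cpu")
if isinstance(sd, dict) and "model" not in sd:
    torch.save({"model": sd}, path.replace(".pth", ".wrapped.pth"))
```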

InterHand Dataset

Hello, is the InterHand dataset not publicly available?
Are there other similar multi-view datasets that can be used for hand tasks?
Best regards!

get_preprocessed_H36M.sh fails with 404s

I managed to set up the project and run the Epipolar Transformers visualization in a notebook.

However, I could not visualize Human3.6M: the download script couldn't find the files.

~/epipolar-transformers/datasets$ ./get_preprocessed_H36M.sh
--2020-05-25 15:43:04--  https://github.com/yihui-he/epipolar-transformer/releases/download/dataset/pairwise.pkl
Resolving github.com (github.com)... 140.82.118.3
Connecting to github.com (github.com)|140.82.118.3|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2020-05-25 15:43:04 ERROR 404: Not Found.

--2020-05-25 15:43:04--  https://github.com/yihui-he/epipolar-transformer/releases/download/dataset/h36m_validation.pkl
Resolving github.com (github.com)... 140.82.118.3
Connecting to github.com (github.com)|140.82.118.3|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2020-05-25 15:43:05 ERROR 404: Not Found.

--2020-05-25 15:43:05--  https://github.com/yihui-he/epipolar-transformer/releases/download/dataset/h36m_train.pklac
Resolving github.com (github.com)... 140.82.118.3
Connecting to github.com (github.com)|140.82.118.3|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2020-05-25 15:43:05 ERROR 404: Not Found.
--2020-05-25 15:43:05--  https://github.com/yihui-he/epipolar-transformer/releases/download/dataset/h36m_images.zip.partba
Resolving github.com (github.com)... 140.82.118.3
Connecting to github.com (github.com)|140.82.118.3|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2020-05-25 15:43:05 ERROR 404: Not Found.

memory consumption

Hi,

I'm reproducing H36M training with the config keypoint_h36m_zresidual_fixed.yaml. The program keeps eating up RAM as training progresses. I use PyTorch 1.3.0, following the setup instructions. Did you observe anything similar? Any pointer or thought would be more than welcome. Thanks in advance!

dataset dir

Hello,
I really liked your work and would like to test the network!

I have one more question: what needs to be changed after using the H36M-Toolbox? I cannot find the zip file.

Thank you very much in advance for your answers!

Best regards.
