
spin's People

Contributors

anuragranj, geopavlakos, ikvision, nkolot, t04glovern


spin's Issues

Real time Demo

Thank you for the awesome work.

Is it possible to run the demo in real time?

24 joint gt superset

I would like to pre-process my dataset into the preprocessed format described in
https://github.com/nkolot/SPIN/blob/master/datasets/preprocess/README.md

As mentioned in the constants file, the ground-truth annotations need to be converted to 24 joints:
https://github.com/nkolot/SPIN/blob/277892a91e74af004c74457cd294bd102972f9b0/constants.py#L10

But it is not clear to me what the order and the joint names of those 24 joints are. I tried, without much success, to reverse engineer the COCO converter:
https://github.com/nkolot/SPIN/blob/master/datasets/preprocess/coco.py#L11

Can you point me to the ordered list of joint names for the 24-joint superset?

Thank you
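For anyone else stuck here, below is a minimal sketch of the scatter pattern the preprocessing scripts use: each source-dataset joint is written into its slot in a zero-initialized 24-joint array. The joints_idx values are illustrative only, not the repository's actual mapping — the authoritative order has to be read from constants.py and the converters themselves:

import numpy as np

# hypothetical COCO-style input: 17 keypoints, (x, y) each
src_keypoints = np.random.rand(17, 2) * 224

# illustrative index list (NOT the repo's actual mapping):
# source joint i goes to slot joints_idx[i] of the 24-joint superset
joints_idx = [19, 20, 21, 22, 23, 9, 8, 10, 7, 11, 6, 3, 2, 4, 1, 5, 0]

part = np.zeros((24, 3))        # (x, y, confidence) for each superset joint
part[joints_idx, :2] = src_keypoints
part[joints_idx, 2] = 1.0       # mark mapped joints as visible; unmapped joints stay zero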

input to camera_fitting_loss

Hello,

I was going through the code for camera_fitting_loss.py and was wondering why this is defined as it is:

reprojection_error_op = (joints_2d[:, op_joints_ind, :-1] - projected_joints[:, op_joints_ind, :]) ** 2

Specifically, why are only the x-values of joints_2d used here (the :-1 slice)? I believe joints_2d has shape (batch, #joints, 2) --> (x, y), since in smplify.py the input to the loss is defined as:

# get the 2D joints and their confidence values
joints_2d = keypoints_2d[:, :, :2]
joints_conf = keypoints_2d[:, :, -1]

Thanks!
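As a quick, self-contained illustration of what that slice does (nothing SPIN-specific here): on a (..., 2) tensor it keeps only x, while on a (..., 3) tensor carrying confidence it would keep (x, y):

import torch

joints_2d = torch.zeros(1, 25, 2)     # (batch, joints, (x, y))
print(joints_2d[:, :, :-1].shape)     # torch.Size([1, 25, 1]) -> x only

keypoints = torch.zeros(1, 25, 3)     # (batch, joints, (x, y, conf))
print(keypoints[:, :, :-1].shape)     # torch.Size([1, 25, 2]) -> (x, y)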

Training without Human3.6M

When running training:
train.py --name train_example --pretrained_checkpoint=data/model_checkpoint.pt --run_smplify --ignore_3d
FileNotFoundError: [Errno 2] No such file or directory: 'data/dataset_extras/h36m_single_train_openpose.npz'

It is not clear which training option to choose to avoid needing the H3.6M data:
https://github.com/nkolot/SPIN/blob/master/utils/train_options.py

Do you have an estimate of the decrease in eval performance if I train without H3.6M?

Issue running demo.py

Hi! Thank you for sharing your code! I have been trying to run your demo but unfortunately haven't been successful. When I try to execute it, I get the following error:

python3 -u demo.py --checkpoint=data/model_checkpoint.pt --img=examples/im1010.jpg --bbox=examples/im1010_bbox.json 
Traceback (most recent call last):
  File "demo.py", line 122, in <module>
    pred_output = smpl(betas=pred_betas, body_pose=pred_rotmat[:,1:], global_orient=pred_rotmat[:,0].unsqueeze(1), pose2rot=False)
  File "/home/zal/anaconda3/envs/spin/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/zal/Devel/SPIN/models/smpl.py", line 23, in forward
    smpl_output = super(SMPL, self).forward(*args, **kwargs)
  File "/home/zal/anaconda3/envs/spin/lib/python3.7/site-packages/smplx/body_models.py", line 374, in forward
    self.lbs_weights, dtype=self.dtype)
  File "/home/zal/anaconda3/envs/spin/lib/python3.7/site-packages/smplx/lbs.py", line 195, in lbs
    pose_offsets = torch.matmul(pose_feature, posedirs) \
RuntimeError: size mismatch, m1: [1 x 639], m2: [207 x 20670] at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:268

Got any idea of what could be happening? Thank you very much!

import neural_renderer Error

Hi, when I tried to run eval.py, which imports "neural_renderer", I got the following error:

Traceback (most recent call last):
  File "eval.py", line 32, in <module>
    from utils.part_utils import PartRenderer
  File "/home/BaseCode/utils/part_utils.py", line 3, in <module>
    import neural_renderer as nr
  File "/usr/local/lib/python3.6/dist-packages/neural_renderer/__init__.py", line 3, in <module>
    from .load_obj import load_obj
  File "/usr/local/lib/python3.6/dist-packages/neural_renderer/load_obj.py", line 8, in <module>
    import neural_renderer.cuda.load_textures as load_textures_cuda
ImportError: /usr/local/lib/python3.6/dist-packages/neural_renderer/cuda/load_textures.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN6caffe26detail37_typeMetaDataInstance_preallocated_32E

Does anyone know how to fix this?

smpl-x model

Is the use of the SMPL-X model, instead of only SMPL, planned for the future? SMPL-X seems to have better performance.

Docker

@nkolot Hi, thanks so much for such nice work. I tried to run the project in a virtual environment but was unable to. I want to run it in Docker, but I'm new to Docker. Docker is working and I have already fetched your repository. What is the next step to run the demo? I'm very sorry for this very basic question.
Thanks for your support.

Warning issues with pytorch1.3

Hello, thanks a lot for your contribution.
When I run the train.py code, I see a lot of warning messages like:

/pytorch/aten/src/ATen/native/IndexingUtils.h:20: UserWarning: indexing with dtype torch.uint8 is now deprecated, please use a dtype torch.bool instead.

However, I have no idea where the warnings could come from. I would really appreciate any advice. Thank you!
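The warning itself is generic PyTorch 1.2+ behavior when indexing with a uint8 tensor; where exactly SPIN builds such masks is something I have not traced, so treat this only as a minimal reproduction and the usual fix:

import torch

x = torch.arange(4)
mask = torch.tensor([1, 0, 1, 0], dtype=torch.uint8)
y = x[mask]          # emits the deprecation warning on torch >= 1.2
y = x[mask.bool()]   # same result, no warning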

Render error when running demo.py

Hi! Thank you for sharing your code! I believe I installed the necessary environment and put the SMPL models in the right path, but I hit this error when running demo.py over an SSH terminal:

Traceback (most recent call last):
File "/home/zwliu/local/spin/lib/python3.6/site-packages/pyrender/platforms/pyglet.py", line 32, in init_context
width=1, height=1)
File "/home/zwliu/local/spin/lib/python3.6/site-packages/pyglet/window/xlib/__init__.py", line 170, in __init__
super(XlibWindow, self).__init__(*args, **kwargs)
File "/home/zwliu/local/spin/lib/python3.6/site-packages/pyglet/window/__init__.py", line 573, in __init__
display = pyglet.canvas.get_display()
File "/home/zwliu/local/spin/lib/python3.6/site-packages/pyglet/canvas/__init__.py", line 95, in get_display
return Display()
File "/home/zwliu/local/spin/lib/python3.6/site-packages/pyglet/canvas/xlib.py", line 119, in __init__
raise NoSuchDisplayException('Cannot connect to "%s"' % name)
pyglet.canvas.xlib.NoSuchDisplayException: Cannot connect to "None"

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "demo.py", line 115, in <module>
renderer = Renderer(focal_length=constants.FOCAL_LENGTH, img_res=constants.IMG_RES, faces=smpl.faces)
File "/home/zwliu/workspace/human_3d_modeling_work/SPIN/utils/renderer.py", line 17, in __init__
point_size=1.0)
File "/home/zwliu/local/spin/lib/python3.6/site-packages/pyrender/offscreen.py", line 31, in __init__
self._create()
File "/home/zwliu/local/spin/lib/python3.6/site-packages/pyrender/offscreen.py", line 134, in _create
self._platform.init_context()
File "/home/zwliu/local/spin/lib/python3.6/site-packages/pyrender/platforms/pyglet.py", line 38, in init_context
'internal error message was "{}"'.format(e)
ValueError: Failed to initialize Pyglet window with an OpenGL >= 3+ context. If you're logged in via SSH, ensure that you're running your script with vglrun (i.e. VirtualGL). The internal error message was "Cannot connect to "None""

I have no idea how to deal with this error. I would really appreciate any help. Thanks a lot!
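For what it's worth, pyrender also supports headless offscreen rendering without an X server, via EGL or OSMesa, selected through an environment variable set before pyrender is imported; whether this plays well with SPIN's renderer setup is an assumption on my part:

import os
os.environ['PYOPENGL_PLATFORM'] = 'egl'  # or 'osmesa' if OSMesa is installed

import pyrender  # the import must come after the environment variable is set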

Ubuntu 16.04, Python 2.7: smplx can't be installed

Hello, I installed PyOpenGL on Windows 10 with Python 3.6; the installation succeeded, but the package could not be loaded. So I tried this project on Ubuntu 16.04, where PyOpenGL could not be installed under Python 3.x but worked under Python 2.7, so I am using Python 2.7. The new problem is that smplx only works under Python 3.x. Could you please help me solve this? Thanks!

Rotation matrix to Euler angles

Hi, great work, guys.
Since the network predicts the pose parameters as rotation matrices, I would like to convert them to Euler angles so I can apply some simple filtering to video-sequence output. But when I convert the rotation matrices to Euler angles, the resulting SMPL output is completely wrong, and I don't know where I went wrong. Here is my rotation-to-Euler code:
import math
import numpy as np

def isRotationMatrix(R):
    # R is a rotation matrix iff R^T R = I (and det(R) = 1)
    return np.linalg.norm(np.dot(R.T, R) - np.eye(3)) < 1e-6

def rotationMatrixToEulerAngles(R):
    assert isRotationMatrix(R)
    sy = math.sqrt(R[0, 0] * R[0, 0] + R[1, 0] * R[1, 0])
    singular = sy < 1e-6
    if not singular:
        x = math.atan2(R[2, 1], R[2, 2])
        y = math.atan2(-R[2, 0], sy)
        z = math.atan2(R[1, 0], R[0, 0])
    else:
        x = math.atan2(-R[1, 2], R[1, 1])
        y = math.atan2(-R[2, 0], sy)
        z = 0
    return np.array([x, y, z])

Looking forward to your response!
Thanks!
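Editorial aside: a quick self-check of the function above (my own test, not from the issue) suggests the decomposition itself round-trips correctly for R = Rz(z) @ Ry(y) @ Rx(x), so the problem may lie elsewhere — note that SMPL pose parameters are axis-angle rotations, not Euler angles:

import math
import numpy as np

def Rx(a): return np.array([[1, 0, 0], [0, math.cos(a), -math.sin(a)], [0, math.sin(a), math.cos(a)]])
def Ry(a): return np.array([[math.cos(a), 0, math.sin(a)], [0, 1, 0], [-math.sin(a), 0, math.cos(a)]])
def Rz(a): return np.array([[math.cos(a), -math.sin(a), 0], [math.sin(a), math.cos(a), 0], [0, 0, 1]])

R = Rz(0.3) @ Ry(-0.5) @ Rx(1.0)
print(rotationMatrixToEulerAngles(R))  # ~[1.0, -0.5, 0.3], i.e. the conversion round-trips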

Issue with smpl model

I use the docker image that you provide and ran the fetch_data.sh script, but when I run the demo it says: 'AssertionError: Path data/smpl does not exist!'

unknown parameters of SMPLify

self.smplify = SMPLify(step_size=1e-2, batch_size=self.options.batch_size, num_iters=self.options.num_smplify_iters, focal_length=self.focal_length, prior_mul=0.1, conf_thresh=self.conf_thresh)

There seem to be no such parameters "prior_mul" and "conf_thresh". Are they typos?

Thanks!

loss for camera parameters

Hi,

I have a question regarding the loss function.

In trainer.py, there is no explicit loss on the predicted camera parameters. Since you maintain "opt_camera_t" in the dictionary, it looks more reasonable and straightforward to add a loss that explicitly constrains the camera parameters (similar to "End-to-end Recovery of Human Shape and Pose", CVPR 2018).

Have you tried this in your experiments? Did it improve performance?

Thanks!

typo in constants.py

I see the following line in constants.py:

J24_TO_J17 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 18, 14, 16, 17]

should 18 be 13 instead?

dockerfile might be helpful

I am struggling to use the provided docker image with GPU support. Can you publish the Dockerfile you used to create the image?
It might help me and others to build a newer version on top of it.

Thank you

Multi-person key points correspondence

It seems to me that in the COCO pre-processed data there are some mismatches between ground_truth_keypoints and open_pose_keypoints.

I would love to know if it is a known issue (that is difficult to eliminate without manual review) or a bug on my side.

gt_coco_img_keypoints - the right person is annotated [image omitted]
gt_img_keypoints_open_pose - the left person is targeted by the prediction [image omitted]

This mismatch is not very frequent, since most COCO images contain a single person, but it might affect the 2D re-projection loss and might therefore be the reason for

'--openpose_train_weight', default=0
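For anyone wanting to audit such cases offline, here is a rough sketch (my own, not the repository's logic) that matches a ground-truth annotation to the nearest OpenPose detection by mean keypoint distance, assuming both are already in the same joint convention and image coordinates:

import numpy as np

def match_openpose(gt_kpts, op_people, thresh=50.0):
    # gt_kpts: (J, 3) with confidence; op_people: list of (J, 3) detections
    vis = gt_kpts[:, 2] > 0
    dists = [np.linalg.norm(p[vis, :2] - gt_kpts[vis, :2], axis=1).mean()
             for p in op_people]
    best = int(np.argmin(dists))
    return best if dists[best] < thresh else None  # None -> probable mismatch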

About the optimization part when training on Human3.6M

Hi. It's great work, but I'm a little confused. In your code, the optimized parameters are replaced with the ground-truth parameters when those are available. So does that mean the optimization part of the code doesn't do anything during training on Human3.6M?
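For reference, the override pattern being asked about looks roughly like this (a paraphrase with made-up variable names, not the repository's exact code):

import torch

batch = 4
opt_pose, gt_pose = torch.randn(batch, 72), torch.randn(batch, 72)
opt_betas, gt_betas = torch.randn(batch, 10), torch.randn(batch, 10)
has_smpl = torch.tensor([1, 0, 1, 0], dtype=torch.bool)  # samples that carry GT SMPL parameters

# wherever full SMPL ground truth exists, it replaces the fitted parameters,
# so the SMPLify result is only kept for samples without GT
opt_pose[has_smpl] = gt_pose[has_smpl]
opt_betas[has_smpl] = gt_betas[has_smpl]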

Render Error during Training

Traceback (most recent call last): | 99/4877 [09:26<6:57:33, 5.24s/it]
File "train.py", line 7, in <module>
trainer.train()
File "/home/project/SPIN/utils/base_trainer.py", line 72, in train
self.train_summaries(batch, *out)
File "/home/project/SPIN/train/trainer.py", line 305, in train_summaries
images_pred = self.renderer.visualize_tb(pred_vertices, pred_cam_t, images)
File "/home/project/SPIN/utils/renderer.py", line 29, in visualize_tb
rend_img = torch.from_numpy(np.transpose(self.__call__(vertices[i], camera_translation[i], images_np[i]), (2,0,1))).float()
File "/home/project/SPIN/utils/renderer.py", line 46, in __call__
mesh.apply_transform(rot)
File "/usr/local/lib/python3.6/dist-packages/trimesh/base.py", line 2038, in apply_transform
triangle_pre = self.vertices[self.faces[:5]]
IndexError: index 1 is out of bounds for axis 0 with size 0

Questions about training options.

Hi Nikos,

When I check your supplementary material, I find that the lr is 3e-5 and the maximum number of optimization iterations is 50, but in your code lr=5e-5 and num_smplify_iters=100. Which of these were used in your training?

You also mentioned: "The model with limited access to 3D ground truth ("paired") was initialized with a model pretrained on Human3.6M [4] using full 3D pose and shape ground truth. Pretraining in this case was useful, such that the model provides better initial 3D shape estimates for the iterative fitting." How do you configure this pretraining? Do you use the same loss, and how many iterations does it need?

Another question: the OpenPose detections are only used by SMPLify and are not considered when you compute the 2D keypoint loss for the network, right?

how to choose SMPL model

Hi, it's me again. I'm still studying the evaluation. I'm confused about which model to choose at evaluation time to get the predicted SMPL vertices and the ground-truth SMPL vertices: male, female, or neutral? I noticed that you choose the neutral model when generating predictions, but when getting the GT vertices you use the gendered model if the gender value is available. Does that mean we could choose the gender if we could predict it, or should we just use neutral?
Sorry for asking so many questions.

dim size mismatch error running demo.py

I came across similar issue as #36

I downloaded "Version 1.0" of SMPL-X a few days ago and tried both Python 3.6 and 3.7 in a virtualenv, as indicated in issue #36. However, it still reports dimension-mismatch errors.

 Traceback (most recent call last):
  File "demo.py", line 129, in <module>
    pred_output = smpl(betas=pred_betas, body_pose=pred_rotmat[:,1:], global_orient=pred_rotmat[:,0].unsqueeze(1), pose2rot=False)
  File "/home/co/work/SPIN/spin-venvpy36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/co/work/SPIN/models/smpl.py", line 23, in forward
    smpl_output = super(SMPL, self).forward(*args, **kwargs)
  File "/home/co/work/SPIN/spin-venvpy36/lib/python3.6/site-packages/smplx/body_models.py", line 376, in forward
    self.lbs_weights, pose2rot=pose2rot, dtype=self.dtype)
  File "/home/co/work/SPIN/spin-venvpy36/lib/python3.6/site-packages/smplx/lbs.py", line 179, in lbs
    v_shaped = v_template + blend_shapes(betas, shapedirs)
  File "/home/co/work/SPIN/spin-venvpy36/lib/python3.6/site-packages/smplx/lbs.py", line 265, in blend_shapes
    blend_shape = torch.einsum('bl,mkl->bmk', [betas, shape_disps])
  File "/home/co/work/SPIN/spin-venvpy36/lib/python3.6/site-packages/torch/functional.py", line 211, in einsum
    return torch._C._VariableFunctions.einsum(equation, operands)
RuntimeError: size of dimension does not match previous size, operand 1, dim 2

Versions of dependencies are:

Package                 Version    
----------------------- -----------
absl-py                 0.9.0      
cachetools              4.0.0      
certifi                 2019.11.28 
chardet                 3.0.4      
chumpy                  0.69       
cycler                  0.10.0     
decorator               4.4.1      
ffnet                   0.8.4      
freetype-py             2.1.0.post1
future                  0.18.2     
google-auth             1.10.0     
google-auth-oauthlib    0.4.1      
grpcio                  1.26.0     
h5py                    2.10.0     
idna                    2.8        
imageio                 2.6.1      
kiwisolver              1.1.0      
Markdown                3.1.1      
matplotlib              3.1.2      
networkx                2.2        
neural-renderer-pytorch 1.1.3      
numpy                   1.18.1     
oauthlib                3.1.0      
opencv-python           4.1.2.30   
Pillow                  6.1.0      
pip                     19.3.1     
protobuf                3.11.2     
pyasn1                  0.4.8      
pyasn1-modules          0.2.8      
pyglet                  1.4.0b1    
PyOpenGL                3.1.0      
pyparsing               2.4.6      
pyrender                0.1.33     
python-dateutil         2.8.1      
PyWavelets              1.1.1      
requests                2.22.0     
requests-oauthlib       1.3.0      
rsa                     4.0        
scikit-image            0.16.2     
scipy                   1.0.0      
setuptools              44.0.0     
six                     1.13.0     
smplx                   0.1.13     
spacepy                 0.2.1      
tensorboard             2.0.2      
torch                   1.1.0      
torchgeometry           0.1.2      
torchvision             0.3.0      
tqdm                    4.41.1     
trimesh                 3.5.14     
urllib3                 1.25.7     
Werkzeug                0.16.0     
wheel                   0.33.6 

Any idea what is wrong here?

Originally posted by @threedlife in #36 (comment)

Question about shape prior of smplify

Thanks for sharing the code. I have a question about the shape prior in the SMPLify implementation. Specifically, it seems to be an L2 regularizer. In the paper (eq. (7)), Σ is a diagonal matrix with the squared singular values estimated via Principal Component Analysis from the shapes in the SMPL training set. The implementation seems to imply that the matrix is an identity matrix?
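In symbols, the comparison the question is drawing (my paraphrase of the SMPLify paper's prior):

$E_\beta(\beta) = \beta^\top \Sigma_\beta^{-1} \beta$

with $\Sigma_\beta$ the diagonal matrix of squared PCA singular values, whereas a plain L2 regularizer $\|\beta\|_2^2$ is the special case $\Sigma_\beta = I$.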

Explain how to generate 'static fits'

It seems the static_fits/*_fits.npy files are used to get the 'current best fits' in the training loop. In the scenario where I want to train the model from scratch, I don't see why I should need to load something like this from disk... shouldn't the current best fit just be the mean pose at the beginning?

The comment given in fetch_data.sh is:

# Initial fits to start training
wget http://visiondata.cis.upenn.edu/spin/static_fits.tar.gz && tar -xvf static_fits.tar.gz --directory data && rm -r static_fits.tar.gz

For more context, I want to train the model from scratch on H36M, for which I have GT SMPL shape & pose in the format used by GraphCMR (loaded in base_dataset.py). However, I can't continue without being able to load an h36m_fits.npy file, which seems to be missing, and I'm not sure how to generate one!

Some advice on how to proceed training from scratch would be useful.
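If the 'mean pose' reading above is right, a placeholder fits file could plausibly be generated as below; the (N, 82) layout (72 axis-angle pose values + 10 betas per sample) and the sample count are my assumptions, so verify against how base_dataset.py reads the file:

import numpy as np

n_samples = 312188                 # hypothetical: the number of training samples in your H36M split
fits = np.zeros((n_samples, 82))   # 72 pose parameters + 10 betas; all zeros = mean pose and shape
np.save('data/static_fits/h36m_fits.npy', fits)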

convergence of in-the-loop training

Hi, I wanted to reproduce the in-the-loop training, but the loss becomes extremely large after about three epochs. I use the following parameters:

python train.py --name train_example --run_smplify --batch_size 160  --num_smplify_iters 50  --lr 1e-4

The only things that differ are the batch size and the learning rate; I doubled the learning rate since the batch size is almost doubled. I am also trying the default learning rate, but I don't think this is the issue.
Did you encounter this in your experiments? Also, for static training (no --run_smplify), I found that loss_regr_betas also takes some extreme values, similar to the first half of the loss_regr_betas curve in the figures below. Do you have any idea why this happens?

[training loss curves omitted]

Trouble with reimplementation with GPU support (RTX 2070 Super)

Hello,
thanks for your work and for providing the docker image!
I am unable to run your demo and evaluation code with the container as-is. In order to get GPU support, I use the "--gpus all" option when running the container; nvidia-smi recognizes the GPU, with CUDA 10.1 (same as the host). I think this is why the code doesn't work, since torch 1.1.0 is compiled against CUDA 9. I am having some serious trouble changing the container's CUDA version to 9 and would appreciate any insight into the problem.

My outputs:
root@f17e6a9cbeee:/home/SPIN# python3 demo.py --checkpoint=data/model_checkpoint.pt --img=examples/im1010.jpg
THCudaCheck FAIL file=/pytorch/aten/src/THC/THCGeneral.cpp line=383 error=11 : invalid argument

root@f17e6a9cbeee:/home/SPIN# python3 eval.py --checkpoint=data/model_checkpoint.pt --dataset=3dpw --log_freq=20
Traceback (most recent call last):
File "eval.py", line 32, in <module>
from utils.part_utils import PartRenderer
File "/home/SPIN/utils/part_utils.py", line 3, in <module>
import neural_renderer as nr
File "/usr/local/lib/python3.6/dist-packages/neural_renderer/__init__.py", line 3, in <module>
from .load_obj import load_obj
File "/usr/local/lib/python3.6/dist-packages/neural_renderer/load_obj.py", line 8, in <module>
import neural_renderer.cuda.load_textures as load_textures_cuda
ImportError: /usr/local/lib/python3.6/dist-packages/neural_renderer/cuda/load_textures.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN3c105ErrorC1ENS_14SourceLocationERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE

  • Do you think it would be easier to run a torch 1.1.0 container with the correct CUDA and cuDNN versions and go from there?
  • As far as I understand, neural-renderer is GPU-only, right? That would explain why I can actually run the demo code CPU-only, but not the evaluation code.

Camera Model

@nkolot Hi, I have a question about the camera model. I saw that you first estimate the parameters [s, tx, ty] of a weak-perspective camera and then convert them to a camera translation [tx, ty, 2f/(img_res * s)]. Instead of directly regressing the translation from the model, why do you first estimate [s, tx, ty] and then do this conversion? And how do you derive the formula tz = 2f/(img_res * s)?
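For readers who want the algebra (my sketch, consistent with the conversion quoted above): in crop coordinates normalized to $[-1, 1]$, the weak-perspective camera maps a point $X$ to $u = sX$, while a perspective camera at depth $t_z$ with focal length $f$ in pixels maps it to $u = \frac{2f}{\text{img\_res}} \cdot \frac{X}{t_z}$, where the factor $2/\text{img\_res}$ converts pixels into normalized coordinates. Equating the two gives

$s = \frac{2f}{\text{img\_res}\; t_z} \;\Longrightarrow\; t_z = \frac{2f}{\text{img\_res}\; s}$,

which is exactly the third component of the translation above.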

How to get h36m_single_train_openpose.npz?

What data is included in h36m_single_train_openpose.npz?
How can I generate h36m_single_train_openpose.npz?
Where can I find h36m_single_train_openpose.npz?

Any example?

Installing on windows

I tried to run the demo on Windows.
I found that to make it work I had to change demo.py in the following way:

import torch
torch.cuda.current_device() # new line added
from torchvision.transforms import Normalize

Otherwise I get an error during CUDA initialization. Even with that change, I get the following error:

ModuleNotFoundError: No module named 'pyrender.platforms.osmesa'; 'pyrender.platforms' is not a package

I tried to install following https://pyrender.readthedocs.io/en/latest/install/index.html without success. Any suggestion?
Thanks

Axial limb orientation

Hello,
I would like to use SPIN for my research, both because of its high accuracy and because it implicitly predicts axial limb orientation (e.g. whether an arm is pronated or supinated), since this can be retrieved from the SMPL model. That's an advantage over skeleton-based prediction for my research, and I wonder if you ever thought about measuring the orientation accuracy of the model predictions?
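One common metric for this (my suggestion, not something the authors state) is the geodesic angle between predicted and ground-truth per-joint rotation matrices, which captures exactly the axial errors that joint-position metrics like MPJPE miss:

import numpy as np

def geodesic_angle(R_pred, R_gt):
    # angle of the relative rotation R_pred @ R_gt.T, in radians
    cos = (np.trace(R_pred @ R_gt.T) - 1.0) / 2.0
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

print(np.degrees(geodesic_angle(np.eye(3), np.eye(3))))  # 0.0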

Problems with the docker image

After pulling the docker image and running the eval code, it said that a lot of libraries are not installed. I don't know if I'm the only one who has encountered this problem.

demo in docker fails to run

When running the following in the pulled docker image:
demo.py --checkpoint=data/model_checkpoint.pt --img=examples/im1010.jpg
I get the following error:
Traceback (most recent call last):
File "demo.py", line 34, in <module>
from models import hmr, SMPL
File "/SPIN/models/__init__.py", line 1, in <module>
from .hmr import hmr
File "/SPIN/models/hmr.py", line 6, in <module>
from utils.geometry import rot6d_to_rotmat
File "/SPIN/utils/__init__.py", line 3, in <module>
from .base_trainer import BaseTrainer
File "/SPIN/utils/base_trainer.py", line 8, in <module>
from torch.utils.tensorboard import SummaryWriter
File "/usr/local/lib/python3.6/dist-packages/torch/utils/tensorboard/__init__.py", line 6, in <module>
from .writer import FileWriter, SummaryWriter  # noqa F401
File "/usr/local/lib/python3.6/dist-packages/torch/utils/tensorboard/writer.py", line 18, in <module>
from ._convert_np import make_np
File "/usr/local/lib/python3.6/dist-packages/torch/utils/tensorboard/_convert_np.py", line 12, in <module>
from caffe2.python import workspace
File "/usr/local/lib/python3.6/dist-packages/caffe2/python/workspace.py", line 15, in <module>
from past.builtins import basestring
ModuleNotFoundError: No module named 'past'
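A likely fix (editorial note, not from the original thread): the missing 'past' module is provided by the 'future' package on PyPI, so installing it inside the container should resolve this import:

pip3 install future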

SMPLify after model prediction generates erroneous results

I tried to run SMPLify after the model prediction in the demo code, after here:

# context: pred_rotmat, pred_betas, pred_camera and device come from the network forward pass in demo.py
import numpy as np
import torch

import constants
from smplify import SMPLify  # import paths assumed from the SPIN repo layout
from torchgeometry import rotation_matrix_to_angle_axis

kpts = np.zeros((1, 49, 3))
kpts[0, :25, :] = keypoints  # 25 keypoints loaded from the OpenPose JSON file, (x, y, confidence)
pred_cam_t = torch.stack([pred_camera[:, 1],
                          pred_camera[:, 2],
                          2 * constants.FOCAL_LENGTH / (constants.IMG_RES * pred_camera[:, 0] + 1e-9)], dim=-1)

bs = 1
# append a fourth column to each 3x3 rotation so it can be converted to axis-angle
pred_rotmat_hom = torch.cat(
    [pred_rotmat.detach().view(-1, 3, 3),
     torch.tensor([0, 0, 1], dtype=torch.float32, device=device).view(1, 3, 1).expand(bs * 24, -1, -1)],
    dim=-1)
pred_pose = rotation_matrix_to_angle_axis(pred_rotmat_hom).contiguous().view(bs, -1)
pred_pose[torch.isnan(pred_pose)] = 0.0

# the 49-joint keypoint tensor; the GT joints (rows 25:) stay at zero confidence
keypoints_49 = torch.from_numpy(kpts).float().to(device)

smplify = SMPLify(step_size=1e-2, batch_size=bs, num_iters=1000, focal_length=constants.FOCAL_LENGTH)
new_opt_vertices, new_opt_joints, \
new_opt_pose, new_opt_betas, \
new_opt_cam_t, new_opt_joint_loss = smplify(
    pred_pose.detach(), pred_betas.detach(),
    pred_cam_t.detach(),
    0.5 * constants.IMG_RES * torch.ones(bs, 2, device=device),
    keypoints_49)
new_opt_joint_loss = new_opt_joint_loss.mean(dim=-1)
pred_vertices = new_opt_vertices[0].cpu().numpy()

I rendered with the above pred_vertices and got the following weird results:
[im1010 renders, front and side views, omitted]
I think the only difference between this and the training code is that the ground-truth keypoints[25:] are unknown here. But according to this, the unknown ground truth should not be the reason, right? Did I miss anything?

Avoiding Overfitting

During training, TensorBoard shows the training loss.
Without a validation loss I find it difficult to tell whether the system is overfitting, and therefore difficult to know when training should be stopped or further regularized.

It seems BaseTrainer has a dedicated hook for this:
https://github.com/nkolot/SPIN/blob/6de0944655721e8b2ecad6566aa40bc436a0e662/utils/base_trainer.py#L75:L77
but it is not implemented in the inheriting Trainer class.

I only see a test set for the final evaluation; am I missing an option to incorporate a validation set?
What method do you use to pick the best model?
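A minimal sketch of what filling in that hook might look like — everything here (the test() name as the linked lines suggest, self.val_loader, compute_mpjpe) is hypothetical and only meant to illustrate the idea:

import numpy as np
import torch

class Trainer(BaseTrainer):  # assuming the repo's BaseTrainer is in scope
    def test(self):
        # run a validation pass; assumed to be called periodically by BaseTrainer
        self.model.eval()
        errors = []
        with torch.no_grad():
            for batch in self.val_loader:  # hypothetical held-out DataLoader
                pred = self.model(batch['img'].to(self.device))
                errors.append(self.compute_mpjpe(pred, batch))  # hypothetical metric helper
        self.summary_writer.add_scalar('val_mpjpe', float(np.mean(errors)), self.step_count)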

relationship between J_regressor_h36m.npy and J_regressor_extra.npy

I notice J_regressor_extra[7,:] == J_regressor_h36m[9,:], which is Jaw (H36M), and J_regressor_extra[6,:] == J_regressor_h36m[7,:], which is Spine (H36M).

However, J_regressor_extra[8,:] and J_regressor_h36m[10,:] are quite different, although they are both supposed to be Head (H36M). Is there any reason for this?
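For anyone reproducing the comparison, a sketch (file names as fetched by fetch_data.sh; the row indices follow the issue, and the array shapes are my assumption):

import numpy as np

extra = np.load('data/J_regressor_extra.npy')  # rows = extra joints, cols = SMPL vertices
h36m = np.load('data/J_regressor_h36m.npy')    # rows = 17 H36M joints

print(np.allclose(extra[7], h36m[9]))          # Jaw (H36M): True, per the issue
print(np.allclose(extra[6], h36m[7]))          # Spine (H36M): True, per the issue
print(np.abs(extra[8] - h36m[10]).max())       # Head (H36M): rows differ noticeably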

failed to install neural-renderer-pytorch

I tried to install neural-renderer-pytorch on Windows 10, Python 3.6, with torch==1.1.0, but it failed.
Below is part of the error; I could not find a solution.
ERROR: Command errored out with exit status 1:
command: 'C:\Users\LH\AppData\Local\Programs\Python\Python36\python.exe' -u -c 'import sys, setuptools, tokenize; ...' bdist_wheel -d 'C:\Users\LH\AppData\Local\Temp\pip-wheel-cyg0cn7b' --python-tag cp36
cwd: C:\Users\LH\AppData\Local\Temp\pip-install-ccvzskvd\neural-renderer-pytorch\
Complete output (1140 lines):
running bdist_wheel
running build
running build_py
[copying neural_renderer\*.py -> build\lib.win-amd64-3.6\neural_renderer and build\lib.win-amd64-3.6\neural_renderer\cuda]
running build_ext
C:\Users\LH\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\utils\cpp_extension.py:184: UserWarning: Error checking compiler version for cl: [WinError 2] The system cannot find the file specified. (translated from Chinese)
building 'neural_renderer.cuda.load_textures' extension
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\bin\HostX64\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MT ... /Tpneural_renderer/cuda/load_textures_cuda.cpp /Fobuild\temp.win-amd64-3.6\Release\neural_renderer/cuda/load_textures_cuda.obj -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=load_textures -D_GLIBCXX_USE_CXX11_ABI=0 /MD
load_textures_cuda.cpp
[followed by hundreds of MSVC warnings from the torch/c10 headers, e.g. C4275/C4251 "class needs to have a dll-interface to be used by clients of class 'c10::Error'", C4244 "conversion from 'int64_t' to 'float', possible loss of data", C4267 "conversion from 'size_t' to 'uint32_t', possible loss of data" (messages translated from Chinese); the quoted log is truncated here]
