genre-shapehd's People

Contributors

alexzhou907, amir-arsalan, xiumingzhang, ztzhang

genre-shapehd's Issues

perspective in reconstruction results from MarrNet

Hi, I'm reproducing the MarrNet training on the chair class. The reconstruction results are good, but the extracted surfaces look oddly perspective-like, as shown below: they become transparent when the object is rotated to different views. Do you know how to solve this? Is it caused by voxel_isosurf_th? I am using voxel_isosurf_th = 0.25.

[screenshots of the reconstructed chair surfaces]

How to generate full spherical maps / rotated vox for GenRe in current view?

Hi!

Thanks for sharing the code and dataset.

Could you please provide some information about how to generate the full spherical maps used as ground truth for training? I also noticed that the dataset directory contains a rotated voxel grid for each spherical map; how are such aligned voxels generated?

I ran into some problems aligning the ShapeNet object with the depth map when rendering my own dataset under specific poses.

Thanks!

Camera parameters used when Creating the Dataset

Hi all!

I was trying to generate my own dataset; however, the spherical maps I generate (from depth and from the voxel representation) do not match.

I'm using the camera back projection module:

def forward(self, depth_t, fl=418.3, cam_dist=2.2, shift=True):

Which is used here and here:

from toolbox.cam_bp.cam_bp.modules.camera_backprojection_module import Camera_back_projection_layer

My questions are the following:

  • What is fl? Is it the focal length of your camera? In which units?
  • If our camera is not at a distance of 2.2 from the object, is it sufficient to change it here, or should we change it somewhere else? (See the sketch below.)
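
For context, a minimal sketch of calling the layer with non-default values; the numbers below are placeholders rather than the dataset's actual parameters, and the default constructor plus an (N, 1, H, W) depth tensor on the GPU are assumptions:

import torch
from toolbox.cam_bp.cam_bp.modules.camera_backprojection_module import Camera_back_projection_layer

proj = Camera_back_projection_layer()
depth = torch.rand(1, 1, 256, 256).cuda()  # dummy depth map
# fl and cam_dist overridden with placeholder values for a different camera setup
voxels = proj(depth, fl=500.0, cam_dist=3.0)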

Thanks in advance for your time.

Best,
Pedro

nvcc not found

Hello,
I am trying to run GenRe-ShapeHD in a conda environment with CUDA 9.2 on Ubuntu 18.04.2 LTS and am getting stuck when I run sudo ./build_toolbox.sh
The error I am getting is "cuda available but nvcc not found. Please add nvcc to $PATH"
I tried to fix this by adding /usr/local/cuda-9.2/bin to $PATH from a file in the GenRe-ShapeHD directory.

After doing this and running nvcc -V, it looks like it was able to find nvcc because it returns
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Tue_Jun_12_23:07:04_CDT_2018
Cuda compilation tools, release 9.2, V9.2.148

But it still gives me the same error that it cannot find nvcc when I run sudo ./build_toolbox.sh.
Do you know what could be causing this issue?

Thank you

The results of GenRe testing seem not right

Hi, Xiuming

Thank you very much for sharing your excellent work.
When I ran scripts/test_genre.sh with your trained models, I couldn't reproduce the results you show.
Here are the results that I get for the 4 testing examples:
[screenshot of the results]
ShapeHD testing goes well.
Could you please help me? Could the uploaded models be wrong, or is it something else?

Thanks a lot!

Dan

Data download link failed to connect

Thank you for your impressive work.
How do I download the GenRe data? None of the links provided in the README work for me. Have the download links been taken down?

How to use multi-GPU

Hi, there.
Thank you for sharing. I tried to run this on multiple GPUs but failed. I found a function named data_parallel_decorator in models/netinterface.py and tried self.net = data_parallel_decorator(self.net) for multi-GPU, but it raises an error about the input structure. Is there any demo or guidance on how to use multiple GPUs?
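
For reference, a minimal sketch of the generic PyTorch route (plain torch.nn.DataParallel rather than the repo's data_parallel_decorator), assuming a CUDA machine with at least two GPUs; whether the rest of this pipeline accepts a wrapped module is an open question:

import torch
import torch.nn as nn

# Toy stand-in network, just to show the wrapping pattern
net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())

if torch.cuda.device_count() > 1:
    net = nn.DataParallel(net, device_ids=[0, 1])
net = net.cuda()

x = torch.randn(8, 3, 64, 64).cuda()  # the batch dimension is split across the GPUs
y = net(x)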
Thank you !

Best regards,
Xuting

cffi.VerificationError: LinkError: command 'gcc' failed with exit status 1

Thanks for your reply.
I ran into a new problem, shown below. My setup is gcc 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.10) with CUDA 9.0.

Traceback (most recent call last):
  File "build.py", line 43, in <module>
    ffi.build()
  File "/home/fxy/anaconda3/envs/shaperecon/lib/python3.6/site-packages/torch/utils/ffi/__init__.py", line 189, in build
    _build_extension(ffi, cffi_wrapper_name, target_dir, verbose)
  File "/home/fxy/anaconda3/envs/shaperecon/lib/python3.6/site-packages/torch/utils/ffi/__init__.py", line 111, in _build_extension
    outfile = ffi.compile(tmpdir=tmpdir, verbose=verbose, target=libname)
  File "/home/fxy/anaconda3/envs/shaperecon/lib/python3.6/site-packages/cffi/api.py", line 723, in compile
    compiler_verbose=verbose, debug=debug, **kwds)
  File "/home/fxy/anaconda3/envs/shaperecon/lib/python3.6/site-packages/cffi/recompiler.py", line 1527, in recompile
    compiler_verbose, debug)
  File "/home/fxy/anaconda3/envs/shaperecon/lib/python3.6/site-packages/cffi/ffiplatform.py", line 22, in compile
    outputfilename = _build(tmpdir, ext, compiler_verbose, debug)
  File "/home/fxy/anaconda3/envs/shaperecon/lib/python3.6/site-packages/cffi/ffiplatform.py", line 58, in _build
    raise VerificationError('%s: %s' % (e.__class__.__name__, e))
cffi.VerificationError: LinkError: command 'gcc' failed with exit status 1

normals definition

I printed the ground-truth normal values and found they are in the range 0-100. May I ask how these values are obtained? Are they scaled from RGB or something else? Thank you!
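
For context, one common convention (only an assumption here, not necessarily what this repo does) is to store unit normals in an 8-bit RGB image, in which case decoding would look like:

import numpy as np
from PIL import Image

# Hypothetical normal-map file; maps [0, 255] RGB back to [-1, 1] components
rgb = np.asarray(Image.open('example_normal.png'), dtype=np.float32)
normals = rgb / 255.0 * 2.0 - 1.0
normals /= np.linalg.norm(normals, axis=-1, keepdims=True) + 1e-8  # renormalize to unit length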

MarrNet2 training stops at Eval 3/1000

Hi,

I am training MarrNet2 using default setting on shapenet chair dataset. It seems it always stops at Eval 3/1000 as shown below:
[screenshot of the training log]

Any idea what's going on?

Thanks!

IoU evaluation for Marrnet2

Hi, Xiuming
We trained MarrNet2 on chairs for 300 epochs and it has converged.
We picked best.pt and wrote IoU code to evaluate MarrNet2 without fine-tuning, but only got 0.077 IoU on the whole validation set, which seems too low. Did you get similarly low results for 2.5D-to-3D without fine-tuning, and then a much better IoU after fine-tuning?

I'm not sure what went wrong; I attached the IoU code below. Did you also use a threshold of 0.5 to binarize the voxel values after the sigmoid?

[screenshots of the IoU evaluation code]
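
For reference, a minimal NumPy sketch of the IoU computation described above (sigmoid outputs binarized at 0.5 against a binary ground-truth grid):

import numpy as np

def voxel_iou(pred_prob, gt, threshold=0.5):
    """IoU between a predicted occupancy-probability grid and a binary GT grid."""
    pred = pred_prob > threshold
    gt = gt > 0.5
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return intersection / union if union > 0 else 1.0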

Cheers.

Training GenRe with my own dataset - how to generate it

Hi Xiuming!

Thanks for sharing the code.
I would like to train GenRe on my own dataset.

I already have:

  1. the RGB image
  2. the silhouette image
  3. the normals
  4. the aligned 3D model (voxel representation)

I need, at least, the spherical maps as well, right?

How can I create my own dataset to train GenRe? Do you have any example of how you created yours for chairs, airplanes, and cars based on ShapeNet?

Thanks in advance,
Best,
Pedro

Can this be run on Windows currently?

Hi there,

Thanks for releasing the code. I was trying to run it on Windows but ran into some issues. What changes would need to be made to run on Windows, such as compiling cam_bp with CUDA myself, and are there things that simply cannot be done on Windows?

Thank you in advance.

spherical inpainting network predicts the same spherical maps

Hi,

We are training the spherical inpainting network and found that it predicts the same spherical_full and spherical_partial for all objects. Below is the result from epoch 22:

spherical_full:
[screenshot]

spherical_partial is totally white:
[screenshot]

The arguments we are using:

--load_offline False
--joint_train False

Any ideas why this happens? Thanks in advance!

questions for training wgangp model

I want to train the wgangp model using my own voxel data. There are two questions bothering me.

  1. The released data you use are normalized voxels, while the elements of my own voxel data are 0 or 1. Do I need to normalize my voxel data, and if so, how?
  2. In the paper, the wgangp model was trained for 80 epochs, but epoch is set to 1000 in the training script (https://github.com/xiumingzhang/GenRe-ShapeHD/blob/master/scripts/train_wgangp.sh). Which one is correct?

Thank you!

Questions about training detail

Hi,

I am wondering, when training MarrNet2 with ShapeNet synthetic images, did you train MarrNet2 only with ground-truth normals and depth, or did you train it with ground-truth normals and depth for some epochs and then continue training with predictions from MarrNet1?

Also, did you use canon_sup when training MarrNet2? What are the batch_size and epoch_batches (i.e., how much data is used per epoch)? For example, when training the chair class, did you use 4 x 2500 training samples or all 6778 x 20 training samples (ignoring invalid ones)? If I use all training samples per epoch, the eval loss fluctuates a lot and the reconstruction results are not good.

And are these setups the same for GenRe?

Thanks

error in projection foward: no kernel image is available for execution on the device

I've been stopped by this issue for several days.
While running test_genre.sh, I got the following error:
Traceback (most recent call last):
  File "test.py", line 95, in <module>
    model.test_on_batch(i, batch)
  File "/home/zhanghao/models/genre_full_model.py", line 182, in test_on_batch
    pred = self.forward_with_trimesh(batch)
  File "/home/zhanghao/models/genre_full_model.py", line 207, in forward_with_trimesh
    proj = self.net.depth_and_inpaint.proj_depth(pred_abs_depth)
  File "/media/zhanghao/娱乐/anaconda3/envs/shaperecon/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/zhanghao/toolbox/cam_bp/cam_bp/modules/camera_backprojection_module.py", line 22, in forward
    df = CameraBackProjection.apply(depth_t, fl, cam_dist, self.res)
  File "/home/zhanghao/toolbox/cam_bp/cam_bp/functions/cam_back_projection.py", line 25, in forward
    cam_bp_lib.back_projection_forward(depth_t, cam_dist, fl, tdf, cnt)
  File "/media/zhanghao/娱乐/anaconda3/envs/shaperecon/lib/python3.6/site-packages/torch/utils/ffi/__init__.py", line 202, in safe_call
    result = torch._C._safe_call(*args, **kwargs)
torch.FatalError: aborting at /data/vision/billf/scratch/ztzhang/shape_oneshot/ShapeRecon/toolbox/cam_bp/cam_bp/src/back_projection.c:14

Does anyone have a solution for this? Thanks.

fail to render spherical maps for our own dataset

Hi, we tried to use the function mentioned in #24 to render the spherical maps and here is our code:

import numpy as np
import trimesh

# u and util.util_sph below come from the repo's utility modules (imports omitted in the original snippet)
b = 64
sgrid = u.make_sgrid(b, 0, 0, 0)
mesh = trimesh.load('xxx.obj')
im_depth, im_texture = util.util_sph.render_model(mesh, sgrid)
im_depth = im_depth.reshape(2 * b, 2 * b)
im_depth = np.where(im_depth > 1, 1, im_depth)  # clamp depth values above 1
np.save('xxx.npy', im_depth)


We rendered the spherical maps of ShapeNet objs using this code and it worked, but when we tried to render the spherical maps of our own dataset it did not work and gave us the following result, where half of the map is white:

[rendered spherical map, half white]

Original RGB image:

[screenshot]

Any idea why this happens? Is it because the obj is too large, or because the projection center is not at the center of the object? We normalized the obj to different scales, but it did not help. Thanks in advance!

GenRe: How to calculate CD from voxels

Hi there,
Thank you for releasing this repo. I want to use the CD metric for evaluation. The paper says that we need to sample points from the isosurfaces of the voxel pair (prediction and ground truth) and then compute the CD of the sampled points.
I don't know the details of this process; I would appreciate it if you could provide some code for it.
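
For reference, a minimal sketch of that pipeline as described (isosurface extraction followed by a symmetric Chamfer distance), assuming scikit-image and SciPy are available; the threshold and point count are placeholders, and older scikit-image versions call the function marching_cubes_lewiner:

import numpy as np
from scipy.spatial import cKDTree
from skimage import measure

def chamfer_distance_from_voxels(vox_pred, vox_gt, level=0.5, n_points=1024):
    """Sample isosurface vertices of two voxel grids and compute a symmetric CD."""
    def surface_points(vox):
        verts, _, _, _ = measure.marching_cubes(vox, level=level)  # verts are in voxel coordinates
        idx = np.random.choice(len(verts), size=min(n_points, len(verts)), replace=False)
        return verts[idx]

    p, g = surface_points(vox_pred), surface_points(vox_gt)
    d_pg = cKDTree(g).query(p)[0]  # pred -> gt nearest-neighbor distances
    d_gp = cKDTree(p).query(g)[0]  # gt -> pred nearest-neighbor distances
    return d_pg.mean() + d_gp.mean()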
Thank you.

Best regards,
Xuting

Question about the "**_gt_rotvox_samescale_128.npz" in the training dataset

Hi,
I've downloaded the dataset for training GenRe. In this dataset, each ‘**_gt_rotvox_samescale_128.npz’ contains a 'voxel' array (128 x 128 x 128). Each entry in this array is 0, 1, or a decimal between 0 and 1. I understand that 0 and 1 can represent voxel occupancy, but what does the decimal mean?

What are the parameters for rendering the data?

Hi there,

Thank you for releasing the code. I wonder if you could provide, or point me to, the parameters you used for rendering, e.g. the camera position, whether the projection is orthographic or perspective, the depth range (normalized using min/max or a specified range), etc.

Thanks,
Ryan

How to set the gan parameter in finetune_shapehd.sh?

python train.py \
    --net shapehd \
    --marrnet2 "$marrnet2" \
    --gan "$gan" \
    --dataset shapenet \
    --classes "$class" \
    --canon_sup \
    --w_gan_loss 1e-3 \
    --batch_size 4 \
    --epoch_batches 200 \
    --eval_batches 10 \
    --optim adam \
    --lr 1e-3 \
    --epoch 1000 \
    --vis_batches_vali 10 \
    --gpu "$gpu" \
    --save_net 1 \
    --workers 4 \
    --logdir "$outdir" \
    --suffix '{classes}_w_ganloss{w_gan_loss}' \
    --tensorboard \
    $*

How do I set the $gan parameter?

How many epochs to train

Hi there,

Thanks for sharing the code.

As I don't have a powerful GPU and the dataset is huge, I'm wondering roughly how many epochs MarrNet2 requires to train (using all default settings with Adam)?

Cheers!

training for wgangp model

It is difficult to train the wgangp model. Could you provide your trained wgangp model? Thank you!

Training wgangp always generates the same gen_voxel.obj

Hi, I'm training the wgangp on our dataset, but it generates the same result (shown below) for all valid_voxel samples in every epoch. We also ran the code on ShapeNet and it generated the same result as ours. Is this correct? Any idea why this happens? Thanks! We only have one view per object, so I am assuming that each view is the canonical view.

[screenshot of the generated voxels]

This is our loss at epoch 19:

[screenshot of the loss curves]

How can I read 2.5D sketch

Hi, I have downloaded this:

This repo comes with a few Pix3D images and ShapeNet renderings, located in downloads/data/test, for testing purposes.
For training, we make available our RGB and 2.5D sketch renderings, paired with their corresponding 3D shapes, for ShapeNet cars, chairs, and airplanes, with each object captured in 20 random views. Note that this .tar is 143 GB.

wget http://genre.csail.mit.edu/downloads/shapenet_cars_chairs_planes_20views.tar -P downloads/data/
mkdir downloads/data/shapenet/
tar -xvf downloads/data/shapenet_cars_chairs_planes_20views.tar -C downloads/data/shapenet/

Is 02691156_1a04e3eab45ca15dd86060f189eb133_view000_spherical.npz the 2.5D sketch? And how can I read it in Python? With numpy?
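
For reference, .npz archives can be opened with NumPy; a minimal sketch (the array names stored inside are not guaranteed, so list them first with .files):

import numpy as np

arrs = np.load('02691156_1a04e3eab45ca15dd86060f189eb133_view000_spherical.npz')
print(arrs.files)  # names of the arrays stored in the archive
for name in arrs.files:
    print(name, arrs[name].shape, arrs[name].dtype)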
Looking forward to your reply!

Range of mesh sizes

Hello, I am attempting to train MarrNet on another dataset, but the depth values I obtain from my dataset are very large and vary a lot compared to the ShapeNet dataset. Are the sizes of the meshes in the ShapeNet dataset constrained to a certain range?
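
For reference, one common way to bring meshes into a comparable size range is to normalize each mesh into a unit bounding box before rendering; this is a generic trimesh sketch, not necessarily the preprocessing the authors used, and 'example.obj' is a placeholder:

import trimesh

def normalize_to_unit_cube(mesh):
    """Center the mesh at the origin and scale it so its longest side is 1."""
    mesh = mesh.copy()
    center = mesh.bounds.mean(axis=0)  # midpoint of the axis-aligned bounding box
    mesh.apply_translation(-center)
    mesh.apply_scale(1.0 / mesh.extents.max())  # extents = bounding-box side lengths
    return mesh

mesh = normalize_to_unit_cube(trimesh.load('example.obj'))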

The GenRe's output is not even close when testing with full_model.pt

Hi,
I am running test_genre with your latest code and released model, following the instructions in the README.
The results are not even close, not even on the few test cases included in the repo.
I think I may have done something wrong somewhere, but I have no clue where.
In my results, the predicted depth seems correct, but the spherical map and the final reconstruction are really bad.
My results are shown below (it seems the output pred_proj_depth is not oriented the same way up as pred_voxel, but pred_proj_depth looks correct overall, whereas pred_proj_sph_full and pred_voxel are not even close):
[screenshots of the four test results]

Terminated in epoch 99/1000

When I ran scripts/train_marrnet1.sh 0,1 03001627+02691156+02958343, the parameters were as shown below:
Namespace(adam_beta1=0.5, adam_beta2=0.9, batch_size=4, classes='03001627+02691156+02958343', dataset='shapenet', epoch=1000, epoch_batches=2500, eval_at_start=False, eval_batches=5, expr_id=0, gpu='0,1', log_batch=False, log_time=True, logdir='./output/marrnet1', lr=0.001, manual_seed=None, net='marrnet1', optim='adam', pred_depth_minmax=True, resume=0, save_net=10, save_net_opt=False, sgd_dampening=0, sgd_momentum=0.9, suffix='{classes}', tensorboard=True, vis_batches_train=10, vis_batches_vali=10, vis_every_train=1, vis_every_vali=1, vis_param_f=None, vis_workers=4, wdecay=0.0, workers=4)
But after epoch 99/1000, it stopped running. Is there something wrong with my configuration?

Where to find test dataset for shapehd?

rgb_pattern='./downloads/data/test/shapehd/*_rgb.*'
mask_pattern='./downloads/data/test/shapehd/*_mask.*'

I have downloaded the dataset http://genre.csail.mit.edu/downloads/shapenet_cars_chairs_planes_20views.tar, but I can't find where the test folder is. The dataset unpacks as follows:
[screenshot of the extracted directory structure]
Is this normal?

Model loaded without optimizer states.

Hello,
when I use the pretrained GenRe model for testing, there is a message "Model loaded without optimizer states"; it seems that the pretrained model did not save the optimizer states?

Testing Pipeline
==> Parsing arguments
Namespace(adam_beta1=0.5, adam_beta2=0.9, batch_size=1, classes='chair', dataset=None, epoch=0, epoch_batches=None, eval_at_start=False, eval_batches=None, expr_id=0, full_logdir=None, gpu='2', inpaint_path=None, input_mask='./downloads/data/test/genre/*_silhouette.*', input_rgb='./downloads/data/test/genre/*_rgb.*', joint_train=False, load_offline=False, log_batch=False, log_time=False, logdir=None, lr=0.0001, manual_seed=None, net='genre_full_model', net1_path=None, net_file='./downloads/models/full_model.pt', optim='adam', output_dir='./output/test', overwrite=True, padding_margin=16, pred_depth_minmax=True, resume=0, save_net=1, save_net_opt=False, sgd_dampening=0, sgd_momentum=0.9, suffix='{net}', surface_weight=1.0, tensorboard=False, vis_batches_train=10, vis_batches_vali=10, vis_every_train=1, vis_every_vali=1, vis_param_f=None, vis_workers=4, wdecay=0.0, workers=0)
==> Setting device
[Verbose] All designated GPU(s) free to use.
==> Setting up output directory
==> Setting up loggers
==> Setting up models
[Warning] Model loaded without optimizer states.
Traceback (most recent call last):
File "test.py", line 63, in
model = Model(opt, logger)
File "/mnt/disk/zhiyu/GenRe-ShapeHD/models/genre_full_model.py", line 152, in init
self.load_state_dict(opt.net_file, load_optimizer='auto')
File "/mnt/disk/zhiyu/GenRe-ShapeHD/models/netinterface.py", line 424, in load_state_dict
self._nets[i].load_state_dict(state_dicts['nets'][i])
File "/home/zhiyu/anaconda3/envs/shaperecon/lib/python3.6/site-packages/torch/nn/modules/module.py", line 719, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for Net:
Unexpected key(s) in state_dict: "depth_and_inpaint.net1.encoder.0.0.weight", "depth_and_inpaint.net1.encoder.0.1.weight", "depth_and_inpaint.net1.encoder.0.1.bias", "depth_and_inpaint.net1.encoder.0.1.running_mean", "depth_and_inpaint.net1.encoder.0.1.running_var", "depth_and_inpaint.net1.encoder.0.1.num_batches_tracked", "depth_and_inpaint.net1.encoder.1.0.conv1.weight", "depth_and_inpaint.net1.encoder.1.0.bn1.weight", "depth_and_inpaint.net1.encoder.1.0.bn1.bias", "depth_and_inpaint.net1.encoder.1.0.bn1.running_mean", "depth_and_inpaint.net1.encoder.1.0.bn1.running_var", "depth_and_inpaint.net1.encoder.1.0.bn1.num_batches_tracked", "depth_and_inpaint.net1.encoder.1.0.conv2.weight", "depth_and_inpaint.net1.encoder.1.0.bn2.weight", "depth_and_inpaint.net1.encoder.1.0.bn2.bias", "depth_and_inpaint.net1.encoder.1.0.bn2.running_mean", "depth_and_inpaint.net1.encoder.1.0.bn2.running_var", "depth_and_inpaint.net1.encoder.1.0.bn2.num_batches_tracked", "depth_and_inpaint.net1.encoder.1.1.conv1.weight", "depth_and_inpaint.net1.encoder.1.1.bn1.weight", "depth_and_inpaint.net1.encoder.1.1.bn1.bias", "depth_and_inpaint.net1.encoder.1.1.bn1.running_mean", "depth_and_inpaint.net1.encoder.1.1.bn1.running_var", "depth_and_inpaint.net1.encoder.1.1.bn1.num_batches_tracked", "depth_and_inpaint.net1.encoder.1.1.conv2.weight", "depth_and_inpaint.net1.encoder.1.1.bn2.weight", "depth_and_inpaint.net1.encoder.1.1.bn2.bias", "depth_and_inpaint.net1.encoder.1.1.bn2.running_mean", "depth_and_inpaint.net1.encoder.1.1.bn2.running_var", "depth_and_inpaint.net1.encoder.1.1.bn2.num_batches_tracked", "depth_and_inpaint.net1.encoder.2.0.conv1.weight", "depth_and_inpaint.net1.encoder.2.0.bn1.weight", "depth_and_inpaint.net1.encoder.2.0.bn1.bias", "depth_and_inpaint.net1.encoder.2.0.bn1.running_mean", "depth_and_inpaint.net1.encoder.2.0.bn1.running_var", "depth_and_inpaint.net1.encoder.2.0.bn1.num_batches_tracked", "depth_and_inpaint.net1.encoder.2.0.conv2.weight", "depth_and_inpaint.net1.encoder.2.0.bn2.weight", "depth_and_inpaint.net1.encoder.2.0.bn2.bias", "depth_and_inpaint.net1.encoder.2.0.bn2.running_mean", "depth_and_inpaint.net1.encoder.2.0.bn2.running_var", "depth_and_inpaint.net1.encoder.2.0.bn2.num_batches_tracked", "depth_and_inpaint.net1.encoder.2.0.downsample.0.weight", "depth_and_inpaint.net1.encoder.2.0.downsample.1.weight", "depth_and_inpaint.net1.encoder.2.0.downsample.1.bias", "depth_and_inpaint.net1.encoder.2.0.downsample.1.running_mean", "depth_and_inpaint.net1.encoder.2.0.downsample.1.running_var", "depth_and_inpaint.net1.encoder.2.0.downsample.1.num_batches_tracked", "depth_and_inpaint.net1.encoder.2.1.conv1.weight", "depth_and_inpaint.net1.encoder.2.1.bn1.weight", "depth_and_inpaint.net1.encoder.2.1.bn1.bias", "depth_and_inpaint.net1.encoder.2.1.bn1.running_mean", "depth_and_inpaint.net1.encoder.2.1.bn1.running_var", "depth_and_inpaint.net1.encoder.2.1.bn1.num_batches_tracked", "depth_and_inpaint.net1.encoder.2.1.conv2.weight", "depth_and_inpaint.net1.encoder.2.1.bn2.weight", "depth_and_inpaint.net1.encoder.2.1.bn2.bias", "depth_and_inpaint.net1.encoder.2.1.bn2.running_mean", "depth_and_inpaint.net1.encoder.2.1.bn2.running_var", "depth_and_inpaint.net1.encoder.2.1.bn2.num_batches_tracked", "depth_and_inpaint.net1.encoder.3.0.conv1.weight", "depth_and_inpaint.net1.encoder.3.0.bn1.weight", "depth_and_inpaint.net1.encoder.3.0.bn1.bias", "depth_and_inpaint.net1.encoder.3.0.bn1.running_mean", "depth_and_inpaint.net1.encoder.3.0.bn1.running_var", 
"depth_and_inpaint.net1.encoder.3.0.bn1.num_batches_tracked", "depth_and_inpaint.net1.encoder.3.0.conv2.weight", "depth_and_inpaint.net1.encoder.3.0.bn2.weight", "depth_and_inpaint.net1.encoder.3.0.bn2.bias", "depth_and_inpaint.net1.encoder.3.0.bn2.running_mean", "depth_and_inpaint.net1.encoder.3.0.bn2.running_var", "depth_and_inpaint.net1.encoder.3.0.bn2.num_batches_tracked", "depth_and_inpaint.net1.encoder.3.0.downsample.0.weight", "depth_and_inpaint.net1.encoder.3.0.downsample.1.weight", "depth_and_inpaint.net1.encoder.3.0.downsample.1.bias", "depth_and_inpaint.net1.encoder.3.0.downsample.1.running_mean", "depth_and_inpaint.net1.encoder.3.0.downsample.1.running_var", "depth_and_inpaint.net1.encoder.3.0.downsample.1.num_batches_tracked", "depth_and_inpaint.net1.encoder.3.1.conv1.weight", "depth_and_inpaint.net1.encoder.3.1.bn1.weight", "depth_and_inpaint.net1.encoder.3.1.bn1.bias", "depth_and_inpaint.net1.encoder.3.1.bn1.running_mean", "depth_and_inpaint.net1.encoder.3.1.bn1.running_var", "depth_and_inpaint.net1.encoder.3.1.bn1.num_batches_tracked", "depth_and_inpaint.net1.encoder.3.1.conv2.weight", "depth_and_inpaint.net1.encoder.3.1.bn2.weight", "depth_and_inpaint.net1.encoder.3.1.bn2.bias", "depth_and_inpaint.net1.encoder.3.1.bn2.running_mean", "depth_and_inpaint.net1.encoder.3.1.bn2.running_var", "depth_and_inpaint.net1.encoder.3.1.bn2.num_batches_tracked", "depth_and_inpaint.net1.encoder.4.0.conv1.weight", "depth_and_inpaint.net1.encoder.4.0.bn1.weight", "depth_and_inpaint.net1.encoder.4.0.bn1.bias", "depth_and_inpaint.net1.encoder.4.0.bn1.running_mean", "depth_and_inpaint.net1.encoder.4.0.bn1.running_var", "depth_and_inpaint.net1.encoder.4.0.bn1.num_batches_tracked", "depth_and_inpaint.net1.encoder.4.0.conv2.weight", "depth_and_inpaint.net1.encoder.4.0.bn2.weight", "depth_and_inpaint.net1.encoder.4.0.bn2.bias", "depth_and_inpaint.net1.encoder.4.0.bn2.running_mean", "depth_and_inpaint.net1.encoder.4.0.bn2.running_var", "depth_and_inpaint.net1.encoder.4.0.bn2.num_batches_tracked", "depth_and_inpaint.net1.encoder.4.0.downsample.0.weight", "depth_and_inpaint.net1.encoder.4.0.downsample.1.weight", "depth_and_inpaint.net1.encoder.4.0.downsample.1.bias", "depth_and_inpaint.net1.encoder.4.0.downsample.1.running_mean", "depth_and_inpaint.net1.encoder.4.0.downsample.1.running_var", "depth_and_inpaint.net1.encoder.4.0.downsample.1.num_batches_tracked", "depth_and_inpaint.net1.encoder.4.1.conv1.weight", "depth_and_inpaint.net1.encoder.4.1.bn1.weight", "depth_and_inpaint.net1.encoder.4.1.bn1.bias", "depth_and_inpaint.net1.encoder.4.1.bn1.running_mean", "depth_and_inpaint.net1.encoder.4.1.bn1.running_var", "depth_and_inpaint.net1.encoder.4.1.bn1.num_batches_tracked", "depth_and_inpaint.net1.encoder.4.1.conv2.weight", "depth_and_inpaint.net1.encoder.4.1.bn2.weight", "depth_and_inpaint.net1.encoder.4.1.bn2.bias", "depth_and_inpaint.net1.encoder.4.1.bn2.running_mean", "depth_and_inpaint.net1.encoder.4.1.bn2.running_var", "depth_and_inpaint.net1.encoder.4.1.bn2.num_batches_tracked", "depth_and_inpaint.net1.decoder_normal.0.0.deconv1.weight", "depth_and_inpaint.net1.decoder_normal.0.0.bn1.weight", "depth_and_inpaint.net1.decoder_normal.0.0.bn1.bias", "depth_and_inpaint.net1.decoder_normal.0.0.bn1.running_mean", "depth_and_inpaint.net1.decoder_normal.0.0.bn1.running_var", "depth_and_inpaint.net1.decoder_normal.0.0.bn1.num_batches_tracked", "depth_and_inpaint.net1.decoder_normal.0.0.deconv2.weight", "depth_and_inpaint.net1.decoder_normal.0.0.bn2.weight", 
"depth_and_inpaint.net1.decoder_normal.0.0.bn2.bias", "depth_and_inpaint.net1.decoder_normal.0.0.bn2.running_mean", "depth_and_inpaint.net1.decoder_normal.0.0.bn2.running_var", "depth_and_inpaint.net1.decoder_normal.0.0.bn2.num_batches_tracked", "depth_and_inpaint.net1.decoder_normal.0.0.upsample.0.weight", "depth_and_inpaint.net1.decoder_normal.0.0.upsample.1.weight", "depth_and_inpaint.net1.decoder_normal.0.0.upsample.1.bias", "depth_and_inpaint.net1.decoder_normal.0.0.upsample.1.running_mean", "depth_and_inpaint.net1.decoder_normal.0.0.upsample.1.running_var", "depth_and_inpaint.net1.decoder_normal.0.0.upsample.1.num_batches_tracked", "depth_and_inpaint.net1.decoder_normal.0.1.deconv1.weight", "depth_and_inpaint.net1.decoder_normal.0.1.bn1.weight", "depth_and_inpaint.net1.decoder_normal.0.1.bn1.bias", "depth_and_inpaint.net1.decoder_normal.0.1.bn1.running_mean", "depth_and_inpaint.net1.decoder_normal.0.1.bn1.running_var", "depth_and_inpaint.net1.decoder_normal.0.1.bn1.num_batches_tracked", "depth_and_inpaint.net1.decoder_normal.0.1.deconv2.weight", "depth_and_inpaint.net1.decoder_normal.0.1.bn2.weight", "depth_and_inpaint.net1.decoder_normal.0.1.bn2.bias", "depth_and_inpaint.net1.decoder_normal.0.1.bn2.running_mean", "depth_and_inpaint.net1.decoder_normal.0.1.bn2.running_var", "depth_and_inpaint.net1.decoder_normal.0.1.bn2.num_batches_tracked", "depth_and_inpaint.net1.decoder_normal.1.0.deconv1.weight", "depth_and_inpaint.net1.decoder_normal.1.0.bn1.weight", "depth_and_inpaint.net1.decoder_normal.1.0.bn1.bias", "depth_and_inpaint.net1.decoder_normal.1.0.bn1.running_mean", "depth_and_inpaint.net1.decoder_normal.1.0.bn1.running_var", "depth_and_inpaint.net1.decoder_normal.1.0.bn1.num_batches_tracked", "depth_and_inpaint.net1.decoder_normal.1.0.deconv2.weight", "depth_and_inpaint.net1.decoder_normal.1.0.bn2.weight", "depth_and_inpaint.net1.decoder_normal.1.0.bn2.bias", "depth_and_inpaint.net1.decoder_normal.1.0.bn2.running_mean", "depth_and_inpaint.net1.decoder_normal.1.0.bn2.running_var", "depth_and_inpaint.net1.decoder_normal.1.0.bn2.num_batches_tracked", "depth_and_inpaint.net1.decoder_normal.1.0.upsample.0.weight", "depth_and_inpaint.net1.decoder_normal.1.0.upsample.1.weight", "depth_and_inpaint.net1.decoder_normal.1.0.upsample.1.bias", "depth_and_inpaint.net1.decoder_normal.1.0.upsample.1.running_mean", "depth_and_inpaint.net1.decoder_normal.1.0.upsample.1.running_var", "depth_and_inpaint.net1.decoder_normal.1.0.upsample.1.num_batches_tracked", "depth_and_inpaint.net1.decoder_normal.1.1.deconv1.weight", "depth_and_inpaint.net1.decoder_normal.1.1.bn1.weight", "depth_and_inpaint.net1.decoder_normal.1.1.bn1.bias", "depth_and_inpaint.net1.decoder_normal.1.1.bn1.running_mean", "depth_and_inpaint.net1.decoder_normal.1.1.bn1.running_var", "depth_and_inpaint.net1.decoder_normal.1.1.bn1.num_batches_tracked", "depth_and_inpaint.net1.decoder_normal.1.1.deconv2.weight", "depth_and_inpaint.net1.decoder_normal.1.1.bn2.weight", "depth_and_inpaint.net1.decoder_normal.1.1.bn2.bias", "depth_and_inpaint.net1.decoder_normal.1.1.bn2.running_mean", "depth_and_inpaint.net1.decoder_normal.1.1.bn2.running_var", "depth_and_inpaint.net1.decoder_normal.1.1.bn2.num_batches_tracked", "depth_and_inpaint.net1.decoder_normal.2.0.deconv1.weight", "depth_and_inpaint.net1.decoder_normal.2.0.bn1.weight", "depth_and_inpaint.net1.decoder_normal.2.0.bn1.bias", "depth_and_inpaint.net1.decoder_normal.2.0.bn1.running_mean", "depth_and_inpaint.net1.decoder_normal.2.0.bn1.running_var", 
"depth_and_inpaint.net1.decoder_normal.2.0.bn1.num_batches_tracked", "depth_and_inpaint.net1.decoder_normal.2.0.deconv2.weight", "depth_and_inpaint.net1.decoder_normal.2.0.bn2.weight", "depth_and_inpaint.net1.decoder_normal.2.0.bn2.bias", "depth_and_inpaint.net1.decoder_normal.2.0.bn2.running_mean", "depth_and_inpaint.net1.decoder_normal.2.0.bn2.running_var", "depth_and_inpaint.net1.decoder_normal.2.0.bn2.num_batches_tracked", "depth_and_inpaint.net1.decoder_normal.2.0.upsample.0.weight", "depth_and_inpaint.net1.decoder_normal.2.0.upsample.1.weight", "depth_and_inpaint.net1.decoder_normal.2.0.upsample.1.bias", "depth_and_inpaint.net1.decoder_normal.2.0.upsample.1.running_mean", "depth_and_inpaint.net1.decoder_normal.2.0.upsample.1.running_var", "depth_and_inpaint.net1.decoder_normal.2.0.upsample.1.num_batches_tracked", "depth_and_inpaint.net1.decoder_normal.2.1.deconv1.weight", "depth_and_inpaint.net1.decoder_normal.2.1.bn1.weight", "depth_and_inpaint.net1.decoder_normal.2.1.bn1.bias", "depth_and_inpaint.net1.decoder_normal.2.1.bn1.running_mean", "depth_and_inpaint.net1.decoder_normal.2.1.bn1.running_var", "depth_and_inpaint.net1.decoder_normal.2.1.bn1.num_batches_tracked", "depth_and_inpaint.net1.decoder_normal.2.1.deconv2.weight", "depth_and_inpaint.net1.decoder_normal.2.1.bn2.weight", "depth_and_inpaint.net1.decoder_normal.2.1.bn2.bias", "depth_and_inpaint.net1.decoder_normal.2.1.bn2.running_mean", "depth_and_inpaint.net1.decoder_normal.2.1.bn2.running_var", "depth_and_inpaint.net1.decoder_normal.2.1.bn2.num_batches_tracked", "depth_and_inpaint.net1.decoder_normal.3.0.deconv1.weight", "depth_and_inpaint.net1.decoder_normal.3.0.bn1.weight", "depth_and_inpaint.net1.decoder_normal.3.0.bn1.bias", "depth_and_inpaint.net1.decoder_normal.3.0.bn1.running_mean", "depth_and_inpaint.net1.decoder_normal.3.0.bn1.running_var", "depth_and_inpaint.net1.decoder_normal.3.0.bn1.num_batches_tracked", "depth_and_inpaint.net1.decoder_normal.3.0.deconv2.weight", "depth_and_inpaint.net1.decoder_normal.3.0.bn2.weight", "depth_and_inpaint.net1.decoder_normal.3.0.bn2.bias", "depth_and_inpaint.net1.decoder_normal.3.0.bn2.running_mean", "depth_and_inpaint.net1.decoder_normal.3.0.bn2.running_var", "depth_and_inpaint.net1.decoder_normal.3.0.bn2.num_batches_tracked", "depth_and_inpaint.net1.decoder_normal.3.0.upsample.0.weight", "depth_and_inpaint.net1.decoder_normal.3.0.upsample.1.weight", "depth_and_inpaint.net1.decoder_normal.3.0.upsample.1.bias", "depth_and_inpaint.net1.decoder_normal.3.0.upsample.1.running_mean", "depth_and_inpaint.net1.decoder_normal.3.0.upsample.1.running_var", "depth_and_inpaint.net1.decoder_normal.3.0.upsample.1.num_batches_tracked", "depth_and_inpaint.net1.decoder_normal.3.1.deconv1.weight", "depth_and_inpaint.net1.decoder_normal.3.1.bn1.weight", "depth_and_inpaint.net1.decoder_normal.3.1.bn1.bias", "depth_and_inpaint.net1.decoder_normal.3.1.bn1.running_mean", "depth_and_inpaint.net1.decoder_normal.3.1.bn1.running_var", "depth_and_inpaint.net1.decoder_normal.3.1.bn1.num_batches_tracked", "depth_and_inpaint.net1.decoder_normal.3.1.deconv2.weight", "depth_and_inpaint.net1.decoder_normal.3.1.bn2.weight", "depth_and_inpaint.net1.decoder_normal.3.1.bn2.bias", "depth_and_inpaint.net1.decoder_normal.3.1.bn2.running_mean", "depth_and_inpaint.net1.decoder_normal.3.1.bn2.running_var", "depth_and_inpaint.net1.decoder_normal.3.1.bn2.num_batches_tracked", "depth_and_inpaint.net1.decoder_normal.4.0.weight", "depth_and_inpaint.net1.decoder_normal.4.0.bias", 
"depth_and_inpaint.net1.decoder_normal.4.1.weight", "depth_and_inpaint.net1.decoder_normal.4.1.bias", "depth_and_inpaint.net1.decoder_normal.4.1.running_mean", "depth_and_inpaint.net1.decoder_normal.4.1.running_var", "depth_and_inpaint.net1.decoder_normal.4.1.num_batches_tracked", "depth_and_inpaint.net1.decoder_normal.4.3.weight", "depth_and_inpaint.net1.decoder_depth.0.0.deconv1.weight", "depth_and_inpaint.net1.decoder_depth.0.0.bn1.weight", "depth_and_inpaint.net1.decoder_depth.0.0.bn1.bias", "depth_and_inpaint.net1.decoder_depth.0.0.bn1.running_mean", "depth_and_inpaint.net1.decoder_depth.0.0.bn1.running_var", "depth_and_inpaint.net1.decoder_depth.0.0.bn1.num_batches_tracked", "depth_and_inpaint.net1.decoder_depth.0.0.deconv2.weight", "depth_and_inpaint.net1.decoder_depth.0.0.bn2.weight", "depth_and_inpaint.net1.decoder_depth.0.0.bn2.bias", "depth_and_inpaint.net1.decoder_depth.0.0.bn2.running_mean", "depth_and_inpaint.net1.decoder_depth.0.0.bn2.running_var", "depth_and_inpaint.net1.decoder_depth.0.0.bn2.num_batches_tracked", "depth_and_inpaint.net1.decoder_depth.0.0.upsample.0.weight", "depth_and_inpaint.net1.decoder_depth.0.0.upsample.1.weight", "depth_and_inpaint.net1.decoder_depth.0.0.upsample.1.bias", "depth_and_inpaint.net1.decoder_depth.0.0.upsample.1.running_mean", "depth_and_inpaint.net1.decoder_depth.0.0.upsample.1.running_var", "depth_and_inpaint.net1.decoder_depth.0.0.upsample.1.num_batches_tracked", "depth_and_inpaint.net1.decoder_depth.0.1.deconv1.weight", "depth_and_inpaint.net1.decoder_depth.0.1.bn1.weight", "depth_and_inpaint.net1.decoder_depth.0.1.bn1.bias", "depth_and_inpaint.net1.decoder_depth.0.1.bn1.running_mean", "depth_and_inpaint.net1.decoder_depth.0.1.bn1.running_var", "depth_and_inpaint.net1.decoder_depth.0.1.bn1.num_batches_tracked", "depth_and_inpaint.net1.decoder_depth.0.1.deconv2.weight", "depth_and_inpaint.net1.decoder_depth.0.1.bn2.weight", "depth_and_inpaint.net1.decoder_depth.0.1.bn2.bias", "depth_and_inpaint.net1.decoder_depth.0.1.bn2.running_mean", "depth_and_inpaint.net1.decoder_depth.0.1.bn2.running_var", "depth_and_inpaint.net1.decoder_depth.0.1.bn2.num_batches_tracked", "depth_and_inpaint.net1.decoder_depth.1.0.deconv1.weight", "depth_and_inpaint.net1.decoder_depth.1.0.bn1.weight", "depth_and_inpaint.net1.decoder_depth.1.0.bn1.bias", "depth_and_inpaint.net1.decoder_depth.1.0.bn1.running_mean", "depth_and_inpaint.net1.decoder_depth.1.0.bn1.running_var", "depth_and_inpaint.net1.decoder_depth.1.0.bn1.num_batches_tracked", "depth_and_inpaint.net1.decoder_depth.1.0.deconv2.weight", "depth_and_inpaint.net1.decoder_depth.1.0.bn2.weight", "depth_and_inpaint.net1.decoder_depth.1.0.bn2.bias", "depth_and_inpaint.net1.decoder_depth.1.0.bn2.running_mean", "depth_and_inpaint.net1.decoder_depth.1.0.bn2.running_var", "depth_and_inpaint.net1.decoder_depth.1.0.bn2.num_batches_tracked", "depth_and_inpaint.net1.decoder_depth.1.0.upsample.0.weight", "depth_and_inpaint.net1.decoder_depth.1.0.upsample.1.weight", "depth_and_inpaint.net1.decoder_depth.1.0.upsample.1.bias", "depth_and_inpaint.net1.decoder_depth.1.0.upsample.1.running_mean", "depth_and_inpaint.net1.decoder_depth.1.0.upsample.1.running_var", "depth_and_inpaint.net1.decoder_depth.1.0.upsample.1.num_batches_tracked", "depth_and_inpaint.net1.decoder_depth.1.1.deconv1.weight", "depth_and_inpaint.net1.decoder_depth.1.1.bn1.weight", "depth_and_inpaint.net1.decoder_depth.1.1.bn1.bias", "depth_and_inpaint.net1.decoder_depth.1.1.bn1.running_mean", "depth_and_inpaint.net1.decoder_depth.1.1.bn1.running_var", 
"depth_and_inpaint.net1.decoder_depth.1.1.bn1.num_batches_tracked", "depth_and_inpaint.net1.decoder_depth.1.1.deconv2.weight", "depth_and_inpaint.net1.decoder_depth.1.1.bn2.weight", "depth_and_inpaint.net1.decoder_depth.1.1.bn2.bias", "depth_and_inpaint.net1.decoder_depth.1.1.bn2.running_mean", "depth_and_inpaint.net1.decoder_depth.1.1.bn2.running_var", "depth_and_inpaint.net1.decoder_depth.1.1.bn2.num_batches_tracked", "depth_and_inpaint.net1.decoder_depth.2.0.deconv1.weight", "depth_and_inpaint.net1.decoder_depth.2.0.bn1.weight", "depth_and_inpaint.net1.decoder_depth.2.0.bn1.bias", "depth_and_inpaint.net1.decoder_depth.2.0.bn1.running_mean", "depth_and_inpaint.net1.decoder_depth.2.0.bn1.running_var", "depth_and_inpaint.net1.decoder_depth.2.0.bn1.num_batches_tracked", "depth_and_inpaint.net1.decoder_depth.2.0.deconv2.weight", "depth_and_inpaint.net1.decoder_depth.2.0.bn2.weight", "depth_and_inpaint.net1.decoder_depth.2.0.bn2.bias", "depth_and_inpaint.net1.decoder_depth.2.0.bn2.running_mean", "depth_and_inpaint.net1.decoder_depth.2.0.bn2.running_var", "depth_and_inpaint.net1.decoder_depth.2.0.bn2.num_batches_tracked", "depth_and_inpaint.net1.decoder_depth.2.0.upsample.0.weight", "depth_and_inpaint.net1.decoder_depth.2.0.upsample.1.weight", "depth_and_inpaint.net1.decoder_depth.2.0.upsample.1.bias", "depth_and_inpaint.net1.decoder_depth.2.0.upsample.1.running_mean", "depth_and_inpaint.net1.decoder_depth.2.0.upsample.1.running_var", "depth_and_inpaint.net1.decoder_depth.2.0.upsample.1.num_batches_tracked", "depth_and_inpaint.net1.decoder_depth.2.1.deconv1.weight", "depth_and_inpaint.net1.decoder_depth.2.1.bn1.weight", "depth_and_inpaint.net1.decoder_depth.2.1.bn1.bias", "depth_and_inpaint.net1.decoder_depth.2.1.bn1.running_mean", "depth_and_inpaint.net1.decoder_depth.2.1.bn1.running_var", "depth_and_inpaint.net1.decoder_depth.2.1.bn1.num_batches_tracked", "depth_and_inpaint.net1.decoder_depth.2.1.deconv2.weight", "depth_and_inpaint.net1.decoder_depth.2.1.bn2.weight", "depth_and_inpaint.net1.decoder_depth.2.1.bn2.bias", "depth_and_inpaint.net1.decoder_depth.2.1.bn2.running_mean", "depth_and_inpaint.net1.decoder_depth.2.1.bn2.running_var", "depth_and_inpaint.net1.decoder_depth.2.1.bn2.num_batches_tracked", "depth_and_inpaint.net1.decoder_depth.3.0.deconv1.weight", "depth_and_inpaint.net1.decoder_depth.3.0.bn1.weight", "depth_and_inpaint.net1.decoder_depth.3.0.bn1.bias", "depth_and_inpaint.net1.decoder_depth.3.0.bn1.running_mean", "depth_and_inpaint.net1.decoder_depth.3.0.bn1.running_var", "depth_and_inpaint.net1.decoder_depth.3.0.bn1.num_batches_tracked", "depth_and_inpaint.net1.decoder_depth.3.0.deconv2.weight", "depth_and_inpaint.net1.decoder_depth.3.0.bn2.weight", "depth_and_inpaint.net1.decoder_depth.3.0.bn2.bias", "depth_and_inpaint.net1.decoder_depth.3.0.bn2.running_mean", "depth_and_inpaint.net1.decoder_depth.3.0.bn2.running_var", "depth_and_inpaint.net1.decoder_depth.3.0.bn2.num_batches_tracked", "depth_and_inpaint.net1.decoder_depth.3.0.upsample.0.weight", "depth_and_inpaint.net1.decoder_depth.3.0.upsample.1.weight", "depth_and_inpaint.net1.decoder_depth.3.0.upsample.1.bias", "depth_and_inpaint.net1.decoder_depth.3.0.upsample.1.running_mean", "depth_and_inpaint.net1.decoder_depth.3.0.upsample.1.running_var", "depth_and_inpaint.net1.decoder_depth.3.0.upsample.1.num_batches_tracked", "depth_and_inpaint.net1.decoder_depth.3.1.deconv1.weight", "depth_and_inpaint.net1.decoder_depth.3.1.bn1.weight", "depth_and_inpaint.net1.decoder_depth.3.1.bn1.bias", 
"depth_and_inpaint.net1.decoder_depth.3.1.bn1.running_mean", "depth_and_inpaint.net1.decoder_depth.3.1.bn1.running_var", "depth_and_inpaint.net1.decoder_depth.3.1.bn1.num_batches_tracked", "depth_and_inpaint.net1.decoder_depth.3.1.deconv2.weight", "depth_and_inpaint.net1.decoder_depth.3.1.bn2.weight", "depth_and_inpaint.net1.decoder_depth.3.1.bn2.bias", "depth_and_inpaint.net1.decoder_depth.3.1.bn2.running_mean", "depth_and_inpaint.net1.decoder_depth.3.1.bn2.running_var", "depth_and_inpaint.net1.decoder_depth.3.1.bn2.num_batches_tracked", "depth_and_inpaint.net1.decoder_depth.4.0.weight", "depth_and_inpaint.net1.decoder_depth.4.0.bias", "depth_and_inpaint.net1.decoder_depth.4.1.weight", "depth_and_inpaint.net1.decoder_depth.4.1.bias", "depth_and_inpaint.net1.decoder_depth.4.1.running_mean", "depth_and_inpaint.net1.decoder_depth.4.1.running_var", "depth_and_inpaint.net1.decoder_depth.4.1.num_batches_tracked", "depth_and_inpaint.net1.decoder_depth.4.3.weight", "depth_and_inpaint.net1.decoder_silhou.0.0.deconv1.weight", "depth_and_inpaint.net1.decoder_silhou.0.0.bn1.weight", "depth_and_inpaint.net1.decoder_silhou.0.0.bn1.bias", "depth_and_inpaint.net1.decoder_silhou.0.0.bn1.running_mean", "depth_and_inpaint.net1.decoder_silhou.0.0.bn1.running_var", "depth_and_inpaint.net1.decoder_silhou.0.0.bn1.num_batches_tracked", "depth_and_inpaint.net1.decoder_silhou.0.0.deconv2.weight", "depth_and_inpaint.net1.decoder_silhou.0.0.bn2.weight", "depth_and_inpaint.net1.decoder_silhou.0.0.bn2.bias", "depth_and_inpaint.net1.decoder_silhou.0.0.bn2.running_mean", "depth_and_inpaint.net1.decoder_silhou.0.0.bn2.running_var", "depth_and_inpaint.net1.decoder_silhou.0.0.bn2.num_batches_tracked", "depth_and_inpaint.net1.decoder_silhou.0.0.upsample.0.weight", "depth_and_inpaint.net1.decoder_silhou.0.0.upsample.1.weight", "depth_and_inpaint.net1.decoder_silhou.0.0.upsample.1.bias", "depth_and_inpaint.net1.decoder_silhou.0.0.upsample.1.running_mean", "depth_and_inpaint.net1.decoder_silhou.0.0.upsample.1.running_var", "depth_and_inpaint.net1.decoder_silhou.0.0.upsample.1.num_batches_tracked", "depth_and_inpaint.net1.decoder_silhou.0.1.deconv1.weight", "depth_and_inpaint.net1.decoder_silhou.0.1.bn1.weight", "depth_and_inpaint.net1.decoder_silhou.0.1.bn1.bias", "depth_and_inpaint.net1.decoder_silhou.0.1.bn1.running_mean", "depth_and_inpaint.net1.decoder_silhou.0.1.bn1.running_var", "depth_and_inpaint.net1.decoder_silhou.0.1.bn1.num_batches_tracked", "depth_and_inpaint.net1.decoder_silhou.0.1.deconv2.weight", "depth_and_inpaint.net1.decoder_silhou.0.1.bn2.weight", "depth_and_inpaint.net1.decoder_silhou.0.1.bn2.bias", "depth_and_inpaint.net1.decoder_silhou.0.1.bn2.running_mean", "depth_and_inpaint.net1.decoder_silhou.0.1.bn2.running_var", "depth_and_inpaint.net1.decoder_silhou.0.1.bn2.num_batches_tracked", "depth_and_inpaint.net1.decoder_silhou.1.0.deconv1.weight", "depth_and_inpaint.net1.decoder_silhou.1.0.bn1.weight", "depth_and_inpaint.net1.decoder_silhou.1.0.bn1.bias", "depth_and_inpaint.net1.decoder_silhou.1.0.bn1.running_mean", "depth_and_inpaint.net1.decoder_silhou.1.0.bn1.running_var", "depth_and_inpaint.net1.decoder_silhou.1.0.bn1.num_batches_tracked", "depth_and_inpaint.net1.decoder_silhou.1.0.deconv2.weight", "depth_and_inpaint.net1.decoder_silhou.1.0.bn2.weight", "depth_and_inpaint.net1.decoder_silhou.1.0.bn2.bias", "depth_and_inpaint.net1.decoder_silhou.1.0.bn2.running_mean", "depth_and_inpaint.net1.decoder_silhou.1.0.bn2.running_var", "depth_and_inpaint.net1.decoder_silhou.1.0.bn2.num_batches_tracked", 
"depth_and_inpaint.net1.decoder_silhou.1.0.upsample.0.weight", "depth_and_inpaint.net1.decoder_silhou.1.0.upsample.1.weight", "depth_and_inpaint.net1.decoder_silhou.1.0.upsample.1.bias", "depth_and_inpaint.net1.decoder_silhou.1.0.upsample.1.running_mean", "depth_and_inpaint.net1.decoder_silhou.1.0.upsample.1.running_var", "depth_and_inpaint.net1.decoder_silhou.1.0.upsample.1.num_batches_tracked", "depth_and_inpaint.net1.decoder_silhou.1.1.deconv1.weight", "depth_and_inpaint.net1.decoder_silhou.1.1.bn1.weight", "depth_and_inpaint.net1.decoder_silhou.1.1.bn1.bias", "depth_and_inpaint.net1.decoder_silhou.1.1.bn1.running_mean", "depth_and_inpaint.net1.decoder_silhou.1.1.bn1.running_var", "depth_and_inpaint.net1.decoder_silhou.1.1.bn1.num_batches_tracked", "depth_and_inpaint.net1.decoder_silhou.1.1.deconv2.weight", "depth_and_inpaint.net1.decoder_silhou.1.1.bn2.weight", "depth_and_inpaint.net1.decoder_silhou.1.1.bn2.bias", "depth_and_inpaint.net1.decoder_silhou.1.1.bn2.running_mean", "depth_and_inpaint.net1.decoder_silhou.1.1.bn2.running_var", "depth_and_inpaint.net1.decoder_silhou.1.1.bn2.num_batches_tracked", "depth_and_inpaint.net1.decoder_silhou.2.0.deconv1.weight", "depth_and_inpaint.net1.decoder_silhou.2.0.bn1.weight", "depth_and_inpaint.net1.decoder_silhou.2.0.bn1.bias", "depth_and_inpaint.net1.decoder_silhou.2.0.bn1.running_mean", "depth_and_inpaint.net1.decoder_silhou.2.0.bn1.running_var", "depth_and_inpaint.net1.decoder_silhou.2.0.bn1.num_batches_tracked", "depth_and_inpaint.net1.decoder_silhou.2.0.deconv2.weight", "depth_and_inpaint.net1.decoder_silhou.2.0.bn2.weight", "depth_and_inpaint.net1.decoder_silhou.2.0.bn2.bias", "depth_and_inpaint.net1.decoder_silhou.2.0.bn2.running_mean", "depth_and_inpaint.net1.decoder_silhou.2.0.bn2.running_var", "depth_and_inpaint.net1.decoder_silhou.2.0.bn2.num_batches_tracked", "depth_and_inpaint.net1.decoder_silhou.2.0.upsample.0.weight", "depth_and_inpaint.net1.decoder_silhou.2.0.upsample.1.weight", "depth_and_inpaint.net1.decoder_silhou.2.0.upsample.1.bias", "depth_and_inpaint.net1.decoder_silhou.2.0.upsample.1.running_mean", "depth_and_inpaint.net1.decoder_silhou.2.0.upsample.1.running_var", "depth_and_inpaint.net1.decoder_silhou.2.0.upsample.1.num_batches_tracked", "depth_and_inpaint.net1.decoder_silhou.2.1.deconv1.weight", "depth_and_inpaint.net1.decoder_silhou.2.1.bn1.weight", "depth_and_inpaint.net1.decoder_silhou.2.1.bn1.bias", "depth_and_inpaint.net1.decoder_silhou.2.1.bn1.running_mean", "depth_and_inpaint.net1.decoder_silhou.2.1.bn1.running_var", "depth_and_inpaint.net1.decoder_silhou.2.1.bn1.num_batches_tracked", "depth_and_inpaint.net1.decoder_silhou.2.1.deconv2.weight", "depth_and_inpaint.net1.decoder_silhou.2.1.bn2.weight", "depth_and_inpaint.net1.decoder_silhou.2.1.bn2.bias", "depth_and_inpaint.net1.decoder_silhou.2.1.bn2.running_mean", "depth_and_inpaint.net1.decoder_silhou.2.1.bn2.running_var", "depth_and_inpaint.net1.decoder_silhou.2.1.bn2.num_batches_tracked", "depth_and_inpaint.net1.decoder_silhou.3.0.deconv1.weight", "depth_and_inpaint.net1.decoder_silhou.3.0.bn1.weight", "depth_and_inpaint.net1.decoder_silhou.3.0.bn1.bias", "depth_and_inpaint.net1.decoder_silhou.3.0.bn1.running_mean", "depth_and_inpaint.net1.decoder_silhou.3.0.bn1.running_var", "depth_and_inpaint.net1.decoder_silhou.3.0.bn1.num_batches_tracked", "depth_and_inpaint.net1.decoder_silhou.3.0.deconv2.weight", "depth_and_inpaint.net1.decoder_silhou.3.0.bn2.weight", "depth_and_inpaint.net1.decoder_silhou.3.0.bn2.bias", 
"depth_and_inpaint.net1.decoder_silhou.3.0.bn2.running_mean", "depth_and_inpaint.net1.decoder_silhou.3.0.bn2.running_var", "depth_and_inpaint.net1.decoder_silhou.3.0.bn2.num_batches_tracked", "depth_and_inpaint.net1.decoder_silhou.3.0.upsample.0.weight", "depth_and_inpaint.net1.decoder_silhou.3.0.upsample.1.weight", "depth_and_inpaint.net1.decoder_silhou.3.0.upsample.1.bias", "depth_and_inpaint.net1.decoder_silhou.3.0.upsample.1.running_mean", "depth_and_inpaint.net1.decoder_silhou.3.0.upsample.1.running_var", "depth_and_inpaint.net1.decoder_silhou.3.0.upsample.1.num_batches_tracked", "depth_and_inpaint.net1.decoder_silhou.3.1.deconv1.weight", "depth_and_inpaint.net1.decoder_silhou.3.1.bn1.weight", "depth_and_inpaint.net1.decoder_silhou.3.1.bn1.bias", "depth_and_inpaint.net1.decoder_silhou.3.1.bn1.running_mean", "depth_and_inpaint.net1.decoder_silhou.3.1.bn1.running_var", "depth_and_inpaint.net1.decoder_silhou.3.1.bn1.num_batches_tracked", "depth_and_inpaint.net1.decoder_silhou.3.1.deconv2.weight", "depth_and_inpaint.net1.decoder_silhou.3.1.bn2.weight", "depth_and_inpaint.net1.decoder_silhou.3.1.bn2.bias", "depth_and_inpaint.net1.decoder_silhou.3.1.bn2.running_mean", "depth_and_inpaint.net1.decoder_silhou.3.1.bn2.running_var", "depth_and_inpaint.net1.decoder_silhou.3.1.bn2.num_batches_tracked", "depth_and_inpaint.net1.decoder_silhou.4.0.weight", "depth_and_inpaint.net1.decoder_silhou.4.0.bias", "depth_and_inpaint.net1.decoder_silhou.4.1.weight", "depth_and_inpaint.net1.decoder_silhou.4.1.bias", "depth_and_inpaint.net1.decoder_silhou.4.1.running_mean", "depth_and_inpaint.net1.decoder_silhou.4.1.running_var", "depth_and_inpaint.net1.decoder_silhou.4.1.num_batches_tracked", "depth_and_inpaint.net1.decoder_silhou.4.3.weight", "depth_and_inpaint.net1.decoder_minmax.0.weight", "depth_and_inpaint.net1.decoder_minmax.0.bias", "depth_and_inpaint.net1.decoder_minmax.1.weight", "depth_and_inpaint.net1.decoder_minmax.1.bias", "depth_and_inpaint.net1.decoder_minmax.3.weight", "depth_and_inpaint.net1.decoder_minmax.3.bias", "depth_and_inpaint.net1.decoder_minmax.4.weight", "depth_and_inpaint.net1.decoder_minmax.4.bias", "depth_and_inpaint.net1.decoder_minmax.4.running_mean", "depth_and_inpaint.net1.decoder_minmax.4.running_var", "depth_and_inpaint.net1.decoder_minmax.4.num_batches_tracked", "depth_and_inpaint.net1.decoder_minmax.6.weight", "depth_and_inpaint.net1.decoder_minmax.6.bias", "depth_and_inpaint.net1.decoder_minmax.7.weight", "depth_and_inpaint.net1.decoder_minmax.7.bias", "depth_and_inpaint.net1.decoder_minmax.7.running_mean", "depth_and_inpaint.net1.decoder_minmax.7.running_var", "depth_and_inpaint.net1.decoder_minmax.7.num_batches_tracked", "depth_and_inpaint.net1.decoder_minmax.9.weight", "depth_and_inpaint.net1.decoder_minmax.9.bias".

How to render images with high-dynamic-range backgrounds with illumination channels

As mentioned in the ShapeHD paper, "to boost the realism of the rendered RGB images, we put three different types of backgrounds behind the object during rendering. One third of the images are rendered in a clean white background; one third are rendered in high-dynamic-range backgrounds with illumination channels that produce realistic lighting. We render the remaining one third images with backgrounds randomly sampled from the SUN database [61]."

I wonder how to render images with high-dynamic-range backgrounds with illumination channels.

IoU in MarrNet paper

Hi, I noticed that the IoU number under the "4.1 3D Reconstruction on ShapeNet" section of the MarrNet paper is 0.57. Which class is this IoU for? Or is it the average IoU over the chair, plane, and car classes? The best IoU I can get with MarrNet on the chair class is around 0.45. Is this IoU reasonable, given that the reprojection consistency loss is not needed on synthetic data?

By the way, the IoU in the ShapeHD paper on the chair class is 0.488, which is lower than MarrNet's. Shouldn't it be higher, since ShapeHD generates better reconstructions? Or is it expected to be lower because of the naturalness loss? Or can MarrNet and ShapeHD simply not be compared this way?

Thanks

TypeError: dist must be a Distribution instance

Hi, there.
Thank you for sharing.

I tried to build the cuda extension using "./build_toolbox.sh", but it failed....

(shaperecon) root@d31b5ed138c4:~/GenRe-ShapeHD# ./build_toolbox.sh           
Add -gencode to match all the GPU architectures you have.
Check 'https://en.wikipedia.org/wiki/CUDA#GPUs_supported' for list of architecture.
Check 'http://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc/index.html' for GPU compilation based on architecture.
nvcc -c -o calc_prob_kernel.cu.o calc_prob_kernel.cu -x cu -Xcompiler -std=c++0x -fPIC -I /root/anaconda3/envs/shaperecon/lib/python3.6/site-packages/torch/lib/include -I /root/anaconda3/envs/shaperecon/lib/python3.6/site-packages/torch/lib/include/TH -I /root/anaconda3/envs/shaperecon/lib/python3.6/site-packages/torch/lib/include/THC -I /root/GenRe-ShapeHD/toolbox/calc_prob/calc_prob/src         -gencode arch=compute_30,code=sm_30         -gencode arch=compute_35,code=sm_35         -gencode arch=compute_52,code=sm_52         -gencode arch=compute_61,code=sm_61 
nvcc fatal   : Unknown option 'fPIC'
/root/GenRe-ShapeHD/toolbox/calc_prob
/root/anaconda3/envs/shaperecon/lib/python3.6/distutils/extension.py:131: UserWarning: Unknown Extension options: 'headers', 'package', 'relative_to', 'with_cuda'
  warnings.warn(msg)
Traceback (most recent call last):
  File "build.py", line 42, in <module>
    BuildExtension(ext)
  File "/root/anaconda3/envs/shaperecon/lib/python3.6/site-packages/setuptools/__init__.py", line 163, in __init__
    _Command.__init__(self, dist)
  File "/root/anaconda3/envs/shaperecon/lib/python3.6/distutils/cmd.py", line 57, in __init__
    raise TypeError("dist must be a Distribution instance")
TypeError: dist must be a Distribution instance
Add -gencode to match all the GPU architectures you have.
Check 'https://en.wikipedia.org/wiki/CUDA#GPUs_supported' for list of architecture.
......

Is there any way to solve this problem?
Thank you !

PASCAL 3D+

Hi,

Is it possible to get the PASCAL 3D+ data you used for testing, especially the RGB images and corresponding voxels? Thanks!

ImportError: numpy.core.multiarray failed to import

Hello, Xiuming. I ran the code following your steps, but when I ran ./build_toolbox.sh to build the environment, the following errors occurred. Could you please help me solve this problem?

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/fxy/anaconda3/envs/shaperecon/lib/python3.6/site-packages/torch/__init__.py", line 80, in <module>
    from torch._C import *
ImportError: numpy.core.multiarray failed to import

Thanks a lot!

How to visualize the results?

I got the ShapeHD results for the three examples in the downloads folder, but could you tell me how to visualize the results the way you show them in the pictures?
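
For reference, a minimal sketch for eyeballing a predicted voxel grid with matplotlib; the file name, archive key, and threshold below are assumptions, not the repo's actual output format, and large grids may be slow to draw:

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3D projection)

vox = np.load('pred_voxel.npz')['voxel']  # hypothetical output file and key
occupied = vox > 0.5  # binarize at an assumed threshold

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.voxels(occupied, edgecolors='k', linewidth=0.1)
plt.show()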

How can I generate 2.5D sketch

I downloaded shapenet_cars_chairs_planes_20views.tar, and I want to generate 2.5D sketches for other categories from the RGB images (like 02691156_1a04e3eab45ca15dd86060f189eb133_view000_spherical.npz). Is there a download link, or how can I generate them?

How to speed up training?

Hi,

I am trying to retrain MarrNet2 and the 3D-GAN and then fine-tune them. But each epoch takes ~1500 s, so training 200 epochs would take ~100 h for MarrNet2 alone, and the 3D-GAN seems to take much longer. Did I miss something, or is there any way to speed up the training? Thank you so much.

(emergency) Some of the parameters are not saved

Hi, I defined a Net class like the following. After training, I found that the parameters of some of the sub-networks I defined in Net, for example self.fc and self.classifier, are not saved: when I resume training, the loss and accuracy don't start from around where they stopped. Do you have any suggestions on where to look?

import torch.nn as nn

class Net(nn.Module):
    def __init__(self, npf, npc, ndims):
        super().__init__()
        self.Encoder = VoxelEncoder(ndims=ndims).cuda()  # VoxelEncoder, FC, ClassificationLayer are my own modules
        self.classifier = {}  # plain Python dict, not an nn container
        self.fc = FC(in_dims=ndims, out_dims=100).cuda()
        for i in range(2):
            self.classifier[i] = ClassificationLayer(100, npc[i]).cuda()
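
For what it's worth, submodules stored in a plain Python dict are not registered with the parent nn.Module, so their parameters never appear in state_dict() and are not saved or restored. A self-contained sketch of the same structure using nn.ModuleList (nn.ModuleDict is an alternative on newer PyTorch); nn.Linear stands in for the user-defined VoxelEncoder / FC / ClassificationLayer:

import torch.nn as nn

class Net(nn.Module):
    def __init__(self, npc, ndims):
        super().__init__()
        self.Encoder = nn.Linear(ndims, 100)  # placeholder for VoxelEncoder
        self.fc = nn.Linear(100, 100)         # placeholder for FC
        # ModuleList registers each classifier, so its parameters appear in state_dict()
        self.classifier = nn.ModuleList([nn.Linear(100, npc[i]) for i in range(2)])

net = Net(npc=[10, 20], ndims=64)
print(sorted(net.state_dict().keys()))  # classifier.0.* and classifier.1.* are included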
