
occupancy_networks's Introduction

Occupancy Networks

[Example reconstructions 1, 2, and 3]

This repository contains the code to reproduce the results from the paper Occupancy Networks: Learning 3D Reconstruction in Function Space.

You can find detailed usage instructions for training your own models and using pretrained models below.

If you find our code or paper useful, please consider citing

@inproceedings{Occupancy_Networks,
    title = {Occupancy Networks: Learning 3D Reconstruction in Function Space},
    author = {Mescheder, Lars and Oechsle, Michael and Niemeyer, Michael and Nowozin, Sebastian and Geiger, Andreas},
    booktitle = {Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
    year = {2019}
}

Installation

First, make sure that you have all dependencies in place. The simplest way to do so is to use Anaconda.

You can create an anaconda environment called mesh_funcspace using

conda env create -f environment.yaml
conda activate mesh_funcspace

Next, compile the extension modules. You can do this via

python setup.py build_ext --inplace

To compile the dmc extension, you need a CUDA-enabled device set up. If you experience any errors, you can simply comment out the dmc_* dependencies in setup.py. You should then also comment out the dmc imports in im2mesh/config.py.
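If you go that route, the edit is small. The following is only a sketch of what the adjusted import section in im2mesh/config.py might look like; the exact lines and the contents of method_dict may differ in your checkout:

# im2mesh/config.py (sketch): drop dmc when the CUDA extension is not built
from im2mesh import onet, r2n2, psgn, pix2mesh  # dmc removed

method_dict = {
    'onet': onet,
    'r2n2': r2n2,
    'psgn': psgn,
    'pix2mesh': pix2mesh,
    # 'dmc': dmc,
}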

Demo

[Example input image and output mesh]

You can now test our code on the provided input images in the demo folder. To this end, simply run

python generate.py configs/demo.yaml

This script should create a folder demo/generation where the output meshes are stored. The script will copy the inputs into the demo/generation/inputs folder and create the meshes in the demo/generation/meshes folder. Moreover, the script creates a demo/generation/vis folder where both inputs and outputs are copied together.

Dataset

To evaluate a pretrained model or train a new model from scratch, you have to obtain the dataset. To this end, there are two options:

  1. you can download our preprocessed data
  2. you can download the ShapeNet dataset and run the preprocessing pipeline yourself

Keep in mind that running the preprocessing pipeline yourself requires a substantial amount of time and hard-drive space. Unless you want to apply our method to a new dataset, we therefore recommend using the first option.

Preprocessed data

You can download our preprocessed data (73.4 GB) using

bash scripts/download_data.sh

This script should download and unpack the data automatically into the data/ShapeNet folder.

Building the dataset

Alternatively, you can preprocess the dataset yourself. To this end, you first have to complete the following steps:

  1. download the ShapeNet dataset
  2. download the renderings and voxelizations from Choy et al. 2016 and unpack them in data/external/Choy2016
  3. build the external preprocessing libraries in external/mesh-fusion (see the installation instructions in that folder)

You are now ready to build the dataset:

cd scripts
bash dataset_shapenet/build.sh

This command will build the dataset in data/ShapeNet.build. To install the dataset, run

bash dataset_shapenet/install.sh

If everything worked out, this will copy the dataset into data/ShapeNet.

Usage

When you have installed all binary dependencies and obtained the preprocessed data, you are ready to run our pretrained models and train new models from scratch.

Generation

To generate meshes using a trained model, use

python generate.py CONFIG.yaml

where you replace CONFIG.yaml with the correct config file.

The easiest way is to use a pretrained model. You can do this by using one of the config files

configs/img/onet_pretrained.yaml
configs/pointcloud/onet_pretrained.yaml
configs/voxels/onet_pretrained.yaml
configs/unconditional/onet_cars_pretrained.yaml
configs/unconditional/onet_airplanes_pretrained.yaml
configs/unconditional/onet_sofas_pretrained.yaml
configs/unconditional/onet_chairs_pretrained.yaml

which correspond to the experiments presented in the paper. Our script will automatically download the model checkpoints and run the generation. You can find the outputs in the out/*/*/pretrained folders.

Please note that the config files *_pretrained.yaml are only for generation, not for training new models: when these configs are used for training, the model will be trained from scratch, but during inference our code will still use the pretrained model.

Evaluation

For evaluation of the models, we provide two scripts: eval.py and eval_meshes.py.

The main evaluation script is eval_meshes.py. You can run it using

python eval_meshes.py CONFIG.yaml

The script takes the meshes generated in the previous step and evaluates them using a standardized protocol. The output will be written to .pkl/.csv files in the corresponding generation folder which can be processed using pandas.
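For example, the per-class results can then be aggregated with pandas along these lines. This is a minimal sketch; the exact file name inside the generation folder and the 'class name' column are assumptions based on the summaries the scripts print:

import pandas as pd

# hypothetical output path written by eval_meshes.py
df = pd.read_csv('out/img/onet/generation/eval_meshes_full.csv')

# average each metric per object class (assuming a 'class name' column)
print(df.groupby('class name').mean(numeric_only=True))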

For a quick evaluation, you can also run

python eval.py CONFIG.yaml

This script will run a fast, method-specific evaluation to obtain some basic quantities that can be easily computed without extracting the meshes. This evaluation is also conducted automatically on the validation set during training.

All results reported in the paper were obtained using the eval_meshes.py script.

Training

Finally, to train a new network from scratch, run

python train.py CONFIG.yaml

where you replace CONFIG.yaml with the name of the configuration file you want to use.

You can monitor the training process on http://localhost:6006 using TensorBoard:

cd OUTPUT_DIR
tensorboard --logdir ./logs --port 6006

where you replace OUTPUT_DIR with the respective output directory.

For available training options, please take a look at configs/default.yaml.
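The config files are plain YAML, so you can quickly inspect the available options with a few lines of Python. This is only a sketch; the repository presumably merges your CONFIG.yaml over configs/default.yaml internally, and the shallow top-level merge below is for illustration only (configs/img/onet.yaml is a hypothetical experiment config):

import yaml

# load the defaults and overlay a specific experiment config (illustrative only)
with open('configs/default.yaml') as f:
    cfg = yaml.safe_load(f)
with open('configs/img/onet.yaml') as f:
    cfg.update(yaml.safe_load(f))

print(sorted(cfg.keys()))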

Notes

  • In our paper we used random crops and scaling to augment the input images. However, we later found that this image augmentation decreases performance on the ShapeNet test set. The pretrained model loaded in configs/img/onet_pretrained.yaml was hence trained without data augmentation and has slightly better performance than the model from the paper. The updated table looks as follows: [Updated table for the single-view 3D reconstruction experiment] For completeness, we also provide the trained weights for the model that was used in the paper in configs/img/onet_legacy_pretrained.yaml.
  • Note that training and evaluation of both our model and the baselines is performed with respect to the watertight models, but that normalization into the unit cube is performed with respect to the non-watertight meshes (to be consistent with the voxelizations from Choy et al.). As a result, the bounding box of the sampled point cloud is usually slightly bigger than the unit cube and may differ a little bit from a point cloud that was sampled from the original ShapeNet mesh.
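For intuition, normalizing a mesh into the unit cube typically amounts to something like the sketch below (not the exact preprocessing code; padding and conventions may differ). The point of the note above is that this normalization is computed from the non-watertight ShapeNet mesh, so points sampled from the watertight mesh can end up slightly outside the unit cube.

import numpy as np

def normalize_to_unit_cube(vertices, padding=0.1):
    # center the axis-aligned bounding box at the origin and scale the
    # longest side to (1 - padding) so the mesh fits inside the unit cube
    bb_min, bb_max = vertices.min(axis=0), vertices.max(axis=0)
    center = (bb_min + bb_max) / 2.0
    scale = (bb_max - bb_min).max() / (1.0 - padding)
    return (vertices - center) / scale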

Further Information

Please also check out the following concurrent papers that have proposed similar ideas:


occupancy_networks's Issues

Pixel2mesh training loss plateaus

Hello,

First of all, thanks for your amazing work!
I am currently trying to train your pixel2mesh implementation, but I am not able to obtain decent results and I was wondering if you changed something that is not tunable from its config file.

The code I am using doesn't have any modification, and I downloaded your pre-processed data. After epoch 8 the loss plateaus around 30 and oscillates around that value at least until epoch 50 (then I stopped the training because it didn't look like it was going to get better).

Here you can find the visualisation of a prediction and its GT at epoch 50.

[prediction (000) and ground-truth (000_gt) renderings at epoch 50]

Thanks in advance for your time.

Possibly missing package 'im2mesh.data'

Dear @LMescheder,

First, congrats for your CVPR paper and thanks for open sourcing the code.

When I try to run the demo via python generate.py configs/demo.yaml, I get the error below:

Traceback (most recent call last):
  File "generate.py", line 10, in <module>
    from im2mesh import config
  File "/home/Occupancy-Networks/im2mesh/config.py", line 3, in <module>
    from im2mesh import data
ImportError: cannot import name 'data'

Is there a missing package under im2mesh?

Best,

Scale of the model

Hi there,

Thanks for releasing the code, it is amazing work! I have tried to do shape completion on point clouds sampled from the original ShapeNet models. It seems my data is not at the same scale as your data. Did you rescale the ShapeNet models? If so, can you provide the scale value? And for point cloud completion, do the scale (I guess so) and the view matter a lot?

Thanks,
Ryan

Obtaining the "input" results for the voxel use case

Hi,

I'm doing followup research on the voxels use case of this paper, and am trying to reproduce the results of the paper before continuing.

I installed the environment on Ubuntu and ran the following:

python eval.py configs/voxels/onet_pretrained.yml

I obtain the following results (ran once on chairs dataset only, and once on everything):

Chairs only:
class name    iou        iou_voxels   kl    loss       rec_error
n/a           0.663234   0.659272     0.0   81.09695   81.09695
mean          0.663234   0.659272     0.0   81.09695   81.09695

Everything:
class name    iou        iou_voxels   kl    loss       rec_error
n/a           0.695912   0.68121      0.0   57.70389   57.70389
mean          0.695912   0.68121      0.0   57.70389   57.70389

What is the difference between iou and iou_voxels?
Paper says
Input IOU 0.631
ONet IOU 0.703

How was the 0.631 obtained, and is it possible to reproduce the ONet IOU with the supplied code?
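For reference, the volumetric IoU reported in such tables is the intersection of the occupied volumes divided by their union; on two boolean occupancy grids of the same resolution this is simply (a generic sketch, not the repository's evaluation code):

import numpy as np

def voxel_iou(occ_pred, occ_gt):
    # occ_pred, occ_gt: boolean arrays of identical shape (True = occupied)
    occ_pred = occ_pred.astype(bool)
    occ_gt = occ_gt.astype(bool)
    intersection = np.logical_and(occ_pred, occ_gt).sum()
    union = np.logical_or(occ_pred, occ_gt).sum()
    return intersection / union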

OSError: [Errno 99] Cannot assign requested address

When I execute the following command, an error occurs:
python generate.py configs/demo.yaml

Error:

https://s3.eu-central-1.amazonaws.com/avg-projects/occupancy_networks/models/onet_img2mesh_3-f786b04a.pt
=> Loading checkpoint from url...
Downloading: "https://s3.eu-central-1.amazonaws.com/avg-projects/occupancy_networks/models/onet_img2mesh_3-f786b04a.pt" to /home/fxru/.torch/models/onet_img2mesh_3-f786b04a.pt
Traceback (most recent call last):
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/urllib/request.py", line 1318, in do_open
    encode_chunked=req.has_header('Transfer-encoding'))
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/http/client.py", line 1239, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/http/client.py", line 1285, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/http/client.py", line 1234, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/http/client.py", line 1026, in _send_output
    self.send(msg)
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/http/client.py", line 964, in send
    self.connect()
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/http/client.py", line 1392, in connect
    super().connect()
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/http/client.py", line 936, in connect
    (self.host,self.port), self.timeout, self.source_address)
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/socket.py", line 724, in create_connection
    raise err
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/socket.py", line 713, in create_connection
    sock.connect(sa)
OSError: [Errno 99] Cannot assign requested address

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "generate.py", line 46, in <module>
    checkpoint_io.load(cfg['test']['model_file'])
  File "/home/fxru/tensorflow_learn/occupancy_networks-master/im2mesh/checkpoints.py", line 47, in load
    return self.load_url(filename)
  File "/home/fxru/tensorflow_learn/occupancy_networks-master/im2mesh/checkpoints.py", line 78, in load_url
    state_dict = model_zoo.load_url(url, progress=True)
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/site-packages/torch/utils/model_zoo.py", line 66, in load_url
    _download_url_to_file(url, cached_file, hash_prefix, progress=progress)
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/site-packages/torch/utils/model_zoo.py", line 76, in _download_url_to_file
    u = urlopen(url)
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/urllib/request.py", line 223, in urlopen
    return opener.open(url, data, timeout)
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/urllib/request.py", line 526, in open
    response = self._open(req, data)
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/urllib/request.py", line 544, in _open
    '_open', req)
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/urllib/request.py", line 504, in _call_chain
    result = func(*args)
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/urllib/request.py", line 1361, in https_open
    context=self._context, check_hostname=self._check_hostname)
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/urllib/request.py", line 1320, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [Errno 99] Cannot assign requested address>

Problem

When I run the demo, there is a problem. What's wrong?
ValueError: numpy.ufunc has the wrong size, try recompiling. Expected 192, got 216

Build Script uses internal alias 'lsfilter'?

Hi,
I'm trying to build the dataset on Ubuntu, and the build.sh script is failing. It says that it cannot find the command lsfilter. Could it be that this is an alias for ls with certain flags that you are using locally?

Which dataset type to use with custom dataset

Hello and thank you for this intriguing work!

I want to use my own custom dataset with this and just want to better understand the use cases for the different dataset types in the manifest -> ["data"]["dataset"].

I have a file structure like this:

Dataset -> { Data_Item_1...Data_Item_N } -> { [Images x n], model.binvox, points.npz }

from which I want to recover a mesh.

Should I use the ShapeNet dataset type or an Images dataset?

Many thanks for your insight!

Testing on a point cloud

Hi @LMescheder, Thanks for the work.

I wanted to test whether the pre-trained network may work on the point cloud from ScanNet dataset.
So,

  • I parsed one .ply file of a desk from a ScanNet scene and converted it to .npy
  • I arranged the dataset in the way the occupancy_network takes its input

But I am not able to get the mesh. I am stuck on the following error:
[screenshot: Screenshot from 2019-12-29 20-51-22]

What is the way to run the pretrained network on a single point cloud I have?

Artifacts in Data Pipeline?

buildpipeline.zip

I'm trying to perform some post-processing on the shapenet meshes for followup research, so I wanted to get the normal pipeline up and running. I am seeing some artifacts being generated in the watertight stage of the process. Attached an example of the 1_scaled and 2_watertight stages for a chair model. Is there something wrong with the setup? Have you experienced this as well?

[image showing the artifacts in the watertight stage]

IndexError: list index out of range

While building the dataset with the command bash dataset_shapenet/build.sh, I got an AssertionError. When I checked the directory data/ShapeNet.build, it had created a few folders, but unfortunately the folders were empty. @LMescheder, can you tell me what I have done wrong here?

Watertight meshes

Hey there,

congrats on a great paper!

I saw that there are no watertight meshes in your "preprocessed data (73.4 GB)".

Would it be possible for you to upload the preprocessed, i.e. watertight, meshes as well?

Downloading the data

Hey there,

could you please tell me roughly how much time it takes to download your
preprocessed data (the 70+ GB)?

I tried downloading it on one PC and got an eta of 23 days.

EDIT: turns out my connection was poor, sorry for that.

Best,
Matt

compile

I run python3 generate.py configs/demo.yaml
and get the following import error: /im2mesh/dmc/ops/_cuda_ext.cpython-35m-x86_64-linux-gnu.so: undefined symbol: _ZN2atErrorCLENS_14SourceLocationERKSs

Custom dataset training

Hi @LMescheder, I was trying to train using my own point cloud dataset (in .npy format). What is the format of the points.npz file which we need to create for our own dataset? Do we have to concatenate all the point cloud data in it? By the way, I am using the onet.yaml file as the config file.

sample_mesh script not working with voxel option

When running the sample_mesh.py script with the voxelize flags, I get the following error:

AttributeError: module 'im2mesh.utils.voxels' has no attribute 'voxelize'

In line 158 of sample_mesh.py which is:

voxels_occ = voxels.voxelize(mesh, res)

When looking at the module, I indeed see no .voxelize function, only voxelize_fill, voxelize_mesh etc. Am I missing something? What is the intended way of using this script for voxel generation?

Are the TSDF fusion steps necessary for training a model?

Hi, thank you very much for your brilliant work; I have starred it.

Now I am trying to use a new 3D dataset to train a new model for new research based on your code, and I have the original 2D images and 3D .off files. While running your code in /external/mesh-fusion to preprocess the 3D data, I ran into some trouble.

I have finished the 'Installation' section; however, when I dived into the 'Usage' section, I could only run the first command, which is used for scaling the models, and the second and third commands failed to run. I have spent some days debugging, but the bugs still exist, and I am really not willing to spend more days on it.

So I just wonder whether I could skip the TSDF preprocessing steps and train the new model on my own dataset directly. I would much appreciate it if you could answer my question.

dataset

hi,
I preprocessed the dataset according to your instructions, but found that some of the samples had no point cloud after processing.
It turns out that the original ShapeNet dataset itself is missing the point cloud for these models. How do you deal with these errors? Did you remove them directly from the train.lst file?
Thanks in advance!

metadata.yaml missing in dataset

Currently, the provided dataset is missing the metadata.yaml file. As a result, the evaluation is not printed on a per-class basis.

Windows support?

Hi,
I'm trying to get this project to run on Windows.
I managed to compile the extensions (without dmc). When I run the sample command

python generate.py configs/demo.yaml

I get the following error:

Traceback (most recent call last):
  File "generate.py", line 145, in <module>
    out = generator.generate_mesh(data)
  File "D:\Programming\Projects\MSc\details\occupancy_networks\im2mesh\onet\generation.py", line 81, in generate_mesh
    mesh = self.generate_from_latent(z, c, stats_dict=stats_dict, **kwargs)
  File "D:\Programming\Projects\MSc\details\occupancy_networks\im2mesh\onet\generation.py", line 114, in generate_from_latent
    points = mesh_extractor.query()
  File "im2mesh\utils\libmise\mise.pyx", line 122, in im2mesh.utils.libmise.mise.MISE.query
    cdef long[:, :] points_view = points_np
ValueError: Buffer dtype mismatch, expected 'long' but got 'long long'

Any idea how to tackle this? (Windows 10, 64 bit)

time

hi,
When I run the demo, I get the following timings:

Timings [s]:
class name    mesh      time (encode inputs)   time (eval points)   time (marching cubes)   time (refine)   time (simplify)
n/a           8.02325   0.015601               1.29366              1.439129                4.992291        0.249395
mean          8.02325   0.015601               1.29366              1.439129                4.992291        0.249395

The paper states that 'the inference time of our algorithm with simplification and refinement steps is about 3s / mesh'.
My GPU is a TITAN Xp, and I don't know why it takes so much time.
Thanks in advance!

No module named 'im2mesh.utils.libkdtree.pykdtree.kdtree'

Hi, I have some problems with the code's configuration. I googled it but did not find a proper solution. I also followed 'conda env create -f environment.yaml', but there is something wrong with my network connection, so I can't download the packages. Do you know how to fix this?

Noise level added to point clouds

Hello, thanks for your code and data.
As described in the paper, you "apply noise using a Gaussian distribution with zero mean and standard deviation 0.05 to the point clouds." However, I found in configs/pointcloud/onet.yaml that pointcloud_noise is set to 0.005. So I am wondering which noise level you used in the paper and in the pretrained model in configs/pointcloud/onet_pretrained.yaml.
Many thanks.
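For reference, applying zero-mean Gaussian noise with a given standard deviation to a point cloud, as controlled by pointcloud_noise, boils down to something like this sketch:

import numpy as np

def add_pointcloud_noise(points, stddev=0.005):
    # add i.i.d. zero-mean Gaussian noise to every coordinate
    return points + stddev * np.random.randn(*points.shape)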

ModuleNotFoundError: No module named 'librender.pyrender'

When I try to run
bash dataset_shapenet/build.sh
I get this error:
Processing class 03001627
Converting meshes to OFF
dataset_shapenet/build.sh: line 21: parallel: command not found
Scaling meshes
Create depths maps
Traceback (most recent call last):
  File "../external/mesh-fusion/2_fusion.py", line 11, in <module>
    import librender
  File "/home/zash/Desktop/occupancy_networks-master/external/mesh-fusion/librender/__init__.py", line 6, in <module>
    from librender.pyrender import *
ModuleNotFoundError: No module named 'librender.pyrender'
Produce watertight meshes
Traceback (most recent call last):
  File "../external/mesh-fusion/2_fusion.py", line 11, in <module>
    import librender
  File "/home/zash/Desktop/occupancy_networks-master/external/mesh-fusion/librender/__init__.py", line 6, in <module>
    from librender.pyrender import *
ModuleNotFoundError: No module named 'librender.pyrender'

Unsupported gpu architecture 'compute_75' during installation

python3 setup.py build_ext --inplace
running build_ext
building 'im2mesh.dmc.ops.cuda_ext' extension
gcc -pthread -B /home/sunglyoung_119/miniconda3/envs/mesh_funcspace/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/sunglyoung_119/.local/lib/python3.6/site-packages/torch/include -I/home/sunglyoung_119/.local/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/sunglyoung_119/.local/lib/python3.6/site-packages/torch/include/TH -I/home/sunglyoung_119/.local/lib/python3.6/site-packages/torch/include/THC -I/home/sunglyoung_119/miniconda3/envs/mesh_funcspace/include/python3.6m -c im2mesh/dmc/ops/src/extension.cpp -o build/temp.linux-x86_64-3.6/im2mesh/dmc/ops/src/extension.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=cuda_ext -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
/usr/bin/nvcc -I/home/sunglyoung_119/.local/lib/python3.6/site-packages/torch/include -I/home/sunglyoung_119/.local/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/sunglyoung_119/.local/lib/python3.6/site-packages/torch/include/TH -I/home/sunglyoung_119/.local/lib/python3.6/site-packages/torch/include/THC -I/home/sunglyoung_119/miniconda3/envs/mesh_funcspace/include/python3.6m -c im2mesh/dmc/ops/src/curvature_constraint_kernel.cu -o build/temp.linux-x86_64-3.6/im2mesh/dmc/ops/src/curvature_constraint_kernel.o -D__CUDA_NO_HALF_OPERATORS
-D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_cuda_ext -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++11
nvcc fatal : Unsupported gpu architecture 'compute_75'
error: command '/usr/bin/nvcc' failed with exit status 1

I am using an RTX 2080, so I don't think the CUDA 9.2 toolkit recognizes the GPU architecture. However, is there any way I can add 'compute_75' as a supported architecture?
If so, which file should I look into?

How can I change input img_size from 224?

Hi.

When I changed the input image size in the config from the default 224 to 448, the error below occurred.

RuntimeError: size mismatch, m1: [32 x 131072], m2: [2048 x 256] at /opt/conda/conda-bld/pytorch_1544174967633/work/aten/src/THC/generic/THCTensorMathBlas.cu:266

How can I change this from the default in my config?
And can I get better 3D reconstruction results by changing the input image size?

Thanks.

how can i train my own dataset?

Hi,
I'm working on my university project, and for it I have to train on my own dataset and obtain 3D mesh files for it. My question is: how can I do the renderings and voxelizations of my own dataset, as mentioned in the 2nd point of Building the dataset? I would really appreciate it if you could answer my question.

Error on running on subset of Shapenet Dataset

Hi,
I picked out two classes of the ShapeNet pre-processed data and placed them in a custom folder, maintaining the directory structure.
I want to run the point cloud -> mesh generation of this network on it.
But when I run python generate.py configs/pointcloud/onet_pretrained.yaml I get the following error:
[screenshot: Screenshot from 2020-01-18 01-56-15]

Any help is appreciated!

Results not consistent with paper

Hi there,

I just tried to test your model on the single-view image reconstruction task, i.e. configs/img/onet_pretrained.yaml.

However, I noticed two results which are not consistent with the paper, and are actually worse. Did I use a wrong reconstruction method, or was the model retrained after publication? If so, any idea how to improve the quality?

Input 1: [input image 00_in]
Output: [screenshot]
Paper: [screenshot]

Input 2: [input image 07_in]
Output: [screenshot]
Paper: [screenshot]

Thanks,
Ryan

Own dataset training (unconditional models)

I want to train using my own dataset.
I executed sample_mesh.py after obtaining watertight meshes,
so I have pointclouds, voxels, and occupancies.

After that, I executed train.py,
but I got the following error:

Error occured when loading field points of model 〇〇
or
Error occured when loading field voxels of model 〇〇

I think the pointclouds and voxels are not being created correctly.
Please tell me any possible solution.

(The ShapeNet models can be trained.)

Setting up the dataset for training

Can anyone please elaborate on what should be downloaded in the second step?

  • download the renderings and voxelizations from Choy et al. 2016 and unpack them in data/external/Choy2016

It would be useful if someone could provide a download link for this.

High frequency details

Hi, thanks for the great work! I have trained the network, generating the data with the pre-processing script, and the end result looks good but lacks detail. Is there something I could/should do to improve the capture of details? Is there a parameter (or parameters) to be changed in the pre-processing step, or something of the like? Can the MISE parameters achieve more detail?
Thanks!

rendering & voxelizations

From where can I download the renderings & voxelizations from Choy et al. 2016, as mentioned in Building the dataset?

Compile error

When I compiled the extension module, I got the following error:

running build_ext
building 'im2mesh.utils.libkdtree.pykdtree.kdtree' extension
gcc -pthread -B /home/fxru/anaconda3/envs/mesh_funcspace/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/fxru/anaconda3/envs/mesh_funcspace/include/python3.6m -c im2mesh/utils/libkdtree/pykdtree/kdtree.c -o build/temp.linux-x86_64-3.6/im2mesh/utils/libkdtree/pykdtree/kdtree.o -std=c99 -O3 -fopenmp -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=kdtree -D_GLIBCXX_USE_CXX11_ABI=0
im2mesh/utils/libkdtree/pykdtree/kdtree.c:525:31: fatal error: numpy/arrayobject.h: No such file or directory
compilation terminated.
error: command 'gcc' failed with exit status 1

Visualizing mesh and occupancy points

Hi, thank you for sharing your code.
Is there code in this repository to visualize a mesh along with its predicted occupancy points? If not, could you point me to how to get a visualization like the one in your presentation video?
[screenshot: Selection_259]
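One way to get a rough visualization of this kind is with trimesh. The sketch below assumes the points.npz files store a 'points' array plus bit-packed 'occupancies', and the paths are hypothetical:

import numpy as np
import trimesh

mesh = trimesh.load('out/demo/generation/meshes/example.off')   # hypothetical path
data = np.load('points.npz')                                    # hypothetical path
points = data['points']
occ = np.unpackbits(data['occupancies'])[:points.shape[0]].astype(bool)

# occupied points in red, free points in translucent gray, shown with the mesh
colors = np.where(occ[:, None], [255, 0, 0, 255], [128, 128, 128, 60]).astype(np.uint8)
cloud = trimesh.points.PointCloud(points, colors=colors)
trimesh.Scene([mesh, cloud]).show()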

The unit of Chamfer-L1 in the paper

Hi, sorry to bother you again.
I have run your code on the ShapeNet point cloud data and got the evaluation result. However, the
Chamfer-L1 in the result is several orders of magnitude lower than that in Table 2. So I am wondering if you multiplied the Chamfer-L1 by a large factor when reporting it in the paper, which is also common in other papers using the Chamfer distance.
Best, Zhenxing

undefined symbol: _ZN6caffe26detail37_typeMetaDataInstance_preallocated_32E

I constantly get the error below when I run python generate.py configs/demo.yaml on Ubuntu 18.04 with Anaconda3 and CUDA 9.0.

Can anyone please help? Thanks!

(mesh_funcspace) user@user-FB-22866-One-Computer-Core-i5-46:~/occupancy_networks-master$ python generate.py configs/demo.yaml
Traceback (most recent call last):
  File "generate.py", line 10, in <module>
    from im2mesh import config
  File "/home/user/occupancy_networks-master/im2mesh/config.py", line 4, in <module>
    from im2mesh import onet, r2n2, psgn, pix2mesh, dmc
  File "/home/user/occupancy_networks-master/im2mesh/dmc/__init__.py", line 1, in <module>
    from im2mesh.dmc import (
  File "/home/user/occupancy_networks-master/im2mesh/dmc/config.py", line 2, in <module>
    from im2mesh.dmc import models, training, generation
  File "/home/user/occupancy_networks-master/im2mesh/dmc/models/__init__.py", line 2, in <module>
    from im2mesh.dmc.models import encoder, decoder
  File "/home/user/occupancy_networks-master/im2mesh/dmc/models/encoder.py", line 4, in <module>
    from im2mesh.dmc.ops.grid_pooling import GridPooling
  File "/home/user/occupancy_networks-master/im2mesh/dmc/ops/grid_pooling.py", line 6, in <module>
    from ._cuda_ext import grid_pooling_forward, grid_pooling_backward
ImportError: /home/user/occupancy_networks-master/im2mesh/dmc/ops/_cuda_ext.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN6caffe26detail37_typeMetaDataInstance_preallocated_32E

Training Problem

I downloaded the preprocessed data and unzipped it into the data/ShapeNet folder.
I ran into the problems shown in the following messages; could you help me?
It seems like the data was not loaded:

  File "/media/mickyv2/micky/occupancy_networks-master/train.py", line 70, in <module>
    data_vis = next(iter(vis_loader))
  File "/home/mickyv2/anaconda3/envs/mesh_funcspace/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 615, in __next__
    batch = self.collate_fn([self.dataset[i] for i in indices])
  File "/media/mickyv2/micky/occupancy_networks-master/im2mesh/data/core.py", line 169, in collate_remove_none
    return data.dataloader.default_collate(batch)
  File "/home/mickyv2/anaconda3/envs/mesh_funcspace/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 200, in default_collate
    elem_type = type(batch[0])
IndexError: list index out of range

Cannot run demo script

The model_file in configs/img/onet_pretrained.yaml cannot be downloaded while executing the demo script (python generate.py configs/demo.yaml).
Did anyone make it work?

eval

hello,
When I run eval_meshes.py, it gives the following warning:
'warning: contains1 != contains2 for same points.'
I wonder if this warning will affect the results.

Error compiling im2mesh

Hi,
I am getting the following error while compiling im2mesh on Ubuntu 16.04:

(mesh_funcspace) giancos@PC-KW-60110:~/git/occupancy_networks$ python setup.py build_ext --inplace
running build_ext
building 'im2mesh.utils.libkdtree.pykdtree.kdtree' extension
gcc -pthread -B /home/giancos/anaconda3/envs/mesh_funcspace/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/giancos/anaconda3/envs/mesh_funcspace/include/python3.6m -c im2mesh/utils/libkdtree/pykdtree/kdtree.c -o build/temp.linux-x86_64-3.6/im2mesh/utils/libkdtree/pykdtree/kdtree.o -std=c99 -O3 -fopenmp -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=kdtree -D_GLIBCXX_USE_CXX11_ABI=0
im2mesh/utils/libkdtree/pykdtree/kdtree.c:525:31: fatal error: numpy/arrayobject.h: No such file or directory
compilation terminated.
error: command 'gcc' failed with exit status 1

Any advice?

Installation issue, pyembree?

I tried to install via conda with the yaml file but it cannot find the pyembree package. I tried installing it separately but I get the same issue. Any ideas?

Thanks

What is voxel file used for during ONet training from pointcloud?

Hello,

Thank you for this great work and clean codebase!

I am trying to relate the training script of your work to the CVPR paper and I want to better understand the use of the voxel data (e.g. "model.binvox") when training from pointcloud data. Is it used during training, or only at inference?

By extension, is the performance of ONet highly dependent on the use of this voxel data? For example, does the performance improve if the resolution of the grid is increased from 32x32x32 to 64x64x64?
