
deepsdf's Introduction

DeepSDF

This is an implementation of the CVPR '19 paper "DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation" by Park et al. See the paper here.

DeepSDF Video

Citing DeepSDF

If you use DeepSDF in your research, please cite the paper:

@InProceedings{Park_2019_CVPR,
author = {Park, Jeong Joon and Florence, Peter and Straub, Julian and Newcombe, Richard and Lovegrove, Steven},
title = {DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}

File Organization

The various Python scripts assume a shared organizational structure so that the output from one script can easily be used as input to another. This is true both for preprocessed data and for experiments that make use of the datasets.

Data Layout

The DeepSDF code allows for pre-processing of meshes from multiple datasets and stores them in a unified data source. It also allows for separation of meshes according to class at the dataset level. The structure is as follows:

<data_source_name>/
    .datasources.json
    SdfSamples/
        <dataset_name>/
            <class_name>/
                <instance_name>.npz
    SurfaceSamples/
        <dataset_name>/
            <class_name>/
                <instance_name>.ply

Subsets of the unified data source can be referenced using split files, which are stored in a simple JSON format. For examples, see examples/splits/ and the minimal sketch below.
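
A minimal sketch of the expected split-file structure, mapping a dataset name to class names to instance lists (the class and instance IDs here are taken from the sofa preprocessing logs later on this page):

{
    "ShapeNetV2": {
        "04256520": [
            "1037fd31d12178d396f164a988ef37cc",
            "104256e5bb73b0b719fb4103277a6b93"
        ]
    }
}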

The file .datasources.json stores a mapping from named datasets to paths indicating where the data came from. This file is referenced again during evaluation to compare against ground-truth meshes (see below), so if the source data is moved, this file will need to be updated accordingly.
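
A minimal sketch of what .datasources.json might contain (the exact schema is an assumption; it is whatever preprocess_data.py writes, and the path is taken from a log later on this page):

{
    "ShapeNetV2": "/data2/ShapeNet/ShapeNetCore.v2"
}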

Experiment Layout

Each DeepSDF experiment is organized in an "experiment directory", which collects all of the data relevant to a particular experiment. The structure is as follows:

<experiment_name>/
    specs.json
    Logs.pth
    LatentCodes/
        <Epoch>.pth
    ModelParameters/
        <Epoch>.pth
    OptimizerParameters/
        <Epoch>.pth
    Reconstructions/
        <Epoch>/
            Codes/
                <MeshId>.pth
            Meshes/
                <MeshId>.ply
    Evaluations/
        Chamfer/
            <Epoch>.json
        EarthMoversDistance/
            <Epoch>.json

The only file required to begin an experiment is specs.json, which sets the parameters, network architecture, and data to be used for the experiment. An abbreviated sketch is shown below.
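
An abbreviated sketch of a specification file (the field names below are assumptions based on the example experiments; consult examples/sofas/specs.json for the authoritative set):

{
    "Description": ["This experiment learns a shape representation for sofas ", "using data from ShapeNet version 2."],
    "DataSource": "data",
    "TrainSplit": "examples/splits/sv2_sofas_train.json",
    "TestSplit": "examples/splits/sv2_sofas_test.json",
    "NetworkArch": "deep_sdf_decoder",
    "CodeLength": 256,
    "NumEpochs": 2000
}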

How to Use DeepSDF

Pre-processing the Data

In order to use mesh data for training a DeepSDF model, the meshes need to be pre-processed. This can be done with the preprocess_data.py executable. The preprocessing code is in C++ and has the following requirements (the dependencies named in the build issues later on this page):

  • CLI11
  • Pangolin
  • nanoflann
  • Eigen3

With these dependencies, the build process follows the standard CMake procedure:

mkdir build
cd build
cmake ..
make -j

Once this is done, there should be two executables in the DeepSDF/bin directory, one for surface sampling and one for SDF sampling. With these binaries, the dataset can be preprocessed using preprocess_data.py.

Preprocessing with Headless Rendering

The preprocessing script requires an OpenGL context, and to acquire one it will open a (small) window for each shape using Pangolin. If Pangolin has been compiled with EGL support, you can use the "headless" rendering mode to avoid the windows stealing focus. Pangolin's headless mode can be enabled by setting the PANGOLIN_WINDOW_URI environment variable as follows:

export PANGOLIN_WINDOW_URI=headless://

Training a Model

Once data has been preprocessed, models can be trained using:

python train_deep_sdf.py -e <experiment_directory>

Parameters of training are stored in a "specification file" in the experiment directory, which (1) avoids proliferation of command line arguments and (2) allows for easy reproducibility. This specification file includes a reference to the data directory and a split file specifying which subset of the data to use for training.

Visualizing Progress

All intermediate results from training are stored in the experiment directory. To visualize the progress of a model during training, run:

python plot_log.py -e <experiment_directory>

By default, this will plot the loss but other values can be shown using the --type flag.
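
For example (the set of valid type names is an assumption here; check plot_log.py for the exact values):

python plot_log.py -e examples/sofas --type learning_rate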

Continuing from a Saved Optimization State

If training is interrupted, pass the --continue flag along with an epoch index to train_deep_sdf.py to continue from the saved state at that epoch. Note that the saved state needs to be present; to check which checkpoints are available for a given experiment, check the ModelParameters, OptimizerParameters, and LatentCodes directories (all three are needed).
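
For example, to resume the sofa experiment from the checkpoint saved at epoch 2000 (assuming all three epoch-2000 files exist):

python train_deep_sdf.py -e examples/sofas --continue 2000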

Reconstructing Meshes

To use a trained model to reconstruct explicit mesh representations of shapes from the test set, run:

python reconstruct.py -e <experiment_directory>

This will use the latest model parameters to reconstruct all the meshes in the split. To specify a particular checkpoint to use for reconstruction, use the --checkpoint flag followed by the epoch number. In general, the test-time SDF sampling strategy and the regularization can both affect the quality of the test reconstructions. For example, sampling aggressively near the surface can capture accurate surface detail but may leave under-sampled regions of space unconstrained, and a high L2 regularization coefficient can produce perceptually better but quantitatively worse test reconstructions.
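
For intuition, here is a minimal sketch of the test-time auto-decoder optimization that reconstruction performs (reconstruct.py is the authoritative implementation; the names decoder, xyz, sdf_gt and all hyperparameters below are illustrative assumptions):

import torch
import torch.nn.functional as F

def reconstruct_latent(decoder, xyz, sdf_gt, code_dim=256, iters=800, l2_reg=1e-4):
    # Start from a random latent code and optimize it by gradient descent,
    # keeping the decoder weights fixed.
    latent = (0.01 * torch.randn(1, code_dim)).requires_grad_(True)
    optimizer = torch.optim.Adam([latent], lr=5e-3)
    for _ in range(iters):
        optimizer.zero_grad()
        # Concatenate the (broadcast) latent code with each query point.
        inputs = torch.cat([latent.expand(xyz.shape[0], -1), xyz], dim=1)
        pred_sdf = decoder(inputs).squeeze(-1)
        loss = F.l1_loss(pred_sdf, sdf_gt)
        # A larger l2_reg pulls the code toward the prior: often perceptually
        # smoother but quantitatively worse reconstructions.
        loss = loss + l2_reg * latent.pow(2).sum()
        loss.backward()
        optimizer.step()
    return latent.detach()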

Shape Completion

The current release does not include code for shape completion. Please check back later!

Evaluating Reconstructions

Before evaluating a DeepSDF model, a second mesh preprocessing step is required to produce a set of points sampled from the surfaces of the test meshes. This is done in the same way as the SDF samples, but with the --surface flag passed to the pre-processing script. Once this is done, evaluation is run with:

python evaluate.py -e <experiment_directory> -d <data_directory> --split <split_filename>

Note on Table 3 from the CVPR '19 Paper

Given the stochastic nature of shape reconstruction (shapes are reconstructed via gradient descent with a random initialization), reconstruction accuracy will vary across multiple reruns of the same shape. The metrics listed in Table 3 for the "chair" and "plane" classes are the result of performing two reconstructions of each shape and keeping the one with the lower Chamfer distance. The code as released does not support this evaluation, so reproduced results will likely differ from those in the paper. For example, our test run with the provided code produced Chamfer distance (multiplied by 10³) mean and median of 0.157 and 0.062 for the "chair" class and 0.101 and 0.044 for the "plane" class (compared to 0.204 and 0.072 for chairs and 0.143 and 0.036 for planes reported in the paper).
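
The same comparison in tabular form (Chamfer distance × 10³):

Class   Code as released (mean / median)   Paper (mean / median)
chair   0.157 / 0.062                      0.204 / 0.072
plane   0.101 / 0.044                      0.143 / 0.036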

Examples

Here's a list of commands for a typical use case of training and evaluating a DeepSDF model using the "sofa" class of the ShapeNet version 2 dataset.

# navigate to the DeepSdf root directory
cd [...]/DeepSdf

# create a home for the data
mkdir data

# pre-process the sofas training set (SDF samples)
python preprocess_data.py --data_dir data --source [...]/ShapeNetCore.v2/ --name ShapeNetV2 --split examples/splits/sv2_sofas_train.json --skip

# train the model
python train_deep_sdf.py -e examples/sofas

# pre-process the sofa test set (SDF samples)
python preprocess_data.py --data_dir data --source [...]/ShapeNetCore.v2/ --name ShapeNetV2 --split examples/splits/sv2_sofas_test.json --test --skip

# pre-process the sofa test set (surface samples)
python preprocess_data.py --data_dir data --source [...]/ShapeNetCore.v2/ --name ShapeNetV2 --split examples/splits/sv2_sofas_test.json --surface --skip

# reconstruct meshes from the sofa test split (after 2000 epochs)
python reconstruct.py -e examples/sofas -c 2000 --split examples/splits/sv2_sofas_test.json -d data --skip

# evaluate the reconstructions
python evaluate.py -e examples/sofas -c 2000 -d data -s examples/splits/sv2_sofas_test.json 

Team

Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, Steven Lovegrove

Acknowledgements

We want to acknowledge the help of Tanner Schmidt with releasing the code.

License

DeepSDF is released under the MIT License. See the LICENSE file for more details.

deepsdf's People

Contributors

jstraub, martinruenz, oafolabi


deepsdf's Issues

error: if a fragment input is (or contains) an integer, then it must be qualified with 'flat'

Hi, I'm also trying to preprocess the data but am getting a different OpenGL error:

OpenGL Error: XX (500)
In: /usr/local/include/pangolin/gl/gl.hpp, line 203
Framebuffer with requested attributes not available. Using available framebuffer. You may see visual artifacts.
GLSL Shader compilation failed:
0:8(1): error: if a fragment input is (or contains) an integer, then it must be qualified with 'flat'
0:8(8): error: `gl_PrimitiveID' redeclared

No samples are being generated successfully. I'm trying to generate the chairs training dataset from ShapeNet Core v2.

I am running this code on a fresh VMWare VM of Ubuntu 18.04 64-bit on VMWare Workstation Player 15. I have an NVIDIA GTX 1060 Mobile.

Some things I previously tried:

  1. Running in a Docker container (but Pangolin required a display)
  2. Running in a VirtualBox VM of Ubuntu 18.04 (but it did not support OpenGL 3.3)

NaN values for several points & normals during preprocessing of mesh

Greetings, some of the vertex coordinates and normal coordinates fetched after the virtual rendering in the mesh preprocessing executable have the value (NaN, NaN, NaN).

Is this expected when the mesh is not sufficiently watertight?

I am starting to think that my #version 120 implementation of the shaders is faulty:

@start vertex
#version 120

attribute vec3 vertex;
varying vec4 position_world;
varying vec4 position_camera;
varying vec3 viewDirection_camera;

uniform mat4 MVP;
uniform mat4 V;

void main(){

    // Projected image coordinate
    gl_Position =  MVP * vec4(vertex,1.0);

    // world coordinate location of the vertex
    position_world = vec4(vertex,1.0);
    position_camera = V * vec4(vertex, 1.0);

    viewDirection_camera = normalize(vec3(0.0,0.0,0.0) - position_camera.xyz);
}

@start geometry
#version 120
#extension GL_EXT_geometry_shader4 : enable

varying vec4 position_world[];
varying vec3 viewDirection_camera[];

varying vec3 normal_camera;
varying vec3 normal_world;
varying vec4 xyz_world;
varying vec3 viewDirection_cam;
varying vec4 xyz_camera;
varying float primitiveID;

uniform mat4 V;

void main() {
    vec3 A = position_world[1].xyz - position_world[0].xyz;
    vec3 B = position_world[2].xyz - position_world[0].xyz;
    vec3 normal = normalize(cross(A,B));
    vec3 normal_cam = (V * vec4(normal,0.0)).xyz;

    gl_Position = gl_PositionIn[0];
    normal_camera = normal_cam;
    normal_world = normal;
    xyz_world = position_world[0];
    xyz_camera = V * xyz_world;
    viewDirection_cam = viewDirection_camera[0];
    primitiveID = gl_PrimitiveIDIn;
    EmitVertex();

    gl_Position = gl_PositionIn[1];
    normal_camera = normal_cam;
    normal_world = normal;
    xyz_world = position_world[1];
    xyz_camera = V * xyz_world;
    viewDirection_cam = viewDirection_camera[1];
    primitiveID = gl_PrimitiveIDIn;

    EmitVertex();

    gl_Position = gl_PositionIn[2];
    normal_camera = normal_cam;
    normal_world = normal;
    xyz_world = position_world[2];
    xyz_camera = V * xyz_world;
    viewDirection_cam = viewDirection_camera[2];
    primitiveID = gl_PrimitiveIDIn;

    EmitVertex();
    EndPrimitive();
}

@start fragment
#version 120

varying vec3 viewDirection_cam;
varying vec3 normal_world;
varying vec3 normal_camera;
varying vec4 xyz_world;
varying vec4 xyz_camera;
varying float primitiveID;

uniform vec2 slant_thr;
varying vec4 ttt;
uniform mat4 V;
uniform mat4 ToWorld;

bool isnan( float val )
{
  return ( val < 0.0 || 0.0 < val || val == 0.0 ) ? false : true;
  // important: some nVidias failed to cope with version below.
  // Probably wrong optimization.
  /*return ( val <= 0.0 || 0.0 <= val ) ? false : true;*/
}

void main(){
    vec3 view_vector = vec3(0.0,0.0,1.0);
//    vec3 view_vector = normalize(vec3(0.0,0.0,1.0) - xyz_camera.xyz);
    vec4 test = vec4(0.0,0.0,0.0,1.0);

    // Check if we need to flip the normal.
    vec3 normal_world_cor;// = normal_world;
    float d = dot(normalize(normal_camera), normalize(view_vector));

    if (abs(d) < 0.001) {
        gl_FragData[0] = vec4(0.0,0.0,0.0,0.0);
        gl_FragData[1] = vec4(0.0,0.0,0.0,0.0);
        gl_FragData[2] = vec4(0.0,0.0,0.0,0.0);
        return;
    } else {
        if (d < 0) {
            test = vec4(0,1,0,1);
            normal_world_cor = -normal_world;
        } else {
            normal_world_cor = normal_world;
        }

        gl_FragData[0] = xyz_world;
        gl_FragData[0].w = primitiveID + 1.0f;

        gl_FragData[1] = vec4(normalize(normal_world_cor),1);
        gl_FragData[1].w = primitiveID + 1.0f;

    }

}

Any help would be appreciated.

Parameter Eta for Shape Completion

Hi, thanks for your great work! I'm reproducing your shape completion experiments; here are two questions about the sampling:

  • What is the parameter eta value for near-surface sampling?
  • What is the sampling strategy for free-space sampling? Do you sample uniformly along the camera ray directions and crop to a unit cube?

Thanks!

Why does DeepSDF require mesh data?

Thanks for sharing your work!
I read the paper and understood that DeepSDF predicts SDF values for given 3D points. So why is mesh data required?

Uniform Sampling in Data Creation

Hi!

In the paper, it says that uniformly sampled points are used during training:

"points we uniformly sample within the unit sphere."

Is that sampling logic present anywhere within the repo?
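
For illustration, here is a minimal NumPy sketch of the kind of uniform-within-the-unit-sphere sampling the paper describes (this is not the repo's implementation):

import numpy as np

def sample_unit_sphere(n, seed=None):
    # Rejection sampling: draw candidates from the enclosing cube and keep
    # only those that fall inside the unit ball.
    rng = np.random.default_rng(seed)
    points = np.empty((0, 3))
    while points.shape[0] < n:
        candidates = rng.uniform(-1.0, 1.0, size=(n, 3))
        inside = candidates[np.linalg.norm(candidates, axis=1) <= 1.0]
        points = np.vstack([points, inside])
    return points[:n]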

Thanks

Questions regarding to the "Shape Completion" experiments

Hello @jjparkcv and @tschmidt23, thanks for sharing this great work. I've finished the model training on "chairs" class and have a few questions about the shape completion experiments in the paper:

  1. Are the models in the shape completion experiments trained separately, using only partial (single-view) point cloud input? Or can I just reuse the "complete sampling" version of the training data (as produced by the preprocessing code published in this repo)?
  2. Do you also use sdf_gt during inference for shape completion (even for noisy depth input)? Is it possible to use zeros as sdf_gt for a point cloud sampled only from the object surface?

For the second question I experimented a little; the result is not quite as expected.
This is the input point cloud:
[image]
and this is the reconstructed mesh:
[image]
[image]

If this is possible, any ideas on what I did wrong?

Thanks a lot!

OpenGL Error when running "preprocess_data.py"

Hi, all,

When I run preprocess_data.py, I got the following OpenGL Error

(pytorch1.0) root@milton-ThinkCentre-M93p:/data/code9/deepsdf# python preprocess_data.py --data_dir data --source /data2/ShapeNet/ShapeNetCore.v2 --name ShapeNetV2 --split examples/splits/sv2_sofas_train.json --skip
DeepSdf - INFO - Preprocessing data from /data2/ShapeNet/ShapeNetCore.v2 and placing the results in data/SdfSamples/ShapeNetV2
data sources stored to data/.datasources.json
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/1037fd31d12178d396f164a988ef37cc/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/1037fd31d12178d396f164a988ef37cc.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/104256e5bb73b0b719fb4103277a6b93/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/104256e5bb73b0b719fb4103277a6b93.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/1053897adff12c7839c40eb1ac71e4c1/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/1053897adff12c7839c40eb1ac71e4c1.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/10552f968486cd0ad138a53ab0d038a5/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/10552f968486cd0ad138a53ab0d038a5.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/105849baff12c6fc2bf2dcc31ba1713/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/105849baff12c6fc2bf2dcc31ba1713.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/107637b6bdf8129d4904d89e9169817b/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/107637b6bdf8129d4904d89e9169817b.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/107bce22d72f322eedf1bb0b62653056/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/107bce22d72f322eedf1bb0b62653056.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/10e0543e6e316dca30b07c64830a47f3/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/10e0543e6e316dca30b07c64830a47f3.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/113a2544e062127d79414e04132a8bef/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/113a2544e062127d79414e04132a8bef.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/117c47d75798788a5506ead0b132904c/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/117c47d75798788a5506ead0b132904c.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/117f6ac4bcd75d8b4ad65adb06bbae49/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/117f6ac4bcd75d8b4ad65adb06bbae49.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/118a7d6a1dfbbc14300703f05f8ccc25/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/118a7d6a1dfbbc14300703f05f8ccc25.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/11a47d5cdd42a5104b3c42e318f3affc/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/11a47d5cdd42a5104b3c42e318f3affc.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/11b36d8f9025062513d2510999d0f1d2/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/11b36d8f9025062513d2510999d0f1d2.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/11b544b22dedb59c654ea6737b0d3597/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/11b544b22dedb59c654ea6737b0d3597.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/11be630221243013c087ef7d7cf00301/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/11be630221243013c087ef7d7cf00301.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/11f31367f34bfea04b3c42e318f3affc/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/11f31367f34bfea04b3c42e318f3affc.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/120735afde493c277ff6ace05b36a5/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/120735afde493c277ff6ace05b36a5.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/1210afeba868a87bf91f8f6988914003/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/1210afeba868a87bf91f8f6988914003.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/1230d31e3a6cbf309cd431573238602d/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/1230d31e3a6cbf309cd431573238602d.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/125ace480d9f2fd5369e32fb818f337/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/125ace480d9f2fd5369e32fb818f337.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/126ed5982cdd56243b02598625ec1bf7/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/126ed5982cdd56243b02598625ec1bf7.npz
OpenGL Error: XX (500)
In: /usr/local/include/pangolin/gl/gl.hpp, line 203
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/12843b5b189bf39f7cf414b698427dbd/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/12843b5b189bf39f7cf414b698427dbd.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/1299643f99c8a66df59decd9cfc8a5bb/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/1299643f99c8a66df59decd9cfc8a5bb.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/12a0c645e0bb6601ad75d368738e0b47/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/12a0c645e0bb6601ad75d368738e0b47.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/12c6a146bde9f6f5c42c7f2c2bc04572/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/12c6a146bde9f6f5c42c7f2c2bc04572.npz
OpenGL Error: XX (500)
In: /usr/local/include/pangolin/gl/gl.hpp, line 203
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/12cd30f7f83f441dc13b22d2a852f9c2/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/12cd30f7f83f441dc13b22d2a852f9c2.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/130c64a2c0232fd03fc2ef4fdfb57f60/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/130c64a2c0232fd03fc2ef4fdfb57f60.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/13169bd2b9b02ad44089c2a25bbcbf23/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/13169bd2b9b02ad44089c2a25bbcbf23.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/13181141c0d32f2e593ebeeedbff73b/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/13181141c0d32f2e593ebeeedbff73b.npz
OpenGL Error: XX (500)
In: /usr/local/include/pangolin/gl/gl.hpp, line 203
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/13534db5278d476d98e0d1738edd4f19/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/13534db5278d476d98e0d1738edd4f19.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/13568cb7d4bb7d90c274f5fac65789d8/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/13568cb7d4bb7d90c274f5fac65789d8.npz
OpenGL Error: XX (500)
In: /usr/local/include/pangolin/gl/gl.hpp, line 203
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/1372c28325f2794046dd596893434005/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/1372c28325f2794046dd596893434005.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/137589e785a414b38a2d601af174cc3c/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/137589e785a414b38a2d601af174cc3c.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/13990109140043c919fb4103277a6b93/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/13990109140043c919fb4103277a6b93.npz
OpenGL Error: XX (500)
In: /usr/local/include/pangolin/gl/gl.hpp, line 203
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/139b1622071f1864f7d7105e737c7740/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/139b1622071f1864f7d7105e737c7740.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/13a8c6129a8e80379904131b50e062f6/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/13a8c6129a8e80379904131b50e062f6.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/13b60f5be9af777cc3bd24f986301745/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/13b60f5be9af777cc3bd24f986301745.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/13d0d8dcb20c0071effcc073d8ec38f6/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/13d0d8dcb20c0071effcc073d8ec38f6.npz
OpenGL Error: XX (500)
In: /usr/local/include/pangolin/gl/gl.hpp, line 203
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/13d3462293023fe71f530727405d60cf/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/13d3462293023fe71f530727405d60cf.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/13de905fd21e501567a4cd2863eb1ca/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/13de905fd21e501567a4cd2863eb1ca.npz
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/1429db0e06466860dfd64b437f0ace42/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/1429db0e06466860dfd64b437f0ace42.npz
OpenGL Error: XX (500)
In: /usr/local/include/pangolin/gl/gl.hpp, line 203
DeepSdf - INFO - /data2/ShapeNet/ShapeNetCore.v2/04256520/144cee9408bcdc3ad062f9c4aeccfad2/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/144cee9408bcdc3ad062f9c4aeccfad2.npz
OpenGL Error: XX (500)
In: /usr/local/include/pangolin/gl/gl.hpp, line 203

RuntimeError: CUDA error: unspecified launch failure

When running "train_deep_sdf.py", the following error occurs at epoch 26; I have no idea what is causing it.
Traceback (most recent call last):
File "train_deep_sdf.py", line 591, in
main_function(args.experiment_directory, args.continue_from, int(args.batch_split))
File "train_deep_sdf.py", line 501, in main_function
chunk_loss = loss_l1(pred_sdf, sdf_gt[i].cuda()) / num_sdf_samples
RuntimeError: CUDA error: unspecified launch failure
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: unspecified launch failure (insert_events at /opt/conda/conda-bld/pytorch_1544202130060/work/aten/src/THC/THCCachingAllocator.cpp:470)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x45 (0x7f16bc43ecc5 in /home/cuili/anaconda3/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #1: + 0x135cb20 (0x7f16bff31b20 in /home/cuili/anaconda3/lib/python3.7/site-packages/torch/lib/libcaffe2_gpu.so)
frame #2: at::TensorImpl::release_resources() + 0x50 (0x7f16bca99f90 in /home/cuili/anaconda3/lib/python3.7/site-packages/torch/lib/libcaffe2.so)
frame #3: + 0x2ad98b (0x7f16b970498b in /home/cuili/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch.so.1)
frame #4: + 0x31a110 (0x7f16b9771110 in /home/cuili/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch.so.1)
frame #5: torch::autograd::deleteFunction(torch::autograd::Function*) + 0x2f0 (0x7f16b97071d0 in /home/cuili/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch.so.1)
frame #6: std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() + 0x45 (0x7f16fcbbd0c5 in /home/cuili/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #7: torch::autograd::Variable::Impl::release_resources() + 0x4a (0x7f16b997b15a in /home/cuili/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch.so.1)
frame #8: + 0x121ebb (0x7f16fcbd4ebb in /home/cuili/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #9: + 0x31c16f (0x7f16fcdcf16f in /home/cuili/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #10: + 0x31c1b1 (0x7f16fcdcf1b1 in /home/cuili/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_python.so)

frame #21: __libc_start_main + 0xe7 (0x7f170f89fb97 in /lib/x86_64-linux-gnu/libc.so.6)

Aborted (core dumped)

Is there anyone who could help me?

Preprocessing error

The preprocessing step gives me quite a headache. I get the following error:

terminate called after throwing an instance of 'std::runtime_error'
what(): Pangolin X11: Unable to retrieve framebuffer options
DeepSdf - INFO - ShapeNetCore.v2/04256520/45d3384ab8d5b6295637fc0f4b98e88b/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/45d3384ab8d5b6295637fc0f4b98e88b.npz
Unable to read texture 'texture_0'
Unable to read texture 'texture_2'
terminate called after throwing an instance of 'std::runtime_error'
what(): Pangolin X11: Unable to retrieve framebuffer options
Unable to read texture 'texture_2'
Unable to read texture 'texture_4'
Unable to read texture 'texture_5'
terminate called after throwing an instance of 'std::runtime_error'
what(): Pangolin X11: Unable to retrieve framebuffer options
DeepSdf - INFO - ShapeNetCore.v2/04256520/45d96e52f535907d40c4baf1afd9784/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/45d96e52f535907d40c4baf1afd9784.npz

I run this on a remote GPU server. Previously, I resolved an "X11: can't open display" error by enabling X11 forwarding on the client side. Now it gives me this kind of message and still generates zero output. I have no idea how to fix this.

  • Ubuntu 16.04.5 LTS
  • Pangolin cloned and built with the master branch.

Training data and training hardware

Hello, and thank you for your great work!

Would you be able to give any indication as to the size of the training/testing sets you used to generate the results shown in the paper? Did you use the same number of samples as are defined in the example split files (for chairs, lamps, planes, sofas, and tables)? I ask because, for chairs especially, it looks like you only use about 20% of the ShapeNet data.

Also, which GPUs were used for training? The paper says training was performed with 8 NVIDIA GPUs for 8 hours, but doesn't give any hint as to which specific GPUs were used.

Thank you!

mpark/variant Error when running make

All the requirements are already installed, and running cmake .. succeeds; however, when running make -j I get the following error:


DeepSDF/src/SampleVisibleMeshSurface.cpp:11:
/usr/local/include/pangolin/compat/variant.h:10:13: fatal error: mpark/variant.hpp: No such file or directory
 #   include <mpark/variant.hpp>
             ^~~~~~~~~~~~~~~~~~~
compilation terminated.
CMakeFiles/SampleVisibleMeshSurface.dir/build.make:62: recipe for target 'CMakeFiles/SampleVisibleMeshSurface.dir/src/SampleVisibleMeshSurface.cpp.o' failed
make[2]: *** [CMakeFiles/SampleVisibleMeshSurface.dir/src/SampleVisibleMeshSurface.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs.

error during reconstruction using small batch sizes

When I train with small batch sizes (e.g. 3 or 4), I get an error during reconstruction, since the network cannot predict negative SDF values, although during training everything looks fine.

If I replace decoder.eval() with decoder.train(), I get normal reconstructions. So I guess the problem is with the dropout scaling difference or the weight normalization behaving differently between training and testing. @tschmidt23 your feedback is much appreciated!

No such file or directory: u'data/SdfSamples/ShapeNetV2/04256520/...npz'

When I run the data pre-processing code,
$ python preprocess_data.py --data_dir data --source [...]/ShapeNetCore.v2/ --name ShapeNetV2 --split examples/splits/sv2_sofas_train.json --skip

It generates following log:

...
DeepSdf - INFO - ~/data/ShapeNetCore.v2/04256520/c955e564c9a73650f78bdf37d618e97e/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/c955e564c9a73650f78bdf37d618e97e.npz
DeepSdf - INFO - ~/data/ShapeNetCore.v2/04256520/c97af2aa2f9f02be9ecd5a75a29f0715/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/c97af2aa2f9f02be9ecd5a75a29f0715.npz
DeepSdf - INFO - ~/data/ShapeNetCore.v2/04256520/c9c0132c09ca16e8599dcc439b161a52/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/c9c0132c09ca16e8599dcc439b161a52.npz
...

It seems that the data are generated and written to data/SdfSamples/ShapeNetV2/04256520/<model_name>.npz

However, when I run the training code:
$ python train_deep_sdf.py -e examples/sofas

It complains that no data is found:

...
DeepSdf - WARNING - Requested non-existent file 'ShapeNetV2/04256520/cba1446e98640f603ffc853fc4b95a17.npz'
DeepSdf - WARNING - Requested non-existent file 'ShapeNetV2/04256520/cbccbd019a3029c661bfbba8a5defb02.npz'
DeepSdf - WARNING - Requested non-existent file 'ShapeNetV2/04256520/cbd547bfb6b7d8e54b50faf1a96496ef.npz'
DeepSdf - WARNING - Requested non-existent file 'ShapeNetV2/04256520/cc20bb3596fd3c2e677ea8589de8c796.npz'
DeepSdf - WARNING - Requested non-existent file 'ShapeNetV2/04256520/cc4a8ecc0f3b4ca1dc0efee4b442070.npz'
DeepSdf - WARNING - Requested non-existent file 'ShapeNetV2/04256520/cc4f3aff596b544e599dcc439b161a52.npz'
DeepSdf - WARNING - Requested non-existent file 'ShapeNetV2/04256520/cc5f1f064a1ba342cbdb36da0ec8fda6.npz'
DeepSdf - INFO - There are 1628 scenes
DeepSdf - INFO - starting from epoch 1
DeepSdf - INFO - epoch 1...
Traceback (most recent call last):
  File "train_deep_sdf.py", line 558, in <module>
    main_function(args.experiment_directory, args.continue_from, int(args.batch_split))
  File "train_deep_sdf.py", line 436, in main_function
    for sdf_data, indices in sdf_loader:
  File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 582, in __next__
    return self._process_next_batch(batch)
  File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 608, in _process_next_batch
    raise batch.exc_type(batch.exc_msg)
IOError: Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/_utils/worker.py", line 99, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "~/project/deepSDF/deep_sdf/data.py", line 151, in __getitem__
    return unpack_sdf_samples(filename, self.subsample), idx
  File "~/project/deepSDF/deep_sdf/data.py", line 67, in unpack_sdf_samples
    npz = np.load(filename)
  File "/usr/local/lib/python2.7/dist-packages/numpy/lib/npyio.py", line 422, in load
    fid = open(os_fspath(file), "rb")
IOError: [Errno 2] No such file or directory: u'data/SdfSamples/ShapeNetV2/04256520/949054060a3db173d9d07e89322d9cab.npz'

When I check the source folder, the model file is there:
$ ls ~/<...>/ShapeNetCore.v2/02691156/ff12c3a1d388b03044eedf822e07b7e4/models/

total 5.3M
-rw-rw-r-- 1  217 Jul 11  2016 model_normalized.json
-rw-rw-r-- 1  1.3K Jul 11  2016 model_normalized.mtl
-rw-rw-r-- 1  5.2M Jul 11  2016 model_normalized.obj
-rw-rw-r-- 1  24K Jul 12  2016 model_normalized.solid.binvox
-rw-rw-r-- 1  25K Jul 12  2016 model_normalized.surface.binvox

However, when I checked the output folder, I found that it's empty:
$ ls data/SdfSamples/ShapeNetV2/04256520
total 0

Does anyone know the cause of this?

Thanks for your help!

Unable to build

Hi - I'm trying to do the initial make of DeepSDF to get the binaries required for preprocessing the data.

The make fails.

I've installed CLI11, Pangolin, nanoflann, and Eigen3 globally.

I'm in DeepSDF/build and run:

cmake ..

I get:
-- The C compiler identification is GNU 9.3.0
-- The CXX compiler identification is GNU 9.3.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMake Error at CMakeLists.txt:9 (add_subdirectory):
The source directory /home/myhome/git/DeepSDF/third-party/cnpy
does not contain a CMakeLists.txt file.

-- Configuring incomplete, errors occurred!
See also "/home/myhome/git/DeepSDF/build/CMakeFiles/CMakeOutput.log".

CMakeFiles/CMakeOutput.log is verbose but unhelpful.

I can't find anything addressing this issue. What do I need to do to get this to compile?

Visualization of latent code-space

Hi, in the supplementary material (Figure 7: Comparison of the 2D latent code-space), the digits are visualized per latent code. I am wondering whether we could plot the same results for the sofa example? If so, what should the mean/variance range for the latent code-space be?

OpenGL Error: XX (500) and what(): Interlace not yet supported

When running preprocess_data.py, two errors occurred:
DeepSdf - INFO - /home/mpl/ShapeNetCore.v2/03001627/df7fc0b3b796714fd00dd29272c1070b/models/model_normalized.obj --> /home/mpl/DeepSDF/data/SurfaceSamples/ShapeNetV2/03001627/df7fc0b3b796714fd00dd29272c1070b.ply
terminate called after throwing an instance of 'std::runtime_error'
what(): Interlace not yet supported
DeepSdf - INFO - /home/mpl/ShapeNetCore.v2/03001627/df8311076b838c7ea5f9d52c12457194/models/model_normalized.obj --> /home/mpl/DeepSDF/data/SurfaceSamples/ShapeNetV2/03001627/df8311076b838c7ea5f9d52c12457194.ply
OpenGL Error: XX (500)
In: /usr/local/include/pangolin/gl/gl.hpp, line 203
DeepSdf - INFO - /home/mpl/ShapeNetCore.v2/03001627/df8374d8f3563be8f1783a44a88d6274/models/model_normalized.obj --> /home/mpl/DeepSDF/data/SurfaceSamples/ShapeNetV2/03001627/df8374d8f3563be8f1783a44a88d6274.ply

Error when preprocessing data

I followed the instructions on how to setup the environment and when I ran the preprocessing script I got many lines with the following two errors.

OpenGL Error 500: GL_INVALID_ENUM: An unacceptable value is specified for an enumerated argument.
In: /usr/local/include/pangolin/gl/gl.hpp, line 205
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc

Unfortunately, nothing is generated in the output folder. I have used the latest versions of all the dependencies. I am running the script on a cloud VM in headless mode. What could be the problem?

Cuda Out of memory Error

Thanks for the amazing work!
I got the following error when I ran "python3 train_deep_sdf.py -e examples/sofas".
What should I do, please?

DeepSdf - INFO - Experiment description:
['This experiment learns a shape representation for sofas ', 'using data from ShapeNet version 2.']
DeepSdf - INFO - training with 1 GPU(s)
DeepSdf - INFO - There are 1628 scenes
DeepSdf - INFO - starting from epoch 1
DeepSdf - INFO - Number of decoder parameters: 1843195
DeepSdf - INFO - Number of shape code parameters: 416768 (# codes 1628, code dim 256)
DeepSdf - INFO - epoch 1...
Traceback (most recent call last):
File "train_deep_sdf.py", line 595, in
main_function(args.experiment_directory, args.continue_from, int(args.batch_split))
File "train_deep_sdf.py", line 499, in main_function
pred_sdf = decoder(input)
File "/home/pgstud/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in call
result = self.forward(*input, **kwargs)
File "/home/pgstud/anaconda3/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
return self.module(*inputs[0], **kwargs[0])
File "/home/pgstud/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in call
result = self.forward(*input, **kwargs)
File "/home/pgstud/DeepSDF/networks/deep_sdf_decoder.py", line 104, in forward
x = F.dropout(x, p=self.dropout_prob, training=self.training)
File "/home/pgstud/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 807, in dropout
else _VF.dropout(input, p, training))
RuntimeError: CUDA out of memory. Tried to allocate 2.00 GiB (GPU 0; 7.93 GiB total capacity; 5.02 GiB already allocated; 1.27 GiB free; 5.02 GiB reserved in total by PyTorch)
(base) pgstud@pgstud-HP-EliteDesk-800-G4-Twr-IDS-APJ:~/DeepSDF$

Single shape reconstruction

Can you provide reconstruction code for a single shape with no latent vector, i.e., given the predicted SDF, reconstruct the mesh? Is there a way to turn off the latent vector during reconstruction in the current code?

Car Example Folder

Hello, thank you for your amazing work.

Could you provide the car example folder and/or the car split file used for the results shown in the paper?

How many epochs are used for training on ShapeNet?

The DeepSDF paper reports that 1000 epochs were used for training. However, the code uses 2000 epochs, and the model does not converge at the 1000-epoch checkpoint. Thus, I'm writing to ask how many epochs should be used. Thank you!

Shape Completion

Hi @tschmidt23

Is there a plan to add the shape completion code to this repo?
If not, how can I go about training the network for shape completion?

Training details of the paper

Dear authors, thank you for your great work on SDF-based 3D deep learning. It is really nice of you to release the code. However, I am a little confused about the training details in the results section. There are four parts (6.1 representing known 3D shapes, 6.2 representing unknown shapes, 6.3 shape completion, 6.4 shape interpolation). Could you kindly clarify the training data used for these four parts? I am not sure whether all categories in ShapeNet are used to train the models for 6.2, 6.3, and 6.4. If not, what is your configuration? Thank you for your kind help.

Headless rendering while preprocessing meshes

I would suggest executing Pangolin headlessly by default when preprocessing meshes, in order to avoid windows popping up (which otherwise steal focus and make the PC unusable during processing).

Something like this should do:

import os
import subprocess

# 'command' is the preprocessing binary invocation assembled by preprocess_data.py.
envir = os.environ.copy()
envir["PANGOLIN_WINDOW_URI"] = "headless://"
subproc = subprocess.Popen(command, stdout=subprocess.DEVNULL, env=envir)

Alternatively, one could mention that export PANGOLIN_WINDOW_URI=headless:// exists in the README.

【Error】preprocess_data.py

When I execute preprocess_data.py, I get the following error.
python3 preprocess_data.py --data_dir data -s ../Dataset/ShapeNetCore.v2/ --name ShapeNetV2 --split examples/splits/sv2_sofas_train.json --skip

../Dataset/ShapeNetCore.v2/04256520/152161d238fbc55d41cf86c757faf4f9/../Dataset/ShapeNetCore.v2/04256520/152161d238fbc55d41cf86c757faf4f9/models/model_normalized.obj
DeepSdf - INFO - ../Dataset/ShapeNetCore.v2/04256520/152161d238fbc55d41cf86c757faf4f9/../Dataset/ShapeNetCore.v2/04256520/152161d238fbc55d41cf86c757faf4f9/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/152161d238fbc55d41cf86c757faf4f9.npz
terminate called after throwing an instance of 'std::runtime_error'
what(): Unable to load OBJ file '../Dataset/ShapeNetCore.v2/04256520/14aa542942c9ef1264dd558a50c0650d/../Dataset/ShapeNetCore.v2/04256520/14aa542942c9ef1264dd558a50c0650d/models/model_normalized.obj'. Error: 'Cannot open file [../Dataset/ShapeNetCore.v2/04256520/14aa542942c9ef1264dd558a50c0650d/../Dataset/ShapeNetCore.v2/04256520/14aa542942c9ef1264dd558a50c0650d/models/model_normalized.obj]

Could not find a package configuration file provided by "CLI11"

I used the 5th method to install CLI11, but I got the following error:


(base) xxx@goodnews:~/projects/DeepSDF/build$ cmake ..
CMake Error at CMakeLists.txt:6 (find_package):
  Could not find a package configuration file provided by "CLI11" with any of
  the following names:

    CLI11Config.cmake
    cli11-config.cmake

  Add the installation prefix of "CLI11" to CMAKE_PREFIX_PATH or set
  "CLI11_DIR" to a directory containing one of the above files.  If "CLI11"
  provides a separate development package or SDK, be sure it has been
  installed.


-- Configuring incomplete, errors occurred!
See also "/home/xxx/projects/DeepSDF/build/CMakeFiles/CMakeOutput.log".

SavePointsToPLY creates faces

When exporting test point clouds via SampleVisibleMeshSurface, the function SavePointsToPLY generates faces, which I think is not the intended behaviour. Also, there are some mislabellings in the CLI11 options (see -s).

I can send a pull request to fix these minor issues, if you want.

Visualize pre-processed SDF values as mesh

Hi, thank you for your code and for your support with the issues.
Is there a way to visualize the mesh that results from the pre-processing stage? The output of preprocess_data.py is only SDF values. Is there a way to get a mesh from these "GT" values?
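
For illustration, a minimal sketch of inspecting the preprocessed samples as a near-surface point cloud (the "pos"/"neg" key names and the (x, y, z, sdf) row layout are assumptions about what preprocess_data.py writes; verify against deep_sdf/data.py):

import numpy as np

# Path to one preprocessed sample file (instance ID taken from the logs above).
npz_path = "data/SdfSamples/ShapeNetV2/04256520/1037fd31d12178d396f164a988ef37cc.npz"
npz = np.load(npz_path)
# Assumed layout: "pos" and "neg" arrays of shape (N, 4), each row (x, y, z, sdf).
samples = np.vstack([npz["pos"], npz["neg"]])
near_surface = samples[np.abs(samples[:, 3]) < 0.01, :3]
# Save as a plain point list, viewable in e.g. MeshLab. For an actual mesh,
# evaluate a trained decoder on a dense grid and run marching cubes instead.
np.savetxt("near_surface.xyz", near_surface)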

Thanks

Bug: All Latent Vectors Optimized After Each Batch

Hi Tanner,

the auto-decoder implementation has a bug that optimizes all latent codes after every single batch, even though only the batch's own latent codes should be optimized. This happens because Adam simply uses the gradients that are stored at each latent vector, regardless of whether those gradients were generated by the current batch or some previous batch. Since the gradients are never cleared, Adam will reuse them after each batch, thus modifying all latent codes every time. (TensorFlow without eager mode would not have this problem.)

This can be verified by printing e.g. the 0-th latent code after every single batch. It changes its values after each batch even though it only occurs in some of the batches.

I have not confirmed this with a clean version of the official code, but I very much suspect it holds true.

I have unfortunately not come up with a good solution for this. Instead, my code abuses the fact that Adam does not update parameters if their gradients are None, see https://pytorch.org/docs/stable/_modules/torch/optim/adam.html#Adam.step : I reset the gradients of the current batch's latent vectors to None after optimizer.step(). This keeps all latent vector gradients as None unless they are in the current batch. As far as I can tell from the Adam code, this should lead to the intended behavior of the momentum part of Adam. After the bugfix, DeepSDF's performance improved not a lot but still quite noticeably for me, both quantitatively and qualitatively.

I use an older version of the code though, I don't know whether my workaround is straightforward when using the embeddings data structure to store the latent codes. It might be possible to modify the parameter groups of Adam. I don't know how that interacts with self.state[p] in step() though, which stores the momentum state. (Maybe "p" is somehow broken as a dictionary key because it comes from the embeddings structure.)

I want to note that the gradients are not somehow magically set to 0 after each batch, neither in the current release nor in my code. Doing so would not lead to the intended behavior because of the momentum part of Adam.
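
A minimal sketch of this workaround, assuming the latent codes are stored as individual torch Parameters (as in the older code version mentioned above); the names optimizer, batch_indices, and lat_vecs are illustrative:

# After each optimizer step, set this batch's latent-code gradients to None.
# Adam's step() skips any parameter whose .grad is None, so codes that are
# absent from the next batch are not updated with stale gradients.
optimizer.step()
for idx in batch_indices:
    lat_vecs[idx].grad = None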

Best regards,
Edgar

DeepSDF results vary with the order of the dataset at inference time

Hi @jstraub Hi @tschmidt23 ,

It seems to me that DeepSDF results (the deep_sdf loss) change with the order of the dataset (the npyfiles list).
I see that decoder.eval() is already set in the code, so both dropout and batch norm are fixed during reconstruction.
Ideally, the reconstructed shape and the corresponding Chamfer distance should be the same for any object, irrespective of its order in the dataset.
Did you notice this issue, and do you know of any fix?

Thanks !!

Has anyone trained on all categories?

Hi!

I wonder if anyone has experience training one model on all categories instead of one model per category? What is the performance like? Thank you!

Account for details

Hi,
Thank you for sharing your great work. I did some quick experiments with ShapeNetCore v1, but I cannot reconstruct "high-frequency details" from the learned latent vector, as shown in the image (GT vs. reconstruction). Are there any parameters I can change to get a more detailed reconstruction?

[image]

Preprocessed files are not generating

I am able to run the preprocessing code, but files are not getting saved in the directory.
(pytorch) nagaharish@nagaharish-Lenovo-Legion-Y7000P-1060:~/Downloads/DeepSDF$ python preprocess_data.py --data_dir data --source ./data/ShapeNetCore.v2/ShapeNetCore.v2/ --name ShapeNetV2 --split examples/splits/sv2_chairs_train.json --skip
DeepSdf - INFO - Preprocessing data from ./data/ShapeNetCore.v2/ShapeNetCore.v2/ and placing the results in data/SdfSamples/ShapeNetV2
data sources stored to data/.datasources.json
./data/ShapeNetCore.v2/ShapeNetCore.v2/03001627/5d20adaf6d8f89fa2f1c10544d7d6f
./data/ShapeNetCore.v2/ShapeNetCore.v2/03001627/79ed181ca18bf71dc8881577d38510
./data/ShapeNetCore.v2/ShapeNetCore.v2/03001627/115b11a77b8d8c3c110a27d1d78196
./data/ShapeNetCore.v2/ShapeNetCore.v2/03001627/2e8748c612c5d3519fb4103277a6b93
./data/ShapeNetCore.v2/ShapeNetCore.v2/03001627/3ab0a1dcb23aa0f620bea10952746d3
./data/ShapeNetCore.v2/ShapeNetCore.v2/03001627/4ab439279e665e08410fc47639efb60
./data/ShapeNetCore.v2/ShapeNetCore.v2/03001627/6c5b15a19101456219cb07ecb5b4102
./data/ShapeNetCore.v2/ShapeNetCore.v2/03001627/6fe690a6f597351162fd10b0938dcb5
./data/ShapeNetCore.v2/ShapeNetCore.v2/03001627/7e8b24aab1f2681e595557081060d0b
./data/ShapeNetCore.v2/ShapeNetCore.v2/03001627/8ad1db95b5b9d60136d9cfd13835101
DeepSdf - INFO - ./data/ShapeNetCore.v2/ShapeNetCore.v2/03001627/5d20adaf6d8f89fa2f1c10544d7d6f/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/03001627/5d20adaf6d8f89fa2f1c10544d7d6f.npz
DeepSdf - INFO - ./data/ShapeNetCore.v2/ShapeNetCore.v2/03001627/79ed181ca18bf71dc8881577d38510/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/03001627/79ed181ca18bf71dc8881577d38510.npz
DeepSdf - INFO - ./data/ShapeNetCore.v2/ShapeNetCore.v2/03001627/115b11a77b8d8c3c110a27d1d78196/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/03001627/115b11a77b8d8c3c110a27d1d78196.npz
DeepSdf - INFO - ./data/ShapeNetCore.v2/ShapeNetCore.v2/03001627/2e8748c612c5d3519fb4103277a6b93/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/03001627/2e8748c612c5d3519fb4103277a6b93.npz
DeepSdf - INFO - ./data/ShapeNetCore.v2/ShapeNetCore.v2/03001627/3ab0a1dcb23aa0f620bea10952746d3/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/03001627/3ab0a1dcb23aa0f620bea10952746d3.npz
DeepSdf - INFO - ./data/ShapeNetCore.v2/ShapeNetCore.v2/03001627/4ab439279e665e08410fc47639efb60/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/03001627/4ab439279e665e08410fc47639efb60.npz
DeepSdf - INFO - ./data/ShapeNetCore.v2/ShapeNetCore.v2/03001627/6c5b15a19101456219cb07ecb5b4102/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/03001627/6c5b15a19101456219cb07ecb5b4102.npz
DeepSdf - INFO - ./data/ShapeNetCore.v2/ShapeNetCore.v2/03001627/6fe690a6f597351162fd10b0938dcb5/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/03001627/6fe690a6f597351162fd10b0938dcb5.npz
DeepSdf - INFO - ./data/ShapeNetCore.v2/ShapeNetCore.v2/03001627/7e8b24aab1f2681e595557081060d0b/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/03001627/7e8b24aab1f2681e595557081060d0b.npz
DeepSdf - INFO - ./data/ShapeNetCore.v2/ShapeNetCore.v2/03001627/8ad1db95b5b9d60136d9cfd13835101/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/03001627/8ad1db95b5b9d60136d9cfd13835101.npz

My SdfSamples directory is still empty for the corresponding class (here, chairs).

Training multiple classes together

Thank you for your impressive work! I have a problem when I train with three classes of data together: the test results are strange. I used airplane, sofa, and table for training, and trained for a total of 500 epochs. The test results are shown below; the first is the test result for the airplane, and the second is the test result for the table.

[image]
[image]

Why does this happen? Are there any points to pay attention to during training?

CMake errors

Hello, thank you for your amazing work.
Could you please provide me with information on how to get the project working? I've been trying to do that but got stuck at the cmake step with the following errors:

Performing C++ SOURCE FILE Test COMPILER_SUPPORT_Wshorten64to32 failed with the following output:
Change Dir: /home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp

Run Build Command:"/usr/bin/make" "cmTC_db43e/fast"
/usr/bin/make -f CMakeFiles/cmTC_db43e.dir/build.make CMakeFiles/cmTC_db43e.dir/build
make[1]: Entering directory '/home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp'
Building CXX object CMakeFiles/cmTC_db43e.dir/src.cxx.o
/usr/bin/c++ -std=c++03 -pedantic -Wall -Wextra -Wundef -Wcast-align -Wchar-subscripts -Wnon-virtual-dtor -Wunused-local-typedefs -Wpointer-arith -Wwrite-strings -Wformat-security -DCOMPILER_SUPPORT_Wshorten64to32 -Werror -Wshorten-64-to-32 -o CMakeFiles/cmTC_db43e.dir/src.cxx.o -c /home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp/src.cxx
c++: error: unrecognized command line option '-Wshorten-64-to-32'
CMakeFiles/cmTC_db43e.dir/build.make:65: recipe for target 'CMakeFiles/cmTC_db43e.dir/src.cxx.o' failed
make[1]: *** [CMakeFiles/cmTC_db43e.dir/src.cxx.o] Error 1
make[1]: Leaving directory '/home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp'
Makefile:126: recipe for target 'cmTC_db43e/fast' failed
make: *** [cmTC_db43e/fast] Error 2

Source file was:
int main() { return 0; }
Performing C++ SOURCE FILE Test COMPILER_SUPPORT_Wenumconversion failed with the following output:
Change Dir: /home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp

Run Build Command:"/usr/bin/make" "cmTC_1b0e2/fast"
/usr/bin/make -f CMakeFiles/cmTC_1b0e2.dir/build.make CMakeFiles/cmTC_1b0e2.dir/build
make[1]: Entering directory '/home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp'
Building CXX object CMakeFiles/cmTC_1b0e2.dir/src.cxx.o
/usr/bin/c++ -std=c++03 -pedantic -Wall -Wextra -Wundef -Wcast-align -Wchar-subscripts -Wnon-virtual-dtor -Wunused-local-typedefs -Wpointer-arith -Wwrite-strings -Wformat-security -Wlogical-op -DCOMPILER_SUPPORT_Wenumconversion -Werror -Wenum-conversion -o CMakeFiles/cmTC_1b0e2.dir/src.cxx.o -c /home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp/src.cxx
c++: error: unrecognized command line option '-Wenum-conversion'; did you mean '-Wno-conversion'?
CMakeFiles/cmTC_1b0e2.dir/build.make:65: recipe for target 'CMakeFiles/cmTC_1b0e2.dir/src.cxx.o' failed
make[1]: *** [CMakeFiles/cmTC_1b0e2.dir/src.cxx.o] Error 1
make[1]: Leaving directory '/home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp'
Makefile:126: recipe for target 'cmTC_1b0e2/fast' failed
make: *** [cmTC_1b0e2/fast] Error 2

Source file was:
int main() { return 0; }
Performing C++ SOURCE FILE Test COMPILER_SUPPORT_Wcpp11extensions failed with the following output:
Change Dir: /home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp

Run Build Command:"/usr/bin/make" "cmTC_a44b3/fast"
/usr/bin/make -f CMakeFiles/cmTC_a44b3.dir/build.make CMakeFiles/cmTC_a44b3.dir/build
make[1]: Entering directory '/home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp'
Building CXX object CMakeFiles/cmTC_a44b3.dir/src.cxx.o
/usr/bin/c++ -std=c++03 -pedantic -Wall -Wextra -Wundef -Wcast-align -Wchar-subscripts -Wnon-virtual-dtor -Wunused-local-typedefs -Wpointer-arith -Wwrite-strings -Wformat-security -Wlogical-op -DCOMPILER_SUPPORT_Wcpp11extensions -Werror -Wc++11-extensions -o CMakeFiles/cmTC_a44b3.dir/src.cxx.o -c /home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp/src.cxx
c++: error: unrecognized command line option '-Wc++11-extensions'; did you mean '-fms-extensions'?
CMakeFiles/cmTC_a44b3.dir/build.make:65: recipe for target 'CMakeFiles/cmTC_a44b3.dir/src.cxx.o' failed
make[1]: *** [CMakeFiles/cmTC_a44b3.dir/src.cxx.o] Error 1
make[1]: Leaving directory '/home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp'
Makefile:126: recipe for target 'cmTC_a44b3/fast' failed
make: *** [cmTC_a44b3/fast] Error 2

Source file was:
int main() { return 0; }
Performing C++ SOURCE FILE Test COMPILER_SUPPORT_wd981 failed with the following output:
Change Dir: /home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp

Run Build Command:"/usr/bin/make" "cmTC_c05a7/fast"
/usr/bin/make -f CMakeFiles/cmTC_c05a7.dir/build.make CMakeFiles/cmTC_c05a7.dir/build
make[1]: Entering directory '/home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp'
Building CXX object CMakeFiles/cmTC_c05a7.dir/src.cxx.o
/usr/bin/c++ -std=c++03 -pedantic -Wall -Wextra -Wundef -Wcast-align -Wchar-subscripts -Wnon-virtual-dtor -Wunused-local-typedefs -Wpointer-arith -Wwrite-strings -Wformat-security -Wlogical-op -Wdouble-promotion -Wshadow -Wno-psabi -Wno-variadic-macros -Wno-long-long -fno-check-new -fno-common -fstrict-aliasing -DCOMPILER_SUPPORT_wd981 -Werror -wd981 -o CMakeFiles/cmTC_c05a7.dir/src.cxx.o -c /home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp/src.cxx
c++: error: unrecognized command line option '-wd981'
CMakeFiles/cmTC_c05a7.dir/build.make:65: recipe for target 'CMakeFiles/cmTC_c05a7.dir/src.cxx.o' failed
make[1]: *** [CMakeFiles/cmTC_c05a7.dir/src.cxx.o] Error 1
make[1]: Leaving directory '/home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp'
Makefile:126: recipe for target 'cmTC_c05a7/fast' failed
make: *** [cmTC_c05a7/fast] Error 2

Source file was:
int main() { return 0; }
Performing C++ SOURCE FILE Test COMPILER_SUPPORT_wd2304 failed with the following output:
Change Dir: /home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp

Run Build Command:"/usr/bin/make" "cmTC_97afa/fast"
/usr/bin/make -f CMakeFiles/cmTC_97afa.dir/build.make CMakeFiles/cmTC_97afa.dir/build
make[1]: Entering directory '/home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp'
Building CXX object CMakeFiles/cmTC_97afa.dir/src.cxx.o
/usr/bin/c++ -std=c++03 -pedantic -Wall -Wextra -Wundef -Wcast-align -Wchar-subscripts -Wnon-virtual-dtor -Wunused-local-typedefs -Wpointer-arith -Wwrite-strings -Wformat-security -Wlogical-op -Wdouble-promotion -Wshadow -Wno-psabi -Wno-variadic-macros -Wno-long-long -fno-check-new -fno-common -fstrict-aliasing -DCOMPILER_SUPPORT_wd2304 -Werror -wd2304 -o CMakeFiles/cmTC_97afa.dir/src.cxx.o -c /home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp/src.cxx
c++: error: unrecognized command line option '-wd2304'
CMakeFiles/cmTC_97afa.dir/build.make:65: recipe for target 'CMakeFiles/cmTC_97afa.dir/src.cxx.o' failed
make[1]: *** [CMakeFiles/cmTC_97afa.dir/src.cxx.o] Error 1
make[1]: Leaving directory '/home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp'
Makefile:126: recipe for target 'cmTC_97afa/fast' failed
make: *** [cmTC_97afa/fast] Error 2

Source file was:
int main() { return 0; }
Performing C++ SOURCE FILE Test COMPILER_SUPPORT_STRICTANSI failed with the following output:
Change Dir: /home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp

Run Build Command:"/usr/bin/make" "cmTC_c1134/fast"
/usr/bin/make -f CMakeFiles/cmTC_c1134.dir/build.make CMakeFiles/cmTC_c1134.dir/build
make[1]: Entering directory '/home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp'
Building CXX object CMakeFiles/cmTC_c1134.dir/src.cxx.o
/usr/bin/c++ -std=c++03 -pedantic -Wall -Wextra -Wundef -Wcast-align -Wchar-subscripts -Wnon-virtual-dtor -Wunused-local-typedefs -Wpointer-arith -Wwrite-strings -Wformat-security -Wlogical-op -Wdouble-promotion -Wshadow -Wno-psabi -Wno-variadic-macros -Wno-long-long -fno-check-new -fno-common -fstrict-aliasing -DCOMPILER_SUPPORT_STRICTANSI -Werror -strict-ansi -o CMakeFiles/cmTC_c1134.dir/src.cxx.o -c /home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp/src.cxx
c++: error: unrecognized command line option '-strict-ansi'; did you mean '-fstrict-enums'?
CMakeFiles/cmTC_c1134.dir/build.make:65: recipe for target 'CMakeFiles/cmTC_c1134.dir/src.cxx.o' failed
make[1]: *** [CMakeFiles/cmTC_c1134.dir/src.cxx.o] Error 1
make[1]: Leaving directory '/home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp'
Makefile:126: recipe for target 'cmTC_c1134/fast' failed
make: *** [cmTC_c1134/fast] Error 2

Source file was:
int main() { return 0; }
Performing C++ SOURCE FILE Test COMPILER_SUPPORT_Qunusedarguments failed with the following output:
Change Dir: /home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp

Run Build Command:"/usr/bin/make" "cmTC_c7431/fast"
/usr/bin/make -f CMakeFiles/cmTC_c7431.dir/build.make CMakeFiles/cmTC_c7431.dir/build
make[1]: Entering directory '/home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp'
Building CXX object CMakeFiles/cmTC_c7431.dir/src.cxx.o
/usr/bin/c++ -std=c++03 -pedantic -Wall -Wextra -Wundef -Wcast-align -Wchar-subscripts -Wnon-virtual-dtor -Wunused-local-typedefs -Wpointer-arith -Wwrite-strings -Wformat-security -Wlogical-op -Wdouble-promotion -Wshadow -Wno-psabi -Wno-variadic-macros -Wno-long-long -fno-check-new -fno-common -fstrict-aliasing -DCOMPILER_SUPPORT_Qunusedarguments -Werror -Qunused-arguments -o CMakeFiles/cmTC_c7431.dir/src.cxx.o -c /home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp/src.cxx
c++: error: unrecognized command line option '-Qunused-arguments'; did you mean '-Wunused-parameter'?
CMakeFiles/cmTC_c7431.dir/build.make:65: recipe for target 'CMakeFiles/cmTC_c7431.dir/src.cxx.o' failed
make[1]: *** [CMakeFiles/cmTC_c7431.dir/src.cxx.o] Error 1
make[1]: Leaving directory '/home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp'
Makefile:126: recipe for target 'cmTC_c7431/fast' failed
make: *** [cmTC_c7431/fast] Error 2

Source file was:
int main() { return 0; }
Determining if the pthread_create exist failed with the following output:
Change Dir: /home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp

Run Build Command:"/usr/bin/make" "cmTC_51976/fast"
/usr/bin/make -f CMakeFiles/cmTC_51976.dir/build.make CMakeFiles/cmTC_51976.dir/build
make[1]: Entering directory '/home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp'
Building C object CMakeFiles/cmTC_51976.dir/CheckSymbolExists.c.o
/usr/bin/cc -o CMakeFiles/cmTC_51976.dir/CheckSymbolExists.c.o -c /home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp/CheckSymbolExists.c
Linking C executable cmTC_51976
/usr/bin/cmake -E cmake_link_script CMakeFiles/cmTC_51976.dir/link.txt --verbose=1
/usr/bin/cc -rdynamic CMakeFiles/cmTC_51976.dir/CheckSymbolExists.c.o -o cmTC_51976
CMakeFiles/cmTC_51976.dir/CheckSymbolExists.c.o: In function `main':
CheckSymbolExists.c:(.text+0x1b): undefined reference to `pthread_create'
collect2: error: ld returned 1 exit status
CMakeFiles/cmTC_51976.dir/build.make:97: recipe for target 'cmTC_51976' failed
make[1]: *** [cmTC_51976] Error 1
make[1]: Leaving directory '/home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp'
Makefile:126: recipe for target 'cmTC_51976/fast' failed
make: *** [cmTC_51976/fast] Error 2

File /home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp/CheckSymbolExists.c:
/* */
#include <pthread.h>

int main(int argc, char** argv)
{
  (void)argv;
#ifndef pthread_create
  return ((int*)(&pthread_create))[argc];
#else
  (void)argc;
  return 0;
#endif
}

Determining if the function pthread_create exists in the pthreads failed with the following output:
Change Dir: /home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp

Run Build Command:"/usr/bin/make" "cmTC_91b0e/fast"
/usr/bin/make -f CMakeFiles/cmTC_91b0e.dir/build.make CMakeFiles/cmTC_91b0e.dir/build
make[1]: Entering directory '/home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp'
Building C object CMakeFiles/cmTC_91b0e.dir/CheckFunctionExists.c.o
/usr/bin/cc -DCHECK_FUNCTION_EXISTS=pthread_create -o CMakeFiles/cmTC_91b0e.dir/CheckFunctionExists.c.o -c /usr/share/cmake-3.10/Modules/CheckFunctionExists.c
Linking C executable cmTC_91b0e
/usr/bin/cmake -E cmake_link_script CMakeFiles/cmTC_91b0e.dir/link.txt --verbose=1
/usr/bin/cc -DCHECK_FUNCTION_EXISTS=pthread_create -rdynamic CMakeFiles/cmTC_91b0e.dir/CheckFunctionExists.c.o -o cmTC_91b0e -lpthreads
/usr/bin/ld: cannot find -lpthreads
collect2: error: ld returned 1 exit status
CMakeFiles/cmTC_91b0e.dir/build.make:97: recipe for target 'cmTC_91b0e' failed
make[1]: *** [cmTC_91b0e] Error 1
make[1]: Leaving directory '/home/sirnova/Pictures/DeepSDF/build/CMakeFiles/CMakeTmp'
Makefile:126: recipe for target 'cmTC_91b0e/fast' failed
make: *** [cmTC_91b0e/fast] Error 2

Issue with pre-processing

When I ran the following command, I got warnings that no mesh was found for many of the instances.

Command
python preprocess_data.py --data_dir data --source [...]/ShapeNetCore.v2/ --name ShapeNetV2 --split examples/splits/sv2_sofas_train.json --skip

Warnings
DeepSdf - WARNING - No mesh found for instance 1037fd31d12178d396f164a988ef37cc
DeepSdf - WARNING - No mesh found for instance 104256e5bb73b0b719fb4103277a6b93
DeepSdf - WARNING - No mesh found for instance 1053897adff12c7839c40eb1ac71e4c1
... ...
... ...
DeepSdf - WARNING - No mesh found for instance cc5f1f064a1ba342cbdb36da0ec8fda6

I downloaded ShapeNetCore.v2, and there are meshes on disk for those instances, but they are not found. Has anyone encountered the same issue? Could you let me know how to fix this problem?
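One way to narrow this down is to check, independently of preprocess_data.py, whether the files the script expects actually exist on disk. Below is a minimal sketch, assuming the split JSON maps dataset name to class to instance list (as in examples/splits/) and the <class>/<instance>/models/model_normalized.obj layout of ShapeNetCore.v2; the source path is a placeholder:

import json
import os

source = "/path/to/ShapeNetCore.v2"  # placeholder: your --source directory
split_file = "examples/splits/sv2_sofas_train.json"

with open(split_file) as f:
    split = json.load(f)

# Split files map dataset name -> class (synset) -> list of instance names.
missing = []
for dataset, classes in split.items():
    for class_name, instances in classes.items():
        for instance in instances:
            mesh = os.path.join(source, class_name, instance,
                                "models", "model_normalized.obj")
            if not os.path.isfile(mesh):
                missing.append(mesh)

print(len(missing), "meshes missing")
for path in missing[:10]:
    print(path)

If this sketch reports the same instances as missing, the problem is likely the directory layout (for example, an extra nesting level or a mismatched synset name) rather than the preprocessing code itself.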

Using GPU for preprocessing

Hi,

I'm running Ubuntu 18.04. I notice that during the training stage the model is trained on the GPU even though I don't have CUDA installed. Is it possible at all to preprocess the data using a GPU?

Thanks.
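As a side note: the preprocessing binaries render through OpenGL (which is why they need a GL context and can fail with the GL errors shown in the issue below), so any GPU use during preprocessing goes through the graphics driver rather than CUDA; CUDA is only relevant for training. To confirm what PyTorch itself can see, a quick check using standard PyTorch calls:

import torch

# True only if a CUDA-capable GPU and a matching driver are visible to PyTorch.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))

If this prints False, PyTorch cannot be using the GPU and training is in fact running on the CPU.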

Error during data preprocessing: 'GLSL Shader compilation failed: error: GLSL 3.30 is not supported'

I am trying to run preprocess_data.py via this command:
python preprocess_data.py --data_dir data --source /smartscan/ShapeNetCore.v2/ --name ShapeNetV2 --split examples/splits/sv2_sofas_train.json --skip

DeepSdf - INFO - Preprocessing data from /smartscan/ShapeNetCore.v2/ and placing the results in data/SdfSamples/ShapeNetV2
data sources stored to data/.datasources.json
DeepSdf - INFO - /smartscan/ShapeNetCore.v2/04256520/1037fd31d12178d396f164a988ef37cc/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/1037fd31d12178d396f164a988ef37cc.npz
DeepSdf - INFO - /smartscan/ShapeNetCore.v2/04256520/104256e5bb73b0b719fb4103277a6b93/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/104256e5bb73b0b719fb4103277a6b93.npz
DeepSdf - INFO - /smartscan/ShapeNetCore.v2/04256520/1053897adff12c7839c40eb1ac71e4c1/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/1053897adff12c7839c40eb1ac71e4c1.npz
DeepSdf - INFO - /smartscan/ShapeNetCore.v2/04256520/10552f968486cd0ad138a53ab0d038a5/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/10552f968486cd0ad138a53ab0d038a5.npz
DeepSdf - INFO - /smartscan/ShapeNetCore.v2/04256520/105849baff12c6fc2bf2dcc31ba1713/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/105849baff12c6fc2bf2dcc31ba1713.npz
Framebuffer with requested attributes not available. Using available framebuffer. You may see visual artifacts.
DeepSdf - INFO - /smartscan/ShapeNetCore.v2/04256520/107637b6bdf8129d4904d89e9169817b/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/107637b6bdf8129d4904d89e9169817b.npz
Framebuffer with requested attributes not available. Using available framebuffer. You may see visual artifacts.
DeepSdf - INFO - /smartscan/ShapeNetCore.v2/04256520/107bce22d72f322eedf1bb0b62653056/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/107bce22d72f322eedf1bb0b62653056.npz
DeepSdf - INFO - /smartscan/ShapeNetCore.v2/04256520/10e0543e6e316dca30b07c64830a47f3/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/10e0543e6e316dca30b07c64830a47f3.npz
Framebuffer with requested attributes not available. Using available framebuffer. You may see visual artifacts.
Framebuffer with requested attributes not available. Using available framebuffer. You may see visual artifacts.
Framebuffer with requested attributes not available. Using available framebuffer. You may see visual artifacts.
Framebuffer with requested attributes not available. Using available framebuffer. You may see visual artifacts.
Framebuffer with requested attributes not available. Using available framebuffer. You may see visual artifacts.
Framebuffer with requested attributes not available. Using available framebuffer. You may see visual artifacts.
DeepSdf - INFO - /smartscan/ShapeNetCore.v2/04256520/113a2544e062127d79414e04132a8bef/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/113a2544e062127d79414e04132a8bef.npz
DeepSdf - INFO - /smartscan/ShapeNetCore.v2/04256520/117c47d75798788a5506ead0b132904c/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/117c47d75798788a5506ead0b132904c.npz
Framebuffer with requested attributes not available. Using available framebuffer. You may see visual artifacts.
DeepSdf - INFO - /smartscan/ShapeNetCore.v2/04256520/117f6ac4bcd75d8b4ad65adb06bbae49/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/117f6ac4bcd75d8b4ad65adb06bbae49.npz
Framebuffer with requested attributes not available. Using available framebuffer. You may see visual artifacts.
DeepSdf - INFO - /smartscan/ShapeNetCore.v2/04256520/118a7d6a1dfbbc14300703f05f8ccc25/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/118a7d6a1dfbbc14300703f05f8ccc25.npz
DeepSdf - INFO - /smartscan/ShapeNetCore.v2/04256520/11a47d5cdd42a5104b3c42e318f3affc/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/11a47d5cdd42a5104b3c42e318f3affc.npz
DeepSdf - INFO - /smartscan/ShapeNetCore.v2/04256520/11b36d8f9025062513d2510999d0f1d2/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/11b36d8f9025062513d2510999d0f1d2.npz
Framebuffer with requested attributes not available. Using available framebuffer. You may see visual artifacts.
Framebuffer with requested attributes not available. Using available framebuffer. You may see visual artifacts.
DeepSdf - INFO - /smartscan/ShapeNetCore.v2/04256520/11b544b22dedb59c654ea6737b0d3597/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/11b544b22dedb59c654ea6737b0d3597.npz
DeepSdf - INFO - /smartscan/ShapeNetCore.v2/04256520/11be630221243013c087ef7d7cf00301/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/11be630221243013c087ef7d7cf00301.npz
Framebuffer with requested attributes not available. Using available framebuffer. You may see visual artifacts.
Framebuffer with requested attributes not available. Using available framebuffer. You may see visual artifacts.
DeepSdf - INFO - /smartscan/ShapeNetCore.v2/04256520/11f31367f34bfea04b3c42e318f3affc/models/model_normalized.obj --> data/SdfSamples/ShapeNetV2/04256520/11f31367f34bfea04b3c42e318f3affc.npz
GLSL Shader compilation failed: <string>:
0:1(10): error: GLSL 3.30 is not supported. Supported versions are: 1.10, 1.20, 1.30, 1.00 ES, and 3.00 ES

So there are these two recurring messages, a warning:
Framebuffer with requested attributes not available. Using available framebuffer. You may see visual artifacts.
and an error:
GLSL Shader compilation failed: <string>: 0:1(10): error: GLSL 3.30 is not supported. Supported versions are: 1.10, 1.20, 1.30, 1.00 ES, and 3.00 ES

Note that this was run in a Docker container (Ubuntu 16.04). Also, I use Xvfb as a dummy display, started via these commands:
Xvfb :20 -screen 0 1366x768x16 &
export DISPLAY=:20
Also, I have checked the OpenGL version.

glxinfo | grep "Max"
    Max core profile version: 3.3
    Max compat profile version: 3.0
    Max GLES1 profile version: 1.1
    Max GLES[23] profile version: 3.0

It seems the program used the compat profile (3.0) instead of the core profile (3.3).
How can I solve these problems?
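One avenue worth trying: glxinfo shows GLSL 3.30 only in the core profile, and the error suggests the GL context was created against the compatibility profile. With Mesa drivers (including the software renderers typically used inside Docker with Xvfb), the advertised GL/GLSL versions can be forced with Mesa's standard override variables; note this is a Mesa feature, not anything DeepSDF-specific, and a workaround rather than a guaranteed fix:

export MESA_GL_VERSION_OVERRIDE=3.3
export MESA_GLSL_VERSION_OVERRIDE=330

Export these in the same shell before launching preprocess_data.py so the spawned sampling binaries inherit them.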
