
3dsnet's Introduction

3DSNet: Unsupervised Shape-to-shape 3D Style Transfer

This repository contains the code for our learning-based 3D style transfer method described in "3DSNet: Unsupervised Shape-to-shape 3D Style Transfer". The code used to train and evaluate our framework on the ShapeNet dataset is provided here and ready to use.

If you find this code useful for your project, please consider citing our paper:

@article{segu20203dsnet,
  title={3DSNet: Unsupervised Shape-to-Shape 3D Style Transfer},
  author={Segu, Mattia and Grinvald, Margarita and Siegwart, Roland and Tombari, Federico},
  journal={arXiv preprint arXiv:2011.13388},
  year={2020}
}

Reconstruction and style transfer results with 3DSNet on the armchair-chair category.

Video

A video of our results will soon be available.

Prerequisites

Install

This implementation uses Python 3.6, PyTorch, PyMesh, and CUDA 10.1.

# Copy-Paste the snippet in a terminal
git clone --recurse-submodules https://github.com/ethz-asl/3dsnet.git
cd 3dsnet 

# Install dependencies
conda create -n 3dsnet python=3.6 --yes
conda activate 3dsnet

conda install pytorch torchvision cudatoolkit=10.1 -c pytorch --yes
conda install -y -c conda-forge pyembree
conda install -y -c conda-forge trimesh seaborn
conda install -c conda-forge -c fvcore fvcore
conda install pytorch3d -c pytorch3d

pip install git+https://github.com/rtqichen/torchdiffeq torchvision
pip3 install git+https://github.com/cnr-isti-vclab/PyMeshLab
pip install --user --requirement requirements.txt # pip dependencies

Chumpy installation with pip is currently broken with pip version 20.1. Please use pip 20.0.2 until the chumpy issue is fixed.
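Once the dependencies are installed, a quick sanity check along these lines (a minimal sketch, not part of this repository) confirms that the core packages import correctly and that PyTorch sees the GPU:

# Minimal sanity check for the 3dsnet environment
import torch
import pytorch3d
import trimesh

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())  # expect True with cudatoolkit=10.1
print("PyTorch3D:", pytorch3d.__version__)
print("trimesh:", trimesh.__version__)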

Compile Chamfer (MIT) + Metro Distance (GPL3 Licence)

# Copy/Paste the snippet in a terminal
python auxiliary/ChamferDistancePytorch/chamfer3D/setup.py install #MIT
cd auxiliary
git clone https://github.com/ThibaultGROUEIX/metro_sources.git
cd metro_sources; python setup.py --build # build metro distance #GPL3
cd ../..
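To verify that the Chamfer extension compiled correctly, you can run a small smoke test from the repository root. This is a sketch assuming the module layout of the ChamferDistancePytorch submodule (chamfer_3DDist); adjust the import path if your checkout differs:

# Smoke test for the compiled Chamfer extension (requires a CUDA GPU)
import torch
from auxiliary.ChamferDistancePytorch.chamfer3D.dist_chamfer_3D import chamfer_3DDist

chamfer = chamfer_3DDist()
a = torch.rand(1, 1000, 3).cuda()  # random point cloud A: (batch, n_points, xyz)
b = torch.rand(1, 1000, 3).cuda()  # random point cloud B
dist_a, dist_b, _, _ = chamfer(a, b)  # per-point squared nearest-neighbor distances
print("Symmetric Chamfer distance:", (dist_a.mean() + dist_b.mean()).item())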

Before running the code

Auxiliary models

Please download all auxiliary models needed for our framework from the aux_models folder HERE and place them at .../3dsnet/aux_models.

Data

The publicly provided pre-trained models are trained on the ShapeNet dataset. The point cloud version of the dataset is downloaded automatically when running the code for the first time.

To seamlessly run the code, please also download the ShapeNet Core V1 dataset from HERE and move it to the .../ShapeNet subdirectory of the chosen opt.data_dir, which defaults to dataset/data/ in argument_parser.py. In the same folder, also put the all.csv file containing the training, validation, and test splits for ShapeNet Core V1. You can download it from HERE, or from the original page if it is available in the download section. Note that, although V2 is already available, we used ShapeNet V1 for compatibility with the point cloud version originally provided with the official AtlasNet implementation.
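Before training, a short script like the following (a sketch; the paths mirror the defaults described above, so adjust them if you changed opt.data_dir) can verify that everything is in place:

# Check the data and auxiliary-model layout described in this README
import os

data_dir = "dataset/data"  # default opt.data_dir from argument_parser.py
expected = [
    os.path.join(data_dir, "ShapeNet"),  # ShapeNet Core V1
    os.path.join(data_dir, "all.csv"),   # train/val/test splits (may live inside ShapeNet/ instead)
    "aux_models",                        # auxiliary models, relative to the repo root
]
for path in expected:
    print(path, "->", "found" if os.path.exists(path) else "MISSING")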

Pre-trained Models

You can find pre-trained models for our framework in the 3dsnet_models folder HERE.

Running the code

You can play with different parameter configurations by changing them directly in the provided training/evaluation/demo scripts or in argument_parser.py.

For further details, please refer to the parameter descriptions in argument_parser.py.

Training

You can easily start training 3DSNet by launching one of the provided training scripts:

./train_chairs.sh

or

./train_planes.sh

Evaluation

You can easily start evaluating a pretrained model by launching the provided evaluation script:

./eval.sh

Please modify the RELOAD_MODEL_PATH parameter according to the model you wish to evaluate.

Demo

The provided demo script generates multiple 3D objects and their latent-space interpolations from a pretrained model.

./demo.sh

Please modify the RELOAD_MODEL_PATH parameter according to the model you wish to use.

Acknowledgments

The code in this repository is built on the official implementation of AtlasNet.

The implementation of our adaptive Meshflow decoder is based on the official Meshflow implementation.

If you cite our work, please consider citing theirs as well.

@inproceedings{groueix2018papier,
  title={A papier-m{\^a}ch{\'e} approach to learning 3d surface generation},
  author={Groueix, Thibault and Fisher, Matthew and Kim, Vladimir G and Russell, Bryan C and Aubry, Mathieu},
  booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
  pages={216--224},
  year={2018}
}
@article{gupta2020neural,
  title={Neural mesh flow: 3d manifold mesh generation via diffeomorphic flows},
  author={Gupta, Kunal and Chandraker, Manmohan},
  journal={arXiv preprint arXiv:2007.10973},
  year={2020}
}

3dsnet's People

Contributors

margaritag, mattiasegu


3dsnet's Issues

DataSize of training samples

Hi, thanks for your wonderful code; it has been very helpful for me.
However, I am uncertain about one detail when conducting experiments:
in subsection 4.4 of the 3DSNet paper, the numbers of armchairs and straight chairs are given as 1995 and 1974 respectively, while I got 794 and 1240 in practice. (I downloaded the experiment data from the link offered in download_shapenet_pointclouds.sh.)
Looking forward to your reply!

Possible typo in adaptive norm

Should lines 351/354 and lines 359/362 use norm2 and norm3 instead of norm1?

3dsnet/model/meshflow.py

Lines 338 to 362 in e823742

sph = self.norm0(sph, content_latent_vector, adain_params[:, 0:3 * 2])
# First Deform Block computation and its instance norm
pred_y1, _ = self.db1(content_latent_vector, sph, None, time)
if self.adaptive:
    pred_y1 = self.norm1(pred_y1, content_latent_vector,
                         adain_params[:, 3 * 2:3 * 4])
else:
    pred_y1 = self.norm1(pred_y1, content_latent_vector)
# Second Deform Block computation and its instance norm
pred_y2, _ = self.db2(content_latent_vector, pred_y1, None, time)
if self.adaptive:
    pred_y2 = self.norm1(pred_y2, content_latent_vector,
                         adain_params[:, 3 * 4:3 * 6])
else:
    pred_y2 = self.norm1(pred_y2, content_latent_vector)
# Third Deform Block computation and its instance norm
pred_y3, _ = self.db3(content_latent_vector, pred_y2, None, time)
if self.adaptive:
    pred_y3 = self.norm1(pred_y3, content_latent_vector,
                         adain_params[:, 3 * 6:3 * 8])
else:
    pred_y3 = self.norm1(pred_y3, content_latent_vector)
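For reference, if the diagnosis above is right, the corrected blocks would presumably read as follows (a sketch only, assuming norm2 and norm3 exist alongside norm0 and norm1 in the decoder):

# Presumed fix (sketch): use the norm layer matching each deform block
if self.adaptive:
    pred_y2 = self.norm2(pred_y2, content_latent_vector,
                         adain_params[:, 3 * 4:3 * 6])
else:
    pred_y2 = self.norm2(pred_y2, content_latent_vector)

if self.adaptive:
    pred_y3 = self.norm3(pred_y3, content_latent_vector,
                         adain_params[:, 3 * 6:3 * 8])
else:
    pred_y3 = self.norm3(pred_y3, content_latent_vector)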

run

Why do I get 'No module named chamfer_3D'?

Number of points in data_a and data_b in demo

Dear Mattia,

Sorry to bother you, but I'm having difficulty understanding the demo.sh code. I guess demo.sh is meant to test our trained networks on our test dataset? Please correct me if I'm wrong.

The main part I don't understand is line 509 in trainer.py:
data_a = EasyDict(self.datasets.dataset_test[self.classes[0]][index_a]) (same for data_b)

I think data_a['points'] should be the normalised and downsampled shape, and I can see it has 2500 points. However, I ran train.sh and demo.sh with --number_points=642 and --decoder_type='atlasnet', so I don't understand why in demo.sh each loaded sample has 2500 points rather than 642.

Thank you in advance. Looking forward to your help!

Sincerely,
Wei
