
dmesh's Introduction

DMesh: A Differentiable Representation for General Meshes

By computing an existence probability for each face in a mesh, DMesh offers a way to represent a general triangular mesh in a differentiable manner. Please refer to our arXiv preprint, full paper, and project website for more details.
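As a toy illustration of the idea (not the actual DMesh formulation), assigning each candidate face an existence probability turns discrete mesh quantities into smooth expectations that can be optimized by gradient descent:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def expected_surface_area(face_areas, face_logits):
    """Expected total area when face i exists with probability sigmoid(logit_i).

    Unlike a hard include/exclude decision per face, this expectation is a
    smooth function of the logits, so it admits gradients.
    """
    probs = sigmoid(np.asarray(face_logits, dtype=float))
    return float(np.sum(probs * np.asarray(face_areas, dtype=float)))
```

Here `face_areas` and `face_logits` are hypothetical inputs; the real method computes face probabilities from a weighted Delaunay triangulation of the points.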

Teaser image

Installation

(This guide is mainly written for the Ubuntu environment. Please see here for installing our project on Windows.) Please clone this repository recursively to include all submodules.

git clone https://github.com/SonSang/dmesh.git --recursive

Dependencies

We use Python 3.9 and recommend using Anaconda to manage the environment. After creating a new environment, please run the following command to install the required Python packages.

pip install -r requirements.txt

We also need additional external libraries to run DMesh. Please install them by following the instructions below.

PyTorch

Please install a PyTorch build that matches your NVIDIA GPU. Our code currently requires an NVIDIA GPU, because our main algorithm is written in CUDA. You can find instructions here. We tested with PyTorch versions 1.13.1 and 2.2.1.

pytorch3d (0.7.6)

Please follow the detailed instructions here. In short, you can install the latest pytorch3d by running the following commands.

conda install -c fvcore -c iopath -c conda-forge fvcore iopath
pip install "git+https://github.com/facebookresearch/pytorch3d.git"

CGAL (5.6)

We use CGAL to run the Weighted Delaunay Triangulation (WDT) algorithm, which forms the basis of our approach. If you cloned this repository recursively, you should already have the latest CGAL source code in the external/cgal directory. Please follow the instructions below to build and install CGAL.

cd external/cgal
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make

You might need to install additional dependencies to build CGAL. Please refer to the official documentation and install essential third-party libraries, such as Boost, GMP, and MPFR, to build CGAL and CGAL-dependent code successfully. If you are using Ubuntu, you can install GMP and MPFR with the following commands.

sudo apt-get install libgmp3-dev
sudo apt-get install libmpfr-dev
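For background on what the WDT step computes, the weighted Delaunay (regular) triangulation can be obtained by lifting each point p with weight w to (p, |p|² − w) and projecting the lower convex hull back down. A minimal 2D SciPy sketch of this classical construction (for illustration only; DMesh uses the CGAL implementation built above):

```python
import numpy as np
from scipy.spatial import ConvexHull

def weighted_delaunay_2d(points, weights):
    """Weighted Delaunay (regular) triangulation of 2D points via lifting.

    Each point p with weight w is lifted to (p, |p|^2 - w); the facets of
    the lifted convex hull whose outward normals point downward (negative
    last component) project back to the regular triangulation.
    """
    points = np.asarray(points, dtype=float)
    weights = np.asarray(weights, dtype=float)
    heights = np.sum(points ** 2, axis=1) - weights
    lifted = np.hstack([points, heights[:, None]])
    hull = ConvexHull(lifted)
    lower = hull.equations[:, 2] < 0  # outward normal points down -> lower hull
    return hull.simplices[lower]
```

With all weights zero this reduces to the ordinary Delaunay triangulation; increasing a point's weight enlarges its cell's influence.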

OneTBB (2021.11.0)

We use OneTBB to (potentially) accelerate CGAL's WDT algorithm with multi-threading. Although it is not used in the current implementation, we include it here for future work. If you cloned this repository recursively, you should already have the latest OneTBB source code in the external/oneTBB directory. Please follow the instructions below to build and install OneTBB.

cd external/oneTBB
mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX=../install -DTBB_TEST=OFF ..
cmake --build .
cmake --install .

You should find the OneTBB install files in the external/oneTBB/install directory.

Nvdiffrast

We use nvdiffrast for differentiable rasterization. Please follow the instructions below to build and install nvdiffrast.

sudo apt-get install libglvnd0 libgl1 libglx0 libegl1 libgles2 libglvnd-dev libgl1-mesa-dev libegl1-mesa-dev libgles2-mesa-dev
cd external/nvdiffrast
pip install -e .

Please see the official documentation if you encounter any issues during the installation.

DMeshRenderer

We implemented our own renderers, packaged as DMeshRenderer, for multi-view reconstruction. Before installing it, please install the GLM library. On Ubuntu, you can install it by running the following command.

sudo apt-get install libglm-dev

Then, please follow the instructions below to build and install DMeshRenderer.

cd external/dmesh_renderer
pip install -e .

Build CGAL-dependent code

Run the following commands to build the CGAL-dependent code.

cd cgal_wrapper
cmake -DCMAKE_BUILD_TYPE=Release .
make

You should find the libcgal_diffdt.a file in the cgal_wrapper/ directory.

Build DMesh

Finally, run the following command to build DMesh.

pip install -e .

Dataset

Now you are ready to run the downstream applications. For our reconstruction experiments, we mainly used 4 closed-surface models from the Thingi10K dataset, 4 open-surface models from the DeepFashion3D dataset, and 3 mixed-surface models from the Objaverse dataset and Adobe Stock. For the models from DeepFashion3D, we used the ground-truth meshes provided by the NeuralUDF repository. Additionally, we used 3 models from the Stanford dataset for the first mesh conversion experiment.

Except for the 2 mixed-surface models (plant, raspberry) from Adobe Stock, you can download the dataset from Google Drive. Please place it under the dataset folder. For reference, we provide links to the plant and raspberry models on Adobe Stock.

Usage

Here we provide some examples of using DMesh. All of the examples use the config files in the exp/config folder. You can modify the config files to change the input/output paths, hyperparameters, etc. By default, all results are stored in exp_result. If you want to run every experiment sequentially, please use the following command.

bash run_all.sh

Example 1: Mesh to DMesh

First, we convert a ground-truth mesh to DMesh by restoring the connectivity of the given mesh.

Run the following command to convert the Stanford Bunny model into DMesh.

python exp/1_mesh_to_dmesh.py --config=exp/config/exp_1/bunny.yaml

Example 2: Point cloud reconstruction

Next, we reconstruct a 3D mesh from a point cloud using DMesh by minimizing the (expected) Chamfer distance.

Run the following command to reconstruct the Lucy model from a point cloud.

python exp/2_pc_recon.py --config=exp/config/exp_2/thingi32/252119.yaml
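For reference, the classical two-sided Chamfer distance between two point sets can be sketched as follows; the actual objective is an expectation of this kind of quantity under per-face existence probabilities, which this toy version omits:

```python
import numpy as np

def chamfer_distance(a, b):
    """Two-sided Chamfer distance between point sets a (N, D) and b (M, D).

    For each point, take the squared distance to its nearest neighbor in the
    other set, then average both directions.
    """
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)  # (N, M)
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())
```

In practice one samples points on the candidate faces and compares them against the input point cloud with a loss of this form.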

Example 3: Multi-view image reconstruction

Finally, we reconstruct a 3D mesh from multi-view (diffuse, depth) images using DMesh by minimizing a rendering loss.

Run the following command to reconstruct a cloth model from multi-view images.

python exp/3_mv_recon.py --config=exp/config/exp_3/deepfashion3d/448.yaml
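A rendering loss compares rendered and target images. The sketch below combines simple L1 terms over the diffuse and depth channels; the specific terms and weighting here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def rendering_loss(pred_diffuse, gt_diffuse, pred_depth, gt_depth,
                   depth_weight=1.0):
    """L1 photometric loss on diffuse images plus a weighted L1 depth term."""
    diffuse_term = np.abs(pred_diffuse - gt_diffuse).mean()
    depth_term = np.abs(pred_depth - gt_depth).mean()
    return float(diffuse_term + depth_weight * depth_term)
```

Because every term is a mean of absolute differences, the loss is differentiable almost everywhere with respect to the rendered images, which is what allows gradients to flow back to the mesh parameters through a differentiable rasterizer.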

Discussions and Future Work

As discussed in the paper, our approach is quite versatile for representing triangular meshes. However, because there are no additional constraints, our method does not guarantee a manifold mesh. As a result, face orientations are not well aligned in the current reconstruction results, and small geometric artifacts remain. Also, when it comes to multi-view reconstruction, there is still a lot of room for improvement, because the differentiable renderers are still immature. In the near future, we aim to overcome these issues.

Citation

@misc{son2024dmesh,
      title={DMesh: A Differentiable Representation for General Meshes}, 
      author={Sanghyun Son and Matheus Gadelha and Yang Zhou and Zexiang Xu and Ming C. Lin and Yi Zhou},
      year={2024},
      eprint={2404.13445},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Acknowledgement

As described above, we used CGAL to implement our core algorithm. For the multi-view reconstruction code, we borrowed implementations from nvdiffrast, 3D Gaussian Splatting, and Continuous Remeshing For Inverse Rendering. We appreciate these great works.


dmesh's Issues

How to control the number of faces of the resulting mesh?

In the config file (for example, exp/config/exp_3/thingi32/252119.yaml), I cannot find a parameter that controls the final number of faces. The only relevant parameter seems to be refine, but refine cannot directly control the face count either.

Install guide for Windows!

Installing this on Windows was a nightmare; I spent about 6 hours before finally finding a way to make it work. Below are the main problems I encountered along the way.

This assumes you have already initialized your conda environment and cloned this project.

In windows, use cmake --build . instead of make

When building [DMeshRenderer](https://github.com/SonSang/dmesh_renderer), if you encounter the error:

  • fatal error C1083: Cannot open include file: 'glm/glm.hpp': No such file or directory

Solution:

  1. Installing Vcpkg:

    git clone https://github.com/microsoft/vcpkg
    cd vcpkg
    .\bootstrap-vcpkg.bat
    # Add environment variable: VCPKG_ROOT = "Your vcpkg directory" e.g. C:\Users\reall\Softwares\vcpkg
    # Add same path to Path environment variable
  2. Install GLM: vcpkg install glm

    Copy the glm folder from vcpkg\packages\glm_x64-windows\include to your conda environment's include folder

When building & installing **[OneTBB](https://github.com/oneapi-src/oneTBB)**, if you encounter the error:

  • CMake Error at src/tbb/cmake_install.cmake:51 (file): file INSTALL cannot find "C:/Users/reall/Softwares/Miniconda/envs/AI3D_Exp/_Projects/dmesh/external/oneTBB/build/msvc_19.39_cxx_64_md_release/tbb12.dll": File exists.

Solution:

  • oneapi-src/oneTBB#708 (comment)

    cd external/oneTBB
    mkdir build && cd build
    cmake -DCMAKE_INSTALL_PREFIX=../install -DTBB_TEST=OFF ..
    cmake --build . --config release
    cmake --install . --config release

Install [CGAL 5.6.1 - Manual: Using CGAL on Windows (with Visual C++)](https://doc.cgal.org/latest/Manual/windows.html#install-from-source)

When building cgal_wrapper, if you encounter the error:

  • dmesh\external\cgal\Installation\include\CGAL\config.h(111,10): error C1083: Cannot open include file: 'boost/config.hpp': No such file or directory

Solution:

  • In dmesh\cgal_wrapper\CMakeLists.txt, add an additional line after target_include_directories() to include_directories("<boost folder's parent folder>, e.g. C:/Users/reall/Softwares/boost_1_85_0")

When building DMesh (running pip install -e . in the dmesh directory), if you encounter errors:

  1. LINK : fatal error LNK1181: cannot open input file 'cgal wrapper.lib'

    Solution:

    • Move cgal wrapper.lib from dmesh\cgal_wrapper\Debug\ to dmesh\cgal_wrapper
    • Or move it to another include library path; check dmesh\setup.py
  2. LINK : fatal error LNK1181: cannot open input file 'gmp.lib' or 'mpfr.lib'

    Solution:

    • Move gmp.lib or mpfr.lib from vcpkg\packages\gmp_x64-windows\lib or vcpkg\packages\mpfr_x64-windows\lib to your conda environment's libs folder

@SonSang

Awesome work based on a great idea. A small suggestion, though: if you want people to use your work and build more useful tools or research papers on it, it is definitely worth putting in some effort to make it easy to install. This is easily one of the hardest projects I have ever installed.

Cheers, have a good day :)

About point cloud to DMesh without a GT mesh (a ply file including vertices and faces)

Hello, and thank you for the excellent research. :)

I am currently testing the code with a custom point cloud (vertices only), and I want to run an experiment with a point cloud that has no ground-truth (GT) mesh (with vertices and faces).

After looking at Supplementary Algorithm 2, I thought it might be possible to perform reconstruction even without the GT. However, the code seems to be written assuming the presence of a GT mesh.

Could you please provide guidance on how to proceed in the absence of a GT mesh? Also, I wonder whether there are any plans to release the code for experiments without a GT mesh.

Thank you

Error when running run_all.sh

An error occurred when running bash run_all.sh:

Traceback (most recent call last):
  File "/home/code/dmesh/exp/1_mesh_to_dmesh.py", line 16, in <module>
    from diffdt import DiffDT
  File "/home/code/dmesh/diffdt/__init__.py", line 5, in <module>
    from diffdt.pd import PDVertexComputationLayer, PDStruct
  File "/home/code/dmesh/diffdt/pd.py", line 8, in <module>
    from . import _C
ImportError: /home/code/dmesh/diffdt/_C.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN3tbb6detail2r117deallocate_memoryEPv

error when running the code

Thanks for your great work. I built the environment successfully following the instructions; however, I met this error when running the code:

Traceback (most recent call last):
  File "/data/cz/dmesh/exp/1_mesh_to_dmesh.py", line 16, in <module>
    from diffdt import DiffDT
  File "/data/cz/dmesh/diffdt/__init__.py", line 5, in <module>
    from diffdt.pd import PDVertexComputationLayer, PDStruct
  File "/data/cz/dmesh/diffdt/pd.py", line 8, in <module>
    from . import _C
ImportError: libtbb.so.12: cannot open shared object file: No such file or directory

Problem about multi-view reconstruction

In Section C.3.1 of the paper, you mention that the implementation of F_A (the first renderer) does not produce visibility-related gradients (near face edges). I find the term "visibility-related gradients" hard to understand; could you explain what it means in your method?

Why offload the points and weights to CPU before DT?

From dmesh/diffdt/cgalwdt.py, lines 26 to 38 at commit 8a76623:

with th.no_grad():
    t_positions, t_weights = points.positions, points.weights
    if t_positions.device != th.device('cpu'):
        t_positions = points.positions.cpu()
    if t_weights.device != th.device('cpu'):
        t_weights = points.weights.cpu()
    result = _C.delaunay_triangulation(t_positions,
                                       t_weights,
                                       weighted,
                                       parallelize,
                                       p_lock_grid_size,
                                       compute_cc)

I notice that the points and weights are moved to the CPU before the Delaunay triangulation. Wasn't this process supposed to execute in CUDA? And why is the differentiable DT run under the torch.no_grad() context?
