
point2mesh's People

Contributors

galmetzer, ranahanocka


point2mesh's Issues

Bugs in initial mesh

point2mesh/models/layers/mesh.py", line 131, in build_gfmm
    return torch.Tensor(gfmm).long().to(self.device)
ValueError: expected sequence of length 3 at dim 1 (got 2)
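A minimal, self-contained reproduction of this error (not the project's code, and the cause here is only a guess): torch.Tensor() needs a rectangular nested list, so a ragged face-adjacency list, for example when the initial mesh is not closed and some face has fewer than three neighbours, raises exactly this ValueError.

    import torch

    gfmm = [[0, 1, 2], [3, 4]]   # hypothetical ragged neighbour list
    torch.Tensor(gfmm).long()    # ValueError: expected sequence of length 3 at dim 1 (got 2)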

GPU out of memory: is there a way to reduce the memory requirement?

Thanks for your work.
My GPU has only 6 GB of memory.
At around 1000 or 2000 iterations, when Manifold upsamples the faces, the run fails with 'RuntimeError: CUDA out of memory'.
In the past I would reduce the batch size when this happened, but your code does not seem to have one. The paper uses a 1080 Ti and a 2080 Ti, which have 11 GB of GPU memory, and Section 6 (Discussion) says 'our method is quite expensive in terms of time and memory'.
So, is there any way to run this code with 6 GB of memory? I don't care about the runtime.

Temporary mesh not found

The default examples crash after the first 1000 iterations. I'm not sure whether this is because Manifold.exe is not running properly or whether it is a Windows file-path issue.

Traceback (most recent call last):
  File "main.py", line 76, in <module>
    res=opts.manifold_res, simplify=True)
  File "C:\Users\UserName\point2mesh\utils.py", line 22, in manifold_upsample
    m_out = Mesh(tname, hold_history=True, device=mesh.device)
  File "C:\Users\UserName\point2mesh\models\layers\mesh.py", line 22, in __init__
    self.vs, self.faces = load_obj(file)
  File "C:\Users\UserName\point2mesh\utils.py", line 52, in load_obj
    f = open(file)
FileNotFoundError: [Errno 2] No such file or directory: '.\\checkpoints\\giraffe\\temp0e3d676a-1573-473a-b907-2f6acae22b06.obj'

Once it crashes, there are only three files in the checkpoints\giraffe folder:

opt.txt - 1kb
recon_5000.obj - 159kb
recon_iter - 0kb

OS: Windows 10
Python: 3.6.4
pytorch: 1.5.0
pytorch3D: 0.2.0
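A diagnostic sketch, not part of the repository (paths are placeholders): the temp .obj that manifold_upsample tries to read is expected to be produced by the external Watertight Manifold executable, so if that step fails silently the temp file never appears. Running the executable by hand on the last exported mesh shows whether it works at all on Windows:

    import os
    import subprocess

    manifold_exe = r'C:\path\to\Manifold\build\manifold.exe'    # adjust to your build
    src = r'.\checkpoints\giraffe\recon_5000.obj'
    dst = r'.\checkpoints\giraffe\temp_manifold.obj'
    # the Manifold CLI is typically "manifold <in.obj> <out.obj> [resolution]"
    ret = subprocess.run([manifold_exe, src, dst, '20000'])
    print('exit code:', ret.returncode, '| output written:', os.path.exists(dst))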

Predicting shape parameters of a single/multiple objects

Hi @ranahanocka,
Your work seems really impressive. I am trying to reconstruct the shape (superquadric) parameters given a point cloud. For a single shape I have achieved satisfactory results, but I suspect point2mesh, with appropriate adaptations, could outperform my current results. I've posted my experiments here.
Anyway, while trying to generalize the model so that it can represent an arbitrary object as a composition of superquadrics, I got stuck.
A single superquadric is represented by 8 parameters (size along the x, y, and z axes, offsets along the axes, and two shape parameters). So the first step would be to adapt point2mesh for single-superquadric regression, and the next step, I guess, would be to somehow represent the input point cloud as a composition of multiple shapes.
I have read your Point2Mesh article and am now going through your latest article. I would be very thankful if you could provide any suggestions on how to adapt your model for single and multiple shape-parameter regression.

Thanks!
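An illustrative sketch of the 8-parameter superquadric described in this issue (sizes a1..a3, offsets t, shape exponents eps1, eps2); the names are illustrative and not from point2mesh, and this is only the standard inside-outside function, not a regression model:

    import numpy as np

    def superquadric_F(p, a=(1.0, 1.0, 1.0), t=(0.0, 0.0, 0.0), eps1=1.0, eps2=1.0):
        """Inside-outside function: F < 1 inside, F == 1 on the surface, F > 1 outside."""
        x, y, z = [(p[..., i] - t[i]) / a[i] for i in range(3)]
        xy = (np.abs(x) ** (2.0 / eps2) + np.abs(y) ** (2.0 / eps2)) ** (eps2 / eps1)
        return xy + np.abs(z) ** (2.0 / eps1)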

About the code result

Hello, I have read your paper and run your code.
Both your idea and your code are impressive.
As a layman at programming, I want to know whether the pictures you uploaded are produced directly by the code or are visualizations made with other tools.
After running your code, I don't see any output about the result.
I would appreciate it if you could help me with this!
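Judging by the outputs mentioned elsewhere in this thread (opt.txt and recon_*.obj files), the script writes .obj meshes rather than images, so any pictures are presumably rendered with an external tool. A minimal sketch for inspecting a result (trimesh is not a project dependency, and the path is only an example):

    import trimesh

    mesh = trimesh.load('./checkpoints/giraffe/recon_5000.obj')
    mesh.show()    # opens an interactive viewer; MeshLab or Blender work just as well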

Out-of-memory issue

Hello, I noticed that the code you provided is written for a single GPU, so multi-GPU usage cannot be used to work around the out-of-memory issue when processing a single point cloud. Additionally, I understand that the paper mentions processing up to 40,000 faces, yet I run into out-of-memory errors when upsampling to 16,000 faces using the same GPU.

Issues with custom data

Hello.

I'm having issues building a convex hull and then adapting the scripts to my own point clouds.
I seem to have been able to build the hull.obj file, but I'm not sure if it has problems.

When running the script, there's an error in models/losses.py at this line:

cham_norm_x = F.cosine_similarity(x_normals, x_normals_near, dim=2, eps=1e-6)

The error is:

RuntimeError: The size of tensor a (3) must match the size of tensor b (0) at non-singleton dimension 2

x_normals.shape = torch.Size([1, 15000, 3])
x_normals_near.shape = torch.Size([1, 15000, 0])

I have no idea where I went wrong, any help would be deeply appreciated.

Thanks!
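A guess at the cause, with a self-contained check (file name and format are illustrative): a trailing dimension of 0 in x_normals_near usually means there were no normals to gather, i.e. the target point cloud was loaded without per-point normals, so it is worth verifying that the input really carries six values per point (x y z nx ny nz):

    import numpy as np

    pts = np.loadtxt('my_cloud.xyz')    # hypothetical whitespace-separated file
    print(pts.shape)                    # expect (N, 6); (N, 3) would mean the normals are missing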

What is the purpose of the non-uniform penalty?

Hi @ranahanocka, I was wondering if you could explain the purpose of adding the "non-uniform" penalty to the loss during training. I see that the penalty is computed from the face areas, but I don't know the reasoning behind it, and I couldn't find any details in your paper.

loss += opts.local_non_uniform * local_nonuniform_penalty(part_mesh.main_mesh).float()

Thanks!
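An illustrative sketch only, not the repository's implementation: one common way to build such a penalty is to compare each face's area with the areas of its adjacent faces, which pushes the triangulation towards locally uniform triangles and discourages a few large faces sitting next to slivers.

    import torch

    def local_nonuniform_penalty_sketch(face_areas, face_neighbours):
        # face_areas: (F,) tensor; face_neighbours: (F, 3) long tensor of adjacent face ids
        diff = (face_areas[:, None] - face_areas[face_neighbours]).abs()
        return diff.mean()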

ImportError: libtorch_cpu.so: cannot open shared object file: No such file or directory

I have encountered the following error:

Traceback (most recent call last):
  File "main.py", line 6, in <module>
    from models.losses import chamfer_distance
  File "/home/lab505/gpu_point2mesh/models/losses.py", line 3, in <module>
    from pytorch3d.structures.pointclouds import Pointclouds
  File "/home/lab505/anaconda3/envs/p2m/lib/python3.8/site-packages/pytorch3d/structures/__init__.py", line 4, in <module>
    from .pointclouds import Pointclouds
  File "/home/lab505/anaconda3/envs/p2m/lib/python3.8/site-packages/pytorch3d/structures/pointclouds.py", line 5, in <module>
    from .. import ops
  File "/home/lab505/anaconda3/envs/p2m/lib/python3.8/site-packages/pytorch3d/ops/__init__.py", line 5, in <module>
    from .graph_conv import GraphConv
  File "/home/lab505/anaconda3/envs/p2m/lib/python3.8/site-packages/pytorch3d/ops/graph_conv.py", line 6, in <module>
    from pytorch3d import _C
ImportError: libtorch_cpu.so: cannot open shared object file: No such file or directory

Here is my configuration:
cudatoolkit 10.1.243
pytorch 1.4.0 py3.8_cuda10.1.243_cudnn7.6.3_0
pytorch3d 0.2.0

But after I switched to another configuration:
pytorch 1.5.0 py3.8_cpu_0 [cpuonly]
pytorch3d 0.2.0 pypi_0
it works, but it is too slow, so I wonder whether your code is meant to run on the GPU?
Looking forward to your help, thanks!
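A guess at the cause, and only a sanity check rather than a fix: libtorch_cpu.so is part of the library layout introduced with torch 1.5, so a pytorch3d 0.2.0 binary built against torch 1.5 cannot load on top of torch 1.4.0. The code is indeed meant to run on the GPU (the other issues in this thread show device cuda:0). Checking the installed versions before importing pytorch3d:

    import torch

    print(torch.__version__, torch.version.cuda, torch.cuda.is_available())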

read .ply file using read_pts function

Hi Rana,

First of all, this is really amazing work!!

When I tried to test your method on my own .ply file, your read_pts function at this line gave me an error:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xbe in position 210: invalid start byte

Apparently, this is because I save my .ply files differently from the way you do. I am wondering if you could let me know, or provide scripts for, how you save the sampled points + normals to .ply files? That way I could generate my own .ply files :)

Many thanks in advance!

Songyou
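A hedged sketch of writing points plus normals as a plain ASCII .ply (the UnicodeDecodeError above is what you get when a binary .ply is read as text); pts and normals are assumed to be (N, 3) float arrays:

    def write_ascii_ply(path, pts, normals):
        header = ("ply\nformat ascii 1.0\n"
                  f"element vertex {len(pts)}\n"
                  "property float x\nproperty float y\nproperty float z\n"
                  "property float nx\nproperty float ny\nproperty float nz\n"
                  "end_header\n")
        with open(path, 'w') as f:
            f.write(header)
            for p, n in zip(pts, normals):
                f.write(f"{p[0]} {p[1]} {p[2]} {n[0]} {n[1]} {n[2]}\n")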

Unable to install dependencies

On running conda env create -f environment.yml, I am getting the following error:

Collecting package metadata: done
Solving environment: failed

ResolvePackageNotFound:
  - pytorch3d=0.2.0

I tried with pip, but that throws an error on the line from pytorch3d import _C.

Data for denoising and low density completion

Hi Rana,

I was wondering if it would be possible for you to release the data that you used to perform the experiments for denoising and low density completion. I've been trying to replicate the experiments, and it would be very useful to have access to the point clouds and meshes that you used for these experiments.

Thanks!

About normal constraints

Hello,

Congrats on this amazing work! I'm a little confused about the use of the normal information. In the paper (Eq. 2), the Chamfer distance is used only between the deformed vertex positions and the input point cloud. However, as mentioned in Sec. 5.1, normal information is also used here. Can you please clarify in which experiments/scenarios this information is used? I can imagine it might not be a good option for Figure 6 (noisy input). If you did use normals on noisy point clouds, how did you estimate them? I wasn't able to spot this in the code; if I missed something, my apologies, and could you point me to how you estimate normals in such cases? Thanks!
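An illustrative, from-scratch approximation of the normal term that appears in models/losses.py (the cosine_similarity line quoted in the issue above); the neighbour search and the weighting here are assumptions, not the repository's code:

    import torch
    import torch.nn.functional as F

    def normal_consistency(x, x_normals, y, y_normals):
        # x: (B, Nx, 3) points sampled from the mesh, y: (B, Ny, 3) input point cloud
        idx = torch.cdist(x, y).argmin(dim=2)                  # nearest point-cloud index per sample
        x_normals_near = torch.gather(y_normals, 1, idx[..., None].expand(-1, -1, 3))
        cos = F.cosine_similarity(x_normals, x_normals_near, dim=2, eps=1e-6)
        return (1 - cos.abs()).mean()                          # 0 when the normals are parallel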

problem with initial mesh

Hi, thanks for your work. When I run with my own data, I have some problems with the initial mesh. I created a cube in .obj format as the initial mesh, but I get the error shown below:

device: cuda:0
Traceback (most recent call last):
  File "main.py", line 19, in <module>
    mesh = Mesh(opts.initial_mesh, device=device, hold_history=True)
  File "/home/abc/point2mesh/models/layers/mesh.py", line 24, in __init__
    self.vs, self.faces = load_obj(file)
  File "/home/abc/point2mesh/utils.py", line 69, in load_obj
    assert len(face_vertex_ids) == 3
AssertionError

How can I generate initial mesh data in the correct format?
Thanks again!
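load_obj in utils.py asserts that every face has exactly three vertex indices, so this error usually means the cube was exported with quad faces ("f 1 2 3 4"). A minimal sketch of one way to triangulate it (trimesh is an assumption, not a project dependency; paths are examples):

    import trimesh

    mesh = trimesh.load('cube_quads.obj', force='mesh')   # quads are triangulated on load
    mesh.export('cube_tri.obj')                           # faces are written as triangles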

Failed to create an anaconda environment because of the torchvision package

Thanks for publishing the great research and code online.

Recently I tried to run point2mesh but failed to create the Anaconda environment. In the dependency file environment.yml I found

dependencies:
  - python=3.8.2
  - numpy=1.18.1
  - pytorch=1.4.0
  - torchvision=0.5.0

After several failed attempts, I found that version 0.5.0 of the torchvision package is no longer available on Anaconda. The maintainers have removed this package file from the conda package source; the oldest version still available is 0.6.1 (see torchvision on anaconda.org).

I tried installing it through pip; the packages install successfully, but I cannot run the program. I have also tried several combinations of newer versions, but none have worked. Do you have a solution for this problem?

About the fixed random constant C

Excuse me, where is the fixed random constant C from the paper expressed in the code? In which file is it defined?
Looking forward to your reply, thank you very much!
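Where exactly it lives in the repository is what this issue asks, so the following is only a sketch of the idea: a "fixed random constant" input is a tensor that is drawn once, excluded from the optimizer, and fed to the network unchanged at every iteration.

    import torch

    C = torch.rand(1, 6, 1500)      # drawn once; the shape here is purely illustrative
    C.requires_grad_(False)         # the constant itself is never optimized
    # at every iteration the network sees the same C, e.g. out = net(C)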

The viewer used in paper's illustrations

Both the illustrations in the paper and the animations in this repo are so appealing! Could you provide some information about the rendering method and the 3D renderer you used? Thank you!

Updating this code to PyTorch 2.0 and numpy 1.24

Hi, I just updated this code to PyTorch 2.0 and numpy 1.24 by replacing np.bool with bool, adding device consistency for PyTorch, and fixing the dtype of the inhomogeneous vei array to object.

I tested it and the results look good, so if anyone needs this code running on the latest versions, I can send a pull request.
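The two numpy-related changes described above, as a minimal before/after sketch (variable names are illustrative):

    import numpy as np

    mask = np.zeros(10, dtype=bool)                     # was dtype=np.bool; the alias was removed in numpy 1.24
    vei = np.array([[0, 1], [2, 3, 4]], dtype=object)   # ragged lists now need an explicit object dtype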

Installation Issues

Hello!
I am trying to install your software and had some issues during the installation steps.

When using the

conda env create -f environment.yml
command, it appears that the required version of pytorch3d is no longer available, so I removed it from the environment file and manually installed a later version of pytorch...

From there, installation was smooth until I tried to run

bash .scripts/get_data.sh
where I got an error saying

tar: data.tar: Cannot open: No such file or directory
tar: Error is not recoverable: exiting now

However, the files seemed to be downloaded correctly in the ./data location.

From there, I tried to run the
bash ./scripts/examples/giraffe.sh
example, where I got an error saying

  File "main.py", line 30
    print(f'number of parts {part_mesh.n_submeshes}')
                                                   ^
SyntaxError: invalid syntax
./scripts/examples/giraffe.sh: line 2: --initial-mesh: command not found
./scripts/examples/giraffe.sh: line 3: --save-path: command not found
./scripts/examples/giraffe.sh: line 4: --iterations: command not found

Could this be a result of the alternate version of pytorch3d, or is this some other issue with installation/running?

Please advise,
Thank you
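A guess at what is happening (not verified): a SyntaxError on an f-string means main.py is being parsed by a Python older than 3.6 (for example the system Python 2), and the following "--initial-mesh: command not found" lines suggest the shell did not treat the continuation lines of giraffe.sh as part of one command. A quick check of which interpreter actually runs:

    import sys

    print(sys.version)    # environment.yml pins python=3.8.2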

About the feature of the network

Sorry to bother you.
If I understand correctly, at each level the input of the self-prior includes M and the random constant C. Is M represented by the three coordinates of each of the two endpoint vertices? Are M and C concatenated to form the full input?

Thank you very much.

Expected system requirements?

Without a doubt this is a computationally heavy program to run, but what is the expected load during runtime?

I've run a few of the examples (mainly the giraffe) and it seems to be using ~5GB on my GPU. After it upsamples the mesh after the first 1000 iterations, the load seems to go up to ~8+GB, which ends up overloading my GPU. Is this an expected amount of GPU memory to be using?

And in general, what are reasonable minimum system requirements to run this program? Is it feasible on a powerful enough personal computer, or is cloud computing necessary in the end?

Thank you!

Problem with : in filename for

First of all, this is amazing work! So thank you for posting this code publicly.

I ran into a minor error when executing the code following your directions

  File "main.py", line 67, in <module>
    part_mesh.export(os.path.join(opts.save_path, f'recon_iter:{i}.obj'))
  File "/home/ctralie/code/point2mesh/models/layers/mesh.py", line 344, in export
    self.main_mesh.export(file)
  File "/home/ctralie/code/point2mesh/models/layers/mesh.py", line 225, in export
    export(file, vs, self.faces)
  File "/home/ctralie/code/point2mesh/utils.py", line 73, in export
    with open(file, 'w+') as f:
OSError: [Errno 22] Invalid argument: './checkpoints/giraffe/recon_iter:100.obj'

I don't know how general this is, but on my machine, Python does not appear to like the ":" in the filename. Once I changed this to a "_" (e.g. "recon_iter_300.obj" instead of "recon_iter:300.obj") on line 67 of main.py, the code worked beautifully.
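For reference, this is line 67 of main.py with the ':' replaced by '_' (the workaround described above); colons are rejected in file names on Windows and on some mounted filesystems:

    part_mesh.export(os.path.join(opts.save_path, f'recon_iter_{i}.obj'))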

Can these modules be used for MeshCNN Segmentation?

Hello, I found this project while trying to implement MeshCNN.

I can see the code here is based on MeshCNN, but it has some new features using PyTorch3D.

Do you recommend using the modules defined in this project to implement MeshCNN?

If I use the Mesh and MeshEncoderDecoder modules from this project instead of those from the MeshCNN project, can I expect some performance improvement?

Thank you for your great work!

"manifold_script_path" not found

Hi,

Thank you for the excellent framework.

Running "bash ./scripts/examples/giraffe.sh" and got the error below when it gets to "1000" iteration:


Traceback (most recent call last):
  File "main.py", line 85, in <module>
    mesh = utils.manifold_upsample(mesh, opts.save_path, Mesh,
  File "point2mesh/utils.py", line 18, in manifold_upsample
    raise FileNotFoundError(f'{manifold_script_path} not found')
FileNotFoundError: ~/code/Manifold/build/manifold not found


Any advice is appreciated!

Thanks so much and have a good day!
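A small check rather than a fix: manifold_upsample requires the Watertight Manifold binary (github.com/hjwdzh/Manifold), which has to be built separately so that the executable exists at the path the script expects:

    import os

    manifold_script_path = os.path.expanduser('~/code/Manifold/build/manifold')
    print('manifold binary found:', os.path.isfile(manifold_script_path))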

Reproduction issues

Hi!

I'd love to thank you for sharing the code of your project. It is very interesting!

I am trying to make it run in order to see if this can solve a problem I have on human meshes, but I'm encountering some problems.

I had to fix a lot of type incompatibilities, but that is fine. Now I have run into some memory problems:
after the part separation it goes OOM on my GPU.

Since I didn't see any indication in the paper, can you share the specifications of the setup you used for inference?
I see you used a GTX 1080 Ti, but no memory size is reported.

I am using a starting shape with around 60k faces, which seems within the range of the tests you ran, so I would expect it not to take too much memory. Am I wrong?

That would be helpful for me to understand if my setup is too light or if I am doing something wrong.

Thank you again in advance for your consideration.

Have a nice day.

Slides availability

Hi,

Congratulations on this work! Much like MeshCNN, this is a great idea/implementation in a beautifully written paper. I was wondering if the slides of this project will eventually be available. There is a link to them on the project's page, but it does not work.

Thanks!
