
Point2CAD: Reverse Engineering CAD Models from 3D Point Clouds


This repository implements the method from our paper titled "Point2CAD: Reverse Engineering CAD Models from 3D Point Clouds" by Yujia Liu, Anton Obukhov, Jan Dirk Wegner, and Konrad Schindler.

As shown in the figure above, it takes the raw point cloud of a CAD model scan and reconstructs its surfaces, edges, and corners.

Interactive Demo Gallery

Explore select models from the ABC CAD models dataset, showcasing their reconstruction by our method and by competing methods, on the project page.

Quick Start

To reconstruct your own CAD model, use Colab or your local environment as described below.

Local Environment (5 min, recommended)

To process the CAD models from the assets folder, clone the repository and run the command below in the repository root. The process finishes in under 5 minutes on a machine with a GPU; running without a GPU is also feasible. Inspect the results in the out directory.

docker run -it --rm --gpus "device=$CUDA_VISIBLE_DEVICES" -v .:/work/point2cad toshas/point2cad:v1 python -m point2cad.main
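If no GPU is available, the same command can be run without the GPU flag. This is a hedged sketch: the image tag and module path are taken verbatim from the command above, and only the `--gpus` option is dropped.

```shell
# CPU-only run: identical to the GPU command above, minus the --gpus flag
docker run -it --rm -v .:/work/point2cad toshas/point2cad:v1 python -m point2cad.main
```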

Google Colab (30 min)

Colab eliminates the need to run the application locally or use Docker. However, it may be slower due to the time needed to build the dependencies. Unlike the dockerized environment, Colab functionality is not guaranteed. Click the badge to start:

Run with Your Data

If you want to run the process on your own point clouds, add the --help option to learn how to specify the input file path and the output directory path. In the dockerized runtime only, both paths must be under the repository root.
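In the dockerized setup, the --help invocation can be composed from the same command used in the Quick Start section (image tag and mount path taken from there; the option listing itself is produced by the program):

```shell
# Print the available command-line options, including how to point the
# program at your own input file and output directory
docker run -it --rm -v .:/work/point2cad toshas/point2cad:v1 python -m point2cad.main --help
```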

Development

The code has many native dependencies, including PyMesh. To build from source and prepare a development environment, clone the repository and run the following command:

cd build && sh docker_build.sh

Then simply run from the repository root:

docker run -it --rm --gpus "device=$CUDA_VISIBLE_DEVICES" -v .:/work/point2cad point2cad python -m point2cad.main 

If Docker is unavailable, refer to the PyMesh installation guide to build the environment from source, or simply follow the steps from the Dockerfile or the Colab installation script.

About the Demo

CAD model reconstruction from a point cloud consists of two steps: point cloud annotation with surface clusters (achieved by ParseNet, HPNet, etc.), and reconstructing the surfaces and topology.

Pretrained ParseNet models can be found here: for input points with normals and for input points without normals. If these links are not working, please use the weights in point2cad/logs. To utilize them, place the script point2cad/generate_segmentation.py in the ParseNet repository and execute it there.

This code focuses on the second part (views 3, 4, 5 from the teaser figure above) and requires the input point cloud in the (x, y, z, s) format, where each 3D point with x, y, z coordinates is annotated with the surface id s, such as the example in the assets folder.
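As a sketch of the expected input, a point cloud in this (x, y, z, s) format can be written and read with NumPy. The file name below is hypothetical, and the whitespace-separated row layout is an assumption modeled on the .xyzc example in assets:

```python
import numpy as np

# Hypothetical example: 4 points belonging to two surfaces (ids 0 and 1)
points = np.array([
    [0.0, 0.0, 0.0, 0],
    [1.0, 0.0, 0.0, 0],
    [0.0, 1.0, 1.0, 1],
    [1.0, 1.0, 1.0, 1],
], dtype=np.float64)

# Save as whitespace-separated rows: x y z s
np.savetxt("cloud.xyzc", points, fmt="%.6f %.6f %.6f %d")

# Load it back: the first three columns are coordinates, the last is the surface id
data = np.loadtxt("cloud.xyzc")
xyz = data[:, :3]
surface_ids = data[:, 3].astype(int)
print(xyz.shape, np.unique(surface_ids))  # (4, 3) [0 1]
```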

The process stores the following artifacts in the output directory (out by default):

  • unclipped: unclipped surfaces ready for pairwise intersection;
  • clipped: reconstructed surfaces after clipping the margins;
  • topo: reconstructed topology (edges and corners).
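Downstream questions often concern the topo output (for example, converting topo.json into STEP or BREP). A minimal sketch for inspecting it, assuming only that the file is valid JSON; the path and the fallback keys are illustrative, not a documented schema:

```python
import json
from pathlib import Path

# Assumed location under the default output directory; adjust to your run
topo_path = Path("out") / "topo" / "topo.json"

if topo_path.exists():
    topo = json.loads(topo_path.read_text())
else:
    # Tiny stand-in document so the sketch runs anywhere (hypothetical keys)
    topo = {"edges": [], "corners": []}

# Print the top-level structure without assuming any particular schema
for key, value in topo.items():
    size = len(value) if hasattr(value, "__len__") else ""
    print(f"{key}: {type(value).__name__} {size}")
```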

License

This software is released under a CC-BY-NC 4.0 license, which allows personal and research use only. For a commercial license, please contact the authors. You can view a license summary here.

Acknowledgements

  • ParseNet: "ParSeNet: A Parametric Surface Fitting Network for 3D Point Clouds", Sharma G. et al., 2020
  • geomfitty: A python library for fitting 3D geometric shapes
  • Color map: "Revisiting Perceptually Optimized Color Mapping for High-Dimensional Data Analysis", Mittelstädt S. et al., 2014


point2cad's Issues

Output in STEP or BREP format

Thank you for your work. May I ask how to obtain STEP or BREP files from the topo.json file? I would be very grateful if you could provide some guidance.

tmp.obj not found

Hello! I'm trying to use point2cad. I've set it up according to the README. After I ran the docker run command and it reached 88%, I got an error telling me that tmp.obj does not exist. This error happens in fitting_one_surface.py.

I also tried using a GPU, and I get this error:

docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 2, stdout: , stderr: fatal error: unexpected signal during runtime execution
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x7f67f07c6d54]

runtime stack:
runtime.throw({0x5286a1?, 0x6d?})
        /usr/local/go/src/runtime/panic.go:992 +0x71
runtime.sigpanic()
        /usr/local/go/src/runtime/signal_unix.go:802 +0x389

goroutine 1 [syscall]:
runtime.cgocall(0x4f48d0, 0xc000187958)
        /usr/local/go/src/runtime/cgocall.go:157 +0x5c fp=0xc000187930 sp=0xc0001878f8 pc=0x40523c
github.com/NVIDIA/go-nvml/pkg/dl._Cfunc_dlopen(0x1bb0820, 0x1)
        _cgo_gotypes.go:113 +0x4d fp=0xc000187958 sp=0xc000187930 pc=0x4ee78d
github.com/NVIDIA/go-nvml/pkg/dl.(*DynamicLibrary).Open(0xc000187a30)
        /go/src/nvidia-container-toolkit/vendor/github.com/NVIDIA/go-nvml/pkg/dl/dl.go:55 +0x74 fp=0xc0001879d0 sp=0xc000187958 pc=0x4ee994
gitlab.com/nvidia/cloud-native/go-nvlib/pkg/nvlib/info.(*infolib).HasNvml(0xc00012c1e0?)
        /go/src/nvidia-container-toolkit/vendor/gitlab.com/nvidia/cloud-native/go-nvlib/pkg/nvlib/info/info.go:47 +0x85 fp=0xc000187a68 sp=0xc0001879d0 pc=0x4eed85
github.com/NVIDIA/nvidia-container-toolkit/internal/info.ResolveAutoMode({0x54f5c8, 0x6333e0}, {0xc000138157?, 0x52974f?})
        /go/src/nvidia-container-toolkit/internal/info/auto.go:42 +0x1bb fp=0xc000187b18 sp=0xc000187a68 pc=0x4ef53b
main.doPrestart()
        /go/src/nvidia-container-toolkit/cmd/nvidia-container-runtime-hook/main.go:77 +0xdd fp=0xc000187f08 sp=0xc000187b18 pc=0x4f2e7d
main.main()
        /go/src/nvidia-container-toolkit/cmd/nvidia-container-runtime-hook/main.go:176 +0x11e fp=0xc000187f80 sp=0xc000187f08 pc=0x4f43de
runtime.main()
        /usr/local/go/src/runtime/proc.go:250 +0x212 fp=0xc000187fe0 sp=0xc000187f80 pc=0x4368d2
runtime.goexit()
        /usr/local/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc000187fe8 sp=0xc000187fe0 pc=0x460981: unknown.

Re-running the command with sudo (sudo docker run -it --rm --gpus "device=$CUDA_VISIBLE_DEVICES" -v .:/work/point2cad point2cad python -m point2cad.main) produces the same error and an identical stack trace.

How would I go about fixing this issue?

Topo.json

Can you share the code for visualizing Topo.json?

Plan for uploading the code

Hi,

Thank you for the great work on reconstructing CAD models from a given point cloud. My colleagues and I recently came across your work, were inspired by it, and are trying to check out how the code works.
So I wonder when you will be able to upload the code.

You can reach me at my email, [email protected]
Thank you.

Eunji Hong.

GT Segmentation input failure!

(screenshot: point2cad_failure)
I entered the ground-truth (GT) point cloud segmentation as input.
Why does this result appear, and what is your suggested solution?

Looking forward to your reply.

Creating segmentation

Hello,

Thank you for your nice work! I read your paper and tried to reproduce some results with the example you provide in this repository. My question about the segmentation is: how do you generate the GT Point2CAD segmentation?
Is it manually generated, or do you use your file "generate_segmentation.py"?
(I tried the latter option on the example abc_00470.xyz, and the output does not have the same segmentation as the already segmented input abc_00470.xyzc.)

Thank you for your answer

Code Release Plan

Very interesting work! I'd like to know if you have a code release plan?

Mesh coordinates

Hello,

After implementing point2cad, I found that the coordinates of the mesh reconstructed from the raw point cloud input differ from those of the point cloud. The mesh coordinates do not remain the same as those of the initial point cloud. Does this affect the predicted reconstruction parameters?

Or could you tell me how to calculate the transformation matrix that maps the reconstructed mesh back into the coordinate frame of the initial input point cloud?

docker run point2cad error

(1) Following your script
docker run -it --rm --gpus "device=$CUDA_VISIBLE_DEVICES" -v .:/work/point2cad toshas/point2cad:v1 python -m point2cad.main
I get this message:
"docker: Error response from daemon: create .: volume name is too short, names should be at least two alphanumeric characters. See 'docker run --help'."
(2) Changing the script to
docker run -it --rm --gpus "device=$CUDA_VISIBLE_DEVICES" -v /pyy/point2cad toshas/point2cad:v1 python -m point2cad.main
(replacing .: with /pyy, since my project name is pyy), I get this message:

groupadd: GID '0' already exists
useradd: group 'usergroup' does not exist
error: failed switching to "user": unable to find user user: no matching entries in passwd file

Please tell me how to solve this problem or how to run this program.
Thanks.

Sharing weights of ParSeNet

Hi Yujia,

Thanks for sharing the code!

I noticed that the weight of ParSeNet is no longer available. I couldn't find it in the ParSeNet repository. Can you kindly share the two weights files with an additional link?

Thanks in advance!

About how to reproduce the paper.

Dear authors,

Thanks for your nice work, and congrats on the project!

I wish to reproduce your work, but I have run into two problems. I hope you can take a look and reply.

  1. Could you please give detailed guidance on how to preprocess our own data?

     The README doesn't seem to detail how to split the train and test data. The original paper seems to say that only pure test-time optimization is needed. So, if I wish to do an evaluation, do I only need to run directly on the test data?

  2. The README only explains how to run inference, but no quantitative evaluation code is given (if I haven't missed it).

     If convenient, could you please offer guidance on reproducing the quantitative results in the paper's tables?

Thank you in advance!
Best,
Jingwei
