
softpool's Introduction

SoftPoolNet: Shape Descriptor for Point Cloud Completion and Classification

The implementation of our paper accepted at ECCV 2020 (the 16th European Conference on Computer Vision).

Authors: Yida Wang, David Tan, Nassir Navab and Federico Tombari. If you find this work useful in your research, please cite:

@article{DBLP:journals/corr/abs-2008-07358,
  author    = {Yida Wang and
               David Joseph Tan and
               Nassir Navab and
               Federico Tombari},
  title     = {SoftPoolNet: Shape Descriptor for Point Cloud Completion and Classification},
  journal   = {CoRR},
  volume    = {abs/2008.07358},
  year      = {2020},
  url       = {https://arxiv.org/abs/2008.07358},
  archivePrefix = {arXiv},
  eprint    = {2008.07358},
  timestamp = {Fri, 21 Aug 2020 15:05:50 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2008-07358.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

Thanks to @slothfulxtx, who corrected my PyTorch implementation of the SoftPool operator!

SoftPoolNet

(figure: road condition)

Object Completion

(figure: shapenet)

Soft-Pool Operation

(figure: softpool)

Train

The SoftPool operator is provided as a PyTorch implementation using CUDA 10.2; we recommend PyTorch 1.2.0.
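
A quick way to confirm that the installed PyTorch and CUDA toolkit match the versions recommended above (standard PyTorch calls, nothing repo-specific):

import torch
# Expect roughly 1.2.0 and 10.2 for the recommended setup.
print(torch.__version__)
print(torch.version.cuda)
print(torch.cuda.is_available())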

As we run comparison experiments against GRNet and MSN, we suggest compiling the Python libs in chamfer_pkg, emd, expansion_penalty and extensions. cd into each of the folders containing the mentioned libs, then run

python setup.py install --user
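
After installation, a minimal sanity check is to import the compiled modules from Python. The module names below are assumptions inferred from the folder names and may differ from what the setup scripts actually register:

# Hypothetical import names; adjust to whatever setup.py actually installed.
import torch
import chamfer              # from the chamfer package
import emd                  # from the emd folder
import expansion_penalty    # from the expansion_penalty folder
print("extensions loaded, CUDA available:", torch.cuda.is_available())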

Assuming GPU 0 is to be used for training:

CUDA_VISIBLE_DEVICES=0 python3 train.py --batch 16 --n_regions 8 --num_points 2048 --dataset shapenet --savepath ijcv_shapenet_softpool --methods softpool

To resume from a pretrained model:

CUDA_VISIBLE_DEVICES=0 python3 train.py --batch 16 --n_regions 8 --num_points 2048 --dataset shapenet --savepath ijcv_shapenet_softpool --model log/ijcv_shapenet_softpool/network.pth --methods softpool

Benchmarks

Currently, you can train and validate related works posted on the Complete3D benchmark using the same infrastructure:

CUDA_VISIBLE_DEVICES=0 python3 val.py --n_regions 1 --num_points 2048 --model log/ijcv_shapenet_pcn/network.pth --dataset shapenet --methods pcn # PCN
CUDA_VISIBLE_DEVICES=1 python3 val.py --n_regions 1 --num_points 2048 --model log/ijcv_shapenet_pointcnn/network.pth --dataset shapenet --methods pointcnn # PointCNN
CUDA_VISIBLE_DEVICES=0 python3 val.py --n_regions 1 --num_points 2048 --model log/ijcv_shapenet_folding/network.pth --dataset shapenet --methods folding # FoldingNet
CUDA_VISIBLE_DEVICES=0 python3 val.py --n_regions 1 --num_points 2048 --model log/ijcv_shapenet_grnet/network.pth --dataset shapenet --methods grnet # GRNet
CUDA_VISIBLE_DEVICES=0 python3 val.py --n_regions 8 --num_points 2048 --model log/ijcv_shapenet_softpool/network.pth --dataset shapenet --methods softpool # SoftPoolNet
CUDA_VISIBLE_DEVICES=0 python3 val.py --n_regions 1 --num_points 2048 --model log/ijcv_shapenet_msn/network.pth --dataset shapenet --methods msn # MSN
CUDA_VISIBLE_DEVICES=0 python3 val.py --n_regions 1 --num_points 2048 --model log/ijcv_shapenet_pointgcn/network.pth --dataset shapenet --methods pointgcn 

The listed approaches (up to ECCV 2020) are reported on the Complete3D dataset; you can reproduce our results with the scripts in the 'benchmark' folder.

(figure: benchmarks)


softpool's Issues

Concatenation operation

Hi,

x = torch.cat((x, part), 1).contiguous()

I find it difficult to understand this operation. Is this concatenating the original 3D coordinates with the features computed by the MLP? In practice, a (3 + N_f)-dimensional vector per point?

Where is this explained in the paper?

Cheers
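
For reference, a minimal shape-level sketch of what concatenation along dimension 1 does; the sizes are illustrative and not taken from the repository:

import torch

# If x holds per-point features of shape (B, N_f, N) and part holds raw
# coordinates of shape (B, 3, N), concatenating along dimension 1 gives a
# (B, 3 + N_f, N) tensor, i.e. a (3 + N_f)-dimensional vector per point.
x = torch.randn(4, 64, 2048)      # hypothetical feature tensor
part = torch.randn(4, 3, 2048)    # hypothetical coordinate tensor
out = torch.cat((x, part), 1).contiguous()
print(out.shape)                  # torch.Size([4, 67, 2048])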

Bug: SoftPool Operator

class SoftPool(nn.Module):
    def __init__(self, regions=16, cabins=8, sp_ratio=4):
        super(SoftPool, self).__init__()
        self.regions = regions
        self.num_cabin = cabins
        self.sp_ratio = sp_ratio

    def forward(self, x):
        [self.size_bth, self.size_feat, self.pnt_per_sort] = list(x.shape)
        self.pnt_per_sort //= self.sp_ratio
        # cabin -2
        conv2d_1 = nn.Conv2d(
            self.size_feat, self.size_feat, kernel_size=(1, 3),
            stride=(1, 1)).cuda()
        # cabin -2
        conv2d_2 = nn.Conv2d(
            self.size_feat, self.size_feat, kernel_size=(1, 3),
            stride=(1, 1)).cuda()
        conv2d_3 = nn.Conv2d(
            self.size_feat,
            self.size_feat,
            kernel_size=(1, self.num_cabin - 2 * (3 - 1)),
            stride=(1, 1)).cuda()
        conv2d_5 = nn.Conv2d(
            self.size_feat,
            self.size_feat,
            kernel_size=(self.regions, 1),
            stride=(1, 1)).cuda()

You shouldn't create nn layers in the forward function here; otherwise you can't save the weights of the conv2d layers into the checkpoint file. I'd been debugging for more than 3 days before I finally found it.
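
A minimal sketch of the suggested restructuring, assuming the feature width is known at construction time (the original code infers it from the input inside forward). With the layers created in __init__, their weights are registered submodules and end up in the checkpoint:

import torch.nn as nn

class SoftPoolFixed(nn.Module):
    # Sketch only; mirrors the layer shapes quoted above but is not the
    # repository's actual implementation.
    def __init__(self, size_feat, regions=16, cabins=8, sp_ratio=4):
        super().__init__()
        self.regions = regions
        self.num_cabin = cabins
        self.sp_ratio = sp_ratio
        self.conv2d_1 = nn.Conv2d(size_feat, size_feat, kernel_size=(1, 3))
        self.conv2d_2 = nn.Conv2d(size_feat, size_feat, kernel_size=(1, 3))
        self.conv2d_3 = nn.Conv2d(size_feat, size_feat,
                                  kernel_size=(1, cabins - 2 * (3 - 1)))
        self.conv2d_5 = nn.Conv2d(size_feat, size_feat,
                                  kernel_size=(regions, 1))

    def forward(self, x):
        size_bth, size_feat, pnt_per_sort = x.shape
        pnt_per_sort //= self.sp_ratio
        # ... rest of the original forward pass, now calling self.conv2d_* ...
        return x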

About RuntimeError

SoftPool_TRAIN train [4: 1517/3621] emd1: 0.008102 emd2: 0.004063 emd3: 0.005554 emd4: 0.109300
SoftPool_TRAIN train [4: 1518/3621] emd1: 0.007283 emd2: 0.004718 emd3: 0.005535 emd4: 0.083149
SoftPool_TRAIN train [4: 1519/3621] emd1: 0.007046 emd2: 0.005290 emd3: 0.005096 emd4: 0.093155
SoftPool_TRAIN train [4: 1520/3621] emd1: 0.006560 emd2: 0.003878 emd3: 0.003762 emd4: 0.094298
SoftPool_TRAIN train [4: 1521/3621] emd1: 0.008697 emd2: 0.004597 emd3: 0.004208 emd4: 0.093892
SoftPool_TRAIN train [4: 1522/3621] emd1: 0.004853 emd2: 0.003058 emd3: 0.005057 emd4: 0.095074
SoftPool_TRAIN train [4: 1523/3621] emd1: 0.006060 emd2: 0.003544 emd3: 0.003586 emd4: 0.098028
SoftPool_TRAIN train [4: 1524/3621] emd1: 0.006612 emd2: 0.003719 emd3: 0.004614 emd4: 0.086453
SoftPool_TRAIN train [4: 1525/3621] emd1: 0.006943 emd2: 0.003364 emd3: 0.004205 emd4: 0.111324
SoftPool_TRAIN train [4: 1526/3621] emd1: 0.008450 emd2: 0.005112 emd3: 0.004640 emd4: 0.080969
SoftPool_TRAIN train [4: 1527/3621] emd1: 0.005002 emd2: 0.002095 emd3: 0.002938 emd4: 0.086772
SoftPool_TRAIN train [4: 1528/3621] emd1: 0.004835 emd2: 0.002619 emd3: 0.002841 emd4: 0.090955
SoftPool_TRAIN train [4: 1529/3621] emd1: 0.008848 emd2: 0.003807 emd3: 0.004815 emd4: 0.116368
SoftPool_TRAIN train [4: 1530/3621] emd1: 0.009957 emd2: 0.005667 emd3: 0.005009 emd4: 0.136988
/pytorch/aten/src/THC/THCTensorScatterGather.cu:130: void THCudaTensor_scatterKernel(TensorInfo<Real, IndexType>, TensorInfo<Real, IndexType>, TensorInfo<long, IndexType>, int, IndexType) [with IndexType = unsigned int, Real = float, Dims = -1]: block: [6,0,0], thread: [240,0,0] Assertion indexValue >= 0 && indexValue < tensor.sizes[dim] failed.
/pytorch/aten/src/THC/THCTensorScatterGather.cu:130: void THCudaTensor_scatterKernel(TensorInfo<Real, IndexType>, TensorInfo<Real, IndexType>, TensorInfo<long, IndexType>, int, IndexType) [with IndexType = unsigned int, Real = float, Dims = -1]: block: [6,0,0], thread: [368,0,0] Assertion indexValue >= 0 && indexValue < tensor.sizes[dim] failed.
/pytorch/aten/src/THC/THCTensorScatterGather.cu:130: void THCudaTensor_scatterKernel(TensorInfo<Real, IndexType>, TensorInfo<Real, IndexType>, TensorInfo<long, IndexType>, int, IndexType) [with IndexType = unsigned int, Real = float, Dims = -1]: block: [6,0,0], thread: [144,0,0] Assertion indexValue >= 0 && indexValue < tensor.sizes[dim] failed.
/pytorch/aten/src/THC/THCTensorScatterGather.cu:130: void THCudaTensor_scatterKernel(TensorInfo<Real, IndexType>, TensorInfo<Real, IndexType>, TensorInfo<long, IndexType>, int, IndexType) [with IndexType = unsigned int, Real = float, Dims = -1]: block: [6,0,0], thread: [208,0,0] Assertion indexValue >= 0 && indexValue < tensor.sizes[dim] failed.
/pytorch/aten/src/THC/THCTensorScatterGather.cu:130: void THCudaTensor_scatterKernel(TensorInfo<Real, IndexType>, TensorInfo<Real, IndexType>, TensorInfo<long, IndexType>, int, IndexType) [with IndexType = unsigned int, Real = float, Dims = -1]: block: [6,0,0], thread: [336,0,0] Assertion indexValue >= 0 && indexValue < tensor.sizes[dim] failed.
/pytorch/aten/src/THC/THCTensorScatterGather.cu:130: void THCudaTensor_scatterKernel(TensorInfo<Real, IndexType>, TensorInfo<Real, IndexType>, TensorInfo<long, IndexType>, int, IndexType) [with IndexType = unsigned int, Real = float, Dims = -1]: block: [6,0,0], thread: [176,0,0] Assertion indexValue >= 0 && indexValue < tensor.sizes[dim] failed.
/pytorch/aten/src/THC/THCTensorScatterGather.cu:130: void THCudaTensor_scatterKernel(TensorInfo<Real, IndexType>, TensorInfo<Real, IndexType>, TensorInfo<long, IndexType>, int, IndexType) [with IndexType = unsigned int, Real = float, Dims = -1]: block: [6,0,0], thread: [304,0,0] Assertion indexValue >= 0 && indexValue < tensor.sizes[dim] failed.
/pytorch/aten/src/THC/THCTensorScatterGather.cu:130: void THCudaTensor_scatterKernel(TensorInfo<Real, IndexType>, TensorInfo<Real, IndexType>, TensorInfo<long, IndexType>, int, IndexType) [with IndexType = unsigned int, Real = float, Dims = -1]: block: [6,0,0], thread: [272,0,0] Assertion indexValue >= 0 && indexValue < tensor.sizes[dim] failed.
/pytorch/aten/src/THC/THCTensorScatterGather.cu:130: void THCudaTensor_scatterKernel(TensorInfo<Real, IndexType>, TensorInfo<Real, IndexType>, TensorInfo<long, IndexType>, int, IndexType) [with IndexType = unsigned int, Real = float, Dims = -1]: block: [2,0,0], thread: [368,0,0] Assertion indexValue >= 0 && indexValue < tensor.sizes[dim] failed.
/pytorch/aten/src/THC/THCTensorScatterGather.cu:130: void THCudaTensor_scatterKernel(TensorInfo<Real, IndexType>, TensorInfo<Real, IndexType>, TensorInfo<long, IndexType>, int, IndexType) [with IndexType = unsigned int, Real = float, Dims = -1]: block: [2,0,0], thread: [336,0,0] Assertion indexValue >= 0 && indexValue < tensor.sizes[dim] failed.
THCudaCheck FAIL file=/pytorch/aten/src/THC/generic/THCTensorScatterGather.cu line=194 error=59 : device-side assert triggered
Traceback (most recent call last):
  File "train.py", line 212, in <module>
    loss_net.backward()
  File "/usr/local/lib/python3.6/dist-packages/torch/tensor.py", line 118, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py", line 93, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: cuda runtime error (59) : device-side assert triggered at /pytorch/aten/src/THC/generic/THCTensorScatterGather.cu:194

First of all, thank you for your excellent contribution. I have been able to train successfully on the ShapeNet dataset, but the above problem occurred during training. I have spent a lot of time on it and have not been able to solve it. I hope you can help, thank you!
My operating environment is as follows:
CUDA: 10.0
torch: 1.12.0
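
Not a fix for this specific run, but a generic way to narrow down device-side asserts like the scatter failure above is to rerun with CUDA_LAUNCH_BLOCKING=1 so the traceback points at the offending op, and to validate index ranges on the CPU first. A hedged sketch (the helper below is illustrative, not from train.py):

import torch

def check_scatter_indices(index: torch.Tensor, target: torch.Tensor, dim: int):
    # An index outside [0, target.size(dim)) triggers exactly the
    # "indexValue >= 0 && indexValue < tensor.sizes[dim]" assert shown above.
    assert index.min().item() >= 0, "negative scatter index"
    assert index.max().item() < target.size(dim), (
        f"scatter index {index.max().item()} out of range for "
        f"dim {dim} of size {target.size(dim)}")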

Pretrained model

Could you please provide the pretrained model for ShapeNet at the 16384-point resolution? I noticed that the checkpoint is large, so would it be possible to share it on Google Drive or anywhere else that can satisfy the storage requirement?

About the weight

Hi, I have tried my best to reproduce your results, but I cannot get a good result. I hope you can upload the PyTorch version of your trained weight file.

Thank you!

About code

Hi! Thank you for your great work!
I would like to know which part of the code each loss function of the paper corresponds to.
Thanks!

Pretrained Model

Hi, I'm trying to test the performance of the model on an incomplete scene. I'm wondering whether a pretrained model will be provided?

Thank you!

About the code

Hi, I'm trying to reproduce the results by training SoftPoolNet, but it seems that train.py and model.py are a mixture of SoftPoolNet, GRNet, and PointNet. The latter two works are not part of the architecture reported in your paper.

If I only want to reproduce the completion result of SoftPoolNet, can I comment out output3, output4, part_regions, emd3 and emd4 in train.py (line 115), and the associated networks in model.py?

By the way, is there TensorFlow code for SoftPoolNet in the tensorflow folder?

Thanks for replying!

Could you please provide a pretrained model for the completion of 16384 resolution?

Hi Yida,
Thank you for your good work! Recently, I have been working on completing dense objects. I've noticed that SoftPoolNet reaches a performance of 5.94 on 16384-point completion, much better than the roughly 8-9 of prior works. I think it is an outstanding and excellent piece of work. So could you please provide a pretrained model for the completion of dense models as shown in Table 1?

Question about the semantic label

Hello! Thank you for your insightful work! I've noticed that semantic labels seem to be used in the loss calculation. So, is it necessary to provide a semantic label for each point, via a network or otherwise, before training?

About "python setup.py install --user" in extensions/chamfer_dist

root@f9adda5586c3:/mnt/txf/codes/SoftPoolNet/extensions/chamfer_dist# python setup.py install --user
running install
running bdist_egg
running egg_info
writing chamfer.egg-info/PKG-INFO
writing dependency_links to chamfer.egg-info/dependency_links.txt
writing top-level names to chamfer.egg-info/top_level.txt
reading manifest file 'chamfer.egg-info/SOURCES.txt'
writing manifest file 'chamfer.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_ext
building 'chamfer' extension
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/usr/local/cuda-10.0/include -I/usr/include/python3.6m -c chamfer_cuda.cpp -o build/temp.linux-x86_64-3.6/chamfer_cuda.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=chamfer -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
/usr/local/cuda-10.0/bin/nvcc -I/usr/local/lib/python3.6/dist-packages/torch/include -I/usr/local/lib/python3.6/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.6/dist-packages/torch/include/TH -I/usr/local/lib/python3.6/dist-packages/torch/include/THC -I/usr/local/cuda-10.0/include -I/usr/include/python3.6m -c chamfer.cu -o build/temp.linux-x86_64-3.6/chamfer.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=chamfer -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
chamfer.cu(160): error: a pointer to a bound function may only be used to call the function
chamfer.cu(160): error: type name is not allowed
chamfer.cu(160): error: expected an expression
(the same three errors are reported twice each for chamfer.cu lines 160, 161, 163, 164, 216, 217, 218, 220, 221 and 222)
60 errors detected in the compilation of "/tmp/tmpxft_000014de_00000000-6_chamfer.cpp1.ii".
error: command '/usr/local/cuda-10.0/bin/nvcc' failed with exit status 1

I encountered this error when running "python setup.py install --user" in "extensions/chamfer_dist" and haven't been able to solve it. Can you help me? Thank you!
My environment is as follows:
torch: 1.2.0
torchvision: 0.4.0
gcc: 7.5.0
cuda: 10.0
