Created by Yongheng Zhao, Tolga Birdal, Haowen Deng, Federico Tombari from TUM.
This repository contains the implementation of our CVPR 2019 paper 3D Point Capsule Networks. In particular, we release code for training and testing a 3D-PointCapsNet network for classification, reconstruction, part interpolation and extraction of 3D local descriptors, as well as pre-trained models for quickly replicating our results.
For an intuitive explanation of the 3D point capsule networks, please check out Tolga's CVPR tutorial.
In this paper, we propose 3D point capsule networks, an auto-encoder designed to process sparse 3D point clouds while preserving the spatial arrangement of the input data. 3D capsule networks arise as a direct consequence of our novel unified 3D auto-encoder formulation. Their dynamic routing scheme and the peculiar 2D latent space deployed by our approach bring improvements to several common point-cloud-related tasks, such as object classification, object reconstruction and part segmentation, as substantiated by our extensive evaluations. Moreover, they enable new applications such as part interpolation and replacement.
If you find our work useful in your research, please consider citing:
@inproceedings{zhao20193d,
  author={Zhao, Yongheng and Birdal, Tolga and Deng, Haowen and Tombari, Federico},
  title={3D Point Capsule Networks},
  booktitle={Conference on Computer Vision and Pattern Recognition (CVPR)},
  organization={IEEE/CVF},
  year={2019}
}
This repository is released under the MIT license (see the LICENSE file for details).
Our source code builds upon several helpful packages and libraries. The Chamfer distance package is based on nndistance, modified as needed to run with PyTorch 1.0.0. The capsule layer is based upon and modified from Capsule-Network-Tutorial. Our capsule decoder is based upon the decoder of AtlasNet.
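For readers unfamiliar with the reconstruction loss, the symmetric Chamfer distance between two point clouds can be sketched in plain NumPy as below. This is an illustrative reference implementation only, not the CUDA-accelerated nndistance package used by the repository; the function name and signature are our own.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between two point clouds.

    a: (N, 3) array; b: (M, 3) array.
    Returns the mean nearest-neighbor squared distance from a to b
    plus the mean nearest-neighbor squared distance from b to a.
    """
    # Pairwise squared distances, shape (N, M)
    diff = a[:, None, :] - b[None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)
    # For each point, distance to its nearest neighbor in the other cloud
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

Identical clouds yield a distance of zero, and the measure is differentiable almost everywhere, which is why it is a common auto-encoder loss for unordered point sets.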