
3dmfv-net's Introduction

3DmFV : Three-Dimensional Point Cloud Classification in Real-Time using Convolutional Neural Networks

Created by Yizhak Ben-Shabat (Itzik), Michael Lindenbaum, and Anath Fischer from Technion, I.I.T.

3DmFV Architecture

Introduction

This is the code for training a point cloud classification network using 3D modified Fisher Vectors.

This work will be presented at IROS 2018 in Madrid, Spain, and will also be published in IEEE Robotics and Automation Letters.

Modern robotic systems are often equipped with a direct 3D data acquisition device, e.g. LiDAR, which provides a rich 3D point cloud representation of the surroundings. This representation is commonly used for obstacle avoidance and mapping. Here, we propose a new approach for using point clouds for another critical robotic capability: semantic understanding of the environment (i.e. object classification). Convolutional neural networks (CNNs), which perform extremely well for object classification in 2D images, do not extend easily to the analysis of 3D point clouds because of the clouds' irregular format and varying number of points. The common solution of transforming the point cloud data into a 3D voxel grid faces a severe accuracy vs. memory-size tradeoff. In this paper we propose a novel, intuitively interpretable 3D point cloud representation called 3D Modified Fisher Vectors (3DmFV). Our representation is hybrid, as it combines a coarse discrete grid structure with continuous generalized Fisher vectors. Using the grid enables us to design a new CNN architecture for real-time point cloud classification. In a series of performance analysis experiments, we demonstrate results that are competitive with, or better than, the state of the art on challenging benchmark datasets, while maintaining robustness to various data corruptions.
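To make the representation concrete, here is a minimal NumPy sketch of the core idea, under simplifying assumptions: an 8x8x8 grid of isotropic Gaussians with uniform weights, and only the mean-gradient component of the Fisher vector. The helper names grid_gmm and fv_stats are illustrative, not the repository's API (see get_3DmFV in utils.py and get_3dmfv in tf_util.py for the actual implementation):

import numpy as np

def grid_gmm(k=8):
    # k^3 isotropic Gaussians with uniform weights on a uniform grid in [-1, 1]^3
    centers = (np.arange(k) + 0.5) / k * 2.0 - 1.0
    mu = np.stack(np.meshgrid(centers, centers, centers), -1).reshape(-1, 3)
    sigma = 1.0 / k                       # one shared std per axis
    w = np.full(k ** 3, 1.0 / k ** 3)
    return w, mu, sigma

def fv_stats(points, w, mu, sigma):
    # Per-Gaussian Fisher-vector-style statistics for one cloud (N x 3)
    d = (points[:, None, :] - mu[None, :, :]) / sigma   # N x K x 3 normalized offsets
    p = np.exp(-0.5 * np.sum(d ** 2, axis=-1))          # unnormalized likelihoods, N x K
    q = p * w[None, :]
    q /= q.sum(axis=1, keepdims=True) + 1e-12           # soft assignment (posterior), N x K
    g_mu = q[:, :, None] * d                            # gradient w.r.t. the means, N x K x 3
    # Aggregate each component over the points with symmetric functions
    # (3DmFV uses max, min and sum; the full vector also includes the
    #  weight and covariance gradients, omitted here for brevity).
    return np.concatenate([g_mu.max(0), g_mu.min(0), g_mu.sum(0)], axis=-1)  # K x 9

w, mu, sigma = grid_gmm(k=8)
cloud = np.random.rand(1024, 3) * 2.0 - 1.0             # toy cloud in [-1, 1]^3
feat = fv_stats(cloud, w, mu, sigma)
print(feat.reshape(8, 8, 8, -1).shape)                  # (8, 8, 8, 9)

The resulting K x 9 matrix reshapes into an 8 x 8 x 8 voxel-like tensor with 9 feature channels, which is exactly the kind of grid-structured input a 3D CNN can consume.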

Citation

If you find our work useful in your research, please cite our work:

@article{ben20183dmfv,
  title={3DmFV: Three-Dimensional Point Cloud Classification in Real-Time Using Convolutional Neural Networks},
  author={Ben-Shabat, Yizhak and Lindenbaum, Michael and Fischer, Anath},
  journal={IEEE Robotics and Automation Letters},
  volume={3},
  number={4},
  pages={3145--3152},
  year={2018},
  publisher={IEEE}
}

Preprint:

@article{ben20173d,
  title={3D Point Cloud Classification and Segmentation using 3D Modified Fisher Vector Representation for Convolutional Neural Networks},
  author={Ben-Shabat, Yizhak and Lindenbaum, Michael and Fischer, Anath},
  journal={arXiv preprint arXiv:1711.08241},
  year={2017}
}

Installation

Install TensorFlow. You will also need h5py and scikit-learn.
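For example, with pip (package names only, a minimal sketch of the setup rather than a pinned environment; pin the versions below if you want to reproduce the tested configuration):

pip install tensorflow h5py scikit-learn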

The code was tested with Python 2.7, TensorFlow 1.2.1, CUDA 8.0.61 and cuDNN 5105 on Ubuntu 16.04. Users have reported that it also runs on Windows 10 with Python 3.6, TensorFlow 1.12, CUDA 9.0.176 and cuDNN 9.0.

This code uses the infrastructure of PointNet as a template; however, substantial changes have been made to the CNN model and the point cloud representation.

Download the ModelNet40 data from this link.

Classification

To train the point cloud classification model on ModelNet40 with the default settings, run:

python train_cls.py

Alternatively, you can tweak the GMM parameters (e.g. the number of Gaussians) or the learning parameters (e.g. the learning rate), using:

python train_cls.py --gpu=0 --log_dir='log' --batch_size=64 --num_point=1024 --num_gaussians=8 --gmm_variance=0.0156 --gmm_type='grid' --learning_rate=0.001 --model='voxnet_pfv' --max_epoch=200 --momentum=0.9 --optimizer='adam' --decay_step=200000 --weight_decay=0.0 --decay_rate=0.7

The model will be saved to the log directory. Consecutive runs with the same directory name are saved in numbered subdirectories to prevent accidental overwriting of trained models.
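The run numbering amounts to something like the following sketch (illustrative only, not the repository's exact code; the helper name unique_log_dir is hypothetical):

import os

def unique_log_dir(base='log'):
    # Return 'log' on a first run; on later runs return 'log_1', 'log_2', ...
    if not os.path.exists(base):
        return base
    i = 1
    while os.path.exists('%s_%d' % (base, i)):
        i += 1
    return '%s_%d' % (base, i)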

License

Our code is released under the MIT License (see the LICENSE file for details).

Disclaimer

I am a mechanical engineer, not a software engineer, and git is relatively new to me. Therefore, if you find an error anywhere or have a recommendation for improvement, I would appreciate your advice.


3dmfv-net's Issues

Run-time issue

I used a Tesla T4 GPU and measured about 100 ms per point cloud (for 1024 points and the 8x8x8 Gaussian grid). Where does the difference come from? GPU performance? CUDA version?
My environment is Ubuntu 14 + CUDA 10 + cuDNN 7 + T4 GPU + TensorFlow 1.13.1.
I also found that the network structure depends on batch_size, so to evaluate the time consumption I trained with batch size = 1.

visualization.py, line 640: tf_util.pc_svd

Thanks a lot for your work.
When I run the visualization.py file, I get an error about the tf_util.pc_svd function. I have looked through the tf_util.py file, but I cannot find the pc_svd function there.
I would appreciate any advice.

Couldn't achieve the performance described in the paper

I have trained the 3DmFV model several times using the default parameters, but I haven't achieved the performance described in the paper. My best classification accuracy was about 93.3% on ModelNet10 and 88.9% on ModelNet40. Could you list your training configuration? Thank you.

Missing files

Hello Itzik,

I noticed that a few files are missing in the repo. I was able to find them in the PointNet repo, but you might want to include them.

List of files missing:
model\transform_net.py
tf_ops\tf_auctionmatch.py
tf_ops\tf_sampling.py

About real-time performance

Thanks for your work.
I would like to know in what environment the network runs in real time, and how many frames per second it can achieve. Thanks.

Difference in the per-point probability computation between get_3DmFV in utils.py and get_3dmfv in tf_util.py

Thanks for sharing your work.
In utils.py and tf_util.py you implemented 3DmFV using NumPy and TensorFlow, respectively. However, I found a small difference between them.
In utils.py (lines 292 to 294), you directly assign:
w_p = p_per_point
Q = w_p # enforcing the assumption that the sum is 1
Q_per_d = np.tile(np.expand_dims(Q, -1), [1, 1, 1, D])
while in tf_util.py (lines 610 to 613), you do this:
w_p = tf.multiply(p_per_point,batch_w)
Q = w_p/tf.tile(tf.expand_dims(tf.reduce_sum(w_p, axis=-1), -1),[1, 1, n_gaussians])
Q_per_d = tf.tile(tf.expand_dims(Q, -1), [1, 1, 1, D])
I think the two operations should give the same result: the elements of batch_w all have the same value, so the same w appears in both the numerator and the denominator of the following equation and cancels out; thus enforcing the assumption explicitly (lines 610 and 612 in tf_util.py) should not be necessary.
Q_k = (w_k * p_k) / sum_j (w_j * p_j)
However, when I test with the following code (the test data is car_130.txt from your 3DmFV-Tutorial-master), I get different results (by the above analysis, I think Q_per_d, Q_per_d0 and Q_per_d1 should all be equal to each other).
import numpy as np
import tensorflow as tf
import utils

n_gaussians = 8
variance = np.square(1.0 / n_gaussians)
gmm = utils.get_grid_gmm(subdivisions=[n_gaussians, n_gaussians, n_gaussians], variance=variance)
points = utils.load_point_cloud_from_txt('car_130.txt')
points = np.expand_dims(points, axis=0)
n_batches = points.shape[0]
n_points = points.shape[1]
n_gaussians = gmm.means_.shape[0]
D = gmm.means_.shape[1]

# Expand dimensions for batch compatibility
batch_sig = np.tile(np.expand_dims(gmm.covariances_, 0), [n_points, 1, 1])  # n_points X n_gaussians X D
batch_sig = np.tile(np.expand_dims(batch_sig, 0), [n_batches, 1, 1, 1])  # n_batches X n_points X n_gaussians X D
batch_mu = np.tile(np.expand_dims(gmm.means_, 0), [n_points, 1, 1])  # n_points X n_gaussians X D
batch_mu = np.tile(np.expand_dims(batch_mu, 0), [n_batches, 1, 1, 1])  # n_batches X n_points X n_gaussians X D
batch_w = np.tile(np.expand_dims(np.expand_dims(gmm.weights_, 0), 0),
                  [n_batches, n_points, 1])  # n_batches X n_points X n_gaussians - should check what happens when weights change
batch_points = np.tile(np.expand_dims(points, -2),
                       [1, 1, n_gaussians, 1])  # n_batches X n_points X n_gaussians X D - one copy of each point per Gaussian

# NumPy version of the per-point probability (as in utils.get_3DmFV)
p_per_point1 = (1.0 / (np.power(2.0 * np.pi, D / 2.0) * np.power(batch_sig[:, :, :, 0], D))) * np.exp(
    -0.5 * np.sum(np.square((batch_points - batch_mu) / batch_sig), axis=3))
Q_per_d1 = np.tile(np.expand_dims(p_per_point1, -1), [1, 1, 1, D])

# TensorFlow version (as in tf_util.get_3dmfv)
mvn = tf.contrib.distributions.MultivariateNormalDiag(loc=batch_mu, scale_diag=batch_sig)
p_per_point = mvn.prob(batch_points)  # n_batches X n_points X n_gaussians

# Normalized posterior (tf_util.py, lines 610 to 613)
w_p = tf.multiply(p_per_point, batch_w)
Q = w_p / tf.tile(tf.expand_dims(tf.reduce_sum(w_p, axis=-1), -1), [1, 1, n_gaussians])
Q_per_d0 = tf.tile(tf.expand_dims(Q, -1), [1, 1, 1, D])

# Unnormalized version (utils.py, lines 292 to 294)
w_p = p_per_point
Q = w_p
Q_per_d = tf.tile(tf.expand_dims(Q, -1), [1, 1, 1, D])

with tf.Session() as sess:
    p_per_point = p_per_point.eval()
    Q_per_d = Q_per_d.eval()
    Q_per_d0 = Q_per_d0.eval()

difp = p_per_point - p_per_point1
difQ_per_d = Q_per_d - Q_per_d1
difQ_per_d0 = Q_per_d - Q_per_d0
print(sum(sum(np.squeeze(difp, axis=0))))        # -1.4404886240351694e-10
print(sum(sum(np.squeeze(difQ_per_d, axis=0))))  # [-81559.81403891 -81559.81403891 -81559.81403891]
print(sum(sum(np.squeeze(difQ_per_d0, axis=0)))) # [-1.44048862e-10 -1.44048862e-10 -1.44048862e-10]
I think get_3dmfv in tf_util.py computes the right 3DmFV, but even after this analysis I haven't found what is wrong in get_3DmFV (in utils.py). Could you tell me the reason? Thank you!
