
sph3d-gcn's Issues

Question about partition along the radial (r) dimension

Thanks for your nice paper and code; they help me a lot.

Question 1

I’m trying to figure out how to partition along the radial dimension. In the paper, Section 3.1 “Spherical Convolutions” contains the sentence:

We allow the partitions along the radial (r) dimension to be non-uniform because the cubic volume growth for large radius values can be undesirable

In the source code, however, it looks like the partitions along the radial dimension are uniform:
https://github.com/hlei-ziyan/SPH3D-GCN/blob/27a0629b908e736d28b69723f333af29f63bea5c/tf_ops/buildkernel/tf_buildkernel_gpu.cu#L68

Could you share some experience with, or a detailed method for, non-uniform partitions along the radial dimension?
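
For context only (this is not the authors' implementation, which uses uniform spacing in the released kernel): one common way to make the radial partitions non-uniform is to place the bin edges so that each spherical shell covers roughly the same volume, which keeps the outer bins from growing cubically. A minimal NumPy sketch, assuming radius is the neighborhood radius and n the number of radial bins:

import numpy as np

def radial_bin_edges(radius, n, mode="equal_volume"):
    """Return n+1 bin edges in [0, radius] for the radial dimension.

    'uniform'      : equally spaced edges (what the released kernel appears to do).
    'equal_volume' : every spherical shell encloses the same volume, so the
                     outer bins become thinner instead of dominating.
    """
    if mode == "uniform":
        return np.linspace(0.0, radius, n + 1)
    if mode == "equal_volume":
        # Shell k ends at radius * (k/n)^(1/3), giving all shells equal volume.
        return radius * (np.arange(n + 1) / float(n)) ** (1.0 / 3.0)
    raise ValueError(mode)

edges = radial_bin_edges(0.2, 3, mode="equal_volume")
dist = 0.15                                                 # distance of one neighbor
rid = int(np.searchsorted(edges, dist, side="right")) - 1   # its radial bin index
print(edges, rid)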


Question 2

In the source code, the 3D distance (variable dist) is used for the partitions along the radial dimension, and it is computed in:
https://github.com/hlei-ziyan/SPH3D-GCN/blob/27a0629b908e736d28b69723f333af29f63bea5c/tf_ops/nnquery/tf_nnquery_gpu.cu#L47

https://github.com/hlei-ziyan/SPH3D-GCN/blob/27a0629b908e736d28b69723f333af29f63bea5c/tf_ops/nnquery/tf_nnquery_gpu.cu#L54

…
dist3D = sqrtf(dist3D); //sqrt

if (dist3D<radius && fabs(dist3D-radius)>1e-6) // find a neighbor in range
{
    if (s<nnSample)
    {
        nnIndex[i*M*nnSample+j*nnSample+s] = k;
        nnDist[i*M*nnSample+j*nnSample+s] = sqrt(dist3D);
    }
    …
}

dist3D is already the result of sqrtf. When it is saved to nnDist, sqrt is applied a second time. Why is sqrt used twice? Is there any trick behind this?
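
As a purely illustrative numeric check (not from the repository), this is what the two calls produce: after the first sqrtf, dist3D already equals the Euclidean distance, so the value written to nnDist is the square root of that distance (the fourth root of the squared distance) rather than the distance itself.

import numpy as np

d2 = 4.0                   # squared distance dx*dx + dy*dy + dz*dz
dist3D = np.sqrt(d2)       # 2.0   -> the Euclidean distance after sqrtf
nnDist = np.sqrt(dist3D)   # ~1.41 -> what the snippet above appears to store
print(dist3D, nnDist)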

Ruemonge

Thank you for your code. I have some questions about the Ruemonge dataset.

  1. I downloaded the dataset, but I couldn't find the 'pcl.txt' file that is loaded at line 13 of SPH3D-GCN/preprocessing/ruemonge2014_prepare_data.m: data = load('pcl.txt');.
  2. According to your paper, the point clouds are split into blocks. When I load the pcl split.mat file, some blocks contain fewer than 8192 points. How did you deal with this? Repeat sampling, as in the sketch below?
     Looking forward to your reply!
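
Regarding point 2: one common way to handle blocks with fewer than 8192 points is to pad them by re-sampling existing points with replacement. This is only a guess at the intended behaviour, not the authors' code; a minimal NumPy sketch:

import numpy as np

def pad_block(points, labels, target=8192):
    """Pad a block to `target` points by repeating randomly chosen points."""
    n = points.shape[0]
    if n >= target:
        return points[:target], labels[:target]
    extra = np.random.choice(n, target - n, replace=True)   # indices to duplicate
    idx = np.concatenate([np.arange(n), extra])
    return points[idx], labels[idx]

pts = np.random.rand(5000, 3).astype(np.float32)   # toy block with too few points
lbl = np.zeros(5000, dtype=np.int32)
pts_padded, lbl_padded = pad_block(pts, lbl)
print(pts_padded.shape, lbl_padded.shape)          # (8192, 3) (8192,)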

About the ModelNet40 dataset

Thank you for the author's reply!
Question: is there no .m program for preparing the ModelNet40 dataset?
Reply: for ModelNet40, you can use the data generated by PointNet or PointNet++.

Memory issue in training code

Hello.

I have tried to run your code on the S3DIS dataset. However, with 128 GB of memory, the process keeps getting killed while the shuffle buffer for the evaluation of the 2nd epoch is being filled. When monitoring memory usage, it seems that the buffer never gets cleared; it just keeps filling up at the beginning of each epoch.
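
In case it is useful as a data point: in TF 1.x, tf.data keeps buffer_size elements of the shuffle buffer resident in host memory for each dataset. Assuming the input pipeline is built with tf.data.Dataset.shuffle (the buffer mentioned above), one possible mitigation is to pass a smaller buffer_size. A toy sketch of the idea, not the repository's actual pipeline:

import tensorflow as tf  # TF 1.x API

# Toy stand-in for the input pipeline; the real one reads preprocessed
# point-cloud blocks. Only the shuffle call is the point of this sketch.
dataset = tf.data.Dataset.from_tensor_slices(tf.random_uniform([10000, 3]))
# A smaller buffer_size bounds the host-memory footprint at the cost of a
# weaker shuffle; buffering a whole split can exhaust RAM on large datasets.
dataset = dataset.shuffle(buffer_size=2048)
dataset = dataset.batch(16).prefetch(1)
iterator = dataset.make_one_shot_iterator()
next_batch = iterator.get_next()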

module 'shapenet_config' has no attribute 'multiscale'

When I run train_shapenet.py, I get several errors, such as module 'shapenet_config' has no attribute 'multiscale' and build_graph_deconv() got an unexpected keyword argument 'nnsearch'.
It seems that the wrong parameters are passed to the functions s3g_util.build_graph and s3g_util.build_graph_deconv in:
https://github.com/hlei-ziyan/SPH3D-GCN/blob/6ca77b5906a6f3a228cb655156bfa7a58ee86e9c/models/SPH3D_shapenet.py#L51-L54
https://github.com/hlei-ziyan/SPH3D-GCN/blob/6ca77b5906a6f3a228cb655156bfa7a58ee86e9c/models/SPH3D_shapenet.py#L89-L92
How to fix these?

Questions on testing with your trained models

Hi,
I have downloaded the S3DIS dataset and run preprocessing/s3dis_prepare_data.m.
I have also downloaded your trained models for S3DIS.
But I have no idea what the model_name xxx should be when I run:
python evaluate_s3dis_with_overlap.py --model_name=xxxx
Thanks!

The experimental results from the source code do not reach the results reported in the paper.

Hi @hlei-ziyan, thanks for sharing your excellent work.
However, I found the experimental results on ModelNet40 were a little lower than those reported in the paper.
In my experiment, the average class accuracy with 10000 points is 88.3, and the instance accuracy is 90.2.
In your paper, the average class accuracy with 10000 points is 89.3, and the instance accuracy is 92.1.

My experiment platform:
Python 3.5
TensorFlow 1.12.0
CUDA 9.2
Ubuntu 16.04
GPU: NVIDIA 1080 Ti
Dataset:
modelnet40_normal_resampled (with 10000 points)
Net config and parameters:
the same as the config and parameters suggested in the paper.

If there is any problem, please let me know.

How to visualize the results (.mat files)?

Hi, thank you for sharing your work.
I want to visualize the segmentation results, but the results are .mat files and I don't know how to open them. Could you please help me? Thanks in advance.
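
Not an official answer, but .mat files can be read in Python with scipy.io.loadmat. A minimal sketch that lists the stored variables and scatter-plots the points coloured by predicted label; the key names 'data' and 'pred' are placeholders, so inspect mat.keys() to find the actual ones:

import scipy.io as sio
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3d projection)

mat = sio.loadmat('result.mat')   # path to one result file
print(mat.keys())                 # see which variables were saved

xyz = mat['data'][:, :3]          # placeholder key: point coordinates
labels = mat['pred'].ravel()      # placeholder key: per-point predicted labels

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(xyz[:, 0], xyz[:, 1], xyz[:, 2], c=labels, s=1, cmap='tab20')
plt.show()

Alternatively, .mat files open directly in MATLAB or Octave, and the points plus labels can be exported to .txt or .ply and viewed in CloudCompare or MeshLab.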

Training time

Hello, can you tell me the training time on the S3DIS dataset?

training and inference

@hlei-ziyan thanks for open-sourcing this wonderful work. I have a few queries:
Q1: Have you trained the architecture on other available datasets such as SemanticKITTI or similar 3D datasets?
Q2: If not, can we follow the same training pipeline? If you have, could you please share the pre-trained models?
Q3: Can we use the currently pre-trained models to test on a custom dataset with a lower point-cloud density?

Thanks in advance

Error: CUB segmented reduce error: invalid device function

Thanks for your great work!
I have a question: when I train modelnet40_cls, the following error occurs:

2020-11-15 20:56:49.388664
Traceback (most recent call last):
File "/home/gahho/anaconda3/envs/sph3dgcn/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
return fn(*args)
File "/home/gahho/anaconda3/envs/sph3dgcn/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/gahho/anaconda3/envs/sph3dgcn/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InternalError: CUB segmented reduce errorinvalid device function
[[{{node Max}} = Max[T=DT_FLOAT, Tidx=DT_INT32, keep_dims=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Sum, ArgMax/dimension)]]
[[{{node GroupCrossDeviceControlEdges_0/Adam/value/_82}} = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_7776_GroupCrossDeviceControlEdges_0/Adam/value", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/gahho/SPH3D-GCN/modelnet40_cls/train_modelnet.py", line 376, in
train()
File "/home/gahho/SPH3D-GCN/modelnet40_cls/train_modelnet.py", line 247, in train
train_one_epoch(sess, ops, next_train_element, train_writer)
File "/home/gahho/SPH3D-GCN/modelnet40_cls/train_modelnet.py", line 292, in train_one_epoch
ops['train_op'], ops['loss'], ops['pred']], feed_dict=feed_dict)
File "/home/gahho/anaconda3/envs/sph3dgcn/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 929, in run
run_metadata_ptr)
File "/home/gahho/anaconda3/envs/sph3dgcn/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "/home/gahho/anaconda3/envs/sph3dgcn/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
run_metadata)
File "/home/gahho/anaconda3/envs/sph3dgcn/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InternalError: CUB segmented reduce errorinvalid device function
[[node Max (defined at /home/gahho/SPH3D-GCN/models/SPH3D_modelnet.py:13) = Max[T=DT_FLOAT, Tidx=DT_INT32, keep_dims=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Sum, ArgMax/dimension)]]
[[{{node GroupCrossDeviceControlEdges_0/Adam/value/_82}} = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_7776_GroupCrossDeviceControlEdges_0/Adam/value", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]

Caused by op 'Max', defined at:
File "/home/gahho/SPH3D-GCN/modelnet40_cls/train_modelnet.py", line 376, in
train()
File "/home/gahho/SPH3D-GCN/modelnet40_cls/train_modelnet.py", line 161, in train
pred, end_points = MODEL.get_model(xyz_pl, training_pl, config=net_config)
File "/home/gahho/SPH3D-GCN/models/SPH3D_modelnet.py", line 42, in get_model
points = normalize_xyz(points)
File "/home/gahho/SPH3D-GCN/models/SPH3D_modelnet.py", line 13, in normalize_xyz
scale = tf.reduce_max(tf.reduce_sum(tf.square(points),axis=-1,keepdims=True),axis=1,keepdims=True)
File "/home/gahho/anaconda3/envs/sph3dgcn/lib/python3.5/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/home/gahho/anaconda3/envs/sph3dgcn/lib/python3.5/site-packages/tensorflow/python/ops/math_ops.py", line 1643, in reduce_max
name=name))
File "/home/gahho/anaconda3/envs/sph3dgcn/lib/python3.5/site-packages/tensorflow/python/ops/gen_math_ops.py", line 4641, in _max
name=name)
File "/home/gahho/anaconda3/envs/sph3dgcn/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/gahho/anaconda3/envs/sph3dgcn/lib/python3.5/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/home/gahho/anaconda3/envs/sph3dgcn/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
op_def=op_def)
File "/home/gahho/anaconda3/envs/sph3dgcn/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1770, in init
self._traceback = tf_stack.extract_stack()

InternalError (see above for traceback): CUB segmented reduce errorinvalid device function
[[node Max (defined at /home/gahho/SPH3D-GCN/models/SPH3D_modelnet.py:13) = Max[T=DT_FLOAT, Tidx=DT_INT32, keep_dims=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Sum, ArgMax/dimension)]]
[[{{node GroupCrossDeviceControlEdges_0/Adam/value/_82}} = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_7776_GroupCrossDeviceControlEdges_0/Adam/value", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]

When I set batch_size=1, the error does not occur, but the results are very bad.
I don't know how to fix this.
Thanks for your reply.

Data preprocessing for ModelNet40

Hello.

How are the .off models in ModelNet40 converted into .txt files? Is there a program that implements this step?
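
For reference only: the authors point to the data generated by PointNet/PointNet++ (see the ModelNet40 issue above), so the sketch below is just one generic way to turn an .off mesh into a .txt point list and may not match their exact pipeline. It parses the ASCII .off format and does area-weighted surface sampling with NumPy; the file names are hypothetical.

import numpy as np

def read_off(path):
    """Parse a simple ASCII .off file into (vertices, triangular faces)."""
    with open(path) as f:
        tokens = f.read().split()
    # Some ModelNet files glue the counts to the header, e.g. 'OFF123 456 0'.
    if tokens[0] == 'OFF':
        nv, nf, i = int(tokens[1]), int(tokens[2]), 4
    else:
        nv, nf, i = int(tokens[0][3:]), int(tokens[1]), 3
    verts = np.array(tokens[i:i + 3 * nv], dtype=np.float32).reshape(nv, 3)
    i += 3 * nv
    faces = []
    for _ in range(nf):
        k = int(tokens[i])                   # number of vertices in this face
        faces.append(tokens[i + 1:i + 4])    # keep the first three (triangle)
        i += k + 1
    return verts, np.array(faces, dtype=np.int64)

def sample_points(verts, faces, n=10000):
    """Area-weighted uniform sampling of n points on the mesh surface."""
    a, b, c = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    areas = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)
    fid = np.random.choice(len(faces), n, p=areas / areas.sum())
    u, v = np.random.rand(n, 1), np.random.rand(n, 1)
    flip = (u + v) > 1.0                     # fold points back into the triangle
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    return a[fid] + u * (b[fid] - a[fid]) + v * (c[fid] - a[fid])

verts, faces = read_off('airplane_0001.off')          # hypothetical input file
pts = sample_points(verts, faces, 10000)
np.savetxt('airplane_0001.txt', pts, fmt='%.6f', delimiter=',')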

The code runs very slowly

Thanks for your work. When I run it, one epoch takes about one hour, but from the log file it looks like one epoch should take only about 20 minutes, so I can't understand the difference. Can you help me?
This is the log for one epoch:
2020-12-03 20:29:16.650905
---- batch: 050 ----
mean loss: 184.534502
accuracy: 0.414029
---- batch: 100 ----
mean loss: 135.335496
accuracy: 0.530120
---- batch: 150 ----
mean loss: 129.747210
accuracy: 0.535523
---- batch: 200 ----
mean loss: 122.139866
accuracy: 0.558028
---- batch: 250 ----
mean loss: 119.617929
accuracy: 0.560829
---- batch: 300 ----
mean loss: 113.885079
accuracy: 0.584236
---- batch: 350 ----
mean loss: 116.340719
accuracy: 0.569333
---- batch: 400 ----
mean loss: 113.555371
accuracy: 0.580139
---- batch: 450 ----
mean loss: 110.236181
accuracy: 0.584645
---- batch: 500 ----
mean loss: 110.504616
accuracy: 0.594045
---- batch: 550 ----
mean loss: 104.735284
accuracy: 0.608489
---- batch: 600 ----
mean loss: 104.923058
accuracy: 0.601770
---- batch: 650 ----
mean loss: 106.618274
accuracy: 0.598362
---- batch: 700 ----
mean loss: 104.379048
accuracy: 0.612791
---- batch: 750 ----
mean loss: 102.553243
accuracy: 0.618261
---- batch: 800 ----
mean loss: 102.361754
accuracy: 0.611996
---- batch: 850 ----
mean loss: 102.780255
accuracy: 0.615874
---- batch: 900 ----
mean loss: 100.150999
accuracy: 0.626256
---- batch: 950 ----
mean loss: 104.170692
accuracy: 0.608262
---- batch: 1000 ----
mean loss: 102.701527
accuracy: 0.615849
---- batch: 1050 ----
mean loss: 101.427789
accuracy: 0.613831
---- batch: 1100 ----
mean loss: 104.226453
accuracy: 0.599298
---- batch: 1150 ----
mean loss: 97.109982
accuracy: 0.631632
---- batch: 1200 ----
mean loss: 99.082409
accuracy: 0.623211
---- batch: 1250 ----
mean loss: 98.161291
accuracy: 0.618052
---- batch: 1300 ----
mean loss: 93.044155
accuracy: 0.639523
---- batch: 1350 ----
mean loss: 90.239651
accuracy: 0.652435
---- batch: 1400 ----
mean loss: 90.905718
accuracy: 0.650580
---- batch: 1450 ----
mean loss: 90.796373
accuracy: 0.650573
---- batch: 1500 ----
mean loss: 88.180042
accuracy: 0.663508
---- batch: 1550 ----
mean loss: 90.931050
accuracy: 0.647735
---- batch: 1600 ----
mean loss: 90.572594
accuracy: 0.647632
---- batch: 1650 ----
mean loss: 83.347111
accuracy: 0.676880
---- batch: 1700 ----
mean loss: 88.313284
accuracy: 0.657798
---- batch: 1750 ----
mean loss: 82.861588
accuracy: 0.681264
---- batch: 1800 ----
mean loss: 89.783586
accuracy: 0.651877
---- batch: 1850 ----
mean loss: 84.404577
accuracy: 0.673862
---- batch: 1900 ----
mean loss: 87.348818
accuracy: 0.658631
---- batch: 1950 ----
mean loss: 83.427303
accuracy: 0.670092
---- batch: 2000 ----
mean loss: 88.491244
accuracy: 0.654787
---- batch: 2050 ----
mean loss: 84.942625
accuracy: 0.661988
---- batch: 2100 ----
mean loss: 84.637836
accuracy: 0.667242
---- batch: 2150 ----
mean loss: 86.843850
accuracy: 0.660539
---- batch: 2200 ----
mean loss: 85.992690
accuracy: 0.670484
---- batch: 2250 ----
mean loss: 86.092916
accuracy: 0.659830
---- batch: 2300 ----
mean loss: 82.865510
accuracy: 0.679619
---- batch: 2350 ----
mean loss: 82.640754
accuracy: 0.674528
---- batch: 2400 ----
mean loss: 81.347898
accuracy: 0.683257
---- batch: 2450 ----
mean loss: 83.726160
accuracy: 0.670507
---- batch: 2500 ----
mean loss: 82.711281
accuracy: 0.667460
---- batch: 2550 ----
mean loss: 85.248889
accuracy: 0.664610
---- batch: 2600 ----
mean loss: 79.271644
accuracy: 0.684864
---- batch: 2650 ----
mean loss: 82.488315
accuracy: 0.672837
---- batch: 2700 ----
mean loss: 81.616334
accuracy: 0.676569
---- batch: 2750 ----
mean loss: 83.177334
accuracy: 0.668547
---- batch: 2800 ----
mean loss: 81.139334
accuracy: 0.684465
---- batch: 2850 ----
mean loss: 80.436449
accuracy: 0.679211
---- batch: 2900 ----
mean loss: 80.295713
accuracy: 0.678259
---- batch: 2950 ----
mean loss: 80.749244
accuracy: 0.671857
---- batch: 3000 ----
mean loss: 80.518642
accuracy: 0.677207
---- batch: 3050 ----
mean loss: 77.829687
accuracy: 0.685728
---- batch: 3100 ----
mean loss: 81.392671
accuracy: 0.671245
---- batch: 3150 ----
mean loss: 76.950525
accuracy: 0.691033
---- batch: 3200 ----
mean loss: 79.833296
accuracy: 0.682424
---- batch: 3250 ----
mean loss: 81.639724
accuracy: 0.670625
---- batch: 3300 ----
mean loss: 77.314783
accuracy: 0.688428
---- batch: 3350 ----
mean loss: 76.034729
accuracy: 0.694535
---- batch: 3400 ----
mean loss: 78.178265
accuracy: 0.684664
---- batch: 3450 ----
mean loss: 75.660341
accuracy: 0.692333
---- batch: 3500 ----
mean loss: 74.944008
accuracy: 0.687972
---- batch: 3550 ----
mean loss: 77.615459
accuracy: 0.687553
---- batch: 3600 ----
mean loss: 77.393342
accuracy: 0.685459
---- batch: 3650 ----
mean loss: 80.323210
accuracy: 0.676606
---- batch: 3700 ----
mean loss: 77.831140
accuracy: 0.678844
---- batch: 3750 ----
mean loss: 73.645795
accuracy: 0.701312
---- batch: 3800 ----
mean loss: 73.109120
accuracy: 0.698930
---- batch: 3850 ----
mean loss: 72.719140
accuracy: 0.707183
---- batch: 3900 ----
mean loss: 76.973215
accuracy: 0.686412
---- batch: 3950 ----
mean loss: 72.995662
accuracy: 0.698651
---- batch: 4000 ----
mean loss: 74.334438
accuracy: 0.692604
---- batch: 4050 ----
mean loss: 71.758526
accuracy: 0.710445
---- batch: 4100 ----
mean loss: 73.972742
accuracy: 0.695303
---- batch: 4150 ----
mean loss: 70.600237
accuracy: 0.705352
---- batch: 4200 ----
mean loss: 71.107945
accuracy: 0.703613
training one batch require 791.24 milliseconds
2020-12-03 21:49:02.473758
---- EPOCH 000 EVALUATION ----
eval mean loss: 12.938971
eval overall accuracy: 0.732570
eval avg class acc: 0.566511
eval mIoU of other20: 0.432905
eval mIoU of wall: 0.602921
eval mIoU of floor: 0.916897
eval mIoU of cabinet: 0.323481
eval mIoU of bed: 0.547132
eval mIoU of chair: 0.734238
eval mIoU of sofa: 0.625162
eval mIoU of table: 0.548581
eval mIoU of door: 0.265242
eval mIoU of window: 0.233336
eval mIoU of bookshelf: 0.480115
eval mIoU of picture: 0.001725
eval mIoU of counter: 0.326607
eval mIoU of desk: 0.312228
eval mIoU of curtain: 0.325191
eval mIoU of refridgerator: 0.171487
eval mIoU of shower curtain: 0.188239
eval mIoU of toilet: 0.371836
eval mIoU of sink: 0.325391
eval mIoU of bathtub: 0.414665
eval mIoU of otherfurniture: 0.162511
eval mIoU of all classes: 0.395709
testing one batch require 334.10 milliseconds
Model saved in file: /home/disk1/hsf/SPH3D-GCN/log_scannet/model.ckpt-0

