
tangent_conv's People

Contributors

kmader, syncle, tatarchm


tangent_conv's Issues

ScanNet issue

I am trying to reproduce the ScanNet experiments, but I have run into a problem.

I successfully built Open3D and set the path correctly. Then I processed the data with the get_data script. However, when I start training with

python tc.py experiments/scannet/dhnrgb/config.json --train

it reports:

<my_dir>/data/param/scannet/0p05/scene0568_00/0/scale_0.npz is not found.
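To see how widespread the problem is, I listed which precomputed scale files are missing with this small script (the directory layout is taken from the error message above; substitute your own base path):

import os

# List scenes whose first precomputed scale file is missing.
# "<my_dir>" is the placeholder from the error message above; replace it.
param_root = "<my_dir>/data/param/scannet/0p05"
for scene in sorted(os.listdir(param_root)):
    scale_path = os.path.join(param_root, scene, "0", "scale_0.npz")
    if not os.path.isfile(scale_path):
        print("missing:", scale_path)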

May I know whether I did something wrong, or how I should deal with this problem?

Thanks.

Bug when decreasing the batch size value

Hello,

I've been testing your program on the Stanford dataset. During the training phase I ran into a memory issue (not enough memory), so I decreased tt_batch_size from 200000 to 5000, which might be overly small, but it was for the sake of the experiment. Training went well and finished without error, but when I launch the test program I get the following error:

Traceback (most recent call last):
  File "tc.py", line 31, in <module>
    run_net(config, "test")
  File "util/model.py", line 402, in run_net
    nn.test()
  File "util/model.py", line 367, in test
    out = self.sess.run(self.output, feed_dict=self.get_feed_dict(b))
  File "util/model.py", line 135, in get_feed_dict
    ret_dict = {self.c1_ind: expand_dim_to_batch2(b.conv_ind[0], bs),
  File "util/cloud.py", line 26, in expand_dim_to_batch2
    out_arr[0:sp[0], :] = array
ValueError: could not broadcast input array from shape (45210,9) into shape (5000,9)

It seems the batch size is too small. Is there a lower limit on this batch size, and how can I determine it?
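From the traceback, expand_dim_to_batch2 appears to zero-pad each per-batch array up to tt_batch_size rows, which can only work when a batch holds at most that many points. A minimal sketch of what I think is happening (names taken from the traceback; the real implementation may differ):

import numpy as np

def expand_dim_to_batch2(array, bs):
    # Zero-pad `array` (n x d) up to `bs` rows. If n > bs, the
    # assignment cannot broadcast -- exactly the ValueError above.
    sp = array.shape
    out_arr = np.zeros((bs, sp[1]), dtype=array.dtype)
    out_arr[0:sp[0], :] = array
    return out_arr

expand_dim_to_batch2(np.ones((45210, 9)), 5000)  # raises ValueError

If that reading is right, tt_batch_size must be at least the largest per-batch point count.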

Thanks in advance

Pretrained weights

@kmader @tatarchm @syncle Thanks for sharing your work. I have a few queries:

  1. Is the implementation complete?
  2. Can you share the pre-trained weights / model?
  3. Was it trained on the SemanticKITTI dataset?

Strange output when running --precompute

Hi, authors,

I ran python tc.py /data/code6/tangent_conv/experiments/semantic3d/dhnrgb/config.json --precompute. However, it only shows the following output:

==========================================================

(tf1.4_gpu) root@milton-ThinkCentre-M93p:/data/code6/tangent_conv# python tc.py /data/code6/tangent_conv/experiments/semantic3d/dhnrgb/config.json --precompute
:: precompute
(tf1.4_gpu) root@milton-ThinkCentre-M93p:/data/code6/tangent_conv#

==========================================================

Nothing is generated in "data/param/semantic3d/0p1/".
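To rule out a path problem, I dumped every path-like string in the config and checked whether it exists (the key names vary between experiments, so this just walks all string values):

import json, os

# Print each path-like value in the config and whether it exists;
# precompute seems to exit silently when it finds no scans to process.
cfg_path = "/data/code6/tangent_conv/experiments/semantic3d/dhnrgb/config.json"
with open(cfg_path) as f:
    config = json.load(f)

for key, value in config.items():
    if isinstance(value, str) and "/" in value:
        print(key, "->", value, "exists:", os.path.exists(value))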

Any suggestions on how to fix this issue?

Building Open3D requires RandR

The error message "The RandR headers were not found" comes up when running cmake; it might be helpful to add this dependency to the requirements.
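On Debian/Ubuntu the missing headers come from the X11 development packages, so installing them before re-running cmake should resolve it (package names assume an apt-based system; this matches Open3D's own dependency list):

sudo apt-get install xorg-dev libglu1-mesa-dev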

Training on a different dataset

Could you please provide some documentation on how to train on my own dataset, i.e. how to prepare the training data?
Thank you.
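Until then, here is the minimal sketch I pieced together from what the get_data scripts appear to produce: one scan.pcd with XYZ + RGB, plus a parallel cloud whose first color channel carries the per-point label. That encoding is my assumption, based on precompute.py reading labels from colors[:, 0], and the file name scan_labels.pcd is illustrative:

import numpy as np
import open3d as o3d  # current API; the repo pins a custom Open3D fork

# points: (n, 3) float, colors: (n, 3) float in [0, 1], labels: (n,) int
points = np.load("my_points.npy")
colors = np.load("my_colors.npy")
labels = np.load("my_labels.npy")

scan = o3d.geometry.PointCloud()
scan.points = o3d.utility.Vector3dVector(points)
scan.colors = o3d.utility.Vector3dVector(colors)
o3d.io.write_point_cloud("scan.pcd", scan)

# Store the label in the first color channel, mirroring how
# precompute.py reads it back; labels may need scaling into [0, 1]
# to survive the color channel, depending on the Open3D version.
label_colors = np.zeros((len(labels), 3))
label_colors[:, 0] = labels
label_cloud = o3d.geometry.PointCloud()
label_cloud.points = o3d.utility.Vector3dVector(points)
label_cloud.colors = o3d.utility.Vector3dVector(label_colors)
o3d.io.write_point_cloud("scan_labels.pcd", label_cloud)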

"'model' object has no attribute 'tr_batch_array'"

Hello,

My computer has 16 GB of RAM. When converting sg27_station1_intensity_rgb.txt to scan.pcd, it ran out of memory, so I only converted a portion of the data for testing. However, the test then fails with the error "'model' object has no attribute 'tr_batch_array'". How can I solve this problem?
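In case it helps anyone with the same RAM limit, this is how I later converted the file in chunks instead of loading it all at once (a rough sketch; the Semantic3D column order x y z intensity r g b is assumed):

import numpy as np

# Read the huge txt in fixed-size chunks so peak memory stays bounded.
chunks = []
with open("sg27_station1_intensity_rgb.txt") as f:
    while True:
        chunk = np.loadtxt(f, max_rows=1000000, ndmin=2)
        if chunk.size == 0:
            break
        # Keep xyz + rgb only and downcast to float32 to halve memory.
        chunks.append(chunk[:, [0, 1, 2, 4, 5, 6]].astype(np.float32))
points = np.concatenate(chunks)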

name 'scan' is not defined

@tatarchm First of all, thank you for your great work. I'm trying to train on a different outdoor dataset, so I followed the semantic3d experiment. However, I got the following error during training:

joblib.externals.loky.process_executor._RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/joblib/externals/loky/process_executor.py", line 420, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.5/dist-packages/joblib/_parallel_backends.py", line 563, in __call__
    return self.func(*args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/joblib/parallel.py", line 261, in __call__
    for func, args, kwargs in self.items]
  File "/usr/local/lib/python3.5/dist-packages/joblib/parallel.py", line 261, in <listcomp>
    for func, args, kwargs in self.items]
  File "util/cloud.py", line 232, in get_scan_part_out
    num_scales = len(scan.clouds)
NameError: name 'scan' is not defined
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "tc.py", line 26, in <module>
    run_net(config, "train")
  File "util/model.py", line 399, in run_net
    nn.precompute_validation_batches()
  File "util/model.py", line 114, in precompute_validation_batches
    batch_array = get_batch_array(test_scan, self.par)
  File "util/cloud.py", line 325, in get_batch_array
    delayed(get_scan_part_out)(par, pts[i]) for i in range(0, arr_size))
  File "/usr/local/lib/python3.5/dist-packages/joblib/parallel.py", line 996, in __call__
    self.retrieve()
  File "/usr/local/lib/python3.5/dist-packages/joblib/parallel.py", line 899, in retrieve
    self._output.extend(job.get(timeout=self.timeout))
  File "/usr/local/lib/python3.5/dist-packages/joblib/_parallel_backends.py", line 517, in wrap_future_result
    return future.result(timeout=timeout)
  File "/usr/lib/python3.5/concurrent/futures/_base.py", line 405, in result
    return self.__get_result()
  File "/usr/lib/python3.5/concurrent/futures/_base.py", line 357, in __get_result
    raise self._exception
NameError: name 'scan' is not defined

In cloud.py, I noticed that the function get_scan_part_out is called by get_batch_array. As an experiment, I added a parameter, def get_scan_part_out(scan, par, point=None, sample_type='POINT'):, and passed the argument through with delayed(get_scan_part_out)(scan, par, pts[i]) for i in range(0, arr_size).
However, I then ran into another error:

joblib.externals.loky.process_executor._RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/joblib/externals/loky/process_executor.py", line 420, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.5/dist-packages/joblib/_parallel_backends.py", line 563, in __call__
    return self.func(*args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/joblib/parallel.py", line 261, in __call__
    for func, args, kwargs in self.items]
  File "/usr/local/lib/python3.5/dist-packages/joblib/parallel.py", line 261, in <listcomp>
    for func, args, kwargs in self.items]
  File "util/cloud.py", line 249, in get_scan_part_out
    [k_valid, idx_valid, _] = scan.trees[0].search_radius_vector_3d(random_point, radius=par.valid_rad)
RuntimeError: search_radius_vector_3d() error!
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "tc.py", line 26, in <module>
    run_net(config, "train")
  File "util/model.py", line 399, in run_net
    nn.precompute_validation_batches()
  File "util/model.py", line 114, in precompute_validation_batches
    batch_array = get_batch_array(test_scan, self.par)
  File "util/cloud.py", line 329, in get_batch_array
    delayed(get_scan_part_out)(scan, par, pts[i]) for i in range(0, arr_size))
  File "/usr/local/lib/python3.5/dist-packages/joblib/parallel.py", line 996, in __call__
    self.retrieve()
  File "/usr/lib/python3.5/contextlib.py", line 77, in __exit__
    self.gen.throw(type, value, traceback)
  File "/usr/local/lib/python3.5/dist-packages/joblib/_parallel_backends.py", line 148, in retrieval_context
    yield
  File "/usr/local/lib/python3.5/dist-packages/joblib/parallel.py", line 996, in __call__
    self.retrieve()
  File "/usr/local/lib/python3.5/dist-packages/joblib/parallel.py", line 899, in retrieve
    self._output.extend(job.get(timeout=self.timeout))
  File "/usr/local/lib/python3.5/dist-packages/joblib/_parallel_backends.py", line 517, in wrap_future_result
    return future.result(timeout=timeout)
  File "/usr/lib/python3.5/concurrent/futures/_base.py", line 405, in result
    return self.__get_result()
  File "/usr/lib/python3.5/concurrent/futures/_base.py", line 357, in __get_result
    raise self._exception
RuntimeError: search_radius_vector_3d() error!

I'm using Ubuntu 16.04, Python 3.5, and TensorFlow 1.3.0, and I'm new to Python. Could you please tell me how to resolve this, or what might be causing it? Thank you very much.
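For anyone else hitting this: my current guess is that get_scan_part_out relies on a module-level scan object that the default loky worker processes do not inherit, and that the Open3D KD-tree inside scan does not survive being pickled into a subprocess either (hence the search_radius_vector_3d() error once it is passed explicitly). One workaround that should avoid both problems is switching the joblib call in get_batch_array to the threading backend, so the workers share the parent's objects (a sketch using names from the traceback, not the repo's actual code):

from joblib import Parallel, delayed

# Threads share the parent's memory, so the global `scan` and its
# Open3D KD-trees remain valid inside get_scan_part_out.
out = Parallel(n_jobs=4, backend="threading")(
    delayed(get_scan_part_out)(par, pts[i]) for i in range(0, arr_size))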

Problem precomputing S3DIS Area_5_hallway_6

In the precompute step, when running python tc.py experiments/stanford/dhnrgb/config.json --precompute, I get:

:: precompute
processing scan: Area_5_hallway_6, rot: 0
Traceback (most recent call last):
  File "tc.py", line 21, in <module>
    run_precompute(config)
  File "util/precompute.py", line 93, in run_precompute
    labels_gt=np.reshape(np.asarray(pcd_labels_down.point_cloud.colors)[:, 0], (num_points)),
  File "/usr0/home/tkhot/anaconda3/envs/tensorflow/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 257, in reshape
    return _wrapfunc(a, 'reshape', newshape, order=order)
  File "/usr0/home/tkhot/anaconda3/envs/tensorflow/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 52, in _wrapfunc
    return getattr(obj, method)(*args, **kwds)
ValueError: cannot reshape array of size 0 into shape (46214,)

Here, pcd_labels_down.point_cloud.colors is empty, even though in the preceding step, before downsampling, pcd_labels.colors is correctly sized.
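A standalone check that helps isolate the failure: load the labels cloud, downsample it, and compare the color counts before and after (stock Open3D API here; the repo pins a custom fork, so the call inside precompute.py differs):

import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan_labels.pcd")  # illustrative path
print("before:", len(pcd.points), "points,", len(pcd.colors), "colors")
down = pcd.voxel_down_sample(voxel_size=0.05)
print("after: ", len(down.points), "points,", len(down.colors), "colors")
# If `colors` is already empty before downsampling, the reshape in
# precompute.py is guaranteed to fail with size 0.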

Test on unclassified data sets

Hello,

Once a model has been trained, is there a way to apply it to unclassified data? As far as I can tell, the configuration file does not differentiate the validation set (which needs valid labels) from the test set (which, in real-life applications, could be unclassified). I have tried including test sets with all labels equal to 0 (unclassified), but in that case precomputing the validation batches does not work.

Did I miss something, or is it simply not possible with the current framework to classify data sets with unknown labeling?
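One more workaround I am considering: stamping every point with an arbitrary valid class id (instead of 0) purely to get past the validity filter, then ignoring the "ground truth" at evaluation time (a sketch; the label-in-color-channel encoding is my assumption from precompute.py, and the paths are illustrative):

import numpy as np
import open3d as o3d

# Hypothetical workaround: give every point class 1 so the validity
# check no longer rejects the whole scan; predictions are then read
# from the network output alone.
pcd = o3d.io.read_point_cloud("unlabeled_scan.pcd")
fake_colors = np.zeros((len(pcd.points), 3))
fake_colors[:, 0] = 1  # any nonzero class id; may need scaling to [0, 1]
fake = o3d.geometry.PointCloud()
fake.points = pcd.points
fake.colors = o3d.utility.Vector3dVector(fake_colors)
o3d.io.write_point_cloud("unlabeled_scan_labels.pcd", fake)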
