tatarchm / tangent_conv
Tangent Convolutions for Dense Prediction in 3D
It is used here, but does not seem to be defined anywhere in the code repository and raises an error.
tangent_conv/util/precompute.py
Line 73 in 688fac5
The same applies to some of the other functions used in precompute.py.
I am trying to reproduce the ScanNet experiments, but have run into a problem.
I successfully built Open3D and set the path correctly, then processed the data with the get_data script. However, when I start training with
python tc.py experiments/scannet/dhnrgb/config.json --train
it reports that
<my_dir>/data/param/scannet/0p05/scene0568_00/0/scale_0.npz is not found.
May I know whether I did anything wrong, or how I should deal with this problem?
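One way to see whether the precompute step actually produced these files before training is a quick directory check. This is only a diagnostic sketch, assuming the layout from the error path above (data/param/scannet/0p05/<scene>/<rotation>/scale_0.npz), not code from the repository.

```python
# Diagnostic sketch: list precomputed parameter files that are missing.
# Assumes the layout seen in the error path; adjust param_root to your config.json.
import os

param_root = "data/param/scannet/0p05"  # taken from the error path above
missing = []
if not os.path.isdir(param_root):
    print("parameter directory does not exist:", param_root)
else:
    for scene in sorted(os.listdir(param_root)):
        scene_dir = os.path.join(param_root, scene)
        if not os.path.isdir(scene_dir):
            continue
        for rot in sorted(os.listdir(scene_dir)):
            scale_file = os.path.join(scene_dir, rot, "scale_0.npz")
            if not os.path.isfile(scale_file):
                missing.append(scale_file)
    print("missing precomputed files:", len(missing))
    for path in missing[:10]:
        print(path)
```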
Thanks.
Hello,
I've been trying to test your program using the Stanford dataset. During the training phase I ran into a memory issue (not enough memory), so I tried decreasing the value of tt_batch_size from 200000 to 5000 (which might be overly small, but it was for the sake of the experiment). Training went well and finished without error, but when I launch the test program, I get the following error:
Traceback (most recent call last):
File "tc.py", line 31, in <module>
run_net(config, "test")
File "util/model.py", line 402, in run_net
nn.test()
File "util/model.py", line 367, in test
out = self.sess.run(self.output, feed_dict=self.get_feed_dict(b))
File "util/model.py", line 135, in get_feed_dict
ret_dict = {self.c1_ind: expand_dim_to_batch2(b.conv_ind[0], bs),
File "util/cloud.py", line 26, in expand_dim_to_batch2
out_arr[0:sp[0], :] = array
ValueError: could not broadcast input array from shape (45210,9) into shape (5000,9)
It seems that the batch size is too small. Is there a lower limit on this batch size, and how can I find out what it is?
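For reference, my reading of the helper named in the traceback (a sketch of its behaviour, not the repository's exact code) is that it pads each batch up to a fixed number of rows given by tt_batch_size, so it fails as soon as one chunk of the cloud contains more points than that:

```python
import numpy as np

def expand_dim_to_batch2(array, batch_size):
    """Pad a (n, c) array with zeros up to (batch_size, c).

    Sketch of the helper from the traceback, not the exact repository code.
    """
    sp = array.shape
    out_arr = np.zeros((batch_size, sp[1]), dtype=array.dtype)
    out_arr[0:sp[0], :] = array  # ValueError when sp[0] > batch_size
    return out_arr

expand_dim_to_batch2(np.ones((4500, 9)), 5000)    # fits, works
# expand_dim_to_batch2(np.ones((45210, 9)), 5000) # fails like the traceback above
```

If that reading is right, tt_batch_size apparently has to be at least as large as the largest region extracted from any scan.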
Thanks in advance
I am trying to re-train on the ScanNet dataset but haven't found the specified files in the experiment folder.
Hi, authors,
I ran python tc.py /data/code6/tangent_conv/experiments/semantic3d/dhnrgb/config.json --precompute.
However, it only shows the following output:
==========================================================
(tf1.4_gpu) root@milton-ThinkCentre-M93p:/data/code6/tangent_conv# python tc.py /data/code6/tangent_conv/experiments/semantic3d/dhnrgb/config.json --precompute
:: precompute
(tf1.4_gpu) root@milton-ThinkCentre-M93p:/data/code6/tangent_conv#
==========================================================
Nothing is generated in "data/param/semantic3d/0p1/".
Any suggestions on how to fix this issue?
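One thing that might be worth checking (only a guess at the cause) is whether the precompute step can find any input scans at all; if the loop over scans is empty, the run could exit right after printing ":: precompute" without writing anything. Below is a diagnostic sketch with hypothetical paths, to be adapted to the actual config.json and data layout:

```python
# Diagnostic sketch (hypothetical paths): verify that the scans produced by
# get_data are where the precompute step expects them. The data_root below is a
# guess; take the real paths from experiments/semantic3d/dhnrgb/config.json.
import json
import os

with open("experiments/semantic3d/dhnrgb/config.json") as f:
    print(json.dumps(json.load(f), indent=2))  # inspect the configured data paths

data_root = "data/semantic3d"  # hypothetical location of the get_data output
if not os.path.isdir(data_root):
    print("no input directory found at", data_root)
else:
    for scan in sorted(os.listdir(data_root)):
        pcd = os.path.join(data_root, scan, "scan.pcd")
        print(scan, "->", "ok" if os.path.isfile(pcd) else "missing scan.pcd")
```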
The error message "The RandR headers were not found" comes up when running cmake; it might be helpful to add the corresponding dependency to the requirements.
Could you please provide some documentation on how to train on my own dataset, i.e. how to prepare the training data?
Thank you.
Hello,
My computer has 16GB RAM. When converting sg27_station1_intensity_rgb.txt to scan.pcd, the memory ran out, so I only converted a portion of the data for testing. However, the error "'model' object has no attribute 'tr_batch_array'" appeared during the test. How can I solve this problem?
My computer has 32GB RAM and 59GB swap area.
When converting sg27_station1_intensity_rgb.txt to scan.pcd, the memory ran out.
Can you fix it? Thank you.
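A possible workaround until the conversion handles scans of this size is to stream the text file in chunks instead of parsing it in one go. This is only a sketch, not code from the repository: it assumes the standard Semantic3D column layout (x y z intensity r g b) and the standard Open3D Python API rather than the custom fork the repo builds against.

```python
# Sketch: convert a large Semantic3D .txt scan to scan.pcd by reading it in chunks.
# Assumes "x y z intensity r g b" columns and a recent standard Open3D release;
# the repository's custom Open3D fork may expose a different API.
import numpy as np
import open3d as o3d
import pandas as pd

txt_path = "sg27_station1_intensity_rgb.txt"
points, colors = [], []

# Parse the file chunk by chunk so the text parser never holds the whole scan.
for chunk in pd.read_csv(txt_path, sep=" ", header=None,
                         chunksize=1000000, dtype=np.float32):
    values = chunk.to_numpy()
    points.append(values[:, 0:3])          # x, y, z
    colors.append(values[:, 4:7] / 255.0)  # r, g, b scaled to [0, 1]

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(np.vstack(points).astype(np.float64))
pcd.colors = o3d.utility.Vector3dVector(np.vstack(colors).astype(np.float64))
o3d.io.write_point_cloud("scan.pcd", pcd)
```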
@tatarchm First of all, thank you for your great work. I'm trying to train on a different outdoor dataset, so I followed the semantic3d experiment. However, I got the following error during training:
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/joblib/externals/loky/process_executor.py", line 420, in _process_worker
r = call_item.fn(*call_item.args, **call_item.kwargs)
File "/usr/local/lib/python3.5/dist-packages/joblib/_parallel_backends.py", line 563, in __call__
return self.func(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/joblib/parallel.py", line 261, in __call__
for func, args, kwargs in self.items]
File "/usr/local/lib/python3.5/dist-packages/joblib/parallel.py", line 261, in <listcomp>
for func, args, kwargs in self.items]
File "util/cloud.py", line 232, in get_scan_part_out
num_scales = len(scan.clouds)
NameError: name 'scan' is not defined
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "tc.py", line 26, in <module>
run_net(config, "train")
File "util/model.py", line 399, in run_net
nn.precompute_validation_batches()
File "util/model.py", line 114, in precompute_validation_batches
batch_array = get_batch_array(test_scan, self.par)
File "util/cloud.py", line 325, in get_batch_array
delayed(get_scan_part_out)(par, pts[i]) for i in range(0, arr_size))
File "/usr/local/lib/python3.5/dist-packages/joblib/parallel.py", line 996, in __call__
self.retrieve()
File "/usr/local/lib/python3.5/dist-packages/joblib/parallel.py", line 899, in retrieve
self._output.extend(job.get(timeout=self.timeout))
File "/usr/local/lib/python3.5/dist-packages/joblib/_parallel_backends.py", line 517, in wrap_future_result
return future.result(timeout=timeout)
File "/usr/lib/python3.5/concurrent/futures/_base.py", line 405, in result
return self.__get_result()
File "/usr/lib/python3.5/concurrent/futures/_base.py", line 357, in __get_result
raise self._exception
NameError: name 'scan' is not defined
In the code file cloud.py, I noticed that the function get_scan_part_out is called by get_batch_array. Just to give it a try, I added a parameter, def get_scan_part_out(scan, par, point=None, sample_type='POINT'):, and passed the corresponding argument, delayed(get_scan_part_out)(scan, par, pts[i]) for i in range(0, arr_size).
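In isolation, the pattern I tried looks like the following; this is a self-contained illustration with a made-up Scan class, not the repository's actual data structures:

```python
# Minimal, runnable illustration of passing the scan object through joblib,
# which is what my edit above attempts. The Scan class here is hypothetical.
from joblib import Parallel, delayed

class Scan:
    def __init__(self, clouds):
        self.clouds = clouds

def get_scan_part_out(scan, par, point=None, sample_type='POINT'):
    # With `scan` passed in explicitly, it is defined inside the worker process.
    return len(scan.clouds)

scan = Scan(clouds=[0, 1, 2])
par = {}
pts = [None] * 4
arr_size = len(pts)
out = Parallel(n_jobs=2)(
    delayed(get_scan_part_out)(scan, par, pts[i]) for i in range(0, arr_size))
print(out)  # [3, 3, 3, 3]
```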
However, I then ran into another error:
joblib.externals.loky.process_executor._RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/joblib/externals/loky/process_executor.py", line 420, in _process_worker
r = call_item.fn(*call_item.args, **call_item.kwargs)
File "/usr/local/lib/python3.5/dist-packages/joblib/_parallel_backends.py", line 563, in __call__
return self.func(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/joblib/parallel.py", line 261, in __call__
for func, args, kwargs in self.items]
File "/usr/local/lib/python3.5/dist-packages/joblib/parallel.py", line 261, in <listcomp>
for func, args, kwargs in self.items]
File "util/cloud.py", line 249, in get_scan_part_out
[k_valid, idx_valid, _] = scan.trees[0].search_radius_vector_3d(random_point, radius=par.valid_rad)
RuntimeError: search_radius_vector_3d() error!
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "tc.py", line 26, in <module>
run_net(config, "train")
File "util/model.py", line 399, in run_net
nn.precompute_validation_batches()
File "util/model.py", line 114, in precompute_validation_batches
batch_array = get_batch_array(test_scan, self.par)
File "util/cloud.py", line 329, in get_batch_array
delayed(get_scan_part_out)(scan, par, pts[i]) for i in range(0, arr_size))
File "/usr/local/lib/python3.5/dist-packages/joblib/parallel.py", line 996, in __call__
self.retrieve()
File "/usr/lib/python3.5/contextlib.py", line 77, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.5/dist-packages/joblib/_parallel_backends.py", line 148, in retrieval_context
yield
File "/usr/local/lib/python3.5/dist-packages/joblib/parallel.py", line 996, in __call__
self.retrieve()
File "/usr/local/lib/python3.5/dist-packages/joblib/parallel.py", line 899, in retrieve
self._output.extend(job.get(timeout=self.timeout))
File "/usr/local/lib/python3.5/dist-packages/joblib/_parallel_backends.py", line 517, in wrap_future_result
return future.result(timeout=timeout)
File "/usr/lib/python3.5/concurrent/futures/_base.py", line 405, in result
return self.__get_result()
File "/usr/lib/python3.5/concurrent/futures/_base.py", line 357, in __get_result
raise self._exception
RuntimeError: search_radius_vector_3d() error!
I'm using Ubuntu 16.04, Python 3.5, and TensorFlow 1.3.0. I'm new to Python. Could you please tell me how to resolve this, or what might be causing it? Thank you very much.
In the precompute step, on running python tc.py experiments/stanford/dhnrgb/config.json --precompute, I get:
:: precompute
processing scan: Area_5_hallway_6, rot: 0
Traceback (most recent call last):
File "tc.py", line 21, in <module>
run_precompute(config)
File "util/precompute.py", line 93, in run_precompute
labels_gt=np.reshape(np.asarray(pcd_labels_down.point_cloud.colors)[:, 0], (num_points)),
File "/usr0/home/tkhot/anaconda3/envs/tensorflow/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 257, in reshape
return _wrapfunc(a, 'reshape', newshape, order=order)
File "/usr0/home/tkhot/anaconda3/envs/tensorflow/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 52, in _wrapfunc
return getattr(obj, method)(*args, **kwds)
ValueError: cannot reshape array of size 0 into shape (46214,)
Here, pcd_labels_down.point_cloud.colors is empty, although in the preceding step, before downsampling, pcd_labels.colors is correctly sized.
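For what it's worth, a quick check along these lines shows whether the colors survive the downsampling at all. The sketch uses the standard Open3D API (pcd.voxel_down_sample in recent releases) rather than the custom Open3D build used by the repository, and the file path is a placeholder:

```python
# Sanity check: do the label colors survive voxel downsampling?
# Uses the standard Open3D API, not the repository's custom fork; path is hypothetical.
import numpy as np
import open3d as o3d

pcd_labels = o3d.io.read_point_cloud("Area_5_hallway_6/label.pcd")  # hypothetical path
print("points before:", np.asarray(pcd_labels.points).shape)
print("colors before:", np.asarray(pcd_labels.colors).shape)

pcd_labels_down = pcd_labels.voxel_down_sample(voxel_size=0.05)
print("points after:", np.asarray(pcd_labels_down.points).shape)
print("colors after:", np.asarray(pcd_labels_down.colors).shape)

# If the colors are already empty before downsampling, the labels were never
# written into the color channel, and the reshape in run_precompute fails as above.
```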
Hello,
Once a model has been trained, is there a way to apply it to unclassified data? As far as I can tell, the configuration file does not differentiate the validation set (which needs to have a valid labeling) from the test set (which, in real-life applications, could be an unclassified set). I have tried to include test sets with all labels equal to 0 (unclassified), but in that case precomputing the validation batches does not work.
Did I miss something or is it simply not possible, with the current framework, to classify data sets with unknown labeling?