
openpointclass's Issues

Training point clouds with non-standard ASPRS 1.4 classifications

We are working to train many point clouds previously well classified with Leica Cyclone 3DR. They use custom classifications such as curbs (74), sidewalks (73), and walls (100), among others. The request is: can the code train on more than the 23 ASPRS 1.4 classifications? We are happy to use the standard ones plus our own custom classes. The speed of the code is amazing. One other note: Potree doesn't support extra classifications either :(( but one problem at a time.

Here are some very well trained sets; we are working to share more. My idea is to create different scenarios such as city, road, farm, and town, among others, which could be integrated into the WebODM platform so that one of them can be chosen when processing photogrammetric data.

passenger_terminal_05_r0_10_s20.zip
marina_05_r2.zip
espigon_05_r2.zip

Support for distributed feature extraction / training

  • Modify pctrain by adding a --extract-features <path>.opcfeat.bin parameter. When set, execution should stop at https://github.com/uav4geo/OpenPointClass/blob/main/randomforest.cpp#L30 and https://github.com/uav4geo/OpenPointClass/blob/main/gbm.cpp#L45
  • Serialize the required vectors (for RF that's gt and ft; GBT populates its structures similarly, although not identically). It might also be possible to serialize in a single format regardless of RF or GBT by creating a new function that does only the serialization (like train, but stopping after creating the features). One might want to encode the various scale, radius, treeDepth, etc. parameters into the serialized output, to avoid repeating them manually and to validate other serialized outputs against them. The serialized parameters from all processes need to match.
  • Modify pctrain to check input file extensions for .opcfeat.bin; if all files passed as input are .opcfeat.bin, read the features directly instead of computing them, by adapting the rf::train and gbt::train functions. If the scale, radius, etc. parameters were serialized, they can be read from the files instead of being passed manually.

-DWITH_GBT=ON fails

#9 0.554 [  5%] Creating directories for 'lightgbm'
#9 0.619 [ 10%] Performing download step (git clone) for 'lightgbm'
#9 0.637 Cloning into 'lightgbm'...
#9 3.799 HEAD is now at e4231205 [python-package] use keyword arguments in predict() calls (#5755)
#9 3.824 Submodule 'include/boost/compute' (https://github.com/boostorg/compute) registered for path 'external_libs/compute'
#9 3.825 Submodule 'eigen' (https://gitlab.com/libeigen/eigen.git) registered for path 'external_libs/eigen'
#9 3.826 Submodule 'external_libs/fast_double_parser' (https://github.com/lemire/fast_double_parser.git) registered for path 'external_libs/fast_double_parser'
#9 3.826 Submodule 'external_libs/fmt' (https://github.com/fmtlib/fmt.git) registered for path 'external_libs/fmt'
#9 3.831 Cloning into '/opc/build/lightgbm-prefix/src/lightgbm/external_libs/compute'...
#9 5.099 Cloning into '/opc/build/lightgbm-prefix/src/lightgbm/external_libs/eigen'...
#9 5.522 error: RPC failed; HTTP 403 curl 22 The requested URL returned error: 403
#9 5.522 fatal: error reading section header 'shallow-info'
#9 5.525 fatal: clone of 'https://gitlab.com/libeigen/eigen.git' into submodule path '/opc/build/lightgbm-prefix/src/lightgbm/external_libs/eigen' failed
#9 5.530 Failed to clone 'external_libs/eigen'. Retry scheduled
#9 5.534 Cloning into '/opc/build/lightgbm-prefix/src/lightgbm/external_libs/fast_double_parser'...
#9 6.091 Cloning into '/opc/build/lightgbm-prefix/src/lightgbm/external_libs/fmt'...
#9 8.341 Cloning into '/opc/build/lightgbm-prefix/src/lightgbm/external_libs/eigen'...
#9 8.840 error: RPC failed; HTTP 403 curl 22 The requested URL re

When using the Dockerfile, the commit SHA it pins is no longer fetchable.

Segmentation fault during creation of second pass

Hi,

I get a segfault for a .laz file that I am trying to classify using Docker. It looks like it fails during the creation of the second pass.

./pcclassify /data/21A027_628_58_0000.laz ./classified.ply /data/model.bin

I am uncertain whether it is a bug; it may instead be an older LAS specification version or something similar causing the issue. You have specified that version 1.4 is required for training. Is the same true for pcclassify? If so, do you have any recommendations for conversion? I did try to convert it, without success. The laser data is from the southern part of Sweden.

Please let me know if I need to provide you with any additional information.

Output of lasinfo:
Heading '21A027_628_58_0000.laz' with 11317852 points
lasinfo (240220) report for '21A027_628_58_0000.laz'
reporting all LAS header entries:
file signature: 'LASF'
file source ID: 0
global_encoding: 1
project ID GUID data 1-4: 00000000-0000-0000-0000-000000000000
version major.minor: 1.2
system identifier: ''
generating software: 'LASzip DLL 2.4 r0 (150731)'
file creation day/year: 336/2021
header size: 227
offset to point data: 227
number var. length records: 0
point data format: 1
point data record length: 28
number of point records: 11317852
number of points by return: 7718162 1749178 983754 571157 295601
scale factor x y z: 0.01 0.01 0.01
offset x y z: 535000 6715000 0
min x y z: 580000.00 6280000.00 -390.66
max x y z: 582499.99 6282499.99 341.04
LASzip compression (version 2.4r0 c2 50000): POINT10 2 GPSTIME11 2
reporting minimum and maximum for all LAS point record entries ...
X 4500000 4749999
Y -43500000 -43250001
Z -39066 34104
intensity 382 65535
return_number 1 5
number_of_returns 1 5
edge_of_flight_line 0 0
scan_direction_flag 0 1
classification 1 18
scan_angle_rank -18 18
user_data 0 0
point_source_ID 13603 13605
gps_time 300525420.164837 300527088.064599
number of first returns: 7718162
number of intermediate returns: 1849949
number of last returns: 7719279
number of single returns: 5969538
overview over number of returns of given pulse: 5969538 1531136 1237623 1102336 1477219 0 0
histogram of classification of points:
4987784 unclassified (1)
6310052 ground (2)
8978 noise (7)
2528 water (9)
8420 bridge deck (17)
90 Reserved for ASPRS Definition (18)

pctrain problems with large input .laz files

Hi, when using pctrain with a 1.65 GB .laz file as training input, it prints Starting resolution, jumps to Init scale ..., and after a short time outputs Killed (see attached screen dump). Are there any limitations on file size? Are there any parameters we can adjust to use large input files with pctrain?

We are running it as a Docker image deployed on a VM server with Red Hat Enterprise Linux 8.9 (Ootpa), 64 GB RAM, and an Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz (16 cores).

[screenshot: pctrain_1]

Failed to setup (Access is denied. )

I am following the instructions to install. When I use the command

"cmake [-DWITH_GBT=ON] -DCMAKE_TOOLCHAIN_FILE=<C:\Researchteam\vcpkg\vcpkg>/scripts/buildsystems/vcpkg.cmake .."

it shows "Access is denied." I am already opening the Anaconda prompt as administrator. Could someone please help? Thank you so much.

Why is the result so strange?

Output is:

map.ply ./map.ply ./model.bin 
Model: Random Forest
Loading ./model.bin
Reading 1496024 points
Starting resolution: 0.01
Init scale 0 at 0.01 ...
Init scale 1 at 0.01 ...
Init scale 2 at 0.02 ...
Init scale 3 at 0.04 ...
Init scale 4 at 0.08 ...
Init scale 5 at 0.16 ...
Building scale 1 (3 points) ...
Building scale 2 (3 points) ...
Building scale 3 (3 points) ...
Building scale 4 (3 points) ...
Building scale 5 (3 points) ...
Features: 105
Classifying...
Local smoothing...
Wrote ./map.ply

Segmentation fault

Input files: https://hub.dronedb.app/r/hedo88/err-opc

Command: ./pcclassify ../point_cloud.ply out.ply ../model.bin

Loading ../model.bin
Reading 1060056 points
Starting resolution: 0.2
Init scale 0 at 0.2 ...
Init scale 1 at 0.2 ...
Init scale 3 at 0.8 ...
Init scale 5 at 3.2 ...
Init scale 4 at 1.6 ...
Init scale 2 at 0.4 ...
Building scale 1 (160105 points) ...
Building scale 2 (37145 points) ...
Building scale 3 (10084 points) ...
Building scale 4 (2694 points) ...
Building scale 5 (731 points) ...
Features: 105
Classifying...
Local smoothing...
Segmentation fault

with valgrind:

Classifying...
Local smoothing...
==656== Thread 3:
==656== Invalid write of size 1
==656==    at 0x17F432: rf::classify(PointSet&, liblearning::RandomForest::RandomForest<liblearning::RandomForest::NodeGini<liblearning::RandomForest::AxisAlignedSplitter> >*, std::vector<Feature*, std::allocator<Feature*> > const&, std::vector<Label, std::allocator<Label> > const&, rf::Regularization, bool, bool) [clone ._omp_fn.3] (in /build/pcclassify)
==656==    by 0x487CB9D: ??? (in /usr/lib/x86_64-linux-gnu/libgomp.so.1.0.0)
==656==    by 0x5396B42: start_thread (pthread_create.c:442)
==656==    by 0x5427BB3: clone (clone.S:100)
==656==  Address 0x15912 is not stack'd, malloc'd or (recently) free'd
==656== 
==656== 
==656== Process terminating with default action of signal 11 (SIGSEGV): dumping core
==656==  Access not within mapped region at address 0x15912
==656==    at 0x17F432: rf::classify(PointSet&, liblearning::RandomForest::RandomForest<liblearning::RandomForest::NodeGini<liblearning::RandomForest::AxisAlignedSplitter> >*, std::vector<Feature*, std::allocator<Feature*> > const&, std::vector<Label, std::allocator<Label> > const&, rf::Regularization, bool, bool) [clone ._omp_fn.3] (in /build/pcclassify)
==656==    by 0x487CB9D: ??? (in /usr/lib/x86_64-linux-gnu/libgomp.so.1.0.0)
==656==    by 0x5396B42: start_thread (pthread_create.c:442)
==656==    by 0x5427BB3: clone (clone.S:100)
==656==  If you believe this happened as a result of a stack
==656==  overflow in your program's main thread (unlikely but
==656==  possible), you can try to increase the size of the
==656==  main thread stack using the --main-stacksize= flag.
==656==  The main thread stack size used in this run was 8388608.
==656== 
==656== HEAP SUMMARY:
==656==     in use at exit: 205,653,345 bytes in 1,223,477 blocks
==656==   total heap usage: 16,633,178 allocs, 15,409,701 frees, 1,342,197,731 bytes allocated
==656== 
==656== Searching for pointers to 1,223,477 not-freed blocks
==656== Checked 399,635,328 bytes
==656== 
==656== LEAK SUMMARY:
==656==    definitely lost: 415 bytes in 3 blocks
==656==    indirectly lost: 0 bytes in 0 blocks
==656==      possibly lost: 11,408 bytes in 23 blocks
==656==    still reachable: 205,641,522 bytes in 1,223,451 blocks
==656==         suppressed: 0 bytes in 0 blocks
==656== Rerun with --leak-check=full to see details of leaked memory
==656== 
==656== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
==656== 
==656== 1 errors in context 1 of 1:
==656== Invalid write of size 1
==656==    at 0x17F432: rf::classify(PointSet&, liblearning::RandomForest::RandomForest<liblearning::RandomForest::NodeGini<liblearning::RandomForest::AxisAlignedSplitter> >*, std::vector<Feature*, std::allocator<Feature*> > const&, std::vector<Label, std::allocator<Label> > const&, rf::Regularization, bool, bool) [clone ._omp_fn.3] (in /build/pcclassify)
==656==    by 0x487CB9D: ??? (in /usr/lib/x86_64-linux-gnu/libgomp.so.1.0.0)
==656==    by 0x5396B42: start_thread (pthread_create.c:442)
==656==    by 0x5427BB3: clone (clone.S:100)
==656==  Address 0x15912 is not stack'd, malloc'd or (recently) free'd
==656== 
==656== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
Segmentation fault

It crashes both on my Linux system (KDE neon 5.27) and in a Linux Docker image (Ubuntu 22.04).
