
massive-potreeconverter's Introduction

Massive-PotreeConverter


The PotreeConverter builds a Potree octree from LAS/LAZ files. When you have many or very large LAZ files, running a single PotreeConverter job takes a very long time. Massive-PotreeConverter reduces the wall-clock time of creating the octree with a divide-and-conquer approach: it creates octrees in parallel and merges them into a single octree.

This repository extends PotreeConverter with a set of Python scripts that make it possible to convert massive point clouds to the Potree-OctTree format.

Massive-PotreeConverter consists of four steps, all executable through command-line tools. The steps to convert a massive point cloud into the Potree-OctTree are:

  • Determine the bounding cube of the massive point cloud.
  • Split the point cloud into tiles following a special tiling schema.
  • For all the tiles run PotreeConverter to create Potree-OctTrees. We use pycoeman (https://github.com/NLeSC/pycoeman).
  • Merge the multiple Potree-OctTrees into a single massive Potree-OctTree.

All these steps are summarized in the workflow diagram included in the repository.

For a detailed description of each step, see Taming the beast: Free and open-source massive point cloud web visualization.
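The four steps correspond to the command-line tools described in the Method section below. As a rough sketch, the whole pipeline can be expressed as the ordered list of commands it runs; the paths, extents and counts here are placeholders, not values from a real run:

```python
# Sketch of the four Massive-PotreeConverter steps as command lines.
# All paths, extents, and counts below are placeholder values.

def build_pipeline(input_dir, tiles_dir, temp_dir, extent_xy, extent_xyz,
                   n_tiles, n_procs, levels, spacing):
    """Return the shell commands for the four conversion steps, in order."""
    return [
        # 1. Determine the bounding cube (and suggested parameters).
        f"mpc-info -i {input_dir} -c {n_procs}",
        # 2. Split the point cloud into tiles.
        f"mpc-tiling -i {input_dir} -o {tiles_dir} -t {temp_dir} "
        f'-e "{extent_xy}" -n {n_tiles} -p {n_procs}',
        # 3. Run PotreeConverter on every tile through pycoeman.
        f"mpc-create-config-pycoeman -i {tiles_dir} -o ParallelPotreeConverter.xml "
        f'-f LAZ -l {levels} -s {spacing} -e "{extent_xyz}"',
        f"coeman-par-local -d . -c ParallelPotreeConverter.xml "
        f"-e ParallelExecution -n {n_procs}",
        # 4. Merge the per-tile Potree-OctTrees into one.
        "mpc-merge-all -i Potree-OctTrees -o Potree-OctTrees-merged -m",
    ]

commands = build_pipeline("input", "tiles", "temp",
                          "0 0 1000 1000", "0 0 0 1000 1000 1000",
                          16, 8, 9, 10)
```

The concrete flags and suggested values for each command are explained step by step in the Method section.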

In addition, this repository also contains tools to:

  • Sort and index a bunch of LAS/LAZ files in parallel.
  • Dump the extents of a bunch of LAS/LAZ files into a PostGIS database. This is useful as a pre-filter step for LAStools when dealing with a large number of files.
  • Dump the extents of the nodes of a Potree-OctTree into a PostGIS database. Each node of the tree is stored in a separate file.

These additional tools can be used to make rectangular selections on the raw data or on the different levels of the Potree-OctTree, offering a multi-level selection tool. This is done, for example, in https://github.com/NLeSC/ahn-pointcloud-viewer-ws/blob/master/src/main/python/create_user_file.py, where a LAS/LAZ file is created from the selected data.

Massive-PotreeConverter has been used for the Dutch AHN2 point cloud, with 640 billion points.

Requirements

The basic components of Massive-PotreeConverter require the following command-line tools to be available: pdal, PotreeConverter, coeman-par-local (or coeman-par-sge or coeman-par-ssh), lasinfo and lasmerge.

For now, Massive-PotreeConverter works only on Linux systems and requires Python 3.5.

There is a Dockerfile available and an image built on Docker Hub. See the end of this page for information on how to use it.

Installation

Clone this repository and install it with pip (using a virtualenv is recommended):

git clone https://github.com/NLeSC/Massive-PotreeConverter
cd Massive-PotreeConverter
pip install .

or install directly with:

pip install git+https://github.com/NLeSC/Massive-PotreeConverter

Installation for additional steps

To use the additional components, some extra libraries/packages have to be installed:

  • To insert the extents of LAS/LAZ files or Potree-OctTrees into a PostGIS database, the additional requirements are:

    • PostgreSQL, PostGIS
    • Python modules: psycopg2
  • To sort/index LAS/LAZ files in parallel (allowing faster selection), the additional requirements are:

    • LAStools with a license. For the licensed part of LAStools to run in Linux environments, wine (https://www.winehq.org/) needs to be installed

Installation tips

For the installation of PotreeConverter, see https://github.com/potree/PotreeConverter. You will need to add the built executable to the PATH manually.

See the PDAL web page for installation instructions. You will also need to install GDAL, GEOS, GeoTIFF and LASzip. Note that for Massive-PotreeConverter there is no need to build PDAL with PostgreSQL support.

Method

More detailed steps:

  • mpc-info: gets the bounding cube, the number of points and the average density of the massive point cloud. The first argument is the input folder with all the input data; the second is the number of processes to use. The tool also computes suggested values for the number of tiles, the Cubic Axis Aligned Bounding Box (CAABB), the spacing, the number of levels, and a suggested potreeconverter command. These values must be used in the next steps! Assuming [laz input directory] is a folder with a bunch of LAS or LAZ files, run:
mpc-info -i [laz input directory] -c [number processes]
  • Use mpc-tiling to create tiles, with the number of tiles and the CAABB (X and Y coordinates only) previously computed by mpc-info. Note that the number of tiles must be a power of 4; this way, thanks to the cubic bounding box, the extents of the tiles will match the extents of the octree nodes at a certain level (so the later merging will be faster).
mpc-tiling -i input -o tiles -t temp -e "[minX] [minY] [maxX] [maxY]" -n [number tiles] -p [number processes]
  • Run an individual PotreeConverter for each tile, ALWAYS using the same previously computed CAABB, spacing and number of levels. Use mpc-create-config-pycoeman to create an XML file with the list of PotreeConverter commands to execute; the format used is the parallel commands XML configuration file format of pycoeman. Then run any of the pycoeman tools to execute the commands: there are options to run them locally, on an SGE cluster, or on a set of ssh-reachable hosts. In all cases it is not recommended to use more than 8 cores per machine, since the processing is quite IO-bound. The example below runs pycoeman locally, in which case . must be the parent folder of tiles. For the other pycoeman parallel execution modes, visit https://github.com/NLeSC/pycoeman.
mpc-create-config-pycoeman -i tiles -o ParallelPotreeConverter.xml -f [format LAS or LAZ] -l [levels] -s [spacing] -e "[minX] [minY] [minZ] [maxX] [maxY] [maxZ]"
coeman-par-local -d . -c ParallelPotreeConverter.xml -e ParallelExecution -n [number processes]
  • After the various Potree-OctTrees are created (one per tile), we need to merge them into a single one. For this you use the mpc-merge tool, which joins two Potree-OctTrees into one; run successive iterations until only one Potree-OctTree remains. The script mpc-merge-all can be used to merge all the Potree-OctTrees into one, but it has to be used carefully. The final Potree-OctTree will be the folder in Potree-OctTrees-merged with the highest merging value.
mkdir Potree-OctTrees
mv ParallelExecution/*/* Potree-OctTrees
mpc-merge-all -i Potree-OctTrees -o Potree-OctTrees-merged -m
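mpc-merge-all drives mpc-merge in a loop: the running result is merged with the next tile's octree, producing merged_1, merged_2, and so on, which is why the final tree is the folder with the highest merging value. A minimal sketch of that loop over folder names (the real tool moves octree data on disk; this only models the naming):

```python
def merge_all(octrees):
    """Fold a list of Potree-OctTree folder names into one, pairwise,
    mimicking mpc-merge-all's merged_1, merged_2, ... outputs."""
    if not octrees:
        return None
    merged = octrees[0]
    for i, tree in enumerate(octrees[1:], start=1):
        # Each iteration corresponds to one mpc-merge run:
        # inputs (merged, tree) -> output merged_<i>
        merged = f"merged_{i}"
    return merged

# Three per-tile octrees need two mpc-merge runs; the final tree is merged_2.
result = merge_all(["tile_0_0_potree", "tile_0_1_potree", "tile_1_0_potree"])
```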

See an example in AHN2. Several companion repositories were also used for that website.

Optional steps

  • Index and sort the raw data (we consider raw data the data before the 2D tiling). Since we are running on a Linux system we need wine to run lassort.exe; hence, before running mpc-sort-index, set the environment variable LASSORT:
export LASSORT="wine <path_to_lastools>/bin/lassort.exe"
  • Fill a DB with the extents of the files in the raw data. Before running mpc-db-extents, first create a user and a DB, and add the postgis extension:
#login into postgres
sudo -u postgres psql

> create user <your_linux_user_name> with password '<password>';
> create database pc_extents owner <your_linux_user_name>;
> \connect pc_extents
> create extension postgis;
> \q
  • Fill a DB with the extents of the files in the Potree octree by running the mpc-db-extents-potree tool.
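Once the extents are in the database, a rectangular selection reduces to a PostGIS bounding-box query. A minimal sketch, assuming a hypothetical extents table with filepath and geom columns (the real schema is created by mpc-db-extents) and, purely for illustration, the Dutch RD SRID 28992:

```python
# Sketch of a rectangular selection against the extents database.
# The table/column names ("extents", "filepath", "geom") and the SRID
# are assumptions; mpc-db-extents defines the real schema.

def selection_query(minx, miny, maxx, maxy, srid=28992):
    """Build a query for files whose extent intersects a rectangle."""
    return (
        "SELECT filepath FROM extents "
        f"WHERE geom && ST_MakeEnvelope({minx}, {miny}, {maxx}, {maxy}, {srid})"
    )

# Against a live pc_extents database the query would run via psycopg2:
# import psycopg2
# conn = psycopg2.connect(dbname="pc_extents")
# with conn.cursor() as cur:
#     cur.execute(selection_query(1555, 1749, 21659, 21853))
#     files = [row[0] for row in cur.fetchall()]
```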

Docker

We have created a Dockerfile to use the basic tools of Massive-PotreeConverter. It is meant to help you when running mpc-info, mpc-tiling, mpc-create-config-pycoeman, coeman-par-local and mpc-merge (or mpc-merge-all).

New to Docker? See the Docker documentation.

There is also an image built on Docker Hub that can be pulled and used directly!

In addition to installing all the required software, the image also creates three volumes (/data1, /data2, /data3) which are meant to be mounted from different devices when executing docker run. Ideally, always run in such a way that the input data is on one device and the output on another (there are three volumes because of the temporary data folder required by mpc-tiling).

An example of using Massive-PotreeConverter through docker:

  • Build the Massive-PotreeConverter docker image from the Dockerfile in this GitHub repository, or pull the image from Docker Hub. The following instructions assume the first option; if you pulled the image from Docker Hub, replace the image name accordingly.
cd /path/to/Massive-PotreeConverter
docker build -t oscar/mpc:v1 .
# OR
docker pull oscarmartinezrubi/massive-potreeconverter
  • Assuming that our LAZ/LAS files are in /media/data/big/sample, run mpc-info to know the point cloud details:
docker run -v /media/data/big/sample:/data1 oscar/mpc:v1 mpc-info -i /data1 -c 4
  • Run mpc-tiling to generate tiles (use the number of tiles and the X,Y values of the CAABB suggested in the previous step). Note that we specify three different local folders which will be available inside the docker container: one for the input data, one for the output and one for the temporary data. Also note that a local file /media/data/big/sample/myfile is accessed as /data1/myfile in the container.
docker run -v /media/data/big/sample:/data1 -v /media/data/big/sample_tiles:/data2 -v /media/data/big/sample_tiles_temp:/data3 oscar/mpc:v1 mpc-tiling -i /data1/ -o /data2/ -t /data3/ -e "1555 1749 21659 21853" -n 4 -p 4
  • Run mpc-create-config-pycoeman to create the XML configuration file for the different PotreeConverter runs, then execute them in parallel on the local machine with coeman-par-local. Note that we use the values suggested by mpc-info for the PotreeConverters. pycoeman can also run the various PotreeConverters on an SGE cluster or on a set of ssh-reachable machines, but the docker image is only meant for local executions; to use SGE clusters or ssh-reachable machines you need to install Massive-PotreeConverter and its dependencies on all the involved machines.
mkdir /media/data/big/sample_distpotree
docker run -v /media/data/big/sample_distpotree:/data1 -v /media/data/big/sample_tiles:/data2 oscar/mpc:v1 mpc-create-config-pycoeman -i /data2 -o /data1/ParallelPotreeConverter.xml -f LAZ -l 9 -s 83 -e "1555 1749 -94 21659 21853 20010"
docker run -v /media/data/big/sample_distpotree:/data1 -v /media/data/big/sample_tiles:/data2 oscar/mpc:v1 coeman-par-local -d / -c /data1/ParallelPotreeConverter.xml -e /data1/execution -n 4
  • Run the script to merge all the Potree-OctTrees into one. Note that in this case we only mount and use one volume; for this specific script it is better to have the same device for both input and output.
sudo mv sample_distpotree/execution/*/* sample_distpotree/poctrees/
docker run -v /media/data/big/sample_distpotree:/data1 oscar/mpc:v1 mpc-merge-all -i /data1/poctrees -o /data1/poctrees_merge -m
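The CAABB and tile-count values fed to mpc-tiling and mpc-create-config-pycoeman above come from mpc-info: the bounding box is expanded from its minimum corner into a cube whose side is the longest axis, and tile counts are kept at powers of 4 (a 2^k x 2^k grid in X,Y) so tile extents line up with octree nodes. A sketch of both calculations (not the actual pympc code):

```python
import math

def cubic_aabb(minx, miny, minz, maxx, maxy, maxz):
    """Expand an AABB from its min corner into a cube (the CAABB)."""
    side = max(maxx - minx, maxy - miny, maxz - minz)
    return (minx, miny, minz, minx + side, miny + side, minz + side)

def is_valid_tile_count(n):
    """Tile counts must be a power of 4 (a 2^k x 2^k grid in X,Y)."""
    if n <= 0:
        return False
    k = math.log(n, 4)
    return abs(k - round(k)) < 1e-9

# Example: an AABB of 711 x 205 x 25 units becomes a 711-unit cube.
caabb = cubic_aabb(460, 497, 24, 1171, 702, 49)
# caabb == (460, 497, 24, 1171, 1208, 735)
```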

massive-potreeconverter's People

Contributors

gitter-badger, goord, oscarmartinezrubi, romulogoncalves, sverhoeven, yifatdzigan


massive-potreeconverter's Issues

mpc-merge-all Problems

Hi!! It's me again 🤓

I'm trying to convert some different point clouds and, while some of them work perfectly, other point clouds are giving me new problems:
when I run mpc-merge-all with all the arguments, after following all the previous steps, I sometimes get "Execution failed!", but not always.
When it fails, what I see on the command line is that the process simply stops with no error message. Searching through the generated files, everything is fine except one file: the .log file associated with the last tile it tried to convert.

Input Potree Octtree A: MyPATH/Potree-OctTrees-merged/merged_7
Input Potree Octtree B: MyPATH/Potree-OctTrees/tile_7_5_potree
Output Potree Octtree: MyPATH/Potree-OctTrees-merged/merged_8
mkdir -p MyPATH/Potree-OctTrees-merged/merged_8
lasinfo MyPATH/Potree-OctTrees/tile_7_5_potree/data/r/rlas -nc -nv -nco
mkdir -p MyPATH/Potree-OctTrees-merged/merged_8/data/r
Execution failed!
Traceback (most recent call last):
File "/home/raquel/.local/lib/python3.6/site-packages/pympc/merge_potree_all.py", line 70, in main
run(args.input, args.output, args.move)
File "/home/raquel/.local/lib/python3.6/site-packages/pympc/merge_potree_all.py", line 44, in run
merge_potree.run(octTreeAInputFolder, octTreeBInputFolder, octTreeOutputFolder, moveFiles)
File "/home/raquel/.local/lib/python3.6/site-packages/pympc/merge_potree.py", line 157, in run
joinNode('r', dataA + '/r', dataB + '/r', dataO + '/r', hierarchyStepSize, extension, cmcommand)
File "/home/raquel/.local/lib/python3.6/site-packages/pympc/merge_potree.py", line 44, in joinNode
hasNodeB = (i < numChildrenB) and (hrcB[level][i] > 0)
TypeError: '>' not supported between instances of 'NoneType' and 'int'

Any idea why, or how to solve this??

Thank you very much!!!!!!! 😄

Problem Ignoring tile

Hi!

First of all, thank you for your time and effort.
I have a problem that is driving me crazy. When following the steps for transforming a point cloud on Ubuntu, using the command line rather than docker, I initially get the folder full of tiles. After all the process, the last command I use is:
mpc-merge-all -i ~/PathTo/Potree-OctTrees -o ~/PathTo/Potree-OctTrees-merged -m

What I see in the command line after running that command is:
Input folder with Potree-OctTrees: /PathTo/Potree-OctTrees
Output Potree OctTree: /PathTo/Potree-OctTrees-merged
Move: True
Starting merge_potree_all.py...
Ignoring tile_0_2_potree_converter.mon.disk
Ignoring tile_1_5_potree_converter.mon.disk
Ignoring tile_0_1
Ignoring tile_1_2_potree_converter.log
Ignoring tile_0_2_potree_converter.mon
Ignoring tile_1_2_potree_converter.mon.disk
Ignoring tile_3_1
...

And I've noticed that every log file in /PathTo/Potree-OctTrees contains the text

Segmentation fault (core dumped)

I get no data transformed.
Could anybody help me complete my first Massive-PotreeConverter conversion?

Thanks!!!

Can't install

pip3 install .
Processing ~/Massive-PotreeConverter
Complete output from command python setup.py egg_info:
Installation could not be done: PDAL could not be found.

pip3 install pdal
Collecting pdal
Using cached PDAL-1.6.0.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "", line 1, in
File "/tmp/pip-build-99kx84k8/pdal/setup.py", line 126, in
raise Exception(message % (built_major, running_major))
Exception: Version mismatch. PDAL Python support was compiled against version 2.x but setup is running version is 3.x.

support potree2

Hi,
thanks for sharing this tool.
When switching to Potree2 format, the file "cloud.js" is not written by potreeconverter anymore.
It is replaced by a "metadata.json" file.

I did not test it, but given the fact that access to "cloud.js" is hard-coded, I presume potree2 format is not supported

Docker coeman-par-local

Host: Windows 10 Hyper-V.
Using docker image mentioned in readme docker hub image.

When I try to execute coeman-par-local -d . -c /data1/ParallelPotreeConverter.xml -e /data1/execution -n 4, it fails to run PotreeConverter with the log message:

ERROR: filesystem error: cannot canonicalize: No such file or directory [/data1/execution/tile_0_0_potree_converter/PotreeConverter] [/data1/execution/tile_0_0_potree_converter]

This message is the same in every execution. When I tried to run PotreeConverter directly I got a similar result, which is why I think the bug is in PotreeConverter. But if I download PotreeConverter and use it on Windows 10 it works, so the bug is probably exclusive to Linux, or even only to docker.

Using PotreeConverter on Windows 10 I converted the tiles in a shared folder, but the merge in the docker image then failed.

TypeError: glob() got an unexpected keyword argument 'recursive'

Hi,

I've pulled the latest Docker image from nlesc/massive-potreeconverter, and I'm getting the following error running mpc-info when I supply a directory path, such as:

docker run --mount source=data-ingest,target=/data1 --mount source=data-processed,target=/data2 --mount source=data-temp,target=/data3 nlesc/massive-potreeconverter mpc-info -i /data1 -c 4
('Input folder: ', '/data1/')
('Number of processes: ', 4)
('Target tile number of points: ', 5000000000)
('Target OctTree node number of points: ', 60000)
Starting get_info.pyc...
Execution failed!
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/pympc/get_info.py", line 84, in main
    run(args.input, args.proc, args.avgtile, args.avgnode)
  File "/usr/local/lib/python2.7/dist-packages/pympc/get_info.py", line 9, in run
    (_, tcount, tminx, tminy, tminz, tmaxx, tmaxy, tmaxz, _, _, _) = utils.getPCFolderDetails(inputFolder, numberProcs)
  File "/usr/local/lib/python2.7/dist-packages/pympc/utils.py", line 112, in getPCFolderDetails
    inputFiles = getFiles(absPath, recursive=True)
  File "/usr/local/lib/python2.7/dist-packages/pympc/utils.py", line 67, in getFiles
    files.extend(glob.glob(os.path.join(inputElement,'*.' + ext),recursive = recursive))
TypeError: glob() got an unexpected keyword argument 'recursive'

If I specify a las file directly, it works:

docker run --mount source=data-ingest,target=/data1 --mount source=data-processed,target=/data2 --mount source=data-temp,target=/data3 nlesc/massive-potreeconverter mpc-info -i /data1/mymodel/apointcloud.las -c 4
('Input folder: ', '/data1/mymodel/apointcloud.las')
('Number of processes: ', 4)
('Target tile number of points: ', 5000000000)
('Target OctTree node number of points: ', 60000)
Starting get_info.pyc...
Completed 0.00%lasinfo /data1/mymodel/apointcloud.las -nc -nv -nco
Completed 100.00%!()
('AABB: ', 327699, 3640884, 32, 329172, 3641628, 140)
('#Points:', 3939233)
('Average density [pts / m2]:', 3.594479301257765)
Suggested number of tiles: 1. For this number of points Massive-PotreeConverter is not really required!
('Suggested Potree-OctTree CAABB: ', 327699, 3640884, 32, 329172, 3642357, 1505)
('Suggested Potree-OctTree spacing: ', 7.0)
('Suggested Potree-OctTree number of levels: ', 5)
Suggested potreeconverter command:
$(which PotreeConverter) -o <potree output directory> -l 5 -s 7 --CAABB "327699 3640884 32 329172 3642357 1505" --output-format LAZ -i <laz input directory>
Finished in 0.02 seconds

I saw that there was a recent commit to update the file search code, which makes use of glob. But I don't have much python experience to quickly troubleshoot. Anyone else experiencing this issue?

Cheers,
Daniel

Encountering error 'unpack requires a string argument of length 1' during mpc-merge

When we give custom spacing and levels during the Potree conversion (for example spacing 0.5 and levels 5) for different datasets and try to merge the results using mpc-merge, I am encountering the error below.

Error Stack Trace below:

failed!
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/pympc/merge_potree_all.py", line 70, in main
run(args.input, args.output, args.move)
File "/usr/local/lib/python2.7/dist-packages/pympc/merge_potree_all.py", line 44, in run
merge_potree.run(octTreeAInputFolder, octTreeBInputFolder, octTreeOutputFolder, moveFiles)
File "/usr/local/lib/python2.7/dist-packages/pympc/merge_potree.py", line 152, in run
joinNode('r', dataA + '/r', dataB + '/r', dataO + '/r', hierarchyStepSize, extension, cmcommand)
File "/usr/local/lib/python2.7/dist-packages/pympc/merge_potree.py", line 49, in joinNode
joinNode(node + childNode, nodeAbsPathA + '/' + childNode, nodeAbsPathB + '/' + childNode, nodeAbsPathO + '/' + childNode, hierarchyStepSize, extension, cmcommand)
File "/usr/local/lib/python2.7/dist-packages/pympc/merge_potree.py", line 18, in joinNode
hrcA = utils.readHRC(nodeAbsPathA + '/' + hrcFile, hierarchyStepSize)
File "/usr/local/lib/python2.7/dist-packages/pympc/utils.py", line 209, in readHRC
data[0].append(getNode(open(hrcFileAbsPath, "rb"), 1, data, True, hierarchyStepSize))
File "/usr/local/lib/python2.7/dist-packages/pympc/utils.py", line 184, in getNode
b = struct.unpack('B', binaryFile.read(1))[0]
error: unpack requires a string argument of length 1

This happens because, when merging two hrc files, the if condition at the line here never passes and an empty hrc file of 0 bytes is written. In the further merge steps, the error above occurs while merging the empty hrc file with another hrc file. Please let me know if you want a sample converted dataset to test the merge error above; I would be happy to provide one.

why count is None?

python ~/Massive-PotreeConverter/pympc/get_info.py -i ~/mpc -c 2
Input folder: ~/mpc
Number of processes: 2
Target tile number of points: 5000000000
Target OctTree node number of points: 60000
Starting get_info.py...
lasinfo ~/mpc/points_999_1001_100.las -nc -nv -nco
lasinfo ~/mpc/points_999_1014_140.las -nc -nv -nco
lasinfo ~/mpc/points_999_999_620.las -nc -nv -nco
Execution failed!
Traceback (most recent call last):
File "~/Massive-PotreeConverter/pympc/get_info.py", line 84, in main
run(args.input, args.proc, args.avgtile, args.avgnode)
File "~/Massive-PotreeConverter/pympc/get_info.py", line 9, in run
(_, tcount, tminx, tminy, tminz, tmaxx, tmaxy, tmaxz, _, _, _) = utils.getPCFolderDetails(inputFolder, numberProcs)
File "/root/py3/lib/python3.5/site-packages/pympc/utils.py", line 139, in getPCFolderDetails
tcount += count
TypeError: unsupported operand type(s) for +=: 'int' and 'NoneType'

lasinfo ~/mpc/points_999_1002_120.las -nc -nv -nco

TypeError: a float is required

We're trying to run Massive-PotreeConverter on a Windows 10 VM with Docker, but when we run the first command (docker run -v Y:\mpctest\raw:/data1 oscarmartinezrubi/massive-potreeconverter mpc-info -i /data1 -c 4) we get an execution failure with TypeError: a float is required (as per the attached screenshot).


Anyone who could help will be greatly appreciated. :)

Conversion does not have 100%

This issue is more for PotreeConverter. Current octrees do not contain 100% of the points, or they require many levels to actually contain all 100%.

lassort command on linux

The sort_index.py script calls the windows binary lassort.exe. To enable this on linux, make the user define the environment variable LASSORT (wine ) and use this in the python script.

The custom PotreeConverter won't build

It gives following errors on ubuntu linux 14.04 64 bit:
/usr/local/src/BIG_PC/PotreeConverter/PotreeConverter/src/PTXPointReader.cpp: In constructor ‘PTXPointReader::PTXPointReader(std::string)’:
/usr/local/src/BIG_PC/PotreeConverter/PotreeConverter/src/PTXPointReader.cpp:83:18: error: use of deleted function ‘std::basic_fstream& std::basic_fstream::operator=(const std::basic_fstream&)’
this->stream = fstream(*(this->currentFile), ios::in);
^
In file included from /usr/local/src/BIG_PC/PotreeConverter/PotreeConverter/src/PTXPointReader.cpp:1:0:
/usr/include/c++/4.8/fstream:776:11: note: ‘std::basic_fstream& std::basic_fstream::operator=(const std::basic_fstream&)’ is implicitly deleted because the default definition would be ill-formed:
class basic_fstream : public basic_iostream<_CharT, _Traits>

.....

In file included from /usr/include/c++/4.8/ios:43:0,
from /usr/include/c++/4.8/istream:38,
from /usr/include/c++/4.8/fstream:38,
from /usr/local/src/BIG_PC/PotreeConverter/PotreeConverter/src/PTXPointReader.cpp:1:
/usr/include/c++/4.8/streambuf:802:7: error: ‘std::basic_streambuf<_CharT, _Traits>::basic_streambuf(const std::basic_streambuf<_CharT, _Traits>&) [with _CharT = char; _Traits = std::char_traits]’ is private
basic_streambuf(const basic_streambuf& __sb)
^
In file included from /usr/local/src/BIG_PC/PotreeConverter/PotreeConverter/src/PTXPointReader.cpp:1:0:
/usr/include/c++/4.8/fstream:72:11: error: within this context
class basic_filebuf : public basic_streambuf<CharT, Traits>
^
/usr/local/src/BIG_PC/PotreeConverter/PotreeConverter/src/PTXPointReader.cpp: In member function ‘bool PTXPointReader::doReadNextPoint()’:
/usr/local/src/BIG_PC/PotreeConverter/PotreeConverter/src/PTXPointReader.cpp:216:26: error: use of deleted function ‘std::basic_fstream& std::basic_fstream::operator=(const std::basic_fstream&)’
this->stream = fstream(*(this->currentFile), ios::in);
^
make[2]: *** [PotreeConverter/CMakeFiles/PotreeConverter.dir/src/PTXPointReader.cpp.o] Error 1
make[1]: *** [PotreeConverter/CMakeFiles/PotreeConverter.dir/all] Error 2
make: *** [all] Error 2

thanks,

Alex.

Impossible to merge all the Potree-OctTrees

Hi,
I am using the Docker image since I am finding it difficult to install all the required programs, as I am more used to Windows.

So I follow all the steps as explained; I have a small .las file just to test the solution.
I run mpc-info -i /data1 -c 4

root@ca407fc28260:/# mpc-info -i /data1 -c 4
('Input folder: ', '/data1')
('Number of processes: ', 4)
('Target tile number of points: ', 5000000000)
('Target OctTree node number of points: ', 60000)
Starting get_info.pyc...
lasinfo /data1/station.las -nc -nv -nco
Completed 100.00%!()
('AABB: ', 460, 497, 24, 1171, 702, 49)
('#Points:', 4067815)
('Average density [pts / m2]:', 27.90857946554149)
Suggested number of tiles: 1. For this number of points Massive-PotreeConverter is not really required!
('Suggested Potree-OctTree CAABB: ', 460, 497, 24, 1171, 1208, 735)
('Suggested Potree-OctTree spacing: ', 3.0)
('Suggested Potree-OctTree number of levels: ', 5)
Suggested potreeconverter command:
$(which PotreeConverter) -o <potree output directory> -l 5 -s 3 --CAABB "460 497 24 1171 1208 735" --output-format LAZ -i <laz input directory>
Finished in 0.01 seconds

After that I run the command to generate tiles:
mpc-tiling -i /data1/ -o /data2/ -t /data3/ -e "460 497 1171 1208" -n 4 -p 4

('Input folder: ', '/data1/')
('Output folder: ', '/data2/')
('Temporal folder: ', '/data3/')
('Extent: ', '460 497 1171 1208')
('Number of tiles: ', 4)
('Number of processes: ', 4)
Starting generate_tiles.pyc...
mkdir -p /data2/
mkdir -p /data3/
/data1/ contains 1 files
lasinfo /data1/station.las -nc -nv -nco
('Processing', 'station.las', 4067815, 459.925, 496.94, 1171.002, 702.146)
mkdir -p /data3//0
pdal split -i /data1/station.las -o /data3//0/station.las --origin_x=460.0 --origin_y=497.0 --length 355.5
lasinfo /data3//0/station_3.las -nc -nv -nco
mkdir -p /data2//tile_1_0
mv /data3//0/station_3.las /data2//tile_1_0/station_3.las
lasinfo /data3//0/station_2.las -nc -nv -nco
mv /data3//0/station_2.las /data2//tile_1_0/station_2.las
lasinfo /data3//0/station_1.las -nc -nv -nco
mkdir -p /data2//tile_0_0
mv /data3//0/station_1.las /data2//tile_0_0/station_1.las
Completed 1 of 1 (100.00%)
Finished in 33.09 seconds

I run the command to create the XML for every las to be converted to Potree (though it is strange that we want to output LAS/LAZ [-f LAS/LAZ]; it should normally be BINARY since it is already in LAS format, but I followed the example):
mpc-create-config-pycoeman -i /data2 -o /data1/ParallelPotreeConverter.xml -f LAS -l 5 -s 3 -e "460 497 24 1171 1208 735"
and than after I run the XML in paralel
coeman-par-local -d / -c /data1/ParallelPotreeConverter.xml -e /data1/execution -n 4

tile_0_0_potree_converter finished!
tile_1_0_potree_converter finished!

Technically I should have the las files that I split in the second step converted to Potree, right?
like data/r/r000...
temp/
cloud.js .... like if I used PotreeConverter to convert each one of them.
if I check the tile_0_0_potree_converter I have this inside:
tile_0_0 / inside this folder there is the las splited "station_1.las"
tile_0_0_potree_converter.log
tile_0_0_potree_converter.mon
tile_0_0_potree_converter.mon.disk

I Run the script to merge all the Potree-OctTrees into one:

mpc-merge-all -i /data1/execution -o /data1/poctrees_merge -m

('Input folder with Potree-OctTrees: ', '/data1/execution')
('Output Potree OctTree: ', '/data1/poctrees_merge')
('Move: ', True)
Starting merge_potree_all.pyc...
Ignoring tile_0_0_potree_converter
Ignoring tile_1_0_potree_converter

('Final merged Potree-OctTree is in ', None)
Finished in 0.00 seconds

nothing gets merged !!
I find it very strange that I don't have any point clouds generated from the las files that I split. I verified the split files, they are okay; I opened them with CloudCompare.
Could someone please help me, or show me an example of how they are doing it?
I have tried with different files and I get the same error every time.
@sverhoeven

docker mpc-merge-all failed, PotreeConverter: undefined symbol: laszip_create

hi all,
I tried the docker version of the utility; everything ran ok until the merge-all step, but tile merging failed with the following exception:

READING:  tile_0_2/e2075n70670_1.laz
PotreeConverter: symbol lookup error: PotreeConverter: undefined symbol: laszip_create

Seems like a broken docker container (absent laszip dependencies)?

thanks,

Alex.

Can't convert LAS to pnts when using Docker on Windows 10

hello, I want to process 1.5 TB of LAS data, and I set up Docker on my computer with the configuration shown in the attached screenshot.

I do the steps; the first two succeeded, but the 3rd step fails with an error that it can't use PotreeConverter.exe (see the attached screenshot).

Please help and tell me what I should do.

mpc-tiling generates an incorrect PDAL split call when we use negative origins

When calling mpc-tiling with a negative origin defined, pdal split reports that no value was provided.

pdal split -i C_25EZ2.laz -o ./3/C_25EZ2.laz --origin_x -212.6837 --origin_y 99063.1429 --length 2816.85898203

PDAL: kernels.split: Argument 'origin_x' needs a value and none was provided.

To get around we need to use the assignment syntax: --origin_x=-212.6837

Performance differences Vs. multiple pointclouds

Hello!
I wondered, how does your method compare to processing 256 separate tiles and loading them all in the viewer together? I wondered, did you try that and experience loading issues (certain tiles not showing up)?
