Comments (32)
Hey there,
Following the process described in the last bullet point of the recommendations section here should resolve your problem:
ctb-tile will resample data from the source dataset when generating tilesets for the various zoom levels. This can lead to performance issues and datatype overflows at lower zoom levels (e.g. level 0) when the source dataset is very large. ...
from cesium-terrain-builder-docker.
- Tile your input dataset:
ctb-tile -f GTiff -o folder -s 18 -e 18 inputRaster.gtiff
- Build virtual raster for the tileset from step 1:
gdalbuildvrt lvl18.vrt folder/*/*.tif
- Build Cesium terrain for current zoom level using virtual raster from step 2:
ctb-tile -f Mesh -C -N -o cesiumTerrainFolder -s 18 -e 18 lvl18.vrt
Repeat this process for each zoom level. Usually, this is only required for the higher zoom levels.
Run ctb-tile -f Mesh -N -C -s 21 -e 0 with a range of zoom levels to find out which level fails first.
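The per-level workflow above can be sketched as a small POSIX shell loop. This is a minimal sketch: the zoom range, folder names, and input filename are illustrative, and the script only prints the commands so they can be reviewed before running them.

```shell
#!/bin/sh
# Print the tile -> vrt -> terrain command sequence for a range of zoom
# levels, following the three steps above. Paths and levels are illustrative.
emit_level_cmds() {
  level="$1"
  input="$2"
  echo "ctb-tile -f GTiff -o gtiff_$level -s $level -e $level $input"
  echo "gdalbuildvrt lvl$level.vrt gtiff_$level/*/*.tif"
  echo "ctb-tile -f Mesh -C -N -o cesiumTerrainFolder -s $level -e $level lvl$level.vrt"
}

for lvl in 18 17 16; do
  emit_level_cmds "$lvl" inputRaster.gtiff
done
```

Dropping the echoes (or piping the output to sh) would actually run the commands, provided ctb-tile and gdalbuildvrt are installed.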
> From my understanding, the input .tif dataset in your first step is a single .tif file. Am I correct? Hence, I have to merge multiple .tif files into one single .tif file:
> gdalwarp input1.tif input2.tif input3.tif input4.tif output_merge.tif

No, you don't have to merge the tif files into a single file. Create a GDAL vrt and use this as input to ctb instead.
> Then, I follow your steps. But in step 2, I got the following error:
> gdalbuildvrt level18.vrt tif_tilesets/18/*/*.tif
> bash: /usr/bin/gdalbuildvrt: Argument list too long

Try the -input_file_list my_list.txt option of gdalbuildvrt instead of listing all files as arguments. You can easily create the file list using e.g. find /tiles/folder -type f -name "*.tif" > file_list.txt.
https://gdal.org/programs/gdalbuildvrt.html
> Another question is that we need to generate a layer.json in the end according to the command below:
> ctb-tile -f Mesh -C -N -l -o terrain tiles.vrt
> Should I just simply run this command: ctb-tile -f Mesh -C -N -l -o terrain level16.vrt?

No, I usually create the layer file from the input dataset, not from one of the zoom levels.
I used export GDAL_CACHEMAX=4000 to increase the RAM GDAL may use.
Hope it helps someone.
Thank you for the quick response.
Hey there,
I recommend re-projecting your data to WGS84 before using CTB. This might avoid the gdalwarp memory issue you are facing as well. Moreover, make sure to grant enough main memory to Docker (Docker right-click menu -> Settings -> Advanced) when increasing GDAL_CACHEMAX.
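As a concrete sketch (the values are illustrative examples, not tuning advice):

```shell
# Illustrative: start the container with an explicit memory limit, then
# raise GDAL's block cache inside it. The docker command is shown as a
# comment because it is interactive:
#
#   docker run -m 8g -v "${PWD}:/data" -it --name ctb tumgis/ctb-quantized-mesh
#
# Inside the container; a plain value like 4000 is interpreted by GDAL
# as megabytes, i.e. roughly 4 GB of raster block cache:
export GDAL_CACHEMAX=4000
```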
Thank you for your suggestion.
Unfortunately, I'm now getting the following error:
root@a1db04a328db:/data# ctb-tile -R -f Mesh -s 21 -e 0 -C -N -o ./tilesets_new2 ./a/a.tif
0...10...20...30...40...50...60...70...80...90...ERROR 2: gdalwarpoperation.cpp, 1574: cannot allocate 900553132 bytes
ERROR 1: IReadBlock failed at X offset 0, Y offset 0
ERROR 1: GetBlockRef failed at X block offset 0, Y block offset 0
ERROR 1: Attempt to call CreateSimilar on a non-GTI2 transformer.
Segmentation fault
I have used the following steps:
docker -m 8g run -v "${PWD}:/data" -it --name ctb tumgis/ctb-quantized-mesh
export GDAL_CACHEMAX=4000 (I think it's in MB)
ctb-tile -f Mesh -s 21 -e 0 -C -N -o ./tilesets_new2 ./a/a.tif
And as suggested, I have converted the data to WGS84.
Please help me on this.
Thank you
Have you tried this approach (as described here)? For large datasets this has always worked for me so far.
ctb-tile will resample data from the source dataset when generating tilesets for the various zoom levels. This can lead to performance issues and datatype overflows at lower zoom levels (e.g. level 0) when the source dataset is very large. To overcome this, the tool can be used on the original dataset to only create the tile set at the highest zoom level (e.g. level 18) using the --start-zoom and --end-zoom options. Once this tileset is generated it can be turned into a GDAL Virtual Raster dataset for creating the next zoom level down (e.g. level 17). Repeating this process until the lowest zoom level is created means that the resampling is much more efficient (e.g. level 0 would be created from a VRT representation of level 1). Because terrain tiles are not a format supported by VRT datasets, you will need to perform this process in order to create tiles in a GDAL DEM format as an intermediate step. VRT representations of these intermediate tilesets can then be used to create the final terrain tile output.
Still no luck; I'm getting the same issue.
The size of the tiff image is 2.72 GB and the size of the built terrain data is around 150 MB.
Does that size look right?
Hey @BWibo
I have multiple .tif files, which together cover a city. Their total size is 6.3 GB.
The following is my pipeline:
- Translate the coordinate reference system to EPSG:4326 for each .tif file:
gdalwarp -t_srs EPSG:4326 input.tif output.tif
- Create a GDAL virtual dataset for them:
gdalbuildvrt tiles.vrt *.tif
- Create the Cesium terrain files:
ctb-tile -f Mesh -C -N -o terrain tiles.vrt
Then, unfortunately I'm getting the following error:
root@399224581a8e:/data# ctb-tile -f Mesh -C -N -o terrain tiles.vrt
0...10...20...30...40...50...60...70...80...90...ERROR 1: Integer overflow : nSrcXSize=41494, nSrcYSize=16585
ERROR 1: IReadBlock failed at X offset 0, Y offset 0: Integer overflow : nSrcXSize=41494, nSrcYSize=16585
ERROR 1: Integer overflow : nSrcXSize=41494, nSrcYSize=16585
ERROR 1: IReadBlock failed at X offset 0, Y offset 0: Integer overflow : nSrcXSize=41494, nSrcYSize=16585
ERROR 1: IReadBlock failed at X offset 0, Y offset 0: IReadBlock failed at X offset 0, Y offset 0: Integer overflow : nSrcXSize=41494, nSrcYSize=16585
ERROR 1: Integer overflow : nSrcXSize=41494, nSrcYSize=16585
ERROR 1: IReadBlock failed at X offset 0, Y offset 0: Integer overflow : nSrcXSize=41494, nSrcYSize=16585
ERROR 1: IReadBlock failed at X offset 0, Y offset 0: IReadBlock failed at X offset 0, Y offset 0: Integer overflow : nSrcXSize=41494, nSrcYSize=16585
ERROR 1: Integer overflow : nSrcXSize=41494, nSrcYSize=16585
ERROR 1: IReadBlock failed at X offset 0, Y offset 0: Integer overflow : nSrcXSize=41494, nSrcYSize=16585
ERROR 1: IReadBlock failed at X offset 0, Y offset 0: IReadBlock failed at X offset 0, Y offset 0: Integer overflow : nSrcXSize=41494, nSrcYSize=16585
Although my dataset is also large, it consists of multiple .tif files, so my situation is different from the solution you list above. In your solution, the data is a single large .tif file that can be processed for each zoom level, which is not suitable for my case.
Please give some hints, thanks :)
PS: How about processing lots of .tif files that cover a whole country? Their size could be up to 1 TB.
Oliver
Hey there,
I have had these integer overflows as well. Check the last bullet point of this list. For large datasets this has always worked for me so far. I was able to create large terrain tilesets of e.g. ~200 km x 400 km extent without any problems.
When processing several TIF files, make use of GDAL virtual rasters, as described above: gdalbuildvrt tiles.vrt /path/to/all/my/tiles/*.tif
ctb-tile will resample data from the source dataset when generating tilesets for the various zoom levels. This can lead to performance issues and datatype overflows at lower zoom levels (e.g. level 0) when the source dataset is very large. To overcome this, the tool can be used on the original dataset to only create the tile set at the highest zoom level (e.g. level 18) using the --start-zoom and --end-zoom options. Once this tileset is generated it can be turned into a GDAL Virtual Raster dataset for creating the next zoom level down (e.g. level 17). Repeating this process until the lowest zoom level is created means that the resampling is much more efficient (e.g. level 0 would be created from a VRT representation of level 1). Because terrain tiles are not a format supported by VRT datasets, you will need to perform this process in order to create tiles in a GDAL DEM format as an intermediate step. VRT representations of these intermediate tilesets can then be used to create the final terrain tile output.
Hey @BWibo,
Thanks for your reply.
From my understanding, the input .tif dataset in your first step is a single .tif file. Am I correct?
Hence, I have to merge multiple .tif files into one single .tif file:
gdalwarp input1.tif input2.tif input3.tif input4.tif output_merge.tif
Then, I follow your steps. But in step 2, I got the following error:
gdalbuildvrt level18.vrt tif_tilesets/18/*/*.tif
bash: /usr/bin/gdalbuildvrt: Argument list too long
It seems to be because there are too many tiled .tif files in the folder.
Then, I tried to increase stack size:
ulimit -s 65536
It worked for level 16 but failed on level 17 and 18.
If I increased the stack size again, to e.g. 131072 KB, it reported:
bash: ulimit: stack size: cannot modify limit: Operation not permitted
I am wondering how you handled this issue when you were processing large datasets?
So my zoom level range is just from 0 to 16 right now.
Another question is that we need to generate a layer.json in the end, according to the command below:
ctb-tile -f Mesh -C -N -l -o terrain tiles.vrt
Should I just simply run this command: ctb-tile -f Mesh -C -N -l -o terrain level16.vrt?
Oliver
Hi @BWibo,
Thanks for your answers.
Everything was going well except building terrain tilesets at the high zoom level 18:
ctb-tile -f Mesh -C -N -o terrain -s 18 -e 18 level18.vrt
Killed
The command was killed, and I guess it may be related to low warp memory? I am not sure.
- Setting GDAL runtime configuration options will also affect Cesium Terrain Builder. Specifically the GDAL_CACHEMAX environment variable should be set to a relatively high value, in conjunction with the warp memory, if required (see next recommendation).
- If warping the source dataset then set the warp memory to a relatively high value. The correct value is system dependent but try starting your benchmarks from a value where the combined value of GDAL_CACHEMAX and the warp memory represents about 2/3 of your available RAM.
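The arithmetic behind that recommendation can be sketched as follows. This assumes 16 GB of RAM and an even split between the cache and the warp memory; both the total and the split are illustrative starting points for benchmarking, not recommended values.

```shell
#!/bin/sh
# Budget roughly 2/3 of RAM for GDAL_CACHEMAX plus warp memory combined,
# as the recommendation above suggests. Values are in MB.
TOTAL_MB=16384                            # assume 16 GB of RAM
BUDGET_MB=$((TOTAL_MB * 2 / 3))           # cache + warp budget (~10922 MB)
export GDAL_CACHEMAX=$((BUDGET_MB / 2))   # half to GDAL's raster block cache
WARP_MB=$((BUDGET_MB - GDAL_CACHEMAX))    # the rest to the warp memory
echo "GDAL_CACHEMAX=$GDAL_CACHEMAX warp_memory_mb=$WARP_MB"
```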
I don't quite understand the instructions/explanations above and don't know how to set these two options correctly. My OS is Ubuntu 16.04.
Please give me some hints. Thanks again :)
Oliver
Hey there,
sadly, just the word Killed is not very helpful in finding the reason for the crash. I'm afraid I can't help you without more detailed information on the error.
This looks like the process has been forcefully terminated, either by yourself (accidentally) or by your system (maybe e.g. because of a memory leak). It is unlikely that the options above have anything to do with this.
Look out for ways to monitor your system resources and see if they overflow when using the tool.
Moreover, look out for more detailed logging/debug messages.
Hi @BWibo,
Yes, you are right. The process was forcefully killed by my system and I got the error below:
Out of memory: Kill process 45064 (ctb-tile) score 853 or sacrifice child.
Killed process 45064 (ctb-tile) total-vm:229898268kB, anon-rss:224757176kB, file-rss:0kB, shmem-rss:0kB
Moreover, I also observed that CPU and memory usage increased to 3125% and 85.2+%, respectively, then got stuck there, and the process would be killed after a while.
Could you please give some advice? Thanks again!
Oliver
Apparently you are reaching a hardware memory limit.
If you are running this in a virtual machine you may be able to increase the memory limit; if not, you may need a system offering more memory. I have never faced such issues. Look into memory optimization with Cesium Terrain Builder. Docker's memory limits could also be the cause of this issue.
Hi @BWibo,
Thanks for your advice.
I checked the docker container's hardware configuration:
$ docker stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
399224581a8e ctb 2848.05% 212.2GiB / 250.6GiB 84.68% 2.31MB / 0B 13.6GB / 8.6MB 51
which indicates that the ctb container had already been assigned enough memory (250 GB), so the Docker memory limit should not be the cause of the issue.
Hence, I should look into memory optimization with cesium terrain builder as you mentioned above.
More information about my dataset:
After tiling my input dataset at level 18, I got 1,141,140 .tif files with a total size of 19.8 GB:
ctb-tile -f GTiff -o tif_tilesets -s 18 -e 18 input.vrt
One point I find strange is that the size of my original .tif dataset is 6.3 GB (around 40 km x 35 km extent, 1 m resolution DTM), but now I get 19.8 GB of files after tiling. Is that normal?
I remember you were able to create large terrain tilesets of e.g. ~200 km x 400 km extent without any problems. How did you manage to do that? Could you please provide more detailed information? Thanks a lot again!
Oliver
> One point I find strange is that the size of my original .tif dataset is 6.3 GB (around 40 km x 35 km extent, 1 m resolution DTM), but now I get 19.8 GB of files after tiling. Is that normal?
The file size may increase due to tiling, but I have no experience to what extent.
I did it just the way it is described here: #3 (comment)
I worked with 1m DTMs as well, but for a much larger extent. What seems strange to me is the memory consumption.
As far as I remember, I did this on a machine with ~32 GB RAM.
Just one note:
First, I determined the zoom level where my process failed (let's assume ctb crashed at level 10).
For the higher zoom levels, e.g. 18 to 11, I used ctb on the original input data.
Then I tiled the GTiff files at one level above the level where the crash occurred, hence level 11 in this example.
After that I created the remaining levels (10 to 0) from the tiled GTiffs, as described in the link up top.
@predictwise Did you manage to resolve your issue? If yes, how?
Not yet. I have temporarily put it aside to focus on my other work.
There must be a certain step that I missed. Otherwise, I would have managed to build the terrain.
Another option is a problem with the input data. Check your original input TIFs and the tiled TIFs you create with CTB for errors.
Hi @BWibo,
How about I share a Google Drive link with my original input TIF data and you test it on your side?
Please post the output of gdalinfo first:
https://gdal.org/programs/gdalinfo.html
Hi @BWibo,
The following is the output of gdalinfo on one of the original TIF files:
$ gdalinfo -json original_input.tif
{
"description":"original_input.tif",
"driverShortName":"GTiff",
"driverLongName":"GeoTIFF",
"files":[
"original_input.tif",
"original_input.tif.ovr"
],
"size":[
15010,
15010
],
"coordinateSystem":{
"wkt":"PROJCS[\"ETRS_1989_UTM_Zone_33N\",\n GEOGCS[\"ETRS89\",\n DATUM[\"European_Terrestrial_Reference_System_1989\",\n SPHEROID[\"GRS 1980\",6378137,298.2572221010042,\n AUTHORITY[\"EPSG\",\"7019\"]],\n AUTHORITY[\"EPSG\",\"6258\"]],\n PRIMEM[\"Greenwich\",0],\n UNIT[\"degree\",0.0174532925199433],\n AUTHORITY[\"EPSG\",\"4258\"]],\n PROJECTION[\"Transverse_Mercator\"],\n PARAMETER[\"latitude_of_origin\",0],\n PARAMETER[\"central_meridian\",15],\n PARAMETER[\"scale_factor\",0.9996],\n PARAMETER[\"false_easting\",500000],\n PARAMETER[\"false_northing\",0],\n UNIT[\"metre\",1,\n AUTHORITY[\"EPSG\",\"9001\"]],\n AUTHORITY[\"EPSG\",\"25833\"]]"
},
"geoTransform":[
245425.0,
1.0,
0.0,
7041005.0,
0.0,
-1.0
],
"metadata":{
"":{
"AREA_OR_POINT":"Area",
"DataType":"Generic"
},
"IMAGE_STRUCTURE":{
"COMPRESSION":"LZW",
"INTERLEAVE":"BAND"
}
},
"cornerCoordinates":{
"upperLeft":[
245425.0,
7041005.0
],
"lowerLeft":[
245425.0,
7025995.0
],
"lowerRight":[
260435.0,
7025995.0
],
"upperRight":[
260435.0,
7041005.0
],
"center":[
252930.0,
7033500.0
]
},
"wgs84Extent":{
"type":"Polygon",
"coordinates":[
[
[
9.8990811,
63.4063278
],
[
9.9228485,
63.2721464
],
[
10.2209164,
63.2824831
],
[
10.198531,
63.4167248
],
[
9.8990811,
63.4063278
]
]
]
},
"bands":[
{
"band":1,
"block":[
512,
512
],
"type":"Float32",
"colorInterpretation":"Gray",
"noDataValue":-32767.0,
"overviews":[
{
"size":[
7505,
7505
]
},
{
"size":[
3753,
3753
]
},
{
"size":[
1877,
1877
]
},
{
"size":[
939,
939
]
},
{
"size":[
470,
470
]
},
{
"size":[
235,
235
]
}
],
"unit":"metre",
"metadata":{
"":{
"SourceBandIndex":"0"
}
}
}
]
}
The following is the output of gdalinfo on the projected (EPSG:4326) TIF file:
$ gdalinfo -json 4326_input.tif
{
"description":"4326_input.tif",
"driverShortName":"GTiff",
"driverLongName":"GeoTIFF",
"files":[
"4326_input.tif"
],
"size":[
19811,
8900
],
"coordinateSystem":{
"wkt":"GEOGCS[\"WGS 84\",\n DATUM[\"WGS_1984\",\n SPHEROID[\"WGS 84\",6378137,298.257223563,\n AUTHORITY[\"EPSG\",\"7030\"]],\n AUTHORITY[\"EPSG\",\"6326\"]],\n PRIMEM[\"Greenwich\",0],\n UNIT[\"degree\",0.0174532925199433],\n AUTHORITY[\"EPSG\",\"4326\"]]"
},
"geoTransform":[
9.8990811498956237,
0.0000162451392481,
0.0,
63.4167247689999627,
0.0,
-0.0000162451392481
],
"metadata":{
"":{
"AREA_OR_POINT":"Area",
"DataType":"Generic"
},
"IMAGE_STRUCTURE":{
"INTERLEAVE":"BAND"
}
},
"cornerCoordinates":{
"upperLeft":[
9.8990811,
63.4167248
],
"lowerLeft":[
9.8990811,
63.272143
],
"lowerRight":[
10.2209136,
63.272143
],
"upperRight":[
10.2209136,
63.4167248
],
"center":[
10.0599974,
63.3444339
]
},
"wgs84Extent":{
"type":"Polygon",
"coordinates":[
[
[
9.8990811,
63.4167248
],
[
9.8990811,
63.272143
],
[
10.2209136,
63.272143
],
[
10.2209136,
63.4167248
],
[
9.8990811,
63.4167248
]
]
]
},
"bands":[
{
"band":1,
"block":[
19811,
1
],
"type":"Float32",
"colorInterpretation":"Gray",
"noDataValue":-32767.0,
"unit":"metre",
"metadata":{
"":{
"SourceBandIndex":"0"
}
}
}
]
}
This is the output of gdalinfo on one of the tiled TIF files that I created with CTB at level 18:
$ gdalinfo -json 223575.tif
{
"description":"223575.tif",
"driverShortName":"GTiff",
"driverLongName":"GeoTIFF",
"files":[
"223575.tif"
],
"size":[
65,
65
],
"coordinateSystem":{
"wkt":"GEOGCS[\"WGS 84\",\n DATUM[\"WGS_1984\",\n SPHEROID[\"WGS 84\",6378137,298.257223563,\n AUTHORITY[\"EPSG\",\"7030\"]],\n AUTHORITY[\"EPSG\",\"6326\"]],\n PRIMEM[\"Greenwich\",0],\n UNIT[\"degree\",0.0174532925199433],\n AUTHORITY[\"EPSG\",\"4326\"]]"
},
"geoTransform":[
10.79681396484375,
0.0000105637770433,
0.0,
63.5174560546875,
0.0,
-0.0000105637770433
],
"metadata":{
"":{
"AREA_OR_POINT":"Area"
},
"IMAGE_STRUCTURE":{
"INTERLEAVE":"BAND"
}
},
"cornerCoordinates":{
"upperLeft":[
10.796814,
63.5174561
],
"lowerLeft":[
10.796814,
63.5167694
],
"lowerRight":[
10.7975006,
63.5167694
],
"upperRight":[
10.7975006,
63.5174561
],
"center":[
10.7971573,
63.5171127
]
},
"wgs84Extent":{
"type":"Polygon",
"coordinates":[
[
[
10.796814,
63.5174561
],
[
10.796814,
63.5167694
],
[
10.7975006,
63.5167694
],
[
10.7975006,
63.5174561
],
[
10.796814,
63.5174561
]
]
]
},
"bands":[
{
"band":1,
"block":[
65,
31
],
"type":"Float32",
"colorInterpretation":"Gray",
"noDataValue":-32767.0,
"metadata":{
}
}
]
}
Have you found anything wrong with my data?
Oliver
I can't spot anything problematic from a quick look at the output.
Hi @BWibo,
Yes, I also can't find anything problematic in the output.
Could you please test my dataset at level 18 on your side? If yes, I will send you a Google Drive link to the dataset.
Thank you very much.
Oliver
No, I'm sorry.
I am maintaining this Docker image, but this issue seems not to be related to the image itself. This seems to be an issue with either the CesiumTerrainBuilder application, your data, something in your data pipeline/process, or the system/hardware you are using.
Taking a closer look into your data or doing the processing for you is something I cannot provide without an official assignment. If you are interested, let me know.
Hello,
Thanks @BWibo for maintaining this docker image and @homme and @ahuarte47 for the work on ctb!
Sorry for digging up this issue, but I also have the data overflow issue when working with large datasets, and I have found two different tips for solving it (let's take an example where the generation fails at level 8):
- Tile the input file as a gtiff file at level 8 with ctb-tile, create a vrt at level 8, and use ctb-tile to create the terrain files from level 8 to 0 with -s and -e respectively set to 8 and 0 (as explained in one of your comments)
- Tile the input file as a gtiff file at level 9 with ctb-tile, create a vrt at level 9, and use ctb-tile to create the terrain files from level 8 to 0 with -s and -e respectively set to 8 and 0 (as explained in one of your comments, in one comment on ctb github, and in the last recommendation of ctb github)
Which one is the correct way to go and why?
Thanks
Hey there, I think there is no right or wrong here; it depends on your data set. In fact, I don't know exactly what influences at which level ctb-tile crashes. It could be the extent of your input data.
For me, the level at which to start tiling the GTIFF files just depends on the level where ctb-tile crashes. As mentioned before, I usually first run ctb-tile -s 18 -e 0 to determine the level where ctb-tile crashes.
Then I tile the input GTIFF at one level higher. For instance, if level 10 crashed, I tile the input GTIFF at level 11.
After that I proceed as described above: I create a lvl-11.vrt from the tiled GTIFFs and use it to create level 10 with ctb-tile.
Thanks for your answer;
in the context of your example, my question was more about why you would first tile the input GTIFF at level 11 to build the level 10 terrain (versus tiling it at level 10 to build the level 10 terrain)? @homme and @ahuarte47 maybe? (since this seems to be the recommendation given on the cesium-terrain-builder github)
> Which one is the correct way to go and why?
These overflow issues are raised by GDAL when, for the lower levels, CTB tries to create overviews from the input data. If the input data is big, GDAL throws an exception. I have always avoided this overflow issue for the lower levels by using a simplified version of the input data with a lower resolution. I process the original input data with gdal_translate to output a set of rasters with lower resolutions (x2, x4, x8, ...), and then for each level I use the one that does not throw any error, starting from level 0 and moving to the upper levels, one by one. The higher the level you are processing, the higher the resolution of the raster you use.
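A sketch of that idea is below. The percentages and filenames are illustrative, and the function only prints the gdal_translate commands so they can be reviewed; which downsampled copy serves which zoom levels is left to experimentation, as described above.

```shell
#!/bin/sh
# Print gdal_translate commands that derive progressively coarser copies
# of the input (x2, x4, x8 downsampling). The coarser copies can then be
# fed to ctb-tile for the lower zoom levels.
gen_translate_cmd() {
  pct="$1"  # output size as a percentage of the original
  echo "gdal_translate -outsize $pct% $pct% input.tif input_${pct}pct.tif"
}

for pct in 50 25 12.5; do
  gen_translate_cmd "$pct"
done
```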
Ok, thanks for both of your answers!
@ahuarte47 Thx for the clarification.
@predictwise this could be a possible solution for your issue as well. Please let us know if this works for you, if you try it out.