Comments (15)
I am not sure position correlates with normals in general, except in the simplest of cases.
What I meant was that our predictor computed geometric smooth normals using the underlying positions + connectivity data, taking into consideration crease angles for sharp edges.
In professionally created models for games, nearly no one actually creates normals by hand, though, and this is an important point. They specify the normals indirectly: the object is first assumed to be smooth, and exceptions to that smooth surface are then specified either by tagging edges as "creases" (true, false, or a scalar) or by specifying smoothing groups.
That's a good point, and Draco can already store per-face smoothing groups. Tagging edges would be somewhat more difficult as we currently don't have support for per-edge attributes, but that's something that can be added.
I am very curious. What paper reference do you have for that?
I'm not sure if there is a paper about octahedral coordinates for normals, but they have been used in many games lately. There is a paper about using octahedral coordinates for environment maps here, and a website that shows how they can be used for normal vectors is here.
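For reference, the octahedral mapping itself is only a few lines. This is a generic sketch of the standard technique, not Draco's actual implementation, assuming NumPy:

```python
import numpy as np

def oct_encode(n):
    """Map a unit normal to 2D octahedral coordinates in [-1, 1]^2."""
    n = n / np.abs(n).sum()               # project onto the octahedron |x|+|y|+|z| = 1
    if n[2] < 0:                          # fold the lower hemisphere over the upper one
        x, y = n[0], n[1]
        n[0] = (1 - abs(y)) * (1 if x >= 0 else -1)
        n[1] = (1 - abs(x)) * (1 if y >= 0 else -1)
    return n[:2]

def oct_decode(e):
    """Inverse mapping: 2D octahedral coordinates back to a unit normal."""
    x, y = e
    z = 1 - abs(x) - abs(y)
    if z < 0:                             # undo the fold for the lower hemisphere
        x, y = ((1 - abs(y)) * (1 if x >= 0 else -1),
                (1 - abs(x)) * (1 if y >= 0 else -1))
    v = np.array([x, y, z])
    return v / np.linalg.norm(v)
```

The 2D result can then be uniformly quantized and delta-coded, which is what makes the representation attractive for compression compared to storing three floats per normal.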
from draco.
https://diglib.eg.org/bitstream/handle/10.2312/vmv20171266/111-118.pdf
page 118
Hey Ben. Thanks for the feedback and the requests. We'll run some tests that we can share and provide some comparisons!
It would be cool to have bpv (bits per vertex) measures. Here is a really good comparative paper on 3D mesh compression that gives expected metrics in Table 1: http://liris.cnrs.fr/glavoue/travaux/revue/CSUR2015.pdf Given that your library uses Edgebreaker, we should expect an average of 2.1 bpv with triangle-strip-organized output, because while there are lower-bpv compression schemes, they do not produce the triangle-strip output that Edgebreaker does -- and this is why I believe you chose the Edgebreaker approach.
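For anyone wanting to reproduce such numbers, bpv is simply the encoded size divided by the vertex count. A trivial sketch (the file path and vertex count are placeholders):

```python
import os

def bits_per_vertex(encoded_path, num_vertices):
    """Total encoded file size expressed in bits per vertex (bpv)."""
    return os.path.getsize(encoded_path) * 8 / num_vertices

# Worked example: a 100,000-vertex mesh encoded into a 26,250-byte file
# comes out at 26250 * 8 / 100000 = 2.1 bpv, the Edgebreaker average
# cited above.
```

Note that Table 1 of the cited survey reports connectivity bits only; a fair comparison should separate connectivity from quantized attribute data where the encoder allows it.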
We are very interested in adopting this tool, btw, if it is near optimal. I very much like that it focuses on just a single mesh rather than a hierarchical format that includes a scene graph, materials, etc.; that makes it a good building block.
I'd like to second what @bhouston just said. At work we are loading huge single meshes of buildings and sometimes even cities, and Draco looks very suitable for us, without all the cruft of scene graphs, materials, etc. It will be very interesting to see how it compares with other existing formats.
@hccampos You can also use the mesh encoder that glTF uses independently of glTF: https://github.com/KhronosGroup/glTF/wiki/Open-3D-Graphics-Compression
@hccampos The open 3DGC library also has the benefit of being really small compared to the current Emscripten port of Draco.
@hccampos We still need to do more comprehensive testing, but the preliminary results we have indicate that the current version of Draco generally offers either slightly better or equivalent compression compared to O3DGC under the same quality settings for meshes with positions only. For meshes with texture coordinates and normals, we have observed about 1.1-1.2X compression gain.
Draco has in general been significantly faster (about 2-3X faster encoding and 1.5-3X faster decoding, C++ only). We have not compared the JavaScript decoding performance yet, but we would expect the performance gain there to be the same or even better than in the C++ implementation.
The size of the Draco JavaScript decoder is indeed bigger and, as stated in this issue, we plan to make it smaller.
I don't currently have any comparison with OpenCTM, but we did some measurements quite a long time ago and OpenCTM provided in general significantly worse compression (compared to both Draco and O3DGC), with about the same decoding performance as Draco, but much slower encoding.
OpenCTM provided in general significantly worse compression
OpenCTM has different compression modes, and some are not so good. The best, I believe, is named M2 -- the one that uses LZMA on top of quantization -- which I suspect should be similar in performance to Draco.
For meshes with texture coordinates and normals, we have observed about 1.1-1.2X compression gain.
My understanding is that the state of the art in normal compression is to use the derived normals of the surface based on its connectivity structure (e.g. smooth normals), then also incorporate hard edges or creases, and then have a correction factor on top of that to support arbitrary normals. This can result in almost no data being required for normals. OpenCTM does most of this (derived normals from connectivity + correction factors) but skips the hard-edge/crease technique. Does Draco do this? My reading of the code suggested it may not.
I just checked the OpenCTM code and the good codec is MG2. Here is the smooth normal computation code along with the correction factors that I believe is partly responsible for the MG2's compression ratio performance:
https://github.com/Danny02/OpenCTM/blob/master/lib/compressMG2.c#L421
https://github.com/Danny02/OpenCTM/blob/master/lib/compressMG2.c#L487
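To illustrate the idea in the code linked above: the decoder can recompute smooth vertex normals from positions and connectivity alone, so the encoder only needs to store the residuals. This is a simplified sketch of the technique, not OpenCTM's actual MG2 code (which additionally works in a local coordinate frame and quantizes the residuals):

```python
import numpy as np

def predicted_normals(positions, triangles):
    """Area-weighted smooth vertex normals derived purely from
    positions + connectivity; the decoder can recompute these."""
    normals = np.zeros_like(positions)
    for a, b, c in triangles:
        # Cross product length is proportional to twice the face area,
        # so accumulating it gives area weighting for free.
        face_n = np.cross(positions[b] - positions[a],
                          positions[c] - positions[a])
        normals[[a, b, c]] += face_n
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.where(lengths > 0, lengths, 1)

def normal_corrections(actual, predicted):
    """Residuals the encoder actually stores; near zero wherever
    the mesh really is smooth."""
    return actual - predicted
```

On a genuinely smooth mesh the corrections compress to almost nothing, which is the effect discussed in this thread.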
By default OpenCTM uses MG2 (the good one) and compression level 1 (the best compression ratio is at level 9, but 5 probably achieves most of the benefit). I suspect the slower decompression may be related to the normal computation, but I haven't studied it in detail.
@bhouston The measurements we did in the past were indeed using the MG2 compression. While we don't have any official comparison numbers right now, you can check some benchmarks of OpenCTM vs. O3DGC done for glTF here. The difference was rather big, especially for scanned models, which was also the case for Draco as far as I remember.
As for normals, we did some experiments with predictors based on geometric normals (normals defined by the positions of the vertices), but the results were mixed. As expected, the technique worked well for models where the normals are strongly correlated with the geometry, but in practice we found that people usually used explicit normals only when they were significantly different from the geometric normals, in which case this predictor performed much worse than the other available options. In the end we decided not to include this predictor in the public release, but we may add it at a later date. What we have for normals is a specialized encoder that transforms them into octahedral coordinates where we can encode them using efficient transforms.
On the other hand, we do use positions as an input to the predictors for texture coordinates, which usually works better than the other options (only for -cl 7 or higher).
As for normals, we did some experiments with predictors based on geometric normals (normals defined by the positions of the vertices), but the results were mixed. As expected, the technique worked well for models where the normals are strongly correlated with the geometry, but in practice we found that people usually used explicit normals only when they were significantly different from the geometric normals, in which case this predictor performed much worse than the other available options. In the end we decided not to include this predictor in the public release, but we may add it at a later date. What we have for normals is a specialized encoder that transforms them into octahedral coordinates where we can encode them using efficient transforms.
Interesting, I've never used predictors before in such a fashion -- neat idea. I understand what you mean and I can see their usefulness in reducing entropy.
I am not sure position correlates with normals in general, except in the simplest of cases.
In professionally created models for games, nearly no one actually creates normals by hand, though, and this is an important point. They specify the normals indirectly: the object is first assumed to be smooth, and exceptions to that smooth surface are then specified either by tagging edges as "creases" (true, false, or a scalar) or by specifying smoothing groups. These are very simple measures that can be fed into a geometric vertex-normal computation, which then generates the complex normals of the resulting object. Creases and smoothing groups are very low-entropy data compared to the derived normals, and they correspond to real-world design features in 3D objects. I am sure they are immensely better than any method that compresses the raw normals, unless the compression method is one that derives the underlying creases or smoothing groups.
When it comes to predictors, I know that the angle across an edge between adjacent faces is a predictor of that edge being a crease or a smoothing-group boundary. And creases and smoothing groups are very often completely predictive of normals in the context of professionally created models.
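That dihedral-angle predictor is easy to sketch; the 60° threshold below is an arbitrary assumption for illustration, not a standard value:

```python
import numpy as np

def face_normal(p0, p1, p2):
    """Unit normal of a triangle given its three vertex positions."""
    n = np.cross(p1 - p0, p2 - p0)
    return n / np.linalg.norm(n)

def is_crease(n1, n2, crease_angle_deg=60.0):
    """Predict an edge as a crease when the dihedral angle between
    its two adjacent face normals exceeds a threshold."""
    cos_angle = np.clip(np.dot(n1, n2), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle)) > crease_angle_deg
```

An encoder could use this prediction as context: edges the geometry already flags as sharp need few or no extra bits, and only the exceptions cost anything.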
FBX, the main transfer format for games, stores edge-crease and smoothing-group data rather than normals in most cases, both for compactness and because that is the low-entropy data artists want to work with.
Deriving creases and smoothing groups from scanned objects (whose position data is already slightly erroneous and whose triangles are poorly structured) is likely one of those hard problems that I do not think anyone should invest too much time into.
My use case (professionally created models) is fairly different than yours.
What we have for normals is a specialized encoder that transforms them into octahedral coordinates where we can encode them using efficient transforms.
I am very curious. What paper reference do you have for that?
I do think that scanned objects are much harder to compress than the professionally created CAD or polygon models that are common in engineering and video gaming. Your approaches are reasonable for that use case.
Great answers.
Tagging edges would be somewhat more difficult as we currently don't have support for per-edge attributes, but that's something that can be added.
Just enumerate the edges based on the first time each is encountered while traversing the facets in order. Creases are stored as scalars between 0 and 1 in FBX, but they could be heavily quantized to just a few bits each (2 may be sufficient for most cases?) and then compressed using any method; they will add up to almost nothing while providing massive flexibility.
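A sketch of that scheme, with hypothetical helper names (a real integration would hang this off the compressor's connectivity traversal so that encoder and decoder agree on the edge order):

```python
def enumerate_edges(faces):
    """Number each undirected edge by first encounter while walking
    the faces in order, giving encoder and decoder the same implicit
    edge ordering with no stored indices."""
    edge_ids = {}
    for face in faces:
        for i in range(len(face)):
            edge = frozenset((face[i], face[(i + 1) % len(face)]))
            if edge not in edge_ids:
                edge_ids[edge] = len(edge_ids)
    return edge_ids

def quantize_crease(weight, bits=2):
    """Quantize a crease weight in [0, 1] to a few bits (2 bits -> 4 levels)."""
    levels = (1 << bits) - 1
    return round(weight * levels)

def dequantize_crease(q, bits=2):
    """Map the quantized level back to a crease weight in [0, 1]."""
    levels = (1 << bits) - 1
    return q / levels
```

With the implicit ordering, the crease stream is just one tiny quantized scalar per edge, which any entropy coder will shrink further.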
Only 3ds Max creates smoothing groups. All other tools (Maya, Softimage, Blender, etc.) use crease weights.
Sorry to bring this up but... where are the comparisons after all? I can't find them in either the README or the homepage.