
crunch's People

Contributors

bagnell, leandros, nwnk, richgel999, tomerb


crunch's Issues

Decoding DXT5_CCxY images

How are YCoCg (DXT5_CCxY) encoded textures to be decoded in a shader?
I've tried the standard decode (taken from the original NV paper), but I'm not seeing the right results. When I encode to YCoCg manually before compressing to DXT5 with the NVTT library, everything looks as expected.
The resulting DDS images produced by this method are clearly visually different from what crunch outputs.

I know the YCoCg encoding is mentioned as experimental. Does crunch apply some extra tricks that require more work to decode, or is this an issue with the encoder?
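For reference, here is the standard (unscaled) YCoCg-to-RGB transform from the NV paper as a plain C++ sketch. The channel packing (Co in red, Cg in green, luma Y in alpha) is my assumption about what DXT5_CCxY means; crunch may additionally use the scaled variant, which would explain the mismatch:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical helper, not crunch code: standard YCoCg -> RGB decode as
// described in the NVIDIA YCoCg-DXT paper. Inputs are normalized [0, 1]
// channel values; chroma is stored biased by 0.5 to fit an unsigned channel.
struct RGB { float r, g, b; };

RGB decode_ycocg(float co, float cg, float y) {
    co -= 0.5f; // remove chroma bias
    cg -= 0.5f;
    return RGB{ y + co - cg,   // R
                y + cg,        // G
                y - co - cg }; // B
}
```

If crunch uses the scaled variant, the blue channel would carry a per-block scale factor that has to be divided out of Co/Cg before this transform.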

Bug in write_dds

When writing uncompressed DDS files, crunch sets the DDSD_LINEARSIZE flag instead of DDSD_PITCH. This misleads some software into treating the dwPitchOrLinearSize field as the number of bytes in the top mip level instead of the row pitch (DevIL, for example, crashes when reading uncompressed DDS files generated by crunch).
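For reference, a minimal sketch of the two flags and the row-pitch computation for uncompressed surfaces; the flag values follow the Microsoft DDS header documentation, and uncompressed_pitch is a hypothetical helper, not crunch's code:

```cpp
#include <cstdint>

// DDS header flag values per the Microsoft DDS programming guide.
const uint32_t DDSD_PITCH      = 0x00000008; // dwPitchOrLinearSize = row pitch (uncompressed)
const uint32_t DDSD_LINEARSIZE = 0x00080000; // dwPitchOrLinearSize = total bytes of top mip (compressed)

// Row pitch for an uncompressed surface: bytes per scanline.
uint32_t uncompressed_pitch(uint32_t width, uint32_t bits_per_pixel) {
    return (width * bits_per_pixel + 7) / 8;
}
```

So for uncompressed output the writer should set DDSD_PITCH and store uncompressed_pitch(width, bpp), rather than DDSD_LINEARSIZE with the top-mip byte count.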

CRNLIB_ASSERT(num_threads <= cMaxThreads) fails on many-proc computers

I managed to "fix" this locally by bumping the cMaxThreads value in crn_threading_win32.h to 64.
Is it safe to bump this? Is it just a sanity check, or are other things driven by it somehow?

Note: this could be a bug in an old version of the library. I haven't tried to reproduce it on the latest main branch, since this is a very legacy project.

The offending line seems to be:
https://github.com/BinomialLLC/crunch/blob/master/crnlib/crn_image_utils.cpp#L605
It calls task_pool tp; tp.init(g_number_of_processors - 1); without going through the crn_get_max_helper_threads function.
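A defensive sketch of the clamp that would avoid the assert regardless of processor count; the names mirror crnlib's, the values are illustrative, and helper_threads is a hypothetical stand-in:

```cpp
#include <algorithm>
#include <cassert>

// Illustrative compile-time limit, standing in for crnlib's cMaxThreads.
const int cMaxThreads = 16;

// Clamp the helper-thread count to the pool's limit instead of passing
// the raw processor count straight into task_pool::init().
int helper_threads(int num_processors) {
    return std::min(num_processors - 1, cMaxThreads);
}
```

With a clamp like this in place, bumping cMaxThreads becomes an optimization question rather than a correctness one.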

Texture distribution

Hi. I would like to show you my tool, www.Photopea.com. You can use it as a viewer of .DDS files (it works even on your phone). It supports the BC1, BC2, BC3, and BC7 (DX10) compression formats.

I also have a question about the strategy of the texture distribution. I am new to this area.

First, we want textures to be small "on the wire" (on a DVD / HDD / delivered over the internet). Next, we want them to be small in GPU memory. I think it is clear that any non-GPU lossy compression (such as JPG or WebP) can achieve a much better quality/size ratio than any DXTx format (even zipped DXTx), so JPG or WebP is more suitable "on the wire".

I often see developers directly distributing textures in DXTx format (DDS files) "on the wire". The usual justification is that decoding JPG and re-encoding it into DXTx (at the moment the texture is used) would be too time-consuming (while DXTx can be copied to the GPU without any modification).

I implemented a very naive DXT1 compressor in Photopea (File - Export - DDS) and it is surprisingly fast (a 1 MPx texture takes 80 ms to encode). So I feel like compressing textures to DXTx right before sending them to the GPU makes sense. What, then, is the purpose of the DDS format? Why do developers distribute textures as DDS "on the wire" when better compression methods exist?

Ambiguous calls prevent compilation (gcc, clang)

Ambiguous calls prevent compilation (gcc, clang):

crn_vector.cpp:26:53: error: call of overloaded ‘next_pow2(size_t&)’ is ambiguous
          new_capacity = math::next_pow2(new_capacity);
                                                     ^
In file included from crn_core.h:173:0,
                 from crn_vector.cpp:3:
crn_math.h:84:21: note: candidate: crnlib::uint32 crnlib::math::next_pow2(crnlib::uint32)
       inline uint32 next_pow2(uint32 val)
                     ^~~~~~~~~
crn_math.h:95:21: note: candidate: crnlib::uint64 crnlib::math::next_pow2(crnlib::uint64)
       inline uint64 next_pow2(uint64 val)
crn_vector.cpp:25:60: error: call of overloaded ‘is_power_of_2(size_t&)’ is ambiguous
       if ((grow_hint) && (!math::is_power_of_2(new_capacity)))
                                                            ^
In file included from crn_core.h:173:0,
                 from crn_vector.cpp:3:
crn_math.h:59:19: note: candidate: bool crnlib::math::is_power_of_2(crnlib::uint32)
       inline bool is_power_of_2(uint32 x) { return x && ((x & (x - 1U)) == 0U); }
                   ^~~~~~~~~~~~~
crn_math.h:60:19: note: candidate: bool crnlib::math::is_power_of_2(crnlib::uint64)
       inline bool is_power_of_2(uint64 x) { return x && ((x & (x - 1U)) == 0U); }
                   ^~~~~~~~~~~~~

Is there a need to keep both uint32/uint64 versions at the same time?
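A sketch of the workaround that compiles on both LP64 and 32-bit targets: cast the size_t argument explicitly so only one overload matches. next_pow2 below is a stand-in reimplementation of the crnlib::math helper, not crnlib's own code:

```cpp
#include <cstdint>
#include <cstddef>
#include <cassert>

// Stand-in for crnlib::math::next_pow2(uint64): round up to a power of two.
inline uint64_t next_pow2(uint64_t v) {
    v--;
    v |= v >> 1;  v |= v >> 2;  v |= v >> 4;
    v |= v >> 8;  v |= v >> 16; v |= v >> 32;
    return v + 1;
}

// The explicit cast selects the 64-bit overload unambiguously, regardless
// of whether size_t matches uint32 or uint64 on the target platform.
uint64_t grow_capacity(size_t new_capacity) {
    return next_pow2(static_cast<uint64_t>(new_capacity));
}
```

The same cast fixes the is_power_of_2 call on the line above it in crn_vector.cpp.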

Ubuntu build: wrong compiler settings

Hi. Thanks for the awesome lib. However, on Linux (Ubuntu 16.04, G++ 6.3.0) there are two issues:

  1. Compiler flags -pg and -fomit-frame-pointer can't live together (compiler error)

  2. crn_vector.cpp lines 25/26: size_t must be explicitly cast to uint64.

Getting random polka dots on some mipmap levels

Hi, I'm having a weird issue where some colored dots get added to some mipmap levels of my textures. It appears to be pretty random: those textures were converted automatically by our scripts on a server, but if I convert them locally on my computer with the same parameters, I don't have the issue. Additionally, some textures have the problem and some don't (for example, the floor has been processed the same way and is fine).

(screenshot of the affected texture attached)

Issue with converting KTX file to DDS

Hi, I have this KTX file:
84a41266.zip

It seems to be a valid KTX2 file according to the specification: http://wiki.xentax.com/index.php/KTX_Image

I've also checked it with the ktxinfo.exe tool from the official Khronos KTX-Software (https://github.com/KhronosGroup/KTX-Software), and it prints the following info:

identifier: «KTX 20»\r\n\x1A\n
vkFormat: VK_FORMAT_UNDEFINED
typeSize: 1
pixelWidth: 2048
pixelHeight: 2048
pixelDepth: 0
layerCount: 0
faceCount: 1
levelCount: 12
supercompressionScheme: KTX_SS_ZSTD
dataFormatDescriptor.byteOffset: 0x170
dataFormatDescriptor.byteLength: 44
keyValueData.byteOffset: 0x19c
keyValueData.byteLength: 132
supercompressionGlobalData.byteOffset: 0
supercompressionGlobalData.byteLength: 0

But it can't be converted with crunch: I'm getting an "Error: Unable to read KTX file" error when trying to parse it.

Can you add support for this file format to crunch?
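For what it's worth, the two container versions already differ in their 12-byte identifiers: the ktxinfo output above shows the KTX2 magic («KTX 20»), which a KTX1-era reader will reject. A minimal sketch of telling them apart (is_ktx1/is_ktx2 are hypothetical helpers, with the magic bytes taken from the Khronos KTX specifications):

```cpp
#include <cstring>
#include <cstdint>

// KTX 1.1 identifier: 0xAB "KTX 11" 0xBB "\r\n\x1A\n"
bool is_ktx1(const uint8_t* p) {
    static const uint8_t magic[12] =
        { 0xAB, 'K', 'T', 'X', ' ', '1', '1', 0xBB, '\r', '\n', 0x1A, '\n' };
    return std::memcmp(p, magic, 12) == 0;
}

// KTX 2.0 identifier: 0xAB "KTX 20" 0xBB "\r\n\x1A\n"
bool is_ktx2(const uint8_t* p) {
    static const uint8_t magic[12] =
        { 0xAB, 'K', 'T', 'X', ' ', '2', '0', 0xBB, '\r', '\n', 0x1A, '\n' };
    return std::memcmp(p, magic, 12) == 0;
}
```

KTX2 support would also require handling the new header layout and the Zstandard supercompression this file uses (KTX_SS_ZSTD), so it is a bigger feature than just accepting the magic.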

non-square textures missing 1x1 mip level, which WebGL needs to render

I was unable to get a non-square texture with mipmaps to render in WebGL unless I squared it myself beforehand.
The non-square texture coming out of crunch looks healthy to me.
(screenshot attached, taken 2021-11-20)

The idea came from someone who had this issue with ETC textures, and their solution seems to work (for android, where ETC is most common):
google/etc2comp#31

PVRTC, of course, strictly requires square textures, so the bug is simply not possible there.
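For context, the fix in the linked etc2comp issue amounts to generating the full chain down to 1x1. For a non-square texture each level halves both dimensions (flooring at 1), so the required level count is floor(log2(max(w, h))) + 1, which can be sketched as:

```cpp
#include <cstdint>

// Number of mip levels in a complete chain ending at 1x1.
// Equivalent to floor(log2(max(w, h))) + 1 for power-of-two sizes.
uint32_t full_mip_count(uint32_t w, uint32_t h) {
    uint32_t levels = 1;
    while (w > 1 || h > 1) {
        if (w > 1) w /= 2;
        if (h > 1) h /= 2;
        ++levels;
    }
    return levels;
}
```

WebGL (like desktop GL) considers a mipmapped texture incomplete unless the chain reaches 1x1, which is why the non-square texture refuses to render.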

Building crunch on linux

Hello

I'm getting the following error when I try to build crunch on Linux using the Code::Blocks Linux workspace:

||=== Build: Debug in crnlib (compiler: GNU GCC Compiler) ===|
....
/home/brian/crunch/crnlib/../inc/crn_decomp.h|2578|error: cast from ‘void*’ to ‘crnd::ptr_bits {aka unsigned int}’ loses precision [-fpermissive]|
/home/brian/crunch/crnlib/../inc/crn_decomp.h||In function ‘const void* crnd::crnd_get_level_data(const void*, crnd::uint32, crnd::uint32, crnd::uint32*)’:|
/home/brian/crunch/crnlib/../inc/crn_decomp.h|2822|warning: converting ‘false’ to pointer type ‘const void*’ [-Wconversion-null]|
/home/brian/crunch/crnlib/../inc/crn_decomp.h|2827|warning: converting ‘false’ to pointer type ‘const void*’ [-Wconversion-null]|
/home/brian/crunch/crnlib/../inc/crn_decomp.h|2830|warning: converting ‘false’ to pointer type ‘const void*’ [-Wconversion-null]|
||=== Build failed: 3 error(s), 14 warning(s) (0 minute(s), 7 second(s)) ===|
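A sketch of the likely fix for the "cast from 'void*' loses precision" error: on 64-bit Linux a pointer doesn't fit in unsigned int, so a ptr_bits-style type should be defined as uintptr_t rather than uint32 (the typedef name here mirrors the one in the error message; this is a suggestion, not crunch's actual code):

```cpp
#include <cstdint>

// Wide enough to round-trip a pointer on both 32- and 64-bit targets,
// unlike a fixed 'unsigned int' which truncates on LP64 Linux.
typedef uintptr_t ptr_bits;

ptr_bits to_bits(const void* p) {
    return reinterpret_cast<ptr_bits>(p);
}
```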

Define for no threading

Since we already do our own threaded resource handling, it would be nice to have a switch for selecting the no-threading alternative even when CRNLIB_USE_WIN32_API is defined (which seems to affect more than just threading).

Please consider CC0 license instead

Hi! Some colleagues of mine work on VR gaming, and they were excited that you chose to dedicate this work to the public domain. Thank you!

I'm a license nerd, since I worked for years on getting Wikipedia's image database properly licensed. Unfortunately, the status of "public domain dedications" is a bit fuzzy. It might hold in the USA and EU, but worldwide, there's no such standard. Even at small companies and non-profits, there are legal departments that have to be careful about these matters.

It's true, public domain declarations are relatively low risk. But if you want to give the worldwide community maximum rights in a legally tested way, the current best option is Creative Commons Zero (CC0).

Integrating with OSS-Fuzz

Greetings crunch contributors/maintainers/enthusiasts,

We’re reaching out because your project is an important part of the open source ecosystem, and we’d like to invite you to integrate with our fuzzing service, OSS-Fuzz. OSS-Fuzz is a free fuzzing infrastructure you can use to identify security vulnerabilities and stability bugs in your project. OSS-Fuzz will:

  • Continuously run all the fuzzers you write.
  • Alert you when it finds issues.
  • Automatically close issues after they’ve been fixed by a commit.

Many widely used open source projects like OpenSSL, FFmpeg, LibreOffice, and ImageMagick are fuzzing via OSS-Fuzz, which helps them find and remediate critical issues.

Even though typical integrations can be done in < 100 LoC, we have a reward program in place which aims to recognize folks who are not just contributing to open source, but are also working hard to make it more secure.

We want to stress that anyone who meets the eligibility criteria and integrates a project with OSS-Fuzz is eligible for a reward.

To help you get started, we've attached our internal fuzzer for your project, which you are welcome to use directly or as a starting point.

If you're not interested in integrating with OSS-Fuzz, it would be helpful for us to understand why—lack of interest, lack of time, or something else—so we can better support projects like yours in the future.

If we’ve missed your question in our FAQ, feel free to reply or reach out to us at [email protected].

Thanks!

The OSS-Fuzz Team


#include <cstddef>
#include <cstdint>
#include <string>
#include "third_party/crunch/inc/crnlib.h"
#include "third_party/crunch/inc/dds_defs.h"

extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  crn_uint32 crn_size = static_cast<crn_uint32>(size);
  void *dds = crn_decompress_crn_to_dds(data, crn_size);
  if (!dds) {
    return 0;
  }
  crn_texture_desc tex_desc;

  // See crnlib.h where cCRNMaxFaces and cCRNMaxLevels are defined for details
  // on the library/file limits used within crunch.
  crn_uint32 *images[cCRNMaxFaces * cCRNMaxLevels];
  bool success = crn_decompress_dds_to_images(dds, crn_size, images, tex_desc);
  crn_free_block(dds);
  if (!success) {
    return 0;
  }
  crn_free_all_images(images, tex_desc);
  return 0;
}

Adaptive Size

hi,
The size of the endpoint and selector codebooks is calculated from the total number of blocks in the image, the quality parameter, and the image format, while the actual complexity of the image isn't evaluated or taken into account. I want to control the codebook size based on the complexity of the image (the lower the complexity, the smaller the codebooks). Could you give me some suggestions?

thanks
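As one possible starting point (purely illustrative, not crunch code): derive a complexity measure from the mean gradient magnitude of the luma plane, then scale the requested codebook size by it. Every name and threshold below is a hypothetical choice:

```cpp
#include <cstdint>
#include <cstdlib>
#include <cmath>
#include <vector>

// Mean per-pixel gradient magnitude of an 8-bit luma plane: 0 for a flat
// image, larger for busy/detailed images.
float mean_gradient(const std::vector<uint8_t>& luma, int w, int h) {
    double sum = 0.0;
    for (int y = 0; y + 1 < h; ++y)
        for (int x = 0; x + 1 < w; ++x) {
            int p = luma[y * w + x];
            sum += std::abs(p - luma[y * w + x + 1]) +   // horizontal diff
                   std::abs(p - luma[(y + 1) * w + x]);  // vertical diff
        }
    return (float)(sum / ((w - 1) * (double)(h - 1)));
}

// Scale the quality-derived codebook size by the complexity estimate:
// flat images get a quarter of the requested budget, busy images keep it all.
uint32_t scaled_codebook_size(uint32_t requested, float complexity) {
    float t = std::fmin(complexity / 32.0f, 1.0f); // 32.0f: illustrative cutoff
    uint32_t size = (uint32_t)(requested * (0.25f + 0.75f * t));
    return size < 8 ? 8u : size; // keep a small floor so encoding still works
}
```

The crunch quality parameter already feeds the codebook-size formula, so a measure like this could simply multiply that result before clustering begins.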

xGxR sets R channel to 0 instead of 255.

DXT5nm probably isn't used much anymore, but setting the red channel to 255 instead of 0 lets you use it interchangeably with DXT1, or BC5, or whatever in the same shader. Like this:

normal.x = tex.x * tex.a;
normal.y = tex.y;
normal.z = sqrt(1 - normal.x * normal.x - normal.y * normal.y);

diff --git a/crnlib/crn_image_utils.cpp b/crnlib/crn_image_utils.cpp
index 409e675..d222a08 100644
--- a/crnlib/crn_image_utils.cpp
+++ b/crnlib/crn_image_utils.cpp
@@ -1075,7 +1075,7 @@ namespace crnlib
                   }
                   case image_utils::cConversion_To_xGxR:
                   {
-                     dst.r = 0;
+                     dst.r = 255;
                      dst.g = src.g;
                      dst.b = 0;
                      dst.a = src.r;

Self assignment creating possible bug

In crn_tree_clusterizer.h there are multiple self-assignments (starting at line 421), assigning left_weight and right_weight to themselves.

Either the assignment was meant to target another variable, or it's superfluous. Either way, it should be fixed. I would send you a PR, but since I don't know what was intended, I'm leaving it untouched for now.

Compressed KTX glTypeSize

When writing a KTX DXT1 texture, the glTypeSize in the header is equal to zero.
Shouldn't it be equal to one, based on the KTX format spec (paragraph 2.4)?

[Suggestion] Drop VS-specific files and use CMake to generate them instead

Pushing Visual Studio files seems like a bad idea, as it enforces a specific IDE and keeps crunch from being used in wider projects built with other IDEs. Also, by using CMake to generate your project files, you don't have to worry about maintaining your solution as new versions of VS come out.
