Comments (7)

jmvalin commented on September 27, 2024

Keep in mind that opus_compare is not designed to be a general quality assessment tool. Rather, it is only meant to be used to evaluate the decoder output on the official test vectors (not other samples).

heshpdx commented on September 27, 2024

Thank you for your insights. I think this is still the best tool for our purposes. Could you offer some guidance on how to relax the comparison so it is a little more tolerant for my large-file comparisons? Which variables in opus_compare.c could we play with? I see TEST_WIN_SIZE and TEST_WIN_STEP - can you share some intuition on what changing those would do? If you have other suggestions, I welcome them.

jmvalin commented on September 27, 2024

If your goal is just to make it a bit more lenient, then I think the simplest thing to do would be to change the threshold to something you're OK with. Near the end of the file, you'll see the following two lines:

err=pow(err/nframes,1.0/16);
Q=100*(1-0.5*log(1+err)/log(1.13));

You could simply change the first to be

err=leniency*pow(err/nframes,1.0/16);

or something like that.
Are you trying to test just the decoder, or also the encoder? opus_compare was designed to evaluate the decoder, which has an "almost bit-exact" definition, i.e. decoders will only differ by rounding error. If you're comparing encoders, then you can expect much larger differences.
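
To make that suggestion concrete, here is a small standalone C sketch of the scoring step with a leniency factor folded in. Only the two formula lines come from opus_compare.c; the function name and the example inputs are illustrative placeholders, not part of the upstream source or of any real test run.

#include <math.h>
#include <stdio.h>

/* Mirrors the final scoring lines of opus_compare.c, scaled by a leniency
 * factor: 1.0 reproduces the stock behaviour, smaller values shrink the
 * weighted error and therefore raise the reported quality percentage. */
static double opus_quality(double err_acc, int nframes, double leniency)
{
   double err = leniency*pow(err_acc/nframes, 1.0/16);
   return 100*(1 - 0.5*log(1 + err)/log(1.13));
}

int main(void)
{
   double err_acc = 1e-6;   /* placeholder accumulated error, not from a real run */
   int nframes = 3000;      /* placeholder frame count */
   const double leniencies[] = {1.0, 0.5, 0.3};
   for (int i = 0; i < 3; i++)
      printf("leniency %.1f -> quality %.1f %%\n",
             leniencies[i], opus_quality(err_acc, nframes, leniencies[i]));
   return 0;
}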

heshpdx commented on September 27, 2024

Thank you! I will play around with this. I tried a couple of values of LENIENCY and the failure above turned into:
LENIENCY=0.5:

Test vector PASSES
Opus quality metric: 31.2 % (internal weighted error is 0.183283)

and LENIENCY=0.3:

Test vector PASSES
Opus quality metric: 57.3 % (internal weighted error is 0.109970)

This allows me to set a tolerance after listening to the output and figuring out if it is acceptable for our needs.
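
As a sanity check, those percentages follow directly from the printed internal weighted errors via the formula quoted earlier. A tiny standalone C snippet using just the two reported error values reproduces roughly the same numbers:

#include <math.h>
#include <stdio.h>

int main(void)
{
   /* Internal weighted errors reported above (already scaled by LENIENCY). */
   const double errs[] = {0.183283, 0.109970};
   for (int i = 0; i < 2; i++) {
      double Q = 100*(1 - 0.5*log(1 + errs[i])/log(1.13));
      /* Should print roughly 31.2 % and 57.3 %, matching the output above. */
      printf("err %.6f -> Opus quality metric %.1f %%\n", errs[i], Q);
   }
   return 0;
}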

For your other question, I am performing both encode and decode. I take a .wav file, encode it with opus, then take that encoded bitstream and decode it with opus. The final decoded file is what we run opus_compare on, to compare the audio from two different systems (CPUs, ISA, compiler, OS, whatever). We are looking to ensure that the same work was accomplished. Because differences between systems and the lossy nature of the algorithm can accumulate over long runs, I wanted to allow a slightly higher tolerance. I have listened to the audio for the ones that do not pass with the RFC/opus standard code, and they sound just fine to my ear. So this leniency idea is very appropriate.
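
For illustration, a minimal sketch of that round trip using the public libopus API is below. The 48 kHz stereo, 20 ms frames, and 64 kbit/s settings are assumptions for the sketch, not the benchmark's actual configuration, and the real workload presumably drives the command-line tools on whole files rather than a single frame.

/* Build: cc roundtrip.c -lopus */
#include <opus.h>
#include <stdio.h>

#define RATE        48000
#define CHANNELS    2
#define FRAME_SIZE  960           /* 20 ms at 48 kHz */
#define MAX_PACKET  1500

int main(void)
{
   int err;
   OpusEncoder *enc = opus_encoder_create(RATE, CHANNELS, OPUS_APPLICATION_AUDIO, &err);
   OpusDecoder *dec = opus_decoder_create(RATE, CHANNELS, &err);
   opus_encoder_ctl(enc, OPUS_SET_BITRATE(64000));

   opus_int16 pcm_in[FRAME_SIZE*CHANNELS] = {0};   /* one silent frame as a stand-in for .wav data */
   opus_int16 pcm_out[FRAME_SIZE*CHANNELS];
   unsigned char packet[MAX_PACKET];

   /* Encode one frame, then decode the resulting packet. */
   opus_int32 nbytes = opus_encode(enc, pcm_in, FRAME_SIZE, packet, MAX_PACKET);
   int nsamples = opus_decode(dec, packet, nbytes, pcm_out, FRAME_SIZE, 0);
   printf("encoded %d bytes, decoded %d samples per channel\n", (int)nbytes, nsamples);

   /* The decoded PCM from two systems is what opus_compare would be run on. */
   opus_encoder_destroy(enc);
   opus_decoder_destroy(dec);
   return 0;
}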

Do you have any guidance on choosing leniency values? We want to keep the bounds tight enough to catch real regressions, because SPEC CPU is used by compiler writers to ensure that new code-generation flows don't break functionality.

jmvalin commented on September 27, 2024

If you have an encoder in the loop, then you can get larger but still valid differences. As an example, try compiling one encoder with --enable-fixed-point but not the other one and see the difference. It's going to be fairly big. But then again, you might want to compare "apples to apples", in which case such a difference may not be something you want to accept. I guess my main question would be what you're trying to catch with that test. Are you trying to detect whether someone cheated and used a lower complexity setting to get better speed, or are you just trying to check that the build isn't completely broken and producing garbled audio?

heshpdx commented on September 27, 2024

Everyone will build with the same options, and everyone will run opus with the same flags. So it is the second case you mention: making sure that the math is correct enough and the audio matches within some bounds, so there is no garbled audio (which we already caught once!).

heshpdx commented on September 27, 2024

Update: changing the threshold worked for us, and allowed benchmark output verification to succeed on a myriad of systems and compilers. I acknowledge that opus_compare is very strict in its audio quality comparison, orders of magnitude stricter than what the human ear can perceive. Thank you for your technical contributions!
