
Comments (15)

asik commented on July 24, 2024

How are they not deterministic? Unless I'm mistaken, a single division or multiplication should yield the same result across any compliant implementation of the floating point standard. Determinism issues arise in intermediate results across multiple operations due to FPU extended precision, and possibly in implementations of transcendental functions.

from fixedmath.net.

Jure-BB commented on July 24, 2024

For example
public static explicit operator Fix64(double value) { return new Fix64((long)(value * ONE)); }

In the multiplication value * ONE, ONE (a long) is implicitly converted to double, so two double values are multiplied. That multiplication between doubles is not deterministic: the result can differ across CPUs because different internal precision can be used for floating-point operations.

Consequently, the resulting Fix64 value can be different on different platforms for the same input value.
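To make the step in question concrete, here is a minimal standalone sketch of what the cast does (ONE and the truncation semantics are taken from the Fix64 source quoted above; the helper name is illustrative):

```csharp
// Minimal sketch of the double -> Fix64 conversion being discussed.
// ONE is the Q31.32 scaling factor from FixedMath.Net (1L << 32).
const long ONE = 1L << 32;

long ToFix64Raw(double value)
{
    // ONE (a long) is implicitly widened to double here, so this is a
    // double * double multiplication -- the operation whose determinism
    // is being questioned.
    double scaled = value * (double)ONE;

    // The double-to-long conversion truncates toward zero and is fully
    // specified by C# (no overflow checking in this sketch).
    return (long)scaled;
}
```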


Jure-BB commented on July 24, 2024

Thanks for that information. I will have to check that claim about single divisions and multiplications.


Jure-BB commented on July 24, 2024

I am not entirely sure a single operation yields the same result everywhere. Note that I have never really studied this topic, so I may be wrong (in which case I apologize in advance for wasting your time).

This is the information I gathered.

From C# specification:

Floating-point operations may be performed with higher precision than the result type of the operation. For example, some hardware architectures support an “extended” or “long double” floating-point type with greater range and precision than the double type, and implicitly perform all floating-point operations using this higher precision type.

Eric Lippert describes floating point behavior as

The C# compiler, the jitter and the runtime all have broad latitude to give you more accurate results than are required by the specification, at any time, at a whim -- they are not required to choose to do so consistently and in fact they do not.

Then Shawn Hargreaves writes that a 32-bit float value can be converted to 64-bit or 80-bit precision when pushed into a register before a multiplication is performed.

Wouldn't the result be different when higher internal precision is used?

Shawn confuses me a little with his next sentence:

Even though the multiply and add instructions are using 64 bit internal precision, the results are immediately converted back to 32 bit format, so this does not affect the result.

And lastly, albeit a bit unrelated, this post shows how casting from decimal to double can produce different results for different decimal representations of the same number. Just to show how unpredictable it can be.


asik commented on July 24, 2024

Intermediate result precision doesn't matter in this case because the intermediate result is never used - it is immediately truncated back to a 64-bit number, which is the same as performing the operation in 64-bit precision. At least that's my understanding.

I'd be happy to change the implementation "in any case" if there was an obvious way to do the conversion without losing precision and without doing any floating-point math, but I'm not seeing it.


Jure-BB commented on July 24, 2024

I think it might be possible for the resulting truncated 64-bit number to be different in some cases, but I need to check some more to be sure.

The only way I can think of to do the conversion without relying on built-in types is with the BitConverter class, writing all the bits directly.

I may need a day or two to really research this issue (if it even is an issue).


asik commented on July 24, 2024

Thanks for your interest, if you find anything I'd be curious! Among others, I read https://blogs.msdn.microsoft.com/davidnotario/2005/08/08/clr-and-floating-point-some-answers-to-common-questions/ , which explains the floating-point model of the CLR. It seems quite clear that precision issues arise from the use of intermediate values, and that conversions to and from the internal representation should preserve the value.


Jure-BB commented on July 24, 2024

I haven't found any articles that clearly explain this issue without too many highly technical terms and equations, so I asked a question on Stack Overflow. The answer was posted before I edited the question, so the 'yes' part is a little confusing. Please also check the comments under the answer.

From what I gathered, the extended precision has to have 2 * significandBits + 2 bits of significand for the resulting value to be completely identical to the value calculated with non-extended precision, for the elementary operations +, -, *, / (or 2 * significandBits + 3 if we also want to calculate sqrt).

  • 32-bit float has a 24-bit significand/mantissa (23 bits explicitly stored)
  • 64-bit double has a 53-bit significand/mantissa (52 bits explicitly stored)
  • x87 80-bit extended precision has a 64-bit significand/mantissa (integer bit stored explicitly)

So if I understand this correctly, we need at least a 24 * 2 + 2 = 50-bit significand if we want to perform 32-bit float operations in higher precision. Both the 64-bit double with its 53-bit significand and x87 80-bit extended precision with its 64-bit significand are sufficient.
But for double, we need a 53 * 2 + 2 = 108-bit significand. So when a 64-bit double calculation is done in x87 80-bit extended precision, there is no guarantee that the truncated result will be completely identical to the result of the calculation done in 64 bits (we need a 108-bit significand, but x87 80-bit extended precision only has a 64-bit significand).
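The arithmetic above can be checked mechanically; a small worked example (the significand widths here include the implicit leading bit, which is stored explicitly only in the x87 format):

```csharp
// The 2p + 2 rule from the Stack Overflow answer, where p is the significand
// width in bits (including the implicit leading bit for float and double).
int floatP = 24;    // IEEE 754 binary32: 23 stored bits + 1 implicit
int doubleP = 53;   // IEEE 754 binary64: 52 stored bits + 1 implicit
int x87P = 64;      // x87 80-bit extended: integer bit is stored explicitly

int neededForFloat = 2 * floatP + 2;    // 50
int neededForDouble = 2 * doubleP + 2;  // 108

// float operations are safe in both double and x87 extended precision...
bool floatSafeInDouble = doubleP >= neededForFloat;  // 53 >= 50: true
bool floatSafeInX87 = x87P >= neededForFloat;        // 64 >= 50: true
// ...but double operations are not safe in x87 extended precision.
bool doubleSafeInX87 = x87P >= neededForDouble;      // 64 >= 108: false
```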

I think the safest way would be to convert using the BitConverter class. As there is probably a performance penalty, maybe it would be good to keep the current implementation as separate ToDoubleFast and FromDoubleFast methods. Here is an example of a float-to-fixed-point conversion using BitConverter.
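For illustration only, here is a hypothetical sketch (not the linked example) of what such a bit-level float-to-Q31.32 conversion might look like, assuming BitConverter.SingleToInt32Bits is available (.NET Core 2.0+; BitConverter.GetBytes works on older frameworks):

```csharp
using System;

// Hypothetical sketch: float -> Q31.32 raw value using only bit manipulation.
// Subnormals are flushed to zero; infinities, NaN and overflow are not handled.
long FloatToQ31_32(float value)
{
    int bits = BitConverter.SingleToInt32Bits(value);
    int sign = bits >> 31;                  // 0 for positive, -1 for negative
    int exponent = (bits >> 23) & 0xFF;     // biased exponent
    long mantissa = bits & 0x7FFFFF;        // 23 stored significand bits

    if (exponent == 0) return 0;            // zero and subnormals
    mantissa |= 1L << 23;                   // restore the implicit leading bit

    // mantissa currently equals |value| * 2^23 / 2^(exponent - 127);
    // shift it so that it equals |value| * 2^32 (the Q31.32 raw format).
    int shift = (exponent - 127) - 23 + 32;
    long magnitude = shift >= 0 ? mantissa << shift : mantissa >> -shift;

    return sign == 0 ? magnitude : -magnitude;
}
```

Since every step is integer arithmetic, the result is bit-identical across platforms.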

I also posted a more direct question on Stack Overflow, but I did not get a satisfactory answer to that one.


asik commented on July 24, 2024

Thanks for the research! Although this does appear to be right in general, one of the Stack Overflow links you gave also has this reply by Stephen Canon (and CodesInChaos basically agreed in the comments):

1 << FRACTION_SHIFT is an exact power of two, and therefore represented exactly in floating point.

Multiplication by an exact power of two is exact (unless overflow/underflow occurs, but in that case there isn't a meaningful fixed-point representation anyway, so you don't care). So the only possible source of rounding is the conversion to integer, which is fully specified by C#; so not only is the result deterministic, but you will get portable identical results.

This seems obvious: floating-point numbers use a base-2 mantissa + exponent representation, so multiplying by a power of 2 is just an addition to the exponent.
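Under that reasoning, the claim is easy to sanity-check in code: scaling by ONE should be indistinguishable from a pure exponent adjustment via Math.ScaleB (.NET Core 3.0+):

```csharp
using System;

// Multiplying by an exact power of two only adjusts the exponent, so the
// only rounding step in (long)(value * ONE) is the final conversion to
// integer, which C# fully specifies (truncation toward zero).
const long ONE = 1L << 32;   // Q31.32 scaling factor, an exact power of two

double value = 1.1;          // not exactly representable as a double, but that
                             // doesn't matter: we scale whatever double we got

long viaMultiply = (long)(value * ONE);
long viaScaleB = (long)Math.ScaleB(value, 32);   // pure exponent adjustment

Console.WriteLine(viaMultiply == viaScaleB);     // True: the multiply is exact
```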

I have a strong dislike for replacing obvious and fast code with tricky and slow code. If I get time I'll check this on my end, but I'd like to see a concrete counter-example to my initial assumption that this operation is deterministic.

One thing this got me thinking about, though, is that this cast operator doesn't check for overflow, and the conv.i8 it emits is unspecified in case of overflow. I only wrote happy-path tests for it, so in effect the behavior of casts with values outside the range is currently unspecified. This is probably bad. I'm wondering whether the desired behavior would be to throw OverflowException or just saturate to Min/MaxValue. Probably the former.
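The two candidate behaviors could look roughly like this (a sketch; ToRawChecked and ToRawSaturating are hypothetical helper names, not FixedMath.Net API):

```csharp
// Sketch of the two overflow policies for the double -> Fix64 cast.
// ToRawChecked and ToRawSaturating are hypothetical helper names.
const long ONE = 1L << 32;

long ToRawChecked(double value)
{
    // In a 'checked' context the double-to-long conversion throws
    // OverflowException for out-of-range values instead of being unspecified.
    return checked((long)(value * ONE));
}

long ToRawSaturating(double value)
{
    double scaled = value * ONE;
    if (scaled >= long.MaxValue) return long.MaxValue;  // clamp high (NaN not handled)
    if (scaled <= long.MinValue) return long.MinValue;  // clamp low
    return (long)scaled;
}
```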


asik commented on July 24, 2024

On a related note, if you can ship your application as an x64 .NET Core application, bundling the runtime with it, I think you can just use double and float to your heart's content and not worry about determinism. x64 emits SSE fp code, which doesn't have the rounding problems x87 has, and bundling the runtime should ensure two runs of the same program are compiled in exactly the same way. You might have to verify that it emits the same fp code on different platforms, but I don't see why it wouldn't. At that point the only remaining problem is transcendental functions, which could have different implementations on different platforms; if you really need portability across different OSes, maybe the solution is to write them in portable C, cross-compile, and P/Invoke. Slow, but it should do the trick.

If I were writing a lockstep cross-platform multiplayer game that required determinism, this is probably the route I would take rather than using fixed-point math (which is problematic: saturating arithmetic does all sorts of weird things to your equations, e.g. x * 2 / 2 = x doesn't hold if x * 2 saturates). I wrote this library mainly out of curiosity, but I don't think I would actually use it.


Jure-BB commented on July 24, 2024

I don't think I can ship it as x64-only. I would really have to check OS statistics, and I think there are still a lot of 32-bit mobile devices. I would also need to check whether Mono emits the same code.

It is basically a turn-based game with some real-time events. I only need determinism for replays between devices and for server replays that validate the score. No multiplayer.

Performance is not a problem, as the game logic that needs to be deterministic is not demanding. Values are well within the range of the Q31.32 fixed type, so saturation is not a problem in my case. From what I have read so far, writing fully deterministic floating-point code is no easy task even in C, and I haven't really used any unmanaged language. The few blog posts I read on the topic usually recommend fixed-point math.

That special case for multiplication does look interesting, but it still looks somewhat suspicious based on some comments in that thread. My knowledge of the internal workings of the FPU and FP math is pretty basic, so I don't know how to verify that special case myself; I will try to get some more answers on this. What about division, do you think the same special case applies? I fully agree that simpler and more performant code is much preferred; I just need to be sure it is completely deterministic across different platforms. There is not much information about this topic, and some of it is conflicting, which is really confusing. I will try to research it some more.

Thank you for really good support on this issue!


asik commented on July 24, 2024

What about division, do you think the same special case applies?

So, a floating-point value is a base-2 number. A number like 2^32 is represented as (exponent: 32, mantissa: 1). If you multiply this with another float (exp, man), the result is (exp + 32, man * 1); for division, the result is (exp - 32, man * 1). This is an exact result, so there is no rounding, and if there is no rounding, the rounding mode doesn't matter. To put it differently: multiplying the mantissa by 1 doesn't change the mantissa, so it doesn't matter how many bits it has.
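This is easy to see by inspecting the bits directly; a small demonstration using BitConverter.DoubleToInt64Bits:

```csharp
using System;

// Multiplying a double by 2^32 leaves the 52 stored mantissa bits untouched
// and simply adds 32 to the biased exponent -- no rounding occurs.
double x = 1.5;
double scaled = x * 4294967296.0;    // x * 2^32

long xBits = BitConverter.DoubleToInt64Bits(x);
long scaledBits = BitConverter.DoubleToInt64Bits(scaled);

long mantissaMask = (1L << 52) - 1;
long xExp = (xBits >> 52) & 0x7FF;        // biased exponent of x
long scaledExp = (scaledBits >> 52) & 0x7FF;

Console.WriteLine((xBits & mantissaMask) == (scaledBits & mantissaMask)); // True
Console.WriteLine(scaledExp - xExp);                                      // 32
```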

Now that I think about it, munging directly with the floating-point bits will not be portable. If some CPUs (especially ARM CPUs on mobile) use big-endian floats, I would need to somehow detect the endianness to do it correctly.
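If it ever came to that, BitConverter does expose the machine byte order, so bit-level code could normalize it (a sketch):

```csharp
using System;

// BitConverter.GetBytes returns bytes in machine order; IsLittleEndian lets
// conversion code normalize to one byte order before interpreting them.
byte[] bytes = BitConverter.GetBytes(1.0d);   // 1.0 == 0x3FF0000000000000
if (!BitConverter.IsLittleEndian)
{
    Array.Reverse(bytes);   // normalize big-endian machines to little-endian
}
// 'bytes' now holds 1.0 in little-endian order: 00 00 00 00 00 00 F0 3F
```

On platforms where integer and floating-point endianness match (virtually all of them), BitConverter.DoubleToInt64Bits sidesteps the issue entirely, since it returns a numeric long rather than raw bytes.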


Jure-BB commented on July 24, 2024

Ah, I think I understand. If a floating-point number is multiplied or divided by a power of two, the mantissa doesn't change; only the exponent changes (the point moves), because both numbers are base-2. Great explanation of this math trick! That was confusing me all this time.

You are also right about the little/big-endian issue for BitConverter-based conversions.

Thank you again! I think this issue can be closed :)


asik commented on July 24, 2024

All right, let me know if you run into any other issues. I've created another one just so I remember to fix the overflow behavior on those operators. #9


jpgordon00 commented on July 24, 2024

I'm a layman developer... do FP.ToFloat and FP.FromFloat preserve determinism?

