• Do we need floating point numbers?

    From Wayne morellini@21:1/5 to All on Tue Sep 6 00:01:36 2022
    One of the design principles I apply is getting rid of floating point numbers where practical (often where possible), and of heavy maths in general. One wastes a great deal of chip space, the other a great deal of processing time. I prefer to design
    things procedurally with integers; a lot of waste simply drips away. Trading processing for a few lookup tables helps a lot too.
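
    A minimal sketch of that trade, under illustrative assumptions only (a 256-entry table, Q15-style integer amplitude): the sine values are computed once at start-up, so the per-sample work is just an integer table lookup instead of a sin() call.

        /* Trading processing for a table: integer sine lookup instead of
           calling sin() per sample.  All the floating point happens once,
           at start-up, when the table is filled. */
        #include <stdio.h>
        #include <math.h>

        #define STEPS 256              /* table covers one full turn */
        #define AMP   32767            /* Q15-style integer amplitude */

        static int sine_tab[STEPS];

        static void init_table(void)
        {
            const double two_pi = 6.283185307179586;
            for (int i = 0; i < STEPS; i++)
                sine_tab[i] = (int)lround(AMP * sin(two_pi * i / STEPS));
        }

        static int isin(unsigned angle)   /* angle in 1/256ths of a turn */
        {
            return sine_tab[angle % STEPS];
        }

        int main(void)
        {
            init_table();
            printf("%d %d %d\n", isin(0), isin(64), isin(128));  /* 0 32767 0 */
            return 0;
        }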

    The only real help with floating point is that it covers many orders of magnitude. A big floating point number can still be matched by a bigger integer. Where floating point carries the magnitude in the exponent, reducing the digits stored, isn't that literally the same as chopping off some bits of precision? Which makes you think: just factor the code so integers can
    be used instead, in a faster way. Everything might be formatted for floating point today, but couldn't it be done better?
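
    A minimal sketch of that kind of factoring, using scaled integers (fixed point) in place of floats. The 1/256 scale factor is an illustrative assumption, not anything fixed:

        /* Scaled-integer (fixed-point) sketch: values are stored as integer
           multiples of 1/256, so fractional quantities stay in plain ints.
           The 64-bit intermediate keeps the multiply from overflowing. */
        #include <stdint.h>
        #include <stdio.h>

        #define SCALE 256                 /* 8 fractional bits */
        typedef int32_t fix_t;            /* holds value * SCALE */

        static fix_t  fix_from_int(int32_t n) { return n * SCALE; }
        static fix_t  fix_mul(fix_t a, fix_t b)
        {
            return (fix_t)(((int64_t)a * b) / SCALE);
        }
        static double fix_to_double(fix_t a)  { return (double)a / SCALE; }

        int main(void)
        {
            fix_t a = fix_from_int(3) + SCALE / 2;          /* 3.5  */
            fix_t b = fix_from_int(2) + SCALE / 4;          /* 2.25 */
            printf("%g\n", fix_to_double(fix_mul(a, b)));   /* prints 7.875 */
            return 0;
        }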

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From dxforth@21:1/5 to Wayne morellini on Tue Sep 6 19:00:38 2022
    On 6/09/2022 5:01 pm, Wayne morellini wrote:
    Where floating point allows the magnitude to be referenced, reducing digits, well isn't that literally the same as chopping off some bits of precision.

    A 32-bit float has a dynamic range 10^80
    A 32-bit integer has a dynamic range 10^9
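
    For reference, a quick way to check where those figures come from, assuming IEEE 754 single precision and a 32-bit int (FLT_TRUE_MIN is C11; the exact float figure depends on whether denormals are counted, but it lands in the region of 10^80 either way):

        #include <stdio.h>
        #include <float.h>
        #include <limits.h>
        #include <math.h>

        int main(void)
        {
            /* dynamic range = largest / smallest representable magnitude */
            printf("float, normals only:   ~10^%.0f\n",
                   log10((double)FLT_MAX / (double)FLT_MIN));       /* ~10^76 */
            printf("float, with denormals: ~10^%.0f\n",
                   log10((double)FLT_MAX / (double)FLT_TRUE_MIN));  /* ~10^83 */
            printf("32-bit int:            ~10^%.0f\n",
                   log10((double)INT_MAX));                         /* ~10^9  */
            return 0;
        }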

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Wayne morellini@21:1/5 to dxforth on Tue Sep 6 03:28:05 2022
    On Tuesday, September 6, 2022 at 7:00:42 PM UTC+10, dxforth wrote:
    On 6/09/2022 5:01 pm, Wayne morellini wrote:
    Where floating point allows the magnitude to be referenced, reducing digits, well isn't that literally the same as chopping off some bits of precision.
    A 32-bit float has a dynamic range 10^80
    A 32-bit integer has a dynamic range 10^9

    But still only 32 bits of values.

    So, 10^80 is equivalent to under 270 bits from minimum to maximum value (log2 of 10^80 is about 80 x 3.32, roughly 266 bits).

    However, 10 bits or less is usable for many things. I should have specified "needed on processors most of the time". A 270-bit integer can be used, or floating
    point can be emulated when it really is needed. But the point is to design around it where you can, then emulate otherwise. Anybody with
    applications needing floating point can buy something with an FPU in it.
    Even then, emulation through a mass array might not be bad.

    Ok, some will need it, and most might need it a bit sometimes.

    I am going to illustrate something that steered me onto the correct path.

    In an Apple II magazine decades ago, a guy illustrated using integers to
    draw a circle through procedural stepping, much faster than doing the
    floating point calculations the standard way. The normal
    programmer would calculate the floating point position of each new
    pixel on the circle, where this guy calculated the relative offset
    from the last pixel using simple integer steps. The heavy floating
    point was totally unneeded, but some presume that is the only way it can be
    done. One might say an FPU will do it in the same amount of time these
    days. But the example shows the FPU code costs far more, and
    that's an indicator that the hardware to do it as quickly has to be far more complex, which indeed it is. A good FPU might be worth
    tens of thousands of integer addition circuits, or thousands of the CPUs we have around here. So it's not a saving when you need performance from other things more than from the FPU. So, does it make sense to rely on floating point and FPU hardware when you don't need it?
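
    A minimal sketch of the integer stepping idea, in the spirit of that article (this is essentially the midpoint/Bresenham circle algorithm; the original Apple II piece isn't to hand, so treat the details as an assumption). plot() is a hypothetical pixel routine standing in for whatever the hardware provides; the "standard way" would instead evaluate sin/cos, or a square root, per pixel.

        /* Integer-only circle drawing: one octant is stepped with an error
           term updated by additions, then mirrored eight ways.  No floats,
           no trig, no square roots in the loop. */
        #include <stdio.h>

        static void plot(int x, int y)     /* hypothetical pixel routine */
        {
            printf("%d %d\n", x, y);
        }

        static void circle(int cx, int cy, int r)
        {
            int x = r, y = 0;
            int err = 1 - r;               /* decision variable */

            while (x >= y) {
                plot(cx + x, cy + y);  plot(cx - x, cy + y);
                plot(cx + x, cy - y);  plot(cx - x, cy - y);
                plot(cx + y, cy + x);  plot(cx - y, cy + x);
                plot(cx + y, cy - x);  plot(cx - y, cy - x);

                y++;
                if (err < 0)
                    err += 2 * y + 1;       /* keep the same x column */
                else {
                    x--;
                    err += 2 * (y - x) + 1; /* step inwards as well */
                }
            }
        }

        int main(void)
        {
            circle(0, 0, 10);
            return 0;
        }

    Every step in the loop is a few integer additions and compares, which is the whole of the saving being described.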


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)