• Precision Tail-off?

    From Stephen Tucker@21:1/5 to All on Tue Feb 14 07:09:37 2023
    Hi,

    I have just produced the following log in IDLE (admittedly, in Python
    2.7.10 and, yes I know that it has been superseded).

    It appears to show a precision tail-off as the supplied float gets bigger.

    I have two questions:
    1. Is there a straightforward explanation for this or is it a bug?
    2. Is the same behaviour exhibited in Python 3.x?

    For your information, the first 20 significant figures of the cube root in question are:
    49793385921817447440

    Stephen Tucker.
    ----------------------------------------------
    123.456789 ** (1.0 / 3.0)
    4.979338592181744
    123456.789 ** (1.0 / 3.0)
    49.79338592181744
    123456789. ** (1.0 / 3.0)
    497.9338592181743
    123456789000. ** (1.0 / 3.0)
    4979.338592181743
    123456789000000. ** (1.0 / 3.0)
    49793.38592181742
    123456789000000000. ** (1.0 / 3.0)
    497933.8592181741
    123456789000000000000. ** (1.0 / 3.0)
    4979338.59218174
    123456789000000000000000. ** (1.0 / 3.0)
    49793385.9218174
    123456789000000000000000000. ** (1.0 / 3.0)
    497933859.2181739
    123456789000000000000000000000. ** (1.0 / 3.0)
    4979338592.181739
    123456789000000000000000000000000. ** (1.0 / 3.0)
    49793385921.81738
    123456789000000000000000000000000000. ** (1.0 / 3.0)
    497933859218.1737
    123456789000000000000000000000000000000. ** (1.0 / 3.0)
    4979338592181.736
    123456789000000000000000000000000000000000. ** (1.0 / 3.0)
    49793385921817.36
    123456789000000000000000000000000000000000000. ** (1.0 / 3.0)
    497933859218173.56
    123456789000000000000000000000000000000000000000. ** (1.0 / 3.0)
    4979338592181735.0
    123456789000000000000000000000000000000000000000000. ** (1.0 / 3.0)
    4.979338592181734e+16
    123456789000000000000000000000000000000000000000000000. ** (1.0 / 3.0)
    4.979338592181734e+17
    123456789000000000000000000000000000000000000000000000000. ** (1.0 / 3.0)
    4.979338592181733e+18
    123456789000000000000000000000000000000000000000000000000000. ** (1.0 / 3.0)
    4.979338592181732e+19
    123456789000000000000000000000000000000000000000000000000000000. ** (1.0 / 3.0)
    4.9793385921817313e+20
    ----------------------------------------------

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From elvis-85792@notatla.org.uk@21:1/5 to Stephen Tucker on Tue Feb 14 10:49:05 2023
    On 2023-02-14, Stephen Tucker <stephen_tucker@sil.org> wrote:
    123456789000000000000000000000000000000000000000000000000000000. ** (1.0 / 3.0)
    4.9793385921817313e+20
    ----------------------------------------------

    What do you get for this?
    int ( 1.0 * 123456789000000000000000000000000000000000000)
    I think you are exceeding what you can expect from your floating
    point storage before you've done the cube root.
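    (Illustrative aside, not part of the original post; plain Python 3. Multiplying
    by 1.0 converts the integer to a binary double, which keeps only 53 bits, about
    16 significant decimal digits, of it.)

    n = 123456789 * 10 ** 36      # an integer of the same size as the one above
    print(int(1.0 * n))           # the nearest binary double, written back out as an integer
    print(int(1.0 * n) == n)      # False: the trailing digits are already lost before any cube root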

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Oscar Benjamin@21:1/5 to Stephen Tucker on Tue Feb 14 11:17:20 2023
    On Tue, 14 Feb 2023 at 07:12, Stephen Tucker <stephen_tucker@sil.org> wrote:

    Hi,

    I have just produced the following log in IDLE (admittedly, in Python
    2.7.10 and, yes I know that it has been superseded).

    It appears to show a precision tail-off as the supplied float gets bigger.

    I have two questions:
    1. Is there a straightforward explanation for this or is it a bug?
    2. Is the same behaviour exhibited in Python 3.x?

    For your information, the first 20 significant figures of the cube root in question are:
    49793385921817447440

    Stephen Tucker.
    ----------------------------------------------
    123.456789 ** (1.0 / 3.0)
    4.979338592181744
    123456789000000000000000000000000000000000. ** (1.0 / 3.0)
    49793385921817.36

    You need to be aware that 1.0/3.0 is a float that is not exactly equal
    to 1/3 and likewise the other float cannot have as many accurate
    digits as is suggested by the number of zeros shown. Therefore you
    should compare against what the expression exactly evaluates to for
    the numbers you really have rather than against the exact cube root of
    the number that you intended. Here I will do this with SymPy and calculate many more
    digits than are needed. First here is the exact cube root:

    In [29]: from sympy import *

    In [30]: n = 123456789000000000000000000000000000000000

    In [31]: cbrt(n).evalf(50)
    Out[31]: 49793385921817.447440261250171604380899353243631762

    So that's 50 digits of the exact cube root of the exact number and the
    first 20 match what you showed. However in your calculation you use
    floats so the exact expression that you evaluate is:

    In [32]: e = Pow(Rational(float(n)), Rational(1.0/3.0), evaluate=False)

    In [33]: print(e)
    123456788999999998830049821836693930508288**(6004799503160661/18014398509481984)

    Neither base nor exponent is really the number that you intended it to
    be. The first 50 decimal digits of this number are:

    In [34]: e.evalf(50)
    Out[34]: 49793385921817.360106660998131166304296436896582873

    All of the digits in the calculation you showed match with the first
    digits given here. The output from the float calculation is correct
    given what the inputs actually are and also the available precision
    for 64 bit floats (53 bits or ~16 decimal digits).

    The reason that the results get further from your expectations as the
    base gets larger is because the exponent is always less than 1/3 and
    the relative effect of that difference is magnified for larger bases.
    You can see this in a series expansion of a^x around x=1/3. Using
    SymPy again:

    In [37]: a, x = symbols('a, x')

    In [38]: print(series(a**x, x, Rational(1, 3), 2))
    a**(1/3) + a**(1/3)*(x - 1/3)*log(a) + O((x - 1/3)**2, (x, 1/3))

    You can see that the leading relative error term from x being not
    quite equal to 1/3 is proportional to the log of the base. You should
    expect this difference to grow approximately linearly as you keep
    adding more zeros in the base.
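    (Illustrative aside, not part of the original message: the gap between the float
    1.0/3.0 and the true 1/3, and the relative error it causes, can be checked with
    Python 3's fractions module.)

    import math
    from fractions import Fraction

    e = 1.0 / 3.0
    print(Fraction(e))                        # 6004799503160661/18014398509481984, just below 1/3
    shortfall = Fraction(1, 3) - Fraction(e)  # the exact amount by which e falls short of 1/3

    a = 123456789 * 10 ** 33
    # Leading term of the series above: the relative error is roughly log(a) * (1/3 - e)
    print(math.log(a) * float(shortfall))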

    --
    Oscar

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Weatherby,Gerard@21:1/5 to All on Tue Feb 14 12:58:42 2023
    Use Python3

    Use the decimal module: https://docs.python.org/3/library/decimal.html


    From: Python-list <python-list-bounces+gweatherby=uchc.edu@python.org> on behalf of Stephen Tucker <stephen_tucker@sil.org>
    Date: Tuesday, February 14, 2023 at 2:11 AM
    To: Python <python-list@python.org>
    Subject: Precision Tail-off?

    Hi,

    I have just produced the following log in IDLE (admittedly, in Python
    2.7.10 and, yes I know that it has been superseded).

    It appears to show a precision tail-off as the supplied float gets bigger.

    I have two questions:
    1. Is there a straightforward explanation for this or is it a bug?
    2. Is the same behaviour exhibited in Python 3.x?

    For your information, the first 20 significant figures of the cube root in question are:
    49793385921817447440

    Stephen Tucker.
    ----------------------------------------------
    123.456789 ** (1.0 / 3.0)
    4.979338592181744
    123456.789 ** (1.0 / 3.0)
    49.79338592181744
    123456789. ** (1.0 / 3.0)
    497.9338592181743
    123456789000. ** (1.0 / 3.0)
    4979.338592181743
    123456789000000. ** (1.0 / 3.0)
    49793.38592181742
    123456789000000000. ** (1.0 / 3.0)
    497933.8592181741
    123456789000000000000. ** (1.0 / 3.0)
    4979338.59218174
    123456789000000000000000. ** (1.0 / 3.0)
    49793385.9218174
    123456789000000000000000000. ** (1.0 / 3.0)
    497933859.2181739
    123456789000000000000000000000. ** (1.0 / 3.0)
    4979338592.181739
    123456789000000000000000000000000. ** (1.0 / 3.0)
    49793385921.81738
    123456789000000000000000000000000000. ** (1.0 / 3.0)
    497933859218.1737
    123456789000000000000000000000000000000. ** (1.0 / 3.0)
    4979338592181.736
    123456789000000000000000000000000000000000. ** (1.0 / 3.0)
    49793385921817.36
    123456789000000000000000000000000000000000000. ** (1.0 / 3.0)
    497933859218173.56
    123456789000000000000000000000000000000000000000. ** (1.0 / 3.0)
    4979338592181735.0
    123456789000000000000000000000000000000000000000000. ** (1.0 / 3.0)
    4.979338592181734e+16
    123456789000000000000000000000000000000000000000000000. ** (1.0 / 3.0)
    4.979338592181734e+17
    123456789000000000000000000000000000000000000000000000000. ** (1.0 / 3.0)
    4.979338592181733e+18
    123456789000000000000000000000000000000000000000000000000000. ** (1.0 / 3.0)
    4.979338592181732e+19
    123456789000000000000000000000000000000000000000000000000000000. ** (1.0 / 3.0)
    4.9793385921817313e+20
    ----------------------------------------------
    --
    https://mail.python.org/mailman/listinfo/python-list

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Ram@21:1/5 to Stephen Tucker on Tue Feb 14 16:46:54 2023
    Stephen Tucker <stephen_tucker@sil.org> writes:
    123456789000000000000000000000000000000000000000000. ** (1.0 / 3.0)
    4.979338592181734e+16

    Perhaps you have prior knowledge of SQL?
    There is something like this in some dialects there.

    |
    |SELECT 1 / 7;
    |+--------+
    || 1 / 7 |
    |+--------+
    || 0.1429 |
    |+--------+
    |
    |SELECT 1.00000000000000000000000000 / 7;
    |+----------------------------------+
    || 1.00000000000000000000000000 / 7 | decimal(27,26) / int(1)
    |+----------------------------------+
    || 0.142857142857142857142857142857 | decimal(31,30)
    |+----------------------------------+
    |
    from an SQL console with type annotations added by me

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael Torrie@21:1/5 to Stephen Tucker on Tue Feb 14 15:50:41 2023
    On 2/14/23 00:09, Stephen Tucker wrote:
    I have two questions:
    1. Is there a straightforward explanation for this or is it a bug?
    To you 1/3 may be an exact fraction, and the definition of raising a
    number to that power means a cube root which also has an exact answer,
    but to the computer, 1/3 is 0.333333333333333... repeating in decimal,
    and a non-terminating repeating expansion in binary as well. Even rational
    numbers like 0.2, which are precise and exact in decimal, are not exact in
    binary: 0.2 is 0.001100110011... repeating on and on forever.
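    (Illustrative aside, not in the original message: Python 3 can display the exact
    value that the binary double for 0.2 actually stores.)

    from decimal import Decimal
    print(Decimal(0.2))   # the exact double nearest 0.2; note the non-zero trailing digits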

    IEEE floating point has very well known limitations. All languages that
    use IEEE floating point will be subject to these limitations. So it's
    not a bug in the sense that all languages will exhibit this behavior.

    2. Is the same behaviour exhibited in Python 3.x?
    Yes. And Java, C++, and any other language that uses IEEE floating point.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Weatherby,Gerard@21:1/5 to Stephen Tucker on Wed Feb 15 14:36:24 2023
    All languages that use IEEE floating point will indeed have the same limitations, but it is not true that Python3 only uses IEEE floating point. Using the Decimal class and cribbing a method from StackOverflow, https://stackoverflow.com/questions/47191533/how-to-efficiently-calculate-cube-roots-using-decimal-in-python


    import decimal
    from decimal import Decimal

    decimal.getcontext().prec = 1_000_000


    def cube_root(A: Decimal):
        guess = (A - Decimal(1)) / Decimal(3)
        x0 = (Decimal(2) * guess + A / Decimal(guess * guess)) / Decimal(3.0)
        while 1:
            xn = (Decimal(2) * x0 + A / Decimal(x0 * x0)) / Decimal(3.0)
            if xn == x0:
                break
            x0 = xn
        return xn


    float_root = 5 ** (1.0 / 3)
    float_r3 = float_root * float_root * float_root
    print(5 - float_r3)
    five = Decimal(5.0)
    r = cube_root(five)
    decimal_r3 = r * r * r
    print(5 - decimal_r3)

    ----
    8.881784197001252e-16
    1E-999999

    From: Python-list <python-list-bounces+gweatherby=uchc.edu@python.org> on behalf of Michael Torrie <torriem@gmail.com>
    Date: Tuesday, February 14, 2023 at 5:52 PM
    To: python-list@python.org <python-list@python.org>
    Subject: Re: Precision Tail-off?

    On 2/14/23 00:09, Stephen Tucker wrote:
    I have two questions:
    1. Is there a straightforward explanation for this or is it a bug?
    To you 1/3 may be an exact fraction, and the definition of raising a
    number to that power means a cube root which also has an exact answer,
    but to the computer, 1/3 is 0.333333333333333... repeating in decimal,
    and a non-terminating repeating expansion in binary as well. Even rational
    numbers like 0.2, which are precise and exact in decimal, are not exact in
    binary: 0.2 is 0.001100110011... repeating on and on forever.

    IEEE floating point has very well known limitations. All languages that
    use IEEE floating point will be subject to these limitations. So it's
    not a bug in the sense that all languages will exhibit this behavior.

    2. Is the same behaviour exhibited in Python 3.x?
    Yes. And Java, C++, and any other language that uses IEEE floating point.

    --
    https://mail.python.org/mailman/listinfo/python-list

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Peter Pearson@21:1/5 to Oscar Benjamin on Thu Feb 16 15:40:19 2023
    On Tue, 14 Feb 2023 11:17:20 +0000, Oscar Benjamin wrote:
    On Tue, 14 Feb 2023 at 07:12, Stephen Tucker <stephen_tucker@sil.org> wrote:
    [snip]
    I have just produced the following log in IDLE (admittedly, in Python
    2.7.10 and, yes I know that it has been superseded).

    It appears to show a precision tail-off as the supplied float gets bigger. [snip]

    For your information, the first 20 significant figures of the cube root in question are:
    49793385921817447440

    Stephen Tucker.
    ----------------------------------------------
    123.456789 ** (1.0 / 3.0)
    4.979338592181744
    123456789000000000000000000000000000000000. ** (1.0 / 3.0)
    49793385921817.36

    You need to be aware that 1.0/3.0 is a float that is not exactly equal
    to 1/3 ...
    [snip]
    SymPy again:

    In [37]: a, x = symbols('a, x')

    In [38]: print(series(a**x, x, Rational(1, 3), 2))
    a**(1/3) + a**(1/3)*(x - 1/3)*log(a) + O((x - 1/3)**2, (x, 1/3))

    You can see that the leading relative error term from x being not
    quite equal to 1/3 is proportional to the log of the base. You should
    expect this difference to grow approximately linearly as you keep
    adding more zeros in the base.

    Marvelous. Thank you.


    --
    To email me, substitute nowhere->runbox, invalid->com.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stephen Tucker@21:1/5 to All on Fri Feb 17 11:01:06 2023
    As a follow-up to my previous message, I have just produced the following
    log on IDLE, for your information:
    ------------------------------
    math.e ** (math.log (123456789000000000000000000000000000000000000000000) / 3)
    4.979338592181741e+16
    10 ** (math.log10 (123456789000000000000000000000000000000000000000000) / 3)
    4.979338592181736e+16
    123456789000000000000000000000000000000000000000000 ** (1.0 / 3.0)
    4.979338592181734e+16
    123456789e42 ** (1.0 / 3.0)
    4.979338592181734e+16
    ------------------------------

    Stephen Tucker.


    On Fri, Feb 17, 2023 at 10:27 AM Stephen Tucker <stephen_tucker@sil.org>
    wrote:

    Thanks, one and all, for your responses.

    This is a hugely controversial claim, I know, but I would consider this behaviour to be a serious deficiency in the IEEE standard.

    Consider an integer N consisting of a finitely-long string of digits in
    base 10.

    Consider the infinitely-precise cube root of N (yes I know that it could never be computed unless N is the cube of an integer, but this is a mathematical argument, not a computational one), also in base 10. Let's
    call it RootN.

    Now consider appending three zeroes to the right-hand end of N (let's call
    it NZZZ) and NZZZ's infinitely-precise cube root (RootNZZZ).

    The *only* difference between RootN and RootNZZZ is that the decimal
    point in RootNZZZ is one place further to the right than the decimal point
    in RootN.

    None of the digits in RootNZZZ's string should be different from the corresponding digits in RootN.

    I rest my case.

    Perhaps this observation should be brought to the attention of the IEEE. I would like to know their response to it.

    Stephen Tucker.


    On Thu, Feb 16, 2023 at 6:49 PM Peter Pearson <pkpearson@nowhere.invalid> wrote:

    On Tue, 14 Feb 2023 11:17:20 +0000, Oscar Benjamin wrote:
    On Tue, 14 Feb 2023 at 07:12, Stephen Tucker <stephen_tucker@sil.org>
    wrote:
    [snip]
    I have just produced the following log in IDLE (admittedly, in Python
    2.7.10 and, yes I know that it has been superseded).

    It appears to show a precision tail-off as the supplied float gets
    bigger.
    [snip]

    For your information, the first 20 significant figures of the cube
    root in
    question are:
    49793385921817447440

    Stephen Tucker.
    ----------------------------------------------
    123.456789 ** (1.0 / 3.0)
    4.979338592181744
    123456789000000000000000000000000000000000. ** (1.0 / 3.0)
    49793385921817.36

    You need to be aware that 1.0/3.0 is a float that is not exactly equal
    to 1/3 ...
    [snip]
    SymPy again:

    In [37]: a, x = symbols('a, x')

    In [38]: print(series(a**x, x, Rational(1, 3), 2))
    a**(1/3) + a**(1/3)*(x - 1/3)*log(a) + O((x - 1/3)**2, (x, 1/3))

    You can see that the leading relative error term from x being not
    quite equal to 1/3 is proportional to the log of the base. You should
    expect this difference to grow approximately linearly as you keep
    adding more zeros in the base.

    Marvelous. Thank you.


    --
    To email me, substitute nowhere->runbox, invalid->com.
    --
    https://mail.python.org/mailman/listinfo/python-list



    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stephen Tucker@21:1/5 to All on Fri Feb 17 10:27:08 2023
    Thanks, one and all, for your responses.

    This is a hugely controversial claim, I know, but I would consider this behaviour to be a serious deficiency in the IEEE standard.

    Consider an integer N consisting of a finitely-long string of digits in
    base 10.

    Consider the infinitely-precise cube root of N (yes I know that it could
    never be computed unless N is the cube of an integer, but this is a mathematical argument, not a computational one), also in base 10. Let's
    call it RootN.

    Now consider appending three zeroes to the right-hand end of N (let's call
    it NZZZ) and NZZZ's infinitely-precise cube root (RootNZZZ).

    The *only* difference between RootN and RootNZZZ is that the decimal point
    in RootNZZZ is one place further to the right than the decimal point in
    RootN.

    None of the digits in RootNZZZ's string should be different from the corresponding digits in RootN.

    I rest my case.

    Perhaps this observation should be brought to the attention of the IEEE. I would like to know their response to it.

    Stephen Tucker.


    On Thu, Feb 16, 2023 at 6:49 PM Peter Pearson <pkpearson@nowhere.invalid> wrote:

    On Tue, 14 Feb 2023 11:17:20 +0000, Oscar Benjamin wrote:
    On Tue, 14 Feb 2023 at 07:12, Stephen Tucker <stephen_tucker@sil.org>
    wrote:
    [snip]
    I have just produced the following log in IDLE (admittedly, in Python
    2.7.10 and, yes I know that it has been superseded).

    It appears to show a precision tail-off as the supplied float gets
    bigger.
    [snip]

    For your information, the first 20 significant figures of the cube root
    in
    question are:
    49793385921817447440

    Stephen Tucker.
    ----------------------------------------------
    123.456789 ** (1.0 / 3.0)
    4.979338592181744
    123456789000000000000000000000000000000000. ** (1.0 / 3.0)
    49793385921817.36

    You need to be aware that 1.0/3.0 is a float that is not exactly equal
    to 1/3 ...
    [snip]
    SymPy again:

    In [37]: a, x = symbols('a, x')

    In [38]: print(series(a**x, x, Rational(1, 3), 2))
    a**(1/3) + a**(1/3)*(x - 1/3)*log(a) + O((x - 1/3)**2, (x, 1/3))

    You can see that the leading relative error term from x being not
    quite equal to 1/3 is proportional to the log of the base. You should expect this difference to grow approximately linearly as you keep
    adding more zeros in the base.

    Marvelous. Thank you.


    --
    To email me, substitute nowhere->runbox, invalid->com.
    --
    https://mail.python.org/mailman/listinfo/python-list


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Passin@21:1/5 to Stephen Tucker on Fri Feb 17 08:51:55 2023
    On 2/17/2023 5:27 AM, Stephen Tucker wrote:
    Thanks, one and all, for your responses.

    This is a hugely controversial claim, I know, but I would consider this behaviour to be a serious deficiency in the IEEE standard.

    Consider an integer N consisting of a finitely-long string of digits in
    base 10.

    What you are not considering is that the IEEE standard is about trying
    to achieve a balance between resource use (memory and registers),
    precision, speed of computation, reliability (consistent and stable
    results), and compatibility. So there have to be many tradeoffs. One
    of them is the use of binary representation. It has never been about
    achieving ideal mathematical perfection for some set of special cases.

    Want a different set of tradeoffs? Fine, go for it. Python has Decimal
    and rational libraries among others. They run more slowly than IEEE,
    but maybe that's a good tradeoff for you. Use a symbolic math library.
    Trap special cases of interest to you and calculate them differently.
    Roll your own. Trouble is, you have to know one heck of a lot to roll
    your own, and it may take decades of debugging to get it right. Even
    then it won't have hardware assistance like IEEE floating point usually has.
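    (Illustrative sketch, not from the original message: the decimal module can be given
    far more working digits than a binary double has. The exponent below is still only a
    60-digit approximation of 1/3, and decimal's power operator is documented as only
    "almost always correctly rounded", so the last digit or two may still wobble.)

    from decimal import Decimal, getcontext

    getcontext().prec = 60               # many more digits than a double's ~16
    n = Decimal(123456789) * 10 ** 33    # the 42-digit number from the original log, held exactly
    print(n ** (Decimal(1) / 3))         # matches the exact cube root to far more digits than the float version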

    Consider the infinitely-precise cube root of N (yes I know that it could never be computed unless N is the cube of an integer, but this is a mathematical argument, not a computational one), also in base 10. Let's
    call it RootN.

    Now consider appending three zeroes to the right-hand end of N (let's call
    it NZZZ) and NZZZ's infinitely-precise cube root (RootNZZZ).

    The *only* difference between RootN and RootNZZZ is that the decimal point
    in RootNZZZ is one place further to the right than the decimal point in RootN.

    None of the digits in RootNZZZ's string should be different from the corresponding digits in RootN.

    I rest my case.

    Perhaps this observation should be brought to the attention of the IEEE. I would like to know their response to it.

    Stephen Tucker.


    On Thu, Feb 16, 2023 at 6:49 PM Peter Pearson <pkpearson@nowhere.invalid> wrote:

    On Tue, 14 Feb 2023 11:17:20 +0000, Oscar Benjamin wrote:
    On Tue, 14 Feb 2023 at 07:12, Stephen Tucker <stephen_tucker@sil.org>
    wrote:
    [snip]
    I have just produced the following log in IDLE (admittedly, in Python
    2.7.10 and, yes I know that it has been superseded).

    It appears to show a precision tail-off as the supplied float gets
    bigger.
    [snip]

    For your information, the first 20 significant figures of the cube root
    in
    question are:
    49793385921817447440

    Stephen Tucker.
    ----------------------------------------------
    123.456789 ** (1.0 / 3.0)
    4.979338592181744
    123456789000000000000000000000000000000000. ** (1.0 / 3.0)
    49793385921817.36

    You need to be aware that 1.0/3.0 is a float that is not exactly equal
    to 1/3 ...
    [snip]
    SymPy again:

    In [37]: a, x = symbols('a, x')

    In [38]: print(series(a**x, x, Rational(1, 3), 2))
    a**(1/3) + a**(1/3)*(x - 1/3)*log(a) + O((x - 1/3)**2, (x, 1/3))

    You can see that the leading relative error term from x being not
    quite equal to 1/3 is proportional to the log of the base. You should
    expect this difference to grow approximately linearly as you keep
    adding more zeros in the base.

    Marvelous. Thank you.


    --
    To email me, substitute nowhere->runbox, invalid->com.
    --
    https://mail.python.org/mailman/listinfo/python-list


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Weatherby,Gerard@21:1/5 to Stephen Tucker on Fri Feb 17 14:39:42 2023
    IEEE did not define a single standard for floating point arithmetic. They designed multiple standards, including a decimal floating point one. Although decimal floating point (DFP) hardware used to be manufactured, I couldn’t find any current manufacturers.
    There was a company that seemed to be active until a few years ago, but they seem to have gone dark: https://twitter.com/SilMinds



    From: Python-list <python-list-bounces+gweatherby=uchc.edu@python.org> on behalf of Thomas Passin <list1@tompassin.net>
    Date: Friday, February 17, 2023 at 9:02 AM
    To: python-list@python.org <python-list@python.org>
    Subject: Re: Precision Tail-off?

    On 2/17/2023 5:27 AM, Stephen Tucker wrote:
    Thanks, one and all, for your responses.

    This is a hugely controversial claim, I know, but I would consider this behaviour to be a serious deficiency in the IEEE standard.

    Consider an integer N consisting of a finitely-long string of digits in
    base 10.

    What you are not considering is that the IEEE standard is about trying
    to achieve a balance between resource use (memory and registers),
    precision, speed of computation, reliability (consistent and stable
    results), and compatibility. So there have to be many tradeoffs. One
    of them is the use of binary representation. It has never been about
    achieving ideal mathematical perfection for some set of special cases.

    Want a different set of tradeoffs? Fine, go for it. Python has Decimal
    and rational libraries among others. They run more slowly than IEEE,
    but maybe that's a good tradeoff for you. Use a symbolic math library.
    Trap special cases of interest to you and calculate them differently.
    Roll your own. Trouble is, you have to know one heck of a lot to roll
    your own, and it may take decades of debugging to get it right. Even
    then it won't have hardware assistance like IEEE floating point usually has.

    Consider the infinitely-precise cube root of N (yes I know that it could never be computed unless N is the cube of an integer, but this is a mathematical argument, not a computational one), also in base 10. Let's
    call it RootN.

    Now consider appending three zeroes to the right-hand end of N (let's call
    it NZZZ) and NZZZ's infinitely-precise cube root (RootNZZZ).

    The *only* difference between RootN and RootNZZZ is that the decimal point
    in RootNZZZ is one place further to the right than the decimal point in RootN.

    None of the digits in RootNZZZ's string should be different from the corresponding digits in RootN.

    I rest my case.

    Perhaps this observation should be brought to the attention of the IEEE. I would like to know their response to it.

    Stephen Tucker.


    On Thu, Feb 16, 2023 at 6:49 PM Peter Pearson <pkpearson@nowhere.invalid> wrote:

    On Tue, 14 Feb 2023 11:17:20 +0000, Oscar Benjamin wrote:
    On Tue, 14 Feb 2023 at 07:12, Stephen Tucker <stephen_tucker@sil.org>
    wrote:
    [snip]
    I have just produced the following log in IDLE (admittedly, in Python
    2.7.10 and, yes I know that it has been superseded).

    It appears to show a precision tail-off as the supplied float gets
    bigger.
    [snip]

    For your information, the first 20 significant figures of the cube root
    in
    question are:
    49793385921817447440

    Stephen Tucker.
    ----------------------------------------------
    123.456789 ** (1.0 / 3.0)
    4.979338592181744
    123456789000000000000000000000000000000000. ** (1.0 / 3.0)
    49793385921817.36

    You need to be aware that 1.0/3.0 is a float that is not exactly equal
    to 1/3 ...
    [snip]
    SymPy again:

    In [37]: a, x = symbols('a, x')

    In [38]: print(series(a**x, x, Rational(1, 3), 2))
    a**(1/3) + a**(1/3)*(x - 1/3)*log(a) + O((x - 1/3)**2, (x, 1/3))

    You can see that the leading relative error term from x being not
    quite equal to 1/3 is proportional to the log of the base. You should
    expect this difference to grow approximately linearly as you keep
    adding more zeros in the base.

    Marvelous. Thank you.


    --
    To email me, substitute nowhere->runbox, invalid->com.
    --
    https://mail.python.org/mailman/listinfo/python-list


    --
    https://mail.python.org/mailman/listinfo/python-list

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Peter Pearson@21:1/5 to Stephen Tucker on Fri Feb 17 15:57:29 2023
    On Fri, 17 Feb 2023 10:27:08, Stephen Tucker wrote: [Head-posting undone.]
    On Thu, Feb 16, 2023 at 6:49 PM Peter Pearson <pkpearson@nowhere.invalid> wrote:
    On Tue, 14 Feb 2023 11:17:20 +0000, Oscar Benjamin wrote:
    On Tue, 14 Feb 2023 at 07:12, Stephen Tucker <stephen_tucker@sil.org>
    wrote:
    [snip]
    I have just produced the following log in IDLE (admittedly, in Python
    2.7.10 and, yes I know that it has been superseded).

    It appears to show a precision tail-off as the supplied float gets
    bigger.
    [snip]

    For your information, the first 20 significant figures of the cube root in
    question are:
    49793385921817447440

    Stephen Tucker.
    ----------------------------------------------
    123.456789 ** (1.0 / 3.0)
    4.979338592181744
    123456789000000000000000000000000000000000. ** (1.0 / 3.0)
    49793385921817.36

    You need to be aware that 1.0/3.0 is a float that is not exactly equal
    to 1/3 ...
    [snip]
    SymPy again:

    In [37]: a, x = symbols('a, x')

    In [38]: print(series(a**x, x, Rational(1, 3), 2))
    a**(1/3) + a**(1/3)*(x - 1/3)*log(a) + O((x - 1/3)**2, (x, 1/3))

    You can see that the leading relative error term from x being not
    quite equal to 1/3 is proportional to the log of the base. You should
    expect this difference to grow approximately linearly as you keep
    adding more zeros in the base.

    Marvelous. Thank you.
    [snip]
    Now consider appending three zeroes to the right-hand end of N (let's call
    it NZZZ) and NZZZ's infinitely-precise cube root (RootNZZZ).

    The *only* difference between RootN and RootNZZZ is that the decimal point
    in RootNZZZ is one place further to the right than the decimal point in RootN.

    None of the digits in RootNZZZ's string should be different from the corresponding digits in RootN.

    I rest my case.
    [snip]


    I believe the pivotal point of Oscar Benjamin's explanation is
    that within the constraints of limited-precision binary floating-point
    numbers, the exponent 1/3 cannot be represented precisely, and
    is in practice represented by something slightly smaller than 1/3;
    and accordingly, when you multiply your argument by 1000, its
    not-quite-cube-root gets multiplied by something slightly smaller
    than 10, which is why the number of figures matching the "right"
    answer gets steadily smaller.

    Put slightly differently, the crux of the problem lies not in the
    complicated process of exponentiation, but simply in the failure
    to represent 1/3 exactly. The fact that the exponent is slightly
    less than 1/3 means that you would observe the steady loss of
    agreement that you report, even if the exponentiation process
    were perfect.
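    (Illustrative aside, not part of the original message: the scale factor picked up
    for each extra three zeros can be seen directly.)

    e = 1.0 / 3.0
    print(1000.0 ** e)   # a hair below 10 on typical IEEE-754 doubles, not exactly 10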

    --
    To email me, substitute nowhere->runbox, invalid->com.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael Torrie@21:1/5 to Stephen Tucker on Fri Feb 17 08:38:58 2023
    On 2/17/23 03:27, Stephen Tucker wrote:
    Thanks, one and all, for your responses.

    This is a hugely controversial claim, I know, but I would consider this behaviour to be a serious deficiency in the IEEE standard.

    No matter how you do it, there are always tradeoffs and inaccuracies
    moving from real numbers in base 10 to base 2. That's just the nature
    of the math. Any binary floating point representation is going to have problems. There are techniques for mitigating this: https://en.wikipedia.org/wiki/Floating-point_error_mitigation
    It's interesting to note that the article points out that floating point
    error was first talked about in the 1930s. So no matter what binary
    scheme you choose there will be error. That's just the nature of
    converting a real from one base to another.

    Also we weren't clear on this, but the IEEE standard is not just
    implemented in software. It's the way your CPU represents floating point numbers in silicon. And in your GPUs (where speed is preferred to
    precision). So it's not like Python could just arbitrarily do something different unless you were willing to pay a huge penalty for speed. For
    example, the decimal module is arbitrary precision, but quite slow.

    Have you tried the numpy cbrt() function? It is probably going to be
    more accurate than using power to 0.3333.
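    (Illustrative sketch, not from the original message; assumes NumPy is installed.)

    import numpy as np
    x = 123456789e33           # the double nearest 123456789 followed by 33 zeros
    print(np.cbrt(x))          # cbrt takes the cube root directly, with no rounded 1/3 exponent
    print(x ** (1.0 / 3.0))    # the pow-based version from the original log, for comparison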

    Perhaps this observation should be brought to the attention of the IEEE. I would like to know their response to it.
    Rest assured the IEEE committee that formalized the format decades ago
    knew all about the limitations and trade-offs. Over the years CPUs have increased in capacity and now we can use 128-bit floating point numbers
    which mitigate some of the accuracy problems by simply having more
    binary digits. But the fact remains that some rational numbers in
    decimal are irrational in binary, so arbitrary decimal precision using
    floating point is not possible.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From avi.e.gross@gmail.com@21:1/5 to All on Fri Feb 17 12:46:52 2023
    Stephen,

    What response do you expect from whatever people in the IEEE you want?

    The specific IEEE standards were designed and agreed upon by groups working
    in caveman times when the memory and CPU time were not so plentiful. The
    design of many types, including floating point, had to work decently if not perfectly so you stored data in ways the hardware of the time could process
    it efficiently.

    Note all kinds of related issues about what happens if you want an integer larger than fits into 16 bits or 32 bits or even 64 bits. A Python integer
    was designed to be effectively unlimited and uses as much storage as needed.
    It can also get ever slower when doing things like looking for gigantic
    primes. But it does not have overflow problems.

    So could you design a floating point data type with similar features? It
    would be some complex data structure that keeps track of the number of bit/bytes/megabytes currently being used to store the mantissa or exponent parts and then have some data structure that holds all the bits needed. When doing any arithmetic like addition or division or more complex things, it
    would need to compare the two objects being combined and calculate how to perhaps expand/convert one to match the other and then do umpteen steps to generate the result in as many pieces/steps as needed and create a data structure that holds the result, optionally trimming off terminal parts not needed or wanted. Then you would need all relevant functions that accept regular floating point to handle these numbers and generate these numbers.

    Can that be done well? Well, sure, but not necessarily WELL. Some would
    point you to the Decimal type. It might take a somewhat different tack on
    how to do this. But everything comes with a cost.

    Perhaps the response from the IEEE would be that what they published was
    meant for some purposes but not yours. It may be that a group needs to formulate a new standard but leave the old ones in place for people willing
    to use them as their needs are more modest.

    As an analogy, consider the lowly char that stored a single character in a byte. I mean good old ASCII but also EBCDIC and the ISO family like ISO
    8859-1 and so on. Those standards focused in on the needs of just a few languages and if you wanted to write something in a mix of languages, it
    could be a headache, as I have had times when I had to shift within one document to say ISO 8859-8 to include some Hebrew, and ISO 8859-3 for Esperanto and so
    on while ISO8859-1 was fine for English, French, German, Spanish and many others. For some purposes, I had to use encodings like shift JIS to do
    Japanese as many Asian languages were outside what ISO was doing.

    The solutions since then vary but tend to allow or require multiple bytes
    per character. But they retain limits and if we ever enter a Star Trek
    Universe with infinite diversity and more languages and encodings, we might need to again enlarge our viewpoint and perhaps be even more wasteful of our computing resources to accommodate them all!

    Standards are often not made to solve ALL possible problems but to make
    clear what is supported and what is not required. Mathematical arguments can
    be helpful but practical considerations and the limited time available (as these darn things can take YEARS to be agreed on) are often dominant.
    Frankly, by the time many standards, such as for a programming language, are finalized, the reality in the field has often changed. The language may
    already have been supplanted largely by others for new work, or souped up
    with not-yet-standard features.

    I am not against striving for ever better standards and realities. But I do think a better way to approach this is not to reproach what was done but ask
    if we can focus on the near-future and make it better.

    Arguably, there are now multiple features out there such as Decimal and they may be quite different. That often happens without a standard. But if you
    now want everyone to get together and define a new standard that may break
    some implementations, ...

    As I see it, many computer courses teach the realities as well as the mathematical fantasies that break down in the real world. One of those that tend to be stressed is that floating point is not exact and that comparison operators need to be used with caution. Often the suggestion is to subtract
    one number from another and check if the result is fairly close to zero as
    in the absolute value is less than an IEEE standard number where the last
    few bits are ones. For more complex calculations where the errors can accumulate, you may need to choose a small number with more such bits near
    the end.
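    (Illustrative aside, not part of the original message: Python 3 exposes exactly this
    kind of tolerance-based comparison as math.isclose.)

    import math
    a = 0.1 + 0.2
    b = 0.3
    print(a == b)                # False: both sides carry binary rounding error
    print(math.isclose(a, b))    # True: compares within a relative tolerance instead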

    Extended precision arithmetic is perhaps cheaper now and can be done for a reasonable number of digits. It probably is not realistic to do most such calculations for billions of digits, albeit some of the calculations for the first googolplex digits of pi might indeed need such methods, as soon as we
    find a way to keep that many digits in memory, given the ten to the 80th or so particles we think are in our observable universe. But knowing pi to that precision may not be meaningful if an existing value already is so precise
    that given an exact number for the diameter of something the size of the universe (Yes, I know this is nonsense) you could calculate the
    circumference (ditto) to less than the size (ditto) of a proton. Any errors
    in such a measurement would be swamped by all kinds of things such as uncertainties in what we can measure, or niggling details about how space expands irregularly in the area as we speak and so on.

    So if you want a new IEEE (or other such body) standard, would you be
    satisfied with a new one for say a 16,384 byte monstrosity that holds
    gigantic numbers with lots more precision, or hold out for a relatively flexible and unlimited version that can be expanded until your computer or planet runs out of storage room and provides answers after a few billion
    years when used to just add two of them together?



    -----Original Message-----
    From: Python-list <python-list-bounces+avi.e.gross=gmail.com@python.org> On Behalf Of Stephen Tucker
    Sent: Friday, February 17, 2023 5:27 AM
    To: python-list@python.org
    Subject: Re: Precision Tail-off?

    Thanks, one and all, for your responses.

    This is a hugely controversial claim, I know, but I would consider this behaviour to be a serious deficiency in the IEEE standard.

    Consider an integer N consisting of a finitely-long string of digits in base 10.

    Consider the infinitely-precise cube root of N (yes I know that it could
    never be computed unless N is the cube of an integer, but this is a mathematical argument, not a computational one), also in base 10. Let's call
    it RootN.

    Now consider appending three zeroes to the right-hand end of N (let's call
    it NZZZ) and NZZZ's infinitely-precise cube root (RootNZZZ).

    The *only* difference between RootN and RootNZZZ is that the decimal point
    in RootNZZZ is one place further to the right than the decimal point in
    RootN.

    None of the digits in RootNZZZ's string should be different from the corresponding digits in RootN.

    I rest my case.

    Perhaps this observation should be brought to the attention of the IEEE. I would like to know their response to it.

    Stephen Tucker.


    On Thu, Feb 16, 2023 at 6:49 PM Peter Pearson <pkpearson@nowhere.invalid> wrote:

    On Tue, 14 Feb 2023 11:17:20 +0000, Oscar Benjamin wrote:
    On Tue, 14 Feb 2023 at 07:12, Stephen Tucker
    <stephen_tucker@sil.org>
    wrote:
    [snip]
    I have just produced the following log in IDLE (admittedly, in
    Python
    2.7.10 and, yes I know that it has been superseded).

    It appears to show a precision tail-off as the supplied float gets
    bigger.
    [snip]

    For your information, the first 20 significant figures of the cube
    root
    in
    question are:
    49793385921817447440

    Stephen Tucker.
    ----------------------------------------------
    123.456789 ** (1.0 / 3.0)
    4.979338592181744
    123456789000000000000000000000000000000000. ** (1.0 / 3.0)
    49793385921817.36

    You need to be aware that 1.0/3.0 is a float that is not exactly
    equal to 1/3 ...
    [snip]
    SymPy again:

    In [37]: a, x = symbols('a, x')

    In [38]: print(series(a**x, x, Rational(1, 3), 2))
    a**(1/3) + a**(1/3)*(x - 1/3)*log(a) + O((x - 1/3)**2, (x, 1/3))

    You can see that the leading relative error term from x being not
    quite equal to 1/3 is proportional to the log of the base. You
    should expect this difference to grow approximately linearly as you
    keep adding more zeros in the base.

    Marvelous. Thank you.


    --
    To email me, substitute nowhere->runbox, invalid->com.
    --
    https://mail.python.org/mailman/listinfo/python-list

    --
    https://mail.python.org/mailman/listinfo/python-list

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to Stephen Tucker on Fri Feb 17 13:42:18 2023
    On 2/17/23 5:27 AM, Stephen Tucker wrote:
    Thanks, one and all, for your responses.

    This is a hugely controversial claim, I know, but I would consider this behaviour to be a serious deficiency in the IEEE standard.

    Consider an integer N consisting of a finitely-long string of digits in
    base 10.

    Consider the infinitely-precise cube root of N (yes I know that it could never be computed unless N is the cube of an integer, but this is a mathematical argument, not a computational one), also in base 10. Let's
    call it RootN.

    Now consider appending three zeroes to the right-hand end of N (let's call
    it NZZZ) and NZZZ's infinitely-precise cube root (RootNZZZ).

    The key factor here is that IEEE floating point stores numbers in BINARY,
    not DECIMAL, so a multiply by 1000 will change the representation of the number, and thus the possible resolution errors.

    Store your numbers in IEEE DECIMAL floating point, and the variations by multiplying by powers of 10 go away.
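    (Illustrative aside, not in the original message: the decimal module keeps every decimal
    digit of the base exactly, unlike a binary double. The inexact 1/3 exponent discussed
    elsewhere in the thread is a separate issue.)

    from decimal import Decimal
    n = 123456789 * 10 ** 33
    print(int(float(n)) == n)     # False: the binary double cannot hold all 42 digits
    print(int(Decimal(n)) == n)   # True: the decimal representation of n is exact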


    The *only* difference between RootN and RootNZZZ is that the decimal point
    in RootNZZZ is one place further to the right than the decimal point in RootN.

    No, since the floating point number is stored as a fraction times a
    power of 2, the fraction has changed as well as the power of 2.


    None of the digits in RootNZZZ's string should be different from the corresponding digits in RootN.

    Only if the storage format was DECIMAL.


    I rest my case.

    Perhaps this observation should be brought to the attention of the IEEE. I would like to know their response to it.

    That is why they have developed the Decimal Floating point format, to
    handle people with those sorts of problems.

    They just aren't common enough for many things to have adopted the use
    of it.


    Stephen Tucker.

    --
    Richard Damon

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Oscar Benjamin@21:1/5 to Stephen Tucker on Fri Feb 17 18:53:38 2023
    On Fri, 17 Feb 2023 at 10:29, Stephen Tucker <stephen_tucker@sil.org> wrote:

    Thanks, one and all, for your responses.

    This is a hugely controversial claim, I know, but I would consider this behaviour to be a serious deficiency in the IEEE standard.
    [snip]

    Perhaps this observation should be brought to the attention of the IEEE. I would like to know their response to it.

    Their response would be that they are well aware of what you are
    saying and knew about this already since before writing any standards.
    The basic limitation of the IEEE standard in this respect is that it
    describes individual operations rather than composite operations. Your calculation involves composing operations, specifically:

    result = x ** (n / d)

    The problem is that there is more than one operation so we have to
    evaluate this in two steps:

    e = n / d
    result = x ** e

    Now the problem is that although n / d is correctly rounded, e has a
    small error because the exact value of n / d cannot be represented. In
    the second operation taking this slightly off value of e as the
    intended input means that the correctly rounded result for x ** e is
    not the closest float to the true value of the *compound* operation.
    The exponentiation operator in particular is very sensitive to changes
    in the exponent when the base is large so the tiny error in e leads to
    a more noticeable relative error in x ** e.
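    (Illustrative aside, not part of the original message: Python 3.11 added math.cbrt,
    which treats the cube root as a single operation rather than a division composed
    with a power.)

    import math
    x = 123456789e33
    print(x ** (1.0 / 3.0))   # two steps: 1/3 is rounded first, then the power is taken
    print(math.cbrt(x))       # one step: no intermediate rounded exponent (Python 3.11+)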

    The only way to prevent this in full generality is to have a system
    in which no intermediate inexact operations are computed eagerly which
    means representing expressions symbolically in some way. That is what
    the SymPy code I showed does:

    In [6]: from sympy import cbrt

    In [7]: e = cbrt(123456789000000000000000000000000000000000)

    In [8]: print(e)
    100000000000*123456789**(1/3)

    In [9]: e.evalf(50)
    Out[9]: 49793385921817.447440261250171604380899353243631762

    Because the *entire* expression is represented here *exactly* as e it
    is then possible to evaluate different parts of the expression
    repeatedly with different levels of precision and it is necessary to
    do that for full accuracy in this case. Here evalf will use more than
    50 digits of precision internally so that at the end you have a result specified to 50 digits but where the error for the entire expression
    is smaller than the final digit. If you give it a more complicated
    expression then it will use even more digits internally for deeper
    parts of the expression tree because that is what is needed to get a
    correctly rounded result for the expression as a whole.

    This kind of symbolic evaluation is completely outside the scope of
    what the IEEE floating point standards are for. Any system based on
    fixed precision and eager evaluation will show the same problem that
    you have identified. It is very useful though to have a system with
    fixed precision and eager evaluation despite these limitations. The
    context for which the IEEE standards are mainly intended (e.g. FPU instructions) is one in which fixed precision and eager evaluation are
    the only option.

    --
    Oscar

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Peter J. Holzer@21:1/5 to Stephen Tucker on Fri Feb 17 20:50:03 2023
    On 2023-02-17 10:27:08 +0000, Stephen Tucker wrote:
    This is a hugely controversial claim, I know, but I would consider this behaviour to be a serious deficiency in the IEEE standard.

    Consider an integer N consisting of a finitely-long string of digits in
    base 10.

    Consider the infinitely-precise cube root of N (yes I know that it could never be computed

    However, computers exist to compute. Something which can never be
    computed is outside of the realm of computing.

    unless N is the cube of an integer, but this is a mathematical
    argument, not a computational one), also in base 10. Let's call it
    RootN.

    Now consider appending three zeroes to the right-hand end of N (let's call
    it NZZZ) and NZZZ's infinitely-precise cube root (RootNZZZ).

    The *only* difference between RootN and RootNZZZ is that the decimal point
    in RootNZZZ is one place further to the right than the decimal point in RootN.

    No. In mathematics there is no such thing as a decimal point. The only difference is that RootNZZZ is RootN*10. But there is nothing special
    about 10. You could multiply your original number by 512 and then the
    new cube root would differ by a factor of 8 (which would show up as
    shifted "binary point"[1] in binary but completely different digits in
    decimal) or you could multiply by 1728 and then you would need base 12
    to get the same digits with a shifted "duodecimal point".

    hp

    [1] It's really unfortunate that the point which separates the integer
    and the fractional part of a number is called a "decimal point" in
    English. Makes it hard to talk about non-integer numbers in other
    bases.

    --
    _ | Peter J. Holzer | Story must make more sense than reality.
    |_|_) | |
    | | | hjp@hjp.at | -- Charles Stross, "Creative writing
    __/ | http://www.hjp.at/ | challenge!"

  • From Peter J. Holzer@21:1/5 to Gerard on Fri Feb 17 20:58:31 2023
    On 2023-02-17 14:39:42 +0000, Weatherby,Gerard wrote:
    IEEE did not define a standard for floating point arithmetics. They
    designed multiple standards, including a decimal float point one.
    Although decimal floating point (DFP) hardware used to be
    manufactured, I couldn’t find any current manufacturers.

    Doesn't IBM any more? Their POWER processors used to implement decimal
    FP (starting with POWER8, if I remember correctly).

    hp

    --
    _ | Peter J. Holzer | Story must make more sense than reality.
    |_|_) | |
    | | | hjp@hjp.at | -- Charles Stross, "Creative writing
    __/ | http://www.hjp.at/ | challenge!"

  • From Peter J. Holzer@21:1/5 to Michael Torrie on Fri Feb 17 20:38:44 2023
    On 2023-02-17 08:38:58 -0700, Michael Torrie wrote:
    On 2/17/23 03:27, Stephen Tucker wrote:
    Thanks, one and all, for your responses.

    This is a hugely controversial claim, I know, but I would consider this behaviour to be a serious deficiency in the IEEE standard.

    No matter how you do it, there are always tradeoffs and inaccuracies
    moving from real numbers in base 10 to base 2.

    This is phrased ambiguously. So just to clarify:

    Real numbers are not in base 10. Or base 2 or base 37 or base e. A
    positional system (which uses a base) is just a convenient way to write
    a small subset of real numbers. By using any base you limit yourself to rational numbers (no e or π or √2) and in fact only those rational
    numbers where the denominator is a power of the base.

    Converting numbers from one base to another with any finite precision
    will generally involve rounding - so do that as little as possible.
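    (Illustrative aside, not part of the original message: the fractions module makes the
    "denominator is a power of the base" point concrete.)

    from fractions import Fraction
    print(Fraction(1, 10))   # exact as written in base 10, but not representable exactly in base 2
    print(Fraction(0.1))     # the double nearest 0.1: its denominator is a power of 2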


    That's just the nature of the math. Any binary floating point
    representation is going to have problems.

    Any decimal floating point representation is also going to have
    problems.

    There is nothing magical about base 10. It's just what we are used to
    (which also means that we are used to the rounding errors and aren't
    surprised by them as much).

    Also we weren't clear on this, but the IEEE standard is not just
    implemented in software. It's the way your CPU represents floating point numbers in silicon. And in your GPUs (where speed is preferred to precision). So it's not like Python could just arbitrarily do something different unless you were willing to pay a huge penalty for speed.

    I'm pretty sure that compared to the interpreter overhead of CPython the overhead of a software FP implementation (whether binary or decimal)
    would be rather small, maybe negligible.


    Perhaps this observation should be brought to the attention of the IEEE. I would like to know their response to it.
    Rest assured the IEEE committee that formalized the format decades ago
    knew all about the limitations and trade-offs. Over the years CPUs have increased in capacity and now we can use 128-bit floating point numbers

    The very first IEEE compliant processor (the Intel 8087) had an 80 bit
    extended type (in fact it did all computations in 80 bit and only
    rounded down to 64 or 32 bits when storing the result). By the 1990s, 96
    and 128 bit was quite common.

    which mitigate some of the accuracy problems by simply having more
    binary digits. But the fact remains that some rational numbers in
    decimal are irrational in binary,

    Be careful: "Rational" and "irrational" have a standard meaning in
    mathematics and it's independent of base.
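
    For instance, 1/3 is perfectly rational; it simply has no finite
    positional expansion in base 2 (or base 10). A quick illustration:

    >>> from fractions import Fraction
    >>> Fraction(1, 3)         # exact: a ratio of two integers, independent of base
    Fraction(1, 3)
    >>> float(Fraction(1, 3))  # but any fixed-precision positional form is rounded
    0.3333333333333333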

    hp

  • From Mats Wichmann@21:1/5 to Richard Damon on Fri Feb 17 13:57:06 2023
    On 2/17/23 11:42, Richard Damon wrote:
    On 2/17/23 5:27 AM, Stephen Tucker wrote:

    The key factor here is that IEEE floating point stores numbers in
    BINARY, not DECIMAL, so a multiply by 1000 will change the
    representation of the number, and thus the possible resolution errors.

    Store your numbers in IEEE DECIMAL floating point, and the variations
    from multiplying by powers of 10 go away.

    The development of the original IEEE standard led eventually to
    consistent implementation in hardware (when they implement floating
    point at all, which embedded/IoT class chips in particular often don't)
    that aligned with how languages/compilers treated floating point, so
    that's been a really successful standard, whatever one might feel about
    the tradeoffs. Standards are all about finding a mutually acceptable way forward, once people admit there is no One Perfect Answer.

    Newer editions of 754 (since 2008) have added this decimal floating
    point representation, which is supported by some software such as IBM
    and Intel floating-point libraries. Hardware support has been slower to arrive. The only ones I've heard of have been the IBM z series
    (mainframes) and somebody else mentioned Power though I'd never seen
    that. It's possible some of the GPU lines may be going this direction.

    As far as Python goes... the decimal module has this comment:

    It is a complete implementation of Mike Cowlishaw/IBM's General
    Decimal Arithmetic Specification.

    Cowlishaw was the editor of the 2008 and 2019 editions of IEEE 754, fwiw.
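
    As a rough illustration of what the decimal module buys you for the
    calculation in this thread (illustrative only; 1/3 is still rounded,
    just to 50 decimal digits instead of 53 bits, so the tail-off merely
    moves far beyond what a binary double can represent):

    from decimal import Decimal, getcontext

    getcontext().prec = 50                # 50 significant decimal digits
    n = Decimal(123456789) * 10**54       # one of the large inputs discussed here
    print(n ** (Decimal(1) / 3))          # rounded power, accurate to roughly the
                                          # context precision rather than 53 bits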

    And... this topic as a whole comes up over and over again, like
    everywhere. See Stack Overflow for some amusement.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Edwards@21:1/5 to Richard Damon on Fri Feb 17 12:36:31 2023
    On 2023-02-17, Richard Damon <Richard@Damon-Family.org> wrote:
    [...]

    Perhaps this observation should be brought to the attention of the
    IEEE. I would like to know their response to it.

    That is why they have developed the Decimal Floating point format, to
    handle people with those sorts of problems.

    They just aren't common enough for many things to have adopted the
    use of it.

    Back before hardware floating point was common, support for decimal
    floating point was very common. All of the popular C, Pascal, and
    BASIC compilers (for microcomputers) I remember let you choose (at
    compile time) whether you wanted to use binary floating point or
    decimal (BCD) floating point. People doing scientific stuff usually
    chose binary because it was a little faster and you got more
    resolution for the same amount of storage. If you were doing
    accounting, you chose BCD (or used fixed-point).

    Once hardware (binary) floating point became common, support for
    software BCD floating point just sort of went away...

    --
    Grant

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Edwards@21:1/5 to Mats Wichmann on Fri Feb 17 14:03:09 2023
    On 2023-02-17, Mats Wichmann <mats@wichmann.us> wrote:

    And... this topic as a whole comes up over and over again, like
    everywhere.

    That's an understatement.

    I remember it getting rehashed over and over again in various USENET
    groups 35 years ago when the VAX 11/780 BSD machine on which I
    read news exchanged postings with peers using a half-dozen dial-up
    modems and UUCP.

    One would have thought it would be a time-saver when David Goldberg
    wrote "the paper" in 1991, and you could tell people to go away and
    read this:

    https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
    https://www.itu.dk/~sestoft/bachelor/IEEE754_article.pdf

    It didn't help.

    Every fall, the groups were again full of a new crop of people who had
    just discovered all sorts of bugs in the way <software/hardware>
    implemented floating point, and pointing them to a nicely written
    document that explained it never did any good.

    --
    Grant

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Chris Angelico@21:1/5 to python-list@python.org on Sat Feb 18 12:44:34 2023
    On Sat, 18 Feb 2023 at 12:41, Greg Ewing via Python-list <python-list@python.org> wrote:

    On 18/02/23 7:42 am, Richard Damon wrote:
    On 2/17/23 5:27 AM, Stephen Tucker wrote:
    None of the digits in RootNZZZ's string should be different from the
    corresponding digits in RootN.

    Only if the storage format was DECIMAL.

    Note that using decimal wouldn't eliminate this particular problem,
    since 1/3 isn't exactly representable in decimal either.

    To avoid it you would need to use an algorithm that computes nth
    roots directly rather than raising to the power 1/n.


    It's somewhat curious that we don't really have that. We have many
    other inverse operations - addition and subtraction (not just "negate
    and add"), multiplication and division, log and exp - but we have exponentiation without an arbitrary-root operation. For square roots,
    that's not a problem, since we can precisely express the concept
    "raise to the 0.5th power", but for anything else, we have to raise to
    a fractional power that might be imprecise.

    But maybe, in practice, this isn't even a problem?

    ChrisA

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Paul Rubin@21:1/5 to Chris Angelico on Fri Feb 17 18:08:42 2023
    Chris Angelico <rosuav@gmail.com> writes:
    To avoid it you would need to use an algorithm that computes nth
    roots directly rather than raising to the power 1/n.
    It's somewhat curious that we don't really have that.

    This *could* give an exact answer, but doesn't...

    Python 3.9.2 (default, Feb 28 2021, 17:03:44) ...
    >>> from fractions import *
    >>> 125**Fraction("1/3")
    4.999999999999999

    I wouldn't want it to give an integer though: too much datatype
    confusion.

    There are some symbolic math packages that can compute in arbitrary
    number fields, but that goes way beyond fulfilling the naive
    expectations of people wanting cuberoot(125) to be exactly 5.
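
    SymPy is one such package; a small illustration (assuming SymPy is
    installed):

    >>> import sympy
    >>> sympy.root(125, 3)   # recognised as 5**3, so the result is exactly 5
    5
    >>> sympy.root(126, 3)   # no exact answer, so the root stays symbolic
    126**(1/3)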

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Greg Ewing@21:1/5 to Richard Damon on Sat Feb 18 14:37:44 2023
    On 18/02/23 7:42 am, Richard Damon wrote:
    On 2/17/23 5:27 AM, Stephen Tucker wrote:
    None of the digits in RootNZZZ's string should be different from the
    corresponding digits in RootN.

    Only if the storage format was DECIMAL.

    Note that using decimal wouldn't eliminate this particular problem,
    since 1/3 isn't exactly representable in decimal either.

    To avoid it you would need to use an algorithm that computes nth
    roots directly rather than raising to the power 1/n.

    --
    Greg

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael Torrie@21:1/5 to Grant Edwards on Fri Feb 17 19:54:29 2023
    On 2/17/23 15:03, Grant Edwards wrote:
    Every fall, the groups were again full of a new crop of people who had
    just discovered all sorts of bugs in the way <software/hardware>
    implemented floating point, and pointing them to a nicely written
    document that explained it never did any good.

    But to be fair, Goldberg's article is pretty obtuse and formal for most
    people, even programmers. I don't need all the formal proofs he
    presents; just a summary would be sufficient, I'd think. Although I've been
    programming for many years, I have no idea what he means with most of
    the notation in that paper.

    Although I have a vague notion of what's going on, as my last post
    shows, I don't know any of the right terminology.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Oscar Benjamin@21:1/5 to Chris Angelico on Sat Feb 18 03:52:51 2023
    On Sat, 18 Feb 2023 at 01:47, Chris Angelico <rosuav@gmail.com> wrote:

    On Sat, 18 Feb 2023 at 12:41, Greg Ewing via Python-list <python-list@python.org> wrote:

    On 18/02/23 7:42 am, Richard Damon wrote:
    On 2/17/23 5:27 AM, Stephen Tucker wrote:
    None of the digits in RootNZZZ's string should be different from the
    corresponding digits in RootN.

    Only if the storage format was DECIMAL.

    Note that using decimal wouldn't eliminate this particular problem,
    since 1/3 isn't exactly representable in decimal either.

    To avoid it you would need to use an algorithm that computes nth
    roots directly rather than raising to the power 1/n.


    It's somewhat curious that we don't really have that. We have many
    other inverse operations - addition and subtraction (not just "negate
    and add"), multiplication and division, log and exp - but we have exponentiation without an arbitrary-root operation. For square roots,
    that's not a problem, since we can precisely express the concept
    "raise to the 0.5th power", but for anything else, we have to raise to
    a fractional power that might be imprecise.

    Various libraries can do this. Both SymPy and NumPy have cbrt for cube roots:

    >>> np.cbrt(123456789000000000000000000000000000000000000000000000000000000.)
    4.979338592181745e+20

    SymPy can also evaluate any rational power either exactly or to any
    desired accuracy. Under the hood SymPy uses mpmath for the approximate numerical evaluation part of this and mpmath can also be used directly
    with its cbrt and nthroot functions to do this working with any
    desired precision.

    But maybe, in practice, this isn't even a problem?

    I'd say it's a small problem. Few people would use such a feature, but
    it would have a little usefulness for those people if it existed.
    Libraries like mpmath and SymPy provide this and can offer a big step
    up for those who are really concerned about exactness or accuracy, so
    there are already options for those who care. They are a lot slower
    than working with plain old floats, but on the other hand they offer
    vastly more than a math.cbrt function could offer to someone who needs
    something more accurate than x**(1/3).

    For those who are working with floats the compromise is clear: errors
    can accumulate in calculations. Taking the OPs example to the extreme,
    the largest result that does not overflow is:

    >>> (123456789. * 10**300) ** (1.0 / 3.0)
    4.979338592181679e+102

    Only the last 3 digits are incorrect so the error is still small. It
    is not hard to find other calculations where *all* the digits are
    wrong though:

    >>> math.cos(3)**2 + math.sin(3)**2 - 1
    -1.1102230246251565e-16

    So if you want to use floats then you need to learn to deal with this
    as appropriate for your use case. IEEE standards do their best to make
    results reproducible across machines as well as limiting avoidable
    local errors so that global errors in larger operations are *less
    likely* to dominate the result. Their guarantees are only local though
    so as soon as you have more complicated calculations you need your own
    error analysis somehow. IEEE guarantees are in that case also useful
    for those who actually want to do a formal error analysis.
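
    A tiny example of local rounding errors accumulating, and of one
    stdlib tool for keeping a summation in check:

    >>> import math
    >>> sum([0.1] * 10)        # ten correctly rounded additions, errors accumulate
    0.9999999999999999
    >>> math.fsum([0.1] * 10)  # compensated summation: correctly rounded total
    1.0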

    --
    Oscar

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Peter J. Holzer@21:1/5 to Oscar Benjamin on Sat Feb 18 12:16:01 2023
    On 2023-02-18 03:52:51 +0000, Oscar Benjamin wrote:
    On Sat, 18 Feb 2023 at 01:47, Chris Angelico <rosuav@gmail.com> wrote:
    On Sat, 18 Feb 2023 at 12:41, Greg Ewing via Python-list
    To avoid it you would need to use an algorithm that computes nth
    roots directly rather than raising to the power 1/n.


    It's somewhat curious that we don't really have that. We have many
    other inverse operations - addition and subtraction (not just "negate
    and add"), multiplication and division, log and exp - but we have exponentiation without an arbitrary-root operation. For square roots, that's not a problem, since we can precisely express the concept
    "raise to the 0.5th power", but for anything else, we have to raise to
    a fractional power that might be imprecise.

    Various libraries can do this. Both SymPy and NumPy have cbrt for cube roots:

    Yes, but that's a special case. Chris was talking about arbitrary
    (integer) roots. My calculator has a button labelled [x√y], but my
    processor doesn't have an equivalent operation. Come to think of it, it
    doesn't even have a y**x operation - just some simpler operations
    which can be used to implement it. GCC doesn't inline pow(y, x) on
    x86/64 - it just calls the library function.

    hp

  • From Oscar Benjamin@21:1/5 to Peter J. Holzer on Sat Feb 18 12:46:46 2023
    On Sat, 18 Feb 2023 at 11:19, Peter J. Holzer <hjp-python@hjp.at> wrote:

    On 2023-02-18 03:52:51 +0000, Oscar Benjamin wrote:
    On Sat, 18 Feb 2023 at 01:47, Chris Angelico <rosuav@gmail.com> wrote:
    On Sat, 18 Feb 2023 at 12:41, Greg Ewing via Python-list
    To avoid it you would need to use an algorithm that computes nth
    roots directly rather than raising to the power 1/n.


    It's somewhat curious that we don't really have that. We have many
    other inverse operations - addition and subtraction (not just "negate
    and add"), multiplication and division, log and exp - but we have exponentiation without an arbitrary-root operation. For square roots, that's not a problem, since we can precisely express the concept
    "raise to the 0.5th power", but for anything else, we have to raise to
    a fractional power that might be imprecise.

    Various libraries can do this. Both SymPy and NumPy have cbrt for cube roots:

    Yes, but that's a special case. Chris was talking about arbitrary
    (integer) roots. My calculator has a button labelled [x√y], but my processor doesn't have an equivalent operation.

    All three of SymPy, mpmath and gmpy2 can do this as accurately as
    desired for any integer root:

    >>> n = 123456789000000000000000000000000000000000000000000000000000000

    >>> sympy.root(n, 6)
    1000000000*13717421**(1/6)*3**(1/3)
    >>> sympy.root(n, 6).evalf(50)
    22314431635.562095902499928269233656421704825692573

    >>> mpmath.root(n, 6)
    mpf('22314431635.562096')
    >>> mpmath.mp.dps = 50
    >>> mpmath.root(n, 6)
    mpf('22314431635.562095902499928269233656421704825692572746')

    >>> gmpy2.root(n, 6)
    mpfr('22314431635.562096')
    >>> gmpy2.get_context().precision = 100
    >>> gmpy2.root(n, 6)
    mpfr('22314431635.56209590249992826924',100)

    There are also specific integer only root routines like
    sympy.integer_nthroot or gmpy2.iroot.

    >>> gmpy2.iroot(n, 6)
    (mpz(22314431635), False)
    >>> sympy.integer_nthroot(n, 6)
    (22314431635, False)
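
    For what it's worth, an integer n-th root can also be sketched in pure
    Python with an integer Newton iteration (a rough, unoptimised
    illustration; the library routines above are what you would use in
    practice):

    def iroot(n, k):
        """Largest integer r with r**k <= n, for n >= 0 and k >= 1 (sketch only)."""
        if n < 2:
            return n
        # Start above the true root, then take integer Newton steps for r**k - n.
        r = 1 << -(-n.bit_length() // k)
        while True:
            better = ((k - 1) * r + n // r ** (k - 1)) // k
            if better >= r:
                break
            r = better
        # Floor division can leave the result one step off; adjust to be safe.
        while r ** k > n:
            r -= 1
        while (r + 1) ** k <= n:
            r += 1
        return r

    >>> iroot(123456789 * 10**54, 6)   # same value as gmpy2.iroot / integer_nthroot above
    22314431635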

    Other libraries like the stdlib math module and numpy define some
    specific examples like cbrt or isqrt but not a full root or iroot.
    What is lacking is a plain 64-bit floating point routine like:

    def root(x: float, n: int) -> float:
        return x ** (1/n)  # except more accurate than this
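
    One rough sketch of how such a routine might do better than a bare
    x ** (1/n) (illustrative only; a real implementation would also worry
    about negative inputs, infinities and careful rounding):

    def root(x, n):
        """Approximate n-th root of x >= 0 (sketch only)."""
        if x == 0.0:
            return 0.0
        r = x ** (1.0 / n)            # first guess: the usual rounded-exponent power
        # One Newton step for f(r) = r**n - x removes most of the residual
        # error, typically leaving the result within an ulp or two.
        return r - (r - x / r ** (n - 1)) / n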

    It could be a good candidate for numpy and/or the math module. I just
    noticed from the docs that the math module has a cbrt function (new in
    3.11) that I didn't know about, which suggests that a root function
    might also be considered a reasonable addition in future. Similarly
    isqrt was new in 3.8 and it is not a big leap from there to see
    someone adding iroot.

    --
    Oscar

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)