>>> 123.456789 ** (1.0 / 3.0)
4.979338592181744
>>> 123456.789 ** (1.0 / 3.0)
49.79338592181744
>>> 123456789. ** (1.0 / 3.0)
497.9338592181743
>>> 123456789000. ** (1.0 / 3.0)
4979.338592181743
>>> 123456789000000. ** (1.0 / 3.0)
49793.38592181742
>>> 123456789000000000. ** (1.0 / 3.0)
497933.8592181741
>>> 123456789000000000000. ** (1.0 / 3.0)
4979338.59218174
>>> 123456789000000000000000. ** (1.0 / 3.0)
49793385.9218174
>>> 123456789000000000000000000. ** (1.0 / 3.0)
497933859.2181739
>>> 123456789000000000000000000000. ** (1.0 / 3.0)
4979338592.181739
>>> 123456789000000000000000000000000. ** (1.0 / 3.0)
49793385921.81738
>>> 123456789000000000000000000000000000. ** (1.0 / 3.0)
497933859218.1737
>>> 123456789000000000000000000000000000000. ** (1.0 / 3.0)
4979338592181.736
>>> 123456789000000000000000000000000000000000. ** (1.0 / 3.0)
49793385921817.36
>>> 123456789000000000000000000000000000000000000. ** (1.0 / 3.0)
497933859218173.56
>>> 123456789000000000000000000000000000000000000000. ** (1.0 / 3.0)
4979338592181735.0
>>> 123456789000000000000000000000000000000000000000000. ** (1.0 / 3.0)
4.979338592181734e+16
>>> 123456789000000000000000000000000000000000000000000000. ** (1.0 / 3.0)
4.979338592181734e+17
>>> 123456789000000000000000000000000000000000000000000000000000. ** (1.0 / 3.0)
4.979338592181732e+19
>>> 123456789000000000000000000000000000000000000000000000000000000. ** (1.0 / 3.0)
4.9793385921817313e+20
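The integer part of the sweep above can be reproduced, in Python 2 or 3, with a short loop:

```python
# Cube-root 123456789 followed by ever more zeros, using the same
# float expression as in the log; the printed digits drift as N grows.
for k in range(0, 55, 3):
    n = float(123456789 * 10 ** k)
    print(n ** (1.0 / 3.0))
```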
----------------------------------------------
Hi,
I have just produced the following log in IDLE (admittedly, in Python
2.7.10 and, yes, I know that it has been superseded).
It appears to show a precision tail-off as the supplied float gets bigger.
I have two questions:
1. Is there a straightforward explanation for this or is it a bug?
2. Is the same behaviour exhibited in Python 3.x?
For your information, the first 20 significant figures of the cube root in question are:
49793385921817447440
Stephen Tucker.
----------------------------------------------
> 123456789000000000000000000000000000000000000000000. ** (1.0 / 3.0)
> 4.979338592181734e+16
> [snip]
> I have two questions:
> 1. Is there a straightforward explanation for this or is it a bug?

To you 1/3 may be an exact fraction, and the definition of raising a
[snip]

> 2. Is the same behaviour exhibited in Python 3.x?

Yes. And Java, C++, and any other language that uses IEEE floating point.
On Tue, 14 Feb 2023 at 07:12, Stephen Tucker <stephen_tucker@sil.org> wrote:
> [snip]
> I have just produced the following log in IDLE (admittedly, in Python
> 2.7.10 and, yes, I know that it has been superseded).
> It appears to show a precision tail-off as the supplied float gets bigger.
> [snip]
> For your information, the first 20 significant figures of the cube root in
> question are:
> 49793385921817447440
> Stephen Tucker.
> ----------------------------------------------
> 123.456789 ** (1.0 / 3.0)
> 4.979338592181744
> 123456789000000000000000000000000000000000. ** (1.0 / 3.0)
> 49793385921817.36
You need to be aware that 1.0/3.0 is a float that is not exactly equal
to 1/3 ...
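Concretely, the exact value of the float 1.0/3.0 can be inspected with the standard library's fractions module:

```python
from fractions import Fraction

x = 1.0 / 3.0
# The exact rational value of the stored double - close to, but not, 1/3:
print(Fraction(x))
# The (tiny but nonzero) gap between true 1/3 and the stored value:
print(Fraction(1, 3) - Fraction(x))
```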
SymPy again:
In [37]: a, x = symbols('a, x')
In [38]: print(series(a**x, x, Rational(1, 3), 2))
a**(1/3) + a**(1/3)*(x - 1/3)*log(a) + O((x - 1/3)**2, (x, 1/3))
You can see that the leading relative error term from x being not
quite equal to 1/3 is proportional to the log of the base. You should
expect this difference to grow approximately linearly as you keep
adding more zeros in the base.
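That leading term is easy to estimate numerically; a rough sketch, using the exact gap between 1/3 and the float exponent:

```python
import math
from fractions import Fraction

# Exact gap between 1/3 and the float actually used as the exponent.
delta = float(Fraction(1, 3) - Fraction(1.0 / 3.0))

# Leading relative error ~ delta * log(a): it grows as zeros are appended.
for k in (0, 15, 30, 45):
    a = 123456789 * 10 ** k
    print(k, delta * math.log(a))
```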
>>> math.e ** (math.log(123456789000000000000000000000000000000000000000000) / 3)
4.979338592181741e+16
>>> 10 ** (math.log10(123456789000000000000000000000000000000000000000000) / 3)
4.979338592181734e+16
>>> 123456789000000000000000000000000000000000000000000 ** (1.0 / 3.0)
4.979338592181734e+16
>>> 123456789e42 ** (1.0 / 3.0)
4.979338592181734e+16
Thanks, one and all, for your responses.
This is a hugely controversial claim, I know, but I would consider this behaviour to be a serious deficiency in the IEEE standard.
Consider an integer N consisting of a finitely-long string of digits in
base 10.
Consider the infinitely-precise cube root of N (yes I know that it could never be computed unless N is the cube of an integer, but this is a mathematical argument, not a computational one), also in base 10. Let's
call it RootN.
Now consider appending three zeroes to the right-hand end of N (let's call
it NZZZ) and NZZZ's infinitely-precise cube root (RootNZZZ).
The *only* difference between RootN and RootNZZZ is that the decimal
point in RootNZZZ is one place further to the right than the decimal point
in RootN.
None of the digits in RootNZZZ's string should be different from the corresponding digits in RootN.
I rest my case.
Perhaps this observation should be brought to the attention of the IEEE. I would like to know their response to it.
Stephen Tucker.
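(As an aside, the digit-shift property does hold in exact integer arithmetic. A sketch with a hypothetical integer cube-root helper, Newton's method on Python ints, not anything from the thread:)

```python
def icbrt(n):
    """Floor of the cube root of a nonnegative int (Newton's method)."""
    if n == 0:
        return 0
    x = 1 << -(-n.bit_length() // 3)  # power of two >= cbrt(n)
    while True:
        y = (2 * x + n // (x * x)) // 3
        if y >= x:
            break
        x = y
    while x ** 3 > n:                 # guard against off-by-one
        x -= 1
    while (x + 1) ** 3 <= n:
        x += 1
    return x

n = 123456789 * 10 ** 42
print(icbrt(n))         # RootN's digits
print(icbrt(n * 1000))  # RootNZZZ: same leading digits, one more trailing digit
```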
On Thu, Feb 16, 2023 at 6:49 PM Peter Pearson <pkpearson@nowhere.invalid> wrote:
> On Tue, 14 Feb 2023 11:17:20 +0000, Oscar Benjamin wrote:
>> On Tue, 14 Feb 2023 at 07:12, Stephen Tucker <stephen_tucker@sil.org> wrote:
>>> [snip]
>>> I have just produced the following log in IDLE (admittedly, in Python
>>> 2.7.10 and, yes, I know that it has been superseded).
>>> It appears to show a precision tail-off as the supplied float gets bigger.
>>> [snip]
>>> For your information, the first 20 significant figures of the cube root in
>>> question are:
>>> 49793385921817447440
>>> Stephen Tucker.
>>> ----------------------------------------------
>>> 123.456789 ** (1.0 / 3.0)
>>> 4.979338592181744
>>> 123456789000000000000000000000000000000000. ** (1.0 / 3.0)
>>> 49793385921817.36
>>
>> You need to be aware that 1.0/3.0 is a float that is not exactly equal
>> to 1/3 ...
>> SymPy again:
>> In [37]: a, x = symbols('a, x')
>> In [38]: print(series(a**x, x, Rational(1, 3), 2))
>> a**(1/3) + a**(1/3)*(x - 1/3)*log(a) + O((x - 1/3)**2, (x, 1/3))
>> You can see that the leading relative error term from x being not
>> quite equal to 1/3 is proportional to the log of the base. You should
>> expect this difference to grow approximately linearly as you keep
>> adding more zeros in the base.
>
> Marvelous. Thank you.
>
> --
> To email me, substitute nowhere->runbox, invalid->com.

--
https://mail.python.org/mailman/listinfo/python-list
> This is a hugely controversial claim, I know, but I would consider this
> behaviour to be a serious deficiency in the IEEE standard.
> [snip]
> Perhaps this observation should be brought to the attention of the IEEE.
> I would like to know their response to it.
IEEE did not define a single standard for floating-point arithmetic. They
designed multiple standards, including a decimal floating-point one.
Although decimal floating-point (DFP) hardware used to be manufactured,
I couldn't find any current manufacturers.
On 2/17/23 03:27, Stephen Tucker wrote:
> Thanks, one and all, for your responses.
> This is a hugely controversial claim, I know, but I would consider this
> behaviour to be a serious deficiency in the IEEE standard.
No matter how you do it, there are always tradeoffs and inaccuracies
moving from real numbers in base 10 to base 2. That's just the nature of
the math. Any binary floating-point representation is going to have
problems.

Also, we weren't clear on this, but the IEEE standard is not just
implemented in software. It's the way your CPU represents floating-point
numbers in silicon, and in your GPUs (where speed is preferred to
precision). So it's not like Python could just arbitrarily do something
different unless you were willing to pay a huge penalty for speed.

> Perhaps this observation should be brought to the attention of the IEEE.
> I would like to know their response to it.

Rest assured, the IEEE committee that formalized the format decades ago
knew all about the limitations and trade-offs. Over the years CPUs have
increased in capacity, and now we can use 128-bit floating-point numbers,
which mitigate some of the accuracy problems by simply having more binary
digits. But the fact remains that some numbers with terminating decimal
expansions have non-terminating expansions in binary.
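The canonical small example of that:

```python
from fractions import Fraction

# 1/10 has no finite binary expansion, so the stored value is only close:
print(0.1 + 0.2 == 0.3)   # False
print(Fraction(0.1))      # the exact rational value actually stored
```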
On 2/17/23 5:27 AM, Stephen Tucker wrote:
[snip]

The key factor here is that IEEE floating point stores numbers in BINARY,
not DECIMAL, so a multiply by 1000 will change the representation of the
number, and thus the possible resolution errors.

Store your numbers in IEEE DECIMAL floating point, and the variations from
multiplying by powers of 10 go away. Python's decimal module is a complete
implementation of Mike Cowlishaw/IBM's General Decimal Arithmetic
Specification.
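A sketch with the standard library's decimal module (precision raised to 30 digits; note that the exponent here is still only a 30-digit approximation of 1/3, so the tail-off is pushed far to the right rather than eliminated):

```python
from decimal import Decimal, getcontext

getcontext().prec = 30
third = Decimal(1) / Decimal(3)   # a 30-digit decimal approximation of 1/3

# Multiplying a Decimal by a power of 10 is exact, unlike in binary.
for k in (0, 3, 6):
    n = Decimal(123456789) * 10 ** k
    print(n ** third)
```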
[...]
> Perhaps this observation should be brought to the attention of the IEEE. I
> would like to know their response to it.

That is why they have developed the Decimal Floating Point format, to
handle people with those sorts of problems. They just aren't common enough
for many things to have adopted the use of it.

And... this topic as a whole comes up over and over again, like everywhere.
On 18/02/23 7:42 am, Richard Damon wrote:
> On 2/17/23 5:27 AM, Stephen Tucker wrote:
>> None of the digits in RootNZZZ's string should be different from the
>> corresponding digits in RootN.
> Only if the storage format was DECIMAL.

Note that using decimal wouldn't eliminate this particular problem,
since 1/3 isn't exactly representable in decimal either.

To avoid it you would need to use an algorithm that computes nth
roots directly rather than raising to the power 1/n.
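One sketch of such a direct algorithm for floats (a hypothetical helper, not something from the thread): start from the pow estimate and polish it with a few Newton steps on r**n == x.

```python
def nthroot(x, n):
    """n-th root of a positive float: pow estimate plus Newton polishing."""
    r = x ** (1.0 / n)
    for _ in range(3):
        r -= (r - x / r ** (n - 1)) / n   # Newton step for f(r) = r**n - x
    return r

x = 123456789e42
print(x ** (1.0 / 3.0))   # tail-off visible in the last digits
print(nthroot(x, 3))      # Newton-polished cube root
```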
Every fall, the groups were again full of a new crop of people who had
just discovered all sorts of bugs in the way <software/hardware>
implemented floating point, and pointing them to a nicely written
document that explained it never did any good.
On Sat, 18 Feb 2023 at 12:41, Greg Ewing via Python-list
<python-list@python.org> wrote:
> On 18/02/23 7:42 am, Richard Damon wrote:
>> On 2/17/23 5:27 AM, Stephen Tucker wrote:
>>> None of the digits in RootNZZZ's string should be different from the
>>> corresponding digits in RootN.
>> Only if the storage format was DECIMAL.
>
> Note that using decimal wouldn't eliminate this particular problem,
> since 1/3 isn't exactly representable in decimal either.
>
> To avoid it you would need to use an algorithm that computes nth
> roots directly rather than raising to the power 1/n.
It's somewhat curious that we don't really have that. We have many
other inverse operations - addition and subtraction (not just "negate
and add"), multiplication and division, log and exp - but we have exponentiation without an arbitrary-root operation. For square roots,
that's not a problem, since we can precisely express the concept
"raise to the 0.5th power", but for anything else, we have to raise to
a fractional power that might be imprecise.
But maybe, in practice, this isn't even a problem?
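The square-root case really is exact for exact inputs, while the cube-root case is not:

```python
import math

# 0.5 is exactly representable, so perfect squares come out exact:
print(64.0 ** 0.5, math.sqrt(64.0))   # 8.0 8.0
# 1/3 is not, so even a perfect cube misses its exact root:
print(64.0 ** (1.0 / 3.0))            # 3.9999999999999996
```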
On Sat, 18 Feb 2023 at 01:47, Chris Angelico <rosuav@gmail.com> wrote:
> On Sat, 18 Feb 2023 at 12:41, Greg Ewing via Python-list wrote:
>> To avoid it you would need to use an algorithm that computes nth
>> roots directly rather than raising to the power 1/n.
>
> It's somewhat curious that we don't really have that. We have many
> other inverse operations - addition and subtraction (not just "negate
> and add"), multiplication and division, log and exp - but we have
> exponentiation without an arbitrary-root operation. For square roots,
> that's not a problem, since we can precisely express the concept
> "raise to the 0.5th power", but for anything else, we have to raise to
> a fractional power that might be imprecise.

Various libraries can do this. Both SymPy and NumPy have cbrt for cube roots:
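A minimal illustration (assuming NumPy is installed):

```python
import numpy as np

x = 123456789e42
print(x ** (1.0 / 3.0))   # pow with a rounded 1/3 exponent
print(np.cbrt(x))         # dedicated cube-root routine
```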
On 2023-02-18 03:52:51 +0000, Oscar Benjamin wrote:
> Various libraries can do this. Both SymPy and NumPy have cbrt for cube
> roots.

Yes, but that's a special case. Chris was talking about arbitrary
(integer) roots. My calculator has a button labelled [x√y], but my
processor doesn't have an equivalent operation.