Hi math wizards!
I have developed an extended-precision math package with 448 bits. The
basic operations are all written in assembly, and I am now approximately
twice as fast as MPFR.
BUT...
(yes there is always a "but" :-)
MPFR does more square roots than me per millisecond!
Surely MPFR has a better algorithm than my pathetic Newton iteration... I was
at 1720 square roots/ms, and MPFR is at 6899. I tried to improve it
with Halley's method and now I am at 2200/ms, still MUCH TOO SLOW.
Can you point me to some method that would be more appropriate?
Thanks in advance!
P.S.
Here are the timings for the 4 operations:
MPFR
The four operations with 5000000 iterations. Mon Jan 11 15:32:50 2021
Addition      : 2.2837e-05 ms.   (43788.58869 per ms)
Subtraction   : 1.39152e-05 ms.  (71863.86110 per ms)
Multiplication: 4.5159e-05 ms.   (22143.98016 per ms)
Mult. (int)   : 1.59318e-05 ms.  (62767.54667 per ms)
Division      : 0.00011619 ms.   (8606.57784 per ms)
Division(int) : 4.0373e-05 ms.   (24769.02881 per ms)
My software:
The four operations with 5000000 iterations. Mon Jan 11 15:35:28 2021
Addition      : 1.00136e-05 ms.  (99864.18471 per ms)
Subtraction   : 9.862e-06 ms.    (101399.31048 per ms)
Multiplication: 2.06244e-05 ms.  (48486.25899 per ms)
Mult. (int)   : 6.87133e-06 ms.  (145532.16261 per ms)
Squaring      : 2.18158e-05 ms.  (45838.33735 per ms)
Division      : 0.000120662 ms.  (8287.59960 per ms)
Division(int) : 4.5838e-05 ms.   (21815.96056 per ms)
Note that I also use Newton iteration for division... there I am almost as
slow as MPFR...
MPFR does more square roots than me per millisecond!
Surely MPFR has a better algo than my pathetic newton iteration...
Its algorithm is described here
https://www.mpfr.org/algorithms.pdf page 13. It refers to integer sqrt, which is described somewhere else...
And the source code is public.
I don't know about 448 bits, but the general idea is probably to distinguish even and odd exponents, take the integer square root of the significand using an extravagantly accurate but cheap to compute initial approximation to the integer square root, and then iterate a fixed number of times. (no need to test for convergence)
This may be the wrong news group to ask this question.
RJF
On 11/01/2021 at 21:27, Richard Fateman wrote:
MPFR does more square roots than me per millisecond!
Surely MPFR has a better algo than my pathetic newton iteration...
Its algorithm is described here
https://www.mpfr.org/algorithms.pdf page 13. It refers to integer sqrt, which is described somewhere else...
And the source code is public.
I don't know about 448 bits, but the general idea is probably to distinguish
even and odd exponents, take the integer square root of the significand using
an extravagantly accurate but cheap to compute initial approximation to the
integer square root, and then iterate a fixed number of times. (no need to test for convergence)
This may be the wrong news group to ask this question.
RJF
Thanks for your answer. I have tried several methods:
1. a simple Newton approximation
2. a more sophisticated root-finding method (also Newton-based)
They all work but are slow.
I started browsing the MPFR code and it is based on GMP, so their
methods aren't applicable to me since I use a fixed number of bits
(448). But browsing their code I found the code for their log function,
and they use the AGM (arithmetic-geometric mean), which I used long ago
but was stuck with problems in the last bits. I am using
log(x): let w = (x-1)/(x+1), then
log(x)/2 = w + (w^3)/3 + (w^5)/5 + (w^7)/7 ...
but I need around 80 iterations to get to 448 bits of precision... I
thought of somehow accelerating this by calculating the next step
symbolically, then deriving a faster formula for convergence, but I do not
know enough maths to do that.
Thanks for your answer
jacob
Is there a particular important calculation that requires exactly 448 bits?
RJF
perhaps https://www.sollya.org/ can help?
-JimC
On 15/01/2021 at 20:01, James Cloos wrote:
perhaps https://www.sollya.org/ can help?
-JimC
Thank you very much!
Very interesting
Richard Fateman wrote:
I have repeatedly posted one-line programs in Mathematica that lose
a decimal digit about every 3 times through a loop. The computation, repeated, is
x = 2*x - x
What was the rounding mode?
Round towards bullshit?
On Tuesday, January 19, 2021 at 11:22:47 PM UTC-8, Mostowski Collapse wrote:
What was the rounding mode?
Round towards bullshit?
I think that may be a fair description of the default (and only) rounding mode.
If you have a copy of Mathematica, try this
x=1.00000000000000000000
Do[x = 2*x - x, 120]
If[x==0.0,BS]
returns BS.
On 20/01/2021 at 16:42, Richard Fateman wrote:
I think that may be a fair description of the default (and only) rounding mode.
If you have a copy of Mathematica, try this
x=1.00000000000000000000
Do[x = 2*x - x, 120]
If[x==0.0,BS]
returns BS.
This program
1 #include "qhead.h"
2 int main(void)
3 {
4 Qfloat q[1],tmp[1];
5
6 asctoq("1.00000000000000000000",q,NULL);
7 for (int i=0; i<1200000; i++) {
8 qmul(q,qtwo,tmp); // q= q*2
9 qsub(q,tmp,q); // q = (q*2) - q
10 }
11 qprint(q);
12 }
prints (after 1 200 000 iterations)
1
The syntax above uses the raw library. With my compiler system
(lcc-win) you can use these floats as a primitive type, so lines 8-9
become:
9 q = (q*2) - q;
The difference is the use of the backtick ` at the end
of the real number.
This tells Mathematica to use machine precision, avoiding the issue.
x = 1.00000000000000000000;
Do[x = 2*x - x, 120];
Print[x]
0.*10^37
Did somebody already file a bug?
The result doesn't make any sense,
neither with radix=10 nor with radix=2.
Nasser M. Abbasi wrote:
x = 1.00000000000000000000;
Do[x = 2*x - x, 120];
Print[x]
0.*10^37
On Wednesday, January 20, 2021 at 12:24:51 PM UTC-8, Mostowski Collapse wrote:
Did somebody already file a bug? The result doesn't make any sense,
neither with radix=10 nor with radix=2.
This is supposedly a feature, not a bug. I reported it many years ago,
in a published review of Mathematica, circa 1992.
The value x has many interesting properties, including x==x+1; also x==1&&x==2 is True.
It is possible to counteract this phenomenon by explicit calls to SetPrecision,
but how are you supposed to know that? No other system that I am
aware of requires such a thing.
RJF
On Saturday, January 16, 2021 at 3:40:43 PM UTC-8, jacob navia wrote:
On 15/01/2021 at 20:01, James Cloos wrote:
perhaps https://www.sollya.org/ can help?
-JimC
Thank you very much!
Very interesting
The rationale for particular sizes of exponent and mantissa was a subject of discussion in the
IEEE floating-point committee. As I recall, the number of electrons in the universe is
not so relevant; I forget the details. Certainly 32 bits for the sign is excessive. Dunno
about the exponent. It would be nice if you also allowed for IEEE special items like
NaN, infinity, signed zero, etc.
If you are doing a sequence of floating-point computations in which each operation loses
some information to round-off, the usefulness of a particular length mantissa
depends on how long a sequence you have.
I have repeatedly posted one-line programs in Mathematica that lose
a decimal digit about every 3 times through a loop. The computation, repeated, is
x= 2*x-x
Richard Fateman <fateman@gmail.com> wrote:
I have repeatedly posted one-line programs in Mathematica that lose
a decimal digit about every 3 times through a loop. The computation, repeated, is
x= 2*x-x
This is a very artificial example. AFAICS x = 4*x(1 - 1) would be
a bit closer to real life. It naturally loses 1 bit of
accuracy every iteration, and with Mathematica probably 2 bits
per iteration.
antispam@math.uni.wroc.pl wrote:
This is a very artificial example. AFAICS x = 4*x(1 - 1) would be
a bit closer to real life. It naturally loses 1 bit of
accuracy every iteration, and with Mathematica probably 2 bits
per iteration.
I suspect a typo somewhere:
ITERATES(4*x*(1 - 1), x, 1, 10)
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]