This is a Monte Carlo program. You may not succeed with
the values of x and b given. You can change them as long
as b is not 0 or -2 .
In Forth Dimensions volume 6, number 6, Grossman published an article
about the Pollard method of factoring numbers.
See the excellent explanation via http://soton.mpeforth.com/flag/fd/index.html
He was obliged to implement quad-precision operators.
The example 4225910033 was at the limit, gaining a factor of
2 from using unsigned numbers. This took 40 seconds.
The same example takes less than 400 µs on my 64-bit AMD.
Moreover, this implementation can handle twice the number
of digits (64 bit signed) even though it has no need to resort
to double precision apart from the standard UM/MOD and UM* .
\ ----------------------------------
\ Pollard (rho) factoring method
: GCD ( a b -- gcd ) BEGIN OVER MOD DUP WHILE SWAP REPEAT DROP ;
VARIABLE n                \ the number to be factored
VARIABLE b 1 b !          \ additive constant in x*x+b
VARIABLE x 6 x !          \ starting value x0
: next ( x -- x' ) DUP UM* n @ UM/MOD DROP b @ + ;  \ x' = x*x mod n, plus b
: try ( x y -- x' y'' ) SWAP next SWAP next next 2DUP = ABORT" failure" ;
: factorize ( u -- factor ) DUP n ! x @ DUP next
  BEGIN 2DUP - ABS n @ GCD DUP 1 = WHILE DROP try REPEAT
  NIP NIP GCD ;
\ REGRESS 4294967297 factorize S: 641
\ REGRESS 4225910033 factorize S: 65011
\ REGRESS 1,000,000,007 DUP * factorize S: 1,000,000,007
\ ----------------------------------
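For readers who do not speak Forth, here is a line-for-line Python port of the words above (the port and its names are mine; it is the Floyd tortoise-and-hare variant of Pollard rho with iteration f(x) = x*x + b mod n):

```python
from math import gcd

def factorize(n, x0=6, b=1):
    """Python port of the Forth words above: iterate f(x) = x*x + b (mod n)
    with a tortoise (one step per turn) and a hare (two steps per turn);
    the first nontrivial gcd(|x - y|, n) is a factor of n."""
    f = lambda x: (x * x + b) % n      # the word 'next'
    x, y = x0, f(x0)                   # x @ DUP next
    while True:
        g = gcd(abs(x - y), n)         # 2DUP - ABS n @ GCD
        if g == n:
            raise RuntimeError("failure")  # tortoise met hare: try other x0/b
        if g > 1:
            return g
        x, y = f(x), f(f(y))           # the word 'try'
```

With the default x0 = 6 and b = 1 this reproduces the REGRESS examples, e.g. factorize(4294967297) gives 641.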
The REGRESS lines are tests or examples.
On Wednesday, December 7, 2022 at 3:18:01 PM UTC+1, none albert wrote:
[..]
This is a Monte Carlo program. You may not succeed with
the values of x and b given. You can change them as long
as b is not 0 or -2 .
FORTH> 1111 factorize . 11 ok
FORTH> 1111 11 /mod . . 101 0 ok
FORTH> 11111 dup factorize /mod . . 271 0 ok
FORTH> 111111 dup factorize /mod . . 10101 0 ok
FORTH> 1111111 dup factorize /mod . . 4649 0 ok
FORTH> 11111111 dup factorize /mod . . 1010101 0 ok
FORTH> 111111111 dup factorize /mod . . 12345679 0 ok
FORTH> 1111111111 dup factorize /mod . . 101010101 0 ok
FORTH> 11111111111 dup factorize /mod . . 513239 0 ok
FORTH> 111111111111 dup factorize /mod . . 10101010101 0 ok
FORTH> 1111111111111 dup factorize /mod . . 20964360587 0 ok
FORTH> 11111111111111 dup factorize /mod . . 1010101010101 0 ok
FORTH> 111111111111111 dup factorize /mod . . 3584229390681 0 ok
FORTH> 1111111111111111 dup factorize /mod . . 101010101010101 0 ok
FORTH> 11111111111111111 dup factorize /mod . . 5363222357 0 ok
FORTH> 111111111111111111 dup factorize /mod . . 10101010101010101 0 ok
FORTH> 1111111111111111111 dup factorize /mod . .
Hmmm, apparently it's not only 0 and -2 that are special ...
-marcel
Pollard rho will always give an answer if increasing start values at failure.
https://forthmath.blogspot.com/2020/01/about-pollard-rho.html
lehs wrote on Thursday, December 8, 2022 at 10:07:20 UTC+1:
Pollard rho will always give an answer if increasing start values at failure.
https://forthmath.blogspot.com/2020/01/about-pollard-rho.html
Thanks! I did not see "the tree in the forest". I am now gathering that with the unchanged
function (x^2 + 1) mod n, simply stepping up x0 after failure guarantees a factorization.
So after at most 2 rounds there is a result, correct?
BTW I like your web page, I did not know it existed.
You may have to increase several times.
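The strategy being discussed here — keep f(x) = x^2 + 1 fixed and step the start value up on failure — can be sketched in Python like this (the function names are mine, for illustration):

```python
from math import gcd

def rho(n, x0, b=1):
    """One Pollard-rho attempt; returns a nontrivial factor of n,
    or None when the tortoise and hare meet without finding one."""
    f = lambda x: (x * x + b) % n
    x, y = x0, f(x0)
    while True:
        g = gcd(abs(x - y), n)
        if g == n:
            return None          # cycle closed: this start value failed
        if g > 1:
            return g
        x, y = f(x), f(f(y))

def factorize_retry(n, tries=20):
    """Keep the function x^2 + 1 unchanged and step x0 up after each
    failure.  One retry is often enough, but it may take several."""
    for x0 in range(2, 2 + tries):
        g = rho(n, x0)
        if g is not None:
            return g
    return None   # all start values failed (or n is prime)
```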
"minf...@arcor.de" <minf...@arcor.de> writes:I guess 64 bit singles is close to the limit for Pollard rho.
You are probably more versed in mathematics than I am,Pollard's rho is conceptually simple but runs out of gas sooner than
so what do you suggest: simply stepping parameters up, random
generation, or?
other, fancier algorithms do. I think the satisfying thing to do is
implement the fancier algorithms. I confess I've implemented Pollard
Rho (not in Forth) but haven't gone further.
Neal Koblitz's book "A Course in Number Theory and Cryptography" has a
good introduction to the subject, up through the Quadratic Sieve (QS).
It unfortunately doesn't discuss the Number Field Sieve.
You are probably more versed in mathematics than I am,
so what do you suggest: simply stepping parameters up, random
generation, or?
"minf...@arcor.de" <minforth@arcor.de> writes:
You are probably more versed in mathematics than I am,
so what do you suggest: simply stepping parameters up, random
generation, or?
Pollard's rho is conceptually simple but runs out of gas sooner than
other, fancier algorithms do. I think the satisfying thing to do is
implement the fancier algorithms. I confess I've implemented Pollard
Rho (not in Forth) but haven't gone further.
Neal Koblitz's book "A Course in Number Theory and Cryptography" has a
good introduction to the subject, up through the Quadratic Sieve (QS).
It unfortunately doesn't discuss the Number Field Sieve.
I can't fathom that you think that I present here [Pollard rho] the
end-all of factorization algorithms.
Implementing fancier algorithms is not satisfying at all. You soon
realize that you need to be a good mathematician (I think I am) but
that you need to invest years of your life to get to the frontier.
I challenge you to present an algorithm that has a better
performance to conciseness ratio than what I presented here.
albert@cherry.(none) (albert) writes:
I can't fathom that you think that I present here [Pollard rho] the
end-all of factorization algorithms.
Of course I don't think that. It's a nice simple way to factor numbers
that are a little too big for trial division.
Implementing fancier algorithms is not satisfying at all. You soon
realize that you need to be a good mathematician (I think I am) but
that you need to invest years of your life to get to the frontier.
The book I mentioned discusses ECM and the Quadratic Sieve in addition
to the rho method and a few others. Those aren't the frontier and don't
take years to understand or implement, given a reasonable math
background (a long way from the frontier). The remaining algorithm is
the NFS, which I don't understand, but which doesn't seem to be alien
technology either.
I challenge you to present an algorithm that has a better
performance to conciseness ratio than what I presented here.
I agree that the fancier methods take more code. But they have better
asymptotic performance, so you can factor bigger numbers with them.
The question is, when do numbers start becoming "bigger numbers"?
When I start thinking about elliptic curve methods for factorization e.g. https://github.com/sethtroisi/gmp-ecm
and doing it in Forth, the lack of bignum math becomes apparent. Instead
of reinventing wheels, bignum library support a la GMP MP is required
plus an adequate I/O and API interface on Forth system side.
But well, this would still be far away from the alleged frontier I fear .. ;-)
On Saturday, December 10, 2022 at 9:07:15 AM UTC+1, minf...@arcor.de wrote:
[..]
The question is, when do numbers start becoming "bigger numbers"?
I started out with iSPICE having arbitrary precision, but found out that there
are fundamental numerical decisions made in the SPICE foundations
that make that not a solution to anything. It is still possible to set a compile-time
constant to get 80-bit precision (native FPU format), but the only noticeable
difference is a 2x slowdown.
Marcel Hendrix wrote on Saturday, December 10, 2022 at 12:34:22 UTC+1:
[..]
BTW there is also that funny double-double format that uses pairs of normal
_float64 numbers - f.ex. available for Julia and Python. It might present an interesting
exercise for Forth too, because it could be implemented with the already existing
fp-stack infrastructure.
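As an aside, the double-double idea rests on error-free transformations such as Knuth's two-sum; a minimal Python sketch (the names and API here are my own illustration, not an existing library):

```python
def two_sum(a, b):
    """Knuth's error-free transformation: returns (s, e) with
    s = fl(a + b) and a + b = s + e exactly (both floats)."""
    s = a + b
    bb = s - a
    e = (a - (s - bb)) + (b - bb)
    return s, e

def dd_add(xh, xl, yh, yl):
    """Add two double-double numbers (xh + xl) + (yh + yl),
    renormalizing so the result is again a (hi, lo) pair."""
    s, e = two_sum(xh, yh)
    e += xl + yl
    hi, lo = two_sum(s, e)
    return hi, lo
```

The low word captures the rounding error that a plain float64 addition would silently discard.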
"minf...@arcor.de" <minforth@arcor.de> writes:
The question is, when do numbers start becoming "bigger numbers"?
When I start thinking about elliptic curve methods for factorization e.g.
https://github.com/sethtroisi/gmp-ecm
To factor the number N by trial division, you need around sqrt(N)
operations. The rho method takes around N^(1/4), so you can probably
get out to 35 or 40 decimal digits with it if you can let it run for a
while. State of the art methods can do maybe 150 digits on a small
cluster of workstations. I believe the record was a big distributed
computation that took some months to factor a number of around 230
digits.
I agree that trying to do it in Forth probably gets in the way of the
math. You could look at the MIRACL library, which implements a bunch of
those algorithms, up through the Quadratic Sieve (QS) but not the Number
Field Sieve (NFS), IIRC. https://github.com/miracl/MIRACL
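The sqrt(N) vs. N^(1/4) comparison above can be made concrete with a small instrumented Python sketch (the operation counters and the semiprime chosen below are my own illustration, not from the thread):

```python
from math import gcd, isqrt

def trial_division_ops(n):
    """Count trial divisions until the smallest factor of n is found."""
    ops = 0
    for d in range(2, isqrt(n) + 1):
        ops += 1
        if n % d == 0:
            return d, ops
    return n, ops   # no divisor found: n is prime

def rho_ops(n, x0=2, b=1):
    """Count Pollard-rho iterations of f(x) = x*x + b (mod n)."""
    f = lambda x: (x * x + b) % n
    x, y, ops = x0, f(x0), 0
    while True:
        ops += 1
        g = gcd(abs(x - y), n)
        if 1 < g < n:
            return g, ops
        if g == n:
            return None, ops   # this start value failed
        x, y = f(x), f(f(y))
```

For the semiprime 10007 * 10009, trial division needs about 10^4 divisions, while rho typically finds a factor in on the order of 10^2 iterations.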
"minf...@arcor.de" <minf...@arcor.de> writes:
The question is, when do numbers start becoming "bigger numbers"?To factor the number N by trial division, you need around sqrt(N)
When I start thinking about elliptic curve methods for factorization e.g. https://github.com/sethtroisi/gmp-ecm
operations. The rho method takes around 4throot(N), so you can probably
get out to 35 or 40 decimal digits with it if you can let it run for a
while. State of the art methods can do maybe 150 digits on a small
cluster of workstations. I believe the record was a big distributed computation that took some months to factor a number of around 230
digits.
I agree that trying to do it in Forth probably gets in the way of the
math. You could look at the MIRACL library, that implements a bunch of
those algorthms, up through the Quadratic Sieve (QS) but not the Number
Field Sieve (NFS), iirc. https://github.com/miracl/MIRACL
We have implemented the Multiple Polynomial Quadratic Sieve on
transputers (1996).
It was more a demonstration of parallelism, showing
the solutions found by the transputers in a triangular display
indicated by colors. However, the primes involved in the sieve
are relatively small and can be handled by 32-bit processors,
such as the transputers.
To be honest, I borrowed the polynomials from ucbasic and understood
them poorly.
(3 transputers sieving in parallel)
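The idea at the heart of any quadratic sieve, including the multiple-polynomial variant, is the congruence of squares; here is a deliberately naive Python illustration (a Fermat-style search of my own, not the MPQS code being discussed):

```python
from math import gcd, isqrt

def congruence_of_squares_factor(n):
    """Find x with x^2 - n a perfect square y^2; then x^2 = y^2 (mod n)
    and gcd(x - y, n) is a factor of n.  QS/MPQS obtain such congruences
    far more cleverly, by sieving and combining many small relations
    over a factor base of small primes."""
    x = isqrt(n)
    if x * x < n:
        x += 1                    # start at ceil(sqrt(n))
    while True:
        y2 = x * x - n
        y = isqrt(y2)
        if y * y == y2:           # x^2 - n is a perfect square
            return gcd(x - y, n)
        x += 1
```

This naive search is only fast when the two factors are close together (e.g. 101 * 103); the sieve is what makes the idea scale.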
On Monday, December 12, 2022 at 11:46:07 AM UTC+1, none albert wrote:
[..]
We have implemented the Multiple Polynomial Quadratic Sieve on
transputers. (1996)
It was more a demonstration of parallelism showing
the solutions found by the transputer in a triangular display
indicated by colors. However the primes involved in the sieve
are relatively small and can be handled by 32 bit processors,
such as the transputers.
To be honest I borrowed the polynomials from ucbasic and understood
them poorly.
(3 transputers sieving in parallel)
According to the original source code, mpqss/config.frt,
\ The following defines the number of primes contained in the bit table
\ For a 32 bit processor multiply by 32.
\ The number of primes is related to the maximum prime MaxP
#40 =: MaxB \ Length in CELLs of a bit array
\ >> This may no longer be applicable to the new Vnumber system. <<
\ We want to handle 100 digit numbers, Maxv =20 allows squaring
\ #20 =: MaxV \ Length in CELLs of Vnumber
#40 =: MaxV \ Length in CELLs of Vnumber
So we used 1280 bits arbitrary precision to factor 100-digit numbers.
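The size claim checks out with quick arithmetic (a sketch, assuming 32-bit cells as the config comments state):

```python
# MaxV = 40 cells of 32 bits each
bits = 40 * 32                  # 1280 bits of precision
# log10(2) ~ 0.30103, so that is roughly 385 decimal digits,
digits = int(bits * 0.30103)
# comfortably enough for a 100-digit number (~333 bits)
# and even for its square (~665 bits).
assert (10**100) ** 2 < 2**bits
```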
-marcel