I've parallelized my Sieve of Eratosthenes across all cores. The
parallel threads do all the punching of the non-prime multiples beyond
sqrt( max ). I found that the algorithm is limited by memory bandwidth,
since the bit range between sqrt( max ) and max is too large to fit
inside the cache. So I partitioned each thread's work into a number of
bit ranges that fit inside the L2 cache. With that I got a speedup of
factor 28 when calculating about the first 210 million primes, i.e.
all primes <= 2 ^ 32.
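For reference, the cache-blocking idea can be sketched in a few lines
of single-threaded C++ (my own minimal reconstruction, not the actual
program; the segment size and all names are illustrative):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Segmented sieve: punch multiples in [sqrt(max)+1, max] one block at a
// time, each block small enough to stay resident in L2.
std::vector<uint64_t> segmented_sieve(uint64_t max, size_t segmentBytes = 1u << 18)
{
    uint64_t root = (uint64_t)std::sqrt((double)max);
    // Base primes up to sqrt(max) via a plain sieve.
    std::vector<char> base(root + 1, 1);
    std::vector<uint64_t> primes;
    for (uint64_t p = 2; p <= root; ++p)
        if (base[p]) {
            primes.push_back(p);
            for (uint64_t m = p * p; m <= root; m += p) base[m] = 0;
        }
    std::vector<uint64_t> all = primes;
    std::vector<char> seg(segmentBytes);
    for (uint64_t low = root + 1; low <= max; low += segmentBytes) {
        uint64_t high = std::min<uint64_t>(max, low + segmentBytes - 1);
        std::fill(seg.begin(), seg.end(), 1);
        for (uint64_t p : primes) {
            uint64_t start = ((low + p - 1) / p) * p; // first multiple >= low
            for (uint64_t m = start; m <= high; m += p) seg[m - low] = 0;
        }
        for (uint64_t i = low; i <= high; ++i)
            if (seg[i - low]) all.push_back(i);
    }
    return all;
}
```

The real program additionally packs the flags into bits and hands the
segments to the punching threads; the sketch only shows the blocking.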
On my Zen4 16 core PC under Windows with clang-cl 16.0.5, producing
the primes without any file output takes only 0.28s. On my Linux PC
with a 64 core Zen2 CPU, producing the same number of primes takes
about 0.09s.
The file output is done with my own buffering. Writing the mentioned
prime number range results in a 2.2GB file, which is written in about
two seconds on a PCIe 4.0 SSD.
The first parameter is the highest prime candidate to be generated;
it can be prefixed with 0x for hex values. The second parameter is the
name of the output file; it can be "" to suppress file output. The
third parameter is the number of punching threads; if it is omitted,
the number of threads defaults to the number of threads reported by
the runtime.
On 10.12.2023 at 22:48, Vir Campestris wrote:
- You don't need to store the odd numbers
All primes except 2 are odd.
- You only need a bitmask to save the sieve.
I'm using the "scan" lambda, which searches through the sieve
as fast as possible with countr_zero, which maps to a single CPU
instruction on modern processors.
... reading that right you've assumed that unsigned is the same
size as size_t, and that the size is 64 bits.
I've got a type word_t that is a size_t, but it can also
be a uint8_t to test what the performance difference is.
Well, I went away and tried it. I didn't write anything nearly as
sophisticated as yours - I didn't worry about threads and caching.
The code follows. You'll have to unwrap it in a few places.
It occurred to me I could go further - not just optimise it out to store
odd numbers only, but also miss out the ones divisible by 3, or other
higher primes. The results were:
- Storing all numbers: 9.494 seconds to run the sieve
- Storing odd numbers only: 4.455. That's over twice as fast.
- Excluding also the ones divisible by 3: 3.468. That's an improvement,
but not as much as I expected. Perhaps it's got too complicated. I
don't have a high-powered PC.
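The odd-only representation measured above can be sketched like this
(my illustration, assuming entry i stands for the odd number 2*i+1;
not the code posted in the thread):

```cpp
#include <cstdint>
#include <vector>

// Odd-only sieve: entry i represents 2*i + 1, halving memory traffic
// versus storing a flag for every integer.
std::vector<uint64_t> odd_sieve(uint64_t max)
{
    std::vector<char> odd((max + 1) / 2, 1); // odd[i] <-> 2*i + 1
    std::vector<uint64_t> primes;
    if (max >= 2) primes.push_back(2);       // 2 is the only even prime
    for (uint64_t p = 3; p <= max; p += 2) {
        if (!odd[p / 2]) continue;
        primes.push_back(p);
        for (uint64_t m = p * p; m <= max; m += 2 * p) // skip even multiples
            odd[m / 2] = 0;
    }
    return primes;
}
```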
On 11.12.2023 at 18:12, Vir Campestris wrote:
Because all primes except 2 are odd you don't need to store the _even_
numbers, which is what I meant to say. Or else you only need to store
the odd ones.
I'll try that in the next few days, but I don't expect a significant change.
Your constants are 64 bit hexadecimal numbers (from counting the
digits) and are unsigned (the u suffix) but there's no guarantee that
size_t will be the same size as unsigned, nor that it will be 64 bit.
The constants are wide enough for 64 bit size_t-s, but it won't make
a functional difference if you define word_t as a uint8_t. With a
uint8_t word_t the code runs about one sixth slower; that's less than
I expected.
On 11/12/2023 17:19, Bonita Montero wrote:
On 11.12.2023 at 18:12, Vir Campestris wrote:
Because all primes except 2 are odd you don't need to store the
_even_ numbers, which is what I meant to say. Or else you only need
to store the odd ones.
I'll try that in the next few days, but I don't expect a significant
change.
Well, I went away and tried it. I didn't write anything nearly as
sophisticated as yours - I didn't worry about threads and caching.
The code follows. You'll have to unwrap it in a few places.
It occurred to me I could go further - not just optimise it out to store
odd numbers only, but also miss out the ones divisible by 3, or other
higher primes. The results were:
- Storing all numbers: 9.494 seconds to run the sieve
- Storing odd numbers only: 4.455. That's over twice as fast.
- Excluding also the ones divisible by 3: 3.468. That's an improvement,
but not as much as I expected. Perhaps it's got too complicated. I
don't have a high-powered PC.
Now the code runs in 0.15s on the 16 core Zen4 system; that's 87%
faster. On the 64 core Zen2 system the time to produce the first 210
million primes is about 0.05 seconds.
On 12/13/2023 7:16 AM, Vir Campestris wrote:
On 11/12/2023 17:19, Bonita Montero wrote:
On 11.12.2023 at 18:12, Vir Campestris wrote:
Because all primes except 2 are odd you don't need to store the
_even_ numbers, which is what I meant to say. Or else you only
need to store the odd ones.
I'll try that in the next few days, but I don't expect a significant change.
Well, I went away and tried it. I didn't write anything nearly as
sophisticated as yours - I didn't worry about threads and
caching. The code follows. You'll have to unwrap it in a few places.
It occurred to me I could go further - not just optimise it out to
store odd numbers only, but also miss out the ones divisible by 3,
or other higher primes. The results were:
- Storing all numbers: 9.494 seconds to run the sieve
- Storing odd numbers only: 4.455. That's over twice as fast.
- Excluding also the ones divisible by 3: 3.468. That's an
improvement, but not as much as I expected. Perhaps it's got too
complicated. I don't have a high-powered PC.
Another way to attempt to optimize is that except for 2 and 3, all
primes are of the form 6n+1 or 6n-1.
On my system my code takes about 20 seconds to produce 1e9
primes. [...]
A more compact form is to consider numbers mod 30, in which case
numbers that are 0 mod {2, 3, or 5} can be ignored. Conveniently
there are just 8 such values - 1, 7, 11, 13, 17, 19, 23, and 29 -
so each block of 30 can be represented in one 8-bit byte. Doing
that gives a 25% reduction in space compared to a 6n+/-1 scheme.
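The mod-30 packing can be illustrated with a pair of constexpr helpers
(my sketch of the indexing scheme, not code from the thread):

```cpp
#include <cstdint>

// Mod-30 wheel: each byte covers 30 integers; bit k of byte i
// corresponds to the number 30*i + wheel[k].
constexpr uint8_t wheel[8] = {1, 7, 11, 13, 17, 19, 23, 29};

// Map a residue mod 30 to its bit position, or 8 if the residue is
// divisible by 2, 3, or 5 (i.e. not represented at all).
constexpr uint8_t bit_of(uint32_t residue)
{
    for (uint8_t k = 0; k < 8; ++k)
        if (wheel[k] == residue) return k;
    return 8;
}

// Global bit index of n inside the packed bitmap (assumes gcd(n,30)==1).
constexpr uint64_t wheel_index(uint64_t n)
{
    return (n / 30) * 8 + bit_of((uint32_t)(n % 30));
}
```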
Vir Campestris <vir.campestris@invalid.invalid> writes:
On my system my code takes about 20 seconds to produce 1e9
primes. [...]
Do you mean 1e9 primes or just the primes less than 1e9? To
do the first 1e9 primes a sieve would need to go up to about
23.9e9 (so half that many bits if only odd numbers were
represented).
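The 23.9e9 figure is consistent with the standard upper bound
p_n < n (ln n + ln ln n) for the n-th prime (valid for n >= 6); a
quick numeric check:

```cpp
#include <cmath>

// Rosser-style upper bound for the n-th prime, n >= 6:
// p_n < n * (ln n + ln ln n).
double nth_prime_bound(double n)
{
    double l = std::log(n);
    return n * (l + std::log(l));
}
```

For n = 1e9 this evaluates to roughly 23.75e9, so a sieve for the
first 1e9 primes must indeed extend well past 2e10.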
On 23/12/2023 18:30, Tim Rentsch wrote:
A more compact form is to consider numbers mod 30, in which case
numbers that are 0 mod {2, 3, or 5} can be ignored. Conveniently
there are just 8 such values - 1, 7, 11, 13, 17, 19, 23, and 29 -
so each block of 30 can be represented in one 8-bit byte. Doing
that gives a 25% reduction in space compared to a 6n+/-1 scheme.
I found that on my system the modulo operation was so slow this wasn't
worth doing.
On 23/12/2023 18:21, Tim Rentsch wrote:
Vir Campestris <vir.campestris@invalid.invalid> writes:
On my system my code takes about 20 seconds to produce 1e9
primes. [...]
Do you mean 1e9 primes or just the primes less than 1e9? To
do the first 1e9 primes a sieve would need to go up to about
23.9e9 (so half that many bits if only odd numbers were
represented).
Primes up to 1e9.
I have another idea though, watch this space...
On 12/26/2023 1:27 AM, Bonita Montero wrote:
On 26.12.2023 at 06:39, Chris M. Thomasson wrote:
Remember when Intel first started hyperthreading and god damn threads
could false share with each other (on the stacks!) the low and high 64
byte parts of the 128 byte cache lines? A workaround was to
artificially offset the threads' stacks via alloca.
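The workaround described here looked roughly like this (a hedged
sketch; the 1 KiB stride per thread is illustrative, not Intel's
exact recommendation):

```cpp
#include <alloca.h>   // GCC/Clang; on MSVC use <malloc.h> and _alloca
#include <cstring>
#include <functional>
#include <thread>
#include <vector>

// Stagger each thread's stack pointer by a per-thread offset so the
// hot stack frames of sibling hyperthreads land in different cache
// sets instead of aliasing each other.
void worker(unsigned id, long long &out)
{
    void *pad = alloca((id + 1) * 1024); // shift the stack pointer
    std::memset(pad, 0, 64);             // touch it so it isn't optimized away
    long long sum = 0;                   // hot locals now sit at staggered addresses
    for (int i = 0; i < 1000; ++i) sum += i;
    out = sum;
}

long long run_staggered(unsigned nthreads)
{
    std::vector<long long> results(nthreads);
    std::vector<std::thread> ts;
    for (unsigned i = 0; i < nthreads; ++i)
        ts.emplace_back(worker, i, std::ref(results[i]));
    for (auto &t : ts) t.join();
    long long total = 0;
    for (long long r : results) total += r;
    return total;
}
```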
If you have one core and two threads there's no false sharing.
So, are you familiar with Intel's early hyperthreading problem? There
was false sharing between the hyperthreads. The workaround did improve
performance by quite a bit. IIRC, my older appcore project had this
workaround incorporated into its logic. I wrote that sucker back in
the very early 2000s. Hmm... I will try to find the exact line.
On 27.12.2023 at 06:59, Chris M. Thomasson wrote:
Something about false interference between threads that should not even
be interacting with one another to begin with. It was a problem. The fix
was based on alloca, so that is something to ponder on.
Like with any SMT core there could be cache thrashing between the
hardware threads. The L1 data cache was only 8kB, and maybe only
two-way associative, so it could thrash between the threads. But I've
no clue what this would have to do with alloca().
On 29.12.2023 at 05:58, Chris M. Thomasson wrote:
On 12/28/2023 7:17 PM, Bonita Montero wrote:
On 29.12.2023 at 00:38, Chris M. Thomasson wrote:
The use of alloca to try to get around the problem in their
(Intel's) early hyperthreaded processors was real, and actually
helped. It was in the Intel docs.
I don't believe it.
Why not?
Because I don't understand what's different with the access pattern
of alloca() and usual stack allocation. And if I google for "alloca
Pentium 4 site:intel.com" I can't find anything that fits.
Depending on how the code is written, no modulo operations need
to be done, because they will be optimized away and done at
compile time. If we look at multiplying two numbers represented
by bits in bytes i and j, the two numbers are
i*30 + a
j*30 + b
for some a and b in { 1, 7, 11, 13, 17, 19, 23, 29 }.
The values we're interested in are the index of the product and
the residue of the product, namely
(i*30+a) * (j*30+b) / 30 (for the index)
(i*30+a) * (j*30+b) % 30 (for the residue)
Any term with a *30 in the numerator doesn't contribute to
the residue, and also can be combined with the by-30 divide
for computing the index. Thus these expressions can be
rewritten as
i*30*j + i*b + j*a + (a*b/30) (for the index)
a*b%30 (for the residue)
When a and b have values that are known at compile time,
neither the divide nor the remainder result in run-time
operations being done; all of that heavy lifting is
optimized away and done at compile time. Of course there
are some multiplies, but they are cheaper than divides, and
also can be done in parallel. (The multiplication a*b also
can be done at compile time.)
The residue needs to be turned into a bit mask to do the
logical operation on the byte of bits, but here again that
computation can be optimized away and done at compile time.
Does that all make sense?
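Concretely, the folding described above can be seen in a small
template (my illustration): with a and b as compile-time constants,
a*b/30 and a*b%30 become literals after constant folding.

```cpp
#include <cstdint>
#include <utility>

// Index and residue of the product (i*30+A) * (j*30+B), with the
// residues A and B fixed at compile time; the divide and modulo on
// A*B fold away, leaving only multiplies and adds at run time.
template <uint64_t A, uint64_t B>
constexpr std::pair<uint64_t, uint64_t> product_pos(uint64_t i, uint64_t j)
{
    uint64_t index   = i * 30 * j + i * B + j * A + (A * B) / 30;
    uint64_t residue = (A * B) % 30;
    return {index, residue};
}
```

A full sieve would instantiate this for all 8 x 8 residue pairs, so
every divide and modulo in the derivation happens at compile time.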
On 12/29/2023 2:09 PM, Chris M. Thomasson wrote:
On 12/29/2023 8:01 AM, Scott Lurndal wrote:
page 2-35 in the_Intel Pentium 4 Processor Optimization_
manual.
I think it was in chapter 5 of Developing Multithreaded Applications:
A Platform Consistent Approach; I cannot remember the damn section
right now.
Wait a minute! I might have found it, let's see:
https://www.intel.com/content/dam/www/public/us/en/documents/training/developing-multithreaded-applications.pdf
Ahhh section 5.3! Nice! I read this a while back, before 2005.
On 29.12.2023 at 23:12, Chris M. Thomasson wrote:
Wait a minute! I might have found it, let's see:
https://www.intel.com/content/dam/www/public/us/en/documents/training/developing-multithreaded-applications.pdf
Ahhh section 5.3! Nice! I read this a while back, before 2005.
What it says there is nonsense if the cache is really four-way
associative, as in the other paper mentioned here. And of course
that has nothing to do with false sharing.
On 30.12.2023 at 05:56, Kaz Kylheku wrote:
If it's four-way associative, you start to have a performance problem
as soon as five things collide on it. ...
With four-way associativity that's rather unlikely.
On 29.12.2023 at 13:56, David Brown wrote:
There is nothing different from alloca() and ordinary stack
allocations. But alloca() makes it quick and easy to make large
allocations, and to do so with random sizes (or at least sizes that
differ significantly between threads, even if they are running the
same code).
I've got my own class overflow_array<> which is like an array<> plus
a vector<>. If you append more than the internal array<> can handle,
the objects are moved to an internal vector. I think Boost's
small_vector is similar to that.
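Such a class might be sketched as follows (my reconstruction of the
idea, not the actual overflow_array<> code):

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Small-buffer container: stores up to N elements inline, and moves
// everything into a heap-backed vector on the first overflow.
template <typename T, std::size_t N>
class overflow_array {
    std::array<T, N> inline_{};
    std::size_t size_ = 0;
    std::vector<T> spill_;
public:
    void push_back(const T &v)
    {
        if (spill_.empty() && size_ < N) { inline_[size_++] = v; return; }
        if (spill_.empty())              // first overflow: migrate inline part
            spill_.assign(inline_.begin(), inline_.begin() + size_);
        spill_.push_back(v);
    }
    std::size_t size() const { return spill_.empty() ? size_ : spill_.size(); }
    const T &operator[](std::size_t i) const
    { return spill_.empty() ? inline_[i] : spill_[i]; }
};
```

Small appends never touch the heap, which is the same benefit the
alloca() discussion is about, but with a bounded, safe fallback.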
Without alloca(), you'd need to do something like arranging to call a
recursive function a random number of times before it then calls the
next bit of code in your thread. alloca() is simply far easier and
faster.
You've got strange ideas. alloca() has been completely removed from the
Linux kernel.
The point is that if there's a fixed upper limit on what you would
allocate, you could always allocate it statically.
On 30.12.2023 at 06:51, Kaz Kylheku wrote:
Under four-way associativity, five isn't greater than four?
Two-way associativity would leave no space if both threads have
synchronous stack frames. With four-way associativity there's not
much likelihood of that happening.
On 12/29/2023 8:45 PM, Bonita Montero wrote:
On 29.12.2023 at 18:29, Kaz Kylheku wrote:
I explained it. The allocation is not used. When you call alloca(n),
the stack pointer moves by n bytes. If you then call a function,
its stack frame will be offset by that much (plus any alignment if
n is not aligned).
According to the paper Scott mentioned the associativity of the
Pentium 4's L1 data cache is four. With that it's not necessary
to have such aliasing preventions.
Huh? Wow, you really need to write Intel a letter about it wrt their
older hyperthreaded processors! Although, it seems like you simply do
not actually _understand_ what is going on here...
Intel's suggestions for how to mitigate the problem in their earlier
hyperthreaded processors actually worked wrt improving performance.
Keep in mind this was a while back in 2004-2005. I am happy that the
Wayback Machine has my older code.
On 30.12.2023 at 21:35, Kaz Kylheku wrote:
My comment makes it clear that there are two thread stacks
vying for that cache line, plus a couple of other accesses
that are not thread stacks.
With four way associativity that's rather unlikely to happen.
On 31.12.2023 at 08:01, Kaz Kylheku wrote:
On 2023-12-31, Bonita Montero <Bonita.Montero@gmail.com> wrote:
On 30.12.2023 at 21:35, Kaz Kylheku wrote:
My comment makes it clear that there are two thread stacks
vying for that cache line, plus a couple of other accesses
that are not thread stacks.
With four way associativity that's rather unlikely to happen.
What? The associativity of the cache is not the determiner
of program behavior; if the program accesses five different
areas whose addresses are the same modulo 65536 bytes, ...
The set size is smaller than 64kB, and of course there can be
aliasing, but with four-way associativity it's unlikely that there's
a need to control aliasing. And if the set size were 64kB, aliasing
would be even less likely thanks to page colouring.
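Whether two stack addresses contend for the same set depends only on
the set-index bits of the address; with an illustrative P4-like
geometry (8 kB, 4-way, 64-byte lines, i.e. 32 sets, my assumption for
the sketch):

```cpp
#include <cstdint>

// Illustrative L1 geometry: 8 KiB, 4-way set associative, 64-byte lines.
constexpr uint64_t kLineSize = 64;
constexpr uint64_t kWays     = 4;
constexpr uint64_t kSets     = 8 * 1024 / (kLineSize * kWays); // 32 sets

// Two addresses contend for the same set iff their set indices match,
// i.e. iff they are congruent modulo kSets * kLineSize (2 KiB here).
constexpr uint64_t set_of(uint64_t addr) { return (addr / kLineSize) % kSets; }
```

So stacks whose hot frames sit a multiple of 2 KiB apart all map to
the same set, and a fifth such frame evicts one of the four ways.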
On 29/12/2023 17:04, Bonita Montero wrote:
You've got strange ideas. alloca() has been completely removed from the
Linux kernel.
Citation? You are usually wrong in your claims, or at least mixed-up,
so I won't trust you without evidence. (That does not mean you are
wrong here - I don't know either way.)
Of course, avoiding alloca() within the kernel is, again, utterly
irrelevant to the point under discussion.
On 31.12.2023 at 19:44, Scott Lurndal wrote:
Why? ...
Because there would have to be set conflicts on the same line index,
and that's rather unlikely with four-way associativity.
David Brown <david.brown@hesbynett.no> writes:
On 29/12/2023 17:04, Bonita Montero wrote:
You've got strange ideas. alloca() has been completely removed from the
Linux kernel.
Citation? You are usually wrong in your claims, or at least mixed-up,
so I won't trust you without evidence. (That does not mean you are
wrong here - I don't know either way.)
Kernel threads generally run with a small, fixed stack. Stack-based
dynamic allocation is generally avoided. I don't know of any
hard restrictions, but I suspect that Linus would look askance
at it.
Of course, avoiding alloca() within the kernel is, again, utterly
irrelevant to the point under discussion.
Very true
On 06.01.2024 at 04:21, Chris M. Thomasson wrote:
Are you trying to tell me that the aliasing problem on those older Intel
hyperthreaded processors and the workaround (from Intel) was a myth?
lol. ;^)
Intel just made a nerd suggestion. With four-way associativity
there's no frequent aliasing problem in the L1 data cache of the
Pentium 4.
On 1/7/2024 9:48 PM, Bonita Montero wrote:
On 07.01.2024 at 21:46, Chris M. Thomasson wrote:
I know that they had a problem and the provided workaround from Intel
really did help out. ...
Absolutely not, not with four way associativity.
Whatever you say; Sigh. I am done with this.
On 24/12/2023 08:36, Tim Rentsch wrote:
Depending on how the code is written, no modulo operations need
to be done, because they will be optimized away and done at
compile time. If we look at multiplying two numbers represented
by bits in bytes i and j, the two numbers are
i*30 + a
j*30 + b
for some a and b in { 1, 7, 11, 13, 17, 19, 23, 29 }.
The values we're interested in are the index of the product and
the residue of the product, namely
(i*30+a) * (j*30+b) / 30 (for the index)
(i*30+a) * (j*30+b) % 30 (for the residue)
Any term with a *30 in the numerator doesn't contribute to
the residue, and also can be combined with the by-30 divide
for computing the index. Thus these expressions can be
rewritten as
i*30*j + i*b + j*a + (a*b/30) (for the index)
a*b%30 (for the residue)
When a and b have values that are known at compile time,
neither the divide nor the remainder result in run-time
operations being done; all of that heavy lifting is
optimized away and done at compile time. Of course there
are some multiplies, but they are cheaper than divides, and
also can be done in parallel. (The multiplication a*b also
can be done at compile time.)
The residue needs to be turned into a bit mask to do the
logical operation on the byte of bits, but here again that
computation can be optimized away and done at compile time.
Does that all make sense?
Right now, no. But that's me. I'll flag it to read again when I've
had a better night's sleep.