• Re: Sieve of Eratosthenes optimized to the max

    From Vir Campestris@21:1/5 to Bonita Montero on Sun Dec 10 21:48:16 2023
    On 10/12/2023 09:46, Bonita Montero wrote:
    I've parallelized my sieve of Eratosthenes across all cores. The
    parallel threads do all the punching of the non-prime multiples beyond
    sqrt( max ). I found that the algorithm is limited by the memory
    bandwidth, since the bit-range between sqrt( max ) and max is too large
    to fit inside the cache. So I partitioned each thread's work into a
    number of bit-ranges that fit inside the L2 cache. With that I got a
    speedup of a factor of 28 when calculating about the first 210 million
    primes, i.e. all primes <= 2 ^ 32.
    On my Zen4 16 core PC under Windows with clang-cl 16.0.5, producing the
    primes without any file output takes only 0.28s. On my Linux PC, a Zen2
    64 core CPU, producing the same number of primes takes about 0.09s.
    The file output is done with my own buffering. With that, writing the
    mentioned prime number range results in a 2.2GB file, which is written
    in about two seconds with a PCIe 4.0 SSD.
    The first parameter is the highest prime candidate to be generated;
    it can be prefixed with 0x for hex values. The second parameter is the
    name of the output file; it can be "" to suppress file output. The
    third parameter is the number of punching threads; if it is omitted,
    the number of threads defaults to the number of threads reported by
    the runtime.

    <snip code>

    Just so happens I've been thinking about this. I find your code
    impenetrable - there are no comments at all. But two things occurred to
    me quite quickly:
    - You don't need to store the odd numbers
    - You only need a bitmask to save the sieve.

    I think you're doing the latter, though it's not obvious. I think
    you're storing bits in an array of size_t, and that will be what
    0xAAAAAAAAAAAAAAACu and 0xAAAAAAAAAAAAAAAAu are about. However, if I am
    reading that right, you've assumed that unsigned is the same size as
    size_t, and that the size is 64 bits.

    Andy

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vir Campestris@21:1/5 to Bonita Montero on Mon Dec 11 17:12:00 2023
    On 11/12/2023 03:15, Bonita Montero wrote:
    On 10.12.2023 at 22:48, Vir Campestris wrote:

    - You don't need to store the odd numbers

    All primes except 2 are odd.

    I had a really bad night, and I was half asleep yesterday.

    Because all primes except 2 are odd you don't need to store the _even_
    numbers, which is what I meant to say. Or else you only need to store
    the odd ones.

    - You only need a bitmask to save the sieve.

    I'm using the "scan" lambda, which searches through the sieve
    as fast as possible with countr_zero; that maps to a single CPU
    instruction on modern processors.

    ... reading that right you've assumed that unsigned is the same
    size as size_t, and that the size is 64 bits.

    I've got a type word_t that is a size_t, but it can also
    be a uint8_t to test what's the performance difference.


    Your constants are 64-bit hexadecimal numbers (from counting the
    digits) and are unsigned (the u suffix), but there's no guarantee that
    size_t will be the same size as unsigned, nor that it will be 64 bits.
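
    As an aside, one way to sidestep that assumption is to derive the
    masks from the word type instead of hard-coding 64-bit literals. A
    minimal sketch, with word_t as in your code; wordBits and oddBitsMask
    are names I made up:

    #include <cstddef>
    #include <cstdint>
    #include <limits>

    using word_t = std::size_t; // or std::uint8_t for the comparison test

    // bit width taken from the type itself, not assumed to be 64
    constexpr unsigned wordBits = std::numeric_limits<word_t>::digits;

    // the 0xAAA... "odd bits" pattern, built for whatever width word_t has
    constexpr word_t oddBitsMask()
    {
        word_t m = 0;
        for (unsigned i = 1; i < wordBits; i += 2)
            m |= word_t(1) << i;
        return m;
    }

    constexpr word_t punch = oddBitsMask(); // 0xAAAA... at 64 bits, 0xAA at 8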

    Andy

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vir Campestris@21:1/5 to Vir Campestris on Wed Dec 13 15:25:42 2023
    On 13/12/2023 15:16, Vir Campestris wrote:
    Well, I went away and tried it. I didn't write anything nearly as sophisticated as yours - I didn't worry about threads and caching. The
    code follows. You'll have to unwrap it in a few places.

    It occurred to me I could go further - not just optimise it to store
    odd numbers only, but also miss out the ones divisible by 3, or other
    higher primes. The results were:
    - Storing all numbers: 9.494 seconds to run the sieve
    - Storing odd numbers only: 4.455. That's over twice as fast.
    - Excluding also the ones divisible by 3: 3.468. That's an improvement,
    but not as much as I expected. Perhaps it's got too complicated. I don't
    have a high-powered PC.

    And no sooner had I posted that than I realised I'd got the optimisation settings wrong.

    With -Ofast fed to g++ the version that doesn't store multiples of 3 is _slower_ than the odds only one!

    Andy

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vir Campestris@21:1/5 to Bonita Montero on Wed Dec 13 15:16:17 2023
    On 11/12/2023 17:19, Bonita Montero wrote:
    On 11.12.2023 at 18:12, Vir Campestris wrote:

    Because all primes except 2 are odd you don't need to store the _even_
    numbers, which is what I meant to say. Or else you only need to store
    the odd ones.

    I'll try that in the next days, but I don't expect a significant change.

    Well, I went away and tried it. I didn't write anything nearly as
    sophisticated as yours - I didn't worry about threads and caching. The
    code follows. You'll have to unwrap it in a few places.

    It occurred to me I could go further - not just optimise it to store
    odd numbers only, but also miss out the ones divisible by 3, or other
    higher primes. The results were:
    - Storing all numbers: 9.494 seconds to run the sieve
    - Storing odd numbers only: 4.455. That's over twice as fast.
    - Excluding also the ones divisible by 3: 3.468. That's an improvement,
    but not as much as I expected. Perhaps it's got too complicated. I don't
    have a high-powered PC.

    Your constants are 64-bit hexadecimal numbers (from counting the
    digits) and are unsigned (the u suffix), but there's no guarantee that
    size_t will be the same size as unsigned, nor that it will be 64 bits.

    The constants are wide enough for 64 bit size_t-s, but it won't make
    a functional difference if you define word_t as a uint8_t. With a
    uint8_t word_t the code runs about a sixth slower; that's less than
    I expected.

    Most likely what you are seeing there is that you've gone from being
    memory bound to being compute bound. The CPU cache will make sure the
    main memory accesses are 128 bits (or whatever your bus width is).
    Andy
    --
    #include <cassert>
    #include <chrono>
    #include <cmath>
    #include <cstddef>
    #include <iostream>
    #include <thread>
    #include <vector>

    size_t PrimeCount = 1e8;

    // this is a convenience class for timing.
    class stopwatch {
        std::chrono::time_point<std::chrono::steady_clock> mBegin, mEnd;
    public:
        stopwatch() {}

        // record the start of an interval
        void start()
        {
            mBegin = std::chrono::steady_clock::now();
        }

        // record the end of an interval
        void stop()
        {
            mEnd = std::chrono::steady_clock::now();
        }

        // report the difference between the last start and stop, in seconds
        double delta() const
        {
            return std::chrono::duration_cast<std::chrono::milliseconds>(
                       mEnd - mBegin).count() / 1000.0;
        }
    };

    // the bitmask will be stored in a class that looks like this
    class Primes
    {
    public:
        Primes(size_t size) {};
        virtual ~Primes() {};
        virtual void unprime(size_t value) = 0;   // marks a number as not prime
        virtual bool get(size_t value) const = 0; // gets how a number is marked
        virtual size_t size() const = 0;          // size of the store
    };

    // Basic store. Stores all the numbers in a vector<bool>
    class PrimesVB: public Primes
    {
        std::vector<bool> mStore;
    public:
        PrimesVB(size_t size) : Primes(size), mStore(size, true) {};
        virtual void unprime(size_t value) override
        {
            mStore[value] = false;
        }

        virtual bool get(size_t value) const override
        {
            return mStore[value];
        }

        virtual size_t size() const override
        {
            return mStore.size();
        }
    };

    // Stores the odd numbers only; even values are never stored.
    class Primes2: public PrimesVB
    {
        size_t mSize;
    public:
        Primes2(size_t size) : PrimesVB((size + 1) / 2), mSize(size) {};
        virtual void unprime(size_t value) override
        {
            if (value & 1)
                PrimesVB::unprime(value / 2);
        }

        virtual bool get(size_t value) const override
        {
            if (value <= 2) return true;
            return (value & 1) && PrimesVB::get(value / 2);
        }

        virtual size_t size() const override
        {
            return mSize;
        }
    };

    class Primes23: public PrimesVB
    {
        // Size of the map. This is a lot bigger than the size of the
        // bitmap.
        size_t mSize;

        // each "page" contains 2 "lines". A line is a prime we are
        // storing.
        // this map is where we store each number, depending on its
        // value modulo 6.
        // -1 is known not to be prime, and isn't stored.
        // we store 7, 13, 19 in "line" 0; 5, 11, 17, 23 in "line" 1.
        // the others are either divisible by 2 or 3, and don't need to
        // be stored.
        static constexpr int mPosMap[6] = {-1, 0, -1, -1, -1, 1};
    public:
        Primes23(size_t size) : PrimesVB((size + 2) / 3), mSize(size) {};
        virtual void unprime(size_t value) override
        {
            if (value <= 3) return;
            auto page = value / 6;
            auto line = mPosMap[value % 6];
            if (line >= 0)
            {
                PrimesVB::unprime(page * 2 + line);
            }
        }

        virtual bool get(size_t value) const override
        {
            if (value <= 3) return true;

            auto page = value / 6;          // 5->0 7->1 11->1 13->2
            auto line = mPosMap[value % 6]; // 5->1 7->0 11->1 13->0
            if (line >= 0)
            {
                return PrimesVB::get(page * 2 + line);
            }
            else
            {
                return false;
            }
        }

        virtual size_t size() const override
        {
            return mSize;
        }
    };


    // Simple sieve of Eratosthenes function
    void sieve(Primes& store)
    {
        const size_t storesize = store.size();
        const size_t maxfilter = std::ceil(std::sqrt(storesize));

        size_t prime = 2;
        while (prime < maxfilter)
        {
            // Unmark all the multiples
            for (size_t inval = prime * 2; inval < storesize; inval += prime)
                store.unprime(inval);

            // Find the next prime to filter with
            while (!store.get(++prime) && prime < maxfilter) {};
        }
    }

    // Benchmark function. Returns the constructed collection for validation.
    template <typename storetype> storetype test(const char* classname)
    {
        stopwatch s;
        s.start();
        storetype store(PrimeCount);
        s.stop();
        std::cout << classname << " construct " << s.delta()
                  << " seconds." << std::endl;

        s.start();
        sieve(store);
        s.stop();
        std::cout << classname << " sieve " << s.delta()
                  << " seconds." << std::endl;

        return store;
    }

    int main()
    {
        auto vb = test<PrimesVB>("vector<bool>");
        auto p2 = test<Primes2>("Storing odds only");
        auto p23 = test<Primes23>("Not storing 2s and 3s");

        // double check they all match.
        for (size_t p = 1; p < PrimeCount; ++p)
        {
            assert(vb.get(p) == p2.get(p));
            assert(vb.get(p) == p23.get(p));
        }

        return 0;
    }

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vir Campestris@21:1/5 to All on Thu Dec 14 15:06:14 2023
    I tried a few more collections.

    The slowest on my system is vector<bool>: 12.165s for primes up to 1e9.

    Having a manual bitmask and storing in uint64_t is a lot faster -
    almost twice as fast (6.659).

    A uint32_t is a little faster than that (6.306).

    Optimising it to store odds only more than doubles the speed of
    vector<bool>, to 5.107 seconds.

    That has less effect with uint64_t, taking it to 4.946 - which is the
    best time I have.

    Oddly, storing odds only with uint32_t is not as fast as odds only with
    uint64_t, although it is still faster than storing all the values (5.302).

    I've optimised it to do less recursion, which has helped, but it still
    isn't worth not storing the multiples of 3. I'd guess that's because
    that code requires a divide and a mod by 6, whereas optimising out the
    evens only requires shifts and masks; see the sketch below.
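
    To illustrate that guess, here are the two index mappings side by
    side (a sketch; posMap is the same table as in the code I posted
    earlier, the function names are made up):

    #include <cstddef>

    // odds-only store: index and parity are a shift and a mask
    inline std::size_t oddIndex(std::size_t value)  { return value >> 1; }
    inline bool        oddStored(std::size_t value) { return value & 1; }

    // 2-and-3-free store: index needs value / 6 and value % 6.
    // the compiler strength-reduces division by a constant, but it is
    // still several operations against a single shift.
    constexpr int posMap[6] = {-1, 0, -1, -1, -1, 1};
    inline long wheelIndex(std::size_t value)
    {
        int line = posMap[value % 6];
        return line < 0 ? -1 : long(value / 6) * 2 + line;
    }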

    Andy

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From red floyd@21:1/5 to Vir Campestris on Thu Dec 14 08:20:19 2023
    On 12/13/2023 7:16 AM, Vir Campestris wrote:
    On 11/12/2023 17:19, Bonita Montero wrote:
    On 11.12.2023 at 18:12, Vir Campestris wrote:

    Because all primes except 2 are odd you don't need to store the
    _even_ numbers, which is what I meant to say. Or else you only need
    to store the odd ones.

    I'll try that in the next days, but I don't expect a significant change.

    Well, I went away and tried it. I didn't write anything nearly as
    sophisticated as yours - I didn't worry about threads and caching. The
    code follows. You'll have to unwrap it in a few places.

    It occurred to me I could go further - not just optimise it to store
    odd numbers only, but also miss out the ones divisible by 3, or other
    higher primes. The results were:
    - Storing all numbers: 9.494 seconds to run the sieve
    - Storing odd numbers only: 4.455. That's over twice as fast.
    - Excluding also the ones divisible by 3: 3.468. That's an improvement,
    but not as much as I expected. Perhaps it's got too complicated. I don't
    have a high-powered PC.

    Another way to attempt to optimize: use the fact that, except for 2
    and 3, all primes are of the form 6n+1 or 6n-1, as in the sketch below.
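
    A minimal sketch of enumerating candidates that way (the numbers
    produced are only candidates - composites like 25 still appear and
    must be sieved or tested separately):

    #include <cstddef>
    #include <vector>

    // all candidates <= limit of the form 6n-1 or 6n+1, plus 2 and 3
    std::vector<std::size_t> candidates(std::size_t limit)
    {
        std::vector<std::size_t> out{2, 3};
        for (std::size_t n = 6; n - 1 <= limit; n += 6) {
            out.push_back(n - 1);              // 5, 11, 17, ...
            if (n + 1 <= limit)
                out.push_back(n + 1);          // 7, 13, 19, ...
        }
        return out;
    }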

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vir Campestris@21:1/5 to Bonita Montero on Thu Dec 21 15:30:12 2023
    On 20/12/2023 12:44, Bonita Montero wrote:
    Now the code runs in 0.15s on the 16 core Zen4 system; that's 87%
    faster. On the 64 core Zen2 system the time to produce the first 210
    million primes is about 0.05 seconds.

    I thought I'd try it.

    Can I make a suggestion? Where it says

    if( argc < 2 )
    return EXIT_FAILURE;

    make it instead print a user guide?

    On my system my code takes about 20 seconds to produce 1e9 primes. It's
    single threaded. Yours is taking a little over 6, but about 18 seconds
    of CPU time. (I have 4 cores, each with 2 threads. Mine is designed to
    be cool and quiet...)

    I've obviously got something wrong though - I'm using a _lot_ more
    memory than you.

    Out of interest - I thought you must be Spanish from your name. And yet
    you speak fluent German?

    Andy

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to red floyd on Sat Dec 23 10:30:21 2023
    red floyd <no.spam.here@its.invalid> writes:

    On 12/13/2023 7:16 AM, Vir Campestris wrote:

    On 11/12/2023 17:19, Bonita Montero wrote:

    On 11.12.2023 at 18:12, Vir Campestris wrote:

    Because all primes except 2 are odd you don't need to store the
    _even_ numbers, which is what I meant to say. Or else you only
    need to store the odd ones.

    I'll try that in the next days, but I don't expect a significant change.

    Well, I went away and tried it. I didn't write anything nearly as
    sophisticated as yours - I didn't worry about threads and
    caching. The code follows. You'll have to unwrap it in a few places.

    It occurred to me I could go further - not just optimise it to
    store odd numbers only, but also miss out the ones divisible by 3,
    or other higher primes. The results were:
    - Storing all numbers: 9.494 seconds to run the sieve
    - Storing odd numbers only: 4.455. That's over twice as fast.
    - Excluding also the ones divisible by 3: 3.468. That's an
    improvement, but not as much as I expected. Perhaps it's got too
    complicated. I don't have a high-powered PC.

    Another way to attempt to optimize: use the fact that, except for 2
    and 3, all primes are of the form 6n+1 or 6n-1.

    A more compact form is to consider numbers mod 30, in which case
    numbers that are 0 mod 2, 3, or 5 can be ignored. Conveniently
    there are just 8 such values - 1, 7, 11, 13, 17, 19, 23, and 29 -
    so each block of 30 can be represented in one 8-bit byte. Doing
    that gives a 20% reduction in space compared to a 6n+/-1 scheme
    (8 bits per block of 30 instead of 10).
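
    A sketch of that packing (the helper names here are mine, not from
    any posted implementation): one byte per block of 30, one bit per
    admissible residue.

    #include <cstdint>

    // the eight residues mod 30 that are coprime to 2, 3 and 5
    constexpr int residues[8] = {1, 7, 11, 13, 17, 19, 23, 29};

    // residue -> bit position, or -1 for numbers divisible by 2, 3 or 5
    constexpr int bitForResidue(int r)
    {
        for (int i = 0; i < 8; ++i)
            if (residues[i] == r) return i;
        return -1;
    }

    // test number n in a sieve that stores one byte per 30 numbers
    inline bool isMarked(const uint8_t* sieve, uint64_t n)
    {
        int bit = bitForResidue(int(n % 30));
        return bit >= 0 && ((sieve[n / 30] >> bit) & 1);
    }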

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Vir Campestris on Sat Dec 23 10:21:04 2023
    Vir Campestris <vir.campestris@invalid.invalid> writes:

    On my system my code takes about 20 seconds to produce 1e9
    primes. [...]

    Do you mean 1e9 primes or just the primes less than 1e9? To
    do the first 1e9 primes a sieve would need to go up to about
    23.9e9 (so half that many bits if only odd numbers were
    represented).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vir Campestris@21:1/5 to Tim Rentsch on Sat Dec 23 21:20:42 2023
    On 23/12/2023 18:30, Tim Rentsch wrote:
    A more compact form is to consider numbers mod 30, in which case
    numbers that are 0 mod 2, 3, or 5 can be ignored. Conveniently
    there are just 8 such values - 1, 7, 11, 13, 17, 19, 23, and 29 -
    so each block of 30 can be represented in one 8-bit byte. Doing
    that gives a 20% reduction in space compared to a 6n+/-1 scheme
    (8 bits per block of 30 instead of 10).

    I found that on my system the modulo operation was so slow this wasn't
    worth doing.

    Andy

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vir Campestris@21:1/5 to Tim Rentsch on Sat Dec 23 21:21:21 2023
    On 23/12/2023 18:21, Tim Rentsch wrote:
    Vir Campestris <vir.campestris@invalid.invalid> writes:

    On my system my code takes about 20 seconds to produce 1e9
    primes. [...]

    Do you mean 1e9 primes or just the primes less than 1e9? To
    do the first 1e9 primes a sieve would need to go up to about
    23.9e9 (so half that many bits if only odd numbers were
    represented).

    Primes up to 1e9.

    I have another idea though, watch this space...

    Andy

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Vir Campestris on Sun Dec 24 00:36:42 2023
    Vir Campestris <vir.campestris@invalid.invalid> writes:

    On 23/12/2023 18:30, Tim Rentsch wrote:

    A more compact form is to consider numbers mod 30, in which case
    numbers that are 0 mod 2, 3, or 5 can be ignored. Conveniently
    there are just 8 such values - 1, 7, 11, 13, 17, 19, 23, and 29 -
    so each block of 30 can be represented in one 8-bit byte. Doing
    that gives a 20% reduction in space compared to a 6n+/-1 scheme
    (8 bits per block of 30 instead of 10).

    I found that on my system the modulo operation was so slow this wasn't
    worth doing.

    Depending on how the code is written, no modulo operations need
    to be done, because they will be optimized away and done at
    compile time. If we look at multiplying two numbers represented
    by bits in bytes i and j, the two numbers are

    i*30 + a
    j*30 + b

    for some a and b in { 1, 7, 11, 13, 17, 19, 23, 29 }.

    The values we're interested in are the index of the product and
    the residue of the product, namely

    (i*30+a) * (j*30+b) / 30 (for the index)
    (i*30+a) * (j*30+b) % 30 (for the residue)

    Any term with a *30 in the numerator doesn't contribute to
    the residue, and also can be combined with the by-30 divide
    for computing the index. Thus these expressions can be
    rewritten as

    i*30*j + i*b + j*a + (a*b/30) (for the index)
    a*b%30 (for the residue)

    When a and b have values that are known at compile time,
    neither the divide nor the remainder result in run-time
    operations being done; all of that heavy lifting is
    optimized away and done at compile time. Of course there
    are some multiplies, but they are cheaper than divides, and
    also can be done in parallel. (The multiplication a*b also
    can be done at compile time.)

    The residue needs to be turned into a bit mask to do the
    logical operation on the byte of bits, but here again that
    computation can be optimized away and done at compile time.
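
    Concretely, a minimal C++ sketch of the above, with a and b as
    template parameters so the by-30 divide, the remainder, and the
    mask all fold away at compile time (the names are illustrative,
    not from any posted code):

    #include <cstdint>

    constexpr int residues[8] = {1, 7, 11, 13, 17, 19, 23, 29};

    // residue -> single-bit mask; usable in constant expressions
    constexpr uint8_t maskForResidue(int r)
    {
        for (int i = 0; i < 8; ++i)
            if (residues[i] == r) return uint8_t(1u << i);
        return 0; // cannot happen for a, b coprime to 30
    }

    // clear the bit for the product (i*30 + A) * (j*30 + B).
    // A*B/30, A*B%30 and the mask are all compile-time constants;
    // only the adds and multiplies on i and j remain at run time.
    template <int A, int B>
    inline void unprimeProduct(uint8_t* sieve, uint64_t i, uint64_t j)
    {
        uint64_t index = 30 * i * j + i * B + j * A + (A * B / 30);
        constexpr uint8_t mask = maskForResidue((A * B) % 30);
        sieve[index] &= uint8_t(~mask);
    }

    // e.g. unprimeProduct<7, 11>(sieve, i, j) handles (30i+7) * (30j+11);
    // the caller dispatches over the 8 x 8 residue combinations.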

    Does that all make sense?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Vir Campestris on Sun Dec 24 10:49:51 2023
    Vir Campestris <vir.campestris@invalid.invalid> writes:

    On 23/12/2023 18:21, Tim Rentsch wrote:

    Vir Campestris <vir.campestris@invalid.invalid> writes:

    On my system my code takes about 20 seconds to produce 1e9
    primes. [...]

    Do you mean 1e9 primes or just the primes less than 1e9? To
    do the first 1e9 primes a sieve would need to go up to about
    23.9e9 (so half that many bits if only odd numbers were
    represented).

    Primes up to 1e9.

    I have another idea though, watch this space...

    Does your machine have enough memory to compute all the primes up
    to 24e9? If it does I suggest that for your next milestone.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Chris M. Thomasson on Tue Dec 26 23:35:11 2023
    On 2023-12-26, Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 12/26/2023 1:27 AM, Bonita Montero wrote:
    On 26.12.2023 at 06:39, Chris M. Thomasson wrote:

    Remember when Intel first started hyperthreading, and god damn threads
    could false share with each other (on the stacks!) the low and high 64
    byte parts of the 128 byte cache lines? A workaround was to
    artificially offset the threads' stacks via alloca.

    If you have one core and two threads there's no false sharing.

    So, are you familiar with Intel's early hyperthreading problem? There
    was false sharing between the hyperthreads. The workaround did improve
    performance by quite a bit. IIRC, my older appcore project had this
    workaround incorporated into its logic. I wrote that sucker back in the
    very early 2000s. Humm... I will try to find the exact line.

    Could you both be right? The performance problem was real, but maybe
    it wasn't "false sharing"? The hyperthreads are on the same core; they
    share the same caches.

    Might you be describing a cache collision rather than false sharing?

    A single processor can trigger degenerate cache uses (at any level
    of a cache hierarchy). For instance, if pages of virtual memory
    are accessed in certain patterns, they can thrash the same TLB entry.

    Was it perhaps the case that these thread stacks were allocated at such
    a stride that their addresses clashed on the same cache line set?

    That could be a problem even on one processor, but it's obvious that
    hyperthreading could exacerbate it, because switches between hyperthreads
    happen at a fine granularity. They don't get to run a full
    scheduler-driven time quantum. A low-level processor event like a
    pipeline stall (or other resource issue) can drive a hyperthread
    switch.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Bonita Montero on Wed Dec 27 20:49:06 2023
    On 2023-12-27, Bonita Montero <Bonita.Montero@gmail.com> wrote:
    On 27.12.2023 at 06:59, Chris M. Thomasson wrote:

    Something about false interference between threads that should not even
    be interacting with one another to begin with. It was a problem. The fix
    was based on alloca, so that is something to ponder on.

    Like with any SMT core there could be cache thrashing between the
    threads. The L1 data cache was only 8kB, and maybe only two-way
    associative; that could be thrashing between the threads. But I've
    no clue what this would have to do with alloca().

    Say you have thread stacks that are offset by some power-of-two amount,
    like a megabyte. Stack addresses at the same depth (like the local
    variables of threads executing the same function) are likely to collide
    on the same cache set (at different levels of the caching hierarchy: L1,
    L2, translation caches).

    With alloca, since it moves the stack pointer, we can carve variable
    amounts of stack space to randomize the stack offsets (before calling
    the work functions). In different threads, we use differently-sized
    alloca allocations.
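
    A rough sketch of that trampoline (using POSIX alloca; Windows
    spells it _alloca, in <malloc.h>; the offset scheme here is just
    illustrative):

    #include <alloca.h>
    #include <cstddef>

    // run the real thread function with a per-thread stack offset, so
    // that equal-depth frames in different threads differ in the
    // address bits that select a cache set
    void threadTrampoline(unsigned threadIndex, void (*work)())
    {
        std::size_t pad = 64 + threadIndex * 128;  // differs per thread
        volatile char* p = (char*)alloca(pad);     // moves the stack pointer
        p[0] = 0;                                  // touch it so it isn't elided
        work();                                    // frames below are now offset
    }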

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Bonita Montero on Fri Dec 29 13:56:54 2023
    On 29/12/2023 10:58, Bonita Montero wrote:
    On 29.12.2023 at 05:58, Chris M. Thomasson wrote:

    On 12/28/2023 7:17 PM, Bonita Montero wrote:

    On 29.12.2023 at 00:38, Chris M. Thomasson wrote:

    The use of alloca to try to get around the problem in their
    (Intel's) early hyperthreaded processors was real, and actually
    helped. It was in the Intel docs.

    I don't believe it.

    Why not?

    Because I don't understand what's different with the access pattern
    of alloca() and usual stack allocation. And if I google for "alloca
    Pentium 4 site:intel.com" I can't find anything that fits.


    There is nothing different between alloca() and ordinary stack
    allocations. But alloca() makes it quick and easy to make large
    allocations, and to do so with random sizes (or at least sizes that
    differ significantly between threads, even if they are running the
    same code). Without alloca(), you'd need to do something like arranging
    to call a recursive function a random number of times before it then
    calls the next bit of code in your thread. alloca() is simply far
    easier and faster.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Bonita Montero on Fri Dec 29 16:01:47 2023
    Bonita Montero <Bonita.Montero@gmail.com> writes:
    On 29.12.2023 at 05:58, Chris M. Thomasson wrote:

    On 12/28/2023 7:17 PM, Bonita Montero wrote:

    On 29.12.2023 at 00:38, Chris M. Thomasson wrote:

    The use of alloca to try to get around the problem in their (Intel's)
    early hyperthreaded processors was real, and actually helped. It was
    in the Intel docs.

    I don't believe it.

    Why not?

    Because I don't understand what's different with the access pattern
    of alloca() and usual stack allocation. And if I google for "alloca
    Pentium 4 site:intel.com" I can't find anything that fits.


    See page 2-35 in the _Intel Pentium 4 Processor Optimization_
    manual.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vir Campestris@21:1/5 to Tim Rentsch on Fri Dec 29 18:03:54 2023
    On 24/12/2023 08:36, Tim Rentsch wrote:
    Depending on how the code is written, no modulo operations need
    to be done, because they will be optimized away and done at
    compile time. If we look at multiplying two numbers represented
    by bits in bytes i and j, the two numbers are

    i*30 + a
    j*30 + b

    for some a and b in { 1, 7, 11, 13, 17, 19, 23, 29 }.

    The values we're interested in are the index of the product and
    the residue of the product, namely

    (i*30+a) * (j*30+b) / 30 (for the index)
    (i*30+a) * (j*30+b) % 30 (for the residue)

    Any term with a *30 in the numerator doesn't contribute to
    the residue, and also can be combined with the by-30 divide
    for computing the index. Thus these expressions can be
    rewritten as

    i*30*j + i*b + j*a + (a*b/30) (for the index)
    a*b%30 (for the residue)

    When a and b have values that are known at compile time,
    neither the divide nor the remainder result in run-time
    operations being done; all of that heavy lifting is
    optimized away and done at compile time. Of course there
    are some multiplies, but they are cheaper than divides, and
    also can be done in parallel. (The multiplication a*b also
    can be done at compile time.)

    The residue needs to be turned into a bit mask to do the
    logical operation on the byte of bits, but here again that
    computation can be optimized away and done at compile time.

    Does that all make sense?

    Right now, no. But that's me. I'll flag it to read again when I've had a
    better night's sleep.

    Andy

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Bonita Montero on Fri Dec 29 17:29:00 2023
    On 2023-12-29, Bonita Montero <Bonita.Montero@gmail.com> wrote:
    On 29.12.2023 at 05:58, Chris M. Thomasson wrote:

    On 12/28/2023 7:17 PM, Bonita Montero wrote:

    On 29.12.2023 at 00:38, Chris M. Thomasson wrote:

    The use of alloca to try to get around the problem in their (Intel's)
    early hyperthreaded processors was real, and actually helped. It was
    in the Intel docs.

    I don't believe it.

    Why not?

    Because I don't understand what's different with the access pattern
    of alloca() and usual stack allocation. And if I google for "alloca
    Pentium 4 site:intel.com" I can't find anything that fits.

    I explained it. The allocation is not used. When you call alloca(n),
    the stack pointer moves by n bytes. If you then call a function,
    its stack frame will be offset by that much (plus any alignment if
    n is not aligned).


    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Chris M. Thomasson on Sat Dec 30 04:51:47 2023
    On 2023-12-29, Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 12/29/2023 2:09 PM, Chris M. Thomasson wrote:
    On 12/29/2023 8:01 AM, Scott Lurndal wrote:
    See page 2-35 in the _Intel Pentium 4 Processor Optimization_
    manual.

    I think it was in chapter 5 of Developing Multithreaded Applications: A
    Platform Consistent Approach; I cannot remember the damn section right now.

    Wait a minute! I might have found it, let's see:

    https://www.intel.com/content/dam/www/public/us/en/documents/training/developing-multithreaded-applications.pdf

    Ahhh section 5.3! Nice! I read this a while back, before 2005.

    Wow, I guessed that one. Elsewhere in the thread, I made a remark
    similar to "imagine that thread stacks are aligned at an address like
    nnnnFFFF", i.e. the top of the stack starts at the top of a 64 kB
    aligned window. I was thinking of exactly a typical L1 cache size
    from around that era, in fact.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Bonita Montero on Sat Dec 30 04:56:19 2023
    On 2023-12-30, Bonita Montero <Bonita.Montero@gmail.com> wrote:
    On 29.12.2023 at 23:12, Chris M. Thomasson wrote:

    Wait a minute! I might have found it, let's see:
    https://www.intel.com/content/dam/www/public/us/en/documents/training/developing-multithreaded-applications.pdf
    Ahhh section 5.3! Nice! I read this a while back, before 2005.

    What it says is nonsense if the cache is really four-way
    associative, as in the other paper mentioned here. And of
    course that has nothing to do with false sharing.

    If it's four-way associative, you start to have a performance problem
    as soon as five things collide on it. For instance, suppose you have two
    thread stack tops mapped to the same cache lines, plus three more data
    structures heavily being accessed.

    Oh, and the above Intel paper does actually mention alloca:

    "The easiest way to adjust the initial stack address for each thread is
    to call the memory allocation function, _alloca, with varying byte
    amounts in the intermediate thread function."

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Bonita Montero on Sat Dec 30 05:51:51 2023
    On 2023-12-30, Bonita Montero <Bonita.Montero@gmail.com> wrote:
    On 30.12.2023 at 05:56, Kaz Kylheku wrote:

    If it's four-way associative, you start to have a performance problem as
    soon as five things collide on it. ...

    With four-way associativity that's rather unlikely.

    Under four-way associativity, five isn't greater than four?

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Bonita Montero on Sat Dec 30 19:27:30 2023
    On 29/12/2023 17:04, Bonita Montero wrote:
    On 29.12.2023 at 13:56, David Brown wrote:

    There is nothing different from alloca() and ordinary stack
    allocations.  But alloca() makes it quick and easy to make large
    allocations, and to do so with random sizes (or at least sizes that
    differ significantly between threads, even if they are running the
    same code).

    I've got my own class overflow_array<> which is like an array<> and a
    vector<>. If you append more than the internal array<> can handle, the
    objects are moved to an internal vector. I think Boost's small_vector
    is similar to that.

    I'm sure that's all very nice, but it is completely and utterly
    irrelevant to the issue being discussed. Perhaps you didn't understand
    your own post?


    Without alloca(), you'd need to do something like arranging to call a
    recursive function a random number of times before it then calls the
    next bit of code in your thread. alloca() is simply far easier and
    faster.

    You've got strange ideas. alloca() has been completely removed from the
    Linux kernel.

    Citation? You are usually wrong in your claims, or at least mixed-up,
    so I won't trust you without evidence. (That does not mean you are
    wrong here - I don't know either way.)

    Of course, avoiding alloca() within the kernel is, again, utterly
    irrelevant to the point under discussion.

    The point is that if there's a fixed upper limit to what you would
    allocate, you could always allocate it statically.


    No, that would be useless.

    The idea is to have /different/ allocation sizes in different threads,
    so that the different threads' stacks sit at wildly different address
    bits in the tags for the processor caches.

    I can't tell you how helpful or not this may be in practice. I am
    merely trying to explain to you what the idea is, since you said you did
    not understand it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Bonita Montero on Sat Dec 30 20:35:03 2023
    On 2023-12-30, Bonita Montero <Bonita.Montero@gmail.com> wrote:
    On 30.12.2023 at 06:51, Kaz Kylheku wrote:

    Under four-way associativity, five isn't greater than four?

    Two-way associativity would leave no space if both threads have
    synchronous stack frames. With four-way associativity there's
    not much likelihood of that happening.

    My comment makes it clear that there are two thread stacks vying
    for that cache line, plus a couple of other accesses that
    are not thread stacks.

    (By the way, set associative caches don't always have full LRU
    replacement strategies.)

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From red floyd@21:1/5 to Chris M. Thomasson on Sat Dec 30 14:58:06 2023
    On 12/30/2023 11:58 AM, Chris M. Thomasson wrote:
    On 12/29/2023 8:45 PM, Bonita Montero wrote:
    On 29.12.2023 at 18:29, Kaz Kylheku wrote:

    I explained it. The allocation is not used. When you call alloca(n),
    the stack pointer moves by n bytes. If you then call a function,
    its stack frame will be offset by that much (plus any alignment if
    n is not aligned).

    According to the paper Scott mentioned, the associativity of the
    Pentium 4's L1 data cache is four. With that it's not necessary
    to have such aliasing prevention.


    Huh? Wow, you really need to write Intel a letter about it wrt their
    older hyperthreaded processors! Although, it seems like you simply do
    not actually _understand_ what is going on here...

    Intel's suggestions for how to mitigate the problem in their earlier
    hyperthreaded processors actually worked wrt improving performance. Keep
    in mind this was a while back in 2004-2005. I am happy that the Wayback
    Machine has my older code.

    Oh, come on, Chris. It's clear that Bonita knows more about what's
    going on inside an Intel processor than Intel does.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Bonita Montero on Sun Dec 31 07:01:07 2023
    On 2023-12-31, Bonita Montero <Bonita.Montero@gmail.com> wrote:
    On 30.12.2023 at 21:35, Kaz Kylheku wrote:

    My comment makes it clear that there are two thread stacks
    vying for that cache line, plus a couple of other accesses
    that are not thread stacks.

    With four way associativity that's rather unlikely to happen.

    What? The associativity of the cache is not the determiner of
    program behavior; if the program accesses five different
    areas whose addresses are the same modulo 65536 bytes,
    that happens whether there is a direct-mapped cache, a fully
    associative cache, no cache at all, or ...




    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Bonita Montero on Sun Dec 31 17:30:09 2023
    On 2023-12-31, Bonita Montero <Bonita.Montero@gmail.com> wrote:
    On 31.12.2023 at 08:01, Kaz Kylheku wrote:
    On 2023-12-31, Bonita Montero <Bonita.Montero@gmail.com> wrote:
    On 30.12.2023 at 21:35, Kaz Kylheku wrote:

    My comment makes it clear that there are two thread stacks
    vying for that cache line, plus a couple of other accesses
    that are not thread stacks.

    With four way associativity that's rather unlikely to happen.

    What? The associativity of the cache is not the determiner
    of program behavior; if the program accesses five different
    areas whose addresses are the same modulo 65536 bytes, ...

    The set size is smaller than 64kB and of course there can be aliasing,
    but with four-way associativity it's unlikely that there's a need to
    control aliasing. And if the set size were 64kB, aliasing would be
    even less likely through page colouring.

    When I describe a scenario in which five items are being frequently
    accessed and collide on the same cache line, which is associative with a
    set size of four, there is necessarily a conflict, because five is
    greater than four.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Bonita Montero on Sun Dec 31 18:44:54 2023
    Bonita Montero <Bonita.Montero@gmail.com> writes:
    On 30.12.2023 at 05:56, Kaz Kylheku wrote:

    If it's four-way associative, you start to have a performance problem as
    soon as five things collide on it. ...

    With four-way associativity that's rather unlikely.


    Why? The set selection is based on specific bits in the address. As soon
    as a fifth address hits the set, you lose one of the existing lines.

    Given the aligned stacks in the specified processor, the next function
    called would _always_ overwrite the elements of the set(s) used by the calling function.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to David Brown on Sun Dec 31 18:49:22 2023
    David Brown <david.brown@hesbynett.no> writes:
    On 29/12/2023 17:04, Bonita Montero wrote:


    You've got strange ideas. alloca() has been completely removed from the
    Linux kernel.

    Citation? You are usually wrong in your claims, or at least mixed-up,
    so I won't trust you without evidence. (That does not mean you are
    wrong here - I don't know either way.)

    Kernel threads generally run with a small, fixed stack. Stack-based
    dynamic allocation is generally avoided. I don't know of any
    hard restrictions, but I suspect that Linus would look askance
    at it.


    Of course, avoiding alloca() within the kernel is, again, utterly
    irrelevant to the point under discussion.

    Very true

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Bonita Montero on Mon Jan 1 08:28:57 2024
    On 2024-01-01, Bonita Montero <Bonita.Montero@gmail.com> wrote:
    On 31.12.2023 at 19:44, Scott Lurndal wrote:

    Why? ...

    Because there must be set conflicts on the same line index.
    That's rather unlikely with four-way associativity.

    It's rather likely. Suppose you have a large number of threads,
    with stacks that end on a multiple of 64 kB.

    Only two of these threads are runnable concurrently on the
    two hyperthreads. However, your application may be rapidly
    switching between them.

    If their stacks evacuate each other from the L1 cache,
    that could be bad.

    For compute-bound tasks with long quanta, it might not
    matter so much; the same threads will occupy the hyperthreads
    for tens of milliseconds at a time or whatever.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Scott Lurndal on Mon Jan 1 12:46:39 2024
    On 31/12/2023 19:49, Scott Lurndal wrote:
    David Brown <david.brown@hesbynett.no> writes:
    On 29/12/2023 17:04, Bonita Montero wrote:


    You've got strange ideas. alloca() has been completely removed from the
    Linux kernel.

    Citation? You are usually wrong in your claims, or at least mixed-up,
    so I won't trust you without evidence. (That does not mean you are
    wrong here - I don't know either way.)

    Kernel threads generally run with a small, fixed stack. Stack-based
    dynamic allocation is generally avoided. I don't know of any
    hard restrictions, but I suspect that Linus would look askance
    at it.


    Oh, I have no doubts that it would be frowned upon in the kernel itself.
    And I know that VLAs have been actively removed from the kernel. But Bonita's claim implies that "alloca()" was used in the kernel earlier,
    and has since been removed, presumably due to a specific decision.


    Of course, avoiding alloca() within the kernel is, again, utterly
    irrelevant to the point under discussion.

    Very true

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Bonita Montero on Sat Jan 6 08:31:06 2024
    On 2024-01-06, Bonita Montero <Bonita.Montero@gmail.com> wrote:
    On 06.01.2024 at 04:21, Chris M. Thomasson wrote:

    Are you trying to tell me that the aliasing problem on those older Intel
    hyperthreaded processors and the workaround (from Intel) was a myth?
    lol. ;^)

    Intel just made a nerd-suggestion. With four-way associativity
    there's no frequent aliasing problem in the L1 data cache of the
    Pentium 4.

    I think the L1 cache was 8K on that thing, and the blocks are 32 bytes.

    I think how it works on the P4 is that the address is structured
    like this:

    bits 31..11    bits 10..5           bits 4..0
    [ 21-bit tag ] [ 6-bit cache set ]  [ 5-bit offset into 32-byte block ]

    Thus say we have an area of the stack with the address
    range nnnnFF80 to nnnnFFFF (128 bytes, 4 x 32 byte cache blocks).

    These four blocks all map to the same set: they have the same six
    bits in the "cache set" part of the address.

    So if a thread is accessing something in all four blocks, it will
    completely use that cache set, all by itself.

    If any other thread has a similar block in its stack, with the same
    cache set ID, it will cause evictions against this thread.

    Sure, if each of these threads confines itself to working with just one cacheline-sized aperture of the stack, it looks better.

    You're forgetting that the sets are very small and that groups of
    four adjacent 32 byte blocks map to the same set. Touch four adjacent
    cache blocks that are aligned on a 128 byte boundary, and you have
    hit full occupancy in the cache set corresponding to that block!

    (I suspect the references to 64K should not be kilobytes but sets.
    The 8K cache has 64 sets.)

    In memory, a 128 byte block that is aligned maps to, and precisely
    covers, a cache set. If two such blocks have addresses that are equal
    modulo 8K, they collide to the same cache set. If one of those blocks is
    fully present in the cache, the other must be fully evicted.

    It's really easy to see how things can go south under hyperthreading.
    If two hyperthreads are working with clashing 128 byte areas that each
    want to hog the same cache set, and the core is switching between them
    on a fine-grained basis, ... you get the picture.

    It's very easy for the memory mapping allocations used for thread
    stacks to produce addresses such that the delta between them is a
    multiple of 8K.
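
    Putting numbers on that, a small sketch of the geometry described
    above (8K, 4-way, 32-byte blocks; the names are mine):

    #include <cstdint>

    constexpr unsigned lineBytes = 32;                        // block size
    constexpr unsigned ways      = 4;                         // associativity
    constexpr unsigned sets      = 8192 / (lineBytes * ways); // 64 sets

    // which set an address falls in: drop the 5 offset bits,
    // keep the next 6 bits
    inline unsigned cacheSet(std::uintptr_t addr)
    {
        return (addr / lineBytes) % sets;
    }

    // addresses whose difference is a multiple of sets * lineBytes
    // (2 KB, hence also any multiple of 8 KB) land in the same set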

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From red floyd@21:1/5 to Chris M. Thomasson on Mon Jan 8 17:14:55 2024
    On 1/8/2024 12:18 PM, Chris M. Thomasson wrote:
    On 1/7/2024 9:48 PM, Bonita Montero wrote:
    Am 07.01.2024 um 21:46 schrieb Chris M. Thomasson:

    I know that they had a problem and the provided workaround from Intel
    really did help out. ...

    Absolutely not, not with four way associativity.


    Whatever you say; Sigh. I am done with this.

    Intel just needs to call Bonita whenever they have an issue.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Bonita Montero on Tue Jan 9 02:02:13 2024
    On 2024-01-08, Bonita Montero <Bonita.Montero@gmail.com> wrote:
    On 07.01.2024 at 21:46, Chris M. Thomasson wrote:

    I know that they had a problem and the provided workaround from Intel
    really did help out. ...

    Absolutely not, not with four way associativity.

    You are referring to that as if it were some rescuing magic.

    Four way associativity is quite poor. It's only a step above two-way,
    which is a step above the bottom of the barrel: direct mapping.

    The cache entries are tiny on the P4: 32 bytes. An entire set is just
    128 bytes. Code that works intensely with a 128 byte block occupies
    the entire set of four cache blocks. If anything else touches memory
    that maps to the same set, an evacuation has to take place, and
    then when that code is resumed, it will take a hit.

    If two hyperthreads run the same function which works intensively
    with a 128 byte area of the stack, and the distance between the
    stacks is a multiple of 8K, they will interfere through the same
    cache set. The cache set is 128 bytes; each thread wants to keep
    its own 128 bytes in it.

    The situation seems far from improbable to me.

    We don't need more than four hyper-threads in order to blow the
    four-way set associative cache. We just need more than one
    in the above scenario.

    Needing more than four would be the case for threads that are hammering
    away on a 32 byte block. (And since there are only two hyperthreads
    on the P4, that wouldn't be a problem.)

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Vir Campestris on Sat Jan 13 21:31:42 2024
    Vir Campestris <vir.campestris@invalid.invalid> writes:

    On 24/12/2023 08:36, Tim Rentsch wrote:

    Depending on how the code is written, no modulo operations need
    to be done, because they will be optimized away and done at
    compile time. If we look at multiplying two numbers represented
    by bits in bytes i and j, the two numbers are

    i*30 + a
    j*30 + b

    for some a and b in { 1, 7, 11, 13, 17, 19, 23, 29 }.

    The values we're interested in are the index of the product and
    the residue of the product, namely

    (i*30+a) * (j*30+b) / 30 (for the index)
    (i*30+a) * (j*30+b) % 30 (for the residue)

    Any term with a *30 in the numerator doesn't contribute to
    the residue, and also can be combined with the by-30 divide
    for computing the index. Thus these expressions can be
    rewritten as

    i*30*j + i*b + j*a + (a*b/30) (for the index)
    a*b%30 (for the residue)

    When a and b have values that are known at compile time,
    neither the divide nor the remainder result in run-time
    operations being done; all of that heavy lifting is
    optimized away and done at compile time. Of course there
    are some multiplies, but they are cheaper than divides, and
    also can be done in parallel. (The multiplication a*b also
    can be done at compile time.)

    The residue needs to be turned into a bit mask to do the
    logical operation on the byte of bits, but here again that
    computation can be optimized away and done at compile time.

    Does that all make sense?

    Right now, no. But that's me. I'll flag it to read again when I've
    had a better night's sleep.

    I'm posting to nudge you into looking at this again, if
    you haven't already.

    I have now had a chance to get your source and run some
    comparisons. A program along the lines I outlined can run much
    faster than the code you posted (as well as needing less memory).
    A good target is to find all primes less than 1e11, which needs
    less than 4 GB of RAM.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)