• faster "CRC"- / "FNV"-hashing

    From Bonita Montero@21:1/5 to All on Sun Oct 10 18:51:56 2021
I derived two hash algorithms from FNV32/64 and CRC64 that don't yield
the same results, and don't retain CRC's error-correction capability, but
which have the same uniform distribution and are much more performant on
modern OoO CPUs. How did I do that?

    These are the results on my Linux Ryzen 7 1800X:

    fnv streamed 32: : 0.964425 GB/s
    fnv blocked 32: : 1.96624 GB/s 104%
    fnv streamed 64: : 0.939418 GB/s
    fnv blocked 64: : 3.11791 GB/s 232%
    crc64: : 0.478093 GB/s
    crc64 blocked: : 2.39144 GB/s 400%
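The post does not reveal the trick, but a plausible reading of "blocked" (and of the OoO remark) is running several independent FNV lanes over interleaved bytes and folding them at the end, which breaks up the serial multiply chain. A minimal sketch of that idea, assuming a 4-lane FNV-1a variant; this is a guess, not necessarily the poster's scheme:

#include <cstdint>
#include <cstddef>

// Hypothetical sketch, not the poster's actual code: four independent FNV-1a
// lanes over interleaved bytes, folded together at the end. Each lane has its
// own dependency chain, so an out-of-order core can overlap the multiplies.
uint64_t fnv64_blocked_sketch( void const *p, size_t n )
{
    constexpr uint64_t FNV_OFFSET = 0xCBF29CE484222325u;
    constexpr uint64_t FNV_PRIME  = 0x100000001B3u;
    uint8_t const *s = static_cast<uint8_t const *>( p );
    uint64_t lanes[4] = { FNV_OFFSET, FNV_OFFSET, FNV_OFFSET, FNV_OFFSET };
    size_t i = 0;
    for( ; i + 4 <= n; i += 4 )              // main loop: four lanes per pass
        for( size_t l = 0; l != 4; ++l )
            lanes[l] = (lanes[l] ^ s[i + l]) * FNV_PRIME;
    uint64_t h = FNV_OFFSET;
    for( uint64_t lane : lanes )             // fold the lane states together
        for( unsigned b = 0; b != 8; ++b )
            h = (h ^ (uint8_t)(lane >> 8 * b)) * FNV_PRIME;
    for( ; i != n; ++i )                     // remaining tail bytes
        h = (h ^ s[i]) * FNV_PRIME;
    return h;
}

The streamed FNV loop is limited by the latency of one multiply per byte; with independent lanes the multiplies overlap, which is consistent with the gains in the numbers above.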

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Chris M. Thomasson@21:1/5 to Bonita Montero on Sun Oct 10 12:36:06 2021
    On 10/10/2021 9:51 AM, Bonita Montero wrote:
    I derived two hash-algorithms from FNV32/64 and CRC64 that don't yield
    the same results, and don't provide the opportunity for error-correction
    for CRC, but which have the same equal distribution and are much more performant on modern OoO-CPUS. How did I do that ?

    I don't know.


    These are the results on my Linux Ryzen 7 1800X:

    fnv streamed 32: : 0.964425 GB/s
    fnv blocked 32:  : 1.96624 GB/s 104%
    fnv streamed 64: : 0.939418 GB/s
    fnv blocked 64:  : 3.11791 GB/s 232%
    crc64:           : 0.478093 GB/s
    crc64 blocked:   : 2.39144 GB/s 400%


    Btw, if you can come up with a really fast SHA2 impl, I would be
    interested because of my experimental HMAC cipher. I have a C version:

    https://groups.google.com/g/comp.lang.c/c/a53VxN8cwkY/m/XKl1-0a8DAAJ

    https://pastebin.com/raw/feUnA3kP

    Also, I put it up online using a rather inefficient, but working SHA2
    lib. I say inefficient because it does not provide an update method.

http://fractallife247.com/test/hmac_cipher/ver_0_0_0_1?ct_hmac_cipher=0422a78fffa58f349a486b3842d2eedfa87985658fb9f011153b896fb97b4b291224ddd327017e9fcdf4b3d8fd5dfde47ae8f23639044f7c5c73a1f0a891087814a139dfe44e47b4300cac921f736776ab7042fb09aae38f8780aa49e5cd128d141e2982d3aa4b288fceef939126c0a319da20b0cf219732504491eb14c691149f
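For reference on the "update method" point above, a minimal streaming SHA-256 sketch using OpenSSL's EVP interface (assuming OpenSSL is acceptable here; the two Update calls just show feeding the message in pieces):

#include <openssl/evp.h>
#include <cstdio>

int main()
{
    unsigned char md[EVP_MAX_MD_SIZE];
    unsigned int mdLen = 0;
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    EVP_DigestInit_ex( ctx, EVP_sha256(), nullptr );
    EVP_DigestUpdate( ctx, "hello ", 6 );    // feed the message in pieces
    EVP_DigestUpdate( ctx, "world", 5 );
    EVP_DigestFinal_ex( ctx, md, &mdLen );
    EVP_MD_CTX_free( ctx );
    for( unsigned i = 0; i != mdLen; ++i )
        std::printf( "%02x", md[i] );
    std::printf( "\n" );
}

Recent OpenSSL builds also dispatch to the hardware SHA extensions when the CPU has them, which speaks to the "really fast SHA2 impl" request as well.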

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Chris M. Thomasson on Sun Oct 10 22:51:37 2021
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:


    Btw, if you can come up with a really fast SHA2 impl, I would be
    interested because of my experimental HMAC cipher. I have a C version:

https://en.wikipedia.org/wiki/Intel_SHA_extensions
https://developer.arm.com/documentation/100076/0100/a64-instruction-set-reference/a64-cryptographic-algorithms/a64-cryptographic-instructions?lang=en
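Before dispatching to such a hardware path, code typically checks for the extension at runtime. A small sketch for x86 with GCC or clang; the SHA feature flag is CPUID leaf 7, subleaf 0, EBX bit 29 (the function name is illustrative):

#include <cpuid.h>
#include <cstdio>

static bool hasShaExtensions()
{
    unsigned eax, ebx, ecx, edx;
    // CPUID.(EAX=7, ECX=0):EBX bit 29 reports the SHA extensions
    if( !__get_cpuid_count( 7, 0, &eax, &ebx, &ecx, &edx ) )
        return false;
    return (ebx >> 29) & 1;
}

int main()
{
    std::printf( "SHA extensions: %s\n", hasShaExtensions() ? "yes" : "no" );
}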

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bonita Montero@21:1/5 to All on Mon Oct 11 08:09:53 2021
On 10.10.2021 at 21:36, Chris M. Thomasson wrote:
    On 10/10/2021 9:51 AM, Bonita Montero wrote:
    I derived two hash-algorithms from FNV32/64 and CRC64 that don't yield
    the same results, and don't provide the opportunity for error-correction
    for CRC, but which have the same equal distribution and are much more
    performant on modern OoO-CPUS. How did I do that ?

    I don't know.


    These are the results on my Linux Ryzen 7 1800X:

    fnv streamed 32: : 0.964425 GB/s
    fnv blocked 32:  : 1.96624 GB/s 104%
    fnv streamed 64: : 0.939418 GB/s
    fnv blocked 64:  : 3.11791 GB/s 232%
    crc64:           : 0.478093 GB/s
    crc64 blocked:   : 2.39144 GB/s 400%


    Btw, if you can come up with a really fast SHA2 impl, I would be
    interested because of my experimental HMAC cipher. I have a C version:

SHA* is completely different and can't be improved the way I did it; its rounds form one serial dependency chain per message, so there is nothing independent to interleave.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Branimir Maksimovic@21:1/5 to Bonita Montero on Mon Oct 11 05:23:11 2021
    On 2021-10-10, Bonita Montero <Bonita.Montero@gmail.com> wrote:
    I derived two hash-algorithms from FNV32/64 and CRC64 that don't yield
    the same results, and don't provide the opportunity for error-correction
    for CRC, but which have the same equal distribution and are much more performant on modern OoO-CPUS. How did I do that ?

    These are the results on my Linux Ryzen 7 1800X:

    fnv streamed 32: : 0.964425 GB/s
    fnv blocked 32: : 1.96624 GB/s 104%
    fnv streamed 64: : 0.939418 GB/s
    fnv blocked 64: : 3.11791 GB/s 232%
    crc64: : 0.478093 GB/s
    crc64 blocked: : 2.39144 GB/s 400%

    Proprietary?

    --

    7-77-777
    Evil Sinner!
    with software, you repeat same experiment, expecting different results...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Branimir Maksimovic@21:1/5 to Bonita Montero on Mon Oct 11 05:41:06 2021
    On 2021-10-10, Bonita Montero <Bonita.Montero@gmail.com> wrote:
    I derived two hash-algorithms from FNV32/64 and CRC64 that don't yield
    the same results, and don't provide the opportunity for error-correction
    for CRC, but which have the same equal distribution and are much more performant on modern OoO-CPUS. How did I do that ?

    These are the results on my Linux Ryzen 7 1800X:

    fnv streamed 32: : 0.964425 GB/s
    fnv blocked 32: : 1.96624 GB/s 104%
    fnv streamed 64: : 0.939418 GB/s
    fnv blocked 64: : 3.11791 GB/s 232%
    crc64: : 0.478093 GB/s
    crc64 blocked: : 2.39144 GB/s 400%

    Here is mine :


/*=============================================================================
 * Copyright : (c) 2004 by Dzenis Softic / http://www.dzeni.com
 *
 * Filename : vmp_crc32.c
 *
 * Description: Calculates crc32 of string
 *
 * Company : Seenetix D.O.O.
 *
 * Authors : B. Maksimovic
 *
 * $Id$
 *
 *===========================================================================*/

#include <stddef.h>

    static unsigned crc32_table[] =
    {
    0x00000000, 0x77073096, 0xEE0E612C, 0x990951BA,
    0x076DC419, 0x706AF48F, 0xE963A535, 0x9E6495A3,
    0x0EDB8832, 0x79DCB8A4, 0xE0D5E91E, 0x97D2D988,
    0x09B64C2B, 0x7EB17CBD, 0xE7B82D07, 0x90BF1D91,
    0x1DB71064, 0x6AB020F2, 0xF3B97148, 0x84BE41DE,
    0x1ADAD47D, 0x6DDDE4EB, 0xF4D4B551, 0x83D385C7,
    0x136C9856, 0x646BA8C0, 0xFD62F97A, 0x8A65C9EC,
    0x14015C4F, 0x63066CD9, 0xFA0F3D63, 0x8D080DF5,
    0x3B6E20C8, 0x4C69105E, 0xD56041E4, 0xA2677172,
    0x3C03E4D1, 0x4B04D447, 0xD20D85FD, 0xA50AB56B,
    0x35B5A8FA, 0x42B2986C, 0xDBBBC9D6, 0xACBCF940,
    0x32D86CE3, 0x45DF5C75, 0xDCD60DCF, 0xABD13D59,
    0x26D930AC, 0x51DE003A, 0xC8D75180, 0xBFD06116,
    0x21B4F4B5, 0x56B3C423, 0xCFBA9599, 0xB8BDA50F,
    0x2802B89E, 0x5F058808, 0xC60CD9B2, 0xB10BE924,
    0x2F6F7C87, 0x58684C11, 0xC1611DAB, 0xB6662D3D,

    0x76DC4190, 0x01DB7106, 0x98D220BC, 0xEFD5102A,
    0x71B18589, 0x06B6B51F, 0x9FBFE4A5, 0xE8B8D433,
    0x7807C9A2, 0x0F00F934, 0x9609A88E, 0xE10E9818,
    0x7F6A0DBB, 0x086D3D2D, 0x91646C97, 0xE6635C01,
    0x6B6B51F4, 0x1C6C6162, 0x856530D8, 0xF262004E,
    0x6C0695ED, 0x1B01A57B, 0x8208F4C1, 0xF50FC457,
    0x65B0D9C6, 0x12B7E950, 0x8BBEB8EA, 0xFCB9887C,
    0x62DD1DDF, 0x15DA2D49, 0x8CD37CF3, 0xFBD44C65,
    0x4DB26158, 0x3AB551CE, 0xA3BC0074, 0xD4BB30E2,
    0x4ADFA541, 0x3DD895D7, 0xA4D1C46D, 0xD3D6F4FB,
    0x4369E96A, 0x346ED9FC, 0xAD678846, 0xDA60B8D0,
    0x44042D73, 0x33031DE5, 0xAA0A4C5F, 0xDD0D7CC9,
    0x5005713C, 0x270241AA, 0xBE0B1010, 0xC90C2086,
    0x5768B525, 0x206F85B3, 0xB966D409, 0xCE61E49F,
    0x5EDEF90E, 0x29D9C998, 0xB0D09822, 0xC7D7A8B4,
    0x59B33D17, 0x2EB40D81, 0xB7BD5C3B, 0xC0BA6CAD,

    0xEDB88320, 0x9ABFB3B6, 0x03B6E20C, 0x74B1D29A,
    0xEAD54739, 0x9DD277AF, 0x04DB2615, 0x73DC1683,
    0xE3630B12, 0x94643B84, 0x0D6D6A3E, 0x7A6A5AA8,
    0xE40ECF0B, 0x9309FF9D, 0x0A00AE27, 0x7D079EB1,
    0xF00F9344, 0x8708A3D2, 0x1E01F268, 0x6906C2FE,
    0xF762575D, 0x806567CB, 0x196C3671, 0x6E6B06E7,
    0xFED41B76, 0x89D32BE0, 0x10DA7A5A, 0x67DD4ACC,
    0xF9B9DF6F, 0x8EBEEFF9, 0x17B7BE43, 0x60B08ED5,
    0xD6D6A3E8, 0xA1D1937E, 0x38D8C2C4, 0x4FDFF252,
    0xD1BB67F1, 0xA6BC5767, 0x3FB506DD, 0x48B2364B,
    0xD80D2BDA, 0xAF0A1B4C, 0x36034AF6, 0x41047A60,
    0xDF60EFC3, 0xA867DF55, 0x316E8EEF, 0x4669BE79,
    0xCB61B38C, 0xBC66831A, 0x256FD2A0, 0x5268E236,
    0xCC0C7795, 0xBB0B4703, 0x220216B9, 0x5505262F,
    0xC5BA3BBE, 0xB2BD0B28, 0x2BB45A92, 0x5CB36A04,
    0xC2D7FFA7, 0xB5D0CF31, 0x2CD99E8B, 0x5BDEAE1D,

    0x9B64C2B0, 0xEC63F226, 0x756AA39C, 0x026D930A,
    0x9C0906A9, 0xEB0E363F, 0x72076785, 0x05005713,
    0x95BF4A82, 0xE2B87A14, 0x7BB12BAE, 0x0CB61B38,
    0x92D28E9B, 0xE5D5BE0D, 0x7CDCEFB7, 0x0BDBDF21,
    0x86D3D2D4, 0xF1D4E242, 0x68DDB3F8, 0x1FDA836E,
    0x81BE16CD, 0xF6B9265B, 0x6FB077E1, 0x18B74777,
    0x88085AE6, 0xFF0F6A70, 0x66063BCA, 0x11010B5C,
    0x8F659EFF, 0xF862AE69, 0x616BFFD3, 0x166CCF45,
    0xA00AE278, 0xD70DD2EE, 0x4E048354, 0x3903B3C2,
    0xA7672661, 0xD06016F7, 0x4969474D, 0x3E6E77DB,
    0xAED16A4A, 0xD9D65ADC, 0x40DF0B66, 0x37D83BF0,
    0xA9BCAE53, 0xDEBB9EC5, 0x47B2CF7F, 0x30B5FFE9,
    0xBDBDF21C, 0xCABAC28A, 0x53B39330, 0x24B4A3A6,
    0xBAD03605, 0xCDD70693, 0x54DE5729, 0x23D967BF,
    0xB3667A2E, 0xC4614AB8, 0x5D681B02, 0x2A6F2B94,
    0xB40BBE37, 0xC30C8EA1, 0x5A05DF1B, 0x2D02EF8D,

    };

inline static unsigned calc_crc32(unsigned char b, unsigned crc32)
{
    return (crc32 >> 8) ^ crc32_table[b ^ (crc32 & 0x000000FF)];
}

unsigned vmp_crc32str(const char* str)
{
    unsigned crc32_tmp = 0xFFFFFFFF;

    while(*str)
    {
        crc32_tmp = calc_crc32(*str++, crc32_tmp);
    }

    crc32_tmp = ~crc32_tmp;

    return crc32_tmp;
}

unsigned vmp_crc32(const void* src, size_t size)
{
    unsigned crc32_tmp = 0xFFFFFFFF;
    const unsigned char* p = src;   /* no arithmetic on void* */
    size_t i = 0;
    for(; i < size; ++i)
    {
        crc32_tmp = calc_crc32(p[i], crc32_tmp);
    }
    crc32_tmp = ~crc32_tmp;

    return crc32_tmp;
}

    /*=============================================================================
    * History:
    *
    * $Log$
    * Revision 1.2 2004/04/20 20:08:46 bmaxa
    * added vmp_crc32 void version
    *
    * Revision 1.1 2004/04/20 19:31:53 bmaxa
    * crc32 added
    *
    *
    *===========================================================================*/
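A usage sketch for the file above (linking it from C++ here, hence the extern "C"): the table is the standard reflected CRC-32 table (polynomial 0xEDB88320, as used by zlib and PNG), so the conventional check value for "123456789" should come out as 0xCBF43926.

#include <cstdio>
#include <cassert>

extern "C" unsigned vmp_crc32str( const char *str );   // from vmp_crc32.c

int main()
{
    unsigned crc = vmp_crc32str( "123456789" );
    std::printf( "crc32(\"123456789\") = %08X\n", crc );
    assert( crc == 0xCBF43926u );             // standard CRC-32 check value
}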


    7-77-777
    Evil Sinner!
    with software, you repeat same experiment, expecting different results...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Branimir Maksimovic@21:1/5 to Bonita Montero on Mon Oct 11 06:32:22 2021
    On 2021-10-11, Bonita Montero <Bonita.Montero@gmail.com> wrote:
On 11.10.2021 at 07:23, Branimir Maksimovic wrote:
    On 2021-10-10, Bonita Montero <Bonita.Montero@gmail.com> wrote:
    I derived two hash-algorithms from FNV32/64 and CRC64 that don't yield
the same results, and don't provide the opportunity for error-correction
for CRC, but which have the same equal distribution and are much more
    performant on modern OoO-CPUS. How did I do that ?

    These are the results on my Linux Ryzen 7 1800X:

    fnv streamed 32: : 0.964425 GB/s
    fnv blocked 32: : 1.96624 GB/s 104%
    fnv streamed 64: : 0.939418 GB/s
    fnv blocked 64: : 3.11791 GB/s 232%
    crc64: : 0.478093 GB/s
    crc64 blocked: : 2.39144 GB/s 400%

    Proprietary?

    That's the improved "CRC64":

    #include "crc64.h"

    using namespace std;

    uint64_t CRC64_ECMA182::operator ()( void const *p, size_t n, uint64_t startCrc ) const
    {
    uint64_t crc = startCrc;
    uint8_t const *s = (uint8_t *)p,
    *end = s + n;
    size_t t;
    for( ; s != end; ++s )
    t = (size_t)(crc >> 56) ^ *s,
    crc = table.t[t] ^ (crc << 8);
    return crc;
    }

    uint64_t CRC64_ECMA182::blocked( void const *p, size_t n, uint64_t
    startCrc ) const
    {
    auto crc64_8x8 = []( uint8_t const *s ) -> uint64_t
    {
    uint64_t crcs[8] =
    {
    table.t[s[ 0]],
    table.t[s[ 8]],
    table.t[s[16]],
    table.t[s[24]],
    table.t[s[32]],
    table.t[s[40]],
    table.t[s[48]],
    table.t[s[56]]
    };
    size_t t;
    uint8_t const *end = ++s + 7;
    do
    t = (size_t)(crcs[0] >> 56) ^ s[ 0],
    crcs[0] = table.t[t] ^ (crcs[0] << 8),
    t = (size_t)(crcs[1] >> 56) ^ s[ 8],
    crcs[1] = table.t[t] ^ (crcs[1] << 8),
    t = (size_t)(crcs[2] >> 56) ^ s[16],
    crcs[2] = table.t[t] ^ (crcs[2] << 8),
    t = (size_t)(crcs[3] >> 56) ^ s[24],
    crcs[3] = table.t[t] ^ (crcs[3] << 8),
    t = (size_t)(crcs[4] >> 56) ^ s[32],
    crcs[4] = table.t[t] ^ (crcs[4] << 8),
    t = (size_t)(crcs[5] >> 56) ^ s[40],
    crcs[5] = table.t[t] ^ (crcs[5] << 8),
    t = (size_t)(crcs[6] >> 56) ^ s[48],
    crcs[6] = table.t[t] ^ (crcs[6] << 8),
    t = (size_t)(crcs[7] >> 56) ^ s[56],
    crcs[7] = table.t[t] ^ (crcs[7] << 8);
    while( ++s != end );
    uint64_t crc = 0;
    for( size_t i = 0; i != 8; ++i )
    crc ^= crcs[i];
    return crc;
    };
    uint8_t const *s = (uint8_t *)p;
    uint64_t crc = startCrc;
    for( uint8_t const *end = s + (n & -64); s != end; s += 64 )
    crc ^= crc64_8x8( s );
    crc ^= (*this)( s, n % 64, 0 );
    return crc;
    }

    CRC64_ECMA182::crc64_table::crc64_table()
    {
    uint64_t const CRC64_ECMA182_POLY = 0x42F0E1EBA9EA3693u;
    for( uint64_t i = 0; i != 256; ++i )
    {
    uint64_t crc = 0,
    c = i << 56;
    for( unsigned j = 0; j != 8; ++j )
    crc = (int64_t)(crc ^ c) < 0 ? (crc << 1) ^ CRC64_ECMA182_POLY : crc
    << 1,
    c <<= 1;
    t[(size_t)i] = crc;
    }
    }

    CRC64_ECMA182::crc64_table CRC64_ECMA182::table;

    Why does it run faster ?
Dunno, haven't had the need to calculate crc64 yet :P
Better to pre-generate the table; don't waste time on generating it at runtime :P


    --

    7-77-777
    Evil Sinner!
    with software, you repeat same experiment, expecting different results...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Branimir Maksimovic@21:1/5 to Bonita Montero on Mon Oct 11 06:29:03 2021
    On 2021-10-11, Bonita Montero <Bonita.Montero@gmail.com> wrote:
On 11.10.2021 at 07:23, Branimir Maksimovic wrote:
    On 2021-10-10, Bonita Montero <Bonita.Montero@gmail.com> wrote:
    I derived two hash-algorithms from FNV32/64 and CRC64 that don't yield
the same results, and don't provide the opportunity for error-correction
for CRC, but which have the same equal distribution and are much more
    performant on modern OoO-CPUS. How did I do that ?

    These are the results on my Linux Ryzen 7 1800X:

    fnv streamed 32: : 0.964425 GB/s
    fnv blocked 32: : 1.96624 GB/s 104%
    fnv streamed 64: : 0.939418 GB/s
    fnv blocked 64: : 3.11791 GB/s 232%
    crc64: : 0.478093 GB/s
    crc64 blocked: : 2.39144 GB/s 400%

    Proprietary?

    Of course - but the same equal distribution.

Dunno; those hashing algos work much better if there is hardware
support. But of course, faster is better :P

    --

    7-77-777
    Evil Sinner!
    with software, you repeat same experiment, expecting different results...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bonita Montero@21:1/5 to All on Mon Oct 11 08:27:24 2021
On 11.10.2021 at 07:23, Branimir Maksimovic wrote:
    On 2021-10-10, Bonita Montero <Bonita.Montero@gmail.com> wrote:
    I derived two hash-algorithms from FNV32/64 and CRC64 that don't yield
    the same results, and don't provide the opportunity for error-correction
    for CRC, but which have the same equal distribution and are much more
    performant on modern OoO-CPUS. How did I do that ?

    These are the results on my Linux Ryzen 7 1800X:

    fnv streamed 32: : 0.964425 GB/s
    fnv blocked 32: : 1.96624 GB/s 104%
    fnv streamed 64: : 0.939418 GB/s
    fnv blocked 64: : 3.11791 GB/s 232%
    crc64: : 0.478093 GB/s
    crc64 blocked: : 2.39144 GB/s 400%

    Proprietary?

    That's the improved "CRC64":

#include "crc64.h"

using namespace std;

uint64_t CRC64_ECMA182::operator ()( void const *p, size_t n, uint64_t startCrc ) const
{
    uint64_t crc = startCrc;
    uint8_t const *s = (uint8_t *)p,
                  *end = s + n;
    size_t t;
    for( ; s != end; ++s )
        t = (size_t)(crc >> 56) ^ *s,
        crc = table.t[t] ^ (crc << 8);
    return crc;
}

uint64_t CRC64_ECMA182::blocked( void const *p, size_t n, uint64_t startCrc ) const
{
    auto crc64_8x8 = []( uint8_t const *s ) -> uint64_t
    {
        uint64_t crcs[8] =
        {
            table.t[s[ 0]],
            table.t[s[ 8]],
            table.t[s[16]],
            table.t[s[24]],
            table.t[s[32]],
            table.t[s[40]],
            table.t[s[48]],
            table.t[s[56]]
        };
        size_t t;
        uint8_t const *end = ++s + 7;
        do
            t = (size_t)(crcs[0] >> 56) ^ s[ 0],
            crcs[0] = table.t[t] ^ (crcs[0] << 8),
            t = (size_t)(crcs[1] >> 56) ^ s[ 8],
            crcs[1] = table.t[t] ^ (crcs[1] << 8),
            t = (size_t)(crcs[2] >> 56) ^ s[16],
            crcs[2] = table.t[t] ^ (crcs[2] << 8),
            t = (size_t)(crcs[3] >> 56) ^ s[24],
            crcs[3] = table.t[t] ^ (crcs[3] << 8),
            t = (size_t)(crcs[4] >> 56) ^ s[32],
            crcs[4] = table.t[t] ^ (crcs[4] << 8),
            t = (size_t)(crcs[5] >> 56) ^ s[40],
            crcs[5] = table.t[t] ^ (crcs[5] << 8),
            t = (size_t)(crcs[6] >> 56) ^ s[48],
            crcs[6] = table.t[t] ^ (crcs[6] << 8),
            t = (size_t)(crcs[7] >> 56) ^ s[56],
            crcs[7] = table.t[t] ^ (crcs[7] << 8);
        while( ++s != end );
        uint64_t crc = 0;
        for( size_t i = 0; i != 8; ++i )
            crc ^= crcs[i];
        return crc;
    };
    uint8_t const *s = (uint8_t *)p;
    uint64_t crc = startCrc;
    for( uint8_t const *end = s + (n & -64); s != end; s += 64 )
        crc ^= crc64_8x8( s );
    crc ^= (*this)( s, n % 64, 0 );
    return crc;
}

CRC64_ECMA182::crc64_table::crc64_table()
{
    uint64_t const CRC64_ECMA182_POLY = 0x42F0E1EBA9EA3693u;
    for( uint64_t i = 0; i != 256; ++i )
    {
        uint64_t crc = 0,
                 c = i << 56;
        for( unsigned j = 0; j != 8; ++j )
            crc = (int64_t)(crc ^ c) < 0 ? (crc << 1) ^ CRC64_ECMA182_POLY : crc << 1,
            c <<= 1;
        t[(size_t)i] = crc;
    }
}

CRC64_ECMA182::crc64_table CRC64_ECMA182::table;
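The post includes "crc64.h" but not its contents. A plausible reconstruction, inferred only from the members the .cpp above uses (the default arguments are an assumption):

// crc64.h : plausible reconstruction, not from the original post
#pragma once
#include <cstdint>
#include <cstddef>

struct CRC64_ECMA182
{
    // streamed: one table lookup per byte, one serial dependency chain
    uint64_t operator ()( void const *p, size_t n, uint64_t startCrc = 0 ) const;
    // blocked: eight byte-interleaved lanes per 64-byte block, XORed together
    uint64_t blocked( void const *p, size_t n, uint64_t startCrc = 0 ) const;

    struct crc64_table
    {
        crc64_table();          // fills t[] from the ECMA-182 polynomial
        uint64_t t[256];
    };
    static crc64_table table;
};

As for the speed question below: the eight lanes in blocked() are independent dependency chains, so an out-of-order core can overlap their lookups and shifts, while the streamed loop serializes on a single chain.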

Why does it run faster?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bonita Montero@21:1/5 to All on Mon Oct 11 08:15:55 2021
On 11.10.2021 at 07:23, Branimir Maksimovic wrote:
    On 2021-10-10, Bonita Montero <Bonita.Montero@gmail.com> wrote:
    I derived two hash-algorithms from FNV32/64 and CRC64 that don't yield
    the same results, and don't provide the opportunity for error-correction
    for CRC, but which have the same equal distribution and are much more
    performant on modern OoO-CPUS. How did I do that ?

    These are the results on my Linux Ryzen 7 1800X:

    fnv streamed 32: : 0.964425 GB/s
    fnv blocked 32: : 1.96624 GB/s 104%
    fnv streamed 64: : 0.939418 GB/s
    fnv blocked 64: : 3.11791 GB/s 232%
    crc64: : 0.478093 GB/s
    crc64 blocked: : 2.39144 GB/s 400%

    Proprietary?

Of course - but with the same uniform distribution.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bonita Montero@21:1/5 to All on Mon Oct 11 08:33:33 2021
On 11.10.2021 at 08:32, Branimir Maksimovic wrote:

    CRC64_ECMA182::crc64_table::crc64_table()
    {
    uint64_t const CRC64_ECMA182_POLY = 0x42F0E1EBA9EA3693u;
    for( uint64_t i = 0; i != 256; ++i )
    {
    uint64_t crc = 0,
    c = i << 56;
    for( unsigned j = 0; j != 8; ++j )
    crc = (int64_t)(crc ^ c) < 0 ? (crc << 1) ^ CRC64_ECMA182_POLY : crc
    << 1,
    c <<= 1;
    t[(size_t)i] = crc;
    }
    }

    CRC64_ECMA182::crc64_table CRC64_ECMA182::table;

    Why does it run faster ?
    Dunno, haven't have need to calculate crc64 yet :P
    Better to generate table, don't waste time on generation :P

Eeeh, I'm also using a table, as you can see above.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bonita Montero@21:1/5 to All on Mon Oct 11 08:34:58 2021
On 11.10.2021 at 08:29, Branimir Maksimovic wrote:
    On 2021-10-11, Bonita Montero <Bonita.Montero@gmail.com> wrote:
On 11.10.2021 at 07:23, Branimir Maksimovic wrote:
    On 2021-10-10, Bonita Montero <Bonita.Montero@gmail.com> wrote:
I derived two hash-algorithms from FNV32/64 and CRC64 that don't yield
the same results, and don't provide the opportunity for error-correction
for CRC, but which have the same equal distribution and are much more
    performant on modern OoO-CPUS. How did I do that ?

    These are the results on my Linux Ryzen 7 1800X:

    fnv streamed 32: : 0.964425 GB/s
    fnv blocked 32: : 1.96624 GB/s 104%
    fnv streamed 64: : 0.939418 GB/s
    fnv blocked 64: : 3.11791 GB/s 232%
    crc64: : 0.478093 GB/s
    crc64 blocked: : 2.39144 GB/s 400%

    Proprietary?

    Of course - but the same equal distribution.

    Dunno, those hashing algos work much better if there is
    support from hardware. But, of course faster, better :P

There are special SSE instructions only for CRC32, but not for CRC64.
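For completeness, a sketch of the hardware path that does exist: the SSE4.2 CRC32 instruction. Note it computes CRC-32C (the Castagnoli polynomial), not this thread's CRC-64/ECMA-182; a fast hardware CRC-64 is instead usually built on carry-less multiplication (PCLMULQDQ), which is what Intel's ISA-L does. The function name and the usual init/final-invert convention are assumptions:

#include <nmmintrin.h>     // SSE4.2 intrinsics; compile with -msse4.2
#include <cstdint>
#include <cstddef>
#include <cstring>

uint32_t crc32c_hw( void const *p, size_t n )
{
    uint8_t const *s = static_cast<uint8_t const *>( p );
    uint64_t crc = 0xFFFFFFFFu;
    for( ; n >= 8; n -= 8, s += 8 )
    {
        uint64_t chunk;
        std::memcpy( &chunk, s, 8 );          // unaligned-safe 8-byte load
        crc = _mm_crc32_u64( crc, chunk );    // 8 bytes per instruction
    }
    for( ; n; --n, ++s )                      // tail bytes
        crc = _mm_crc32_u8( (uint32_t)crc, *s );
    return ~(uint32_t)crc;
}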

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Branimir Maksimovic@21:1/5 to Bonita Montero on Mon Oct 11 07:46:59 2021
    On 2021-10-11, Bonita Montero <Bonita.Montero@gmail.com> wrote:
On 11.10.2021 at 08:32, Branimir Maksimovic wrote:

    CRC64_ECMA182::crc64_table::crc64_table()
    {
    uint64_t const CRC64_ECMA182_POLY = 0x42F0E1EBA9EA3693u;
    for( uint64_t i = 0; i != 256; ++i )
    {
    uint64_t crc = 0,
    c = i << 56;
    for( unsigned j = 0; j != 8; ++j )
    crc = (int64_t)(crc ^ c) < 0 ? (crc << 1) ^ CRC64_ECMA182_POLY : crc
    << 1,
    c <<= 1;
    t[(size_t)i] = crc;
    }
    }

    CRC64_ECMA182::crc64_table CRC64_ECMA182::table;

    Why does it run faster ?
    Dunno, haven't have need to calculate crc64 yet :P
    Better to generate table, don't waste time on generation :P

    Eeeh, I'm using also a table as you can see from above.


    What do you think about following: https://github.com/intel/isa-l/tree/master/crc

    --

    7-77-777
    Evil Sinner!
    with software, you repeat same experiment, expecting different results...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Branimir Maksimovic@21:1/5 to Bonita Montero on Mon Oct 11 09:11:56 2021
    On 2021-10-11, Bonita Montero <Bonita.Montero@gmail.com> wrote:
On 11.10.2021 at 09:46, Branimir Maksimovic wrote:
    On 2021-10-11, Bonita Montero <Bonita.Montero@gmail.com> wrote:
On 11.10.2021 at 08:32, Branimir Maksimovic wrote:

    CRC64_ECMA182::crc64_table::crc64_table()
    {
    uint64_t const CRC64_ECMA182_POLY = 0x42F0E1EBA9EA3693u;
    for( uint64_t i = 0; i != 256; ++i )
    {
    uint64_t crc = 0,
    c = i << 56;
    for( unsigned j = 0; j != 8; ++j )
    crc = (int64_t)(crc ^ c) < 0 ? (crc << 1) ^ CRC64_ECMA182_POLY : crc
    << 1,
    c <<= 1;
    t[(size_t)i] = crc;
    }
    }

    CRC64_ECMA182::crc64_table CRC64_ECMA182::table;

    Why does it run faster ?
    Dunno, haven't have need to calculate crc64 yet :P
    Better to generate table, don't waste time on generation :P

    Eeeh, I'm using also a table as you can see from above.


    What do you think about following:
    https://github.com/intel/isa-l/tree/master/crc

    I won't check this ASM-code. An I don't know why people use ASM.
    C / C++ and intrinsics usually result in better code.
If you want to be a hacker you have to program in ASM
(without a debugger :) )
ASM code is always the most efficient and works as tested without
surprises :P
C/C++ you can use after you master ASM :P
IMO :P

    --

    7-77-777
    Evil Sinner!
    with software, you repeat same experiment, expecting different results...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bonita Montero@21:1/5 to All on Mon Oct 11 10:53:30 2021
On 11.10.2021 at 09:46, Branimir Maksimovic wrote:
    On 2021-10-11, Bonita Montero <Bonita.Montero@gmail.com> wrote:
On 11.10.2021 at 08:32, Branimir Maksimovic wrote:

    CRC64_ECMA182::crc64_table::crc64_table()
    {
    uint64_t const CRC64_ECMA182_POLY = 0x42F0E1EBA9EA3693u;
    for( uint64_t i = 0; i != 256; ++i )
    {
    uint64_t crc = 0,
    c = i << 56;
    for( unsigned j = 0; j != 8; ++j )
    crc = (int64_t)(crc ^ c) < 0 ? (crc << 1) ^ CRC64_ECMA182_POLY : crc
    << 1,
    c <<= 1;
    t[(size_t)i] = crc;
    }
    }

    CRC64_ECMA182::crc64_table CRC64_ECMA182::table;

    Why does it run faster ?
    Dunno, haven't have need to calculate crc64 yet :P
    Better to generate table, don't waste time on generation :P

    Eeeh, I'm using also a table as you can see from above.


    What do you think about following: https://github.com/intel/isa-l/tree/master/crc

I won't check this ASM code. And I don't know why people use ASM.
C / C++ and intrinsics usually result in better code.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bonita Montero@21:1/5 to All on Mon Oct 11 11:18:06 2021
On 11.10.2021 at 11:11, Branimir Maksimovic wrote:

    ASM code is most efficient always and works as tested without
    surprises :P

ASM code can be faster in rare cases when you know everything about
your OoO CPU, but in most cases the compiler generates better code.
I've seen code from clang where you might think: there is no ASM programmer
that knows all of these optimization tricks.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Branimir Maksimovic@21:1/5 to Bonita Montero on Mon Oct 11 10:42:13 2021
    On 2021-10-11, Bonita Montero <Bonita.Montero@gmail.com> wrote:
On 11.10.2021 at 11:11, Branimir Maksimovic wrote:

    ASM code is most efficient always and works as tested without
    surprises :P

    ASM code can be faster in rare cases when you know everyting about
    your OoO-CPU, but in most cases the compiler generates better code.
    I've seen code from clang where you might think: there would be no ASM-programmer that knows all of these optimization-tricks.
A human always beats the compiler, as you can always examine the
compiler-generated code, learn from it and beat it :
Also, humans are better at algorithms :P
As the major optimisation is *always the algorithm* :P

    --

    7-77-777
    Evil Sinner!
    with software, you repeat same experiment, expecting different results...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Branimir Maksimovic@21:1/5 to Bonita Montero on Mon Oct 11 11:12:47 2021
    On 2021-10-11, Bonita Montero <Bonita.Montero@gmail.com> wrote:
On 11.10.2021 at 12:42, Branimir Maksimovic wrote:

    Human always beats compiler, as you can always examine
    compiler generated code, learn and bit it :

    Humans tend to write Asm that is readable. Compilers generate
    Asm that's often not readbale for performance reasons.
    Readable and compact, not bloated :P
    Optimize in iterations :P

    --

    7-77-777
    Evil Sinner!
    with software, you repeat same experiment, expecting different results...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bonita Montero@21:1/5 to All on Mon Oct 11 13:03:41 2021
On 11.10.2021 at 12:42, Branimir Maksimovic wrote:

    Human always beats compiler, as you can always examine
    compiler generated code, learn and bit it :

Humans tend to write Asm that is readable. Compilers generate
Asm that's often not readable, for performance reasons.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bonita Montero@21:1/5 to All on Mon Oct 11 13:44:49 2021
    Readable and compact, not bloated :P

    Inlining makes bloat - but is usually more performant.
    Loop-unrolling of small loops makes bloat - but is usually
    more performant.

    Optimize in iterations :P

Hand-written asm is usually slower because there are only a small
number of asm writers who know all the optimization tricks that
compilers have learned over decades. For example, the code from clang 12
is by now somewhat faster than that of gcc 11. I think in five
years absolutely no asm writer will be able to beat a compiler.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bo Persson@21:1/5 to Bonita Montero on Mon Oct 11 13:58:31 2021
    On 2021-10-11 at 11:18, Bonita Montero wrote:
On 11.10.2021 at 11:11, Branimir Maksimovic wrote:

    ASM code is most efficient always and works as tested without
    surprises :P

    ASM code can be faster in rare cases when you know everyting about
    your OoO-CPU, but in most cases the compiler generates better code.
    I've seen code from clang where you might think: there would be no ASM-programmer that knows all of these optimization-tricks.

    In this case the asm code is supplied by Intel.

    I bet they qualify for "you know everything about your OoO-CPU". :-)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Branimir Maksimovic@21:1/5 to David Brown on Mon Oct 11 12:56:41 2021
    On 2021-10-11, David Brown <david.brown@hesbynett.no> wrote:
    On 11/10/2021 13:58, Bo Persson wrote:
    On 2021-10-11 at 11:18, Bonita Montero wrote:
On 11.10.2021 at 11:11, Branimir Maksimovic wrote:

    ASM code is most efficient always and works as tested without
    surprises :P

    ASM code can be faster in rare cases when you know everyting about
    your OoO-CPU, but in most cases the compiler generates better code.
    I've seen code from clang where you might think: there would be no
    ASM-programmer that knows all of these optimization-tricks.

    In this case the asm code is supplied by Intel.

    I bet they qualify for "you know everything about your OoO-CPU". :-)


    Well, kind of. Intel as a whole probably knows most of what there is to
    know about Intel processors. But Intel does not write code - people
    working at (or for) Intel write code, and there is absolutely no
    guarantee that the person or people who wrote the code know all about
    all of Intel's processors - never mind non-Intel x86 processors, or
    non-x86 processors, or any other device. At best, you can probably be
    quite confident that the code is close to optimal if you run it on the
    same processor the assembly author used.

    How well it will run on the dozen other current Intel processor
    variations is another matter (by "dozen", I am ignoring devices that
    differ only in clock speed or core count, and ignoring older devices).
    How well it will run on AMD processors is also another matter.

    You write these routines in C (or C++), and you tune the optimisation.
    You compile with "-fmarch=native", or whatever flag your compiler has to
    get the fastest code for your particular processor. You use compiler features for multi-versioning for target-specific optimisations, so that
    the compiler generates versions for different SIMD and other instruction
    set extensions, and picks the best version for the real cpu when the
    code starts up. You use inline assembly or intrinsics for specific
    target versions if you are /sure/ your assembly works faster, and have measured it.

    General use of assembly language is something that comes /way/ down on
    the list when you are trying to get fast implementation of code.
A compiler at best can produce generic optimised code that does not stand
a chance against a dedicated human :P
The only reason we do not program in assembler is that the learning curve
is steep :P

    --

    7-77-777
    Evil Sinner!
    with software, you repeat same experiment, expecting different results...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Bo Persson on Mon Oct 11 14:23:25 2021
    On 11/10/2021 13:58, Bo Persson wrote:
    On 2021-10-11 at 11:18, Bonita Montero wrote:
On 11.10.2021 at 11:11, Branimir Maksimovic wrote:

    ASM code is most efficient always and works as tested without
    surprises :P

    ASM code can be faster in rare cases when you know everyting about
    your OoO-CPU, but in most cases the compiler generates better code.
    I've seen code from clang where you might think: there would be no
    ASM-programmer that knows all of these optimization-tricks.

    In this case the asm code is supplied by Intel.

    I bet they qualify for "you know everything about your OoO-CPU". :-)


    Well, kind of. Intel as a whole probably knows most of what there is to
    know about Intel processors. But Intel does not write code - people
    working at (or for) Intel write code, and there is absolutely no
    guarantee that the person or people who wrote the code know all about
    all of Intel's processors - never mind non-Intel x86 processors, or
    non-x86 processors, or any other device. At best, you can probably be
    quite confident that the code is close to optimal if you run it on the
    same processor the assembly author used.

    How well it will run on the dozen other current Intel processor
    variations is another matter (by "dozen", I am ignoring devices that
    differ only in clock speed or core count, and ignoring older devices).
    How well it will run on AMD processors is also another matter.

    You write these routines in C (or C++), and you tune the optimisation.
You compile with "-march=native", or whatever flag your compiler has to
    get the fastest code for your particular processor. You use compiler
    features for multi-versioning for target-specific optimisations, so that
    the compiler generates versions for different SIMD and other instruction
    set extensions, and picks the best version for the real cpu when the
    code starts up. You use inline assembly or intrinsics for specific
    target versions if you are /sure/ your assembly works faster, and have
    measured it.

    General use of assembly language is something that comes /way/ down on
    the list when you are trying to get fast implementation of code.
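A small sketch of the multi-versioning feature mentioned above, as supported by GCC (and recent clang) on x86: the compiler emits one clone per listed target plus a resolver that picks the best clone when the program starts. The function itself is only an illustrative placeholder:

#include <cstdint>
#include <cstddef>

__attribute__(( target_clones( "avx2", "sse4.2", "default" ) ))
uint64_t sum_bytes( uint8_t const *p, size_t n )
{
    uint64_t s = 0;                 // the compiler can vectorize each clone
    for( size_t i = 0; i != n; ++i )
        s += p[i];
    return s;
}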

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bonita Montero@21:1/5 to All on Mon Oct 11 15:55:04 2021
On 11.10.2021 at 14:56, Branimir Maksimovic wrote:

    Compiler at best can produce generic optimistic code that does not stand
    a chance against dedicated human :P

It was easy to beat a compiler 10 years ago, but not anymore today.
And in five years it will be almost impossible.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Branimir Maksimovic on Mon Oct 11 17:19:37 2021
    On 11/10/2021 14:56, Branimir Maksimovic wrote:
    On 2021-10-11, David Brown <david.brown@hesbynett.no> wrote:
    On 11/10/2021 13:58, Bo Persson wrote:
    On 2021-10-11 at 11:18, Bonita Montero wrote:
On 11.10.2021 at 11:11, Branimir Maksimovic wrote:

    ASM code is most efficient always and works as tested without
    surprises :P

    ASM code can be faster in rare cases when you know everyting about
    your OoO-CPU, but in most cases the compiler generates better code.
    I've seen code from clang where you might think: there would be no
    ASM-programmer that knows all of these optimization-tricks.

    In this case the asm code is supplied by Intel.

    I bet they qualify for "you know everything about your OoO-CPU". :-)


    Well, kind of. Intel as a whole probably knows most of what there is to
    know about Intel processors. But Intel does not write code - people
    working at (or for) Intel write code, and there is absolutely no
    guarantee that the person or people who wrote the code know all about
    all of Intel's processors - never mind non-Intel x86 processors, or
    non-x86 processors, or any other device. At best, you can probably be
    quite confident that the code is close to optimal if you run it on the
    same processor the assembly author used.

    How well it will run on the dozen other current Intel processor
    variations is another matter (by "dozen", I am ignoring devices that
    differ only in clock speed or core count, and ignoring older devices).
    How well it will run on AMD processors is also another matter.

    You write these routines in C (or C++), and you tune the optimisation.
    You compile with "-fmarch=native", or whatever flag your compiler has to
    get the fastest code for your particular processor. You use compiler
    features for multi-versioning for target-specific optimisations, so that
    the compiler generates versions for different SIMD and other instruction
    set extensions, and picks the best version for the real cpu when the
    code starts up. You use inline assembly or intrinsics for specific
    target versions if you are /sure/ your assembly works faster, and have
    measured it.

    General use of assembly language is something that comes /way/ down on
    the list when you are trying to get fast implementation of code.
    Compiler at best can produce generic optimistic code that does not stand
    a chance against dedicated human :P
    ONly reason we do not program in assembler is because lerning curve is
    steep :P


    I'm going to assume the ":P" smiley means you are being sarcastic.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Branimir Maksimovic@21:1/5 to Bonita Montero on Mon Oct 11 16:55:59 2021
    On 2021-10-11, Bonita Montero <Bonita.Montero@gmail.com> wrote:
On 11.10.2021 at 14:56, Branimir Maksimovic wrote:

    Compiler at best can produce generic optimistic code that does not stand
    a chance against dedicated human :P

    It was easy to beat a compiler before 10 years, but today not anymore.
    And in five years it will be almost impossible.
    Look, humans write compilers, start at that :P


    --

    7-77-777
    Evil Sinner!
    with software, you repeat same experiment, expecting different results...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Branimir Maksimovic@21:1/5 to David Brown on Mon Oct 11 16:57:54 2021
    On 2021-10-11, David Brown <david.brown@hesbynett.no> wrote:
    General use of assembly language is something that comes /way/ down on
    the list when you are trying to get fast implementation of code.
    Compiler at best can produce generic optimistic code that does not stand
    a chance against dedicated human :P
    ONly reason we do not program in assembler is because lerning curve is
    steep :P


    I'm going to assume the ":P" smiley means you are being sarcastic.

Well, the problem with assembler is that the brain overheats :P
When I write something bigger in it, my brain overheats trying
to understand what I wrote before :P


    --

    7-77-777
    Evil Sinner!
    with software, you repeat same experiment, expecting different results...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From RadicalRabbit@theburrow.co.uk@21:1/5 to Branimir Maksimovic on Tue Oct 12 09:06:03 2021
    On Mon, 11 Oct 2021 16:55:59 GMT
    Branimir Maksimovic <branimir.maksimovic@icloud.com> wrote:
    On 2021-10-11, Bonita Montero <Bonita.Montero@gmail.com> wrote:
On 11.10.2021 at 14:56, Branimir Maksimovic wrote:

Compiler at best can produce generic optimistic code that does not stand
a chance against dedicated human :P

    It was easy to beat a compiler before 10 years, but today not anymore.
    And in five years it will be almost impossible.
    Look, humans write compilers, start at that :P

    Humans wrote AlphaZero. Good luck beating it at chess however.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Branimir Maksimovic@21:1/5 to RadicalRabbit@theburrow.co.uk on Tue Oct 12 11:01:26 2021
    On 2021-10-12, RadicalRabbit@theburrow.co.uk <RadicalRabbit@theburrow.co.uk> wrote:
    On Mon, 11 Oct 2021 16:55:59 GMT
    Branimir Maksimovic <branimir.maksimovic@icloud.com> wrote:
    On 2021-10-11, Bonita Montero <Bonita.Montero@gmail.com> wrote:
On 11.10.2021 at 14:56, Branimir Maksimovic wrote:

Compiler at best can produce generic optimistic code that does not stand
a chance against dedicated human :P

    It was easy to beat a compiler before 10 years, but today not anymore.
    And in five years it will be almost impossible.
    Look, humans write compilers, start at that :P

    Humans wrote AlphaZero. Good luck beating it at chess however.

AlphaZero is nothing special, just a better position-quality
(evaluation) function :P
A compiler is a much larger byte :P

    --

    7-77-777
    Evil Sinner!
    with software, you repeat same experiment, expecting different results...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From RadicalRabbit@theburrow.co.uk@21:1/5 to Branimir Maksimovic on Tue Oct 12 14:49:40 2021
    On Tue, 12 Oct 2021 11:01:26 GMT
    Branimir Maksimovic <branimir.maksimovic@icloud.com> wrote:
    On 2021-10-12, RadicalRabbit@theburrow.co.uk <RadicalRabbit@theburrow.co.uk> >wrote:
    On Mon, 11 Oct 2021 16:55:59 GMT
    Branimir Maksimovic <branimir.maksimovic@icloud.com> wrote:
    On 2021-10-11, Bonita Montero <Bonita.Montero@gmail.com> wrote:
On 11.10.2021 at 14:56, Branimir Maksimovic wrote:

Compiler at best can produce generic optimistic code that does not stand
a chance against dedicated human :P

It was easy to beat a compiler before 10 years, but today not anymore.
And in five years it will be almost impossible.
    Look, humans write compilers, start at that :P

    Humans wrote AlphaZero. Good luck beating it at chess however.

    AlpaZero is nothing special, just better determination of position
    quality function :P

It's not what it does that matters, it's how it does it.

    Compiler is mutch larger byte :P

    Not really. Both are complex problems.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Chris M. Thomasson@21:1/5 to Scott Lurndal on Tue Oct 12 13:47:43 2021
    On 10/10/2021 3:51 PM, Scott Lurndal wrote:
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:


    Btw, if you can come up with a really fast SHA2 impl, I would be
    interested because of my experimental HMAC cipher. I have a C version:

https://en.wikipedia.org/wiki/Intel_SHA_extensions
https://developer.arm.com/documentation/100076/0100/a64-instruction-set-reference/a64-cryptographic-algorithms/a64-cryptographic-instructions?lang=en


Nice! I failed to notice SHA-384; I like that hash for various reasons.
However, I did notice SHA-3, which means they should have it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)