• strlcpy and how CPUs can defy common sense

    From Ben Collver@21:1/5 to All on Fri Jul 26 15:36:17 2024
    strlcpy and how CPUs can defy common sense
    ==========================================
    24 Jul 2024

    Recently one of my older posts about strlcpy has sparked some
    discussion on various forums. Presumably the recently released POSIX
    edition had something to do with it. One particular counter-argument
    was raised by multiple posters - and it's an argument that I've heard
    before as well:

    * In the common case where the source string fits into the
    destination buffer, strlcpy would only traverse the string once
    whereas strlen + memcpy would traverse it twice always.

    Hidden in this argument is the assumption that traversing the string
    once is faster. Which - to be clear - is not at all an unreasonable
    assumption. But is it actually true? That's the focus of today's
    article.

    <https://nrk.neocities.org/articles/not-a-fan-of-strlcpy>

    CPU vs common sense
    ===================
    Computers do not have common sense. Computers are surprising.
    - Tony Hoare to Lomuto

    The following is from openbsd, where strlcpy originated - modified a
    bit for brevity.

    size_t strlcpy(char *dst, const char *src, size_t dsize)
    {
        const char *osrc = src;
        size_t nleft = dsize;

        /* Copy as many bytes as will fit. */
        if (nleft != 0) while (--nleft != 0) {
            if ((*dst++ = *src++) == '\0')
                break;
        }

        /* Not enough room in dst, add NUL and traverse rest of src. */
        if (nleft == 0) {
            if (dsize != 0)
                *dst = '\0'; /* NUL-terminate dst */
            while (*src++)
                ;
        }

        return(src - osrc - 1); /* count does not include NUL */
    }

    It starts by copying from src to dst as much as it can, and if it has
    to truncate due to insufficient dst size, then traverses the rest of
    src in order to get the strlen(src) value for returning. And so if
    the source string fits, it will be traversed only once.
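
    That returned length is what callers use to detect truncation, which
    is why the tail of src still has to be walked even when it doesn't
    fit. A typical usage pattern looks like this (the function name and
    buffer size are just illustrative):

    void example(const char *src)
    {
        char buf[16];
        size_t n = strlcpy(buf, src, sizeof buf);
        if (n >= sizeof buf) {
            /* src did not fit: buf holds a truncated, NUL-terminated copy */
        }
    }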

    Now if you try to take a look at the glibc implementation of
    strlcpy, immediately you'll notice that the first line is this...

    size_t src_length = strlen (src);

    ... followed by the rest of the code using memcpy to do the copying.
    This already shatters the illusion that strlcpy will traverse the
    string once: there's no requirement for that to happen, and as you
    can see, in practice one of the major libcs will always traverse
    the string twice, once in strlen and once in memcpy.
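
    The rest of the routine just clamps the copy length and hands the
    work to memcpy. A condensed sketch of that shape (not the actual
    glibc source) looks roughly like this:

    size_t strlcpy(char *dst, const char *src, size_t dsize)
    {
        size_t src_length = strlen(src);

        if (dsize > 0) {
            size_t to_copy = src_length < dsize ? src_length : dsize - 1;
            memcpy(dst, src, to_copy);
            dst[to_copy] = '\0';
        }
        return src_length;
    }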

    But before you open a bug report against glibc for being inefficient,
    here are some benchmark numbers from copying a 512 byte string
    repeatedly in a loop:

    512 byte
    openbsd: 242us
    glibc: 12us

    <https://gist.github.com/N-R-K/ebf096448c0a7f3fdd8b93d280747550>
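
    For reference, a minimal harness along these lines (a sketch, not
    the linked gist; the iteration count is arbitrary) might look like
    the following, compiled with something like "cc -O2 bench.c" - and
    with -fno-builtin added for the apples-to-apples comparison
    described later:

    #define _POSIX_C_SOURCE 199309L
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    /* prototype of whichever strlcpy variant is being measured */
    size_t strlcpy(char *dst, const char *src, size_t dsize);

    int main(void)
    {
        enum { LEN = 512, ITERS = 100000 };
        static char src[LEN + 1], dst[LEN + 1];
        memset(src, 'x', LEN); /* 512 byte string, NUL-terminated below */
        src[LEN] = '\0';

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < ITERS; ++i)
            strlcpy(dst, src, sizeof dst);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        long us = (t1.tv_sec - t0.tv_sec) * 1000000L
                + (t1.tv_nsec - t0.tv_nsec) / 1000L;
        printf("%ld us\n", us);
        return 0;
    }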

    Perhaps the string is so small that the double traversal doesn't
    matter? How about a string of 1MiB?

    1MiB
    openbsd: 501646us
    glibc: 31793us

    The situation only gets worse for the openbsd version here, not
    better. To be fair, this huge speed-up comes from the fact that
    glibc punts all the work over to strlen and memcpy, which on glibc
    are SIMD optimized. But regardless, we can already see that doing
    something fast twice is faster than doing it once, slowly.

    Apples to apples
    ================
    In order to do an apples to apples comparison I've written the
    following strlcpy implementation, which is pretty close to the
    glibc implementation except with the strlen and memcpy calls
    written out in for loops.

    size_t bespoke_strlcpy(char *dst, const char *src, size_t size)
    {
        size_t len = 0;
        for (; src[len] != '\0'; ++len) {} // strlen() loop

        if (size > 0) {
            size_t to_copy = len < size ? len : size - 1;
            for (size_t i = 0; i < to_copy; ++i) // memcpy() loop
                dst[i] = src[i];
            dst[to_copy] = '\0';
        }
        return len;
    }

    It's important to note that in order to do a truly apples to apples
    comparison, you'd also need to use -fno-builtin when compiling.
    Otherwise gcc will realize that the "strlen loop" can be "optimized"
    down to a strlen call and emit that. -fno-builtin keeps that from
    happening and keeps the comparison fair.

    So how does this version, which traverses src twice, perform against
    the openbsd variant, which traverses src only once?

    512 byte
    openbsd: 237us
    bespoke: 139us

    It's almost twice as fast. How about on bigger strings?

    1MiB
    openbsd: 488469us
    bespoke: 277183us

    Still roughly twice as fast. How come?

    Dependencies
    ============
    The importance of cache misses (rightfully) gets plenty of
    spotlight; dependencies, on the other hand, are not talked about as
    much. Your cpu has multiple cores, and each core has multiple ports
    (or logic units) capable of executing instructions. Which means
    that if you have some instructions like this (in pseudo assembly,
    where an upper case letter denotes a register):

    A <- add B, C
    X <- add Y, Z
    E <- add A, X

    The computations of A and X are independent, and thus can be
    executed in parallel. But the computation of E requires the results
    of A and X and thus cannot be parallelized. This ability to execute
    independent instructions simultaneously is called instruction-level
    parallelism (or ILP). And dependencies are its kryptonite.

    If you try to profile the "bespoke" strlcpy version, you'll notice
    that nearly 100% of the cpu time is spent on the "strlen loop" while
    the copy loop is basically free. Indeed, if you replace the "strlen
    loop" with an actual strlen call (reminder: it's SIMD optimized on
    glibc) then the bespoke version starts competing with the glibc
    version quite well even though we aren't using an optimized memcpy
    (a sketch of that variant appears at the end of this section). In
    order to understand why this is happening, let's look at the
    "strlen loop", written in a verbose manner below:

    len = 0;
    while (true) {
        if (src[len] == '\0')
            break; // <- this affects the next iteration
        else
            ++len;
    }

    In the above loop, whether or not the next iteration of the loop
    will execute depends on the result of the previous iteration
    (whether src[len] was nul or not). We pay for this in our strlen
    loop. But our memcpy loop is free of such loop-carried dependencies:
    the current iteration happens regardless of what happened on the
    last iteration.

    for (size_t i = 0; i < to_copy; ++i) // memcpy() loop
        dst[i] = src[i]; // <- does NOT depend on previous iteration

    In the openbsd version, because the length and copy loops are fused
    together, whether or not the next byte will be copied depends on
    the byte value seen in the previous iteration.

    while (--nleft != 0) { // openbsd copy loop
        // <- the branch taken here affects the next iteration
        if ((*dst++ = *src++) == '\0')
            break;
    }

    Effectively the cost of this dependency is now imposed not just on
    the length computation but also on the copy operation. And to add
    insult to injury, dependencies are not just difficult for the CPU;
    they are also difficult for the compiler to optimize or
    auto-vectorize, resulting in worse code generation - a compounding
    effect.
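
    For concreteness, the variant mentioned above - the hand-written
    copy loop kept, but the length computed by the library strlen - is
    a one-line change to the bespoke version (the name hybrid_strlcpy
    and the exact form are mine, not from the article):

    #include <string.h> /* for strlen */

    size_t hybrid_strlcpy(char *dst, const char *src, size_t size)
    {
        size_t len = strlen(src); // library strlen: SIMD optimized on glibc

        if (size > 0) {
            size_t to_copy = len < size ? len : size - 1;
            for (size_t i = 0; i < to_copy; ++i) // plain copy loop
                dst[i] = src[i];
            dst[to_copy] = '\0';
        }
        return len;
    }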

    Addendum: don't throw the length away
    =====================================
    The key to making programs fast is to make them do practically nothing.
    - Mike Haertel, why GNU grep is fast

    <https://lists.freebsd.org/pipermail/freebsd-current/2010-August/019310.html>

    Two years ago, when I wrote the strlcpy article, I was still of the
    opinion that nul-terminated strings were "fine" and the problem was
    due to the standard library being poor. But even with better
    nul-string routines, I noticed that a disproportionate amount of
    mental effort was spent, and bugs written, trying to program with
    them. Two very important observations since then:

    * The length of a string is invaluable information.

    <https://www.symas.com/post/the-sad-state-of-c-strings>

    Without knowing the length, strings become closer to a linked
    list - forcing a serial access pattern - than to an array that
    can be randomly accessed. Many common string functions are better
    expressed (read: less error-prone) when the length can be cheaply
    known. Nul-terminated strings, on the other hand, encourage you to
    continuously keep throwing this very valuable information away -
    leading to having to spuriously recompute it again and again and
    again (the GTA loading screen incident always comes to mind).

    * The ability to have zero-copy substrings is huge.

    <https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-times-by-70/>

    They get rid of a lot of spurious copies (i.e. more efficiency) as
    well as allocations (i.e. less unnecessary memory management). And
    as a result, a great deal of the logic and code that were necessary
    when managing nul-terminated strings simply disappears.

    With these two in mind, nowadays I just use sized strings (something
    akin to C++'s std::string_view) and only convert to a nul-string
    when an external API demands it. This topic is worth an article of
    its own, but since that is not the focus of this article, I'll
    leave it there.
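
    As a rough illustration (the names and layout here are mine, not
    from the article), such a sized-string type can be as small as a
    pointer plus a length, and substrings then need no copy and no
    allocation:

    #include <stddef.h>
    #include <string.h>

    typedef struct {
        const char *data; /* not necessarily NUL-terminated */
        size_t      len;
    } Str;

    /* Wrap a C string once; after this the length is never recomputed. */
    static Str str_from_cstr(const char *s) { return (Str){ s, strlen(s) }; }

    /* Zero-copy substring: just a new view into the same bytes. */
    static Str str_slice(Str s, size_t start, size_t end)
    {
        if (end > s.len) end = s.len;
        if (start > end) start = end;
        return (Str){ s.data + start, end - start };
    }

    Only at an external API boundary do the bytes need to be copied out
    into a NUL-terminated buffer.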

    But the good news is that, aside from a section of the C crowd
    where the "default fallacy" (if something is the default, it must
    be the right thing to use) runs high, the majority of the world has
    more or less realized that nul-strings were a mistake. This is
    evident when you look at most other programming languages,
    including a lot of the newer systems programming ones, where
    nul-strings are not used by default (if at all). Even languages
    with a C heritage are moving away from nul-strings - recall C++'s
    string_view.

    Conclusion
    ==========
    When talking about performance, it's important to make it clear
    whether we are talking about it in an academic setting or in a
    practical setting, because CPUs do not care about common sense or
    big-O notation. A modern CPU is incredibly complex and full of
    surprises. And so the performance of an algorithm doesn't just
    depend on high level algorithmic factors - lower level factors such
    as cache misses, ILP, and branch mispredictions also need to be
    taken into account. Many things which seem faster from a common
    sense perspective might in practice end up being slower, and vice
    versa.

    From: <https://nrk.neocities.org/articles/cpu-vs-common-sense>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Ram@21:1/5 to Stefan Ram on Fri Jul 26 16:34:25 2024
    ram@zedat.fu-berlin.de (Stefan Ram) wrote or quoted:
    This video might be a hit with some folks: "The strange details
    of std::string at Facebook" - Nicholas Ormrod, CppCon 2016.

    Also, "Efficiency with Algorithms, Performance with Data Structures"
    - Chandler Carruth, CppCon 2014, from which I take:

    CPUS HAVE A HIERARCHICAL CACHE SYSTEM

    One cycle on a 3 GHz processor               1 ns
    L1 cache reference                         0.5 ns
    Branch mispredict                            5 ns
    L2 cache reference                           7 ns              14x L1 cache
    Mutex lock/unlock                           25 ns
    Main memory reference                      100 ns              20x L2, 200x L1
    Compress 1K bytes with Snappy            3,000 ns
    Send 1K bytes over 1 Gbps network       10,000 ns    0.01 ms
    Read 4K randomly from SSD              150,000 ns    0.15 ms
    Read 1 MB sequentially from memory     250,000 ns    0.25 ms
    Round trip within same datacenter      500,000 ns    0.5 ms
    Read 1 MB sequentially from SSD      1,000,000 ns    1 ms       4x memory
    Disk seek                           10,000,000 ns   10 ms       20x datacenter RT
    Read 1 MB sequentially from disk    20,000,000 ns   20 ms       80x memory, 20x SSD
    Send packet CA->Netherlands->CA    150,000,000 ns  150 ms


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Ram@21:1/5 to Ben Collver on Fri Jul 26 16:19:01 2024
    Ben Collver <bencollver@tilde.pink> wrote or quoted:
    Hidden in this argument is the assumption that traversing the string
    once is faster. Which - to be clear - is not at all an unreasonable
    assumption. But is it actually true? That's the focus of today's
    article.

    If the string is long, it might not fit into some level caches,
    meaning it would need to be fetched from main memory twice,
    which is a drag. But if the string is short, that's not the
    case, and looping through it twice doesn't have to be slower.

    A modern CPU is incredibly complex and full of surprises. And so
    the performance of an algorithm doesn't just depend on high level
    algorithmic factors - lower level factors such as cache misses,
    ILP, and branch mispredictions also need to be taken into account.
    Many things which seem faster from a common sense perspective might
    in practice end up being slower, and vice versa.

    Especially when it comes to cache misses. And for loops,
    there's also the regularity factor.

    This video might be a hit with some folks: "The strange details
    of std::string at Facebook" - Nicholas Ormrod, CppCon 2016.

    And who said this?

    |Rob Pike's 5 Rules of Programming
    |
    |Rule 1. You can't tell where a program is going to spend
    |its time. Bottlenecks occur in surprising places, so don't
    |try to second guess and put in a speed hack until you've
    |proven that's where the bottleneck is.
    |
    |Rule 2. Measure. Don't tune for speed until you've measured, and
    |even then don't unless one part of the code overwhelms the rest.
    |
    |Rule 3. Fancy algorithms are slow when n is small, and n is
    |usually small. Fancy algorithms have big constants. Until you
    |know that n is frequently going to be big, don't get fancy.
    |(Even if n does get big, use Rule 2 first.)
    |
    |Rule 4. Fancy algorithms are buggier than simple ones, and
    |they're much harder to implement. Use simple algorithms as
    |well as simple data structures.
    |
    |Rule 5. Data dominates. If you've chosen the right data
    |structures and organized things well, the algorithms will
    |almost always be self-evident. Data structures, not algorithms,
    |are central to programming.

    If you said, "Rob Pike", you were right!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Ben Collver on Sat Jul 27 00:31:05 2024
    On Fri, 26 Jul 2024 15:36:17 -0000 (UTC), Ben Collver wrote:

    The situation only gets worse for the openbsd version here, not better.

    Not the only time the GNU folks have done something smarter than the BSD
    folks.

    <http://trillian.mit.edu/~jc/humor/ATT_Copyright_true.html>
    <http://www.theregister.co.uk/2016/02/10/line_break_ep2/>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Stefan Ram on Sat Jul 27 02:09:05 2024
    On 26 Jul 2024 16:34:25 GMT, Stefan Ram wrote:

    One cycle on a 3 GHz processor 1 ns

    Shouldn’t that be ⅓ns?

    Send 1K bytes over 1 Gbps network 10,000 ns 0.01 ms

    Perhaps more easily written as 10µs.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bruce Horrocks@21:1/5 to Ben Collver on Sat Jul 27 11:32:25 2024
    On 26/07/2024 16:36, Ben Collver wrote:
    strlcpy and how CPUs can defy common sense

    Thank-you Ben for re-posting that. Very interesting.

    --
    Bruce Horrocks
    Surrey, England

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Johanne Fairchild@21:1/5 to Bruce Horrocks on Sat Jul 27 09:30:30 2024
    Bruce Horrocks <07.013@scorecrow.com> writes:

    On 26/07/2024 16:36, Ben Collver wrote:
    strlcpy and how CPUs can defy common sense

    Thank-you Ben for re-posting that. Very interesting.

    Ditto. I loved it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John McCue@21:1/5 to Lawrence D'Oliveiro on Sat Jul 27 19:58:06 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Fri, 26 Jul 2024 15:36:17 -0000 (UTC), Ben Collver wrote:

    The situation only gets worse for the openbsd version here, not better.

    Not the only time the GNU folks have done something smarter than the BSD folks.

    I do not understand this statement in regards to true(1).

    <http://trillian.mit.edu/~jc/humor/ATT_Copyright_true.html>

    This is interesting

    <http://www.theregister.co.uk/2016/02/10/line_break_ep2/>

    How is GNU's version of true better than OpenBSD's?
    See page 2 in the article.

    --
    [t]csh(1) - "An elegant shell, for a more... civilized age."
    - Paraphrasing Star Wars

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Joerg Mertens@21:1/5 to John McCue on Sat Jul 27 22:53:11 2024
    John McCue <jmccue@magnetar.jmcunx.com> wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Fri, 26 Jul 2024 15:36:17 -0000 (UTC), Ben Collver wrote:

    The situation only gets worse for the openbsd version here, not better.

    Not the only time the GNU folks have done something smarter than the BSD
    folks.

    I do not understand this statement in regards to true(1).

    <http://trillian.mit.edu/~jc/humor/ATT_Copyright_true.html>

    This is interesting

    <http://www.theregister.co.uk/2016/02/10/line_break_ep2/>

    How is GNU's version of true better than OpenBSD's?

    It's definitely better if speed is your only quality criterion.

    See page 2 in the article.

    A similar case is yes(1):

    https://github.com/coreutils/coreutils/blob/master/src/yes.c

    versus

    https://github.com/openbsd/src/blob/master/usr.bin/yes/yes.c

    It was discussed in this Hacker News article:

    https://news.ycombinator.com/item?id=14542938

    Regards

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to John McCue on Sat Jul 27 22:40:38 2024
    On Sat, 27 Jul 2024 19:58:06 -0000 (UTC), John McCue wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Fri, 26 Jul 2024 15:36:17 -0000 (UTC), Ben Collver wrote:

    The situation only gets worse for the openbsd version here, not
    better.

    Not the only time the GNU folks have done something smarter than the
    BSD folks.

    I do not understand this statement in regards to true(1).

    <http://trillian.mit.edu/~jc/humor/ATT_Copyright_true.html>

    This is interesting

    <http://www.theregister.co.uk/2016/02/10/line_break_ep2/>

    How is GNU's version of true better than OpenBSD's?
    See page 2 in the article.

    You have to put the two together to realize how hilariously wrong
    the “Register” article is. The OpenBSD version of “true” may seem
    concise and elegant, until you notice that it requires the loading
    of an entirely new shell instance to run each time.

    Whereas the GNU version, with its much longer source code entirely
    in C, loads faster and runs in less memory. Which was one of the
    points made in the first article.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Joerg Mertens@21:1/5 to Lawrence D'Oliveiro on Sun Jul 28 11:42:29 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Sat, 27 Jul 2024 19:58:06 -0000 (UTC), John McCue wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Fri, 26 Jul 2024 15:36:17 -0000 (UTC), Ben Collver wrote:

    The situation only gets worse for the openbsd version here, not
    better.

    Not the only time the GNU folks have done something smarter than the
    BSD folks.

    I do not understand this statement in regards to true(1).

    <http://trillian.mit.edu/~jc/humor/ATT_Copyright_true.html>

    This is interesting

    <http://www.theregister.co.uk/2016/02/10/line_break_ep2/>

    How is GNU's version of true better than OpenBSD's?
    See page 2 in the article.

    You have to put the two together to realize how hilariously wrong
    the “Register” article is. The OpenBSD version of “true” may seem
    concise and elegant, until you notice that it requires the loading
    of an entirely new shell instance to run each time.

    Whereas the GNU version, with its much longer source code entirely
    in C, loads faster and runs in less memory. Which was one of the
    points made in the first article.

    At least Theo de Raadt agrees with you in this commit message from
    about eight years ago¹:

    -----
    Switch back to C versions of true/false. I do not accept any of the
    arguments made 20 years ago. A small elf binary is smaller and faster
    than a large elf binary running a script. Noone cares about the file
    sizes on disk.
    -----

    The interesting word is "back", which means they had already had a
    C version in earlier days and then at some point had switched to a
    script. Someone would have to go through CVS history to find the
    reason why.

    1) https://cvsweb.openbsd.org/src/usr.bin/true/true.c

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)