• Re: "White House to Developers: Using C or C++ Invites Cybersecurity Risks"

    From Lawrence D'Oliveiro@21:1/5 to Lynn McGuire on Sun Mar 3 00:05:28 2024
    XPost: comp.lang.c++

    On Sat, 2 Mar 2024 17:13:56 -0600, Lynn McGuire wrote:

    The feddies want to regulate software development very much.

    Given the high occurrence of embarrassing mistakes companies have been
    making with their code, and continue to make, it’s quite clear they’re not capable of regulating this issue themselves.

    I wouldn’t worry about companies tripping over and hurting themselves, but
    when the consequences are security leaks, not of information belonging to
    those companies, but of information belonging to their innocent
    customers/users, who are often unaware that those companies even had that
    information, then it’s quite clear that Government has to step in.

    Because if they don’t, then who will?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John McCue@21:1/5 to Lynn McGuire on Sun Mar 3 02:10:03 2024
    XPost: comp.lang.c++

    trimmed followups to comp.lang.c

    In comp.lang.c Lynn McGuire <lynnmcguire5@gmail.com> wrote:
    <snip>

    "The Biden administration backs a switch to more memory-safe programming languages. The tech industry sees their point, but it won't be easy."

    No. The feddies want to regulate software development very much. They
    have been talking about it for at least 20 years now. This is a very
    bad thing.

    Well to be fair, the feds' regulations in the 60s made COBOL and
    FORTRAN very popular, plus POSIX later on. All they did was
    say "we will not buy anything unless ..." rules.

    From "The C Programming Language Quotes by Brian W. Kernighan".

    Nevertheless, C retains the basic philosophy that
    programmers know what they are doing; it only requires
    that they state their intentions explicitly.

    If programmers were given time to test and develop, many
    issues would not exist. Anyone who has ever worked for a
    large company knows the pressure that exists to get things
    done quickly instead of right. So all these issues I blame
    on management.

    How many times have we heard "ship it now, you can fix later"
    and "later" never comes. :)

    Rust will never fix policy issues; just different, and maybe worse,
    issues will happen.

    Lynn

    --
    [t]csh(1) - "An elegant shell, for a more... civilized age."
    - Paraphrasing Star Wars

  • From Kaz Kylheku@21:1/5 to John McCue on Sun Mar 3 02:23:48 2024
    On 2024-03-03, John McCue <jmccue@neutron.jmcunx.com> wrote:
    Nevertheless, C retains the basic philosophy that
    programmers know what they are doing; it only requires
    that they state their intentions explicitly.

    fflush(stdin); // my explicit intention is to discard unread input

    A lot of what programmers intend is nonportable or undefined,
    without their knowledge.

    Pretty much all imperative languages require that programmers
    state their intentions explicitly: PL/I, Algol, Modula, ...

    You can't, for instance, just declare some facts and write a query
    against them.

    All languages have implicit behaviors. For instance in C you can
    write x + y, without having to express a detailed intention about
    what happens with every bit.

    It's a tautology that you have to be explicit about declaring your
    intent using the documented knobs and levers that are available,
    using the semantics of the paradigm they control, while not being
    able to declare intent about the inner mechanism that underlies them.
    Using any language whatsoever.

  • From Lawrence D'Oliveiro@21:1/5 to John McCue on Sun Mar 3 03:30:17 2024
    XPost: comp.lang.c++

    On Sun, 3 Mar 2024 02:10:03 -0000 (UTC), John McCue wrote:

    Well to be fair, the feds regulations in the 60s made COBOL and FORTRAN
    very popular, plus POSIX later on.

    The US Government purchasing rules on POSIX were sufficiently sketchy that Microsoft was able to satisfy them easily with Windows NT, while supplying
    a “POSIX” subsystem that was essentially unusable.

    And then Microsoft went on to render POSIX largely irrelevant by eating
    all the proprietary “Unix” vendors alive.

    Nowadays, POSIX (and *nix generally) is undergoing a resurgence because of Linux and Open Source. Developers are discovering that the Linux ecosystem offers a much more productive development environment for a code-sharing, code-reusing, Web-centric world than anything Microsoft can offer.

  • From Blue-Maned_Hawk@21:1/5 to Lawrence D'Oliveiro on Sun Mar 3 08:54:36 2024
    XPost: comp.lang.c++

    Lawrence D'Oliveiro wrote:

    Nowadays, POSIX (and *nix generally) is undergoing a resurgence because
    of Linux and Open Source. Developers are discovering that the Linux
    ecosystem offers a much more productive development environment for a code-sharing, code-reusing, Web-centric world than anything Microsoft
    can offer.

    I do not want to live in a web-centric world. I would much rather see
    other, better uses of the internet become widespread.



    --
Blue-Maned_Hawk│shortens to Hawk│/blu.mɛin.dʰak/│he/him/his/himself/Mr. blue-maned_hawk.srht.site
    Special thanks to misinformed hipsters!

  • From Blue-Maned_Hawk@21:1/5 to All on Sun Mar 3 08:52:03 2024
    XPost: comp.lang.c++

    Any attempt to displace C will require total replacement of the modern computing ecosystem. Frankly, i'd be fine with that if pulled off well,
    but i wouldn't be fine with a half-baked solution nor trying to force out
    C without thinking about the whole rest of everything.



    --
Blue-Maned_Hawk│shortens to Hawk│/blu.mɛin.dʰak/│he/him/his/himself/Mr. blue-maned_hawk.srht.site
    Mac and Cheese, Horrifying Quality, Prepared by Barack Obama

  • From Michael S@21:1/5 to Lynn McGuire on Sun Mar 3 11:10:22 2024
    XPost: comp.lang.c++

    On Sat, 2 Mar 2024 17:13:56 -0600
    Lynn McGuire <lynnmcguire5@gmail.com> wrote:

    They have been talking about it for at least 20 years now.

    More like 48-49 years. https://en.wikipedia.org/wiki/High_Order_Language_Working_Group

  • From David Brown@21:1/5 to Lynn McGuire on Sun Mar 3 12:01:57 2024
    XPost: comp.lang.c++

    On 03/03/2024 00:13, Lynn McGuire wrote:
    "White House to Developers: Using C or C++ Invites Cybersecurity Risks"

    https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks

    "The Biden administration backs a switch to more memory-safe programming languages. The tech industry sees their point, but it won't be easy."

    No.  The feddies want to regulate software development very much.  They have been talking about it for at least 20 years now.  This is a very
    bad thing.

    Lynn

    It's the wrong solution to the wrong problem.

    It is not languages like C and C++ that are "unsafe". It is the
    programmers that write the code for them. As long as the people
    programming in Rust or other modern languages are the more capable and qualified developers - the ones who think about memory safety, correct
    code, testing, and quality software development - then code written in
    Rust will be better quality and safer than the average C, C++, Java and
    C# code.

    But if it gets popular enough for schools and colleges to teach Rust
    programming courses to the masses, and it gets used by developers who are
    paid per KLoC, given responsibilities well beyond their abilities and
    experience, led by incompetent managers, untrained in good development
    practices and pushed to impossible deadlines, then the average quality
    of programs in Rust will drop to that of average C and C++ code.

    Good languages and good tools help, but they are not the root cause of
    poor quality software in the world.

  • From Janis Papanagnou@21:1/5 to David Brown on Sun Mar 3 16:03:10 2024
    XPost: comp.lang.c++

    On 03.03.2024 12:01, David Brown wrote:
    On 03/03/2024 00:13, Lynn McGuire wrote:
    "White House to Developers: Using C or C++ Invites Cybersecurity Risks"

    https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks

    "The Biden administration backs a switch to more memory-safe
    programming languages. [...]"
    [...]

    It's the wrong solution to the wrong problem.

    It is not languages like C and C++ that are "unsafe". It is the
    programmers that write the code for them. [...]

    [...]

    Good languages and good tools help, but they are not the root cause of
    poor quality software in the world.

    I agree about the necessity of having good programmers. But a lot more
    factors are important, and there are factors that influence programmers.
    Languages may have a design that makes it possible to produce safer
    software, or be error-prone and require a lot more attention from
    the programmers (and also from management). Tools may help a bit to
    work around the problems that languages inherently add. Good project
    management may also help to increase software quality. But it's much
    more costly when using inferior (or unsuited) languages.

    Janis

  • From Scott Lurndal@21:1/5 to Lynn McGuire on Sun Mar 3 15:31:15 2024
    XPost: comp.lang.c++

    Lynn McGuire <lynnmcguire5@gmail.com> writes:
    "White House to Developers: Using C or C++ Invites Cybersecurity Risks"

    https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks

    "The Biden administration backs a switch to more memory-safe programming languages. The tech industry sees their point, but it won't be easy."

    No. The feddies want to regulate software development very much.

    You've been reading far too much apocalyptic fiction and seeing the
    world through trump-colored glasses. Neither reflects reality.

  • From Kaz Kylheku@21:1/5 to David Brown on Sun Mar 3 18:18:26 2024
    XPost: comp.lang.c++

    On 2024-03-03, David Brown <david.brown@hesbynett.no> wrote:
    On 03/03/2024 00:13, Lynn McGuire wrote:
    "White House to Developers: Using C or C++ Invites Cybersecurity Risks"

    https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks

    "The Biden administration backs a switch to more memory-safe programming
    languages. The tech industry sees their point, but it won't be easy."

    No.  The feddies want to regulate software development very much.  They
    have been talking about it for at least 20 years now.  This is a very
    bad thing.

    Lynn

    It's the wrong solution to the wrong problem.

    It is not languages like C and C++ that are "unsafe". It is the
    programmers that write the code for them. As long as the people
    programming in Rust or other modern languages are the more capable and qualified developers - the ones who think about memory safety, correct
    code, testing, and quality software development - then code written in
    Rust will be better quality and safer than the average C, C++, Java and
    C# code.

    Programmers who think about safety, correctness and quality and all that
    have way fewer diagnostics and more footguns if they are coding in C
    compared to Rust.

    I think, you can't just wave away the characteristics of Rust as making
    no difference in this regard.

    But if it gets popular enough for schools and colleges to teach Rust
    programming courses to the masses, and it gets used by developers who are
    paid per KLoC, given responsibilities well beyond their abilities and
    experience, led by incompetent managers, untrained in good development
    practices and pushed to impossible deadlines, then the average quality
    of programs in Rust will drop to that of average C and C++ code.

    The rhetoric you hear from Rust people about this is that coders taking
    a safety shortcut to make something work have to explicitly ask for that
    in Rust. It leaves a visible trace. If something goes wrong because of
    an unsafe block, you can trace that to the commit which added it.

    The rhetoric all sounds good.

    However, like you, I also believe it boils down to people, in a
    somewhat different way. To use Rust productively, you have to be one of
    the rare idiot savants who are smart enough to use it *and* numb to all
    the inconveniences.

    The reason the average programmer won't make any safety
    boo-boos using Rust is that the average programmer either isn't smart
    enough to use it at all, or else doesn't want to put up with the fuss:
    they will opt for some safe language which is easy to use.

    Rust's problem is that we have safe languages in which you can almost
    crank out working code with your eyes closed. (Or if not working,
    then at least code in which the only uncaught bugs are your logic bugs,
    not some undefined behavior from integer overflow or array out of
    bounds.)

    This is why Rust people are desperately pitching Rust as an alternative
    for C and whatnot, and showcasing it being used in the kernel and
    whatnot.

    Trying to be both safe and efficient to be able to serve as a "C
    replacement" is a clumsy hedge that makes Rust an awkward language.

    You know the parable about the fox that tries to chase two rabbits.

    The alternative to Rust in application development is pretty much any convenient, "easy" high level language, plus a little bit of C.
    You can get a small quantity of C right far more easily than a large
    quantity of C. It's almost immaterial.

    An important aspect of Rust is the ownership-based memory management.

    The problem is, the "garbage collection is bad" era is /long/ behind us.

    Scoped ownership is a half-baked solution to the object lifetime
    problem, that gets in the way of the programmer and isn't appropriate
    for the vast majority of software tasks.

    Embedded systems often need custom memory management, not something that
    the language imposes. C has malloc, yet even that gets disused in favor
    of something else.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

  • From Tim Rentsch@21:1/5 to Kaz Kylheku on Sun Mar 3 11:11:11 2024
    Kaz Kylheku <433-929-6894@kylheku.com> writes:

    On 2024-03-03, John McCue <jmccue@neutron.jmcunx.com> wrote:

    Nevertheless, C retains the basic philosophy that
    programmers know what they are doing; it only requires
    that they state their intentions explicitly.

    fflush(stdin); // my explicit intention is to discard unread input

    A lot of what programmers intend is nonportable or undefined,
    without their knowledge.

    Pretty much all imperative languages require that programmers
    state their intentions explicitly: PL/I, Algol, Modula, ...

    You can't, for instance, just declare some facts and write a query
    against them.

    All languages have implicit behaviors. For instance in C you can
    write x + y, without having to express a detailed intention about
    what happens with every bit.

    It's a tautology that you have to be explicit about declaring your
    intent using the documented knobs and levers that are available,
    using the semantics of the paradigm they control, while not being
    able to declare intent about the inner mechanism that underlies them.
    Using any language whatsoever.

    Are you really so clueless that you don't understand the point
    of the quoted paragraph? Or are you just being deliberately
    obtuse as part of a general contrarian affect?

  • From Lawrence D'Oliveiro@21:1/5 to All on Sun Mar 3 20:11:14 2024
    XPost: comp.lang.c++

    On Sun, 3 Mar 2024 08:54:36 -0000 (UTC), Blue-Maned_Hawk wrote:

    Lawrence D'Oliveiro wrote:

    Nowadays, POSIX (and *nix generally) is undergoing a resurgence because
    of Linux and Open Source. Developers are discovering that the Linux
    ecosystem offers a much more productive development environment for a
    code-sharing, code-reusing, Web-centric world than anything Microsoft
    can offer.

    I do not want to live in a web-centric world.

    You already do.

  • From Lawrence D'Oliveiro@21:1/5 to David Brown on Sun Mar 3 20:10:26 2024
    XPost: comp.lang.c++

    On Sun, 3 Mar 2024 12:01:57 +0100, David Brown wrote:

    It is not languages like C and C++ that are "unsafe".

    Some empirical evidence from Google <https://security.googleblog.com/2022/12/memory-safe-languages-in-android-13.html>
    shows a reduction in memory-safety errors in switching from C/C++ to Rust.

  • From David Brown@21:1/5 to Kaz Kylheku on Sun Mar 3 21:23:56 2024
    XPost: comp.lang.c++

    On 03/03/2024 19:18, Kaz Kylheku wrote:
    On 2024-03-03, David Brown <david.brown@hesbynett.no> wrote:
    On 03/03/2024 00:13, Lynn McGuire wrote:
    "White House to Developers: Using C or C++ Invites Cybersecurity Risks"

    https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks

    "The Biden administration backs a switch to more memory-safe programming languages. The tech industry sees their point, but it won't be easy."

    No.  The feddies want to regulate software development very much.  They have been talking about it for at least 20 years now.  This is a very
    bad thing.

    Lynn

    It's the wrong solution to the wrong problem.

    It is not languages like C and C++ that are "unsafe". It is the
    programmers that write the code for them. As long as the people
    programming in Rust or other modern languages are the more capable and
    qualified developers - the ones who think about memory safety, correct
    code, testing, and quality software development - then code written in
    Rust will be better quality and safer than the average C, C++, Java and
    C# code.

    Programmers who think about safety, correctness and quality and all that
    have way fewer diagnostics and more footguns if they are coding in C
    compared to Rust.

    I think, you can't just wave away the characteristics of Rust as making
    no difference in this regard.

    I did not.

    I said that the /root/ problem is not the language, but the programmers
    and the way they work.

    Of course some languages make some things harder and other things
    easier. And even the most careful programmers will occasionally make
    mistakes. So having a language that helps reduce the risk of some kinds
    of errors is a helpful thing.

    But consider this. When programming in modern C++, you can be risk-free
    from buffer overruns and most kinds of memory leak - use container
    classes, string classes, and the like, rather than C-style arrays and malloc/free or new/delete. You can use the C++ coding guideline
    libraries to mark ownership of pointers. You can use compiler
    sanitizers to catch many kinds of undefined behaviour.  You can use all
    sorts of static analysis tools, from free to very costly, to help find problems. And yet there are armies of programmers writing bad C++ code.
    PHP and Javascript have automatic memory management and garbage
    collection eliminating many of the possible problems seen in C and C++
    code, yet armies of programmers write PHP and Javascript code full of
    bugs and security faults.

    Better languages, better libraries, and better tools certainly help.
    There are not many tasks for which C is the best choice of language.
    But none of that will deal with the root of the problem. Good
    programmers, with good training, in good development departments with
    good managers and good resources, will write correct code more
    efficiently in a better language, but they can write correct code in
    pretty much /any/ language. Similarly, the bulk of programmers will
    write bad code in any language.


    But if it gets popular enough for schools and colleges to teach Rust
    programming courses to the masses, and it gets used by developers who are
    paid per KLoC, given responsibilities well beyond their abilities and
    experience, led by incompetent managers, untrained in good development
    practices and pushed to impossible deadlines, then the average quality
    of programs in Rust will drop to that of average C and C++ code.

    The rhetoric you hear from Rust people about this is that coders taking
    a safety shortcut to make something work have to explicitly ask for that
    in Rust. It leaves a visible trace. If something goes wrong because of
    an unsafe block, you can trace that to the commit which added it.

    The rhetoric all sounds good.

    You can't trace the commit for programmers who don't use version control software - and that is a /lot/ of them. Leaving visible traces does not
    help when no one else looks at the code. Shortcuts are taken because
    the sales people need the code by tomorrow morning, and there are only
    so many hours in the night to get it working.

    Rust makes it possible to have some safety checks for a few things that
    are much harder to do in C++. It does not stop people writing bad code
    using bad development practices.


    However, like you, I also believe it boils down to people, in a
    somewhat different way. To use Rust productively, you have to be one of
    the rare idiot savants who are smart enough to use it *and* numb to all
    the inconveniences.

    And you have to have managers who are smart enough to believe it when
    their programmers say they need to train in a new language, re-write
    lots of existing code, and accept longer development times as a tradeoff
    for fewer bugs in shipped code.

    (I personally have a very good manager, but I know a great many
    programmers do not.)


    The reason the average programmer won't make any safety
    boo-boos using Rust is that the average programmer either isn't smart
    enough to use it at all, or else doesn't want to put up with the fuss:
    they will opt for some safe language which is easy to use.

    Rust's problem is that we have safe languages in which you can almost
    crank out working code with your eyes closed. (Or if not working,
    then at least code in which the only uncaught bugs are your logic bugs,
    not some undefined behavior from integer overflow or array out of
    bounds.)

    This is why Rust people are desperately pitching Rust as an alternative
    for C and whatnot, and showcasing it being used in the kernel and
    whatnot.


    I personally think it is madness to have Rust in a project like the
    Linux kernel. I used to see C++ as a rapidly changing language with its
    3 year cycle - Rust seems to have a 3 week cycle for updates, with no
    formal standardisation and "work in progress" attitude. That's fine for
    a new language under development, but /not/ something you want for a
    project that spans decades.

    Trying to be both safe and efficient to be able to serve as a "C
    replacement" is a clumsy hedge that makes Rust an awkward language.

    You know the parable about the fox that tries to chase two rabbits.

    The alternative to Rust in application development is pretty much any convenient, "easy" high level language, plus a little bit of C.
    You can get a small quantity of C right far more easily than a large
    quantity of C. It's almost immaterial.


    There are lots of alternatives to Rust for application development. But
    in general, higher level languages mean you do less manual work, and
    write fewer lines of code for the same amount of functionality. And
    that means a lower risk of errors.

    An important aspect of Rust is the ownership-based memory management.

    The problem is, the "garbage collection is bad" era is /long/ behind us.

    Scoped ownership is a half-baked solution to the object lifetime
    problem, that gets in the way of the programmer and isn't appropriate
    for the vast majority of software tasks.

    Embedded systems often need custom memory management, not something that
    the language imposes. C has malloc, yet even that gets disused in favor
    of something else.


    For safe embedded systems, you don't want memory management at all.
    Avoiding dynamic memory is an important aspect of safety-critical
    embedded development.

  • From Blue-Maned_Hawk@21:1/5 to Lawrence D'Oliveiro on Sun Mar 3 22:11:14 2024
    XPost: comp.lang.c++

    Lawrence D'Oliveiro wrote:

    On Sun, 3 Mar 2024 08:54:36 -0000 (UTC), Blue-Maned_Hawk wrote:

    Lawrence D'Oliveiro wrote:

    Nowadays, POSIX (and *nix generally) is undergoing a resurgence
    because of Linux and Open Source. Developers are discovering that the
    Linux ecosystem offers a much more productive development environment
    for a code-sharing, code-reusing, Web-centric world than anything
    Microsoft can offer.

    I do not want to live in a web-centric world.

    You already do.

    That does not change the veracity of my statement.



    --
Blue-Maned_Hawk│shortens to Hawk│/blu.mɛin.dʰak/
    │he/him/his/himself/Mr. blue-maned_hawk.srht.site
    Every time!

  • From Blue-Maned_Hawk@21:1/5 to All on Sun Mar 3 22:14:31 2024
    XPost: comp.lang.c++

    Frankly, i think we should all be programming in macros over assembly
    anyway.



    --
Blue-Maned_Hawk│shortens to Hawk│/blu.mɛin.dʰak/
    │he/him/his/himself/Mr. blue-maned_hawk.srht.site
    You have a disease!

  • From Lawrence D'Oliveiro@21:1/5 to Chris M. Thomasson on Sun Mar 3 23:29:42 2024
    XPost: comp.lang.c++

    On Sun, 3 Mar 2024 14:06:31 -0800, Chris M. Thomasson wrote:

    On 3/3/2024 12:10 PM, Lawrence D'Oliveiro wrote:

    On Sun, 3 Mar 2024 12:01:57 +0100, David Brown wrote:

    It is not languages like C and C++ that are "unsafe".

    Some empirical evidence from Google
    <https://security.googleblog.com/2022/12/memory-safe-languages-in-android-13.html>
    shows a reduction in memory-safety errors in switching from C/C++ to
    Rust.

    Sure. Putting corks on the forks reduces the chance of eye injuries.

    Except this is Google, and they’re doing it in real-world production
    code, namely Android. And showing some positive benefits from doing
    so, without impairing the functionality of Android in any way.

    Not like “putting corks on the forks”, whatever that might be about
    ...

  • From Lawrence D'Oliveiro@21:1/5 to David Brown on Sun Mar 3 23:31:35 2024
    XPost: comp.lang.c++

    On Sun, 3 Mar 2024 21:23:56 +0100, David Brown wrote:

    But consider this. When programming in modern C++, you can be risk-free
    from buffer overruns and most kinds of memory leak - use container
    classes, string classes, and the like, rather than C-style arrays and malloc/free or new/delete.

    Or, going further, how about Google's "Carbon" project <https://github.com/carbon-language/carbon-lang>, which tries to keep
    the good bits from C++ while chucking out the bad?

  • From Lawrence D'Oliveiro@21:1/5 to All on Sun Mar 3 23:27:54 2024
    XPost: comp.lang.c++

    On Sun, 3 Mar 2024 22:11:14 -0000 (UTC), Blue-Maned_Hawk wrote:

    Lawrence D'Oliveiro wrote:

    On Sun, 3 Mar 2024 08:54:36 -0000 (UTC), Blue-Maned_Hawk wrote:

    I do not want to live in a web-centric world.

    You already do.

    That does not change the veracity of my statement.

    That doesn’t change the veracity of mine.

  • From David LaRue@21:1/5 to Lynn McGuire on Sun Mar 3 23:59:33 2024
    XPost: comp.lang.c++

    Lynn McGuire <lynnmcguire5@gmail.com> wrote in news:us0brl$246bf$1@dont-email.me:

    "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"

    https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks

    "The Biden administration backs a switch to more memory-safe
    programming languages. The tech industry sees their point, but it
    won't be easy."

    No. The feddies want to regulate software development very much.
    They have been talking about it for at least 20 years now. This is a
    very bad thing.

    Lynn

    I was thinking about this wrt other allegedly more secure languages. They
    can be hacked just as easily as C and C++ and many other languages. The
    government should worry about things they really need to control, which is
    less, not more, IMHO. They obviously know very little about software
    development.

    David
    Professional developer for nearly 45 years

  • From Kenny McCormack@21:1/5 to Chris M. Thomasson on Mon Mar 4 00:44:41 2024
    XPost: comp.lang.c++

    In article <us2s96$2n6h3$6@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    ...
    Sure. Putting corks on the forks reduces the chance of eye injuries.
    Fwiw, a YouTube link to a scene in the movie Dirty Rotten Scoundrels:
    Funny to me:


    https://youtu.be/eF8QAeQm3ZM?t=332

    Leader Keith gets mad when you post YouTube URLs here.

    I'd be more careful, if I were you.

    Putting the cork on the fork is akin to saying nobody should be using C and/or C++ in this "modern" age? :^)
    --
    The randomly chosen signature file that would have appeared here is more than 4 lines long. As such, it violates one or more Usenet RFCs. In order to remain in compliance with said RFCs, the actual sig can be found at the following URL:
    http://user.xmission.com/~gazelle/Sigs/ModernXtian

  • From bart@21:1/5 to Lawrence D'Oliveiro on Mon Mar 4 01:00:24 2024
    XPost: comp.lang.c++

    On 03/03/2024 23:29, Lawrence D'Oliveiro wrote:
    On Sun, 3 Mar 2024 14:06:31 -0800, Chris M. Thomasson wrote:

    On 3/3/2024 12:10 PM, Lawrence D'Oliveiro wrote:

    On Sun, 3 Mar 2024 12:01:57 +0100, David Brown wrote:

    It is not languages like C and C++ that are "unsafe".

    Some empirical evidence from Google
    <https://security.googleblog.com/2022/12/memory-safe-languages-in-android-13.html>
    shows a reduction in memory-safety errors in switching from C/C++ to
    Rust.

    Sure. Putting corks on the forks reduces the chance of eye injuries.

    Except this is Google, and they’re doing it in real-world production
    code, namely Android. And showing some positive benefits from doing
    so, without impairing the functionality of Android in any way.

    That's great. So long as it is somebody else programming in one of
    those languages where you have one hand tied behind your back. That used
    to be Ada. Now apparently it is Rust (so more like both hands tied).


    In the piechart in your link however, new code in C/C++ still looks to
    be nearly 3 times as much as Rust.

    Personally I think there must be an easier language which is considered
    to be safer without also making coding a nightmare.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Chris M. Thomasson on Mon Mar 4 05:43:40 2024
    XPost: comp.lang.c++

    On Sun, 3 Mar 2024 16:06:24 -0800, Chris M. Thomasson wrote:

    I remember a while back when some people would try to tell me that [Ada] solves all issues...

    It did make a difference. Did you know the life-support system on the International Space Station was written in Ada? Not something you
    would trust C++ code to, let’s face it.

    And here <https://devclass.com/2022/11/08/spark-as-good-as-rust-for-safer-coding-adacore-cites-nvidia-case-study/>
    is a project to make it even safer.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Chris M. Thomasson on Mon Mar 4 09:44:04 2024
    XPost: comp.lang.c++

    On 03/03/2024 23:01, Chris M. Thomasson wrote:
    On 3/3/2024 12:23 PM, David Brown wrote:
    On 03/03/2024 19:18, Kaz Kylheku wrote:

    Embedded systems often need custom memory management, not something that
    the language imposes. C has malloc, yet even that gets disused in favor
    of something else.


    For safe embedded systems, you don't want memory management at all.
    Avoiding dynamic memory is an important aspect of safety-critical
    embedded development.


    You still have to think about memory management even if you avoid any
    dynamic memory? How are you going to manage this memory wrt your various
    data structure needs....

    To be clear here - sometimes you can't avoid all use of dynamic memory
    and therefore memory management. And as Kaz says, you will often use
    custom solutions such as resource pools rather than generic malloc/free.
    Flexible network communication (such as Ethernet or other IP
    networking) is hard to do without dynamic memory.

    But for things that are safety or reliability critical, you aim to have everything statically allocated. (Sometimes you use dynamic memory at
    startup for convenience, but you never free that memory.) This, of
    course, means you simply don't use certain kinds of data structures. std::array<> is fine - it's just a nicer type wrapper around a fixed
    size C-style array. But you don't use std::vector<>, or other growable structures. You figure out in advance the maximum size you need for
    your structures, and nail them to that size at compile time.
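
    The "figure out the maximum size in advance and nail it down" idea can be
    sketched in C. This is illustrative only (the capacity and names are
    invented, not from any real project): a queue whose worst-case memory use
    is fixed at compile time and visible in the map file.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical reading queue: capacity fixed at compile time, so the
   worst-case memory use is statically known. Nothing ever grows. */
#define QUEUE_CAPACITY 32

typedef struct {
    uint32_t readings[QUEUE_CAPACITY];  /* fixed-size storage */
    size_t head;                        /* index of oldest element */
    size_t count;                       /* number of stored elements */
} fixed_queue;

static bool queue_push(fixed_queue *q, uint32_t value) {
    if (q->count == QUEUE_CAPACITY)
        return false;   /* full: caller must handle, the queue never grows */
    q->readings[(q->head + q->count) % QUEUE_CAPACITY] = value;
    q->count++;
    return true;
}

static bool queue_pop(fixed_queue *q, uint32_t *out) {
    if (q->count == 0)
        return false;   /* empty */
    *out = q->readings[q->head];
    q->head = (q->head + 1) % QUEUE_CAPACITY;
    q->count--;
    return true;
}
```

    The full/empty conditions are explicit return values, so the "can't do it
    right now" case is handled at design time rather than by allocation failure.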

    There are three big run-time dangers and one big build-time limitation
    when you have dynamic memory:

    1. You can run out. PCs can often be assumed to have "limitless"
    memory, and it is also often fine for a PC program to say it can't load
    that big file until you close other programs and free up memory. In a
    safety-critical embedded system, you have limited RAM, and your code
    never does things it does not have to do - consequently, it is not
    acceptable to say it can't run a task at the moment due to lack of memory.

    2. You get fragmentation from malloc/free, leading to allocation
    failures even when there is enough total free memory. Small embedded
    systems don't have virtual memory, paging, MMUs, and other ways to
    re-arrange the appearance of memory. If you free your memory in a
    different order from allocation, your heap gets fragmented, and you end
    up with your "free" memory consisting of lots of discontinuous bits.

    3. Your timing is hard to predict or constrain. Walking heaps to find
    free memory for malloc, or coalescing free segments on deallocation,
    often has very unpredictable timing. This is a big no-no for real time systems.

    And at design/build time, dynamic memory requirements are extremely
    difficult to analyse. In comparison, if everything is allocated
    statically, it's simple - it's all there in your map files, and you have
    a pass/fail result from trying to link it all within the available
    memory of the target.
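
    A common way to sidestep dangers 2 and 3 is a fixed-block pool: every
    block has the same size, so there is no fragmentation, and allocation and
    release are both constant-time list operations. A minimal sketch, with
    purely illustrative sizes:

```c
#include <stddef.h>
#include <stdint.h>

#define BLOCK_SIZE  64   /* payload size per block (illustrative) */
#define NUM_BLOCKS  16   /* total blocks (illustrative) */

typedef union block {
    union block *next;          /* link while on the free list */
    uint8_t data[BLOCK_SIZE];   /* payload while allocated */
} block;

static block pool[NUM_BLOCKS];  /* statically allocated: shows up in the map file */
static block *free_list;

void pool_init(void) {
    /* chain all blocks into the free list */
    for (size_t i = 0; i < NUM_BLOCKS - 1; i++)
        pool[i].next = &pool[i + 1];
    pool[NUM_BLOCKS - 1].next = NULL;
    free_list = &pool[0];
}

void *pool_alloc(void) {
    block *b = free_list;
    if (b)
        free_list = b->next;    /* O(1): no heap walk, no timing surprise */
    return b;                   /* NULL is danger 1: out of blocks */
}

void pool_free(void *p) {
    block *b = p;
    b->next = free_list;        /* O(1): no coalescing needed */
    free_list = b;
}
```

    Danger 1 remains, but it is reduced to a single, analysable number: the
    worst-case count of live blocks versus NUM_BLOCKS.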

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Derek@21:1/5 to All on Mon Mar 4 12:18:25 2024
    XPost: comp.lang.c++

    All,

    "White House to Developers: Using C or C++ Invites Cybersecurity Risks"

    https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks

    "The Biden administration backs a switch to more memory-safe programming languages. The tech industry sees their point,
    but it won't be easy."

    They make the mistake of blaming the tools rather than
    how the tools are used:
    https://shape-of-code.com/2024/03/03/the-whitehouse-report-on-adopting-memory-safety/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Malcolm McLean on Mon Mar 4 15:41:43 2024
    XPost: comp.lang.c++

    On 04/03/2024 12:54, Malcolm McLean wrote:
    On 04/03/2024 00:06, Chris M. Thomasson wrote:
    On 3/3/2024 3:59 PM, David LaRue wrote:
    Lynn McGuire <lynnmcguire5@gmail.com> wrote in
    news:us0brl$246bf$1@dont-email.me:

    "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"
    https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks

    "The Biden administration backs a switch to more memory-safe
    programming languages. The tech industry sees their point, but it
    won't be easy."

    No.  The feddies want to regulate software development very much.
    They have been talking about it for at least 20 years now.  This is a
    very bad thing.

    Lynn

    I was thinking about this wrt other allegedly more secure languages.
    They can be hacked just as easily as C and C++ and many other languages.
    The government should worry about things they really need to control,
    which is less not more, IMHO.  They obviously know very little about
    computer development.
    [...]

    I remember a while back when some people would try to tell me that ADA
    solves all issues...

    And there's ADA, and there's Ada, the lady.

    No, there's Ada the programming language, named after Lady Ada Lovelace.

    For those that perhaps don't understand these things, all-caps names are usually used for acronyms, such as BASIC, or languages from before small letters were universal in computer systems, such as early FORTRAN.
    Programming languages named after people are generally capitalised the
    same way people's names are - thus Ada and Pascal.


    And she wrote.

    "The Analytical Engine has no pretensions whatever to originate
    anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical
    relations or truths."

    And so she knew what the capabilities of the Analytical Engine were,
    exactly what programming was, what it could and could not achieve, and
    how to set out making it achieve what it could. And so she had it,
    and in a sense, ADA solved all issues.


    What I think you are trying to say, but got completely lost in the last sentence, is that Lady Ada Lovelace is often regarded (perhaps
    incorrectly) as the first computer programmer.

    And no formal computer science education. Of course.

    She had a great deal of education in mathematics - just like most
    computer science pioneers.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to David Brown on Mon Mar 4 17:05:54 2024
    XPost: comp.lang.c++

    On 03.03.2024 21:23, David Brown wrote:

    [...] Shortcuts are taken because
    the sales people need the code by tomorrow morning, and there are only
    so many hours in the night to get it working.

    An indication of bad project management (or none at all) to control
    development according to a realistic plan.

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to David Brown on Mon Mar 4 15:28:35 2024
    XPost: comp.lang.c++

    David Brown <david.brown@hesbynett.no> writes:
    On 04/03/2024 12:54, Malcolm McLean wrote:
    On 04/03/2024 00:06, Chris M. Thomasson wrote:
    On 3/3/2024 3:59 PM, David LaRue wrote:
    Lynn McGuire <lynnmcguire5@gmail.com> wrote in
    news:us0brl$246bf$1@dont-email.me:

    "White House to Developers: Using C or C++ Invites Cybersecurity
    Risks"
    https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks

    "The Biden administration backs a switch to more memory-safe
    programming languages. The tech industry sees their point, but it
    won't be easy."

    No.  The feddies want to regulate software development very much.
    They have been talking about it for at least 20 years now.  This is a
    very bad thing.

    Lynn

    I was thinking about this wrt other allegedly more secure languages.
    They can be hacked just as easily as C and C++ and many other languages.
    The government should worry about things they really need to control,
    which is less not more, IMHO.  They obviously know very little about
    computer development.
    [...]

    I remember a while back when some people would try to tell me that ADA
    solves all issues...

    And there's ADA, and there's Ada, the lady.

    No, there's Ada the programming language, named after Lady Ada Lovelace.

    Indeed. And ADA has a very different meaning stateside.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Janis Papanagnou on Mon Mar 4 18:24:58 2024
    XPost: comp.lang.c++

    On 04/03/2024 17:05, Janis Papanagnou wrote:
    On 03.03.2024 21:23, David Brown wrote:

    [...] Shortcuts are taken because
    the sales people need the code by tomorrow morning, and there are only
    so many hours in the night to get it working.

    An indication of bad project management (or none at all) to control development according to a realistic plan.


    Now you are beginning to understand!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Malcolm McLean on Mon Mar 4 21:07:27 2024
    XPost: comp.lang.c++

    On Mon, 4 Mar 2024 11:44:06 +0000, Malcolm McLean wrote:

    And of course Google can solve a problem by inventing a new language and putting up all the infrastructure that that would need around it.

    Google has invented quite a lot of languages: Dart and Go come to mind,
    and also this “Carbon” effort.

    I suppose nowadays a language can find a niche outside the mainstream, and still be viable. Proprietary products need mass-market success to stay
    afloat, but with open-source ones, what’s important is the contributor
    base, not the user base.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to David Brown on Mon Mar 4 21:11:08 2024
    XPost: comp.lang.c++

    On Mon, 4 Mar 2024 15:41:43 +0100, David Brown wrote:

    ... Lady Ada Lovelace is often regarded (perhaps
    incorrectly) as the first computer programmer.

    She was the first, in written records, to appreciate some of the
    not-so-obvious issues in computer programming.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Chris M. Thomasson on Mon Mar 4 21:26:51 2024
    XPost: comp.lang.c++

    On Mon, 4 Mar 2024 13:15:20 -0800, Chris M. Thomasson wrote:

    Would you trust a "safe" language that had some critical libraries that
    were written in say, C?

    The less C code you write, the easier it is to keep it under control.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Lawrence D'Oliveiro on Tue Mar 5 00:59:48 2024
    XPost: comp.lang.c++

    On Mon, 4 Mar 2024 21:07:27 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Mon, 4 Mar 2024 11:44:06 +0000, Malcolm McLean wrote:

    And of course Google can solve a problem by inventing a new
    language and putting up all the infrastructure that that would need
    around it.

    Google has invented quite a lot of languages: Dart and Go come to
    mind, and also this “Carbon” effort.

    I suppose nowadays a language can find a niche outside the
    mainstream, and still be viable. Proprietary products need
    mass-market success to stay afloat, but with open-source ones, what’s important is the contributor base, not the user base.

    Go *is* mainstream, more so than Rust.
    Dart is not mainstream and is not even niche.
    For Carbon it's too early to call, but so far prospects look bleak.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to David Brown on Tue Mar 5 02:46:51 2024
    XPost: comp.lang.c++

    On 04.03.2024 18:24, David Brown wrote:
    On 04/03/2024 17:05, Janis Papanagnou wrote:
    On 03.03.2024 21:23, David Brown wrote:

    [...] Shortcuts are taken because
    the sales people need the code by tomorrow morning, and there are only
    so many hours in the night to get it working.

    An indication of bad project management (or none at all) to control
    development according to a realistic plan.

    Now you are beginning to understand!

    Huh? - I posted about various factors (beyond the programmers'
    proficiency and tools) in an earlier reply to you; it was including
    the management factor that you missed to note and that you adopted
    as factor just in a later post. - So there's neither need nor reason
    for such an arrogant, wrong, and disrespectful statement.

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Michael S on Tue Mar 5 01:54:46 2024
    XPost: comp.lang.c++

    On Tue, 5 Mar 2024 00:59:48 +0200, Michael S wrote:

    Go *is* mainstream, more so than Rust.

    Google looked at what language to use for its proprietary “Fuchsia” OS,
    and decided Rust was a better choice than Go.

    Discord did some benchmarking of its back-end servers, which had been
    using Go, and decided that switching to Rust offered better performance.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Chris M. Thomasson on Tue Mar 5 03:32:23 2024
    XPost: comp.lang.c++

    On 04.03.2024 22:15, Chris M. Thomasson wrote:
    On 3/3/2024 9:43 PM, Lawrence D'Oliveiro wrote:
    On Sun, 3 Mar 2024 16:06:24 -0800, Chris M. Thomasson wrote:

    I remember a while back when some people would try to tell me that [Ada]
    solves all issues...

    It did make a difference. Did you know the life-support system on the
    International Space Station was written in Ada? Not something you
    would trust C++ code to, let’s face it.

    Would you trust a "safe" language that had some critical libraries that
    were written in say, C?

    You named them as "critical libraries", which (as a project manager)
    I'd handle as such; be sure about their quality, about certificates,
    write own test cases if necessary, or demand source code for reviews
    for own verification.

    As already said, there are more factors than the language. An external
    library is also an externality to consider, not something to accept
    (per se) as okay.

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Malcolm McLean on Tue Mar 5 04:43:21 2024
    XPost: comp.lang.c++

    On Tue, 5 Mar 2024 02:46:33 +0000, Malcolm McLean wrote:

    The less code you have, the less that can go wrong.

    This can also mean using the build system to automatically generate some repetitive things, to avoid having to write them manually.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Chris M. Thomasson on Tue Mar 5 07:06:38 2024
    XPost: comp.lang.c++

    On Mon, 4 Mar 2024 22:18:47 -0800, Chris M. Thomasson wrote:

    On 3/4/2024 5:54 PM, Lawrence D'Oliveiro wrote:

    On Tue, 5 Mar 2024 00:59:48 +0200, Michael S wrote:

    Go *is* mainstream, more so than Rust.

    Google looked at what language to use for its proprietary “Fuchsia” OS,
    and decided Rust was a better choice than Go.

    Discord did some benchmarking of its back-end servers, which had been
    using Go, and decided that switching to Rust offered better
    performance.

    Why do you mention performance? I thought it was all about safety...

    Safety’s a given. Plus you get performance as well.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Chris M. Thomasson on Tue Mar 5 07:07:48 2024
    XPost: comp.lang.c++

    On Mon, 4 Mar 2024 21:23:49 -0800, Chris M. Thomasson wrote:

    On 3/4/2024 8:43 PM, Lawrence D'Oliveiro wrote:

    On Tue, 5 Mar 2024 02:46:33 +0000, Malcolm McLean wrote:

    The less code you have, the less that can go wrong.

    This can also mean using the build system to automatically generate
    some repetitive things, to avoid having to write them manually.

    Does the build system depend on anything coded in C?

    These days, it might be Rust.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Lynn McGuire on Tue Mar 5 07:08:54 2024
    XPost: comp.lang.c++

    On Tue, 5 Mar 2024 00:03:54 -0600, Lynn McGuire wrote:

    On 3/3/2024 11:43 PM, Lawrence D'Oliveiro wrote:

    Did you know the life-support system on the
    International Space Station was written in Ada? Not something you would
    trust C++ code to, let’s face it.

    Most of the Ada code was written in C or C++ and converted to Ada for delivery.

    Was it debugged again? Or was it assumed that the translation was bug-
    free?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Lynn McGuire on Tue Mar 5 07:07:24 2024
    XPost: comp.lang.c++

    On Tue, 5 Mar 2024 00:09:35 -0600, Lynn McGuire wrote:

    ... I actually have had a Professional Engineer's License in Texas for
    34 years now and can tell you all about what it takes to get one and
    what it takes to keep one.

    Does that include any qualification in safety-critical or security-
    critical systems?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Lawrence D'Oliveiro on Tue Mar 5 11:11:03 2024
    XPost: comp.lang.c++

    On Tue, 5 Mar 2024 01:54:46 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 5 Mar 2024 00:59:48 +0200, Michael S wrote:

    Go *is* mainstream, more so than Rust.

    Google looked at what language to use for its proprietary “Fuchsia”
    OS, and decided Rust was a better choice than Go.


    Go is (1) garbage-collected, (2) mostly statically linked.
    (1) means it is not suitable for kernel
    (2) means it is suitable for big user-mode utilities, but probably
    impractical for smaller utilities, because you don't want your tiny
    utility to occupy 2-3 MB on permanent storage.
    But both (1) and (2) are advantages for typical application programming,
    esp. for back-end processing.

    Discord did some benchmarking of its back-end servers, which had been
    using Go, and decided that switching to Rust offered better
    performance.

    I have no idea who Discord is.
    However I fully expect that for micro- or mini-benchmarks they are
    correct.
    I also expect that
    - even for micro- or mini-benchmarks the difference in speed is less
    than a factor of 3
    - for big and complex real-world back-end processing, writing a working
    solution in Go will take 5 times fewer man-hours than writing it in
    Rust
    - for more complex processing, just making it work in Rust, regardless of
    execution speed, will require an uncommon level of programming skills
    - even if the Rust solution works initially, it would be more costly (than
    the Go solution) to maintain and especially to adapt to changing
    requirements.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Chris M. Thomasson on Tue Mar 5 10:01:53 2024
    XPost: comp.lang.c++

    On 04/03/2024 21:36, Chris M. Thomasson wrote:
    On 3/4/2024 12:44 AM, David Brown wrote:
    On 03/03/2024 23:01, Chris M. Thomasson wrote:
    On 3/3/2024 12:23 PM, David Brown wrote:
    On 03/03/2024 19:18, Kaz Kylheku wrote:

    Embedded systems often need custom memory management, not something
    that
    the language imposes. C has malloc, yet even that gets disused in
    favor
    of something else.


    For safe embedded systems, you don't want memory management at all.
    Avoiding dynamic memory is an important aspect of safety-critical
    embedded development.


    You still have to think about memory management even if you avoid any
    dynamic memory? How are you going to mange this memory wrt your
    various data structures needs....

    To be clear here - sometimes you can't avoid all use of dynamic memory
    and therefore memory management.  And as Kaz says, you will often use
    custom solutions such as resource pools rather than generic
    malloc/free.   Flexible network communication (such as Ethernet or
    other IP networking) is hard to do without dynamic memory.
    [...]

    Think of using a big chunk of memory that never needs to be freed and is
    just there per process. Now, you carve it up and store it in a cache
    that has functions push and pop. So, you still have to manage memory
    even when you are using no dynamic memory at all... Fair enough, in a
    sense? The push and the pop are your malloc and free in a strange sense...


    I believe I mentioned that. You do not, in general, "push and pop" -
    you malloc and never free. Excluding debugging code and other parts
    useful in testing and developing, you have something like :

    enum { heap_size = 16384 };
    alignas(max_align_t) static uint8_t heap[heap_size];
    static uint8_t * next_free = heap;

    void free(void * ptr) {
        (void) ptr;    /* never actually freed */
    }

    void * malloc(size_t size) {
        const size_t align = alignof(max_align_t);
        const size_t real_size = size ? (size + (align - 1)) & ~(align - 1)
                                      : align;
        void * p = next_free;
        next_free += real_size;    /* no out-of-space check in this sketch */
        return p;
    }


    Allowing for pops requires storing the size of the allocations (unless
    you change the API from that of malloc/free), and is only rarely useful.
    Generally if you want memory that is temporary, you use a VLA or alloca
    to put it on the stack.
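
    One sketch of the "change the API" alternative: a LIFO arena where the
    caller saves a mark and later restores it, popping everything allocated
    since. No per-allocation size needs to be stored. The names and the
    16 KiB figure are illustrative assumptions, not from any real system.

```c
#include <stdalign.h>
#include <stddef.h>
#include <stdint.h>

enum { arena_size = 16384 };
static alignas(max_align_t) uint8_t arena[arena_size];
static size_t arena_top;            /* offset of first free byte */

typedef size_t arena_mark;

/* Remember the current top; everything allocated after this call
   is released together by arena_restore(). */
arena_mark arena_save(void) { return arena_top; }

void *arena_alloc(size_t size) {
    const size_t align = alignof(max_align_t);
    const size_t real_size = size ? (size + (align - 1)) & ~(align - 1)
                                  : align;
    if (arena_top + real_size > arena_size)
        return NULL;                /* out of arena space */
    void *p = &arena[arena_top];
    arena_top += real_size;
    return p;
}

void arena_restore(arena_mark m) {
    arena_top = m;                  /* pops all allocations since the save */
}
```

    The trade-off is that releases must be strictly LIFO, which fits the
    "temporary working memory" use case the post describes.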

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Janis Papanagnou on Tue Mar 5 11:23:41 2024
    XPost: comp.lang.c++

    On 05/03/2024 02:46, Janis Papanagnou wrote:
    On 04.03.2024 18:24, David Brown wrote:
    On 04/03/2024 17:05, Janis Papanagnou wrote:
    On 03.03.2024 21:23, David Brown wrote:

    [...] Shortcuts are taken because
    the sales people need the code by tomorrow morning, and there are only >>>> so many hours in the night to get it working.

    An indication of bad project management (or none at all) to control
    development according to a realistic plan.

    Now you are beginning to understand!

    Huh? - I posted about various factors (beyond the programmers'
    proficiency and tools) in an earlier reply to you; it was including
    the management factor that you missed to note and that you adopted
    as factor just in a later post. - So there's neither need nor reason
    for such an arrogant, wrong, and disrespectful statement.


    It was not intended that way at all - I'm sorry if that is how it came
    across.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Lawrence D'Oliveiro on Tue Mar 5 11:31:11 2024
    XPost: comp.lang.c++

    On 04/03/2024 22:11, Lawrence D'Oliveiro wrote:
    On Mon, 4 Mar 2024 15:41:43 +0100, David Brown wrote:

    ... Lady Ada Lovelace is often regarded (perhaps
    incorrectly) as the first computer programmer.

    She was the first, in written records, to appreciate some of the
    not-so-obvious issues in computer programming.

    Yes. That includes realising that computers could do more than number crunching. She was also involved in checking, correcting and commenting
    some of Babbage's programs, and also was the first to publish an
    algorithm (for Bernoulli numbers) designed specifically for executing on
    a computer. And she did all this without a working computer.

    So while calling her "the first computer programmer" is inaccurate, she
    was definitely a key computer science pioneer.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Lawrence D'Oliveiro on Tue Mar 5 11:27:01 2024
    XPost: comp.lang.c++

    On 05/03/2024 08:08, Lawrence D'Oliveiro wrote:
    On Tue, 5 Mar 2024 00:03:54 -0600, Lynn McGuire wrote:

    On 3/3/2024 11:43 PM, Lawrence D'Oliveiro wrote:

    Did you know the life-support system on the
    International Space Station was written in Ada? Not something you would
    trust C++ code to, let’s face it.

    Most of the Ada code was written in C or C++ and converted to Ada for
    delivery.

    Was it debugged again? Or was it assumed that the translation was bug-
    free?

    With Ada, if you can get it to compile, it's ready to ship :-)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Lynn McGuire on Tue Mar 5 14:56:58 2024
    XPost: comp.lang.c++

    Lynn McGuire <lynnmcguire5@gmail.com> writes:
    On 3/3/2024 9:31 AM, Scott Lurndal wrote:
    Lynn McGuire <lynnmcguire5@gmail.com> writes:
    "White House to Developers: Using C or C++ Invites Cybersecurity Risks"

    https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks

    "The Biden administration backs a switch to more memory-safe programming >>> languages. The tech industry sees their point, but it won't be easy."

    No. The feddies want to regulate software development very much.

    You've been reading far too much apocalyptic fiction and seeing the
    world through trump-colored glasses. Neither reflects reality.

    Nope, I actually have had a Professional Engineer's License in Texas for
    34 years now and can tell you all about what it takes to get one and
    what it takes to keep one.

    This bunch of crazies in the White House wants to do the same thing to
    software development.


    Nothing in the quoted article supports your ridiculous assertion.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Chris M. Thomasson on Tue Mar 5 21:24:26 2024
    XPost: comp.lang.c++

    On 2024-03-05, Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 3/5/2024 2:27 AM, David Brown wrote:
    On 05/03/2024 08:08, Lawrence D'Oliveiro wrote:
    On Tue, 5 Mar 2024 00:03:54 -0600, Lynn McGuire wrote:

    On 3/3/2024 11:43 PM, Lawrence D'Oliveiro wrote:

    Did you know the life-support system on the
    International Space Station was written in Ada? Not something you would
    trust C++ code to, let’s face it.

    Most of the Ada code was written in C or C++ and converted to Ada for
    delivery.

    Was it debugged again? Or was it assumed that the translation was bug-
    free?

    With Ada, if you can get it to compile, it's ready to ship :-)

    Really? Any logic errors in the program itself?

    Ariane 5 rocket incident of 1996: The Ada code didn't catch the hardware overflow exception from forcing a 64 bit floating-point value into a 16
    bit integer. The situation was not expected by the code which was
    developed for the Ariane 4, or something like that.
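
    The failure mode described here, forcing a wide value into a 16-bit
    integer, can be guarded explicitly at the conversion site. A minimal C
    sketch of a checked narrowing conversion (illustrative only, and in no
    way the actual Ariane flight code, which was Ada):

```c
#include <stdbool.h>
#include <stdint.h>

/* Returns false instead of invoking undefined/implementation-defined
   behaviour when the double value does not fit in an int16_t. The
   caller then decides whether to clamp, flag an error, or abort. */
bool to_int16_checked(double value, int16_t *out) {
    if (value < INT16_MIN || value > INT16_MAX)
        return false;               /* would overflow: report, don't trap */
    *out = (int16_t)value;
    return true;
}
```

    The Ariane 5 lesson is less about the check itself than about revisiting
    such range assumptions when code is reused on a faster vehicle.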

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Michael S on Tue Mar 5 22:58:10 2024
    XPost: comp.lang.c++

    On Tue, 5 Mar 2024 11:11:03 +0200, Michael S wrote:

    On Tue, 5 Mar 2024 01:54:46 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    Discord did some benchmarking of its back-end servers, which had been
    using Go, and decided that switching to Rust offered better
    performance.

    - for big and complex real-world back-end processing, writing working
    solution in go will take 5 time less man hours than writing it in Rust

    Nevertheless, they found the switch to Rust worthwhile.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to David Brown on Wed Mar 6 00:25:08 2024
    XPost: comp.lang.c++

    On Tue, 5 Mar 2024 11:31:11 +0100, David Brown wrote:

    That includes realising that computers could do more than number
    crunching.

    Or, conversely, realizing that all forms of computation (including symbol manipulation) can be expressed as arithmetic? Maybe that came later, cf “Gödel numbering”.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Chris M. Thomasson on Wed Mar 6 00:25:49 2024
    XPost: comp.lang.c++

    On Tue, 5 Mar 2024 13:48:25 -0800, Chris M. Thomasson wrote:

    On 3/4/2024 11:07 PM, Lawrence D'Oliveiro wrote:

    On Mon, 4 Mar 2024 21:23:49 -0800, Chris M. Thomasson wrote:

    Does the build system depend on anything coded in C?

    These days, it might be Rust.

    The keyword is might... Right?

    Might does not make right.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Chris M. Thomasson on Wed Mar 6 11:43:21 2024
    XPost: comp.lang.c++

    On 05/03/2024 21:51, Chris M. Thomasson wrote:
    On 3/5/2024 1:01 AM, David Brown wrote:
    On 04/03/2024 21:36, Chris M. Thomasson wrote:
    On 3/4/2024 12:44 AM, David Brown wrote:
    On 03/03/2024 23:01, Chris M. Thomasson wrote:
    On 3/3/2024 12:23 PM, David Brown wrote:
    On 03/03/2024 19:18, Kaz Kylheku wrote:

    Embedded systems often need custom memory management, not
    something that
    the language imposes. C has malloc, yet even that gets disused in >>>>>>> favor
    of something else.


    For safe embedded systems, you don't want memory management at
    all. Avoiding dynamic memory is an important aspect of
    safety-critical embedded development.


    You still have to think about memory management even if you avoid
    any dynamic memory? How are you going to manage this memory wrt your
    various data structures needs....

    To be clear here - sometimes you can't avoid all use of dynamic
    memory and therefore memory management.  And as Kaz says, you will
    often use custom solutions such as resource pools rather than
    generic malloc/free.   Flexible network communication (such as
    Ethernet or other IP networking) is hard to do without dynamic memory.
    [...]

    Think of using a big chunk of memory, never needed to be freed and is
    just there per process. Now, you carve it up and store it in a cache
    that has functions push and pop. So, you still have to manage memory
    even when you are using no dynamic memory at all... Fair enough, in a
    sense? The push and the pop are your malloc and free in a strange
    sense...


    I believe I mentioned that.  You do not, in general, "push and pop" -
    you malloc and never free.  Excluding debugging code and other parts
    useful in testing and developing, you have something like:

    #include <stdalign.h>   /* alignas, alignof */
    #include <stddef.h>     /* size_t, max_align_t */
    #include <stdint.h>     /* uint8_t */

    enum { heap_size = 16384 };
    alignas(max_align_t) static uint8_t heap[heap_size];
    static uint8_t * next_free = heap;

    void free(void * ptr) {
         (void) ptr;
    }

    void * malloc(size_t size) {
         const size_t align = alignof(max_align_t);
         const size_t real_size = size ? (size + (align - 1)) & ~(align - 1)
                     : align;
         void * p = next_free;
         next_free += real_size;
         return p;
    }


    Allowing for pops requires storing the size of the allocations (unless
    you change the API from that of malloc/free), and is only rarely
    useful.  Generally, if you want memory that is that temporary, you use a
    VLA or alloca to put it on the stack.


    wrt systems with no malloc/free, I am thinking more along the lines of a
    region allocator mixed with a LIFO for a cache - so, a node-based thing.
    The region allocator gets fed a large buffer. Depending on specific needs,
    it can work out nicely for systems that do not have malloc/free.
    The pattern I used, iirc, was something like:

    // pseudo code...
    _______________________
    node*
    node_pop()
    {
        // try the lifo first...

        node* n = lifo_pop();

        if (! n)
        {
            // resort to the region allocator...

            n = region_allocate_node();

            // note, n can be null here.
            // if it is, we are out of memory.

            // note, out of memory on a system
            // with no malloc/free...
        }

        return n;
    }

    void
    node_push(
        node* n
    ) {
         lifo_push(n);
    }
    _______________________
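    Filled out a little, the pseudo code above might look like this in C.
    This is only a sketch: the payload, the REGION_CAP size, and the
    intrusive next link are invented for illustration, and nothing here is
    thread-safe.

```c
#include <stddef.h>

/* All names and sizes here are illustrative, not from the original post. */
#define REGION_CAP 64           /* capacity of the static region */

typedef struct node {
    struct node *next;          /* intrusive link used while on the lifo */
    /* ... payload ... */
} node;

static node region[REGION_CAP]; /* the large buffer fed to the region allocator */
static size_t region_used = 0;  /* bump index; never decremented */
static node *lifo_head = NULL;  /* freed nodes, cached for reuse */

static node *lifo_pop(void) {
    node *n = lifo_head;
    if (n) lifo_head = n->next;
    return n;
}

static void lifo_push(node *n) {
    n->next = lifo_head;
    lifo_head = n;
}

static node *region_allocate_node(void) {
    if (region_used == REGION_CAP)
        return NULL;            /* out of memory, with no malloc/free in sight */
    return &region[region_used++];
}

node *node_pop(void) {
    node *n = lifo_pop();       /* try the lifo first... */
    if (!n)
        n = region_allocate_node();  /* may still be NULL */
    return n;
}

void node_push(node *n) {
    lifo_push(n);
}
```

    Popping tries the cache first, so once a node has been carved from the
    region it cycles through the lifo rather than ever being handed back.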


    make any sense to you?


    I know what you are trying to suggest, and I understand how it can sound reasonable. In some cases, this can be a useful kind of allocator, and
    when it is suitable, it is very fast. But it has two big issues for
    small embedded systems.

    One problem is the "region_allocate_node()" - getting a lump of space
    from the underlying OS. That is fine on "big systems", and it is normal
    that malloc/free systems only ask for memory from the OS in big lumps,
    then handle local allocation within the process space for efficiency.
    (This can work particularly well if each thread gets dedicated lumps, so
    that no locking is needed for most malloc/free calls.)

    But in a small embedded system, there is no OS (an RTOS is generally
    part of the same binary as the application), and providing such "lumps"
    would be dynamic memory management. So if you are using a system like
    you describe, then you would have a single statically allocated block of
    memory for your lifo stack.

    Then there is the question of how often such a stack-like allocator is
    useful, independent of the normal stack. I can imagine it is
    /sometimes/ helpful, but rarely. I can't think off-hand of any cases
    where I would have found it useful in anything I have written.

    As I (and others) have said elsewhere, in small embedded systems and
    safety or reliability critical systems, you want to avoid dynamic memory
    and memory management whenever possible, for a variety of reasons. If
    you do need something, then specialised allocators are more common -
    possibly including lifos like this.

    But it's more likely to have fixed-size pools with fixed-size elements, dedicated to particular memory tasks. For example, if you need to track multiple in-flight messages on a wireless mesh network, where messages
    might take different amounts of time to be delivered and acknowledged,
    or retried, you define a structure that holds all the data you need for
    a message. Then you decide how many in-flight messages you will support
    as a maximum. This gives you a statically allocated array of N structs.
    Block usage is then tracked by a bitmap, typically within a single 32-bit
    word. Finding a free slot is then just a matter of finding the first zero
    bit, and freeing a slot is clearing the corresponding bit.
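    A minimal sketch of such a bitmap-managed pool (the message struct, the
    pool size, and all the names are invented for illustration):

```c
#include <stddef.h>
#include <stdint.h>

#define N_MSGS 32               /* chosen maximum of in-flight messages */

typedef struct {
    int dest;                   /* illustrative payload fields */
    int retries;
} msg_slot;

static msg_slot pool[N_MSGS];   /* statically allocated array of N structs */
static uint32_t used_map = 0;   /* bit i set means pool[i] is in use */

/* Find the first zero bit, set it, and hand out the matching slot. */
static msg_slot *msg_alloc(void) {
    for (int i = 0; i < N_MSGS; i++) {
        uint32_t bit = UINT32_C(1) << i;
        if (!(used_map & bit)) {
            used_map |= bit;
            return &pool[i];
        }
    }
    return NULL;                /* pool exhausted */
}

/* Freeing a slot is just clearing the corresponding bit. */
static void msg_free(msg_slot *m) {
    used_map &= ~(UINT32_C(1) << (m - pool));
}
```

    (On most targets the search loop would collapse to a single
    count-trailing-zeros instruction, but the loop keeps the sketch
    portable.)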

    There are, of course, many other kinds of dedicated allocators that can
    be used in other circumstances.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Lawrence D'Oliveiro on Wed Mar 6 14:02:14 2024
    XPost: comp.lang.c++

    On Tue, 5 Mar 2024 22:58:10 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 5 Mar 2024 11:11:03 +0200, Michael S wrote:

    On Tue, 5 Mar 2024 01:54:46 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    Discord did some benchmarking of its back-end servers, which had
    been using Go, and decided that switching to Rust offered better
    performance.

    - for big and complex real-world back-end processing, writing
    working solution in go will take 5 time less man hours than writing
    it in Rust

    Nevertheless, they found the switch to Rust worthwhile.

    I read a little more about it. https://discord.com/blog/why-discord-is-switching-from-go-to-rust

    Summary: performance of one of Discord's most heavy-duty servers
    suffered from a weakness in the implementation of the Go garbage collector. On
    average the performance was satisfactory, but every two minutes there
    was a spike in latency. The latency during the spike was not that big
    (300 msec), but they still felt they wanted better.
    They tried to tune GC, but the problem appeared to be fundamental.
    So they just rewrote this particular server in Rust. Naturally, Rust
    does not collect garbage, so this particular problem disappeared.

    The key phrase of the story is "This service was a great candidate to
    port to Rust since it was small and self-contained".
    I'd add to this that even more important for the eventual success of
    the migration was the fact that at the time of the rewrite the server
    had already been running for several years, so requirements were stable
    and well-understood.
    Another factor is that their service does not create/free that many
    objects. The delay was caused by the mere fact of GC scanning rather than
    by frequent compacting of memory pools. So, from the beginning it was
    obvious that potential fragmentation of the heap, which is the main
    weakness of "plain" C/C++/Rust based solutions for Web back-ends, does
    not apply in their case.

    There is also a non-technical angle involved: Discord is fueled by
    investors' money. It's not that they have no revenues at all, but their revenues at this stage are not supposed to cover their expenses.
    Companies that operate in such a mode have a different
    perspective on just about everything. I mean, different from the
    perspective of people like myself, working in a company that fights hard
    to stay profitable and succeeds more often than not.

    I have a few questions about the story; the most important one is whether a weakness of this sort is specific to the GC of Go, due to its relative
    immaturity, or is more general and applies equally to the most mature
    GCs on the market, i.e. J2EE and .NET.
    Another question is whether the problem is specific to GC-style of
    automatic memory management (AMM) or applies, at least to some degree,
    to other forms of AMM, most importantly, to AMMs based on Reference
    Counting used by Swift and also popular in C++.
    Of course, I don't expect that my questions will be answered fully on comp.lang.c, but if some knowledgeable posters try to answer, I
    would appreciate it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Michael S on Wed Mar 6 12:28:59 2024
    XPost: comp.lang.c++

    On 06/03/2024 12:02, Michael S wrote:
    On Tue, 5 Mar 2024 22:58:10 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 5 Mar 2024 11:11:03 +0200, Michael S wrote:

    On Tue, 5 Mar 2024 01:54:46 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    Discord did some benchmarking of its back-end servers, which had
    been using Go, and decided that switching to Rust offered better
    performance.

    - for big and complex real-world back-end processing, writing
    working solution in go will take 5 time less man hours than writing
    it in Rust

    Nevertheless, they found the switch to Rust worthwhile.

    I read a little more about it. https://discord.com/blog/why-discord-is-switching-from-go-to-rust

    Summary: performance of one of Discord's most heavy-duty servers
    suffered from weakness in implementation of Go garbage collector. On
    average the performance was satisfactory, but every two minutes there
    was a spike in latency. The latency during the spike was not that big
    (300 msec), but they still felt they wanted better.
    They tried to tune GC, but the problem appeared to be fundamental.
    So they just rewrote this particular server in Rust. Naturally, Rust
    does not collect garbage, so this particular problem disappeared.

    The key phrase of the story is "This service was a great candidate to
    port to Rust since it was small and self-contained".
    I'd add to this that even more important for eventual success of
    migration was the fact that at time of rewrite server was already
    running for several years, so requirements were stable and
    well-understood.
    Another factor is that their service does not create/free that many
    objects. The delay was caused by mere fact of GC scanning rather than
    by frequent compacting of memory pools. So, from the beginning it was
    obvious that potential fragmentation of the heap, which is the main
    weakness of "plain" C/C++/Rust based solutions for Web back-ends, does
    not apply in their case.

    From the same link:

    "Rust uses a relatively unique memory management approach that
    incorporates the idea of memory “ownership”. Basically, Rust keeps track
    of who can read and write to memory. It knows when the program is using
    memory and immediately frees the memory once it is no longer needed. It enforces memory rules at compile time, making it virtually impossible to
    have runtime memory bugs.⁴ You do not need to manually keep track of
    memory. The compiler takes care of it."

    This suggests the language automatically takes care of this. But you
    have to write your programs in a certain way to make it possible. The programmer has to help the language keep track of what owns what.

    So you will probably be able to do the same thing in another language.
    But Rust will do more compile-time enforcement by restricting how you
    share objects in memory.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Chris M. Thomasson on Wed Mar 6 14:34:50 2024
    XPost: comp.lang.c++

    On 05/03/2024 23:02, Chris M. Thomasson wrote:
    On 3/5/2024 1:58 PM, Keith Thompson wrote:
    Kaz Kylheku <433-929-6894@kylheku.com> writes:
    On 2024-03-05, Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 3/5/2024 2:27 AM, David Brown wrote:
    On 05/03/2024 08:08, Lawrence D'Oliveiro wrote:
    On Tue, 5 Mar 2024 00:03:54 -0600, Lynn McGuire wrote:

    On 3/3/2024 11:43 PM, Lawrence D'Oliveiro wrote:

    Did you know the life-support system on the
    International Space Station was written in Ada? Not something
    you would
    trust C++ code to, let’s face it.

    Most of the Ada code was written in C or C++ and converted to Ada for
    delivery.

    Was it debugged again? Or was it assumed that the translation was
    bug-
    free?

    With Ada, if you can get it to compile, it's ready to ship :-)

    Really? Any logic errors in the program itself?

    Ariane 5 rocket incident of 1996: The Ada code didn't catch the hardware
    overflow exception from forcing a 64 bit floating-point value into a 16
    bit integer. The situation was not expected by the code, which was
    developed for the Ariane 4, or something like that.

    A numeric overflow occurred during the Ariane 5's initial flight -- and
    the software *did* catch the overflow.  The same overflow didn't occur
    on Ariane 4 because of its different flight profile.  There was a
    management decision to reuse the Ariane 4 flight software for Ariane 5
    without sufficient review.

    The code (which had been thoroughly tested on Ariane 4 and was known not
    to overflow) emitted an error message describing the overflow exception.
    That error message was then processed as data.  Another problem was that
    systems were designed to shut down on any error; as a result, healthy
    and necessary equipment was shut down prematurely.

    This is from my vague memory, and may not be entirely accurate.

    That matches my recollection too.


    *Of course* logic errors are possible in Ada programs, but in my
    experience and that of many other programmers, if you get an Ada program
    to compile (and run without raising unhandled exceptions), you're likely
    to be much closer to a working program than if you get a C program to
    compile.  A typo in a C program is more likely to result in a valid
    program with different semantics.


    So close you can just feel it's a 100% correct and working program?

    Didn't you notice the smiley in my comment? It used to be a running
    joke that if you managed to get your Ada code to compile, it was ready
    to ship. The emphasis is on the word "joke".

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Chris M. Thomasson on Wed Mar 6 14:31:50 2024
    XPost: comp.lang.c++

    On 05/03/2024 23:34, Chris M. Thomasson wrote:
    On 3/5/2024 2:11 PM, Keith Thompson wrote:
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
    [...]
    ADA is bullet proof... Until its not... ;^)

    The language is called Ada, not ADA.

    I wonder how many people got confused?


    Apparently you and Malcolm got confused.

    Others who mentioned the language know it is called "Ada". I not only corrected you, but gave an explanation of it, in the hope that with that clarity, you'd learn.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Lawrence D'Oliveiro on Wed Mar 6 14:40:46 2024
    XPost: comp.lang.c++

    On 06/03/2024 01:25, Lawrence D'Oliveiro wrote:
    On Tue, 5 Mar 2024 11:31:11 +0100, David Brown wrote:

    That includes realising that computers could do more than number
    crunching.

    Or, conversely, realizing that all forms of computation (including symbol manipulation) can be expressed as arithmetic?

    That's also a reasonable way to put it. I have not read any of her
    writings, so I don't know exactly how she described things.

    Maybe that came later, cf
    “Gödel numbering”.

    That's getting a few steps further on - it is treating programs as data,
    and I don't think there's any reason to suspect that was something Ada
    Lovelace thought about. It's also very theoretical, while Ada was more interested in the practical applications of computers.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to David Brown on Wed Mar 6 13:50:16 2024
    XPost: comp.lang.c++

    On 06/03/2024 13:31, David Brown wrote:
    On 05/03/2024 23:34, Chris M. Thomasson wrote:
    On 3/5/2024 2:11 PM, Keith Thompson wrote:
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
    [...]
    ADA is bullet proof... Until its not... ;^)

    The language is called Ada, not ADA.

    I wonder how many people got confused?


    Apparently you and Malcolm got confused.

    Others who mentioned the language know it is called "Ada".  I not only corrected you, but gave an explanation of it, in the hope that with that clarity, you'd learn.


    Whoever wrote this short Wikipedia article on it got confused too as it
    uses both Ada and ADA:

    https://simple.wikipedia.org/wiki/Ada_(programming_language)

    (The example program also includes 'Ada' as some package name. Since it
    is case-insensitive, 'ADA' would also work.)

    Here's also a paper that uses 'ADA' (I assume it is the same language):

    https://www.sciencedirect.com/science/article/abs/pii/0166361582900136

    Personally I'm not bothered whether anyone uses Ada or ADA. Is 'C'
    written in all-caps or only capitalised? You can't tell!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From aph@littlepinkcloud.invalid@21:1/5 to Michael S on Wed Mar 6 14:30:58 2024
    XPost: comp.lang.c++

    In comp.lang.c Michael S <already5chosen@yahoo.com> wrote:
    On Tue, 5 Mar 2024 22:58:10 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 5 Mar 2024 11:11:03 +0200, Michael S wrote:

    On Tue, 5 Mar 2024 01:54:46 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    Discord did some benchmarking of its back-end servers, which had
    been using Go, and decided that switching to Rust offered better
    performance.

    - for big and complex real-world back-end processing, writing
    working solution in go will take 5 time less man hours than writing
    it in Rust

    Nevertheless, they found the switch to Rust worthwhile.

    I read a little more about it. https://discord.com/blog/why-discord-is-switching-from-go-to-rust

    Summary: performance of one of Discord's most heavy-duty servers
    suffered from weakness in implementation of Go garbage collector. On
    average the performance was satisfactory, but every two minutes there
    was a spike in latency. The latency during the spike was not that big
    (300 msec), but they still felt they wanted better.

    ...

    I have few questions about the story, most important one is whether the weakness of this sort is specific to GC of Go, due to its relative
    immaturity

    I'm sure it is. 300ms is terrible.

    or more general and applies equally to most mature GCs on the
    market, i.e. J2EE and .NET.

    Continuously-compacting concurrent collectors like those available for
    Java aim for less than 10ms, and often hit 1ms. You have to stop each
    thread briefly to scan its stack and do a few other things, but that's
    all.

    Andrew.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Michael S on Wed Mar 6 14:38:25 2024
    XPost: comp.lang.c++

    On 06/03/2024 14:18, Michael S wrote:
    On Wed, 6 Mar 2024 13:50:16 +0000
    bart <bc@freeuk.com> wrote:

    Whoever wrote this short Wikipedia article on it got confused too as
    it uses both Ada and ADA:

    https://simple.wikipedia.org/wiki/Ada_(programming_language)

    (The example program also includes 'Ada' as some package name. Since
    it is case-insensitive, 'ADA' would also work.)


    Your link is to "simple Wikipedia". I don't know what it is
    exactly, but it does not appear as authoritative as real Wikipedia

    https://en.wikipedia.org/wiki/Ada_(programming_language)

    Here's also a paper that uses 'ADA' (I assume it is the same
    language):

    https://www.sciencedirect.com/science/article/abs/pii/0166361582900136


    The article was published in 1982. The language became official in 1983.
    Possibly, in 1982 there was still confusion w.r.t. its name.

    It would have been known that it was named after a person. (I think Lovelace
    would have been better though.)

    Personally I'm not bothered whether anyone uses Ada or ADA. Is 'C'
    written in all-caps or only capitalised? You can't tell!


    If only ADA, written in upper case, was not widely used for something
    else...

    I don't know what that is without looking it up. In a programming
    newsgroup I expect ADA to be the language.

    BTW it's a good thing that C, written in upper case, can never be
    confused with anything else...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to bart on Wed Mar 6 16:18:42 2024
    XPost: comp.lang.c++

    On Wed, 6 Mar 2024 13:50:16 +0000
    bart <bc@freeuk.com> wrote:

    On 06/03/2024 13:31, David Brown wrote:
    On 05/03/2024 23:34, Chris M. Thomasson wrote:
    On 3/5/2024 2:11 PM, Keith Thompson wrote:
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
    [...]
    ADA is bullet proof... Until its not... ;^)

    The language is called Ada, not ADA.

    I wonder how many people got confused?


    Apparently you and Malcolm got confused.

    Others who mentioned the language know it is called "Ada". I not
    only corrected you, but gave an explanation of it, in the hope that
    with that clarity, you'd learn.


    Whoever wrote this short Wikipedia article on it got confused too as
    it uses both Ada and ADA:

    https://simple.wikipedia.org/wiki/Ada_(programming_language)

    (The example program also includes 'Ada' as some package name. Since
    it is case-insensitive, 'ADA' would also work.)


    Your link is to "simple Wikipedia". I don't know what it is
    exactly, but it does not appear as authoritative as real Wikipedia

    https://en.wikipedia.org/wiki/Ada_(programming_language)

    Here's also a paper that uses 'ADA' (I assume it is the same
    language):

    https://www.sciencedirect.com/science/article/abs/pii/0166361582900136


    The article was published in 1982. The language became official in 1983.
    Possibly, in 1982 there was still confusion w.r.t. its name.

    Personally I'm not bothered whether anyone uses Ada or ADA. Is 'C'
    written in all-caps or only capitalised? You can't tell!


    If only ADA, written in upper case, was not widely used for something
    else...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Michael S on Wed Mar 6 14:14:42 2024
    XPost: comp.lang.c++

    On 3/6/24 09:18, Michael S wrote:
    On Wed, 6 Mar 2024 13:50:16 +0000
    bart <bc@freeuk.com> wrote:
    ...
    Whoever wrote this short Wikipedia article on it got confused too as
    it uses both Ada and ADA:

    https://simple.wikipedia.org/wiki/Ada_(programming_language)

    (The example program also includes 'Ada' as some package name. Since
    it is case-insensitive, 'ADA' would also work.)


    Your link is to "simple Wikipedia". I don't know what it is
    exactly, but it does not appear as authoritative as real Wikipedia

    Notice that in your following link, "en" appears at the beginning to
    indicate the use of English. "simple" at the beginning of the above link
    serves the same purpose. "Simple English" is its own language, closely
    related to standard English. Read the corresponding Wikipedia article
    for more details.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to bart on Wed Mar 6 19:46:49 2024
    XPost: comp.lang.c++

    On 06/03/2024 14:38, bart wrote:
    On 06/03/2024 14:18, Michael S wrote:

    If only ADA, written in upper case, was not widely used for something
    else...

    I don't know what that is without looking it up. In a programming
    newsgroup I expect ADA to be the language.

    Here's an interesting pic:

    https://upload.wikimedia.org/wikipedia/commons/5/50/AdaLovelaceplaque.JPG

    Notice the upper-case name.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to bart on Wed Mar 6 19:50:16 2024
    XPost: comp.lang.c++

    bart <bc@freeuk.com> writes:
    On 06/03/2024 14:38, bart wrote:
    On 06/03/2024 14:18, Michael S wrote:

    If only ADA, written in upper case, was not widely used for something
    else...

    I don't know what that is without looking it up. In a programming
    newsgroup I expect ADA to be the language.

    Here's an interesting pic:

    https://upload.wikimedia.org/wikipedia/commons/5/50/AdaLovelaceplaque.JPG

    Notice the upper-case name.

    Given that the entire name is in all uppercase, and it's not referring
    to the computer language, what is your point, if any?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to James Kuyper on Wed Mar 6 19:50:45 2024
    XPost: comp.lang.c++

    On 2024-03-06, James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
    On 3/6/24 09:18, Michael S wrote:
    On Wed, 6 Mar 2024 13:50:16 +0000
    bart <bc@freeuk.com> wrote:
    ...
    Whoever wrote this short Wikipedia article on it got confused too as
    it uses both Ada and ADA:

    https://simple.wikipedia.org/wiki/Ada_(programming_language)

    (The example program also includes 'Ada' as some package name. Since
    it is case-insensitive, 'ADA' would also work.)


    Your link is to "simple Wikipedia". I don't know what it is
    exactly, but it does not appear as authoritative as real Wikipedia

    Notice that in your following link, "en" appears at the beginning to
    indicate the use of English. "simple" at the beginning of the above link serves the same purpose. "Simple English" is its own language, closely related to standard English.

    Where is Simple English spoken? Is there some geographic area where
    native speakers concentrate?

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Kaz Kylheku on Wed Mar 6 21:13:40 2024
    XPost: comp.lang.c++

    On 06/03/2024 20:50, Kaz Kylheku wrote:
    On 2024-03-06, James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
    On 3/6/24 09:18, Michael S wrote:
    On Wed, 6 Mar 2024 13:50:16 +0000
    bart <bc@freeuk.com> wrote:
    ...
    Whoever wrote this short Wikipedia article on it got confused too as
    it uses both Ada and ADA:

    https://simple.wikipedia.org/wiki/Ada_(programming_language)

    (The example program also includes 'Ada' as some package name. Since
    it is case-insensitive, 'ADA' would also work.)


    Your link is to "simple Wikipedia". I don't know what it is
    exactly, but it does not appear as authoritative as real Wikipedia

    Notice that in your following link, "en" appears at the beginning to
    indicate the use of English. "simple" at the beginning of the above link
    serves the same purpose. "Simple English" is its own language, closely
    related to standard English.

    Where is Simple English spoken? Is there some geographic area where
    native speakers concentrate?


    It is meant to be simpler text, written in simpler language. The target audience will include younger people, people with dyslexia or other
    reading difficulties, learners of English, people with lower levels of education, people with limited intelligence or learning impediments, or
    simply people whose eyes glaze over when faced with long texts on the
    main Wikipedia pages.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to bart on Thu Mar 7 00:00:08 2024
    XPost: comp.lang.c++

    On Wed, 6 Mar 2024 12:28:59 +0000
    bart <bc@freeuk.com> wrote:

    On 06/03/2024 12:02, Michael S wrote:
    On Tue, 5 Mar 2024 22:58:10 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 5 Mar 2024 11:11:03 +0200, Michael S wrote:

    On Tue, 5 Mar 2024 01:54:46 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    Discord did some benchmarking of its back-end servers, which had
    been using Go, and decided that switching to Rust offered better
    performance.

    - for big and complex real-world back-end processing, writing
    working solution in go will take 5 time less man hours than
    writing it in Rust

    Nevertheless, they found the switch to Rust worthwhile.

    I read a little more about it. https://discord.com/blog/why-discord-is-switching-from-go-to-rust

    Summary: performance of one of Discord's most heavy-duty servers
    suffered from weakness in implementation of Go garbage collector. On average the performance was satisfactory, but every two minutes
    there was a spike in latency. The latency during the spike was not
    that big (300 msec), but they still felt they wanted better.
    They tried to tune GC, but the problem appeared to be
    fundamental. So they just rewrote this particular server in Rust. Naturally, Rust does not collect garbage, so this particular
    problem disappeared.

    The key phrase of the story is "This service was a great candidate
    to port to Rust since it was small and self-contained".
    I'd add to this that even more important for eventual success of
    migration was the fact that at time of rewrite server was already
    running for several years, so requirements were stable and
    well-understood.
    Another factor is that their service does not create/free that many objects. The delay was caused by mere fact of GC scanning rather
    than by frequent compacting of memory pools. So, from the beginning
    it was obvious that potential fragmentation of the heap, which is
    the main weakness of "plain" C/C++/Rust based solutions for Web
    back-ends, does not apply in their case.

    From the same link:

    "Rust uses a relatively unique memory management approach that
    incorporates the idea of memory “ownership”. Basically, Rust keeps
    track of who can read and write to memory. It knows when the program
    is using memory and immediately frees the memory once it is no longer
    needed. It enforces memory rules at compile time, making it virtually impossible to have runtime memory bugs.⁴ You do not need to manually
    keep track of memory. The compiler takes care of it."

    This suggests the language automatically takes care of this.

    Takes care of what?
    AFAIK, heap fragmentation is as bad a problem in Rust as it is in
    C/Pascal/Ada etc... In this aspect Rust is clearly inferior to GC-based languages like Java, C# or Go.

    But you
    have to write your programs in a certain way to make it possible. The programmer has to help the language keep track of what owns what.

    So you will probably be able to do the same thing in another
    language. But Rust will do more compile-time enforcement by
    restricting how you share objects in memory.



    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Kaz Kylheku on Wed Mar 6 19:27:24 2024
    XPost: comp.lang.c++

    On 3/6/24 14:50, Kaz Kylheku wrote:
    On 2024-03-06, James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
    ...
    Notice that in your following link, "en" appears at the beginning to
    indicate the use of English. "simple" at the beginning of the above link
    serves the same purpose. "Simple English" is it's own language, closely
    related to standard English.

    Where is Simple English spoken? Is there some geographic area where
    native speakers concentrate?

    It's a constructed language, which probably has no native speakers. See <https://en.wikipedia.org/wiki/Constructed_language>. Wikipedia has
    articles in several constructed languages. The two biggest such
    languages are Esperanto, with 350,598 articles, and Simple English with 248,540.

  • From Lawrence D'Oliveiro@21:1/5 to bart on Thu Mar 7 01:45:05 2024
    XPost: comp.lang.c++

    On Wed, 6 Mar 2024 12:28:59 +0000, bart wrote:

    This suggests the language automatically takes care of this. But you
    have to write your programs in a certain way to make it possible.

    You are forced to by default, because if you don’t follow the rules,
    that’s a compile-time error.

  • From Lawrence D'Oliveiro@21:1/5 to aph on Thu Mar 7 01:46:22 2024
    XPost: comp.lang.c++

    On Wed, 06 Mar 2024 14:30:58 +0000, aph wrote:

    Continuously-compacting concurrent collectors like those available for
    Java aim for less than 10ms, and often hit 1ms.

    What ... a 1ms potential delay every time you want to allocate a new
    object??

  • From Lawrence D'Oliveiro@21:1/5 to Michael S on Thu Mar 7 01:44:25 2024
    XPost: comp.lang.c++

    On Wed, 6 Mar 2024 14:02:14 +0200, Michael S wrote:

    Another factor is that their service does not create/free that many
    objects. The delay was caused by mere fact of GC scanning rather than
    by frequent compacting of memory pools.

    In other words, a GC language could not even cope reasonably with a light memory-management load.

  • From Lawrence D'Oliveiro@21:1/5 to James Kuyper on Thu Mar 7 03:06:41 2024
    XPost: comp.lang.c++

    On Wed, 6 Mar 2024 19:27:24 -0500, James Kuyper wrote:

    It's a constructed language, which probably has no native speakers.

    Not to be confused with Basic English, which was created, and copyrighted
    by, C K Ogden.

  • From Kaz Kylheku@21:1/5 to Chris M. Thomasson on Thu Mar 7 02:37:11 2024
    XPost: comp.lang.c++

    On 2024-03-07, Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 3/6/2024 5:46 PM, Lawrence D'Oliveiro wrote:
    On Wed, 06 Mar 2024 14:30:58 +0000, aph wrote:

    Continuously-compacting concurrent collectors like those available for
    Java aim for less than 10ms, and often hit 1ms.

    What ... a 1ms potential delay every time you want to allocate a new
    object??

    GC can be a no go for certain schemes. GC can be fine and it has its place.

    It is the situations where GC cannot be used that are niches that have
    their place. Everywhere else, you can use GC.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

  • From Blue-Maned_Hawk@21:1/5 to Lawrence D'Oliveiro on Thu Mar 7 06:46:46 2024
    XPost: comp.lang.c++

    Lawrence D'Oliveiro wrote:

    On Sun, 3 Mar 2024 22:11:14 -0000 (UTC), Blue-Maned_Hawk wrote:

    Lawrence D'Oliveiro wrote:

    On Sun, 3 Mar 2024 08:54:36 -0000 (UTC), Blue-Maned_Hawk wrote:

    I do not want to live in a web-centric world.

    You already do.

    That does not change the veracity of my statement.

    That doesn’t change the veracity of mine.



    Then our collective fingertips have done nothing in their plasticsmacking.

    --
    Blue-Maned_Hawk│shortens to Hawk│/ blu.mɛin.dʰak/
    │he/him/his/himself/Mr. blue-maned_hawk.srht.site
    FORE!

  • From David Brown@21:1/5 to Michael S on Thu Mar 7 11:35:08 2024
    XPost: comp.lang.c++

    On 06/03/2024 23:00, Michael S wrote:
    On Wed, 6 Mar 2024 12:28:59 +0000
    bart <bc@freeuk.com> wrote:


    "Rust uses a relatively unique memory management approach that
    incorporates the idea of memory “ownership”. Basically, Rust keeps
    track of who can read and write to memory. It knows when the program
    is using memory and immediately frees the memory once it is no longer
    needed. It enforces memory rules at compile time, making it virtually
    impossible to have runtime memory bugs.⁴ You do not need to manually
    keep track of memory. The compiler takes care of it."

    This suggests the language automatically takes care of this.

    Takes care of what?
    AFAIK, heap fragmentation is as bad problem in Rust as it is in
    C/Pascal/Ada etc... In this aspect Rust is clearly inferior to GC-based languages like Java, C# or Go.

    Garbage collection does not stop heap fragmentation. GC does, I
    suppose, mean that you need much more memory and bigger heaps in
    proportion to the amount of memory you actually need in the program at
    any given time, and having larger heaps reduces fragmentation (or at
    least reduces the consequences of it).

  • From Michael S@21:1/5 to David Brown on Thu Mar 7 13:44:01 2024
    XPost: comp.lang.c++

    On Thu, 7 Mar 2024 11:35:08 +0100
    David Brown <david.brown@hesbynett.no> wrote:

    On 06/03/2024 23:00, Michael S wrote:
    On Wed, 6 Mar 2024 12:28:59 +0000
    bart <bc@freeuk.com> wrote:


    "Rust uses a relatively unique memory management approach that
    incorporates the idea of memory “ownership”. Basically, Rust keeps
    track of who can read and write to memory. It knows when the
    program is using memory and immediately frees the memory once it
    is no longer needed. It enforces memory rules at compile time,
    making it virtually impossible to have runtime memory bugs.⁴ You
    do not need to manually keep track of memory. The compiler takes
    care of it."

    This suggests the language automatically takes care of this.

    Takes care of what?
    AFAIK, heap fragmentation is as bad problem in Rust as it is in C/Pascal/Ada etc... In this aspect Rust is clearly inferior to
    GC-based languages like Java, C# or Go.

    Garbage collection does not stop heap fragmentation. GC does, I
    suppose, mean that you need much more memory and bigger heaps in
    proportion to the amount of memory you actually need in the program
    at any given time, and having larger heaps reduces fragmentation (or
    at least reduces the consequences of it).


    GC does not stop fragmentation, but it allows heap compaction to be a
    built-in part of the environment. So, it turns heap fragmentation from a
    denial-of-service type of problem into a mere slowdown, hopefully an
    insignificant one.
    I don't say that heap compaction is impossible in other environments,
    but it is much harder, esp. in environments where pointers are visible
    to the programmer. The famous David Wheeler quote applies here at full
    force.
    Also, when non-GC environments choose to implement heap compaction they
    suffer the same or a bigger impact to real-time responsiveness as GC.
    So, although I don't know it for sure, my impression is that generic
    heap compaction is extremely rarely implemented in performance-aware
    non-GC environments.
    Performance-neglecting non-GC environments, first and foremost CPython,
    can, of course, have heap compaction, although my googling didn't give
    me a definite answer whether it's done or not.

  • From David Brown@21:1/5 to Michael S on Thu Mar 7 16:36:43 2024
    XPost: comp.lang.c++

    On 07/03/2024 12:44, Michael S wrote:
    On Thu, 7 Mar 2024 11:35:08 +0100
    David Brown <david.brown@hesbynett.no> wrote:

    On 06/03/2024 23:00, Michael S wrote:
    On Wed, 6 Mar 2024 12:28:59 +0000
    bart <bc@freeuk.com> wrote:


    "Rust uses a relatively unique memory management approach that
    incorporates the idea of memory “ownership”. Basically, Rust keeps track of who can read and write to memory. It knows when the
    program is using memory and immediately frees the memory once it
    is no longer needed. It enforces memory rules at compile time,
    making it virtually impossible to have runtime memory bugs.⁴ You
    do not need to manually keep track of memory. The compiler takes
    care of it."

    This suggests the language automatically takes care of this.

    Takes care of what?
    AFAIK, heap fragmentation is as bad problem in Rust as it is in
    C/Pascal/Ada etc... In this aspect Rust is clearly inferior to
    GC-based languages like Java, C# or Go.

    Garbage collection does not stop heap fragmentation. GC does, I
    suppose, mean that you need much more memory and bigger heaps in
    proportion to the amount of memory you actually need in the program
    at any given time, and having larger heaps reduces fragmentation (or
    at least reduces the consequences of it).


    GC does not stop fragmentation, but it allow heap compaction to be
    built-in part of environment.

    No, GC alone does not do that. But heap compaction is generally done as
    part of a GC cycle.

    Heap compaction requires indirect pointers. That is to say, if you have
    a struct "node" on your heap, your code does not use a "node *" pointer
    that points to it. It has a "node_proxy *" pointer, and the
    "node_proxy" struct points to the actual node. Heap compaction moves
    the real node in memory, and updates the proxy with the new real
    address, while the main program uses the same "node_proxy" address.
    (These proxies, or indirect pointers, do not move during heap
    compaction.) And the main program needs to be careful to access the
    data via the proxy, and re-read the proxy after every heap compaction cycle.

    This is not going to work well with a low-level and efficient language -
    the extra accesses can be a significant burden for a language like C and
    C++. But it can be fine for VM-based high-level languages, where the
    overhead is lost in the noise, and where the VM knows when the heap
    compaction has run and it needs to re-read the proxies.

    So, it turns heap fragmentation
    from denial of service type of problem to mere slowdown, hopefully insignificant slowdown.

    For high-level VM based languages, that could be correct. But low-level compiled and optimised languages are dependent on addresses remaining
    valid, so heap compaction is not an option.

    (An OS on a "big" system with an MMU can move memory pages around and
    change the virtual to physical memory mapping to get more efficient use
    of hierarchical virtual memory or to free up contiguous large page
    areas. That is transparent to the user application code.)

    I don't say that heap compaction is impossible in other environments,
    but it is much harder, esp. in environments where pointers are visible
    to programmer. The famous David Wheeler's quote applies here at full
    force.
    Also when non-GC environments chooses to implement heap compaction they suffer the same or bigger impact to real-time responsiveness as GC.

    Agreed.

    So, although I don't know it for sure, my impression is that generic
    heap compaction extremely rarely implemented in performance-aware
    non-GC environments.

    I think that is likely.

    Performance-neglecting non-GC environments, first and foremost CPython,
    can, of course, have heap compaction, although my googling didn't give
    me a definite answer whether it's done or not.


    CPython does use garbage collection, as far as I know.

  • From Kaz Kylheku@21:1/5 to David Brown on Thu Mar 7 16:35:48 2024
    XPost: comp.lang.c++

    On 2024-03-07, David Brown <david.brown@hesbynett.no> wrote:
    On 06/03/2024 23:00, Michael S wrote:
    On Wed, 6 Mar 2024 12:28:59 +0000
    bart <bc@freeuk.com> wrote:


    "Rust uses a relatively unique memory management approach that
    incorporates the idea of memory “ownership”. Basically, Rust keeps
    track of who can read and write to memory. It knows when the program
    is using memory and immediately frees the memory once it is no longer
    needed. It enforces memory rules at compile time, making it virtually
    impossible to have runtime memory bugs.⁴ You do not need to manually
    keep track of memory. The compiler takes care of it."

    This suggests the language automatically takes care of this.

    Takes care of what?
    AFAIK, heap fragmentation is as bad problem in Rust as it is in
    C/Pascal/Ada etc... In this aspect Rust is clearly inferior to GC-based
    languages like Java, C# or Go.

    Garbage collection does not stop heap fragmentation. GC does, I
    suppose, mean that you need much more memory and bigger heaps in
    proportion to the amount of memory you actually need in the program at
    any given time, and having larger heaps reduces fragmentation (or at
    least reduces the consequences of it).

    Copying garbage collectors literally stop fragmentation. Reachable
    objects are identified and moved to a memory partition where they
    are now adjacent. The vacated memory partition is then efficiently used
    to bump-allocate new objects.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

  • From Kaz Kylheku@21:1/5 to David Brown on Thu Mar 7 17:18:14 2024
    XPost: comp.lang.c++

    On 2024-03-07, David Brown <david.brown@hesbynett.no> wrote:
    On 07/03/2024 12:44, Michael S wrote:
    GC does not stop fragmentation, but it allow heap compaction to be
    built-in part of environment.

    No, GC alone does not do that. But heap compaction is generally done as
    part of a GC cycle.

    Heap compaction requires indirect pointers.

    I believe it doesn't, or doesn't have to. The garbage collector fixes
    all the pointers contained in the reachable graph to point to the new
    locations of objects.

    If some foreign code held pointers to GC objects, that would be a
    problem. That can usually be avoided. Or else, the proxy handles
    can be used just for those outside references.

    A simple copying garbage collector moves each object on the first
    traversal and rewrites the parent pointer which it just chased
    to point to the new location. Subsequent visits to the same object
    then recognize that it has already been moved and just adjust the
    pointer that had been traversed to reach that object. The forwarding
    pointer to the new location can be stored in the old object;
    most of its fields are no longer needed for anything.

    The space required for the scheme can be regarded as equivalent
    to fragmentation, but it's controlled.

    The worst case exhibited by fragmentation (where the wasted space is proportional to the size ratio of the largest to smallest object) is
    avoided.

    Now, copying collection is almost certainly inapplicable to C programs;
    it's not something you "slide under" C, like Boehm. We have to think
    outside of the C box. Outside of the C box, interesting things are
    possible, like precisely knowing all the places that point at an
    object.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

  • From James Kuyper@21:1/5 to Lawrence D'Oliveiro on Thu Mar 7 14:28:11 2024
    XPost: comp.lang.c++

    On 3/6/24 22:06, Lawrence D'Oliveiro wrote:
    On Wed, 6 Mar 2024 19:27:24 -0500, James Kuyper wrote:

    It's a constructed language, which probably has no native speakers.

    Not to be confused with Basic English, which was created, and copyrighted
    by, C K Ogden.

    Simple English is the term used by Wikipedia for one of its
    language-specific subsets. One of its requirements is that the articles
    be written in Basic English as much as possible. See <https://simple.wikipedia.org/wiki/Wikipedia:How_to_write_Simple_English_pages#Basic_English_and_VOA_Special_English>
    for details.

  • From Lawrence D'Oliveiro@21:1/5 to David Brown on Thu Mar 7 23:43:09 2024
    XPost: comp.lang.c++

    On Wed, 6 Mar 2024 14:34:50 +0100, David Brown wrote:

    It used to be a running joke that if you managed to get your Ada code to compile, it was ready to ship.

    That joke actually originated with Pascal. Though I suppose Ada took it to
    the next level ...

  • From Lawrence D'Oliveiro@21:1/5 to Chris M. Thomasson on Thu Mar 7 23:42:10 2024
    XPost: comp.lang.c++

    On Tue, 5 Mar 2024 22:01:01 -0800, Chris M. Thomasson wrote:

    On 3/5/2024 4:25 PM, Lawrence D'Oliveiro wrote:

    So, what is the right language to use?

    Learn to use more than one.

  • From Lawrence D'Oliveiro@21:1/5 to James Kuyper on Thu Mar 7 23:44:20 2024
    XPost: comp.lang.c++

    On Thu, 7 Mar 2024 14:28:11 -0500, James Kuyper wrote:

    One of it's requirements is that the articles be written in Basic
    English as much as possible.

    Interesting, because it was Ogden’s protectiveness of his copyright that killed off any initial chance of Basic English taking off, back in the
    day.

    I guess that’s expired now.

  • From David Brown@21:1/5 to Lawrence D'Oliveiro on Fri Mar 8 09:01:21 2024
    XPost: comp.lang.c++

    On 08/03/2024 00:43, Lawrence D'Oliveiro wrote:
    On Wed, 6 Mar 2024 14:34:50 +0100, David Brown wrote:

    It used to be a running joke that if you managed to get your Ada code to
    compile, it was ready to ship.

    That joke actually originated with Pascal.

    I didn't know that.

    Though I suppose Ada took it to
    the next level ...

    It seems much more appropriate for Ada (though Pascal also had stricter checking and stronger types than most other popular languages had when
    Pascal was developed).

  • From David Brown@21:1/5 to Kaz Kylheku on Fri Mar 8 08:25:13 2024
    XPost: comp.lang.c++

    On 07/03/2024 17:35, Kaz Kylheku wrote:
    On 2024-03-07, David Brown <david.brown@hesbynett.no> wrote:
    On 06/03/2024 23:00, Michael S wrote:
    On Wed, 6 Mar 2024 12:28:59 +0000
    bart <bc@freeuk.com> wrote:


    "Rust uses a relatively unique memory management approach that
    incorporates the idea of memory “ownership”. Basically, Rust keeps track of who can read and write to memory. It knows when the program
    is using memory and immediately frees the memory once it is no longer
    needed. It enforces memory rules at compile time, making it virtually
    impossible to have runtime memory bugs.⁴ You do not need to manually keep track of memory. The compiler takes care of it."

    This suggests the language automatically takes care of this.

    Takes care of what?
    AFAIK, heap fragmentation is as bad problem in Rust as it is in
    C/Pascal/Ada etc... In this aspect Rust is clearly inferior to GC-based
    languages like Java, C# or Go.

    Garbage collection does not stop heap fragmentation. GC does, I
    suppose, mean that you need much more memory and bigger heaps in
    proportion to the amount of memory you actually need in the program at
    any given time, and having larger heaps reduces fragmentation (or at
    least reduces the consequences of it).

    Copying garbage collectors literally stop fragmentation.

    Yes, but garbage collectors that could be useable for C, C++, or other efficient compiled languages are not "copying" garbage collectors.

    Reachable
    objects are identified and moved to a memory partition where they
    are now adjacent. The vacated memory partition is then efficiently used
    to bump-allocate new objects.


    I think if you have a system with enough memory that copying garbage
    collection (or other kinds of heap compaction during GC) is a reasonable option, then it's unlikely that heap fragmentation is a big problem in
    the first place. And you won't be running on a small embedded system.

  • From Michael S@21:1/5 to David Brown on Fri Mar 8 12:57:46 2024
    XPost: comp.lang.c++

    On Fri, 8 Mar 2024 08:25:13 +0100
    David Brown <david.brown@hesbynett.no> wrote:

    On 07/03/2024 17:35, Kaz Kylheku wrote:
    On 2024-03-07, David Brown <david.brown@hesbynett.no> wrote:
    On 06/03/2024 23:00, Michael S wrote:
    On Wed, 6 Mar 2024 12:28:59 +0000
    bart <bc@freeuk.com> wrote:


    "Rust uses a relatively unique memory management approach that
    incorporates the idea of memory “ownership”. Basically, Rust
    keeps track of who can read and write to memory. It knows when
    the program is using memory and immediately frees the memory
    once it is no longer needed. It enforces memory rules at compile
    time, making it virtually impossible to have runtime memory
    bugs.⁴ You do not need to manually keep track of memory. The
    compiler takes care of it."

    This suggests the language automatically takes care of this.

    Takes care of what?
    AFAIK, heap fragmentation is as bad problem in Rust as it is in
    C/Pascal/Ada etc... In this aspect Rust is clearly inferior to
    GC-based languages like Java, C# or Go.

    Garbage collection does not stop heap fragmentation. GC does, I
    suppose, mean that you need much more memory and bigger heaps in
    proportion to the amount of memory you actually need in the
    program at any given time, and having larger heaps reduces
    fragmentation (or at least reduces the consequences of it).

    Copying garbage collectors literally stop fragmentation.

    Yes, but garbage collectors that could be useable for C, C++, or
    other efficient compiled languages are not "copying" garbage
    collectors.


    Go, C# and Java are all efficient compiled languages. For Go it was
    actually a major goal.

    Reachable
    objects are identified and moved to a memory partition where they
    are now adjacent. The vacated memory partition is then efficiently
    used to bump-allocate new objects.


    I think if you have a system with enough memory that copying garbage collection (or other kinds of heap compaction during GC) is a
    reasonable option, then it's unlikely that heap fragmentation is a
    big problem in the first place. And you won't be running on a small
    embedded system.


    You sound like you're arguing for the sake of arguing.
    Of course, heap fragmentation is a relatively rare problem. But when you
    process 100s of 1000s of requests of significantly varying sizes for
    weeks without interruption, then rare things happen with high
    probability :(
    In the case of this particular Discord service, they appear to have the
    benefit of the size of requests not varying significantly, so the
    absence of heap compaction is not a major defect.
    BTW, I'd like to know if, 3 years later, they still have their Rust
    solution running.

  • From Paavo Helde@21:1/5 to All on Fri Mar 8 14:41:16 2024
    XPost: comp.lang.c++

    07.03.2024 17:36 David Brown kirjutas:

    CPython does use garbage collection, as far as I know.


    AFAIK CPython uses reference counting, i.e. basically the same as C++ std::shared_ptr (except that it does not need to be thread-safe).

    With reference counting one only knows how many pointers there are to a
    given heap block, but not where they are, so heap compaction would not
    be straightforward.

    Python also has zillions of extensions written in C or C++ (all of AI
    related work for example), so having e.g. heap compaction of Python
    objects only might not be worth it.

  • From David Brown@21:1/5 to Paavo Helde on Fri Mar 8 15:07:47 2024
    XPost: comp.lang.c++

    On 08/03/2024 13:41, Paavo Helde wrote:
    07.03.2024 17:36 David Brown kirjutas:

    CPython does use garbage collection, as far as I know.


    AFAIK CPython uses reference counting, i.e. basically the same as C++ std::shared_ptr (except that it does not need to be thread-safe).

    Yes, that is my understanding too. (I could be wrong here, so don't
    rely on anything I write!) But the way it is used is still a type of
    garbage collection. When an object no longer has any "live" references,
    it is put in a list, and on the next GC it will get cleared up (and call
    the asynchronous destructor, __del__, for the object).

    A similar method is sometimes used in C++ for objects that are
    time-consuming to destruct. You have a "tidy up later" container that
    holds shared pointers. Each time you make a new object that will have asynchronous destruction, you use a shared_ptr for the access and put a
    copy of that pointer in the tidy-up container. A low priority
    background thread checks this list on occasion - any pointers with only
    one reference can be cleared up in the context of this separate thread.


    With reference counting one only knows how many pointers there are to a
    given heap block, but not where they are, so heap compaction would not
    be straightforward.

    Python also has zillions of extensions written in C or C++ (all of AI
    related work for example), so having e.g. heap compaction of Python
    objects only might not be worth of it.


  • From Michael S@21:1/5 to David Brown on Fri Mar 8 16:57:09 2024
    XPost: comp.lang.c++

    On Fri, 8 Mar 2024 15:32:22 +0100
    David Brown <david.brown@hesbynett.no> wrote:

    On 08/03/2024 11:57, Michael S wrote:
    On Fri, 8 Mar 2024 08:25:13 +0100
    David Brown <david.brown@hesbynett.no> wrote:

    On 07/03/2024 17:35, Kaz Kylheku wrote:
    On 2024-03-07, David Brown <david.brown@hesbynett.no> wrote:
    On 06/03/2024 23:00, Michael S wrote:
    On Wed, 6 Mar 2024 12:28:59 +0000
    bart <bc@freeuk.com> wrote:


    "Rust uses a relatively unique memory management approach that
    incorporates the idea of memory “ownership”. Basically, Rust keeps track of who can read and write to memory. It knows when
    the program is using memory and immediately frees the memory
    once it is no longer needed. It enforces memory rules at
    compile time, making it virtually impossible to have runtime
    memory bugs.⁴ You do not need to manually keep track of
    memory. The compiler takes care of it."

    This suggests the language automatically takes care of this.

    Takes care of what?
    AFAIK, heap fragmentation is as bad problem in Rust as it is in
    C/Pascal/Ada etc... In this aspect Rust is clearly inferior to
    GC-based languages like Java, C# or Go.

    Garbage collection does not stop heap fragmentation. GC does, I
    suppose, mean that you need much more memory and bigger heaps in
    proportion to the amount of memory you actually need in the
    program at any given time, and having larger heaps reduces
    fragmentation (or at least reduces the consequences of it).

    Copying garbage collectors literally stop fragmentation.

    Yes, but garbage collectors that could be useable for C, C++, or
    other efficient compiled languages are not "copying" garbage
    collectors.


    Go, C# and Java are all efficient compiled languages. For Go it was actually a major goal.

    C# and Java are, AFAIUI, managed languages - they are byte-compiled
    and run on a VM. (JIT compilation to machine code can be used for acceleration, but that does not change the principles.) I don't know
    about Go.


    C# was JITted originally, and was even interpreted on one very small
    implementation that doesn't seem to be supported any longer. Today it is
    mostly AoTed, which in simpler language means "compiled". There are
    options in the dev tools for whether to compile to native code or to
    platform-independent code. I would think that most people compile to
    native.

    Java-on-Android, which I would guess is the majority of Java written in
    the world, is like 95% AoTed + 5% JITted. It used to be 100% AoTed in a
    few versions of Android, but by now JIT has been reintroduced as an
    option, not for portability, but for the profile-guided optimization
    opportunities it allows. If I am not mistaken, direct interpretation of
    Dalvik bytecode was never supported on Android.

    Java-outside-Android? I don't know what the current state is. I would
    think that Oracle's JVMs intended for desktop/laptop/server are also
    either JITted or AoTed, not interpreted.

    Go is compiled to native code by its own toolchain; a gcc-based
    option (gccgo) exists as well.

  • From David Brown@21:1/5 to Michael S on Fri Mar 8 15:32:22 2024
    XPost: comp.lang.c++

    On 08/03/2024 11:57, Michael S wrote:
    On Fri, 8 Mar 2024 08:25:13 +0100
    David Brown <david.brown@hesbynett.no> wrote:

    On 07/03/2024 17:35, Kaz Kylheku wrote:
    On 2024-03-07, David Brown <david.brown@hesbynett.no> wrote:
    On 06/03/2024 23:00, Michael S wrote:
    On Wed, 6 Mar 2024 12:28:59 +0000
    bart <bc@freeuk.com> wrote:


    "Rust uses a relatively unique memory management approach that
    incorporates the idea of memory “ownership”. Basically, Rust
    keeps track of who can read and write to memory. It knows when
    the program is using memory and immediately frees the memory
    once it is no longer needed. It enforces memory rules at compile
    time, making it virtually impossible to have runtime memory
    bugs.⁴ You do not need to manually keep track of memory. The
    compiler takes care of it."

    This suggests the language automatically takes care of this.

    Takes care of what?
    AFAIK, heap fragmentation is as bad problem in Rust as it is in
    C/Pascal/Ada etc... In this aspect Rust is clearly inferior to
    GC-based languages like Java, C# or Go.

    Garbage collection does not stop heap fragmentation. GC does, I
    suppose, mean that you need much more memory and bigger heaps in
    proportion to the amount of memory you actually need in the
    program at any given time, and having larger heaps reduces
    fragmentation (or at least reduces the consequences of it).

    Copying garbage collectors literally stop fragmentation.

    Yes, but garbage collectors that could be useable for C, C++, or
    other efficient compiled languages are not "copying" garbage
    collectors.


    Go, C# and Java are all efficient compiled languages. For Go it was
    actually a major goal.

    C# and Java are, AFAIUI, managed languages - they are byte-compiled and
    run on a VM. (JIT compilation to machine code can be used for
    acceleration, but that does not change the principles.) I don't know
    about Go.


    Reachable
    objects are identified and moved to a memory partition where they
    are now adjacent. The vacated memory partition is then efficiently
    used to bump-allocate new objects.


    I think if you have a system with enough memory that copying garbage
    collection (or other kinds of heap compaction during GC) is a
    reasonable option, then it's unlikely that heap fragmentation is a
    big problem in the first place. And you won't be running on a small
    embedded system.


    You sound like arguing for sake of arguing.

    I am just trying to be clear about things. Different types of system,
    and different types of task, have different challenges and different
    solutions. (This seems obvious, but people often think they have "the" solution to a particular issue.) In particular, in small embedded
    systems with limited ram and no MMU, if you use dynamic memory of any
    kind, then heap fragmentation is a serious risk. And a heap-compacting
    garbage collection will not mitigate that risk.

    There are a lot of GC algorithms, each with their pros and cons, and the
    kind of languages and tasks for which they are suitable. If you have a
    GC algorithm that works by copying all live data (then scrapping
    everything left over), then heap compaction is a natural byproduct.

    But I think it is rare that heap compaction is an appropriate goal in
    itself - it is a costly operation. It invalidates all pointers, which
    means a lot of overhead and extra care in languages where pointers are
    likely to be cached in registers or local variables on the stack. And
    it will be tough on the cache as everything has to be copied and moved.
    That pretty much rules it out for efficient compiled languages, at least
    for the majority of their objects, and leaves it in the domain of
    languages that can accept the performance hit.


    Of course, heap fragmentation is a relatively rare problem. But when you process 100s of 1000s of requests of significantly varying sizes for
    weeks without interruption then rare things happen with high
    probability :(

    There are all sorts of techniques usable to optimise such systems.
    Allocation pools for different sized blocks would be a typical strategy.
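    For the record, here is a minimal C++ sketch of one such pool (a
    hypothetical illustration, not any particular allocator's code): every
    block in the pool has the same size, so a freed block can always satisfy
    the next request and the pool itself never fragments. A real allocator
    would keep one pool per size class (e.g. 16, 32, 64 bytes) and route
    requests to the nearest class.

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <vector>

    // One fixed-size allocation pool: blocks are recycled through a free
    // list, so deallocate/allocate pairs are O(1) and reuse is immediate.
    class FixedPool {
        std::size_t block_size_;
        std::vector<void *> free_list_;              // recycled blocks
        std::vector<std::vector<char>> slabs_;       // owns the storage
    public:
        explicit FixedPool(std::size_t block_size) : block_size_(block_size) {}

        void *allocate() {
            if (free_list_.empty()) {
                slabs_.emplace_back(block_size_);    // grab a fresh block
                return slabs_.back().data();
            }
            void *p = free_list_.back();             // reuse a freed block
            free_list_.pop_back();
            return p;
        }

        void deallocate(void *p) { free_list_.push_back(p); }
    };
    ```

    The trade-off is internal waste (a 20-byte request in a 32-byte class
    wastes 12 bytes) in exchange for never fragmenting the pool externally.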

    In case of this particular Discord service, they appear to
    have a benefit of size of requests not varying significantly, so
    absence of heap compaction is not a major defect.
    BTW, I'd like to know if 3 years later they still have their Rust
    solution running.


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to David Brown on Fri Mar 8 15:15:36 2024
    XPost: comp.lang.c++

    On 08/03/2024 14:07, David Brown wrote:
    On 08/03/2024 13:41, Paavo Helde wrote:
    07.03.2024 17:36 David Brown kirjutas:

    CPython does use garbage collection, as far as I know.


    AFAIK CPython uses reference counting, i.e. basically the same as C++
    std::shared_ptr (except that it does not need to be thread-safe).

    Yes, that is my understanding too.  (I could be wrong here, so don't
    rely on anything I write!)  But the way it is used is still a type of garbage collection.  When an object no longer has any "live" references,
    it is put in a list, and on the next GC it will get cleared up (and call
    the asynchronous destructor, __del__, for the object).

    Is that how CPython works? I can't quite see the point of saving up all
    the deallocations so that they are all done as a batch. It's extra
    overhead, and will cause those latency spikes that were the problem here.

    In my own reference count scheme, when the count reaches zero, the
    memory is freed immediately.
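    That scheme can be sketched in a few lines of C++ (a hypothetical
    illustration of the idea, not bart's or CPython's actual code): the
    object is destroyed synchronously the moment its count reaches zero,
    with no deferred clean-up list.

    ```cpp
    #include <cassert>

    // Minimal intrusive reference counting with synchronous deallocation:
    // release() frees the object immediately when the count hits zero.
    struct RefCounted {
        static inline int live = 0;   // instrumentation for the example
        int refs = 1;                 // starts owned by its creator
        RefCounted()  { ++live; }
        ~RefCounted() { --live; }
    };

    inline void retain(RefCounted *o) { ++o->refs; }

    inline bool release(RefCounted *o) {   // true if the object was freed
        if (--o->refs == 0) {
            delete o;                      // synchronous deallocation
            return true;
        }
        return false;
    }
    ```

    This is essentially what std::shared_ptr does as well, minus the
    thread-safety and control-block machinery.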

    I also tend to have most allocations being of either 16 or 32 bytes, so
    reuse is easy. It is only individual data items (a long string or long
    array) that might have an arbitrary length that needs to be in
    contiguous memory.

    Most strings however have an average length of well below 16 characters
    in my programs, so use a 16-byte allocation.

    I don't know the allocation pattern in that Discord app, but Michael S suggested there might not be lots of arbitrary-size objects.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to bart on Fri Mar 8 17:55:48 2024
    XPost: comp.lang.c++

    On 08/03/2024 16:15, bart wrote:
    On 08/03/2024 14:07, David Brown wrote:
    On 08/03/2024 13:41, Paavo Helde wrote:
    07.03.2024 17:36 David Brown kirjutas:

    CPython does use garbage collection, as far as I know.


    AFAIK CPython uses reference counting, i.e. basically the same as C++
    std::shared_ptr (except that it does not need to be thread-safe).

    Yes, that is my understanding too.  (I could be wrong here, so don't
    rely on anything I write!)  But the way it is used is still a type of
    garbage collection.  When an object no longer has any "live"
    references, it is put in a list, and on the next GC it will get
    cleared up (and call the asynchronous destructor, __del__, for the
    object).

    Is that how CPython works? I can't quite see the point of saving up all
    the deallocations so that they are all done as a batch. It's extra
    overhead, and will cause those latency spikes that were the problem here.

    I believe the GC runs are done very regularly (if there is something in
    the clean-up list), so there is not much build-up and not much extra
    latency.


    In my own reference count scheme, when the count reaches zero, the
    memory is freed immediately.

    That's synchronous deallocation. It's a perfectly good strategy, of
    course. There are pros and cons of both methods.


    I also tend to have most allocations being of either 16 or 32 bytes, so
    reuse is easy. It is only individual data items (a long string or long
    array) that might have an arbitrary length that needs to be in
    contiguous memory.

    Most strings however have an average length of well below 16 characters
    in my programs, so use a 16-byte allocation.

    I don't know the allocation pattern in that Discord app, but Michael S suggested there might not be lots of arbitrary-size objects.


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Chris M. Thomasson on Sat Mar 9 13:25:26 2024
    XPost: comp.lang.c++

    On 08/03/2024 22:23, Chris M. Thomasson wrote:
    On 3/6/2024 2:18 PM, Chris M. Thomasson wrote:
    On 3/6/2024 2:43 AM, David Brown wrote:
    [...]

    This is a fun one:

    // pseudo code...
    _______________________
    node*
    node_pop()
    {
        // try per-thread lifo

        // try shared distributed lifo

        // try global region

        // if all of those failed, return nullptr
    }


    Just to be clear here - if this is in a safety-critical system, and your allocation system returns nullptr, people die. That is why you don't
    use this kind of thing for important tasks.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Ross Finlayson on Tue Mar 12 00:07:23 2024
    XPost: comp.lang.c++, comp.lang.java.programmer

    On Fri, 8 Mar 2024 21:36:14 -0800, Ross Finlayson wrote:

    What I'd like to know about is who keeps dialing the "harmonization"
    efforts, which really must give grouse to the "harmonisation"
    spellers ...

    Some words came from French and had “-ize”, others did not and had “-ise”.
    Some folks in Britain decided to change the former to the latter.

    “Televise”, “merchandise”, “advertise” -- never any “-ize” form.

    “Synchronize”, “harmonize”, “apologize” -- “-ize” originally.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to David Brown on Tue Mar 12 00:03:31 2024
    XPost: comp.lang.c++

    On Fri, 8 Mar 2024 09:01:21 +0100, David Brown wrote:

    It seems much more appropriate for Ada (though Pascal also had stricter checking and stronger types than most other popular languages had when
    Pascal was developed).

    That’s why Ada was built on Pascal: if you want something intended for high-reliability, safety-critical applications, why not build it on a foundation that was already the most, shall we say, anal-retentive, among well-known languages of the time?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Michael S on Thu Mar 14 15:39:22 2024
    Michael S <already5chosen@yahoo.com> writes:

    On Tue, 5 Mar 2024 22:58:10 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Tue, 5 Mar 2024 11:11:03 +0200, Michael S wrote:

    On Tue, 5 Mar 2024 01:54:46 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    Discord did some benchmarking of its back-end servers, which had
    been using Go, and decided that switching to Rust offered better
    performance.

    - for big and complex real-world back-end processing, writing
    working solution in go will take 5 time less man hours than
    writing it in Rust

    Nevertheless, they found the switch to Rust worthwhile.

    I read a little more about it. https://discord.com/blog/why-discord-is-switching-from-go-to-rust

    Summary: performance of one of Discord's most heavy-duty servers
    suffered from weakness in implementation of Go garbage collector.
    [...]

    I have a few questions about the story; the most important one is
    whether the weakness of this sort is specific to GC of Go, due
    to its relative immaturity or more general and applies equally
    to most mature GCs on the market, i.e. J2EE and .NET.

    After reading the article, it seems clear that the design of the
    GC used in Go is not a good fit to the Discord server workload.
    That is not to say that Go's GC would be a bad fit for other
    workloads, only that it's not a good choice for the Discord
    server. Also it seems clear that other approaches to GC design
    (meant in the sense of already having been thought of and tried)
    would be just fine for the Discord server. Of course whether
    such schemes would be a good fit for the Go environment is
    a separate question (and one about which I have nothing to
    offer since I know very little about Go).

    Another question is whether the problem is specific to GC-style
    of automatic memory management (AMM) or applies, at least to
    some degree, to other forms of AMM, most importantly, to AMMs
    based on Reference Counting used by Swift and also popular in
    C++.

    It's very hard to make a blanket statement that applies to all
    the different approaches to garbage collection, or to other
    automatic memory management schemes (with reference counting as a
    specific example), in a general way. My experience with garbage
    collected environments is that they are perfectly usable both for
    long-term use and for interactive use. Reference counting too:
    typically RC gives a smoother feel, but that comes at a cost,
    because RC does not, by itself, reclaim circular structures.
    That means that either, one, code must be written to break the
    circularity of such structures (so memory management is not fully
    automated); or two, periodically some sort of more general GC
    method must be invoked to reclaim them; or three, eventually
    more and more memory is used to where the application must be
    rebooted (like the memory usage patterns of some popular web
    browsers). The question is what kind of workloads need to be
    supported - some schemes are good for typical interactive, but
    short-term, applications, other schemes are better for the sort
    of long-term server processes like the Discord example. The
    space of already existing techniques for doing GC is pretty
    large - if a particular kind of workload needs to be supported,
    there is a very good chance that an appropriate GC scheme can
    be found without much difficulty.
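    The circular-structure problem described above is easy to demonstrate
    with C++'s own reference-counted smart pointer: two objects holding
    shared_ptr references to each other are never reclaimed by the counts
    alone, and the standard fix (option one above, breaking the circularity
    in code) is to make the back-edge a non-owning weak_ptr.

    ```cpp
    #include <cassert>
    #include <memory>

    struct Node {
        static inline int live = 0;     // counts constructed-but-not-destroyed
        std::shared_ptr<Node> strong;   // owning link: participates in the count
        std::weak_ptr<Node>   weak;     // non-owning link: observes only
        Node()  { ++live; }
        ~Node() { --live; }
    };

    void leaky_cycle() {
        auto a = std::make_shared<Node>();
        auto b = std::make_shared<Node>();
        a->strong = b;
        b->strong = a;   // cycle: each count stays >= 1 forever
    }                    // both Nodes leak here

    void broken_cycle() {
        auto a = std::make_shared<Node>();
        auto b = std::make_shared<Node>();
        a->strong = b;
        b->weak = a;     // back-edge does not own, so no cycle of owners
    }                    // both Nodes are destroyed here
    ```

    A tracing collector would reclaim the first pair without help; pure
    reference counting never will, which is exactly the trade-off discussed
    above.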

    Incidentally, there is no such thing as a fully automatic memory
    management system. Even full garbage collection sometimes needs
    help from the programmer so all memory is eventually reclaimed.
    (If you're feeling brave try doing a web search for "ephemeron".)
    The point of all AMM schemes is not to reduce the amount of
    programmer effort to zero but simply to greatly reduce it (and
    hopefully reduce it to zero in many of the most common cases).
    But for complicated programs it's almost inevitable that some programmer-written code needs to be included to help whatever
    automated mechanisms are used.

    Of course, I don't expect that my questions will be answered
    fully on comp.lang.c, but if some knowledgeable posters try
    to answer I would appreciate it.

    The questions presented were somewhat general, and so the answers
    given are also rather general. In spite of that I hope you found
    what you're looking for.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to David Brown on Mon Apr 29 00:02:02 2024
    XPost: comp.lang.c++

    On Fri, 8 Mar 2024 15:32:22 +0100, David Brown wrote:

    And it will be tough on the cache as everything has to be copied and
    moved.

    I think all kinds of garbage collector end up being tough on the cache.
    Because remember, they are doing things with lots of blocks of memory that haven’t been accessed recently, and therefore are not likely to be in the cache.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Paavo Helde on Mon Apr 29 00:05:17 2024
    XPost: comp.lang.c++

    On Fri, 8 Mar 2024 14:41:16 +0200, Paavo Helde wrote:

    AFAIK CPython uses reference counting ...

    Combination of reference-counting as a first resort, with full garbage collection to deal with those less common cases where you have reference cycles. Trying to get the best of both worlds.

    The trouble with reference-counting is it impacts multithreading
    performance. However, the CPython developers have a scheme to deal with
    this, by making the reference counts a little less deterministic (i.e.
    there may be a slight delay before they become fully correct). I think
    this is a complicated idea, and it may take them some time to get it fully implemented.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Chris M. Thomasson on Mon Apr 29 01:58:43 2024
    XPost: comp.lang.c++

    On 2024-04-29, Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    Are you an AI?

    That entails two separable propositions; there is moderate evidence for
    the one, scant for the other.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Chris M. Thomasson on Mon Apr 29 04:28:37 2024
    XPost: comp.lang.c++

    On 2024-04-29, Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 4/28/2024 6:58 PM, Kaz Kylheku wrote:
    On 2024-04-29, Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    Are you an AI?

    That entails two separable propositions; there is moderate evidence for
    the one, scant for the other.


    Are you a human?

    If so, are you using AI?

    If not, are you an AI?

    Any better?

    Rather, the separable propositions are:

    - is it (A)rtificial; and
    - is it (I)ntelligent

    If you can confirm both, you have AI, but there are other possibilities.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From aph@littlepinkcloud.invalid@21:1/5 to Lawrence D'Oliveiro on Mon Apr 29 08:55:16 2024
    XPost: comp.lang.c++

    In comp.lang.c Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Fri, 8 Mar 2024 15:32:22 +0100, David Brown wrote:

    And it will be tough on the cache as everything has to be copied and
    moved.

    I think all kinds of garbage collector end up being tough on the
    cache. Because remember, they are doing things with lots of blocks
    of memory that haven’t been accessed recently, and therefore are not
    likely to be in the cache.

    Not usually. Most garbage dies young, so the GC will be scanning recently-allocated regions first. There's no strict need to scan old
    regions unless the program starts to run short of memory. You might
    have a periodic thread to do so, to reclaim some extra space.

    Andrew.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From paavo512@21:1/5 to Lawrence D'Oliveiro on Mon Apr 29 12:45:35 2024
    XPost: comp.lang.c++

    On 29.04.2024 03:05, Lawrence D'Oliveiro wrote:
    On Fri, 8 Mar 2024 14:41:16 +0200, Paavo Helde wrote:

    AFAIK CPython uses reference counting ...

    Combination of reference-counting as a first resort, with full garbage collection to deal with those less common cases where you have reference cycles. Trying to get the best of both worlds.

    The trouble with reference-counting is it impacts multithreading
    performance.

    Maybe only in case of heavy contention. If there is little contention
    and the reference counter is implemented as an atomic variable, there is
    no measurable hit on performance. I know this because I was suspicious
    myself and measured this recently.
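    For concreteness, here is a minimal C++ sketch of a thread-safe count of
    the kind being measured (a hypothetical illustration in the style of a
    shared_ptr control block, not any library's actual code): increments can
    use relaxed ordering, while the decrement that may free the object needs
    acquire-release ordering.

    ```cpp
    #include <atomic>

    // Thread-safe reference count. Under low contention the atomic
    // operations are cheap; it is many threads hammering the *same*
    // counter's cache line that makes them expensive.
    struct AtomicRefCount {
        std::atomic<int> refs{1};   // starts owned by its creator

        void retain() { refs.fetch_add(1, std::memory_order_relaxed); }

        bool release() {   // true when the last reference is dropped
            return refs.fetch_sub(1, std::memory_order_acq_rel) == 1;
        }
    };
    ```

    The relaxed increment is safe because merely taking another reference
    needs no ordering with other memory operations; the releasing decrement
    must synchronize so that whoever frees the object sees all writes made
    through the other references.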

    Anyway, multithreading performance is a non-issue for Python so far as
    the Python interpreter runs in a single-threaded regime anyway, under a
    global GIL lock. They are planning to get rid of GIL, but this work is
    still in development AFAIK. I'm sure it will take years to stabilize the
    whole Python zoo without GIL.


    However, the CPython developers have a scheme to deal with
    this, by making the reference counts a little less deterministic (i.e.
    there may be a slight delay before they become fully correct). I think
    this is a complicated idea, and it may take them some time to get it fully implemented.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to paavo@osa.pri.ee on Tue Apr 30 16:46:58 2024
    XPost: comp.lang.c++

    paavo512 <paavo@osa.pri.ee> writes:
    On 29.04.2024 03:05, Lawrence D'Oliveiro wrote:
    On Fri, 8 Mar 2024 14:41:16 +0200, Paavo Helde wrote:

    AFAIK CPython uses reference counting ...

    Combination of reference-counting as a first resort, with full garbage
    collection to deal with those less common cases where you have reference
    cycles. Trying to get the best of both worlds.

    The trouble with reference-counting is it impacts multithreading
    performance.

    Maybe only in case of heavy contention. If there is little contention
    and the reference counter is implemented as an atomic variable, there is
    no measurable hit on performance. I know this because I was suspicious
    myself and measured this recently.

    Anyway, multithreading performance is a non-issue for Python so far as
    the Python interpreter runs in a single-threaded regime anyway, under a global GIL lock. They are planning to get rid of GIL, but this work is
    still in development AFAIK. I'm sure it will take years to stabilize the whole Python zoo without GIL.

    We use the python shim (SWIG) to run a multithreaded C++ application (well
    over a hundred threads in some cases). It's only the calls back
    into python that end up single threaded - the application itself happily
    runs multithreaded (and never calls back into python at all).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)