"White House to Developers: Using C or C++ Invites Cybersecurity Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe programming languages. The tech industry sees their point, but it won't be easy."
No. The feddies want to regulate software development very much. They have been talking about it for at least 20 years now. This is a very bad thing.
Lynn
Nevertheless, C retains the basic philosophy that
programmers know what they are doing; it only requires
that they state their intentions explicitly.
Well, to be fair, the feds' regulations in the '60s made COBOL and FORTRAN very popular, plus POSIX later on.
Nowadays, POSIX (and *nix generally) is undergoing a resurgence because of Linux and Open Source. Developers are discovering that the Linux ecosystem offers a much more productive development environment for a code-sharing, code-reusing, Web-centric world than anything Microsoft can offer.
On 03/03/2024 00:13, Lynn McGuire wrote:
"White House to Developers: Using C or C++ Invites Cybersecurity Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe programming languages. The tech industry sees their point, but it won't be easy."
No. The feddies want to regulate software development very much. They have been talking about it for at least 20 years now. This is a very bad thing.
Lynn
It's the wrong solution to the wrong problem.
It is not languages like C and C++ that are "unsafe". It is the programmers that write the code for them. As long as the people programming in Rust or other modern languages are the more capable and qualified developers - the ones who think about memory safety, correct code, testing, and quality software development - then code written in Rust will be better quality and safer than the average C, C++, Java and C# code.
But if it gets popular enough for schools and colleges to teach Rust programming courses to the masses, and it gets used by developers who are paid per KLoC, given responsibilities well beyond their abilities and experience, led by incompetent managers, untrained in good development practices and pushed to impossible deadlines, then the average quality of programs in Rust will drop to that of average C and C++ code.
Good languages and good tools help, but they are not the root cause of poor quality software in the world.
On 2024-03-03, John McCue <jmccue@neutron.jmcunx.com> wrote:
Nevertheless, C retains the basic philosophy that
programmers know what they are doing; it only requires
that they state their intentions explicitly.
fflush(stdin); // my explicit intention is to discard unread input
A lot of what programmers intend is nonportable or undefined,
without their knowledge.
Pretty much all imperative languages require that programmers
state their intentions explicitly. PL/I, Algol, Modula, ...
You can't, for instance, just declare some facts and write a query
against them.
All languages have implicit behaviors. For instance in C you can
write x + y, without having to express a detailed intention about
what happens with every bit.
It's a tautology that you have to be explicit about declaring your
intent using the documented knobs and levers that are available,
using the semantics of the paradigm they control, while not being able
to declare intent about the inner mechanism that underlies them. That
holds in any language whatsoever.
Lawrence D'Oliveiro wrote:
Nowadays, POSIX (and *nix generally) is undergoing a resurgence because
of Linux and Open Source. Developers are discovering that the Linux
ecosystem offers a much more productive development environment for a
code-sharing, code-reusing, Web-centric world than anything Microsoft
can offer.
I do not want to live in a web-centric world.
On 2024-03-03, David Brown <david.brown@hesbynett.no> wrote:
On 03/03/2024 00:13, Lynn McGuire wrote:
"White House to Developers: Using C or C++ Invites Cybersecurity Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe programming languages. The tech industry sees their point, but it won't be easy."
No. The feddies want to regulate software development very much. They have been talking about it for at least 20 years now. This is a very
bad thing.
Lynn
It's the wrong solution to the wrong problem.
It is not languages like C and C++ that are "unsafe". It is the
programmers that write the code for them. As long as the people
programming in Rust or other modern languages are the more capable and
qualified developers - the ones who think about memory safety, correct
code, testing, and quality software development - then code written in
Rust will be better quality and safer than the average C, C++, Java and
C# code.
Programmers who think about safety, correctness and quality and all that
have way fewer diagnostics and more footguns if they are coding in C
compared to Rust.
I think you can't just wave away the characteristics of Rust as making
no difference in this regard.
But if it gets popular enough for schools and colleges to teach Rust
programming course to the masses, and it gets used by developers who are
paid per KLoC, given responsibilities well beyond their abilities and
experience, led by incompetent managers, untrained in good development
practices and pushed to impossible deadlines, then the average quality
of programs in Rust will drop to that of average C and C++ code.
The rhetoric you hear from Rust people about this is that coders taking
a safety shortcut to make something work have to explicitly ask for that
in Rust. It leaves a visible trace. If something goes wrong because of
an unsafe block, you can trace that to the commit which added it.
The rhetoric all sounds good.
However, like you, I also believe it boils down to people, in a
somewhat different way. To use Rust productively, you have to be one of
the rare idiot savants who are smart enough to use it *and* numb to all
the inconveniences.
The reason the average programmer won't make any safety
boo-boos using Rust is that the average programmer either isn't smart
enough to use it at all, or else doesn't want to put up with the fuss:
they will opt for some safe language which is easy to use.
Rust's problem is that we have safe languages in which you can almost
crank out working code with your eyes closed. (Or if not working,
then at least code in which the only uncaught bugs are your logic bugs,
not some undefined behavior from integer overflow or array out of
bounds.)
This is why Rust people are desperately pitching Rust as an alternative
for C and whatnot, and showcasing it being used in the kernel and
whatnot.
Trying to be both safe and efficient to be able to serve as a "C
replacement" is a clumsy hedge that makes Rust an awkward language.
You know the parable about the fox that tries to chase two rabbits.
The alternative to Rust in application development is pretty much any convenient, "easy" high level language, plus a little bit of C.
You can get a small quantity of C right far more easily than a large
quantity of C. It's almost immaterial.
An important aspect of Rust is the ownership-based memory management.
The problem is, the "garbage collection is bad" era is /long/ behind us.
Scoped ownership is a half-baked solution to the object lifetime
problem, that gets in the way of the programmer and isn't appropriate
for the vast majority of software tasks.
Embedded systems often need custom memory management, not something that
the language imposes. C has malloc, yet even that gets disused in favor
of something else.
On Sun, 3 Mar 2024 08:54:36 -0000 (UTC), Blue-Maned_Hawk wrote:
Lawrence D'Oliveiro wrote:
Nowadays, POSIX (and *nix generally) is undergoing a resurgence
because of Linux and Open Source. Developers are discovering that the
Linux ecosystem offers a much more productive development environment
for a code-sharing, code-reusing, Web-centric world than anything
Microsoft can offer.
I do not want to live in a web-centric world.
You already do.
On 3/3/2024 12:10 PM, Lawrence D'Oliveiro wrote:
On Sun, 3 Mar 2024 12:01:57 +0100, David Brown wrote:
It is not languages like C and C++ that are "unsafe".
Some empirical evidence from Google
<https://security.googleblog.com/2022/12/memory-safe-languages-in-android-13.html>
shows a reduction in memory-safety errors in switching from C/C++ to
Rust.
Sure. Putting corks on the forks reduces the chance of eye injuries.
But consider this. When programming in modern C++, you can be risk-free
from buffer overruns and most kinds of memory leak - use container
classes, string classes, and the like, rather than C-style arrays and malloc/free or new/delete.
Lawrence D'Oliveiro wrote:
On Sun, 3 Mar 2024 08:54:36 -0000 (UTC), Blue-Maned_Hawk wrote:
I do not want to live in a web-centric world.
You already do.
That does not change the veracity of my statement.
Sure. Putting corks on the forks reduces the chance of eye injuries.
Fwiw, a YouTube link to a scene in the movie Dirty Rotten Scoundrels:
Funny to me:
https://youtu.be/eF8QAeQm3ZM?t=332
Putting the cork on the fork is akin to saying nobody should be using C and/or C++ in this "modern" age? :^)
On Sun, 3 Mar 2024 14:06:31 -0800, Chris M. Thomasson wrote:
On 3/3/2024 12:10 PM, Lawrence D'Oliveiro wrote:
On Sun, 3 Mar 2024 12:01:57 +0100, David Brown wrote:
It is not languages like C and C++ that are "unsafe".
Some empirical evidence from Google
<https://security.googleblog.com/2022/12/memory-safe-languages-in-android-13.html>
shows a reduction in memory-safety errors in switching from C/C++ to
Rust.
Sure. Putting corks on the forks reduces the chance of eye injuries.
Except this is Google, and they’re doing it in real-world production
code, namely Android. And showing some positive benefits from doing
so, without impairing the functionality of Android in any way.
On 3/3/2024 12:23 PM, David Brown wrote:
On 03/03/2024 19:18, Kaz Kylheku wrote:
Embedded systems often need custom memory management, not something that the language imposes. C has malloc, yet even that gets disused in favor
of something else.
For safe embedded systems, you don't want memory management at all.
Avoiding dynamic memory is an important aspect of safety-critical
embedded development.
You still have to think about memory management even if you avoid any
dynamic memory. How are you going to manage this memory wrt your various
data structures' needs...
On 04/03/2024 00:06, Chris M. Thomasson wrote:
On 3/3/2024 3:59 PM, David LaRue wrote:
Lynn McGuire <lynnmcguire5@gmail.com> wrote in news:us0brl$246bf$1@dont-email.me:
"White House to Developers: Using C or C++ Invites Cybersecurity Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe programming languages. The tech industry sees their point, but it won't be easy."
No. The feddies want to regulate software development very much. They have been talking about it for at least 20 years now. This is a very bad thing.
Lynn
I was thinking about this wrt other allegedly more secure languages. They can be hacked just as easily as C and C++ and many other languages. The government should worry about things they really need to control, which is less not more, IMHO. They obviously know very little about computer development.
I remember a while back when some people would try to tell me that ADA solves all issues...
And there's ADA, and there's Ada, the lady. And she wrote:
"The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths."
And so she knew what the capabilities of the Analytical Engine were, exactly what programming was, what it could and could not achieve, and she set out making it achieve what it could. And so she had it, and in a sense, ADA solved all issues.
And no formal computer science education. Of course.
On 04/03/2024 12:54, Malcolm McLean wrote:
And there's ADA, and there's Ada, the lady.
No, there's Ada the programming language, named after Lady Ada Lovelace.
On 03.03.2024 21:23, David Brown wrote:
[...] Shortcuts are taken because
the sales people need the code by tomorrow morning, and there are only
so many hours in the night to get it working.
An indication of bad project management (or none at all), failing to control development according to a realistic plan.
And of course Google can solve a problem by inventing a new language and putting up all the infrastructure that that would need around it.
On Mon, 4 Mar 2024 11:44:06 +0000, Malcolm McLean wrote:
And of course Google can solve a problem by inventing a new
language and putting up all the infrastructure that that would need
around it.
Google has invented quite a lot of languages: Dart and Go come to
mind, and also this “Carbon” effort.
I suppose nowadays a language can find a niche outside the
mainstream, and still be viable. Proprietary products need
mass-market success to stay afloat, but with open-source ones, what’s important is the contributor base, not the user base.
On 04/03/2024 17:05, Janis Papanagnou wrote:
On 03.03.2024 21:23, David Brown wrote:
[...] Shortcuts are taken because
the sales people need the code by tomorrow morning, and there are only
so many hours in the night to get it working.
An indication of bad project management (or none at all) to control
development according to a realistic plan.
Now you are beginning to understand!
On 3/3/2024 9:43 PM, Lawrence D'Oliveiro wrote:
On Sun, 3 Mar 2024 16:06:24 -0800, Chris M. Thomasson wrote:
I remember a while back when some people would try to tell me that [Ada] solves all issues...
It did make a difference. Did you know the life-support system on the
International Space Station was written in Ada? Not something you
would trust C++ code to, let’s face it.
Would you trust a "safe" language that had some critical libraries that
were written in say, C?
On 3/4/2024 5:54 PM, Lawrence D'Oliveiro wrote:
On Tue, 5 Mar 2024 00:59:48 +0200, Michael S wrote:
Go *is* mainstream, more so than Rust.
Google looked at what language to use for its proprietary “Fuchsia” OS, >> and decided Rust was a better choice than Go.
Discord did some benchmarking of its back-end servers, which had been
using Go, and decided that switching to Rust offered better
performance.
Why do you mention performance? I thought it was all about safety...
On 3/4/2024 8:43 PM, Lawrence D'Oliveiro wrote:
On Tue, 5 Mar 2024 02:46:33 +0000, Malcolm McLean wrote:
The less code you have, the less that can go wrong.
This can also mean using the build system to automatically generate
some repetitive things, to avoid having to write them manually.
Does the build system depend on anything coded in C?
On 3/3/2024 11:43 PM, Lawrence D'Oliveiro wrote:
Did you know the life-support system on the
International Space Station was written in Ada? Not something you would
trust C++ code to, let’s face it.
Most of the Ada code was written in C or C++ and converted to Ada for delivery.
On Tue, 5 Mar 2024 00:59:48 +0200, Michael S wrote:
Go *is* mainstream, more so than Rust.
Google looked at what language to use for its proprietary “Fuchsia”
OS, and decided Rust was a better choice than Go.
Discord did some benchmarking of its back-end servers, which had been
using Go, and decided that switching to Rust offered better
performance.
On 3/4/2024 12:44 AM, David Brown wrote:
On 03/03/2024 23:01, Chris M. Thomasson wrote:[...]
On 3/3/2024 12:23 PM, David Brown wrote:
On 03/03/2024 19:18, Kaz Kylheku wrote:
Embedded systems often need custom memory management, not something that the language imposes. C has malloc, yet even that gets disused in favor of something else.
For safe embedded systems, you don't want memory management at all.
Avoiding dynamic memory is an important aspect of safety-critical
embedded development.
You still have to think about memory management even if you avoid any
dynamic memory. How are you going to manage this memory wrt your
various data structures' needs...
To be clear here - sometimes you can't avoid all use of dynamic memory
and therefore memory management. And as Kaz says, you will often use
custom solutions such as resource pools rather than generic
malloc/free. Flexible network communication (such as Ethernet or
other IP networking) is hard to do without dynamic memory.
Think of using a big chunk of memory, never needed to be freed and is
just there per process. Now, you carve it up and store it in a cache
that has functions push and pop. So, you still have to manage memory
even when you are using no dynamic memory at all... Fair enough, in a
sense? The push and the pop are your malloc and free in a strange sense...
On 04.03.2024 18:24, David Brown wrote:
On 04/03/2024 17:05, Janis Papanagnou wrote:
On 03.03.2024 21:23, David Brown wrote:
[...] Shortcuts are taken because
the sales people need the code by tomorrow morning, and there are only >>>> so many hours in the night to get it working.
An indication of bad project management (or none at all) to control
development according to a realistic plan.
Now you are beginning to understand!
Huh? - I posted about various factors (beyond the programmers'
proficiency and tools) in an earlier reply to you; it included the
management factor that you failed to note and that you adopted as a
factor only in a later post. - So there's neither need nor reason
for such an arrogant, wrong, and disrespectful statement.
On Mon, 4 Mar 2024 15:41:43 +0100, David Brown wrote:
... Lady Ada Lovelace is often regarded (perhaps
incorrectly) as the first computer programmer.
She was the first, in written records, to appreciate some of the not-so-obvious issues in computer programming.
On Tue, 5 Mar 2024 00:03:54 -0600, Lynn McGuire wrote:
On 3/3/2024 11:43 PM, Lawrence D'Oliveiro wrote:
Did you know the life-support system on the
International Space Station was written in Ada? Not something you would
trust C++ code to, let’s face it.
Most of the Ada code was written in C or C++ and converted to Ada for
delivery.
Was it debugged again? Or was it assumed that the translation was bug-
free?
On 3/3/2024 9:31 AM, Scott Lurndal wrote:
Lynn McGuire <lynnmcguire5@gmail.com> writes:
"White House to Developers: Using C or C++ Invites Cybersecurity Risks"
https://www.pcmag.com/news/white-house-to-developers-using-c-plus-plus-invites-cybersecurity-risks
"The Biden administration backs a switch to more memory-safe programming >>> languages. The tech industry sees their point, but it won't be easy."
No. The feddies want to regulate software development very much.
You've been reading far too much apocalyptic fiction and seeing the
world through trump-colored glasses. Neither reflects reality.
Nope, I actually have had a Professional Engineer's License in Texas for
34 years now and can tell you all about what it takes to get one and
what it takes to keep one.
This bunch of crazies in the White House wants to do the same thing to software development.
On 3/5/2024 2:27 AM, David Brown wrote:
On 05/03/2024 08:08, Lawrence D'Oliveiro wrote:
On Tue, 5 Mar 2024 00:03:54 -0600, Lynn McGuire wrote:
On 3/3/2024 11:43 PM, Lawrence D'Oliveiro wrote:
Did you know the life-support system on the International Space Station was written in Ada? Not something you would trust C++ code to, let’s face it.
Most of the Ada code was written in C or C++ and converted to Ada for delivery.
Was it debugged again? Or was it assumed that the translation was bug-free?
With Ada, if you can get it to compile, it's ready to ship :-)
Really? Any logic errors in the program itself?
On Tue, 5 Mar 2024 01:54:46 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Discord did some benchmarking of its back-end servers, which had been
using Go, and decided that switching to Rust offered better
performance.
- for big and complex real-world back-end processing, writing a working
solution in Go will take five times fewer man-hours than writing it in Rust
On 3/4/2024 11:07 PM, Lawrence D'Oliveiro wrote:
On Mon, 4 Mar 2024 21:23:49 -0800, Chris M. Thomasson wrote:
Does the build system depend on anything coded in C?
These days, it might be Rust.
The keyword is might... Right?
On 3/5/2024 1:01 AM, David Brown wrote:
On 04/03/2024 21:36, Chris M. Thomasson wrote:
On 3/4/2024 12:44 AM, David Brown wrote:
On 03/03/2024 23:01, Chris M. Thomasson wrote:[...]
On 3/3/2024 12:23 PM, David Brown wrote:
On 03/03/2024 19:18, Kaz Kylheku wrote:
Embedded systems often need custom memory management, not
something that
the language imposes. C has malloc, yet even that gets disused in favor
of something else.
For safe embedded systems, you don't want memory management at
all. Avoiding dynamic memory is an important aspect of
safety-critical embedded development.
You still have to think about memory management even if you avoid
any dynamic memory? How are you going to mange this memory wrt your
various data structures needs....
To be clear here - sometimes you can't avoid all use of dynamic
memory and therefore memory management. And as Kaz says, you will
often use custom solutions such as resource pools rather than
generic malloc/free. Flexible network communication (such as
Ethernet or other IP networking) is hard to do without dynamic memory.
Think of using a big chunk of memory, never needed to be freed and is
just there per process. Now, you carve it up and store it in a cache
that has functions push and pop. So, you still have to manage memory
even when you are using no dynamic memory at all... Fair enough, in a
sense? The push and the pop are your malloc and free in a strange
sense...
I believe I mentioned that. You do not, in general, "push and pop" -
you malloc and never free. Excluding debugging code and other parts
useful in testing and developing, you have something like :
enum { heap_size = 16384 };
alignas(max_align_t) static uint8_t heap[heap_size];
static uint8_t *next_free = heap;

void free(void *ptr) {
    (void) ptr;   /* allocations are never reclaimed */
}

void *malloc(size_t size) {
    const size_t align = alignof(max_align_t);
    const size_t real_size = size ? (size + (align - 1)) & ~(align - 1)
                                  : align;
    void *p = next_free;
    next_free += real_size;
    return p;
}
Allowing for pops requires storing the size of the allocations (unless
you change the API from that of malloc/free), and is only rarely
useful. Generally, if you want memory that's that temporary, you use a
VLA or alloca to put it on the stack.
wrt systems with no malloc/free I am thinking more along the lines of a region allocator mixed with a LIFO for a cache, so a node based thing.
The region allocator gets fed with a large buffer. Depending on specific needs, it can work out nicely for systems that do not have malloc/free.
The pattern I used iirc, was something like:
// pseudo code...
_______________________
node*
node_pop()
{
// try the lifo first...
node* n = lifo_pop();
if (! n)
{
// resort to the region allocator...
n = region_allocate_node();
// note, n can be null here.
// if it is, we are out of memory.
// note, out of memory on a system
// with no malloc/free...
}
return n;
}
void
node_push(
node* n
) {
lifo_push(n);
}
_______________________
make any sense to you?
On Tue, 5 Mar 2024 11:11:03 +0200, Michael S wrote:
On Tue, 5 Mar 2024 01:54:46 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Discord did some benchmarking of its back-end servers, which had
been using Go, and decided that switching to Rust offered better
performance.
- for big and complex real-world back-end processing, writing
working solution in go will take 5 time less man hours than writing
it in Rust
Nevertheless, they found the switch to Rust worthwhile.
On Tue, 5 Mar 2024 22:58:10 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 5 Mar 2024 11:11:03 +0200, Michael S wrote:
On Tue, 5 Mar 2024 01:54:46 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Discord did some benchmarking of its back-end servers, which had
been using Go, and decided that switching to Rust offered better
performance.
- for big and complex real-world back-end processing, writing
working solution in go will take 5 time less man hours than writing
it in Rust
Nevertheless, they found the switch to Rust worthwhile.
I read a little more about it. https://discord.com/blog/why-discord-is-switching-from-go-to-rust
Summary: performance of one of Discord's most heavy-duty servers
suffered from weakness in implementation of Go garbage collector. On
average the performance was satisfactory, but every two minutes there
was a spike in latency. The latency during the spike was not that big
(300 msec), but they still felt that they wanted better.
They tried to tune GC, but the problem appeared to be fundamental.
So they just rewrote this particular server in Rust. Naturally, Rust
does not collect garbage, so this particular problem disappeared.
The key phrase of the story is "This service was a great candidate to
port to Rust since it was small and self-contained".
I'd add to this that even more important for eventual success of
migration was the fact that at time of rewrite server was already
running for several years, so requirements were stable and
well-understood.
Another factor is that their service does not create/free that many
objects. The delay was caused by mere fact of GC scanning rather than
by frequent compacting of memory pools. So, from the beginning it was
obvious that potential fragmentation of the heap, which is the main
weakness of "plain" C/C++/Rust based solutions for Web back-ends, does
not apply in their case.
On 3/5/2024 1:58 PM, Keith Thompson wrote:
Kaz Kylheku <433-929-6894@kylheku.com> writes:
On 2024-03-05, Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
On 3/5/2024 2:27 AM, David Brown wrote:
With Ada, if you can get it to compile, it's ready to ship :-)
Really? Any logic errors in the program itself?
Ariane 5 rocket incident of 1996: The Ada code didn't catch the hardware overflow exception from forcing a 64 bit floating-point value into a 16 bit integer. The situation was not expected by the code, which was developed for the Ariane 4, or something like that.
A numeric overflow occurred during the Ariane 5's initial flight -- and
the software *did* catch the overflow. The same overflow didn't occur
on Ariane 4 because of its different flight profile. There was a
management decision to reuse the Ariane 4 flight software for Ariane 5
without sufficient review.
The code (which had been thoroughly tested on Ariane 4 and was known not
to overflow) emitted an error message describing the overflow exception.
That error message was then processed as data. Another problem was that
systems were designed to shut down on any error; as a result, healthy
and necessary equipment was shut down prematurely.
This is from my vague memory, and may not be entirely accurate.
*Of course* logic errors are possible in Ada programs, but in my
experience and that of many other programmers, if you get an Ada program
to compile (and run without raising unhandled exceptions), you're likely
to be much closer to a working program than if you get a C program to
compile. A typo in a C program is more likely to result in a valid
program with different semantics.
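The point about typos compiling into valid C with different semantics can be shown with a classic made-up example (both functions below are invented for illustration):

```c
#include <assert.h>

/* Sample data for the demonstration. */
static const int sample[3] = {0, 1, 0};

/* Intended behaviour: count the zero elements. */
int count_zeros(const int *a, int n)
{
    int zeros = 0;
    for (int i = 0; i < n; i++)
        if (a[i] == 0)
            zeros++;
    return zeros;
}

/* Typo version: '=' assigns instead of comparing, so the condition is
 * always false -- yet this compiles and runs without complaint. */
int count_zeros_typo(const int *a, int n)
{
    int zeros = 0;
    for (int i = 0; i < n; i++) {
        int x = a[i];
        if (x = 0)          /* meant: if (x == 0) */
            zeros++;
    }
    return zeros;
}
```

A compiler with warnings enabled will flag the assignment-in-condition, but the program is still valid C; in Ada the equivalent typo is a compile-time error.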
So close you can just feel it's a 100% correct and working program?
On 3/5/2024 2:11 PM, Keith Thompson wrote:
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
[...]
ADA is bullet proof... Until it's not... ;^)
The language is called Ada, not ADA.
I wonder how many people got confused?
On Tue, 5 Mar 2024 11:31:11 +0100, David Brown wrote:
That includes realising that computers could do more than number
crunching.
Or, conversely, realizing that all forms of computation (including symbol manipulation) can be expressed as arithmetic?
Maybe that came later, cf. “Gödel numbering”.
On 05/03/2024 23:34, Chris M. Thomasson wrote:
On 3/5/2024 2:11 PM, Keith Thompson wrote:
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
[...]
ADA is bullet proof... Until it's not... ;^)
The language is called Ada, not ADA.
I wonder how many people got confused?
Apparently you and Malcolm got confused.
Others who mentioned the language know it is called "Ada". I not only corrected you, but gave an explanation of it, in the hope that with that clarity, you'd learn.
On Tue, 5 Mar 2024 22:58:10 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 5 Mar 2024 11:11:03 +0200, Michael S wrote:
On Tue, 5 Mar 2024 01:54:46 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Discord did some benchmarking of its back-end servers, which had
been using Go, and decided that switching to Rust offered better
performance.
- for big and complex real-world back-end processing, writing a
working solution in Go will take 5 times fewer man-hours than writing
it in Rust
Nevertheless, they found the switch to Rust worthwhile.
I read a little more about it. https://discord.com/blog/why-discord-is-switching-from-go-to-rust
Summary: performance of one of Discord's most heavy-duty servers
suffered from a weakness in the implementation of the Go garbage
collector. On average the performance was satisfactory, but every two
minutes there was a spike in latency. The latency during the spike was
not that big (300 msec), but they still felt that they wanted better.
I have a few questions about the story, the most important one being
whether a weakness of this sort is specific to the GC of Go, due to its
relative immaturity, or more general and applies equally to most mature
GCs on the market, e.g. Java and .NET.
On Wed, 6 Mar 2024 13:50:16 +0000
bart <bc@freeuk.com> wrote:
Whoever wrote this short Wikipedia article on it got confused too as
it uses both Ada and ADA:
https://simple.wikipedia.org/wiki/Ada_(programming_language)
(The example program also includes 'Ada' as some package name. Since
it is case-insensitive, 'ADA' would also work.)
Your link is to "simple Wikipedia". I don't know what it is
exactly, but it does not appear as authoritative as real Wikipedia
https://en.wikipedia.org/wiki/Ada_(programming_language)
Here's also a paper that uses 'ADA' (I assume it is the same
language):
https://www.sciencedirect.com/science/article/abs/pii/0166361582900136
The article was published in 1982. The language became official in 1983.
Possibly, in 1982 there was still some confusion w.r.t. its name.
Personally I'm not bothered whether anyone uses Ada or ADA. Is 'C'
written in all-caps or only capitalised? You can't tell!
If only ADA, written in upper case, was not widely used for something
else...
On 06/03/2024 13:31, David Brown wrote:
On 05/03/2024 23:34, Chris M. Thomasson wrote:
On 3/5/2024 2:11 PM, Keith Thompson wrote:
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
[...]
ADA is bullet proof... Until it's not... ;^)
The language is called Ada, not ADA.
I wonder how many people got confused?
Apparently you and Malcolm got confused.
Others who mentioned the language know it is called "Ada". I not
only corrected you, but gave an explanation of it, in the hope that
with that clarity, you'd learn.
Whoever wrote this short Wikipedia article on it got confused too as
it uses both Ada and ADA:
https://simple.wikipedia.org/wiki/Ada_(programming_language)
(The example program also includes 'Ada' as some package name. Since
it is case-insensitive, 'ADA' would also work.)
Here's also a paper that uses 'ADA' (I assume it is the same
language):
https://www.sciencedirect.com/science/article/abs/pii/0166361582900136
Personally I'm not bothered whether anyone uses Ada or ADA. Is 'C'
written in all-caps or only capitalised? You can't tell!
On Wed, 6 Mar 2024 13:50:16 +0000...
bart <bc@freeuk.com> wrote:
Whoever wrote this short Wikipedia article on it got confused too as
it uses both Ada and ADA:
https://simple.wikipedia.org/wiki/Ada_(programming_language)
(The example program also includes 'Ada' as some package name. Since
it is case-insensitive, 'ADA' would also work.)
Your link is to "simple Wikipedia". I don't know what it is
exactly, but it does not appear as authoritative as real Wikipedia
On 06/03/2024 14:18, Michael S wrote:
If only ADA, written in upper case, was not widely used for something
else...
I don't know what that is without looking it up. In a programming
newsgroup I expect ADA to be the language.
On 06/03/2024 14:38, bart wrote:
On 06/03/2024 14:18, Michael S wrote:
If only ADA, written in upper case, was not widely used for something
else...
I don't know what that is without looking it up. In a programming
newsgroup I expect ADA to be the language.
Here's an interesting pic:
https://upload.wikimedia.org/wikipedia/commons/5/50/AdaLovelaceplaque.JPG
Notice the upper-case name.
On 3/6/24 09:18, Michael S wrote:
On Wed, 6 Mar 2024 13:50:16 +0000...
bart <bc@freeuk.com> wrote:
Whoever wrote this short Wikipedia article on it got confused too as
it uses both Ada and ADA:
https://simple.wikipedia.org/wiki/Ada_(programming_language)
(The example program also includes 'Ada' as some package name. Since
it is case-insensitive, 'ADA' would also work.)
Your link is to "simple Wikipedia". I don't know what it is
exactly, but it does not appear as authoritative as real Wikipedia
Notice that in your following link, "en" appears at the beginning to
indicate the use of English. "simple" at the beginning of the above link
serves the same purpose. "Simple English" is its own language, closely
related to standard English.
On 2024-03-06, James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
On 3/6/24 09:18, Michael S wrote:
On Wed, 6 Mar 2024 13:50:16 +0000...
bart <bc@freeuk.com> wrote:
Whoever wrote this short Wikipedia article on it got confused too as
it uses both Ada and ADA:
https://simple.wikipedia.org/wiki/Ada_(programming_language)
(The example program also includes 'Ada' as some package name. Since
it is case-insensitive, 'ADA' would also work.)
Your link is to "simple Wikipedia". I don't know what it is
exactly, but it does not appear as authoritative as real Wikipedia
Notice that in your following link, "en" appears at the beginning to
indicate the use of English. "simple" at the beginning of the above link
serves the same purpose. "Simple English" is its own language, closely
related to standard English.
Where is Simple English spoken? Is there some geographic area where
native speakers concentrate?
On 06/03/2024 12:02, Michael S wrote:
On Tue, 5 Mar 2024 22:58:10 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 5 Mar 2024 11:11:03 +0200, Michael S wrote:
On Tue, 5 Mar 2024 01:54:46 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Discord did some benchmarking of its back-end servers, which had
been using Go, and decided that switching to Rust offered better
performance.
- for big and complex real-world back-end processing, writing a
working solution in Go will take 5 times fewer man-hours than
writing it in Rust
Nevertheless, they found the switch to Rust worthwhile.
I read a little more about it. https://discord.com/blog/why-discord-is-switching-from-go-to-rust
Summary: performance of one of Discord's most heavy-duty servers
suffered from a weakness in the implementation of the Go garbage
collector. On average the performance was satisfactory, but every two
minutes there was a spike in latency. The latency during the spike was
not that big (300 msec), but they still felt that they wanted better.
They tried to tune the GC, but the problem appeared to be fundamental.
So they just rewrote this particular server in Rust. Naturally, Rust
does not collect garbage, so this particular problem disappeared.
The key phrase of the story is "This service was a great candidate
to port to Rust since it was small and self-contained".
I'd add to this that even more important for the eventual success of
the migration was the fact that at the time of the rewrite the server
had already been running for several years, so the requirements were
stable and well-understood.
Another factor is that their service does not create/free that many
objects. The delay was caused by the mere fact of GC scanning rather
than by frequent compacting of memory pools. So, from the beginning it
was obvious that potential fragmentation of the heap, which is the main
weakness of "plain" C/C++/Rust-based solutions for Web back-ends, does
not apply in their case.
From the same link:
"Rust uses a relatively unique memory management approach that
incorporates the idea of memory “ownership”. Basically, Rust keeps
track of who can read and write to memory. It knows when the program
is using memory and immediately frees the memory once it is no longer
needed. It enforces memory rules at compile time, making it virtually impossible to have runtime memory bugs.⁴ You do not need to manually
keep track of memory. The compiler takes care of it."
This suggests the language automatically takes care of this.
But you
have to write your programs in a certain way to make it possible. The programmer has to help the language keep track of what owns what.
So you will probably be able to do the same thing in another
language. But Rust will do more compile-time enforcement by
restricting how you share objects in memory.
On 2024-03-06, James Kuyper <jameskuyper@alumni.caltech.edu> wrote:...
Notice that in your following link, "en" appears at the beginning to
indicate the use of English. "simple" at the beginning of the above link
serves the same purpose. "Simple English" is its own language, closely
related to standard English.
Where is Simple English spoken? Is there some geographic area where
native speakers concentrate?
This suggests the language automatically takes care of this. But you
have to write your programs in a certain way to make it possible.
Continuously-compacting concurrent collectors like those available for
Java aim for less than 10ms, and often hit 1ms.
Another factor is that their service does not create/free that many
objects. The delay was caused by mere fact of GC scanning rather than
by frequent compacting of memory pools.
It's a constructed language, which probably has no native speakers.
On 3/6/2024 5:46 PM, Lawrence D'Oliveiro wrote:
On Wed, 06 Mar 2024 14:30:58 +0000, aph wrote:
Continuously-compacting concurrent collectors like those available for
Java aim for less than 10ms, and often hit 1ms.
What ... a 1ms potential delay every time you want to allocate a new
object??
GC can be a no go for certain schemes. GC can be fine and it has its place.
On Sun, 3 Mar 2024 22:11:14 -0000 (UTC), Blue-Maned_Hawk wrote:
Lawrence D'Oliveiro wrote:
On Sun, 3 Mar 2024 08:54:36 -0000 (UTC), Blue-Maned_Hawk wrote:
I do not want to live in a web-centric world.
You already do.
That does not change the veracity of my statement.
That doesn’t change the veracity of mine.
On Wed, 6 Mar 2024 12:28:59 +0000
bart <bc@freeuk.com> wrote:
"Rust uses a relatively unique memory management approach that
incorporates the idea of memory “ownership”. Basically, Rust keeps
track of who can read and write to memory. It knows when the program
is using memory and immediately frees the memory once it is no longer
needed. It enforces memory rules at compile time, making it virtually
impossible to have runtime memory bugs.⁴ You do not need to manually
keep track of memory. The compiler takes care of it."
This suggests the language automatically takes care of this.
Takes care of what?
AFAIK, heap fragmentation is as bad a problem in Rust as it is in
C/Pascal/Ada etc... In this aspect Rust is clearly inferior to GC-based
languages like Java, C# or Go.
On 06/03/2024 23:00, Michael S wrote:
On Wed, 6 Mar 2024 12:28:59 +0000
bart <bc@freeuk.com> wrote:
"Rust uses a relatively unique memory management approach that
incorporates the idea of memory “ownership”. Basically, Rust keeps
track of who can read and write to memory. It knows when the
program is using memory and immediately frees the memory once it
is no longer needed. It enforces memory rules at compile time,
making it virtually impossible to have runtime memory bugs.⁴ You
do not need to manually keep track of memory. The compiler takes
care of it."
This suggests the language automatically takes care of this.
Takes care of what?
AFAIK, heap fragmentation is as bad a problem in Rust as it is in
C/Pascal/Ada etc... In this aspect Rust is clearly inferior to
GC-based languages like Java, C# or Go.
Garbage collection does not stop heap fragmentation. GC does, I
suppose, mean that you need much more memory and bigger heaps in
proportion to the amount of memory you actually need in the program
at any given time, and having larger heaps reduces fragmentation (or
at least reduces the consequences of it).
On Thu, 7 Mar 2024 11:35:08 +0100
David Brown <david.brown@hesbynett.no> wrote:
On 06/03/2024 23:00, Michael S wrote:
On Wed, 6 Mar 2024 12:28:59 +0000
bart <bc@freeuk.com> wrote:
"Rust uses a relatively unique memory management approach that
incorporates the idea of memory “ownership”. Basically, Rust keeps
track of who can read and write to memory. It knows when the
program is using memory and immediately frees the memory once it
is no longer needed. It enforces memory rules at compile time,
making it virtually impossible to have runtime memory bugs.⁴ You
do not need to manually keep track of memory. The compiler takes
care of it."
This suggests the language automatically takes care of this.
Takes care of what?
AFAIK, heap fragmentation is as bad a problem in Rust as it is in
C/Pascal/Ada etc... In this aspect Rust is clearly inferior to
GC-based languages like Java, C# or Go.
Garbage collection does not stop heap fragmentation. GC does, I
suppose, mean that you need much more memory and bigger heaps in
proportion to the amount of memory you actually need in the program
at any given time, and having larger heaps reduces fragmentation (or
at least reduces the consequences of it).
GC does not stop fragmentation, but it allows heap compaction to be a
built-in part of the environment. So, it turns heap fragmentation from
a denial-of-service type of problem into a mere slowdown, hopefully an
insignificant slowdown.
I don't say that heap compaction is impossible in other environments,
but it is much harder, esp. in environments where pointers are visible
to the programmer. The famous David Wheeler quote applies here at full
force.
Also, when non-GC environments choose to implement heap compaction they
suffer the same or bigger impact to real-time responsiveness as GC.
So, although I don't know it for sure, my impression is that generic
heap compaction is extremely rarely implemented in performance-aware
non-GC environments.
Performance-neglecting non-GC environments, first and foremost CPython,
can, of course, have heap compaction, although my googling didn't give
me a definite answer whether it's done or not.
On 06/03/2024 23:00, Michael S wrote:
On Wed, 6 Mar 2024 12:28:59 +0000
bart <bc@freeuk.com> wrote:
"Rust uses a relatively unique memory management approach that
incorporates the idea of memory “ownership”. Basically, Rust keeps
track of who can read and write to memory. It knows when the program
is using memory and immediately frees the memory once it is no longer
needed. It enforces memory rules at compile time, making it virtually
impossible to have runtime memory bugs.⁴ You do not need to manually
keep track of memory. The compiler takes care of it."
This suggests the language automatically takes care of this.
Takes care of what?
AFAIK, heap fragmentation is as bad a problem in Rust as it is in
C/Pascal/Ada etc... In this aspect Rust is clearly inferior to GC-based
languages like Java, C# or Go.
Garbage collection does not stop heap fragmentation. GC does, I
suppose, mean that you need much more memory and bigger heaps in
proportion to the amount of memory you actually need in the program at
any given time, and having larger heaps reduces fragmentation (or at
least reduces the consequences of it).
On 07/03/2024 12:44, Michael S wrote:
GC does not stop fragmentation, but it allows heap compaction to be a
built-in part of the environment.
No, GC alone does not do that. But heap compaction is generally done as
part of a GC cycle.
Heap compaction requires indirect pointers.
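The "indirect pointers" point above can be sketched in C (all names invented, a toy illustration only): if clients hold handles, i.e. indices into a table of pointers, rather than raw addresses, the runtime can slide live objects together and fix up only the table entries.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy sketch: handles (table indices) instead of raw pointers make
 * compaction possible, because only the table needs updating. */
enum { HEAP_SIZE = 1024, MAX_OBJS = 16 };

static unsigned char heap[HEAP_SIZE];
static void  *table[MAX_OBJS];   /* handle -> current address */
static size_t sizes[MAX_OBJS];
static int    live[MAX_OBJS];
static size_t top;               /* bump pointer into the heap */
static int    nobjs;

int h_alloc(size_t n)            /* returns a handle, or -1 on failure */
{
    if (nobjs == MAX_OBJS || top + n > HEAP_SIZE) return -1;
    table[nobjs] = &heap[top];
    sizes[nobjs] = n;
    live[nobjs]  = 1;
    top += n;
    return nobjs++;
}

void *h_deref(int h) { return table[h]; }
void  h_free(int h)  { live[h] = 0; }

/* Slide live objects to the front of the heap, updating the table.
 * A raw pointer into the heap would dangle; a handle stays valid. */
void h_compact(void)
{
    size_t dst = 0;
    for (int i = 0; i < nobjs; i++) {
        if (!live[i]) { table[i] = NULL; continue; }
        memmove(&heap[dst], table[i], sizes[i]);
        table[i] = &heap[dst];
        dst += sizes[i];
    }
    top = dst;
}
```

The extra dereference on every access is exactly the cost that makes this rare in performance-aware non-GC environments, as noted above.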
On Wed, 6 Mar 2024 19:27:24 -0500, James Kuyper wrote:
It's a constructed language, which probably has no native speakers.
Not to be confused with Basic English, which was created, and copyrighted
by, C K Ogden.
It used to be a running joke that if you managed to get your Ada code to compile, it was ready to ship.
On 3/5/2024 4:25 PM, Lawrence D'Oliveiro wrote:
So, what is the right language to use?
One of its requirements is that the articles be written in Basic
English as much as possible.
On Wed, 6 Mar 2024 14:34:50 +0100, David Brown wrote:
It used to be a running joke that if you managed to get your Ada code to
compile, it was ready to ship.
That joke actually originated with Pascal.
Though I suppose Ada took it to
the next level ...
On 2024-03-07, David Brown <david.brown@hesbynett.no> wrote:
On 06/03/2024 23:00, Michael S wrote:
On Wed, 6 Mar 2024 12:28:59 +0000
bart <bc@freeuk.com> wrote:
"Rust uses a relatively unique memory management approach that
incorporates the idea of memory “ownership”. Basically, Rust keeps
track of who can read and write to memory. It knows when the program
is using memory and immediately frees the memory once it is no longer
needed. It enforces memory rules at compile time, making it virtually
impossible to have runtime memory bugs.⁴ You do not need to manually
keep track of memory. The compiler takes care of it."
This suggests the language automatically takes care of this.
Takes care of what?
AFAIK, heap fragmentation is as bad a problem in Rust as it is in
C/Pascal/Ada etc... In this aspect Rust is clearly inferior to GC-based
languages like Java, C# or Go.
Garbage collection does not stop heap fragmentation. GC does, I
suppose, mean that you need much more memory and bigger heaps in
proportion to the amount of memory you actually need in the program at
any given time, and having larger heaps reduces fragmentation (or at
least reduces the consequences of it).
Copying garbage collectors literally stop fragmentation.
Reachable
objects are identified and moved to a memory partition where they
are now adjacent. The vacated memory partition is then efficiently used
to bump-allocate new objects.
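The bump allocation described above can be sketched in a few lines of C (names and sizes invented for illustration): after a copying collector evacuates live objects, new allocations in the fresh region are just a pointer increment.

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of bump allocation in a fresh region, as used by copying
 * collectors after evacuating live objects. Illustration only. */
enum { REGION_SIZE = 4096 };

static unsigned char region[REGION_SIZE];
static size_t bump;   /* next free offset */

void *bump_alloc(size_t n)
{
    /* keep 8-byte alignment for the next object */
    size_t need = (n + 7) & ~(size_t)7;
    if (bump + need > REGION_SIZE) return NULL;   /* region exhausted */
    void *p = &region[bump];
    bump += need;                                 /* just move the pointer */
    return p;
}
```

Allocation is O(1) and objects come out densely packed, which is why the vacated semi-space of a copying collector is fragmentation-free by construction.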
On 07/03/2024 17:35, Kaz Kylheku wrote:
On 2024-03-07, David Brown <david.brown@hesbynett.no> wrote:
On 06/03/2024 23:00, Michael S wrote:
On Wed, 6 Mar 2024 12:28:59 +0000
bart <bc@freeuk.com> wrote:
"Rust uses a relatively unique memory management approach that
incorporates the idea of memory “ownership”. Basically, Rust
keeps track of who can read and write to memory. It knows when
the program is using memory and immediately frees the memory
once it is no longer needed. It enforces memory rules at compile
time, making it virtually impossible to have runtime memory
bugs.⁴ You do not need to manually keep track of memory. The
compiler takes care of it."
This suggests the language automatically takes care of this.
Takes care of what?
AFAIK, heap fragmentation is as bad a problem in Rust as it is in
C/Pascal/Ada etc... In this aspect Rust is clearly inferior to
GC-based languages like Java, C# or Go.
Garbage collection does not stop heap fragmentation. GC does, I
suppose, mean that you need much more memory and bigger heaps in
proportion to the amount of memory you actually need in the
program at any given time, and having larger heaps reduces
fragmentation (or at least reduces the consequences of it).
Copying garbage collectors literally stop fragmentation.
Yes, but garbage collectors that could be useable for C, C++, or
other efficient compiled languages are not "copying" garbage
collectors.
Reachable
objects are identified and moved to a memory partition where they
are now adjacent. The vacated memory partition is then efficiently
used to bump-allocate new objects.
I think if you have a system with enough memory that copying garbage collection (or other kinds of heap compaction during GC) is a
reasonable option, then it's unlikely that heap fragmentation is a
big problem in the first place. And you won't be running on a small
embedded system.
CPython does use garbage collection, as far as I know.
On 07.03.2024 17:36, David Brown wrote:
CPython does use garbage collection, as far as I know.
AFAIK CPython uses reference counting, i.e. basically the same as C++ std::shared_ptr (except that it does not need to be thread-safe).
With reference counting one only knows how many pointers there are to a
given heap block, but not where they are, so heap compaction would not
be straightforward.
Python also has zillions of extensions written in C or C++ (all of the
AI-related work, for example), so having e.g. heap compaction of Python
objects only might not be worth it.
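The CPython-style scheme described above can be sketched in C (an invented toy type, not CPython's actual implementation): each object carries a count, and the last release frees it immediately and deterministically.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy sketch of non-thread-safe reference counting, in the spirit of
 * CPython's Py_INCREF/Py_DECREF. Names and types are invented. */
typedef struct obj {
    long refcount;
    int payload;
} obj;

static long live_objects;   /* instrumentation for the demonstration */

obj *obj_new(int payload)
{
    obj *o = malloc(sizeof *o);
    if (!o) return NULL;
    o->refcount = 1;         /* caller holds the first reference */
    o->payload = payload;
    live_objects++;
    return o;
}

obj *obj_incref(obj *o) { o->refcount++; return o; }

void obj_decref(obj *o)
{
    if (--o->refcount == 0) {   /* deterministic destruction, no GC pause */
        live_objects--;
        free(o);
    }
}
```

Note that two such objects pointing at each other would keep each other alive forever, which is exactly why CPython supplements refcounting with a cycle-detecting collector.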
On 08/03/2024 11:57, Michael S wrote:
On Fri, 8 Mar 2024 08:25:13 +0100
David Brown <david.brown@hesbynett.no> wrote:
On 07/03/2024 17:35, Kaz Kylheku wrote:
On 2024-03-07, David Brown <david.brown@hesbynett.no> wrote:
On 06/03/2024 23:00, Michael S wrote:
On Wed, 6 Mar 2024 12:28:59 +0000
bart <bc@freeuk.com> wrote:
"Rust uses a relatively unique memory management approach that
incorporates the idea of memory “ownership”. Basically, Rust
keeps track of who can read and write to memory. It knows when
the program is using memory and immediately frees the memory
once it is no longer needed. It enforces memory rules at
compile time, making it virtually impossible to have runtime
memory bugs.⁴ You do not need to manually keep track of
memory. The compiler takes care of it."
This suggests the language automatically takes care of this.
Takes care of what?
AFAIK, heap fragmentation is as bad a problem in Rust as it is in
C/Pascal/Ada etc... In this aspect Rust is clearly inferior to
GC-based languages like Java, C# or Go.
Garbage collection does not stop heap fragmentation. GC does, I
suppose, mean that you need much more memory and bigger heaps in
proportion to the amount of memory you actually need in the
program at any given time, and having larger heaps reduces
fragmentation (or at least reduces the consequences of it).
Copying garbage collectors literally stop fragmentation.
Yes, but garbage collectors that could be useable for C, C++, or
other efficient compiled languages are not "copying" garbage
collectors.
Go, C# and Java are all efficient compiled languages. For Go it was actually a major goal.
C# and Java are, AFAIUI, managed languages - they are byte-compiled
and run on a VM. (JIT compilation to machine code can be used for acceleration, but that does not change the principles.) I don't know
about Go.
On Fri, 8 Mar 2024 08:25:13 +0100
David Brown <david.brown@hesbynett.no> wrote:
On 07/03/2024 17:35, Kaz Kylheku wrote:
On 2024-03-07, David Brown <david.brown@hesbynett.no> wrote:
On 06/03/2024 23:00, Michael S wrote:
On Wed, 6 Mar 2024 12:28:59 +0000
bart <bc@freeuk.com> wrote:
"Rust uses a relatively unique memory management approach that
incorporates the idea of memory “ownership”. Basically, Rust
keeps track of who can read and write to memory. It knows when
the program is using memory and immediately frees the memory
once it is no longer needed. It enforces memory rules at compile
time, making it virtually impossible to have runtime memory
bugs.⁴ You do not need to manually keep track of memory. The
compiler takes care of it."
This suggests the language automatically takes care of this.
Takes care of what?
AFAIK, heap fragmentation is as bad a problem in Rust as it is in
C/Pascal/Ada etc... In this aspect Rust is clearly inferior to
GC-based languages like Java, C# or Go.
Garbage collection does not stop heap fragmentation. GC does, I
suppose, mean that you need much more memory and bigger heaps in
proportion to the amount of memory you actually need in the
program at any given time, and having larger heaps reduces
fragmentation (or at least reduces the consequences of it).
Copying garbage collectors literally stop fragmentation.
Yes, but garbage collectors that could be useable for C, C++, or
other efficient compiled languages are not "copying" garbage
collectors.
Go, C# and Java are all efficient compiled languages. For Go it was
actually a major goal.
Reachable
objects are identified and moved to a memory partition where they
are now adjacent. The vacated memory partition is then efficiently
used to bump-allocate new objects.
I think if you have a system with enough memory that copying garbage
collection (or other kinds of heap compaction during GC) is a
reasonable option, then it's unlikely that heap fragmentation is a
big problem in the first place. And you won't be running on a small
embedded system.
You sound like you are arguing for the sake of arguing.
Of course, heap fragmentation is a relatively rare problem. But when
you process 100s of 1000s of requests of significantly varying sizes
for weeks without interruption, then rare things happen with high
probability :(
In the case of this particular Discord service, they appear to have
the benefit of the size of requests not varying significantly, so the
absence of heap compaction is not a major defect.
BTW, I'd like to know if 3 years later they still have their Rust
solution running.
On 08/03/2024 13:41, Paavo Helde wrote:
07.03.2024 17:36 David Brown kirjutas:
CPython does use garbage collection, as far as I know.
AFAIK CPython uses reference counting, i.e. basically the same as C++
std::shared_ptr (except that it does not need to be thread-safe).
Yes, that is my understanding too. (I could be wrong here, so don't
rely on anything I write!) But the way it is used is still a type of garbage collection. When an object no longer has any "live" references,
it is put in a list, and on the next GC it will get cleared up (and call
the asynchronous destructor, __del__, for the object).
On 08/03/2024 14:07, David Brown wrote:
On 08/03/2024 13:41, Paavo Helde wrote:
07.03.2024 17:36 David Brown kirjutas:
CPython does use garbage collection, as far as I know.
AFAIK CPython uses reference counting, i.e. basically the same as C++
std::shared_ptr (except that it does not need to be thread-safe).
Yes, that is my understanding too. (I could be wrong here, so don't
rely on anything I write!) But the way it is used is still a type of
garbage collection. When an object no longer has any "live"
references, it is put in a list, and on the next GC it will get
cleared up (and call the asynchronous destructor, __del__, for the
object).
Is that how CPython works? I can't quite see the point of saving up all
the deallocations so that they are all done as a batch. It's extra
overhead, and will cause those latency spikes that was the problem here.
In my own reference count scheme, when the count reaches zero, the
memory is freed immediately.
I also tend to have most allocations being of either 16 or 32 bytes, so
reuse is easy. It is only individual data items (a long string or long
array) that might have an arbitrary length that needs to be in
contiguous memory.
Most strings however have an average length of well below 16 characters
in my programs, so use a 16-byte allocation.
I don't know the allocation pattern in that Discord app, but Michael S
suggested there might not be lots of arbitrary-size objects.
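The fixed-size-allocation scheme described above can be sketched as a free-list pool in C (a toy illustration with invented names, not the poster's actual implementation): when every cell is 16 bytes, a freed cell can be handed straight back on the next allocation, so fragmentation cannot occur.

```c
#include <assert.h>
#include <stddef.h>

/* Toy sketch: fixed 16-byte cells make reuse trivial. A freed cell goes
 * on a singly-linked free list threaded through the cells themselves. */
enum { CELL_SIZE = 16, NCELLS = 256 };

typedef union cell {
    union cell *next;               /* link while on the free list */
    unsigned char bytes[CELL_SIZE]; /* payload while allocated */
} cell;

static cell pool[NCELLS];
static cell *freelist;
static int  used;                   /* cells handed out from the fresh tail */

void *cell_alloc(void)
{
    if (freelist) {                 /* reuse a freed cell first */
        cell *c = freelist;
        freelist = c->next;
        return c;
    }
    if (used < NCELLS) return &pool[used++];
    return NULL;                    /* pool exhausted */
}

void cell_free(void *p)
{
    cell *c = p;
    c->next = freelist;             /* push onto the free list */
    freelist = c;
}
```

Both allocation and deallocation are O(1) with no per-block header overhead, which fits the "reuse is easy" observation above.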
On 3/6/2024 2:18 PM, Chris M. Thomasson wrote:
On 3/6/2024 2:43 AM, David Brown wrote:
[...]
This is a fun one:
// pseudo code...
_______________________
node*
node_pop()
{
    node* n;
    // try the per-thread lifo first (no synchronization needed)
    if ((n = per_thread_lifo_pop()) != nullptr) return n;
    // then try the shared distributed lifo
    if ((n = shared_lifo_pop()) != nullptr) return n;
    // then try the global region
    if ((n = global_region_pop()) != nullptr) return n;
    // all of those failed
    return nullptr;
}
What I'd like to know about is who keeps dialing the "harmonization"
efforts, which really must give grouse to the "harmonisation"
spellers ...
It seems much more appropriate for Ada (though Pascal also had stricter checking and stronger types than most other popular languages had when
Pascal was developed).
On Tue, 5 Mar 2024 22:58:10 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 5 Mar 2024 11:11:03 +0200, Michael S wrote:
On Tue, 5 Mar 2024 01:54:46 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
Discord did some benchmarking of its back-end servers, which had
been using Go, and decided that switching to Rust offered better
performance.
- for big and complex real-world back-end processing, writing
working solution in go will take 5 time less man hours than
writing it in Rust
Nevertheless, they found the switch to Rust worthwhile.
I read a little more about it. https://discord.com/blog/why-discord-is-switching-from-go-to-rust
Summary: performance of one of Discord's most heavy-duty servers
suffered from weakness in implementation of Go garbage collector.
[...]
I have a few questions about the story, the most important one being
whether a weakness of this sort is specific to the GC of Go, due
to its relative immaturity, or more general and applies equally
to most mature GCs on the market, e.g. Java and .NET.
Another question is whether the problem is specific to the GC
style of automatic memory management (AMM) or applies, at least
to some degree, to other forms of AMM, most importantly to AMMs
based on reference counting, used by Swift and also popular in
C++.
Of course, I don't expect that my questions will be answered
fully on comp.lang.c, but if some knowledgeable posters try to
answer, I would appreciate it.
And it will be tough on the cache as everything has to be copied and
moved.
AFAIK CPython uses reference counting ...
Are you an AI?
On 4/28/2024 6:58 PM, Kaz Kylheku wrote:
On 2024-04-29, Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
Are you an AI?
That entails two separable propositions; there is moderate evidence for
the one, scant for the other.
Are you a human?
If so, are you using AI?
If not, are you an AI?
Any better?
On Fri, 8 Mar 2024 15:32:22 +0100, David Brown wrote:
And it will be tough on the cache as everything has to be copied and
moved.
I think all kinds of garbage collectors end up being tough on the
cache. Because remember, they are doing things with lots of blocks
of memory that haven’t been accessed recently, and therefore are not
likely to be in the cache.
On Fri, 8 Mar 2024 14:41:16 +0200, Paavo Helde wrote:
AFAIK CPython uses reference counting ...
Combination of reference-counting as a first resort, with full garbage collection to deal with those less common cases where you have reference cycles. Trying to get the best of both worlds.
The trouble with reference-counting is it impacts multithreading
performance.
this, by making the reference counts a little less deterministic (i.e.
there may be a slight delay before they become fully correct). I think
this is a complicated idea, and it may take them some time to get it fully implemented.
On 29.04.2024 03:05, Lawrence D'Oliveiro wrote:
On Fri, 8 Mar 2024 14:41:16 +0200, Paavo Helde wrote:
AFAIK CPython uses reference counting ...
Combination of reference-counting as a first resort, with full garbage
collection to deal with those less common cases where you have reference
cycles. Trying to get the best of both worlds.
The trouble with reference-counting is it impacts multithreading
performance.
Maybe only in case of heavy contention. If there is little contention
and the reference counter is implemented as an atomic variable, there is
no measurable hit on performance. I know this because I was suspicious
myself and measured this recently.
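The measurement point above, i.e. an atomic reference counter being cheap when uncontended, can be sketched with C11 atomics (invented names, a minimal illustration rather than any particular library's API):

```c
#include <assert.h>
#include <stdatomic.h>

/* Sketch of an atomic reference count. Uncontended atomic increments
 * are cheap; the cost shows up mainly when many threads hammer the
 * same counter. */
typedef struct {
    atomic_long refs;
} rc_t;

void rc_init(rc_t *r) { atomic_init(&r->refs, 1); }

void rc_retain(rc_t *r)
{
    /* relaxed is enough for an increment: it only needs atomicity */
    atomic_fetch_add_explicit(&r->refs, 1, memory_order_relaxed);
}

/* Returns 1 when the caller dropped the last reference and should free. */
int rc_release(rc_t *r)
{
    /* acq_rel so the thread that frees sees all prior writes */
    return atomic_fetch_sub_explicit(&r->refs, 1, memory_order_acq_rel) == 1;
}
```

This is essentially the discipline std::shared_ptr's control block uses; the single-threaded CPython counter mentioned earlier skips the atomics entirely, which is part of why removing the GIL is hard.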
Anyway, multithreading performance is a non-issue for Python so far as
the Python interpreter runs in a single-threaded regime anyway, under a
global GIL lock. They are planning to get rid of the GIL, but this work
is still in development AFAIK. I'm sure it will take years to stabilize
the whole Python zoo without the GIL.