A disruptive technology is a new technological innovation, product,
or service that eventually overturns the existing dominant
technology in the market, despite the fact that the disruptive
technology is both radically different than the leading technology
and that it often initially performs worse than the leading
technology according to existing measures of performance. [2]
programming languages that "scale down".
Tcl or Python are "simpler" than C, but this is a result of the
difficult to do easy things. To even get started, you have to have
some notions of object oriented programming, you have to split your
code up into lots of little files that must be properly named, and
Ben Collver <bencollver@tilde.pink> wrote or quoted:
programming languages that "scale down".
David forgot to tell use what it means for a programming language
to "scale down".
Our task is not to find the maximum amount of content in a work of
art, much less to squeeze more content out of the work than is
already there. Our task is to cut back content so that we can see
the thing at all.
Stefan Ram <ram@zedat.fu-berlin.de> wrote:
> Ben Collver <bencollver@tilde.pink> wrote or quoted:
>> programming languages that "scale down".
> David forgot to tell us what it means for a programming language
> to "scale down".
Wasn't that in the second paragraph?
"Good systems should be able to scale down as well as up. They
should run on slower computers that don't have as much memory or
disk storage as the latest models. Likewise, from the human point
of view, downwardly scalable systems should also be small enough to
learn and use without being an expert programmer." ...
I read it mainly out of interest in his ideas on the first aspect,
running on slower computers, but it turns out he doesn't really
discuss that at all. The two goals tend to be contradictory, so
without proposing a way to unify them that aspect remains purely
aspirational.
In fact, in terms of memory and disk storage GCC keeps going
backwards, even for C/C++. Compiling large C/C++ programs with
-Os in ever newer GCC versions keeps producing ever bigger binaries
for unchanged code. Of course other compilers are available and I'm
not sure how other popular ones compare.
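One way to observe that on a given machine is to compile the same
translation unit with each installed GCC and compare section sizes
with size(1). A minimal sketch; the versioned gcc-9/gcc-13 command
names are only an assumption about what happens to be installed:

    /* size_test.c - a trivial translation unit for comparing -Os
     * output across GCC versions. Build it with each compiler, e.g.:
     *   gcc-9  -Os -c size_test.c -o size_test-9.o
     *   gcc-13 -Os -c size_test.c -o size_test-13.o
     * then compare the text/data/bss sizes:
     *   size size_test-9.o size_test-13.o
     */
    #include <stddef.h>

    /* A loop simple enough that any size difference comes from code
     * generation, not from library code pulled in at link time. */
    int sum_squares(const int *v, size_t n)
    {
        int total = 0;
        for (size_t i = 0; i < n; i++)
            total += v[i] * v[i];
        return total;
    }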
On Sun, 14 Apr 2024, Computer Nerd Kev wrote:
> In fact, in terms of memory and disk storage GCC keeps going
> backwards, even for C/C++. Compiling large C/C++ programs with
> -Os in ever newer GCC versions keeps producing ever bigger binaries
> for unchanged code. Of course other compilers are available and I'm
> not sure how other popular ones compare.
Why do they go backwards?
I mean larger binaries must come with some benefit right?
D <nospam@example.net> wrote:
> On Sun, 14 Apr 2024, Computer Nerd Kev wrote:
>> In fact, in terms of memory and disk storage GCC keeps going
>> backwards, even for C/C++. Compiling large C/C++ programs with
>> -Os in ever newer GCC versions keeps producing ever bigger binaries
>> for unchanged code. Of course other compilers are available and I'm
>> not sure how other popular ones compare.
> Why do they go backwards?
I'd be quite interested to find out as well. When it comes to the
more fine-tuned optimisation options (a set of which -Os enables),
the GCC documentation is often lacking in detail, especially about
changes between versions.
> I mean larger binaries must come with some benefit right?
The benchmarks that they're chasing are for speed rather than
binary size. -Os turns on some optimisations which may make a
program run a little slower in return for a smaller binary. My
guess is that the GCC developers aren't very interested in -Os
anymore, but I haven't seen an easy path to understanding why
exactly it keeps getting less effective than in earlier GCC
versions.
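The size/speed trade-off, and the exact set of options behind each
level, can at least be inspected directly. A small sketch of the kind
of loop where the trade-off shows up, together with the documented
way to diff what -O2 and -Os actually enable on a given GCC:

    /* tradeoff.c - a loop where -O2 may unroll or vectorise (larger,
     * usually faster code) while -Os tends to keep the plain loop.
     * Compare the generated assembly:
     *   gcc -O2 -S tradeoff.c -o tradeoff-O2.s
     *   gcc -Os -S tradeoff.c -o tradeoff-Os.s
     * and list which fine-grained options differ between the two
     * levels on the GCC version in use:
     *   gcc -Q --help=optimizers -O2 > o2.txt
     *   gcc -Q --help=optimizers -Os > os.txt
     *   diff o2.txt os.txt
     */
    #include <stddef.h>

    void scale(float *dst, const float *src, float k, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i] * k;
    }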
On Mon, 15 Apr 2024, Computer Nerd Kev wrote:
> D <nospam@example.net> wrote:
>> On Sun, 14 Apr 2024, Computer Nerd Kev wrote:
>>> In fact, in terms of memory and disk storage GCC keeps going
>>> backwards, even for C/C++. Compiling large C/C++ programs with
>>> -Os in ever newer GCC versions keeps producing ever bigger binaries
>>> for unchanged code. Of course other compilers are available and I'm
>>> not sure how other popular ones compare.
>> Why do they go backwards?
> I'd be quite interested to find out as well. When it comes to the
> more fine-tuned optimisation options (a set of which -Os enables),
> the GCC documentation is often lacking in detail, especially about
> changes between versions.
>> I mean larger binaries must come with some benefit right?
> The benchmarks that they're chasing are for speed rather than
> binary size. -Os turns on some optimisations which may make a
> program run a little slower in return for a smaller binary. My
> guess is that the GCC developers aren't very interested in -Os
> anymore, but I haven't seen an easy path to understanding why
> exactly it keeps getting less effective than in earlier GCC
> versions.
Got it! Thank you for the information. I guess it's similar to the
old argument that emacs is "too big". With today's disks/SSDs it
matters less and less.
not@telling.you.invalid (Computer Nerd Kev) writes:
> D <nospam@example.net> wrote:
>> On Sun, 14 Apr 2024, Computer Nerd Kev wrote:
>>> In fact, in terms of memory and disk storage GCC keeps going
>>> backwards, even for C/C++. Compiling large C/C++ programs with
>>> -Os in ever newer GCC versions keeps producing ever bigger binaries
>>> for unchanged code. Of course other compilers are available and I'm
>>> not sure how other popular ones compare.
>> Why do they go backwards?
> I'd be quite interested to find out as well. When it comes to the
> more fine-tuned optimisation options (a set of which -Os enables),
> the GCC documentation is often lacking in detail, especially about
> changes between versions.
Interesting question, and I don't know the answer, but it's not hard
to come up with a small concrete example. https://godbolt.org/z/sG5d99v5z
has the same code compiled at -Os with three different versions, plus
-O2 for comparison, and it does get a bit longer somewhere between 9.1
and 11.1.
The longer code uses one extra register (ebp) and because it’s run out
of callee-owned registers it must generate a push/pop pair for it
(adding 2 bytes to a 90-byte function, in this case).
It's not easy to explain why it would do so: the gcc 9.1 object code
happily uses eax for the same purpose, without having to shuffle
anything else around, since eax isn't being used for anything else at
the time.
Perhaps worth a bug report.
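For anyone without the link handy, the kind of difference being
described looks roughly like the following. This is an illustrative
stand-in, not the actual function behind that godbolt URL, and the
gcc-9/gcc-11 command names are an assumption:

    /* Illustrative only: if register allocation spills into a
     * callee-saved register such as ebp/rbp, the function gains a
     * save/restore pair in its prologue and epilogue:
     *
     *   push %rbp        # 1 byte (0x55)
     *   ...              # body is now free to use %rbp
     *   pop  %rbp        # 1 byte (0x5d)
     *   ret
     *
     * which matches the 2-byte growth described above. One way to
     * watch for it is to diff the -Os assembly from two versions:
     *   gcc-9  -Os -S example.c -o example-9.s
     *   gcc-11 -Os -S example.c -o example-11.s
     *   diff example-9.s example-11.s
     */
    int example(const int *p, int n)
    {
        int acc = 0;
        for (int i = 0; i < n; i++)
            acc += p[i];
        return acc;
    }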
D <nospam@example.net> wrote:
> Why do they go backwards? I mean larger binaries must come with some
> benefit right?
The hello world executable generated with gcc under Oracle Linux 8 is
42Mb long, which is more MASS STORAGE than I had on the first Unix
system I ever used. I can't see this as being a good thing.
--scott
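Executable sizes like that depend heavily on how the binary was
built. A quick way to see where the bytes go on any particular system
(the commands are standard binutils tools, nothing specific to Oracle
Linux):

    /* hello.c - for poking at executable size on a given system. */
    #include <stdio.h>

    int main(void)
    {
        printf("hello, world\n");
        return 0;
    }

    /* Exact numbers vary with the distribution, linking mode, and
     * whether debug info is kept:
     *   gcc hello.c -o hello              # default dynamic link
     *   gcc -static hello.c -o hello-s    # pull libc into the binary
     *   size hello hello-s                # per-section sizes
     *   strip hello-s                     # drop symbols/debug info
     *   ls -l hello hello-s
     */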