On 10/09/2021 12:47, David Brown wrote:
On 10/09/2021 11:10, Juha Nieminen wrote:
However, gcc -O0 is quite useful in development. For starters, when you
are interactively debugging (eg. with gdb, or any of the myriads of
debuggers in different IDEs), you usually don't want things like your
functions being inlined, loops unrolled, compile-time arithmetic
(other than, of course, that of constexpr/consteval functions), etc.
I always compile with debugging information, and I regularly use
breakpoints, stepping, assembly-level debug, etc. I /hate/ having to
deal with unoptimised "gcc -O0" code - it is truly awful. You have vast
amounts of useless extra code that hides the real action. In the
assembly, the code to load and store variables from the stack, instead
of registers, often outweighs the actual interesting stuff.
Single-stepping through your important functions becomes far harder
because all the little calls that should be inlined out of existence,
become layers of calls that you might have to dig through. Most of
what you are looking for is drowned out in the noise.
I agree with JN. With optimised code, what you have may have little relationship with the original source code. If you're trying to trace a
logic problem, how do you map machine code to the corresponding source?
Or more importantly, how does the debugger do so?
For such purposes, an interpreter might be a better bet for some kinds
of bugs.
However, I've never used a debugger (other than some distant attempts at writing one); what are you actually stepping through: machine code, or
lines of source code, or both?
I would have thought that for a logic problem (or most bugs actually
provided they are reproducible), you'd want to be looking at source
code, not native code. (Unless you're perhaps debugging a compiler,
which is what I do quite a lot.)
And for source code, what difference should it make whether the
generated code is optimised or not?
Of course there's also the question of compilation speed. When compiling
small or even medium-sized projects, we seldom tend to pay attention
to how fast "gcc -O0" compiles compared to "gcc -O3", especially since
we tend to have these supercomputers on our desks.
However, when compiling much larger projects, or when compiling on
a very inefficient platform, the difference can become substantial,
and detrimental to development if it's too long.
Don't solve that by using weaker tools. Solve it by improving how you
use the tools - get better build systems to avoid unnecessary
compilation, use ccache if you have build options or variations that you
swap back and forth, use distcc to spread the load, explain to your boss
why a Ryzen or even a ThreadRipper will save money overall as /your/
time costs more than the computer's time. "I don't use optimisation
because it is too slow" is an excuse for hobby developers, not
professionals.
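(For what it's worth, the ccache part is usually a one-line change - a
sketch only, assuming gcc and a conventional CC variable; distcc is
layered on top via ccache's CCACHE_PREFIX mechanism rather than being
named in the makefile itself:)

    # Sketch: cache compilations between builds.  Needs ccache installed;
    # identical compilations are served from the cache instead of re-run.
    CC := ccache gcc

    # To spread compiles over the network as well (assuming a working
    # distcc setup): export CCACHE_PREFIX=distcc, then build in parallel
    # with something like "make -j8".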
After continually bawling me out for putting too much emphasis on
compilation speed, are you saying for the first time that it might be important after all?!
However, you seem to be in favour of letting off the people who write the tools (because it is unheard of for them to create an inefficient
product!), and just throwing more hardware - and money - at the problem.
On 11/09/2021 01:33, Bart wrote:
On 10/09/2021 12:47, David Brown wrote:
On 10/09/2021 11:10, Juha Nieminen wrote:
However, gcc -O0 is quite useful in development. For starters, when you
are interactively debugging (eg. with gdb, or any of the myriads of
debuggers in different IDEs), you usually don't want things like your
functions being inlined, loops unrolled, compile-time arithmetic
(other than, of course, that of constexpr/consteval functions), etc.
I always compile with debugging information, and I regularly use
breakpoints, stepping, assembly-level debug, etc. I /hate/ having to
deal with unoptimised "gcc -O0" code - it is truly awful. You have vast
amounts of useless extra code that hides the real action. In the
assembly, the code to load and store variables from the stack, instead
of registers, often outweighs the actual interesting stuff.
Single-stepping through your important functions becomes far harder
because all the little calls that should be inlined out of existence,
become layers of calls that you might have to dig through. Most of
what you are looking for is drowned out in the noise.
I agree with JN. With optimised code, what you have may have little
relationship with the original source code. If you're trying to trace a
logic problem, how do you map machine code to the corresponding source?
It's a /lot/ easier with -O1 than -O0. Or you use the debugger.
Or more importantly, how does the debugger do so?
The compiler generates lots of debug information, and the debugger reads
it. How else would it work?
And for source code, what difference should it make whether the
generated code is optimised or not?
Because it is not always correct!
Sometimes the issue is on the lines of "Why is this taking so long? I
had expected less than 0.1µs, but it is taking nearly 0.2µs." You need
to look at the assembly for that.
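(A convenient way to do that without losing track of the source is to
dump the object file with the debug information interleaved - a sketch,
assuming the objects were compiled with -g; the rule name is invented:)

    # Sketch: list optimised machine code with the originating source
    # lines interleaved (requires -g at compile time).
    %.lst: %.o
    	objdump -d -S $< > $@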
I don't have a problem with compile speed.
I'd be happy if my car could be upgraded to get twice the petrol
mileage. But I don't see its petrol usage as a problem - and I
certainly would not swap it for a moped just because the moped uses much
less petrol.
On 09/09/2021 18:41, James Kuyper wrote:
On 9/9/21 4:54 AM, MisterMule@stubborn.uk wrote:
On Wed, 8 Sep 2021 20:22:52 +0300
Paavo Helde <myfirstname@osa.pri.ee> wrote:
08.09.2021 13:24 MisterMule@stubborn.uk wrote:
You can write a makefile just as simple. However, what happens when you
want foo.c recompiled when foo.h and bar.h change, but bar.c should only
be recompiled when bar.h and moo.h change, moo.c should only be recompiled
when moo.h changes, and main.c should be recompiled when anything changes?
Such dependencies are taken care of automatically by the gcc -MD option,
No, unless the compiler is clairvoyant they aren't.
That option causes a dependencies file to be created specifying all the
dependencies that the compiler notices during compilation. That file can
then be used to avoid unnecessary re-builds the next time the same file
is compiled. The dependency file is therefore always one build
out-of-date; if you created any new dependencies, or removed any old
ones, the dependencies file will be incorrect until after the next time
you do a build. It's therefore not a perfect solution - but neither is
it useless.
The trick is to have makefile (or whatever build system you use) rules
along with gcc so that the dependency file not only labels the object
file as dependent on the C or C++ file and all the include files it
uses, recursively, but also labels the dependency file itself to be
dependent on the same files. Then if the source file or includes are changed, the dependency file is re-created, and make is smart enough to
then reload that dependency file to get the new dependencies for
building the object file.
The makefile rules involved are close to APL in readability, but once
you have figured out what you need, you can re-use it for any other
project. And it solves the problem you have here.
So, for example, if you have these files:
a.h
---
#include "b.h"
b.h
---
#define TEST 1
c.c
---
#include "a.h"
#include <stdio.h>
int main(void) {
printf("Test is %d\n", TEST);
}
Then "gcc -MD c.c" makes a file
c.d
---
c.o: c.c /usr/include/stdc-predef.h a.h b.h /usr/include/stdio.h \
/usr/include/x86_64-linux-gnu/bits/libc-header-start.h \
/usr/include/features.h /usr/include/x86_64-linux-gnu/sys/cdefs.h \
/usr/include/x86_64-linux-gnu/bits/wordsize.h \
...
Using "gcc -MMD c.c" is more helpful, usually, because it skips the
system includes:
c.d
---
c.o: c.c a.h b.h
But the real trick is "gcc -MMD -MT 'c.d c.o' c.c" :
c.d
---
c.d c.o: c.c a.h b.h
Now "make" knows that the dependency file is also dependent on the C
file and headers.
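(Pulled together, the make fragment looks roughly like this - a sketch
using the c.c example above, with an invented program name; recipe
lines must start with a tab:)

    # Sketch: each compilation also writes an up-to-date .d file, and the
    # .d files are re-read on every run of make.
    SRCS := c.c
    OBJS := $(SRCS:.c=.o)
    DEPS := $(SRCS:.c=.d)

    prog: $(OBJS)
    	gcc -o $@ $(OBJS)

    %.o: %.c
    	gcc -MMD -MT '$*.d $*.o' -c $< -o $@

    # "-include" silently ignores the .d files on the very first build,
    # when they don't exist yet.
    -include $(DEPS)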
On 11/09/2021 18:15, David Brown wrote:
On 11/09/2021 01:33, Bart wrote:
I agree with JN. With optimised code, what you have may have little
relationship with the original source code. If you're trying to trace a
logic problem, how do you map machine code to the corresponding source?
It's a /lot/ easier with -O1 than -O0. Or you use the debugger.
Oh, you mean look at the ASM manually? In that case definitely through
-O0. If I take this fragment:
for (int i=0; i<100; ++i) {
a[i]=b+c*d;
fn(a[i]);
}
So which one gets the prize?
But it's still not as easy to follow as either of mine.
So, yes, decent tools are important...
Or more importantly, how does the debugger do so?
The compiler generates lots of debug information, and the debugger reads
it. How else would it work?
Have a look at my first example above; would the a[i]=b+c*d be
associated with anything more meaningful than those two lines of assembly?
And for source code, what difference should it make whether the
generated code is optimised or not?
Because it is not always correct!
Sometimes the issue is on the lines of "Why is this taking so long? I
had expected less than 0.1µs, but it is taking nearly 0.2µs." You need
to look at the assembly for that.
That's the kind of thing where the unit tests Ian is always on about
don't really work.
I don't have a problem with compile speed.
Then just scale up the size of the project; you will hit a point where
it /is/ a problem! Or change the threshold at which any hanging about
becomes incredibly annoying; mine is about half a second.
Just evading the issue by, instead of getting a tool to work more
quickly, making it try to avoid compiling things as much as possible,
isn't a satisfactory solution IMO.
It's like avoiding spending too long driving your car, due to its only managing to do 3 mph, by cutting down on your trips as much as possible.
It's a slow car - /that's/ the problem.
Very nice. Now you have a single globals.h type file (VERY common in large projects). How does gcc figure out which C files it needs to build from that?
The 'supercomputer' on my desk is not significantly faster than the RPi4
you mention below.
If your code is fairly standard C, try using Tiny C. I expect your
program will build in one second or thereabouts.
When debugging using an interactive debugger, the execution path
should follow the source code line-by-line, with *each* line included
and nothing optimized away.
That was true 20 years ago, perhaps, with C. Not now, and not with C++.
However, when compiling much larger projects, or when compiling on
a very inefficient platform, the difference can become substantial,
and detrimental to development if it's too long.
Don't solve that by using weaker tools. Solve it by improving how you
use the tools
get better build systems to avoid unnecessary
compilation, use ccache if you have build options or variations that you
swap back and forth, use distcc to spread the load, explain to your boss
why a Ryzen or even a ThreadRipper will save money overall as /your/
time costs more than the computer's time. "I don't use optimisation
because it is too slow" is an excuse for hobby developers, not
professionals.
It is better to only compile the bits that need to be compiled. Who
cares how long a full build takes?
On 9/9/2021 10:17 PM, David Brown wrote:
But the real trick is "gcc -MMD -MT 'c.d c.o' c.c" :
c.d
---
c.d c.o: c.c a.h b.h
Now "make" knows that the dependency file is also dependent on the C
file and headers.
What you are describing is substantially:
https://www.gnu.org/software/make/manual/html_node/Automatic-Prerequisites.html
with the addition of the -MT gcc option, which removes the need for the nasty 'sed' command in the "%.d: %.c" rule - which is the kind of thing
that tends to keep people away.
Thanks for pointing this out.
I guess in that rule one can use the single command:
%.d: %.c
$(CC) $(CPPFLAGS) -MM -MT '$*.o $@' -MF $@ $<
(As a side note, it wouldn't hurt if the GCC people updated their docs
from time to time...)
HorseyWorsey@the_stables.com wrote:
Very nice. Now you have a single globals.h type file (VERY common in large
projects). How does gcc figure out which C files it needs to build from that?
It doesn't. It only compiles what you tell it to compile.
It has to be *something else* that runs it and tells it what to
compile. Often this is the 'make' program (which is reading a
file usually named 'Makefile').
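(In other words, the split is: make compares timestamps and decides
which files are out of date; gcc just compiles whatever it is handed.
A minimal hand-written sketch, reusing the file names from above:)

    # make rebuilds foo.o only if foo.c or globals.h is newer than
    # foo.o; gcc itself never makes that decision.
    prog: foo.o bar.o
    	gcc -o prog foo.o bar.o

    foo.o: foo.c globals.h
    	gcc -c foo.c

    bar.o: bar.c globals.h
    	gcc -c bar.c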
On Sun, 12 Sep 2021 08:56:42 -0000 (UTC)
Juha Nieminen <nospam@thanks.invalid> wrote:
HorseyWorsey@the_stables.com wrote:
Very nice. Now you have a single globals.h type file (VERY common in large
projects). How does gcc figure out which C files it needs to build from that?
It doesn't. It only compiles what you tell it to compile.
It has to be *something else* that runs it and tells it what to
compile. Often this is the 'make' program (which is reading a
file usually named 'Makefile').
Well thanks for that valuable input, we're all so much more informed now.
On 10/09/2021 17:47, HorseyWorsey@the_stables.com wrote:
On Fri, 10 Sep 2021 17:38:23 +0200
David Brown <david.brown@hesbynett.no> wrote:
Except it's dependency automation for noddy builds. For any complex builds
you're going to need a build system hence the examples I gave.
Do you still not understand what is being discussed here? "gcc -MD" is
/not/ a replacement for a build system. It is a tool to help automate
your build systems. The output of "gcc -MD" is a dependency file, which
your makefile (or other build system) imports.
Also using
the compiler is sod all use if you need to fire off a script to auto generate
some code first.
No, it is not. It works fine - as long as you understand how your build
well as lots of other C and header files). If I change the text file or
the Python script and type "make", then first a new header and C file
are created from the text file. Then "gcc -MD" is run on the C file,
generating a new dependency file, since the dependency file depends on
the header and the C file. Then this updated dependency file is
imported by make, and shows that the object file (needed for the
link) depends on the updated C file, so the compiler is called on the file.
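(The shape of that, as a rough sketch - the file and script names here
are stand-ins, not the real project:)

    # Sketch: gen.h and gen.c are produced from data.txt by a script, and
    # the usual -MD/-MMD dependency files take care of everything that
    # includes gen.h on later builds.
    gen.h gen.c: data.txt makegen.py
    	python3 makegen.py data.txt

    %.o: %.c
    	gcc -MMD -MT '$*.d $*.o' -c $< -o $@

    # On a completely clean build no .d files exist yet, so anything that
    # includes gen.h may also need an explicit prerequisite on it.
    -include $(wildcard *.d)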
Last place I worked used python to generate various
language header files based on json and that in turn depended on whether the
json had been updated since the last build. Good luck using gcc to sort that
out.
As noted above, I do that fine. It's not rocket science, but it does
require a bit of thought and trial-and-error to get the details right.
I know how it works. For simple student examples or pet projects it's fine, for
the real world it's little use.
OK, so you are ignorant and nasty. You don't know how automatic
HorseyWorsey@the_stables.com wrote:
On Sun, 12 Sep 2021 08:56:42 -0000 (UTC)
Juha Nieminen <nospam@thanks.invalid> wrote:
HorseyWorsey@the_stables.com wrote:
Very nice. Now you have a single globals.h type file (VERY common in large
projects). How does gcc figure out which C files it needs to build from that?
It doesn't. It only compiles what you tell it to compile.
It has to be *something else* that runs it and tells it what to
compile. Often this is the 'make' program (which is reading a
file usually named 'Makefile').
Well thanks for that valuable input, we're all so much more informed now.
You made that sound sarcastic. If it is indeed sarcasm, I don't
really understand why.
On Sunday, September 12, 2021 at 2:09:30 AM UTC+3, Manfred wrote:
(As a side note, it wouldn't hurt if the GCC people updated their docs
from time to time...)
gcc maintainers have policy against updating/fixing docs.
From their perspective, compiler and docs are inseparable parts of holy "release".
I tried to change their mind about it few years ago, but didn't succeed.
So, if you are not satisfied with quality of gcc docs supplied with
your release of gcc compiler then the best you can do is to look at the
docs for the most recent "release". I.e. right now 11.2. Naturally, in
order to be sure that these docs apply, you'd have to update the
compiler itself too.
On 12/09/2021 07:56, Bart wrote:
So which one gets the prize?
The one which runs correctly the fastest!
You appear to be stuck in the "C as a high level assembler" mindset.
This shouldn't be true for C and definitely isn't true for C++.
Optimised code often bears little resemblance to the original source and
the same source compiled with the same compiler can be optimised in
different ways depending on the context.
David Brown <david.brown@hesbynett.no> wrote:
I always compile with debugging information, and I regularly use
breakpoints, stepping, assembly-level debug, etc. I /hate/ having to
deal with unoptimised "gcc -O0" code - it is truly awful. You have vast
amounts of useless extra code that hides the real action. In the
assembly, the code to load and store variables from the stack, instead
of registers, often outweighs the actual interesting stuff.
Single-stepping through your important functions becomes far harder
because all the little calls that should be inlined out of existence,
become layers of calls that you might have to dig through. Most of
what you are looking for is drowned out in the noise.
Most interactive debuggers support stepping into a function call, or
stepping over it (ie. call the function but don't break there and
just wait for it to return).
When debugging using an interactive debugger, the execution path
should follow the source code line-by-line, with *each* line included
and nothing optimized away.
That was true 20 years ago, perhaps, with C. Not now, and not with C++.
I don't see how it isn't true now. If there's a bug in your code, you
need to see and examine every line of code that could be the culprit.
If the compiler has done things at compile time and essentially
optimized the faulty line of code away (essentially "merging" it with subsequent lines), you'll be drawn to the wrong line of code. The first
line of code that exhibits the wrong values may not be the one that's actually creating the wrong values, because that line has been optimized away. (The same applies to optimizing away function calls.)
However, when compiling much larger projects, or when compiling on
a very inefficient platform, the difference can become substantial,
and detrimental to development if it's too long.
Don't solve that by using weaker tools. Solve it by improving how you
use the tools
That's exactly what I'm doing by doing a fast "g++ -O0" compilation
instead of a slow "g++ -O3" compilation.
get better build systems to avoid unnecessary
compilation, use ccache if you have build options or variations that you
swap back and forth, use distcc to spread the load, explain to your boss
why a Ryzen or even a ThreadRipper will save money overall as /your/
time costs more than the computer's time. "I don't use optimisation
because it is too slow" is an excuse for hobby developers, not
professionals.
There's no reason to use optimizations while writing code and testing it.
"I don't use optimization because it is too slow" is *perfectly valid*.
If it is too slow, and is slowing down your development, it's *good*
to make it faster. I doubt your boss will be unhappy with you developing
the program in less time.
You can compile the final result with optimizations, of course.
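(Concretely that is just two sets of flags - a sketch with invented
file names, nothing more:)

    # Sketch: quick unoptimised builds while developing, an optimised
    # build for the final binary (g++, since the project is C++).
    dev:
    	g++ -O0 -g -o prog_dev main.cpp lib.cpp
    release:
    	g++ -O2 -g -DNDEBUG -o prog main.cpp lib.cpp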
It is better to only compile the bits that need to be compiled. Who
cares how long a full build takes?
In the example I provided the project consists of two source files
and one header file. It's very heavy to compile. Inclusion optimization
isn't of much help.
On 12/09/2021 07:56, Bart wrote:
On 11/09/2021 18:15, David Brown wrote:<snip listings>
On 11/09/2021 01:33, Bart wrote:
I agree with JN. With optimised code, what you have may have little
relationship with the original source code. If you're trying to trace a
logic problem, how do you map machine code to the corresponding source?
It's a /lot/ easier with -O1 than -O0. Or you use the debugger.
Oh, you mean look at the ASM manually? In that case definitely through
-O0. If I take this fragment:
for (int i=0; i<100; ++i) {
a[i]=b+c*d;
fn(a[i]);
}
So which one gets the prize?
The one which runs correctly the fastest!
Have a look at my first example above; would the a[i]=b+c*d be
associated with anything more meaningful than those two lines of
assembly?
Does it matter?
And for source code, what difference should it make whether the
generated code is optimised or not?
Because it is not always correct!
Sometimes the issue is on the lines of "Why is this taking so long? I
had expected less than 0.1µs, but it is taking nearly 0.2µs." You need
to look at the assembly for that.
That's the kind of thing that the unit tests Ian is always on about
don't really work.
Unit tests test logic, not performance. We run automated regression
tests on real hardware to track performance. If there's a change
between builds, it's trivial to identify the code commits that caused
the change.
I don't have a problem with compile speed.
Then just scale up the size of the project; you will hit a point where
it /is/ a problem! Or change the threshold at which any hanging about
becomes incredibly annoying; mine is about half a second.
Correct, so you scale up the thing you have control over, the build
infrastructure. It's safe to say that no one here has their own C++
compiler they can tweak to go faster!
It's like avoiding spending too long driving your car, due to its only
managing to do 3 mph, by cutting down on your trips as much as possible.
It's a slow car - /that's/ the problem.
Poor analogy. A better one is your car is slow because it only has a
single cylinder engine, so you can make it faster with a bigger cylinder
or more of them!
Bart <bc@freeuk.com> wrote:
The 'supercomputer' on my desk is not significantly faster than the RPi4
you mention below.
Then you must have a PC from the 1990's, because the Raspberry Pi 4
is a *very slow* system, believe me. I know, I have one. What takes
a few seconds to compile on my PC can take a minute to compile on
the Pi.
If your code is fairly standard C, try using Tiny C. I expect your
program will build in one second or thereabouts.
It's C++. (This is a C++ newsgroup, after all.)
On Sat, 11 Sep 2021 18:40:58 +0200
David Brown <david.brown@hesbynett.no> wrote:
On 10/09/2021 17:47, HorseyWorsey@the_stables.com wrote:
On Fri, 10 Sep 2021 17:38:23 +0200
David Brown <david.brown@hesbynett.no> wrote:
Except it's dependency automation for noddy builds. For any complex builds
you're going to need a build system hence the examples I gave.
Do you still not understand what is being discussed here? "gcc -MD" is
/not/ a replacement for a build system. It is a tool to help automate
your build systems. The output of "gcc -MD" is a dependency file, which
your makefile (or other build system) imports.
Yes, I understand perfectly. You create huge dependency files which either
have to be stored in git (or similar) and updated when appropriate, or
auto-generated in the makefile and then further used in the makefile which
has to be manually written anyway unless it's simple, so what exactly is the point?
Also using
the compiler is sod all use if you need to fire off a script to auto generate
some code first.
No, it is not. It works fine - as long as you understand how your build
Excuse me? Ok, please do tell me how the compiler knows which script file to run to generate the header file. This'll be interesting.
well as lots of other C and header files). If I change the text file or
the Python script and type "make", then first a new header and C file
are created from the text file. Then "gcc -MD" is run on the C file,
generating a new dependency file, since the dependency file depends on
the header and the C file. Then this updated dependency file is
imported by make, and shows that the object file (needed by for the
link) depends on the updated C file, so the compiler is called on the file.
And that is supposed to be simpler than writing a Makefile yourself is it? Riiiiiight.
Last place I worked used python to generate various
language header files based on json and that in turn depended on whether the
json had been updated since the last build. Good luck using gcc to sort that
out.
As noted above, I do that fine. It's not rocket science, but it does
require a bit of thought and trial-and-error to get the details right.
And is far more work than just putting 2 lines in a makefile consisting of a dummy target and a script call. But each to their own.
I know how it works. For simple student examples or pet projects it's fine,
for the real world it's little use.
OK, so you are ignorant and nasty. You don't know how automatic
Nasty? Don't be such a baby.
On Sun, 12 Sep 2021 09:21:50 -0000 (UTC)
Juha Nieminen <nospam@thanks.invalid> wrote:
HorseyWorsey@the_stables.com wrote:
On Sun, 12 Sep 2021 08:56:42 -0000 (UTC)
Juha Nieminen <nospam@thanks.invalid> wrote:
HorseyWorsey@the_stables.com wrote:
Very nice. Now you have a single globals.h type file (VERY common in large
projects). How does gcc figure out which C files it needs to build from that?
It doesn't. It only compiles what you tell it to compile.
It has to be *something else* that runs it and tells it what to
compile. Often this is the 'make' program (which is reading a
file usually named 'Makefile').
Well thanks for that valuable input, we're all so much more informed now.
You made that sound sarcastic. If it is indeed sarcasm, I don't
really understand why.
Try following a thread before replying. A couple of posters were claiming the compiler could automate the entire build system and I gave some basic examples
of why it couldn't. Now one of them is back-pedalling and basically saying it
can automate all the bits except the bits it can't when you need to edit the
makefile yourself. Genius. Then you come along and mention Makefiles. Well
thanks for the heads up, I'd forgotten what they were called.
Yes, it was sarcasm.
On Sunday, September 12, 2021 at 12:46:42 PM UTC+3, David Brown
wrote:
On 12/09/2021 10:29, Michael S wrote:
On Sunday, September 12, 2021 at 2:09:30 AM UTC+3, Manfred wrote:
(As a side note, it wouldn't hurt if the GCC people updated
their docs from time to time...)
Your reference here was to the "make" manual, rather than the gcc
documentation. But the gcc folk could add an example like this to
their manual for the "-MT" option.
gcc maintainers have policy against updating/fixing docs. From
their perspective, compiler and docs are inseparable parts of
holy "release".
Well, yes. The gcc manual of a particular version documents the gcc
of that version. It seems an excellent policy to me.
It would be a little different if they were publishing a tutorial
on using gcc.
I tried to change their mind about it few years ago, but didn't
succeed.
Thankfully. It would be rather messy if they only had one reference
manual which was full of comments about which versions the
particular options or features applied to, as these come and go
over time.
That's not what I was suggesting. I was suggesting to add some
clarifications and suggestions for a feature (it was something about
the function attribute 'optimize') that existed in gcc5 to the online
copy of the respective manual hosted on gcc.gnu.org/onlinedocs.
Obviously, a previous version of the manual could have been available
to historians among us in the gcc source control database.
Instead, said clarifications+suggestions were added to the *next*
release of the manual. Oh, in fact, no, it didn't make it into the gcc6
manual. It was added to the gcc 7 manual. So, gcc5 users now have no way
to know that changes in docs apply to gcc5 every bit as much as they
apply to gcc7 and later.
I suppose it would be possible to make some kind of interactive
reference where you selected your choice of compiler version,
target processor, etc., and the text adapted to suit. That could be
a useful tool, and help people see exactly what applied to their
exact toolchain. But it would take a good deal of work, and a
rather different thing from the current manuals.
So, if you are not satisfied with quality of gcc docs supplied
with your release of gcc compiler then the best you can do is to
look at the docs for the most recent "release". I.e. right now
11.2. Naturally, in order to be sure that these docs apply, you'd
have to update the compiler itself too.
I think most people /do/ look up the gcc documents online, rather
than locally.
I am pretty sure that is the case. And that was exactly my argument
*for* updating the online copy of the gcc5 docs. And the argument of
the maintainers was that people who read manuals locally do exist.
The gcc website has many versions easily available, so you can read
the manual for the version you are using. And while new features in
later gcc versions add to the manuals, it's rare that there are
changes to the text for existing features.
In my specific case it was a change to the text of existing feature.
The documentation for "-MT" is substantially the same for the
latest development version of gcc 12 and for gcc 3.0 from about 20
years ago.
On 12/09/2021 10:29, Michael S wrote:
On Sunday, September 12, 2021 at 2:09:30 AM UTC+3, Manfred wrote:
(As a side note, it wouldn't hurt if the GCC people updated their docs
from time to time...)
Your reference here was to the "make" manual, rather than the gcc documentation. But the gcc folk could add an example like this to their manual for the "-MT" option.
gcc maintainers have policy against updating/fixing docs.
From their perspective, compiler and docs are inseparable parts of holy "release".
Well, yes. The gcc manual of a particular version documents the gcc of
that version. It seems an excellent policy to me.
It would be a little different if they were publishing a tutorial on
using gcc.
I tried to change their mind about it few years ago, but didn't succeed.
Thankfully. It would be rather messy if they only had one reference
manual which was full of comments about which versions the particular
options or features applied to, as these come and go over time.
I suppose it would be possible to make some kind of interactive
reference where you selected your choice of compiler version, target processor, etc., and the text adapted to suit. That could be a useful
tool, and help people see exactly what applied to their exact toolchain.
But it would take a good deal of work, and a rather different thing
from the current manuals.
So, if you are not satisfied with quality of gcc docs supplied with
your release of gcc compiler then the best you can do is to look at the docs for the most recent "release". I.e. right now 11.2. Naturally, in order to be sure that these docs apply, you'd have to update the
compiler itself too.
I think most people /do/ look up the gcc documents online, rather than locally.
The gcc website has many versions easily available, so you can
read the manual for the version you are using. And while new features
in later gcc versions add to the manuals, it's rare that there are
changes to the text for existing features.
The documentation for "-MT"
is substantially the same for the latest development version of gcc 12
and for gcc 3.0 from about 20 years ago.
On 12/09/2021 13:42, Michael S wrote:
On Sunday, September 12, 2021 at 12:46:42 PM UTC+3, David Brown
wrote:
On 12/09/2021 10:29, Michael S wrote:
On Sunday, September 12, 2021 at 2:09:30 AM UTC+3, Manfred wrote:
(As a side note, it wouldn't hurt if the GCC people updated
their docs from time to time...)
Your reference here was to the "make" manual, rather than the gcc
documentation. But the gcc folk could add an example like this to
their manual for the "-MT" option.
gcc maintainers have policy against updating/fixing docs. From
their perspective, compiler and docs are inseparable parts of
holy "release".
Well, yes. The gcc manual of a particular version documents the gcc
of that version. It seems an excellent policy to me.
It would be a little different if they were publishing a tutorial
on using gcc.
I tried to change their mind about it few years ago, but didn't
succeed.
Thankfully. It would be rather messy if they only had one reference
manual which was full of comments about which versions the
particular options or features applied to, as these come and go
over time.
That's not what I was suggesting. I was suggesting to add an
clarifications and suggestions for a feature (it was something about function attribute 'optimize') that existed in gcc5 to online copy of respective manual hosted on gcc.gnu.org/onlinedocs Obviously, a
previous version of the manual could have been available to
historians among us in gcc source control database.
Instead, said clarifications+suggestions were added to the *next*
release of the manual. Oh, in fact, no, it didn't make it into the gcc6
manual. It was added to the gcc 7 manual. So, gcc5 users now have no way
to know that changes in docs apply to gcc5 every bit as much as they
apply to gcc7 and later.
gcc 5 users /do/ have a way to see the change - they can look at later
gcc references just as easily as older ones.
Occasionally, changes to the manuals might be back-ported a couple of versions, just as changes to the compilers are back-ported if they are important enough (wrong code generation bugs).
I'm sure the policies could be better in some aspects - there are
always going to be cases where new improvements to the manual would
apply equally to older versions. But such flexibility comes at a cost -
more work, and more risk of getting things wrong.
I suppose it would be possible to make some kind of interactive
reference where you selected your choice of compiler version,
target processor, etc., and the text adapted to suit. That could be
a useful tool, and help people see exactly what applied to their
exact toolchain. But it would take a good deal of work, and a
rather different thing from the current manuals.
So, if you are not satisfied with quality of gcc docs supplied
with your release of gcc compiler then the best you can do is to
look at the docs for the most recent "release". I.e. right now
11.2. Naturally, in order to be sure that these docs apply, you'd
have to update the compiler itself too.
I think most people /do/ look up the gcc documents online, rather
than locally.
I am pretty sure that it is a case. And that was exactly my argument
*for* updating online copy of gcc5 docs. And the argument of
maintainers was that people that read manuals locally do exist.
The gcc website has many versions easily available, so you can read
the manual for the version you are using. And while new features in
later gcc versions add to the manuals, it's rare that there are
changes to the text for existing features.
In my specific case it was a change to the text of existing feature.
I think it's fair to say there is scope for improvement in the way gcc documentation is handled, but it is still a good deal better than many compilers and other projects.
The documentation for "-MT" is substantially the same for the
latest development version of gcc 12 and for gcc 3.0 from about 20
years ago.
On 12/09/2021 21:10, Juha Nieminen wrote:
There's no reason to use optimizations while writing code and testing it.
There may be many!
Unoptimised code being too slow or too big to run on the target is
common in real-time or pretend (i.e. Linux) real-time systems. Getting
more comprehensive error checking is another.
On 12/09/2021 11:23, HorseyWorsey@the_stables.com wrote:
Yes, the main makefile is written manually (or at least, that's what I
It is /not/ the /compiler's/ job to know this! It is the /build/ system
that says what programs are run on which files in order to create all
the files needed.
And that is supposed to be simpler than writing a Makefile yourself is it?
Riiiiiight.
Who do you think wrote the makefile? A friendly goblin? /I/ wrote the
makefile. /I/ put rules in the makefile to run "gcc -MD" as and when
needed in order to generate the dependencies. The point is that no one
- not me, nor anyone else - needs to keep manually updating the makefile
to track the simple dependencies that can be calculated automatically.
I don't yet know whether you are wilfully ignorant, or trolling.
On 12/09/2021 11:29, HorseyWorsey@the_stables.com wrote:
On Sun, 12 Sep 2021 09:21:50 -0000 (UTC)
Juha Nieminen <nospam@thanks.invalid> wrote:
HorseyWorsey@the_stables.com wrote:
On Sun, 12 Sep 2021 08:56:42 -0000 (UTC)
Juha Nieminen <nospam@thanks.invalid> wrote:
HorseyWorsey@the_stables.com wrote:
Very nice. Now you have a single globals.h type file (VERY common in large
projects). How does gcc figure out which C files it needs to build from that?
It doesn't. It only compiles what you tell it to compile.
It has to be *something else* that runs it and tells it what to
compile. Often this is the 'make' program (which is reading a
file usually named 'Makefile').
Well thanks for that valuable input, we're all so much more informed now.
You made that sound sarcastic. If it is indeed sarcasm, I don't
really understand why.
Try following a thread before replying. A couple of posters were claiming the
compiler could automate the entire build system and I gave some basic examples
of why it couldn't. Now one of them is back-pedalling and basically saying it
can automate all the bits except the bits it can't when you need to edit the
makefile yourself. Genius. Then you come along and mention Makefiles. Well
thanks for the heads up, I'd forgotten what they were called.
Ah, so you are saying that /you/ have completely misunderstood the
thread and what people wrote, and thought mocking would make you look
clever.
12.09.2021 12:23 HorseyWorsey@the_stables.com wrote:
On Sat, 11 Sep 2021 18:40:58 +0200
David Brown <david.brown@hesbynett.no> wrote:
On 10/09/2021 17:47, HorseyWorsey@the_stables.com wrote:
On Fri, 10 Sep 2021 17:38:23 +0200
David Brown <david.brown@hesbynett.no> wrote:
Except it's dependency automation for noddy builds. For any complex builds
you're going to need a build system hence the examples I gave.
Do you still not understand what is being discussed here? "gcc -MD" is
/not/ a replacement for a build system. It is a tool to help automate
your build systems. The output of "gcc -MD" is a dependency file, which
your makefile (or other build system) imports.
Yes, I understand perfectly. You create huge dependency files which either
have to be stored in git (or similar) and updated when appropriate, or auto
What on earth are you babbling about? That's becoming insane.
In the rare chance you are not actually trolling: the dependency files
are generated by each build afresh, and they get used by the next build
in the same build tree for deciding which source files need to be
recompiled when some header file has changed. This is all automatic,
there are no manual steps involved except for setting it up once when
writing the initial Makefile (in case one still insists on writing
Makefiles manually).
There is no more point to put the dependency files into git than there
is to put the compiled object files there (in fact, a dependency file is
useless without object files).
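(That one-time setup is typically only a couple of lines - a sketch of
the usual pattern, with -MP added so deleted header files don't break
the next build:)

    # Sketch: each compilation drops an up-to-date .d file next to the .o
    # as a side effect; the next run of make reads them all back in.
    %.o: %.c
    	gcc -MMD -MP -c $< -o $@

    -include $(wildcard *.d)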
On Sun, 12 Sep 2021 12:49:14 +0200
David Brown <david...@hesbynett.no> wrote:
On 12/09/2021 11:23, HorseyWorsey@the_stables.com wrote:
Yes, the main makefile is written manually (or at least, that's what I
Exactly.
It is /not/ the /compiler's/ job to know this! It is the /build/ system
that says what programs are run on which files in order to create all
the files needed.
Exactly.
And that is supposed to be simpler than writing a Makefile yourself is it?
Riiiiiight.
Who do you think wrote the makefile? A friendly goblin? /I/ wrote the
makefile. /I/ put rules in the makefile to run "gcc -MD" as and when
needed in order to generate the dependencies. The point is that no one
- not me, nor anyone else - needs to keep manually updating the makefile
to track the simple dependencies that can be calculated automatically.
"Simple". Exactly.
I don't yet know whether you are wilfully ignorant, or trolling.
I'm rapidly getting the impression you and the others completely missed my
original point despite stating it numerous times. Frankly I can't be bothered
to continue with this.
On Sunday, September 12, 2021 at 2:55:15 PM UTC+3, David Brown
I think it's fair to say there is scope for improvement in the way
gcc documentation is handled, but it is still a good deal better
than many compilers and other projects.
Comparatively to public variant of llvm/clang docs - sure, but that's
pretty low bar. I never looked at apple's and google's releases of
clang docs, hopefully they are better than public release.
Comparatively to Microsoft - it depends. Some parts of the gcc docs
are better others are worse. However I would think that when
Microsoft's maintainers see a mistake in their online docs for old
compiler, or, more likely, are pointed to mistake by community, they
fix it without hesitation.
On 12/09/2021 15:01, Michael S wrote:
On Sunday, September 12, 2021 at 2:55:15 PM UTC+3, David Brown
I think it's fair to say there is scope for improvement in the way
gcc documentation is handled, but it is still a good deal better
than many compilers and other projects.
Comparatively to public variant of llvm/clang docs - sure, but that's pretty low bar. I never looked at apple's and google's releases of
clang docs, hopefully they are better than public release.
Comparatively to Microsoft - it depends. Some parts of the gcc docs
are better others are worse. However I would think that when
Microsoft's maintainers see a mistake in their online docs for old compiler, or, more likely, are pointed to mistake by community, they
fix it without hesitation.
I have not nearly enough experience with the documentation of MS's
compiler to tell - I have only ever looked up a few points. (The same
with clang.) I've read manuals for many other compilers over the years,
which are often much worse, but none of these tools are direct
comparisons with gcc (being commercial embedded toolchains targeting
one or a few specific microcontroller cores).
One especially "fun" case was a toolchain that failed to zero-initialise non-local objects that were not explicitly initialised - what you
normally get by startup code clearing the ".bss" segment. This
"feature" was documented in a footnote in the middle of the manual,
noting that the behaviour was not standards conforming and would
silently break existing C code.
On Sun, 12 Sep 2021 21:51:32 +1200
Ian Collins <ian-news@hotmail.com> wrote:
On 12/09/2021 21:10, Juha Nieminen wrote:
There's no reason to use optimizations while writing code and testing it.
There may be many!
Unoptimised code being too slow or too big to run on the target is
common in real-time or pretend (i.e. Linux) real-time systems. Getting
more comprehensive error checking is another.
Checking that code actually tests correctly when fully optimized is also important.
The number of programmers who understand basic things like the strict aliasing rule, or when pointer arithmetic is permitted in C++, is in my experience low (it became obvious during consideration of P0593 for
C++20 that many of those on the C++ standard committee didn't understand
the second of those). In fact I suspect the number of programmers who
fully understand all aspects of C++ and who are fully versed in the
standard is very small and approaching zero. Correspondingly, I suspect
that the number of C++ programs which do not unwittingly rely on
undefined behaviour also approaches 0.
Choosing -O3 on testing will at least tell you whether your particular compiler version in question, when optimizing code with undefined
behaviour that you had not previously recognized as undefined, will
give results contradicting your expectations. Programmers who treat C
as if it were a high level assembler language are particularly prone to
this problem.
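(One cheap way to get that coverage is simply to build and run the same
test program at more than one optimisation level - a sketch, with an
invented test source name:)

    # Sketch: identical tests, built unoptimised and fully optimised.
    # Code that only "works" at -O0 tends to show up here.
    check: tests_O0 tests_O3
    	./tests_O0
    	./tests_O3

    tests_O0: tests.cpp
    	g++ -Wall -Wextra -g -O0 -o $@ $<

    tests_O3: tests.cpp
    	g++ -Wall -Wextra -g -O3 -o $@ $<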
On 12/09/2021 01:11, Ian Collins wrote:
On 12/09/2021 07:56, Bart wrote:
On 11/09/2021 18:15, David Brown wrote:<snip listings>
On 11/09/2021 01:33, Bart wrote:
I agree with JN. With optimised code, what you have may have little
relationship with the original source code. If you're trying to trace a
logic problem, how do you map machine code to the corresponding source?
It's a /lot/ easier with -O1 than -O0. Or you use the debugger.
Oh, you mean look at the ASM manually? In that case definitely through
-O0. If I take this fragment:
for (int i=0; i<100; ++i) {
a[i]=b+c*d;
fn(a[i]);
}
So which one gets the prize?
The one which runs correctly the fastest!
Let's say none of them run correctly and your job is to find out why. Or maybe you're comparing two compilers at the same optimisation level, and
you want to find why one runs correctly and the other doesn't.
Or maybe this is part of a benchmark where writing to a[i] is part of
the test, but it's hard to gauge where one lot of generated code is
better than another, because the other has disappeared completely!
(I suppose in your world, a set of benchmark results where every one
runs in 0.0 seconds is perfection! I would say those are terrible benchmarks.)
Have a look at my first example above; would the a[i]=b+c*d be
associated with anything more meaningful than those two lines of
assembly?
Does it matter?
Ask why you're looking at the ASM in the first place. If there's no discernible correspondence with your source, then you might as well look
at any random bit of ASM code; it would be just as useful!
And for source code, what difference should it make whether the
generated code is optimised or not?
Because it is not always correct!
Sometimes the issue is on the lines of "Why is this taking so long? I
had expected less than 0.1µs, but it is taking nearly 0.2µs." You need
to look at the assembly for that.
That's the kind of thing where the unit tests Ian is always on about
don't really help.
Unit tests test logic, not performance. We run automated regression
tests on real hardware to track performance. If there's a change
between builds, it's trivial to identify the code commits that caused
the change.
Most of the stuff I do is not helped with unit tests.
Where there are things that can possibly be tested by ticking off entries in
a list, you find the real problems come up with combinations or contexts
you haven't anticipated and that can't be enumerated.
On Sunday, September 12, 2021 at 12:46:42 PM UTC+3, David Brown wrote:
[...]
On 12/09/2021 10:29, Michael S wrote:
gcc maintainers have a policy against updating/fixing docs.
From their perspective, compiler and docs are inseparable parts of the holy "release".
Well, yes. The gcc manual of a particular version documents the gcc of
that version. It seems an excellent policy to me.
It would be a little different if they were publishing a tutorial on
using gcc.
I tried to change their mind about it a few years ago, but didn't succeed.
Thankfully. It would be rather messy if they only had one reference
manual which was full of comments about which versions the particular
options or features applied to, as these come and go over time.
That's not what I was suggesting.
I was suggesting adding clarifications and suggestions for a
feature (it was something about the function attribute 'optimize') that
existed in gcc5 to the online copy of the respective manual hosted on
gcc.gnu.org/onlinedocs. Obviously, a previous version of the manual
could have been available to the historians among us in the gcc source
control database.
Instead, said clarifications and suggestions were added to the *next*
release of the manual. Oh, in fact, no, they didn't make it into the gcc6
manual. They were added to the gcc 7 manual.
So, gcc5 users now have no way to know that changes in docs apply to
gcc5 every bit as much as they apply to gcc7 and later.
Yes, for the record the page that I linked is not about a specific
version of 'make', it is part of the GNU make online manual.
On 12/09/2021 10:29, Michael S wrote:
On Sunday, September 12, 2021 at 2:09:30 AM UTC+3, Manfred wrote:
(As a side note, it wouldn't hurt if the GCC people updated their docs
from time to time...)
Your reference here was to the "make" manual, rather than the gcc documentation. But the gcc folk could add an example like this to their manual for the "-MT" option.
gcc maintainers have a policy against updating/fixing docs.
From their perspective, compiler and docs are inseparable parts of the holy "release".
Well, yes. The gcc manual of a particular version documents the gcc of
that version. It seems an excellent policy to me.
It would be a little different if they were publishing a tutorial on
using gcc.
I tried to change their mind about it a few years ago, but didn't succeed.
Thankfully. It would be rather messy if they only had one reference
manual which was full of comments about which versions the particular
options or features applied to, as these come and go over time.
I suppose it would be possible to make some kind of interactive
reference where you selected your choice of compiler version, target processor, etc., and the text adapted to suit. That could be a useful
tool, and help people see exactly what applied to their exact toolchain.
But it would take a good deal of work, and would be a rather different
thing from the current manuals.
Try following a thread before replying. A couple of posters were claiming
the compiler could automate the entire build system.
Such dependencies are taken care of automatically by the gcc -MD option,
which you have to specify for both Makefile and CMake based builds.
On Sat, 11 Sep 2021 18:40:58 +0200...
David Brown <david.brown@hesbynett.no> wrote:
On 10/09/2021 17:47, HorseyWorsey@the_stables.com wrote:
Do you still not understand what is being discussed here? "gcc -MD" is
/not/ a replacement for a build system. It is a tool to help automate
your build systems. The output of "gcc -MD" is a dependency file, which
your makefile (or other build system) imports.
Yes, I understand perfectly. You create huge dependency files which either
have to be stored in git (or similar) and updated when appropriate, or auto
generated in the makefile and then further used in the makefile, which has to
be manually written anyway unless it's simple, so what exactly is the point?
Also using
the compiler is sod all use if you need to fire off a script to auto generate
some code first.
No, it is not. It works fine - as long as you understand how your build
system works.
Excuse me? Ok, please do tell me how the compiler knows which script file to run to generate the header file. This'll be interesting.
The project is built from a header file and a C file that a Python script
generates from a text file (as well as lots of other C and header files).
If I change the text file or
the Python script and type "make", then first a new header and C file
are created from the text file. Then "gcc -MD" is run on the C file,
generating a new dependency file, since the dependency file depends on
the header and the C file. Then this updated dependency file is
imported by make, and shows that the object file (needed for the
link) depends on the updated C file, so the compiler is called on the file.
And that is supposed to be simpler than writing a Makefile yourself is it?
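(For readers keeping score, here is roughly what the setup described above looks like as a GNU makefile - a sketch with hypothetical file and script names, not the actual project:)

    # A minimal sketch of the kind of makefile being described (GNU make).
    # Use '>' instead of a tab to introduce recipe lines, so the sketch
    # survives copy/paste (GNU make 3.82+).
    .RECIPEPREFIX := >

    CC     := gcc
    CFLAGS := -O2 -g -MD -MP  # -MD/-MP make gcc write foo.d alongside foo.o

    OBJS := main.o generated.o

    prog: $(OBJS)
    > $(CC) $(CFLAGS) -o $@ $^

    # Hand-written rule: a script turns a text file into a header and a C file.
    # ('&:' is a grouped target, GNU make 4.3+.)
    generated.h generated.c &: data.txt gen.py
    > python3 gen.py data.txt

    # Coarse hand-written dependency so the header exists before anything that
    # includes it is compiled for the first time; after that the .d files take over.
    $(OBJS): generated.h

    %.o: %.c
    > $(CC) $(CFLAGS) -c -o $@ $<

    # Import the compiler-generated dependency files (ignored until they exist).
    -include $(OBJS:.o=.d)

The .d files are produced by the compiler as a side effect of each build and imported by make; they are never written by hand and never need to be stored in version control.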
On 12/09/2021 21:10, Juha Nieminen wrote:
There's no reason to use optimizations while writing code and testing it.
There may be many!
Unoptimised code being too slow or too big to run on the target is
common in real-time or pretend (i.e. Linux) real-time systems. Getting
more comprehensive error checking is another.
Yes, it was sarcasm.
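(A small illustration of the "more comprehensive error checking" point above - not from the thread: some of gcc's flow-based warnings depend on the optimiser's analysis, so the same -Wall build can be silent at -O0 and complain at -O2. The file name is made up.)

    /* maybe_uninit.c
       gcc -Wall -O2 -c maybe_uninit.c   -> typically warns that 'y' may be
                                            used uninitialized
       gcc -Wall -O0 -c maybe_uninit.c   -> typically no warning            */
    int pick(int x)
    {
        int y;
        if (x > 0)
            y = x * 2;
        return y;        /* uninitialised whenever x <= 0 */
    }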
On Sunday, September 12, 2021 at 7:53:29 PM UTC+3, David Brown wrote:
On 12/09/2021 15:01, Michael S wrote:
On Sunday, September 12, 2021 at 2:55:15 PM UTC+3, David Brown wrote:
I think it's fair to say there is scope for improvement in the way
gcc documentation is handled, but it is still a good deal better
than many compilers and other projects.
Compared to the public variant of the llvm/clang docs - sure, but that's a
pretty low bar. I never looked at Apple's and Google's releases of
clang docs; hopefully they are better than the public release.
Compared to Microsoft - it depends. Some parts of the gcc docs
are better, others are worse. However, I would think that when
Microsoft's maintainers see a mistake in their online docs for an old
compiler, or, more likely, are pointed to the mistake by the community, they
fix it without hesitation.
I have not nearly enough experience with the documentation of MS's
compiler to tell - I have only ever looked up a few points. (The same
with clang.) I've read manuals for many other compilers over the years,
which are often much worse, but none of these tools are direct
comparisons with gcc (being commercial embedded toolchains targeting
one or a few specific microcontroller cores).
One especially "fun" case was a toolchain that failed to zero-initialise
non-local objects that were not explicitly initialised - what you
normally get by startup code clearing the ".bss" segment. This
"feature" was documented in a footnote in the middle of the manual,
noting that the behaviour was not standards conforming and would
silently break existing C code.
I suppose you are talking about TI compilers.
IIRC, in their old docs (around 1998 to 2002) it was documented in a relatively clear way.
But it was quite a long time ago, so it's possible that I misremember.
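(To make the .bss point concrete - an illustration, not code from the posts: the language guarantees that objects with static storage duration start out as zero, and plenty of real code quietly relies on it.)

    #include <stdio.h>

    static unsigned call_count;      /* static storage duration: guaranteed to
                                        start at 0, normally implemented by the
                                        startup code clearing the .bss segment */

    static void log_call(void)
    {
        ++call_count;
    }

    int main(void)
    {
        log_call();
        log_call();
        printf("%u\n", call_count);  /* 2 on a conforming implementation;
                                        garbage if .bss is never cleared */
        return 0;
    }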
On 12/09/2021 15:55, Chris Vine wrote:
On Sun, 12 Sep 2021 21:51:32 +1200
Ian Collins <ian-news@hotmail.com> wrote:
On 12/09/2021 21:10, Juha Nieminen wrote:
There's no reason to use optimizations while writing code and
testing it.
There may be many!
Unoptimised code being too slow or too big to run on the target is
common in real-time or pretend (i.e. Linux) real-time systems. Getting
more comprehensive error checking is another.
Checking that code actually tests correctly when fully optimized is also
important.
The number of programmers who understand basic things like the strict
aliasing rule, or when pointer arithmetic is permitted in C++, is in my
experience low (it became obvious during consideration of P0593 for
C++20 that many of those on the C++ standard committee didn't understand
the second of those). In fact I suspect the number of programmers who
fully understand all aspects of C++ and who are fully versed in the
standard is very small and approaching zero. Correspondingly, I suspect
that the number of C++ programs which do not unwittingly rely on
undefined behaviour also approaches 0.
Choosing -O3 on testing will at least tell you whether your particular
compiler version in question, when optimizing code with undefined
behaviour that you had not previously recognized as undefined, will
give results contradicting your expectations. Programmers who treat C
as if it were a high level assembler language are particularly prone to
this problem.
The fun one I've had when debugging is when the compiler correctly spots
that a bit of code really ought to be a function because it's
duplicated. Which means you end up in the same bit of machine code from
two different source locations.
This is especially fun when looking at post-mortem dump files of some
code somebody else wrote.
While I've only ever once or twice found a genuine
compiler-made-bad-code bug in my entire career, UB resulting in different
behaviour from bad source is much more common. And if you want to be
sure about that you need to debug with the target optimisation level.
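(One mechanism that produces exactly this effect in gcc is identical code folding, -fipa-icf, which is enabled at -O2; linkers can do the same, e.g. gold's --icf. A tiny illustration with invented function names:)

    #include <stdio.h>

    /* Two functions with identical bodies.  With gcc -O2, -fipa-icf may emit
       a single block of machine code for both, so a breakpoint placed "in"
       scale_height() can be hit from calls to scale_width(), and a crash
       address maps back to either source line.  noinline is only here so the
       example survives inlining. */
    __attribute__((noinline)) static int scale_width(int x)  { return x * 3 + 1; }
    __attribute__((noinline)) static int scale_height(int x) { return x * 3 + 1; }

    int main(void)
    {
        printf("%d %d\n", scale_width(10), scale_height(20));
        return 0;
    }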
Ian Collins <ian-news@hotmail.com> wrote:
On 12/09/2021 21:10, Juha Nieminen wrote:
There's no reason to use optimizations while writing code and testing it.
There may be many!
Unoptimised code being too slow or too big to run on the target is
common in real-time or pretend (i.e. Linux) real-time systems. Getting
more comprehensive error checking is another.
Rather obviously you need to test that your program works when compiled
with optimizations (there are situations where bugs manifest themselves
only when optimizations are turned on).
But that wasn't my point. My point is that during development, when you are
writing, testing and debugging your code, you rarely need to turn on
optimizations. You can, of course (especially if it makes little
difference in compilation speed), but at the point where -O0 takes
10 seconds to compile and -O3 takes 1 minute to compile, you might reconsider.
On 12/09/2021 22:17, Bart wrote:
Most of the stuff I do is not helped with unit tests.
So it doesn't have any functions that can be tested? That's a new one
on me!
Where there are things that can possibly be tested by ticking off entries in
a list, you find the real problems come up with combinations or contexts
you haven't anticipated and that can't be enumerated.
If you find a problem, add a test for it to prove that you have fixed it
and to make sure it does not recur.
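(For what that kind of regression test can look like in its smallest form - an illustration with an invented function, no framework assumed:)

    #include <assert.h>

    /* Hypothetical function under test: once had an off-by-one bug for n == 0. */
    static int clamp_index(int i, int n)
    {
        if (n <= 0) return 0;
        if (i < 0)  return 0;
        if (i >= n) return n - 1;
        return i;
    }

    int main(void)
    {
        assert(clamp_index(5, 0) == 0);    /* the regression test added with the fix */
        assert(clamp_index(-1, 10) == 0);  /* plus the basic behaviour */
        assert(clamp_index(3, 10) == 3);
        assert(clamp_index(12, 10) == 9);
        return 0;
    }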
On 12/09/2021 18:13, HorseyWorsey@the_stables.com wrote:
I don't yet know whether you are wilfully ignorant, or trolling.
I'm rapidly getting the impression you and the others completely missed my
original point despite stating it numerous times.
You had no point - you failed to read or understand someone's post,
thought it would make you look smart or cool to mock them, and have been
digging yourself deeper in a hole ever since.
As a side effect, you might have learned something - but I am sure you
will deny that. Other people have, which is the beauty of Usenet - even
the worst posters can sometimes inspire a thread that is helpful or
interesting to others.
Frankly I can't be bothered
to continue with this.
I suppose that is as close to an apology and admission of error as we
will ever get.
On Sunday, September 12, 2021 at 7:13:27 PM UTC+3, Horsey...@the_stables.com wrote:
On Sun, 12 Sep 2021 12:49:14 +0200
David Brown <david...@hesbynett.no> wrote:
On 12/09/2021 11:23, HorseyWorsey@the_stables.com wrote:
Yes, the main makefile is written manually (or at least, that's what I
Exactly.
It is /not/ the /compiler's/ job to know this! It is the /build/ system
that says what programs are run on which files in order to create all
the files needed.
Exactly.
And that is supposed to be simpler than writing a Makefile yourself is
it?
"Simple". Exactly.
Riiiiiight.
Who do you think wrote the makefile? A friendly goblin? /I/ wrote the
makefile. /I/ put rules in the makefile to run "gcc -MD" as and when
needed in order to generate the dependencies. The point is that no one
- not me, nor anyone else - needs to keep manually updating the makefile
to track the simple dependencies that can be calculated automatically.
I don't yet know whether you are wilfully ignorant, or trolling.
I'm rapidly getting the impression you and the others completely missed my
original point despite stating it numerous times. Frankly I can't be bothered
to continue with this.
Frankly, from the view of the discussion I see on Google Groups it's quite
difficult to figure out what you are arguing for. Or against.
Are you saying that all of us have to teach ourselves cmake, even though
writing makefiles by hand + utilizing .d files generated by compilers has
served our needs rather well for the last 10-20-30 years?
HorseyWorsey@the_stables.com wrote:
Yes, it was sarcasm.
Well, good luck trying to get any more answers from me. I don't usually
placate assholes.
On Sun, 12 Sep 2021 18:45:07 +0200
I've learned how you move the goalposts when you're losing the argument.
"Cmake is better than makefiles which are ancient and useless"
"Oh ok, makefiles are fine. You can do everything with dependency files and
don't need to write the makefile yourself"
"Oh ok, you can't do everything with dependency files and do need to write some
of the makefile yourself".
Etc etc etc.
On 12/09/2021 22:18, Vir Campestris wrote:
This is especially fun when looking at post-mortem dump files of some
code somebody else wrote.
Debugging someone else's code is always horrible...
While I've only ever once or twice found a genuine
compiler-made-bad-code bug in my entire career, UB resulting in different
behaviour from bad source is much more common. And if you want to be
sure about that you need to debug with the target optimisation level.
I have hit a few bugs in compilers over the years.
On Mon, 13 Sep 2021 14:15:19 +0200...
David Brown <david.brown@hesbynett.no> wrote:
...I'd love to see a reference where I mention CMake at all. It's not a
tool I have ever used. As for other people's posts, can you give any
reference to posts that suggest that "gcc -MD" is anything other than an
aid to generating dependency information that can be used by a build
system (make, ninja, presumably CMake, and no doubt many other systems)?
The usual response from people on this group: pretend something wasn't said when it becomes inconvenient.
On 13/09/2021 12:55, HorseyWorsey@the_stables.com wrote:
On Sun, 12 Sep 2021 18:45:07 +0200
I've learned how you move the goalposts when you're losing the argument.
"Cmake is better than makefiles which are ancient and useless"
"Oh ok, makefiles are fine. You can do everything with dependency files and
don't need to write the makefile yourself"
"Oh ok, you can't do everything with dependency files and do need to write some
of the makefile yourself".
Etc etc etc.
I'd love to see a reference where I mention CMake at all. It's not a
tool I have ever used. As for other people's posts, can you give any
reference to posts that suggest that "gcc -MD" is anything other than an
aid to generating dependency information that can be used by a build
system (make, ninja, presumably CMake, and no doubt many other systems)?
No, I am confident that you cannot.
You have misunderstood and misrepresented others all along. It's fine
to misunderstand or misread - it happens to everyone at times. Mocking,
lying, sarcasm to try to hide your mistakes when they are pointed out to
you - that is much less fine.
Why in the world would you store them in git? They are
compiler-generated files, not source files. Do you normally keep .o
files in git? How about executables? You don't need to update them
manually; if you set up your build system properly, they get updated automatically when needed.
On 14/09/2021 09:00, Vir Campestris wrote:
On 13/09/2021 04:12, James Kuyper wrote:
<snip>
Why in the world would you store them in git? They are
compiler-generated files, not source files. Do you normally keep .o
files in git? How about executables? You don't need to update them
manually; if you set up your build system properly, they get updated
automatically when needed.
We absolutely keep executables in a controlled archive.
The build system produces a system image; this is in turn made from
various files, some of which are executable; we store the image, the
executables, the symbol files that go with them, and put a label in the
source control system to show where it was built from.
The executables and symbols can save a _lot_ of time when looking at dumps.
But they can be recreated from the source and a given source control
hash or tag?
Ian Collins <ian-news@hotmail.com> writes:
On 14/09/2021 09:00, Vir Campestris wrote:
On 13/09/2021 04:12, James Kuyper wrote:
<snip>
Why in the world would you store them in git? They are
compiler-generated files, not source files. Do you normally keep .o
files in git? How about executables? You don't need to update them
manually; if you set up your build system properly, they get updated
automatically when needed.
We absolutely keep executables in a controlled archive.
The build system produces a system image; this is in turn made from
various files, some of which are executable; we store the image, the
executables, the symbol files that go with them, and put a label in the
source control system to show where it was built from.
The executables and symbols can save a _lot_ of time when looking at dumps.
But they can be recreated from the source and a given source control
hash or tag?
Maybe. If you have the exact same compiler, assembler and linker. Maybe.
And not if the linker uses any form of address space randomization.
On 13/09/2021 01:51, Ian Collins wrote:
On 12/09/2021 22:17, Bart wrote:
Most of the stuff I do is not helped with unit tests.
So it doesn't have any functions that can be tested? That's a new one
on me!
You mean some tiny leaf function that has a well-defined task with a
known range of inputs? That would be in the minority.
The problems I have to deal with are several levels above that and
involve the bigger picture. A flaw in an approach might be discovered
that means changes to global data structures and new or rewritten functions.
Also, if you're developing languages then you might have multiple sets
of source code where the problem might lie.
Maybe unit tests could have been applied to one of those sources, such as
that C compiler, which might have inherent bugs exposed by the revised implementation language.
Where there are things that can possibly be tested by ticking off entries in
a list, you find the real problems come up with combinations or contexts
you haven't anticipated and that can't be enumerated.
If you find a problem, add a test for it to prove that you have fixed it
and to make sure it does not recur.
My 'unit tests' for language products consist of running non-trivial applications to see if they still work.
Or running multiple generations of a compiler.
So while I know that my C compiler bcc.exe can build Tiny C into tcc.exe
and the result can build a range of C programs, if I take that tcc.exe
and build Tiny C with it, that new tcc2.exe doesn't work (error in the generated binaries).
So where do you start with that?
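(Not something from Bart's posts, but the conventional way into that kind of self-build failure: tcc.exe and tcc2.exe were built from identical source by two different compilers, and one of the two results works, so you can bisect at the object level instead of staring at tens of thousands of lines. gcc's own bootstrap applies the same idea by requiring its stage 2 and stage 3 builds to compare identical. The option spellings below are only illustrative - bcc's and tcc's real flags may differ.)

    rem Illustrative only - substitute the compilers' real option spellings.
    rem 1. Compile each Tiny C source file twice, once with each compiler.
    bcc -c codegen.c -o codegen_bcc.obj
    tcc -c codegen.c -o codegen_tcc.obj
    rem    ... and likewise for the other translation units.
    rem 2. Link tcc2.exe from the bcc-built objects (that combination should
    rem    work), then relink, swapping in tcc-built objects a few at a time.
    rem    The first swap that produces broken binaries names the file that
    rem    tcc.exe miscompiles; disassembling its two object files narrows it
    rem    to a function, which is where the real debugging starts.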
On 13/09/2021 22:03, Bart wrote:
So while I know that my C compiler bcc.exe can build Tiny C into tcc.exe
and the result can build a range of C programs, if I take that tcc.exe
and build Tiny C with it, that new tcc2.exe doesn't work (error in the
generated binaries).
So where do you start with that?
By testing the logic in your code?
David Brown <david.brown@hesbynett.no> writes:
On 13/09/2021 23:42, Scott Lurndal wrote:
Ian Collins <ian-news@hotmail.com> writes:
On 14/09/2021 09:00, Vir Campestris wrote:
On 13/09/2021 04:12, James Kuyper wrote:
<snip>
Why in the world would you store them in git? They are
compiler-generated files, not source files. Do you normally keep .o
files in git? How about executables? You don't need to update them
manually; if you set up your build system properly, they get updated
automatically when needed.
We absolutely keep executables in a controlled archive.
The build system produces a system image; this is in turn made from
various files, some of which are executable; we store the image, the
executables, the symbol files that go with them, and put a label in the
source control system to show where it was built from.
The executables and symbols can save a _lot_ of time when looking at dumps.
But they can be recreated from the source and a given source control
hash or tag?
Maybe. If you have the exact same compiler, assembler and linker. Maybe.
And not if the linker uses any form of address space randomization.
All these things vary by project. For the kinds of things I do, I make
a point of archiving the toolchain tools (though not in a git
repository). Reproducible builds are important for me. Other kinds of
projects have different setups and are perhaps built using a variety of
different tools.
In our case, the debuginfo files (DWARF data extracted from the ELF
prior to shipping to customers) are saved for each software 'drop'
to a customer. Much easier to deal with than finding the particular version of the toolset used to build the product.
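(For anyone who has not met it, this is the standard GNU way of making that split between the shipped binary and the saved debug info - ordinary binutils options, shown on a hypothetical executable name:)

    objcopy --only-keep-debug prog prog.debug    # extract the DWARF data
    objcopy --strip-debug prog                   # ship the stripped executable
    objcopy --add-gnu-debuglink=prog.debug prog  # record where the symbols live
    # Later, gdb loading prog (or a core dump from it) can find prog.debug via
    # the debuglink, provided the .debug file is kept where gdb can see it.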
On 14/09/2021 01:05, Ian Collins wrote:
On 13/09/2021 22:03, Bart wrote:
So while I know that my C compiler bcc.exe can build Tiny C into tcc.exe
and the result can build a range of C programs, if I take that tcc.exe
and build Tiny C with it, that new tcc2.exe doesn't work (error in the
generated binaries).
So where do you start with that?
By testing the logic in your code?
Which bit of logic out of 10s of 1000s of lines?
I get the impression from you that, with a product like a compiler, if
it passes all its unit tests, then it is unnecessary to test it further
with any actual applications! Just ship it immediately.
On 11/09/2021 11:33, Bart wrote:
On 10/09/2021 12:47, David Brown wrote:
On 10/09/2021 11:10, Juha Nieminen wrote:
<snip>
I agree with JN. With optimised code, what you have may have little
relationship with the original source code. If you're trying to trace a
logic problem, how do you map machine code to the corresponding source?
If you are trying to trace a logic problem, unit tests are your friend.
The debugger is the last resort...
After continually bawling me out for putting too much emphasis on
compilation speed, are you saying for the first time that it might be
important after all?!
If your project has thousands of source files and builds for several
targets, then build times are obviously important.
However you seem to be in favour of letting off the people who write the
tools (because it is unheard of for them create an inefficient
product!), and just throwing more hardware - and money - at the problem.
Build systems are a collection of tools, one of which is the compiler.
The collection gives you the performance you want.