• Examples of current platforms/architectures where sizeof(void*) >

    From David Brown@21:1/5 to Bart on Sat Sep 11 19:15:43 2021
    On 11/09/2021 01:33, Bart wrote:
    On 10/09/2021 12:47, David Brown wrote:
    On 10/09/2021 11:10, Juha Nieminen wrote:

    However, gcc -O0 is quite useful in development. For starters, when you
    are interactively debugging (eg. with gdb, or any of the myriads of
    debuggers in different IDEs), you usually don't want things like your
    functions being inlined, loops unrolled, compile-time arithmetic
    (other than, of course, that of constexpr/consteval functions), etc.

    I always compile with debugging information, and I regularly use
    breakpoints, stepping, assembly-level debug, etc.  I /hate/ having to
    deal with unoptimised "gcc -O0" code - it is truly awful.  You have vast
    amounts of useless extra code that hides the real action.  In the
    assembly, the code to load and store variables from the stack, instead
    of registers, often outweighs the actual interesting stuff.
    Single-stepping through your important functions becomes far harder
    because all the little calls that should be inlined out of existence,
become layers of calls that you might have to dig through.  Most of
what you are looking for is drowned out in the noise.

    I agree with JN. With optimised code, what you have may have little relationship with the original source code. If you're trying to trace a
    logic problem, how do you map machine code to the corresponding source?


    It's a /lot/ easier with -O1 than -O0. Or you use the debugger.

    Or more importantly, how does the debugger do so?

    The compiler generates lots of debug information, and the debugger reads
    it. How else would it work?


    For such purposes, an interpreter might be a better bet for some kinds
    of bugs.


    There are many kinds of bugs, and many tools and tactics for finding and squashing them. As well as compiling at -O2 with debugging information enabled, and running with a good debugger, there are many other tools in
    my toolbox. I mentioned some, such as varying compiler options with
    pragmas or adding virtual variables for tracing. Like most people, I
    will also add debug output (though it is usually on a UART in my
    systems, rather than a console printf). Sometimes I will drive test
    pins and have an oscilloscope connected - there are many possibilities.

And I can quite happily accept that for some kinds of bugs an interpreter
    could be handy. Certainly there are many tools that I /don't/ use, such
    as some types of sanitizers and tracing malloc libraries, since they are
    not relevant to my needs.

    However, I've never used a debugger (other than some distant attempts at writing one); what are you actually stepping through: machine code, or
    lines of source code, or both?


    Yes, yes and yes :-)

    I would have thought that for a logic problem (or most bugs actually
provided they are reproducible), you'd want to be looking at source
    code, not native code. (Unless you're perhaps debugging a compiler,
    which is what I do quite a lot.)


    Mostly, that is correct.

    And for source code, what difference should it make whether the
    generated code is optimised or not?


    Because it is not always correct!

    Sometimes the issue is on the lines of "Why is this taking so long? I
    had expected less than 0.1µs, but it is taking nearly 0.2µs." You need
    to look at the assembly for that.


Of course there's also the question of compilation speed. When compiling
small or even medium-sized projects, we seldom tend to pay attention
    to how fast "gcc -O0" compiles compared to "gcc -O3", especially since
    we tend to have these supercomputers on our desks.

    However, when compiling much larger projects, or when compiling on
    a very inefficient platform, the difference can become substantial,
    and detrimental to development if it's too long.

    Don't solve that by using weaker tools.  Solve it by improving how you
    use the tools - get better build systems to avoid unnecessary
    compilation, use ccache if you have build options or variations that you
    swap back and forth, use distcc to spread the load, explain to your boss
    why a Ryzen or even a ThreadRipper will save money overall as /your/
    time costs more than the computer's time.  "I don't use optimisation
    because it is too slow" is an excuse for hobby developers, not
    professionals.
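
As a rough sketch of what "using the tools better" can look like in a
makefile - the compiler wrapper, flags and job count here are purely
illustrative, not taken from any particular project:

    CXX := ccache g++          # reuse cached object files between rebuilds
    # CXX := distcc g++        # or spread compilations across several machines
    CXXFLAGS := -O2 -g -Wall -Wextra
    MAKEFLAGS += -j8           # run compilations in parallel

The optimisation level stays where it is useful; the build gets faster by
caching, distributing and parallelising the work instead.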

    After continually bawling me out for putting too much emphasis on
    compilation speed, are you saying for the first time that it might be important after all?!

    No, I am saying that if compilation speed is a problem for how you work,
    then you are probably not using the tools in the appropriate way. Use
    the good tools in a better way, rather than using poorer tools (and I
    count "gcc -O0" as a poor tool here).


However, you seem to be in favour of letting off the people who write the tools (because it is unheard of for them to create an inefficient
    product!), and just throwing more hardware - and money - at the problem.

    The people making gcc do not work for me. They /do/ get some money from
    me (or rather, my employer) in indirect ways - for example, we buy microcontrollers with ARM cores, and ARM pays some gcc developers. But
    I am hardly in a position to tell the gcc developers "stop adding
    optimisations and extra warning checks, and forget supporting new
    devices and new language standards - make everything faster instead".

    I don't have a problem with compile speed. I think the same applies to
    most people who get appropriate tools for their tasks, learn to use them
    in the most suitable ways, and match their development processes
    thoughtfully.

    Of course it is always great if a compiler runs faster - I have never
    heard anyone complain that their tools are too fast! But that is very different from saying there is a /problem/.

    I'd be happy if my car could be upgraded to get twice the petrol
    mileage. But I don't see its petrol usage as a problem - and I
    certainly would not swap it for a moped just because the moped uses much
    less petrol.

  • From Bart@21:1/5 to David Brown on Sat Sep 11 20:56:05 2021
    On 11/09/2021 18:15, David Brown wrote:
    On 11/09/2021 01:33, Bart wrote:
    On 10/09/2021 12:47, David Brown wrote:
    On 10/09/2021 11:10, Juha Nieminen wrote:

However, gcc -O0 is quite useful in development. For starters, when you
are interactively debugging (eg. with gdb, or any of the myriads of
debuggers in different IDEs), you usually don't want things like your
functions being inlined, loops unrolled, compile-time arithmetic
(other than, of course, that of constexpr/consteval functions), etc.

I always compile with debugging information, and I regularly use
breakpoints, stepping, assembly-level debug, etc.  I /hate/ having to
deal with unoptimised "gcc -O0" code - it is truly awful.  You have vast
amounts of useless extra code that hides the real action.  In the
assembly, the code to load and store variables from the stack, instead
of registers, often outweighs the actual interesting stuff.
Single-stepping through your important functions becomes far harder
because all the little calls that should be inlined out of existence,
become layers of calls that you might have to dig through.  Most of
what you are looking for is drowned out in the noise.

    I agree with JN. With optimised code, what you have may have little
    relationship with the original source code. If you're trying to trace a
    logic problem, how do you map machine code to the corresponding source?


    It's a /lot/ easier with -O1 than -O0. Or you use the debugger.

    Oh, you mean look at the ASM manually? In that case definitely through
    -O0. If I take this fragment:

for (int i=0; i<100; ++i) {
    a[i]=b+c*d;
    fn(a[i]);
}

    then using 'gcc -S -O1 -fverbose-asm' gives me:

    # c.c:8: a[i]=b+c*d;
    movl $0, %esi #, _2
    movl $100, %ebx #, ivtmp_4
    .L2:
    # c.c:9: fn(a[i]);
    movl %esi, %ecx # _2,
    call fn #
    # c.c:7: for (int i=0; i<100; ++i) {
    subl $1, %ebx #, ivtmp_4
    jne .L2 #,

    Where's all my code gone?! With -O0 I get this:

    # c.c:7: for (int i=0; i<100; ++i) {
    movl $0, -4(%rbp) #, i
    # c.c:7: for (int i=0; i<100; ++i) {
    jmp .L2 #
    .L3:
    # c.c:8: a[i]=b+c*d;
    movl -8(%rbp), %eax # c, tmp87
    imull -12(%rbp), %eax # d, tmp87
    movl %eax, %edx # tmp87, _1
    # c.c:8: a[i]=b+c*d;
    movl -16(%rbp), %eax # b, tmp88
    addl %eax, %edx # tmp88, _2
    # c.c:8: a[i]=b+c*d;
    movl -4(%rbp), %eax # i, tmp90
    cltq
    movl %edx, -64(%rbp,%rax,4) # _2, a[i_4]
    # c.c:9: fn(a[i]);
    movl -4(%rbp), %eax # i, tmp92
    cltq
    movl -64(%rbp,%rax,4), %eax # a[i_4], _3
    movl %eax, %ecx # _3,
    call fn #
    # c.c:7: for (int i=0; i<100; ++i) {
    addl $1, -4(%rbp) #, i
    .L2:
    # c.c:7: for (int i=0; i<100; ++i) {
    cmpl $99, -4(%rbp) #, i
    jle .L3 #,

    This looks pretty dreadful (gas format always does; gcc can produce
Intel-style; see below). However, you can much more clearly match the
    elements of your C code with lines of the ASM.

My own C compiler can produce this, which is one step back from its -S
    output:

    mov word32 [Dframe+i], 0
    jmp L4
    L5:
    iwiden D0, word32 [Dframe+i]
    mov A1, [Dframe+b]
    mov A3, [Dframe+c]
    imul A3, [Dframe+d]
    add A1, A3
    mov [Dframe+D0*4+a], A1
    sub Dstack, 32
    iwiden D10, word32 [Dframe+i]
    mov A10, [Dframe+D10*4+a]
    call fn*
    add Dstack, 32
    L2:
    inc word32 [Dframe+i]
    L4:
    mov A0, [Dframe+i]
    cmp A0, 100
    jl L5

    (The proper ASM changes 'iwiden' to 'movsx', and the local variable
    names to offsets.)

    If I code the equivalent in my language [where ints are 64 bits], and
    turn on its very modest optimiser, it produces this (with simplified
    local names specific for debugging purposes):

    mov R.i, 0
    L5:
    mov D0, R.c
    imul2 D0, R.d
    mov D1, R.b
    add D1, D0
    mov [Dframe+R.i*8+start.a], D1
    mov D10, D1
    call t.fn
    L6:
    inc R.i
    cmp R.i, 9
    jle L5

    So which one gets the prize?

    To me, both gcc-produced listings look like a nightmare. TBF that can
    probably be cleaned up; the listing produced via godbolt.org looks like
    this for -O0 in Intel format:

    mov DWORD PTR [rbp-4], 0
    jmp .L2
    .L3:
    mov eax, DWORD PTR [rbp-8]
    imul eax, DWORD PTR [rbp-12]
    mov edx, eax
    mov eax, DWORD PTR [rbp-16]
    add edx, eax
    mov eax, DWORD PTR [rbp-4]
    cdqe
    mov DWORD PTR [rbp-64+rax*4], edx
    mov eax, DWORD PTR [rbp-4]
    cdqe
    mov eax, DWORD PTR [rbp-64+rax*4]
    mov edi, eax
    call fn
    add DWORD PTR [rbp-4], 1
    .L2:
    cmp DWORD PTR [rbp-4], 99
    jle .L3
    mov eax, 0

    But it's still not as easy to follow as either of mine.

    So, yes, decent tools are important...

    Or more importantly, how does the debugger do so?

    The compiler generates lots of debug information, and the debugger reads
    it. How else would it work?

    Have a look at my first example above; would the a[i]=b+c*d be
    associated with anything more meaningful than those two lines of assembly?

    And for source code, what difference should it make whether the
    generated code is optimised or not?


    Because it is not always correct!

    Sometimes the issue is on the lines of "Why is this taking so long? I
    had expected less than 0.1µs, but it is taking nearly 0.2µs." You need
    to look at the assembly for that.

That's the kind of thing where the unit tests Ian is always on about
don't really work.

    I don't have a problem with compile speed.

    Then just scale up the size of the project; you will hit a point where
    it /is/ a problem! Or change the threshold at which any hanging about
    becomes incredibly annoying; mine is about half a second.

Just evading the issue by, instead of getting a tool to work more
    quickly, making it try to avoid compiling things as much as possible,
    isn't a satisfactory solution IMO.

    It's like avoiding spending too long driving your car, due to its only
    managing to do 3 mph, by cutting down on your trips as much as possible.
    It's a slow car - /that's/ the problem.


    I'd be happy if my car could be upgraded to get twice the petrol
    mileage. But I don't see its petrol usage as a problem - and I
    certainly would not swap it for a moped just because the moped uses much
    less petrol.

    There are probably other reasons why deliveries are often done by moped.
    Being cheap to run is one, that it's small and nippy and can go anywhere
    might be others. You might also be able to afford to have a dozen on the
    go at any one time.

  • From Manfred@21:1/5 to David Brown on Sun Sep 12 01:09:08 2021
    On 9/9/2021 10:17 PM, David Brown wrote:
    On 09/09/2021 18:41, James Kuyper wrote:
    On 9/9/21 4:54 AM, MisterMule@stubborn.uk wrote:
    On Wed, 8 Sep 2021 20:22:52 +0300
    Paavo Helde <myfirstname@osa.pri.ee> wrote:
08.09.2021 13:24 MisterMule@stubborn.uk wrote:

You can write a makefile just as simple. However, what happens when you
want foo.c recompiled when foo.h and bar.h change, but bar.c should only
be recompiled when bar.h and moo.h change, moo.c should only be recompiled
when moo.h changes, and main.c should be recompiled when anything changes?

Such dependencies are taken care of automatically by the gcc -MD option,

No, unless the compiler is clairvoyant they aren't.

    That option causes a dependencies file to be created specifying all the
    dependencies that the compiler notices during compilation. That file can
    then be used to avoid unnecessary re-builds the next time the same file
    is compiled. The dependency file is therefore always one build
    out-of-date; if you created any new dependencies, or removed any old
    ones, the dependencies file will be incorrect until after the next time
    you do a build. It's therefore not a perfect solution - but neither is
    it useless.


    The trick is to have makefile (or whatever build system you use) rules
    along with gcc so that the dependency file not only labels the object
    file as dependent on the C or C++ file and all the include files it
    uses, recursively, but also labels the dependency file itself to be
    dependent on the same files. Then if the source file or includes are changed, the dependency file is re-created, and make is smart enough to
    then reload that dependency file to get the new dependencies for
    building the object file.

    The makefile rules involved are close to APL in readability, but once
    you have figured out what you need, you can re-use it for any other
    project. And it solves the problem you have here.


    So, for example, if you have these files:

    a.h
    ---
    #include "b.h"

    b.h
    ---
    #define TEST 1

    c.c
    ---
    #include "a.h"
    #include <stdio.h>

    int main(void) {
    printf("Test is %d\n", TEST);
    }


    Then "gcc -MD c.c" makes a file

    c.d
    ---
    c.o: c.c /usr/include/stdc-predef.h a.h b.h /usr/include/stdio.h \
    /usr/include/x86_64-linux-gnu/bits/libc-header-start.h \
    /usr/include/features.h /usr/include/x86_64-linux-gnu/sys/cdefs.h \
    /usr/include/x86_64-linux-gnu/bits/wordsize.h \
    ...


    Using "gcc -MMD c.c" is more helpful, usually, because it skips the
    system includes:

    c.d
    ---
    c.o: c.c a.h b.h


    But the real trick is "gcc -MMD -MT 'c.d c.o' c.c" :

    c.d
    ---
    c.d c.o: c.c a.h b.h


    Now "make" knows that the dependency file is also dependent on the C
    file and headers.


    What you are describing is substantially:

    https://www.gnu.org/software/make/manual/html_node/Automatic-Prerequisites.html

    with the addition of the -MT gcc option, which removes the need for the
    nasty 'sed' command in the "%.d: %.c" rule - which is the kind of thing
    that tends to keep people away.

    Thanks for pointing this out.

    I guess in that rule one can use the single command:
    %.d: %.c
    $(CC) $(CPPFLAGS) -MM -MT '$*.o $@' -MF $@ $<
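
A minimal sketch of how that rule is usually wired into the rest of a
makefile (the variable and target names are illustrative, and recipe lines
must of course start with a tab):

    SRCS := $(wildcard *.c)
    OBJS := $(SRCS:.c=.o)
    DEPS := $(SRCS:.c=.d)

    prog: $(OBJS)
            $(CC) -o $@ $^

    %.d: %.c
            $(CC) $(CPPFLAGS) -MM -MT '$*.o $@' -MF $@ $<

    -include $(DEPS)

The '-include' pulls the generated .d files back in on the next run of
make, and the leading '-' keeps make quiet when they don't exist yet.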

    (As a side note, it wouldn't hurt if the GCC people updated their docs
    from time to time...)

  • From Ian Collins@21:1/5 to Bart on Sun Sep 12 12:11:34 2021
    On 12/09/2021 07:56, Bart wrote:
    On 11/09/2021 18:15, David Brown wrote:
    On 11/09/2021 01:33, Bart wrote:
    On 10/09/2021 12:47, David Brown wrote:
    On 10/09/2021 11:10, Juha Nieminen wrote:

However, gcc -O0 is quite useful in development. For starters, when you
are interactively debugging (eg. with gdb, or any of the myriads of
debuggers in different IDEs), you usually don't want things like your
functions being inlined, loops unrolled, compile-time arithmetic
(other than, of course, that of constexpr/consteval functions), etc.

I always compile with debugging information, and I regularly use
breakpoints, stepping, assembly-level debug, etc.  I /hate/ having to
deal with unoptimised "gcc -O0" code - it is truly awful.  You have vast
amounts of useless extra code that hides the real action.  In the
assembly, the code to load and store variables from the stack, instead
of registers, often outweighs the actual interesting stuff.
Single-stepping through your important functions becomes far harder
because all the little calls that should be inlined out of existence,
become layers of calls that you might have to dig through.  Most of
what you are looking for is drowned out in the noise.

    I agree with JN. With optimised code, what you have may have little
    relationship with the original source code. If you're trying to trace a
    logic problem, how do you map machine code to the corresponding source?


    It's a /lot/ easier with -O1 than -O0. Or you use the debugger.

    Oh, you mean look at the ASM manually? In that case definitely through
    -O0. If I take this fragment:

for (int i=0; i<100; ++i) {
    a[i]=b+c*d;
    fn(a[i]);
}

    <snip listings>

    So which one gets the prize?

    The one which runs correctly the fastest!

    You appear to be stuck in the "C as a high level assembler" mindset.
    This shouldn't be true for C and definitely isn't true for C++.

    Optimised code often bears little resemblance to the original source and
    the same source compiled with the same compiler can be optimised in
    different ways depending on the context.

    <snip>

    But it's still not as easy to follow as either of mine.

    So, yes, decent tools are important...

    They are, and decent appears to have different meanings to different
    people! From my perspective a decent compiler will correctly compile my
    code, provide excellent diagnostics and a high degree of optimisation.

    Or more importantly, how does the debugger do so?

    The compiler generates lots of debug information, and the debugger reads
    it. How else would it work?

    Have a look at my first example above; would the a[i]=b+c*d be
    associated with anything more meaningful than those two lines of assembly?

    Does it matter?

    And for source code, what difference should it make whether the
    generated code is optimised or not?


    Because it is not always correct!

    Sometimes the issue is on the lines of "Why is this taking so long? I
    had expected less than 0.1µs, but it is taking nearly 0.2µs." You need
    to look at the assembly for that.

That's the kind of thing where the unit tests Ian is always on about
don't really work.

    Unit tests test logic, not performance. We run automated regression
    tests on real hardware to track performance. If there's a change
    between builds, it's trivial to identify the code commits that caused
    the change.

    I don't have a problem with compile speed.

    Then just scale up the size of the project; you will hit a point where
    it /is/ a problem! Or change the threshold at which any hanging about
    becomes incredibly annoying; mine is about half a second.

    Correct, so you scale up the thing you have control over, the build infrastructure. It's safe to say that no one here has their own C++
compiler they can tweak to go faster! Even with your tools, you have to sacrifice diagnostics and optimisations for speed.

Just evading the issue by, instead of getting a tool to work more
    quickly, making it try to avoid compiling things as much as possible,
    isn't a satisfactory solution IMO.

    A build system is more than just a compiler, there are plenty of other
    tools you can deploy to speed up builds.

    It's like avoiding spending too long driving your car, due to its only managing to do 3 mph, by cutting down on your trips as much as possible.
    It's a slow car - /that's/ the problem.

    Poor analogy. A better one is your car is slow because it only has a
    single cylinder engine, so you can make it faster with a bigger cylinder
    or more of them!

    --
    Ian.

  • From Juha Nieminen@21:1/5 to HorseyWorsey@the_stables.com on Sun Sep 12 08:56:42 2021
    HorseyWorsey@the_stables.com wrote:
    Very nice. Now you have a single globals.h type file (VERY common in large projects). How does gcc figure out which C files it needs to build from that?

    It doesn't. It only compiles what you tell it to compile.

    It has to be *something else* that runs it and tells it what to
    compile. Often this is the 'make' program (which is reading a
    file usually named 'Makefile').

  • From Juha Nieminen@21:1/5 to Bart on Sun Sep 12 09:00:07 2021
    Bart <bc@freeuk.com> wrote:
    The 'supercomputer' on my desk is not significantly faster than the RPi4
    you mention below.

    Then you must have a PC from the 1990's, because the Raspberry Pi 4
    is a *very slow* system, believe me. I know, I have one. What takes
    a few seconds to compile on my PC can take a minute to compile on
    the Pi.

    If your code is fairly standard C, try using Tiny C. I expect your
    program will build in one second or thereabouts.

    It's C++. (This is a C++ newsgroup, after all.)

  • From Juha Nieminen@21:1/5 to David Brown on Sun Sep 12 09:10:10 2021
    David Brown <david.brown@hesbynett.no> wrote:
    I always compile with debugging information, and I regularly use
    breakpoints, stepping, assembly-level debug, etc. I /hate/ having to
    deal with unoptimised "gcc -O0" code - it is truly awful. You have vast amounts of useless extra code that hides the real action. In the
    assembly, the code to load and store variables from the stack, instead
    of registers, often outweighs the actual interesting stuff.
    Single-stepping through your important functions becomes far harder
    because all the little calls that should be inlined out of existence,
become layers of calls that you might have to dig through. Most of
what you are looking for is drowned out in the noise.

    Most interactive debuggers support stepping into a function call, or
    stepping over it (ie. call the function but don't break there and
    just wait for it to return).

    When debugging using an interactive debugger, the execution path
    should follow the source code line-by-line, with *each* line included
    and nothing optimized away.

    That was true 20 years ago, perhaps, with C. Not now, and not with C++.

    I don't see how it isn't true now. If there's a bug in your code, you
    need to see and examine every line of code that could be the culprit.
    If the compiler has done things at compile time and essentially
    optimized the faulty line of code away (essentially "merging" it with subsequent lines), you'll be drawn to the wrong line of code. The first
    line of code that exhibits the wrong values may not be the one that's
    actually creating the wrong values, because that line has been optimized
    away. (The same applies to optimizing away function calls.)

    However, when compiling much larger projects, or when compiling on
    a very inefficient platform, the difference can become substantial,
    and detrimental to development if it's too long.

    Don't solve that by using weaker tools. Solve it by improving how you
    use the tools

    That's exactly what I'm doing by doing a fast "g++ -O0" compilation
    instead of a slow "g++ -O3" compilation.

    get better build systems to avoid unnecessary
    compilation, use ccache if you have build options or variations that you
    swap back and forth, use distcc to spread the load, explain to your boss
    why a Ryzen or even a ThreadRipper will save money overall as /your/
    time costs more than the computer's time. "I don't use optimisation
    because it is too slow" is an excuse for hobby developers, not
    professionals.

    There's no reason to use optimizations while writing code and testing it.
    "I don't use optimization because it is too slow" is *perfectly valid*.
    If it is too slow, and is slowing down your development, it's *good*
    to make it faster. I doubt your boss will be unhappy with you developing
    the program in less time.

    You can compile the final result with optimizations, of course.

    It is better to only compile the bits that need to be compiled. Who
    cares how long a full build takes?

    In the example I provided the project consists of two source files
    and one header file. It's very heavy to compile. Inclusion optimization
    isn't of much help.

  • From Michael S@21:1/5 to Manfred on Sun Sep 12 01:29:08 2021
    On Sunday, September 12, 2021 at 2:09:30 AM UTC+3, Manfred wrote:
    On 9/9/2021 10:17 PM, David Brown wrote:
    On 09/09/2021 18:41, James Kuyper wrote:
    On 9/9/21 4:54 AM, Miste...@stubborn.uk wrote:
    On Wed, 8 Sep 2021 20:22:52 +0300
    Paavo Helde <myfir...@osa.pri.ee> wrote:
08.09.2021 13:24 Miste...@stubborn.uk wrote:

You can write a makefile just as simple. However, what happens when you
want foo.c recompiled when foo.h and bar.h change, but bar.c should only
be recompiled when bar.h and moo.h change, moo.c should only be recompiled
when moo.h changes, and main.c should be recompiled when anything changes?

Such dependencies are taken care of automatically by the gcc -MD option,
No, unless the compiler is clairvoyant they aren't.

That option causes a dependencies file to be created specifying all the
dependencies that the compiler notices during compilation. That file can
then be used to avoid unnecessary re-builds the next time the same file
is compiled. The dependency file is therefore always one build
out-of-date; if you created any new dependencies, or removed any old
ones, the dependencies file will be incorrect until after the next time
you do a build. It's therefore not a perfect solution - but neither is
it useless.


    The trick is to have makefile (or whatever build system you use) rules along with gcc so that the dependency file not only labels the object
    file as dependent on the C or C++ file and all the include files it
    uses, recursively, but also labels the dependency file itself to be dependent on the same files. Then if the source file or includes are changed, the dependency file is re-created, and make is smart enough to then reload that dependency file to get the new dependencies for
    building the object file.

    The makefile rules involved are close to APL in readability, but once
    you have figured out what you need, you can re-use it for any other project. And it solves the problem you have here.


    So, for example, if you have these files:

    a.h
    ---
    #include "b.h"

    b.h
    ---
    #define TEST 1

    c.c
    ---
    #include "a.h"
    #include <stdio.h>

    int main(void) {
    printf("Test is %d\n", TEST);
    }


    Then "gcc -MD c.c" makes a file

    c.d
    ---
c.o: c.c /usr/include/stdc-predef.h a.h b.h /usr/include/stdio.h \
  /usr/include/x86_64-linux-gnu/bits/libc-header-start.h \
  /usr/include/features.h /usr/include/x86_64-linux-gnu/sys/cdefs.h \
  /usr/include/x86_64-linux-gnu/bits/wordsize.h \
    ...


    Using "gcc -MMD c.c" is more helpful, usually, because it skips the
    system includes:

    c.d
    ---
    c.o: c.c a.h b.h


    But the real trick is "gcc -MMD -MT 'c.d c.o' c.c" :

    c.d
    ---
    c.d c.o: c.c a.h b.h


    Now "make" knows that the dependency file is also dependent on the C
    file and headers.

    What you are describing is substantially:

    https://www.gnu.org/software/make/manual/html_node/Automatic-Prerequisites.html

    with the addition of the -MT gcc option, which removes the need for the nasty 'sed' command in the "%.d: %.c" rule - which is the kind of thing
    that tends to keep people away.

    Thanks for pointing this out.

    I guess in that rule one can use the single command:
    %.d: %.c
    $(CC) $(CPPFLAGS) -MM -MT '$*.o $@' -MF $@ $<

    (As a side note, it wouldn't hurt if the GCC people updated their docs
    from time to time...)


The gcc maintainers have a policy against updating/fixing docs.
From their perspective, the compiler and docs are inseparable parts of a holy "release".
I tried to change their minds about it a few years ago, but didn't succeed.
So, if you are not satisfied with the quality of the gcc docs supplied with your release of the compiler, the best you can do is to look at the docs for the most recent "release" - i.e. right now 11.2. Naturally, in order to be sure that these docs apply,
you'd have to update the compiler itself too.

  • From HorseyWorsey@the_stables.com@21:1/5 to Juha Nieminen on Sun Sep 12 09:18:21 2021
    On Sun, 12 Sep 2021 08:56:42 -0000 (UTC)
    Juha Nieminen <nospam@thanks.invalid> wrote:
    HorseyWorsey@the_stables.com wrote:
Very nice. Now you have a single globals.h type file (VERY common in large
projects). How does gcc figure out which C files it needs to build from that?


    It doesn't. It only compiles what you tell it to compile.

    It has to be *something else* that runs it and tells it what to
    compile. Often this is the 'make' program (which is reading a
    file usually named 'Makefile').

    Well thanks for that valuable input, we're all so much more informed now.

  • From Juha Nieminen@21:1/5 to HorseyWorsey@the_stables.com on Sun Sep 12 09:21:50 2021
    HorseyWorsey@the_stables.com wrote:
    On Sun, 12 Sep 2021 08:56:42 -0000 (UTC)
    Juha Nieminen <nospam@thanks.invalid> wrote:
    HorseyWorsey@the_stables.com wrote:
Very nice. Now you have a single globals.h type file (VERY common in large
projects). How does gcc figure out which C files it needs to build from that?


    It doesn't. It only compiles what you tell it to compile.

    It has to be *something else* that runs it and tells it what to
    compile. Often this is the 'make' program (which is reading a
    file usually named 'Makefile').

    Well thanks for that valuable input, we're all so much more informed now.

    You made that sound sarcastic. If it is indeed sarcasm, I don't
    really understand why.

  • From HorseyWorsey@the_stables.com@21:1/5 to David Brown on Sun Sep 12 09:23:03 2021
    On Sat, 11 Sep 2021 18:40:58 +0200
    David Brown <david.brown@hesbynett.no> wrote:
    On 10/09/2021 17:47, HorseyWorsey@the_stables.com wrote:
    On Fri, 10 Sep 2021 17:38:23 +0200
    David Brown <david.brown@hesbynett.no> wrote:
Except it's dependency automation for noddy builds. For any complex builds
you're going to need a build system, hence the examples I gave.

    Do you still not understand what is being discussed here? "gcc -MD" is
    /not/ a replacement for a build system. It is a tool to help automate
    your build systems. The output of "gcc -MD" is a dependency file, which
    your makefile (or other build system) imports.

Yes, I understand perfectly. You create huge dependency files which either
have to be stored in git (or similar) and updated when appropriate, or auto-generated in the makefile and then further used in the makefile which has to
be manually written anyway unless it's simple - so what exactly is the point?

Also using
the compiler is sod all use if you need to fire off a script to auto-generate
some code first.

    No, it is not. It works fine - as long as you understand how your build

    Excuse me? Ok, please do tell me how the compiler knows which script file to run to generate the header file. This'll be interesting.

    well as lots of other C and header files). If I change the text file or
    the Python script and type "make", then first a new header and C file
are created from the text file. Then "gcc -MD" is run on the C file,
generating a new dependency file, since the dependency file depends on
the header and the C file. Then this updated dependency file is
imported by make, and shows that the object file (needed for the
link) depends on the updated C file, so the compiler is called on the file.
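
A sketch of the kind of rule being described - the file and script names
here are invented for illustration, not taken from the project above:

    # regenerate the header and C file whenever the description file or the
    # generator script changes ('&:' needs GNU make 4.3 or later)
    regs.h regs.c &: regs.txt gen_regs.py
            python3 gen_regs.py regs.txt

    # the generated C file is compiled like any other, with its dependency
    # file produced as a side effect
    regs.o: regs.c
            $(CC) $(CFLAGS) -MMD -MT 'regs.d regs.o' -c regs.c -o regs.o

    -include regs.d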

    And that is supposed to be simpler than writing a Makefile yourself is it? Riiiiiight.

Last place I worked used python to generate various
language header files based on json and that in turn depended on whether the
json had been updated since the last build. Good luck using gcc to sort that
out.


    As noted above, I do that fine. It's not rocket science, but it does
    require a bit of thought and trial-and-error to get the details right.

And it is far more work than just putting 2 lines in a makefile consisting of a dummy target and a script call. But each to their own.

I know how it works. For simple student examples or pet projects it's fine; for
the real world it's of little use.


    OK, so you are ignorant and nasty. You don't know how automatic

    Nasty? Don't be such a baby.

  • From HorseyWorsey@the_stables.com@21:1/5 to Juha Nieminen on Sun Sep 12 09:29:16 2021
    On Sun, 12 Sep 2021 09:21:50 -0000 (UTC)
    Juha Nieminen <nospam@thanks.invalid> wrote:
    HorseyWorsey@the_stables.com wrote:
    On Sun, 12 Sep 2021 08:56:42 -0000 (UTC)
    Juha Nieminen <nospam@thanks.invalid> wrote:
    HorseyWorsey@the_stables.com wrote:
Very nice. Now you have a single globals.h type file (VERY common in large
projects). How does gcc figure out which C files it needs to build from that?


    It doesn't. It only compiles what you tell it to compile.

    It has to be *something else* that runs it and tells it what to
    compile. Often this is the 'make' program (which is reading a
    file usually named 'Makefile').

    Well thanks for that valuable input, we're all so much more informed now.

    You made that sound sarcastic. If it is indeed sarcasm, I don't
    really understand why.

Try following a thread before replying. A couple of posters were claiming the compiler could automate the entire build system and I gave some basic examples of why it couldn't. Now one of them is back-pedalling and basically saying it can automate all the bits except the bits it can't, when you need to edit the makefile yourself. Genius. Then you come along and mention Makefiles. Well thanks for the heads up, I'd forgotten what they were called.

    Yes, it was sarcasm.

  • From David Brown@21:1/5 to Manfred on Sun Sep 12 11:32:17 2021
    On 12/09/2021 01:09, Manfred wrote:
    On 9/9/2021 10:17 PM, David Brown wrote:
    On 09/09/2021 18:41, James Kuyper wrote:
    On 9/9/21 4:54 AM, MisterMule@stubborn.uk wrote:
    On Wed, 8 Sep 2021 20:22:52 +0300
    Paavo Helde <myfirstname@osa.pri.ee> wrote:
08.09.2021 13:24 MisterMule@stubborn.uk wrote:

You can write a makefile just as simple. However, what happens when you
want foo.c recompiled when foo.h and bar.h change, but bar.c should only
be recompiled when bar.h and moo.h change, moo.c should only be recompiled
when moo.h changes, and main.c should be recompiled when anything changes?

Such dependencies are taken care of automatically by the gcc -MD option,
No, unless the compiler is clairvoyant they aren't.

    That option causes a dependencies file to be created specifying all the
dependencies that the compiler notices during compilation. That file can
then be used to avoid unnecessary re-builds the next time the same file
    is compiled. The dependency file is therefore always one build
    out-of-date; if you created any new dependencies, or removed any old
    ones, the dependencies file will be incorrect until after the next time
    you do a build. It's therefore not a perfect solution - but neither is
    it useless.


    The trick is to have makefile (or whatever build system you use) rules
    along with gcc so that the dependency file not only labels the object
    file as dependent on the C or C++ file and all the include files it
    uses, recursively, but also labels the dependency file itself to be
    dependent on the same files.  Then if the source file or includes are
    changed, the dependency file is re-created, and make is smart enough to
    then reload that dependency file to get the new dependencies for
    building the object file.

    The makefile rules involved are close to APL in readability, but once
    you have figured out what you need, you can re-use it for any other
    project.  And it solves the problem you have here.


    So, for example, if you have these files:

    a.h
    ---
    #include "b.h"

    b.h
    ---
    #define TEST 1

    c.c
    ---
    #include "a.h"
    #include <stdio.h>

    int main(void) {
        printf("Test is %d\n", TEST);
    }


    Then "gcc -MD c.c" makes a file

    c.d
    ---
    c.o: c.c /usr/include/stdc-predef.h a.h b.h /usr/include/stdio.h \
      /usr/include/x86_64-linux-gnu/bits/libc-header-start.h \
      /usr/include/features.h /usr/include/x86_64-linux-gnu/sys/cdefs.h \
      /usr/include/x86_64-linux-gnu/bits/wordsize.h \
      ...


    Using "gcc -MMD c.c" is more helpful, usually, because it skips the
    system includes:

    c.d
    ---
    c.o: c.c a.h b.h


    But the real trick is "gcc -MMD -MT 'c.d c.o' c.c" :

    c.d
    ---
    c.d c.o: c.c a.h b.h


    Now "make" knows that the dependency file is also dependent on the C
    file and headers.


    What you are describing is substantially:

    https://www.gnu.org/software/make/manual/html_node/Automatic-Prerequisites.html


    with the addition of the -MT gcc option, which removes the need for the
    nasty 'sed' command in the "%.d: %.c" rule - which is the kind of thing
    that tends to keep people away.

    That is /exactly/ where I got all this! I had been using the
    suggestions from that page, plus a couple of other "automate
    dependencies with makefiles and gcc" web pages, along with sed, for many
    years. Then when starting a new project not long ago, I needed a bit
    more complicated makefile than usual (building a few different
    variations at the same time). While thinking about the makefile and
looking at the gcc manual, I realised this could replace the "sed" and make
    things a little easier for others working on the same project but using Windows.


    Thanks for pointing this out.

    I guess in that rule one can use the single command:
    %.d: %.c
        $(CC) $(CPPFLAGS) -MM -MT '$*.o $@' -MF $@ $<


    Yes, something like that is the starting point - but there's usually a
    bit in there about directories if you want your build directories
    separate from the source directories.
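
For example (directory and variable names purely illustrative), the extra
bit tends to look something like this, with the objects and .d files
landing in a separate build directory that is created on demand:

    BUILDDIR := build

    $(BUILDDIR)/%.o: %.c | $(BUILDDIR)
            $(CC) $(CFLAGS) -MMD -MT '$(@:.o=.d) $@' -c $< -o $@

    $(BUILDDIR):
            mkdir -p $@

    -include $(wildcard $(BUILDDIR)/*.d)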

    (As a side note, it wouldn't hurt if the GCC people updated their docs
    from time to time...)

    Agreed!

  • From Ian Collins@21:1/5 to Juha Nieminen on Sun Sep 12 21:51:32 2021
    On 12/09/2021 21:10, Juha Nieminen wrote:

    There's no reason to use optimizations while writing code and testing it.

    There may be many!

    Unoptimised code being too slow or too big to run on the target is
    common in real-time or pretend (i.e. Linux) real-time systems. Getting
    more comprehensive error checking is another.


    --
    Ian.

  • From David Brown@21:1/5 to Michael S on Sun Sep 12 11:46:23 2021
    On 12/09/2021 10:29, Michael S wrote:
    On Sunday, September 12, 2021 at 2:09:30 AM UTC+3, Manfred wrote:

    (As a side note, it wouldn't hurt if the GCC people updated their docs
    from time to time...)


    Your reference here was to the "make" manual, rather than the gcc documentation. But the gcc folk could add an example like this to their
    manual for the "-MT" option.


The gcc maintainers have a policy against updating/fixing docs.
From their perspective, the compiler and docs are inseparable parts of a holy "release".

    Well, yes. The gcc manual of a particular version documents the gcc of
    that version. It seems an excellent policy to me.

    It would be a little different if they were publishing a tutorial on
    using gcc.

I tried to change their minds about it a few years ago, but didn't succeed.

    Thankfully. It would be rather messy if they only had one reference
    manual which was full of comments about which versions the particular
    options or features applied to, as these come and go over time.

    I suppose it would be possible to make some kind of interactive
    reference where you selected your choice of compiler version, target
    processor, etc., and the text adapted to suit. That could be a useful
    tool, and help people see exactly what applied to their exact toolchain.
    But it would take a good deal of work, and a rather different thing
    from the current manuals.

So, if you are not satisfied with the quality of the gcc docs supplied with
your release of the compiler, the best you can do is to look at the
docs for the most recent "release" - i.e. right now 11.2. Naturally, in
order to be sure that these docs apply, you'd have to update the
compiler itself too.


    I think most people /do/ look up the gcc documents online, rather than
    locally. The gcc website has many versions easily available, so you can
    read the manual for the version you are using. And while new features
    in later gcc versions add to the manuals, it's rare that there are
    changes to the text for existing features. The documentation for "-MT"
    is substantially the same for the latest development version of gcc 12
    and for gcc 3.0 from about 20 years ago.

  • From David Brown@21:1/5 to Ian Collins on Sun Sep 12 11:53:09 2021
    On 12/09/2021 02:11, Ian Collins wrote:
    On 12/09/2021 07:56, Bart wrote:

    So which one gets the prize?

    The one which runs correctly the fastest!

    You appear to be stuck in the "C as a high level assembler" mindset.
    This shouldn't be true for C and definitely isn't true for C++.

    Optimised code often bears little resemblance to the original source and
    the same source compiled with the same compiler can be optimised in
    different ways depending on the context.


    With modern C++, generated code - optimised or unoptimised - regularly
    bears no resemblance to the original code. Use a few templates and
    lambdas, and the source code structure is guaranteed to be very
    different from the object code structure. But I suspect that Bart
    disapproves of templates and lambdas.

    But sometimes I find it useful to look at, debug or otherwise work with
    the generated assembly - and for that, for me, -O1 or -O2 is better than
    -O0 because there is so much less noise.

  • From Paavo Helde@21:1/5 to All on Sun Sep 12 12:57:10 2021
12.09.2021 12:23 HorseyWorsey@the_stables.com wrote:
    On Sat, 11 Sep 2021 18:40:58 +0200
    David Brown <david.brown@hesbynett.no> wrote:
    On 10/09/2021 17:47, HorseyWorsey@the_stables.com wrote:
    On Fri, 10 Sep 2021 17:38:23 +0200
    David Brown <david.brown@hesbynett.no> wrote:
Except it's dependency automation for noddy builds. For any complex builds
you're going to need a build system, hence the examples I gave.

    Do you still not understand what is being discussed here? "gcc -MD" is
    /not/ a replacement for a build system. It is a tool to help automate
    your build systems. The output of "gcc -MD" is a dependency file, which
    your makefile (or other build system) imports.

    Yes, I understand perfectly. You create huge dependency files which either have to be stored in git (or similar) and updated when appropriate, or auto

    What on earth are you babbling about? That's becoming insane.

    In the rare chance you are not actually trolling: the dependency files
    are generated by each build afresh, and they get used by the next build
    in the same build tree for deciding which source files need to be
    recompiled when some header file has changed. This is all automatic,
    there are no manual steps involved except for setting it up once when
    writing the initial Makefile (in case one still insists on writing
    Makefiles manually).

    There is no more point to put the dependency files into git than there
    is to put the compiled object files there (in fact, a dependency file is useless without object files).

  • From David Brown@21:1/5 to Juha Nieminen on Sun Sep 12 12:21:52 2021
    On 12/09/2021 11:10, Juha Nieminen wrote:
    David Brown <david.brown@hesbynett.no> wrote:
    I always compile with debugging information, and I regularly use
    breakpoints, stepping, assembly-level debug, etc. I /hate/ having to
    deal with unoptimised "gcc -O0" code - it is truly awful. You have vast
    amounts of useless extra code that hides the real action. In the
    assembly, the code to load and store variables from the stack, instead
    of registers, often outweighs the actual interesting stuff.
    Single-stepping through your important functions becomes far harder
    because all the little calls that should be inlined out of existence,
become layers of calls that you might have to dig through. Most of
what you are looking for is drowned out in the noise.

    Most interactive debuggers support stepping into a function call, or
    stepping over it (ie. call the function but don't break there and
    just wait for it to return).

    They do indeed. But in the world of embedded development, things can
    get complicated. Stepping over functions works in some cases, but often
    there is interaction with interrupts, timers, hardware, etc., that means
    you simply cannot step like that.


    When debugging using an interactive debugger, the execution path
    should follow the source code line-by-line, with *each* line included
    and nothing optimized away.

    That was true 20 years ago, perhaps, with C. Not now, and not with C++.

    I don't see how it isn't true now. If there's a bug in your code, you
    need to see and examine every line of code that could be the culprit.
    If the compiler has done things at compile time and essentially
    optimized the faulty line of code away (essentially "merging" it with subsequent lines), you'll be drawn to the wrong line of code. The first
    line of code that exhibits the wrong values may not be the one that's actually creating the wrong values, because that line has been optimized away. (The same applies to optimizing away function calls.)

    If the compiler is optimising away a line of code, and that is causing a problem, then the bug lies in the surrounding code that causes it to be optimised away.


    However, when compiling much larger projects, or when compiling on
    a very inefficient platform, the difference can become substantial,
    and detrimental to development if it's too long.

    Don't solve that by using weaker tools. Solve it by improving how you
    use the tools

    That's exactly what I'm doing by doing a fast "g++ -O0" compilation
    instead of a slow "g++ -O3" compilation.


    People clearly have different needs, experiences, and ideas. All I can
    tell you is that since "gcc -O0" does not work as a compiler that
    handles my needs for either static analysis or code generation, it is irrelevant how fast it runs - it is almost entirely useless to me.

    When I write code, I make mistakes sometimes. I want all the help I can
    get to avoid making mistakes, and to find the mistakes as soon as
    possible. One major help is the static analysis from a good compiler.
    I want that feedback as soon as I have finished writing a piece of code,
    not the next day after a nightly build. "gcc -O2" gives me what I need,
    "gcc -O0" does not.

    get better build systems to avoid unnecessary
    compilation, use ccache if you have build options or variations that you
    swap back and forth, use distcc to spread the load, explain to your boss
    why a Ryzen or even a ThreadRipper will save money overall as /your/
    time costs more than the computer's time. "I don't use optimisation
    because it is too slow" is an excuse for hobby developers, not
    professionals.

    There's no reason to use optimizations while writing code and testing it.

    I'm sorry, but you are simply wrong.

    There are some kinds of programming where it doesn't matter how big and
    slow the result is.

    There are other kinds of programming where it /does/ matter. In my
    world, object code that is too big is broken - it will not fit on the
    device, and cannot possibly work. Object code that is too slow is
    broken - it will not do what it has to do within the time required.

    For many programmers, a major reason they are using C or C++ in the
    first place is because the efficiency of the results matters. Otherwise
    they would likely use languages that offer greater developer efficiency
    for many tasks, such as Python. (I'm not suggesting efficiency is the
    /only/ reason for using C or C++, but it is a major one.) Trying to do
    your development and testing without a care for the speed is then a very questionable strategy.

    I can happily believe that for /you/, and the kind of code /you/ work
    on, optimisation is not an issue. But that is not the case for a great
    many other programmers.

    And since using an optimised compiler - on appropriate hardware for a professional developer - is rarely an issue, it seems to me a very
    backwards idea to disable optimisation for a gain in build speed. (If
    you really find it helpful in debugging, then I can appreciate that as a justification.) Who cares if it takes 0.1 seconds or 2 seconds to
    compile the file you've just saved? It is vastly more important that
    you get good warning feedback after 2 seconds instead of the next day,
    and that you test the real code rather than a naïve build that hides
    many potential errors.


    "I don't use optimization because it is too slow" is *perfectly valid*.
    If it is too slow, and is slowing down your development, it's *good*
    to make it faster. I doubt your boss will be unhappy with you developing
    the program in less time.

    You can compile the final result with optimizations, of course.

    It is better to only compile the bits that need to be compiled. Who
    cares how long a full build takes?

    In the example I provided the project consists of two source files
    and one header file. It's very heavy to compile. Inclusion optimization
    isn't of much help.


  • From Bart@21:1/5 to Ian Collins on Sun Sep 12 11:17:08 2021
    On 12/09/2021 01:11, Ian Collins wrote:
    On 12/09/2021 07:56, Bart wrote:
    On 11/09/2021 18:15, David Brown wrote:
    On 11/09/2021 01:33, Bart wrote:
    On 10/09/2021 12:47, David Brown wrote:
    On 10/09/2021 11:10, Juha Nieminen wrote:

However, gcc -O0 is quite useful in development. For starters, when you
are interactively debugging (eg. with gdb, or any of the myriads of
debuggers in different IDEs), you usually don't want things like your
functions being inlined, loops unrolled, compile-time arithmetic
(other than, of course, that of constexpr/consteval functions), etc.

I always compile with debugging information, and I regularly use
breakpoints, stepping, assembly-level debug, etc.  I /hate/ having to
deal with unoptimised "gcc -O0" code - it is truly awful.  You have vast
amounts of useless extra code that hides the real action.  In the
assembly, the code to load and store variables from the stack, instead
of registers, often outweighs the actual interesting stuff.
Single-stepping through your important functions becomes far harder
because all the little calls that should be inlined out of existence,
become layers of calls that you might have to dig through.  Most of
what you are looking for is drowned out in the noise.

I agree with JN. With optimised code, what you have may have little
relationship with the original source code. If you're trying to trace a
logic problem, how do you map machine code to the corresponding source?

    It's a /lot/ easier with -O1 than -O0.  Or you use the debugger.

    Oh, you mean look at the ASM manually? In that case definitely through
    -O0. If I take this fragment:

          for (int i=0; i<100; ++i) {
              a[i]=b+c*d;
              fn(a[i]);
          }

    <snip listings>

    So which one gets the prize?

    The one which runs correctly the fastest!

    Let's say none of them run correctly and your job is to find out why. Or
    maybe you're comparing two compilers at the same optimisation level, and
    you want to find why one runs correctly and the other doesn't.

    Or maybe this is part of a benchmark where writing to a[i] is part of
the test, but it's hard to gauge whether one lot of generated code is
    better than another, because the other has disappeared completely!

    (I suppose in your world, a set of benchmark results where every one
    runs in 0.0 seconds is perfection! I would say those are terrible
    benchmarks.)

    Have a look at my first example above; would the a[i]=b+c*d be
    associated with anything more meaningful than those two lines of
    assembly?

    Does it matter?

    Ask why you're looking at the ASM in the first place. If there's no
    discernible correspondence with your source, then you might as well look
    at any random bit of ASM code; it would be just as useful!


    And for source code, what difference should it make whether the
    generated code is optimised or not?


    Because it is not always correct!

    Sometimes the issue is on the lines of "Why is this taking so long?  I
had expected less than 0.1µs, but it is taking nearly 0.2µs."  You need
to look at the assembly for that.

That's the kind of thing where the unit tests Ian is always on about
don't really work.

    Unit tests test logic, not performance.  We run automated regression
    tests on real hardware to track performance.  If there's a change
    between builds, it's trivial to identify the code commits that caused
    the change.

    Most of the stuff I do is not helped with unit tests.

    Where there are things that can possibly be tested by ticking off entries in
    a list, you find the real problems come up with combinations or contexts
    you haven't anticipated and that can't be enumerated.


    I don't have a problem with compile speed.

    Then just scale up the size of the project; you will hit a point where
    it /is/ a problem! Or change the threshold at which any hanging about
    becomes incredibly annoying; mine is about half a second.

    Correct, so you scale up the thing you have control over, the build
    infrastructure.  It's safe to say that no one here has their own C++
    compiler they can tweak to go faster!

    There are lots of C compilers around that are faster than ones like gcc,
    clang and msvc. Tiny C is an extreme example.

    I guess there are not so many independent compilers for C++ written by individuals, which tend to be the faster ones.

    I don't have the skills, knowledge and inclination to have a go at C++,
    but I just get the feeling that such a streamlined product ought to be possible.

    After all, most functionality of C++ is implemented in user-code (isn't
    it?), so the core language must be quite small?

    It's like avoiding spending too long driving your car, due to its only
    managing to do 3 mph, by cutting down on your trips as much as possible.
    It's a slow car - /that's/ the problem.

    Poor analogy.  A better one is your car is slow because it only has a
    single cylinder engine, so you can make it faster with a bigger cylinder
    or more of them!

    OK, let's say it's slow around town because your car is a 40-ton truck,
    and you need to file the equivalent of a flight-plan with the
    authorities before any trip.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bart@21:1/5 to Juha Nieminen on Sun Sep 12 11:29:34 2021
    On 12/09/2021 10:00, Juha Nieminen wrote:
    Bart <bc@freeuk.com> wrote:
    The 'supercomputer' on my desk is not significantly faster than the RPi4
    you mention below.

    Then you must have a PC from the 1990's, because the Raspberry Pi 4
    is a *very slow* system, believe me. I know, I have one. What takes
    a few seconds to compile on my PC can take a minute to compile on
    the Pi.

    My machine is from 2010. I probably under-represented the RPi4 timings
    because it was running a 32-bit OS, so programs were 32-bit, but my PC compilers were 64-bit.

    There might be other reasons for the discrepancy you see; maybe your
    project uses lots of files, and your PC uses an SSD while the RPi uses
    the slower (?) SD card.

    Or your project is large and is putting pressure on the RPi4's perhaps
    more limited RAM.

    (I chose my comparisons to be within the capabilities of both machines.)

    If your code is fairly standard C, try using Tiny C. I expect your
    program will build in one second or thereabouts.

    It's C++. (This is a C++ newsgroup, after all.)

    See my reply to Ian.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to HorseyWorsey@the_stables.com on Sun Sep 12 12:49:14 2021
    On 12/09/2021 11:23, HorseyWorsey@the_stables.com wrote:
    On Sat, 11 Sep 2021 18:40:58 +0200
    David Brown <david.brown@hesbynett.no> wrote:
    On 10/09/2021 17:47, HorseyWorsey@the_stables.com wrote:
    On Fri, 10 Sep 2021 17:38:23 +0200
    David Brown <david.brown@hesbynett.no> wrote:
    Except it's dependency automation for noddy builds. For any complex builds
    you're going to need a build system, hence the examples I gave.

    Do you still not understand what is being discussed here? "gcc -MD" is
    /not/ a replacement for a build system. It is a tool to help automate
    your build systems. The output of "gcc -MD" is a dependency file, which
    your makefile (or other build system) imports.

    Yes, I understand perfectly. You create huge dependency files which either
    have to be stored in git (or similar) and updated when appropriate, or
    auto-generated in the makefile and then further used in the makefile, which
    has to be manually written anyway unless it's simple - so what exactly is
    the point?


    No, you still don't understand.

    Of course the dependency files are /not/ stored in your repositories -
    the whole point is that they are created and updated when appropriate.

    Yes, the main makefile is written manually (or at least, that's what I
    do - there are tools that generate makefiles, and there are other build systems). The automatic dependency generation means that I never have
    to track or manually update the dependencies.

    So when I add a new C or C++ file to my project, I don't need to make
    /any/ changes to my makefile or build setup. It is found automatically,
    and its dependencies are tracked automatically, next time I do a "make".
    If I change the headers included by a C or C++ file, or by another
    header file, I don't need to change anything - it is all automatic when
    I do a "make".

    The dependency files are re-built automatically, if and when needed.
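
    As a concrete illustration of the scheme being described here, a minimal
    makefile sketch (the file names, and the choice of -MMD/-MP, are invented
    for the example - this is the general pattern, not anyone's actual build
    files):

        # GNU make + gcc auto-dependency sketch; recipe lines are tab-indented.
        SRCS := $(wildcard *.c)
        OBJS := $(SRCS:.c=.o)
        DEPS := $(OBJS:.o=.d)

        prog: $(OBJS)
                $(CC) -o $@ $(OBJS)

        # -MMD writes foo.d as a side effect of compiling foo.c;
        # -MP adds phony targets so a deleted header doesn't break the build.
        %.o: %.c
                $(CC) $(CFLAGS) -MMD -MP -c $< -o $@

        # Pull in whatever dependency files exist (none on a clean first build).
        -include $(DEPS)

    Because the source list comes from $(wildcard), a newly added file is
    picked up on the next "make" with no edits to the makefile, which is the
    point being made above.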

    Also using the compiler is sod all use if you need to fire off a script
    to auto generate some code first.

    No, it is not. It works fine - as long as you understand how your build

    Excuse me? Ok, please do tell me how the compiler knows which script file to run to generate the header file. This'll be interesting.

    It knows in the same way as the programmer knows where to file the sales department accounts.

    Confused? Yes, you surely are.

    It is /not/ the /compiler's/ job to know this! It is the /build/ system
    that says what programs are run on which files in order to create all
    the files needed.


    well as lots of other C and header files). If I change the text file or
    the Python script and type "make", then first a new header and C file
    are created from the text file. Then "gcc -MD" is run on the C file,
    generating a new dependency file, since the dependency file depends on
    the header and the C file. Then this updated dependency file is
    imported by make, and shows that the object file (needed for the link)
    depends on the updated C file, so the compiler is called on the file.

    And that is supposed to be simpler than writing a Makefile yourself is it? Riiiiiight.

    Who do you think wrote the makefile? A friendly goblin? /I/ wrote the makefile. /I/ put rules in the makefile to run "gcc -MD" as and when
    needed in order to generate the dependencies. The point is that no one
    - not me, nor anyone else - needs to keep manually updating the makefile
    to track the simple dependencies that can be calculated automatically.


    Last place I worked used python to generate various language header files
    based on json, and that in turn depended on whether the json had been
    updated since the last build. Good luck using gcc to sort that out.


    As noted above, I do that fine. It's not rocket science, but it does
    require a bit of thought and trial-and-error to get the details right.
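
    The generated-code case is handled the same way: give make a rule for the
    generated files and let the .d files do the rest. A hypothetical sketch
    (gen_tables.py, tables.json and the output names are invented here purely
    for illustration; the "&:" grouped target needs GNU make 4.3 or later):

        # One run of the script produces both generated files.
        tables.h tables.c &: tables.json gen_tables.py
                python3 gen_tables.py tables.json

        tables.o: tables.c
                $(CC) $(CFLAGS) -MMD -MP -c $< -o $@

    Any object whose source #includes tables.h then depends on it through its
    gcc-generated .d file, so editing the json or the script triggers exactly
    the regeneration and recompilation described above.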

    And that is far more work than just putting 2 lines in a makefile consisting
    of a dummy target and a script call. But each to their own.

    I know how it works. For simple student examples or pet projects it's fine;
    for the real world it's of little use.


    OK, so you are ignorant and nasty. You don't know how automatic

    Nasty? Don't be such a baby.


    I don't yet know whether you are wilfully ignorant, or trolling.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to HorseyWorsey@the_stables.com on Sun Sep 12 12:55:21 2021
    On 12/09/2021 11:29, HorseyWorsey@the_stables.com wrote:
    On Sun, 12 Sep 2021 09:21:50 -0000 (UTC)
    Juha Nieminen <nospam@thanks.invalid> wrote:
    HorseyWorsey@the_stables.com wrote:
    On Sun, 12 Sep 2021 08:56:42 -0000 (UTC)
    Juha Nieminen <nospam@thanks.invalid> wrote:
    HorseyWorsey@the_stables.com wrote:
    Very nice. Now you have a single globals.h type file (VERY common in large
    projects). How does gcc figure out which C files it needs to build from
    that?


    It doesn't. It only compiles what you tell it to compile.

    It has to be *something else* that runs it and tells it what to
    compile. Often this is the 'make' program (which is reading a
    file usually named 'Makefile').

    Well thanks for that valuable input, we're all so much more informed now.
    You made that sound sarcastic. If it is indeed sarcasm, I don't
    really understand why.

    Try following a thread before replying. A couple of posters were claiming
    the compiler could automate the entire build system and I gave some basic
    examples of why it couldn't. Now one of them is back-pedalling and basically
    saying it can automate all the bits except the bits it can't, when you need
    to edit the makefile yourself. Genius. Then you come along and mention
    Makefiles. Well thanks for the heads up, I'd forgotten what they were called.


    Ah, so you are saying that /you/ have completely misunderstood the
    thread and what people wrote, and thought mocking would make you look
    clever.

    I guess you'll figure it out in the end, and we can look forward to
    another name change so you can pretend it wasn't you who got everything
    so wrong.

    Yes, it was sarcasm.


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Michael S on Sun Sep 12 13:54:58 2021
    On 12/09/2021 13:42, Michael S wrote:
    On Sunday, September 12, 2021 at 12:46:42 PM UTC+3, David Brown
    wrote:
    On 12/09/2021 10:29, Michael S wrote:
    On Sunday, September 12, 2021 at 2:09:30 AM UTC+3, Manfred wrote:


    (As a side note, it wouldn't hurt if the GCC people updated
    their docs from time to time...)

    Your reference here was to the "make" manual, rather than the gcc
    documentation. But the gcc folk could add an example like this to
    their manual for the "-MT" option.

    gcc maintainers have policy against updating/fixing docs. From
    their perspective, compiler and docs are inseparable parts of
    holy "release".
    Well, yes. The gcc manual of a particular version documents the gcc
    of that version. It seems an excellent policy to me.

    It would be a little different if they were publishing a tutorial
    on using gcc.
    I tried to change their mind about it few years ago, but didn't
    succeed.
    Thankfully. It would be rather messy if they only had one reference
    manual which was full of comments about which versions the
    particular options or features applied to, as these come and go
    over time.


    That's not what I was suggesting. I was suggesting adding
    clarifications and suggestions for a feature (it was something about
    the function attribute 'optimize') that existed in gcc 5 to the online
    copy of the respective manual hosted on gcc.gnu.org/onlinedocs.
    Obviously, a previous version of the manual could have been available
    to historians among us in the gcc source control database.

    Instead, said clarifications+suggestions were added to the *next*
    release of the manual. Oh, in fact, no, it didn't make it into the
    gcc 6 manual. It was added to the gcc 7 manual. So, gcc 5 users now
    have no way to know that the changes in the docs apply to gcc 5 every
    bit as much as they apply to gcc 7 and later.

    gcc 5 users /do/ have a way to see the change - they can look at later
    gcc references just as easily as older ones.

    Occasionally, changes to the manuals might be back-ported a couple of
    versions, just as changes to the compilers are back-ported if they are important enough (wrong code generation bugs).

    I'm sure the policies could be better in some aspects - there are
    always going to be cases where new improvements to the manual would
    apply equally to older versions. But such flexibility comes at a cost -
    more work, and more risk of getting things wrong.


    I suppose it would be possible to make some kind of interactive
    reference where you selected your choice of compiler version,
    target processor, etc., and the text adapted to suit. That could be
    a useful tool, and help people see exactly what applied to their
    exact toolchain. But it would take a good deal of work, and a
    rather different thing from the current manuals.
    So, if you are not satisfied with quality of gcc docs supplied
    with your release of gcc compiler then the best you can do is to
    look at the docs for the most recent "release". I.e. right now
    11.2. Naturally, in order to be sure that these docs apply, you'd
    have to update the compiler itself too.

    I think most people /do/ look up the gcc documents online, rather
    than locally.

    I am pretty sure that that is the case. And that was exactly my argument
    *for* updating the online copy of the gcc 5 docs. And the argument of the
    maintainers was that people who read manuals locally do exist.

    The gcc website has many versions easily available, so you can read
    the manual for the version you are using. And while new features in
    later gcc versions add to the manuals, it's rare that there are
    changes to the text for existing features.

    In my specific case it was a change to the text of an existing feature.


    I think it's fair to say there is scope for improvement in the way gcc documentation is handled, but it is still a good deal better than many compilers and other projects.

    The documentation for "-MT" is substantially the same for the
    latest development version of gcc 12 and for gcc 3.0 from about 20
    years ago.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to David Brown on Sun Sep 12 04:42:24 2021
    On Sunday, September 12, 2021 at 12:46:42 PM UTC+3, David Brown wrote:
    On 12/09/2021 10:29, Michael S wrote:
    On Sunday, September 12, 2021 at 2:09:30 AM UTC+3, Manfred wrote:

    (As a side note, it wouldn't hurt if the GCC people updated their docs
    from time to time...)

    Your reference here was to the "make" manual, rather than the gcc documentation. But the gcc folk could add an example like this to their manual for the "-MT" option.

    gcc maintainers have a policy against updating/fixing docs.
    From their perspective, the compiler and docs are inseparable parts of a holy "release".
    Well, yes. The gcc manual of a particular version documents the gcc of
    that version. It seems an excellent policy to me.

    It would be a little different if they were publishing a tutorial on
    using gcc.
    I tried to change their mind about it a few years ago, but didn't succeed.
    Thankfully. It would be rather messy if they only had one reference
    manual which was full of comments about which versions the particular
    options or features applied to, as these come and go over time.


    That's not what I was suggesting.
    I was suggesting adding clarifications and suggestions for a feature (it was something about the function attribute 'optimize')
    that existed in gcc 5 to the online copy of the respective manual hosted on gcc.gnu.org/onlinedocs.
    Obviously, a previous version of the manual could have been available to historians among us in the gcc source control database.

    Instead, said clarifications+suggestions were added to the *next* release of the manual. Oh, in fact, no, it didn't make it into the gcc 6 manual. It was added to the gcc 7 manual.
    So, gcc 5 users now have no way to know that the changes in the docs apply to gcc 5 every bit as much as they apply to gcc 7 and later.

    I suppose it would be possible to make some kind of interactive
    reference where you selected your choice of compiler version, target processor, etc., and the text adapted to suit. That could be a useful
    tool, and help people see exactly what applied to their exact toolchain.
    But it would take a good deal of work, and a rather different thing
    from the current manuals.
    So, if you are not satisfied with quality of gcc docs supplied with
    your release of gcc compiler then the best you can do is to look at the docs for the most recent "release". I.e. right now 11.2. Naturally, in order to be sure that these docs apply, you'd have to update the
    compiler itself too.

    I think most people /do/ look up the gcc documents online, rather than locally.

    I am pretty sure that that is the case. And that was exactly my argument *for* updating the online copy of the gcc 5 docs.
    And the argument of the maintainers was that people who read manuals locally do exist.

    The gcc website has many versions easily available, so you can
    read the manual for the version you are using. And while new features
    in later gcc versions add to the manuals, it's rare that there are
    changes to the text for existing features.

    In my specific case it was a change to the text of existing feature.

    The documentation for "-MT"
    is substantially the same for the latest development version of gcc 12
    and for gcc 3.0 from about 20 years ago.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to David Brown on Sun Sep 12 06:01:01 2021
    On Sunday, September 12, 2021 at 2:55:15 PM UTC+3, David Brown wrote:
    On 12/09/2021 13:42, Michael S wrote:
    On Sunday, September 12, 2021 at 12:46:42 PM UTC+3, David Brown
    wrote:
    On 12/09/2021 10:29, Michael S wrote:
    On Sunday, September 12, 2021 at 2:09:30 AM UTC+3, Manfred wrote:


    (As a side note, it wouldn't hurt if the GCC people updated
    their docs from time to time...)

    Your reference here was to the "make" manual, rather than the gcc
    documentation. But the gcc folk could add an example like this to
    their manual for the "-MT" option.

    gcc maintainers have policy against updating/fixing docs. From
    their perspective, compiler and docs are inseparable parts of
    holy "release".
    Well, yes. The gcc manual of a particular version documents the gcc
    of that version. It seems an excellent policy to me.

    It would be a little different if they were publishing a tutorial
    on using gcc.
    I tried to change their mind about it few years ago, but didn't
    succeed.
    Thankfully. It would be rather messy if they only had one reference
    manual which was full of comments about which versions the
    particular options or features applied to, as these come and go
    over time.


    That's not what I was suggesting. I was suggesting adding
    clarifications and suggestions for a feature (it was something about the
    function attribute 'optimize') that existed in gcc 5 to the online copy
    of the respective manual hosted on gcc.gnu.org/onlinedocs. Obviously, a
    previous version of the manual could have been available to
    historians among us in the gcc source control database.

    Instead, said clarifications+suggestions were added to the *next*
    release of the manual. Oh, in fact, no, it didn't make it into the
    gcc 6 manual. It was added to the gcc 7 manual. So, gcc 5 users now have
    no way to know that the changes in the docs apply to gcc 5 every bit as
    much as they apply to gcc 7 and later.
    gcc 5 users /do/ have a way to see the change - they can look at later
    gcc references just as easily as older ones.


    They can see a change, but they can't be sure that it applies to their version of the compiler.

    Occasionally, changes to the manuals might be back-ported a couple of versions, just as changes to the compilers are back-ported if they are important enough (wrong code generation bugs).


    As far as I understand, the policy is strict - even obviously wrong statements in the released manual can't be fixed.

    I'm sure the policies could be better in some aspects - there are
    always going to be cases where new improvements to the manual would
    apply equally to older versions. But such flexibility comes at a cost -
    more work, and more risk of getting things wrong.

    I suppose it would be possible to make some kind of interactive
    reference where you selected your choice of compiler version,
    target processor, etc., and the text adapted to suit. That could be
    a useful tool, and help people see exactly what applied to their
    exact toolchain. But it would take a good deal of work, and a
    rather different thing from the current manuals.
    So, if you are not satisfied with quality of gcc docs supplied
    with your release of gcc compiler then the best you can do is to
    look at the docs for the most recent "release". I.e. right now
    11.2. Naturally, in order to be sure that these docs apply, you'd
    have to update the compiler itself too.

    I think most people /do/ look up the gcc documents online, rather
    than locally.

    I am pretty sure that that is the case. And that was exactly my argument
    *for* updating the online copy of the gcc 5 docs. And the argument of the
    maintainers was that people who read manuals locally do exist.

    The gcc website has many versions easily available, so you can read
    the manual for the version you are using. And while new features in
    later gcc versions add to the manuals, it's rare that there are
    changes to the text for existing features.

    In my specific case it was a change to the text of existing feature.

    I think it's fair to say there is scope for improvement in the way gcc documentation is handled, but it is still a good deal better than many compilers and other projects.

    Compared to the public variant of the llvm/clang docs - sure, but that's a pretty low bar.
    I have never looked at Apple's and Google's releases of the clang docs; hopefully they are better than the public release.
    Compared to Microsoft - it depends. Some parts of the gcc docs are better, others are worse.
    However, I would think that when Microsoft's maintainers see a mistake in their online docs for an old compiler, or, more likely, are pointed to a mistake by the community, they fix it without hesitation.

    The documentation for "-MT" is substantially the same for the
    latest development version of gcc 12 and for gcc 3.0 from about 20
    years ago.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Alf P. Steinbach@21:1/5 to Michael S on Sun Sep 12 15:22:55 2021
    On 12 Sep 2021 15:01, Michael S wrote:
    [snip] Comparatively to Microsoft - it depends. Some parts of the gcc
    docs are better others are worse. However I would think that when
    Microsoft's maintainers see a mistake in their online docs for old
    compiler, or, more likely, are pointed to mistake by community, they
    fix it without hesitation.

    Sometimes fastish, sometimes decades (!), sometimes never.

    As an example, `WinMain` was erroneously documented as the machine code
    entry point of a program, and the actual machine code entry point was undocumented, for decades. That has in turn caused other documentation
    errors. I don't know if it's been fixed.

    Apparently MS documentation fix delays depend on politics - as I see it,
    politics of three kinds (this is my impression over ~20 years):

    * Vendor lock-in measures. MS became infamous for this when certain
    internal e-mails became public during a court case fought against Sun.
    Well, maybe also earlier, for they were already infamous for the
    ¹“embrace, extend, extinguish” strategy to deal with common standards.

    * Internal sabotage. Different parts of Microsoft fight the others by
    subtle and not-so-subtle means. Including missing or misleading docs.

    * Unwillingness to admit sheer incompetence.

    - Alf

    Notes:
    ¹ https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguish

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Chris Vine@21:1/5 to Ian Collins on Sun Sep 12 15:55:17 2021
    On Sun, 12 Sep 2021 21:51:32 +1200
    Ian Collins <ian-news@hotmail.com> wrote:
    On 12/09/2021 21:10, Juha Nieminen wrote:

    There's no reason to use optimizations while writing code and testing it.

    There may be many!

    Unoptimised code being too slow or too big to run on the target is
    common in real-time or pretend (i.e. Linux) real-time systems. Getting
    more comprehensive error checking is another.

    Checking that code actually tests correctly when fully optimized is also important.

    The number of programmers who understand basic things like the strict
    aliasing rule, or when pointer arithmetic is permitted in C++, is in my experience low (it became obvious during consideration of P0593 for
    C++20 that many of those on the C++ standard committee didn't understand
    the second of those). In fact I suspect the number of programmers who
    fully understand all aspects of C++ and who are fully versed in the
    standard is very small and approaching zero. Correspondingly, I suspect
    that the number of C++ programs which do not unwittingly rely on
    undefined behaviour also approaches 0.

    Choosing -O3 on testing will at least tell you whether your particular
    compiler version in question, when optimizing code with undefined
    behaviour that you had not previously recognized as undefined, will
    give results contradicting your expectations. Programmers who treat C
    as if it were a high level assembler language are particularly prone to
    this problem.
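
    A minimal sketch of the kind of thing being described (the names and the
    specific transformation are illustrative; exactly what a given compiler
    does with it is not guaranteed, which is rather the point):

        #include <cstdio>
        #include <cstring>

        int value;

        int set_and_check(float* f) {
            value = 1;
            *f = 0.0f;       // undefined if f actually points at 'value'
            return value;    // the optimiser may assume *f cannot touch 'value'
        }

        int main() {
            // Passing an int's address as float* is the strict-aliasing violation.
            int r = set_and_check(reinterpret_cast<float*>(&value));
            // At -O0 this typically prints 0 (the bit pattern of 0.0f); at
            // -O2/-O3 a conforming compiler is entitled to print 1, because
            // it may cache 'value' across the store through f.
            std::printf("%d\n", r);

            // The well-defined way to reinterpret the bytes is memcpy:
            float zero = 0.0f;
            std::memcpy(&value, &zero, sizeof value);
            return 0;
        }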

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From HorseyWorsey@the_stables.com@21:1/5 to David Brown on Sun Sep 12 16:13:11 2021
    On Sun, 12 Sep 2021 12:49:14 +0200
    David Brown <david.brown@hesbynett.no> wrote:
    On 12/09/2021 11:23, HorseyWorsey@the_stables.com wrote:
    Yes, the main makefile is written manually (or at least, that's what I

    Exactly.

    It is /not/ the /compiler's/ job to know this! It is the /build/ system
    that says what programs are run on which files in order to create all
    the files needed.

    Exactly.

    And that is supposed to be simpler than writing a Makefile yourself is it?
    Riiiiiight.

    Who do you think wrote the makefile? A friendly goblin? /I/ wrote the
    makefile. /I/ put rules in the makefile to run "gcc -MD" as and when
    needed in order to generate the dependencies. The point is that no one
    - not me, nor anyone else - needs to keep manually updating the makefile
    to track the simple dependencies that can be calculated automatically.

    "Simple". Exactly.

    I don't yet know whether you are wilfully ignorant, or trolling.

    I'm rapidly getting the impression you and the others completely missed my
    original point despite stating it numerous times. Frankly I can't be
    bothered to continue with this.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From HorseyWorsey@the_stables.com@21:1/5 to David Brown on Sun Sep 12 16:14:38 2021
    On Sun, 12 Sep 2021 12:55:21 +0200
    David Brown <david.brown@hesbynett.no> wrote:
    On 12/09/2021 11:29, HorseyWorsey@the_stables.com wrote:
    On Sun, 12 Sep 2021 09:21:50 -0000 (UTC)
    Juha Nieminen <nospam@thanks.invalid> wrote:
    HorseyWorsey@the_stables.com wrote:
    On Sun, 12 Sep 2021 08:56:42 -0000 (UTC)
    Juha Nieminen <nospam@thanks.invalid> wrote:
    HorseyWorsey@the_stables.com wrote:
    Very nice. Now you have a single globals.h type file (VERY common in large
    projects). How does gcc figure out which C files it needs to build from
    that?


    It doesn't. It only compiles what you tell it to compile.

    It has to be *something else* that runs it and tells it what to
    compile. Often this is the 'make' program (which is reading a
    file usually named 'Makefile').

    Well thanks for that valuable input, we're all so much more informed now.
    You made that sound sarcastic. If it is indeed sarcasm, I don't
    really understand why.

    Try following a thread before replying. A couple of posters were claiming
    the compiler could automate the entire build system and I gave some basic
    examples of why it couldn't. Now one of them is back-pedalling and basically
    saying it can automate all the bits except the bits it can't, when you need
    to edit the makefile yourself. Genius. Then you come along and mention
    Makefiles. Well thanks for the heads up, I'd forgotten what they were called.


    Ah, so you are saying that /you/ have completely misunderstood the
    thread and what people wrote, and thought mocking would make you look
    clever.

    It's nice to know you can spot mocking even if you can't follow a simple thread.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From HorseyWorsey@the_stables.com@21:1/5 to Paavo Helde on Sun Sep 12 16:22:25 2021
    On Sun, 12 Sep 2021 12:57:10 +0300
    Paavo Helde <myfirstname@osa.pri.ee> wrote:
    12.09.2021 12:23 HorseyWorsey@the_stables.com kirjutas:
    On Sat, 11 Sep 2021 18:40:58 +0200
    David Brown <david.brown@hesbynett.no> wrote:
    On 10/09/2021 17:47, HorseyWorsey@the_stables.com wrote:
    On Fri, 10 Sep 2021 17:38:23 +0200
    David Brown <david.brown@hesbynett.no> wrote:
    Except it's dependency automation for noddy builds. For any complex builds
    you're going to need a build system, hence the examples I gave.

    Do you still not understand what is being discussed here? "gcc -MD" is
    /not/ a replacement for a build system. It is a tool to help automate
    your build systems. The output of "gcc -MD" is a dependency file, which
    your makefile (or other build system) imports.

    Yes, I understand perfectly. You create huge dependency files which either
    have to be stored in git (or similar) and updated when appropriate, or auto

    What on earth are you babbling about? That's becoming insane.

    In the rare chance you are not actually trolling: the dependency files
    are generated by each build afresh, and they get used by the next build
    in the same build tree for deciding which source files need to be
    recompiled when some header file has changed. This is all automatic,
    there are no manual steps involved except for setting it up once when
    writing the initial Makefile (in case one still insists on writing
    Makefiles manually).

    There is no more point to put the dependency files into git than there
    is to put the compiled object files there (in fact, a dependency file is
    useless without object files).

    Oh dear. I give up.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to HorseyWorsey@the_stables.com on Sun Sep 12 18:45:07 2021
    On 12/09/2021 18:13, HorseyWorsey@the_stables.com wrote:

    I don't yet know whether you are wilfully ignorant, or trolling.

    I'm rapidly getting the impression you and the others completely missed my original point despite stating it numerous times.

    You had no point - you failed to read or understand someone's post,
    thought it would make you look smart or cool to mock them, and have been digging yourself deeper in a hole ever since.

    As a side effect, you might have learned something - but I am sure you
    will deny that. Other people have, which is the beauty of Usenet - even
    the worst posters can sometimes inspire a thread that is helpful or
    interesting to others.

    Frankly I can't be bothered
    to continue with this.


    I suppose that is as close to an apology and admission of error as we
    will ever get.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Horsey...@the_stables.com on Sun Sep 12 09:48:22 2021
    On Sunday, September 12, 2021 at 7:13:27 PM UTC+3, Horsey...@the_stables.com wrote:
    On Sun, 12 Sep 2021 12:49:14 +0200
    David Brown <david...@hesbynett.no> wrote:
    On 12/09/2021 11:23, HorseyWorsey@the_stables.com wrote:
    Yes, the main makefile is written manually (or at least, that's what I

    Exactly.

    It is /not/ the /compiler's/ job to know this! It is the /build/ system
    that says what programs are run on which files in order to create all
    the files needed.

    Exactly.

    And that is supposed to be simpler than writing a Makefile yourself is it?
    Riiiiiight.

    Who do you think wrote the makefile? A friendly goblin? /I/ wrote the
    makefile. /I/ put rules in the makefile to run "gcc -MD" as and when
    needed in order to generate the dependencies. The point is that no one
    - not me, nor anyone else - needs to keep manually updating the makefile
    to track the simple dependencies that can be calculated automatically.

    "Simple". Exactly.

    I don't yet know whether you are wilfully ignorant, or trolling.

    I'm rapidly getting the impression you and the others completely missed my
    original point despite stating it numerous times. Frankly I can't be
    bothered to continue with this.


    Frankly, from the view of the discussion I see on Google Groups it's quite difficult to figure out what you are arguing for. Or against.

    Are you saying that all of us have to teach ourselves cmake, even despite the fact that writing makefiles by hand + utilizing the .d files generated by the compiler has served our needs rather well for the last 10-20-30 years?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Michael S on Sun Sep 12 18:53:10 2021
    On 12/09/2021 15:01, Michael S wrote:
    On Sunday, September 12, 2021 at 2:55:15 PM UTC+3, David Brown

    I think it's fair to say there is scope for improvement in the way
    gcc documentation is handled, but it is still a good deal better
    than many compilers and other projects.

    Comparatively to public variant of llvm/clang docs - sure, but that's
    pretty low bar. I never looked at apple's and google's releases of
    clang docs, hopefully they are better than public release.
    Comparatively to Microsoft - it depends. Some parts of the gcc docs
    are better others are worse. However I would think that when
    Microsoft's maintainers see a mistake in their online docs for old
    compiler, or, more likely, are pointed to mistake by community, they
    fix it without hesitation.


    I have not nearly enough experience with the documentation of MS's
    compiler to tell - I have only ever looked up a few points. (The same
    with clang.) I've read manuals for many other compilers over the years,
    which are often much worse, but none of these tools are direct
    comparisons with gcc (being commercial embedded toolchains targeting
    one or a few specific microcontroller cores).

    One especially "fun" case was a toolchain that failed to zero-initialise non-local objects that were not explicitly initialised - what you
    normally get by startup code clearing the ".bss" segment. This
    "feature" was documented in a footnote in the middle of the manual,
    noting that the behaviour was not standards conforming and would
    silently break existing C code.
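
    For anyone unfamiliar with the rule such a toolchain breaks: C and C++
    guarantee that objects with static storage duration and no explicit
    initialiser are zero-initialised before main() runs, which is normally
    what the startup code's clearing of .bss provides. A small illustration
    (the names are invented):

        #include <cstdio>

        static unsigned error_count;   // guaranteed to start at 0
        static char buffer[64];        // guaranteed to start all-zero
        int mode;                      // likewise, external linkage or not

        int main() {
            // A toolchain that skips clearing .bss leaves these holding
            // whatever happened to be in RAM, silently breaking perfectly
            // well-defined code.
            std::printf("%u %d %d\n", error_count, buffer[0], mode);
        }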

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to David Brown on Sun Sep 12 11:29:28 2021
    On Sunday, September 12, 2021 at 7:53:29 PM UTC+3, David Brown wrote:
    On 12/09/2021 15:01, Michael S wrote:
    On Sunday, September 12, 2021 at 2:55:15 PM UTC+3, David Brown
    I think it's fair to say there is scope for improvement in the way
    gcc documentation is handled, but it is still a good deal better
    than many compilers and other projects.

    Comparatively to public variant of llvm/clang docs - sure, but that's pretty low bar. I never looked at apple's and google's releases of
    clang docs, hopefully they are better than public release.
    Comparatively to Microsoft - it depends. Some parts of the gcc docs
    are better others are worse. However I would think that when
    Microsoft's maintainers see a mistake in their online docs for old compiler, or, more likely, are pointed to mistake by community, they
    fix it without hesitation.

    I have not nearly enough experience with the documentation of MS's
    compiler to tell - I have only ever looked up a few points. (The same
    with clang.) I've read manuals for many other compilers over the years,
    which are often much worse, but none of these tools are direct
    comparisons with gcc (being commercial embedded toolchains targetting
    one or a few specific microcontroller cores).

    One especially "fun" case was a toolchain that failed to zero-initialise non-local objects that were not explicitly initialised - what you
    normally get by startup code clearing the ".bss" segment. This
    "feature" was documented in a footnote in the middle of the manual,
    noting that the behaviour was not standards conforming and would
    silently break existing C code.

    I suppose you are talking about TI compilers.
    IIRC, in their old docs (around 1998 to 2002) it was documented in a relatively clear way.
    But it was quite a long time ago, so it's possible that I misremember.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vir Campestris@21:1/5 to Chris Vine on Sun Sep 12 21:18:43 2021
    On 12/09/2021 15:55, Chris Vine wrote:
    On Sun, 12 Sep 2021 21:51:32 +1200
    Ian Collins <ian-news@hotmail.com> wrote:
    On 12/09/2021 21:10, Juha Nieminen wrote:

    There's no reason to use optimizations while writing code and testing it.
    There may be many!

    Unoptimised code being too slow or too big to run on the target is
    common in real-time or pretend (i.e. Linux) real-time systems. Getting
    more comprehensive error checking is another.

    Checking that code actually tests correctly when fully optimized is also important.

    The number of programmers who understand basic things like the strict aliasing rule, or when pointer arithmetic is permitted in C++, is in my experience low (it became obvious during consideration of P0593 for
    C++20 that many of those on the C++ standard committee didn't understand
    the second of those). In fact I suspect the number of programmers who
    fully understand all aspects of C++ and who are fully versed in the
    standard is very small and approaching zero. Correspondingly, I suspect
    that the number of C++ programs which do not unwittingly rely on
    undefined behaviour also approaches 0.

    Choosing -O3 on testing will at least tell you whether your particular compiler version in question, when optimizing code with undefined
    behaviour that you had not previously recognized as undefined, will
    give results contradicting your expectations. Programmers who treat C
    as if it were a high level assembler language are particularly prone to
    this problem.


    The fun one I've had when debugging is when the compiler correctly spots
    that a bit of code really ought to be a function because it's
    duplicated. Which means you end up in the same bit of machine code from
    two different source locations.

    This is especially fun when looking at post-mortem dump files of some
    code somebody else wrote.
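
    An illustrative sketch of how that can happen (function names invented):
    with identical-code-folding style optimisations - gcc's -fipa-icf, which
    is typically enabled at -O2, or ICF in the linker - two textually separate
    functions can end up sharing one block of machine code, so an address seen
    at a breakpoint or in a post-mortem dump cannot tell you which source-level
    function was executing.

        #include <cstdio>

        // Identical bodies: the compiler or linker may emit only one copy
        // and make the other symbol an alias to it.
        int scale_price(int x)  { return x * 3 + 7; }
        int scale_weight(int x) { return x * 3 + 7; }

        int main() {
            std::printf("%d %d\n", scale_price(10), scale_weight(20));
        }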

    While I've only ever once or twice found a genuine compiler-made-bad-code
    bug in my entire career, UB resulting in different behaviour from bad
    source is much more common. And if you want to be sure about that you
    need to debug with the target optimisation level.

    Andy

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ian Collins@21:1/5 to Bart on Mon Sep 13 12:51:22 2021
    On 12/09/2021 22:17, Bart wrote:
    On 12/09/2021 01:11, Ian Collins wrote:
    On 12/09/2021 07:56, Bart wrote:
    On 11/09/2021 18:15, David Brown wrote:
    On 11/09/2021 01:33, Bart wrote:
    On 10/09/2021 12:47, David Brown wrote:
    On 10/09/2021 11:10, Juha Nieminen wrote:

    However, gcc -O0 is quite useful in development. For starters, when you
    are interactively debugging (eg. with gdb, or any of the myriads of
    debuggers in different IDEs), you usually don't want things like your
    functions being inlined, loops unrolled, compile-time arithmetic
    (other than, of course, that of constexpr/consteval functions), etc.

    I always compile with debugging information, and I regularly use
    breakpoints, stepping, assembly-level debug, etc.  I /hate/ having to
    deal with unoptimised "gcc -O0" code - it is truly awful.  You have vast
    amounts of useless extra code that hides the real action.  In the
    assembly, the code to load and store variables from the stack, instead
    of registers, often outweighs the actual interesting stuff.
    Single-stepping through your important functions becomes far harder
    because all the little calls that should be inlined out of existence,
    become layers of calls that you might have to dig through.  Most of
    what you are looking for is drowned out in the noise.

    I agree with JN. With optimised code, what you have may have little
    relationship with the original source code. If you're trying to trace a
    logic problem, how do you map machine code to the corresponding source?

    It's a /lot/ easier with -O1 than -O0.  Or you use the debugger.

    Oh, you mean look at the ASM manually? In that case definitely through
    -O0. If I take this fragment:

          for (int i=0; i<100; ++i) {
              a[i]=b+c*d;
              fn(a[i]);
          }

    <snip listings>

    So which one gets the prize?

    The one which runs correctly the fastest!

    Let's say none of them run correctly and your job is to find out why. Or maybe you're comparing two compilers at the same optimisation level, and
    you want to find why one runs correctly and the other doesn't.

    In that rare event (unit tests pass, target behavior is incorrect),
    study whatever assembly is generated. This will be optimised code,
    which is often fun to read...

    Or maybe this is part of a benchmark where writing to a[i] is part of
    the test, but it's hard to gauge whether one lot of generated code is
    better than another, because the other has disappeared completely!

    (I suppose in your world, a set of benchmark results where every one
    runs in 0.0 seconds is perfection! I would say those are terrible benchmarks.)

    Where did you get the strange idea from? Benchmarks measure
    performance, a step change in the results is worth checking out (and we
    often do).

    Have a look at my first example above; would the a[i]=b+c*d be
    associated with anything more meaningful than those two lines of
    assembly?

    Does it matter?

    Ask why you're looking at the ASM in the first place. If there's no discernible correspondence with your source, then you might as well look
    at any random bit of ASM code; it would be just as useful!

    If your target code is, by necessity, optimised then you don't have a
    choice.

    And for source code, what difference should it make whether the
    generated code is optimised or not?


    Because it is not always correct!

    Sometimes the issue is on the lines of "Why is this taking so long?  I
    had expected less than 0.1µs, but it is taking nearly 0.2µs."  You need
    to look at the assembly for that.

    That's the kind of thing that the unit tests Ian is always on about
    don't really work.

    Unit tests test logic, not performance.  We run automated regression
    tests on real hardware to track performance.  If there's a change
    between builds, it's trivial to identify the code commits that caused
    the change.

    Most of the stuff I do is not helped with unit tests.

    So it doesn't have any functions that can be tested? That's a new one
    on me!

    Where there are things that can possibly be tested by ticking off entries
    in a list, you find the real problems come up with combinations or contexts
    you haven't anticipated and that can't be enumerated.

    If you find a problem, add a test for it to prove that you have fixed it
    and to make sure it does not recur.

    --
    Ian.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Keith Thompson@21:1/5 to Michael S on Sun Sep 12 17:38:19 2021
    Michael S <already5chosen@yahoo.com> writes:
    On Sunday, September 12, 2021 at 12:46:42 PM UTC+3, David Brown wrote:
    On 12/09/2021 10:29, Michael S wrote:
    [...]
    gcc maintainers have policy against updating/fixing docs.
    From their perspective, compiler and docs are inseparable parts of holy "release".
    Well, yes. The gcc manual of a particular version documents the gcc of
    that version. It seems an excellent policy to me.

    It would be a little different if they were publishing a tutorial on
    using gcc.
    I tried to change their mind about it a few years ago, but didn't succeed.

    Thankfully. It would be rather messy if they only had one reference
    manual which was full of comments about which versions the particular
    options or features applied to, as these come and go over time.

    That's not what I was suggesting.
    I was suggesting to add an clarifications and suggestions for a
    feature (it was something about function attribute 'optimize') that
    existed in gcc5 to online copy of respective manual hosted on gcc.gnu.org/onlinedocs Obviously, a previous version of the manual
    could have been available to historians among us in gcc source control database.

    Then the online version of the gcc-5.5.0 documentation would be out of
    sync with the released version. I'm not saying that's completely a bad
    thing, but it's something to consider.

    Instead, said clarifications+suggestions were added to the *next*
    release of the manual. Oh, in fact, no, it didn't made it into gcc6
    manual. It was added to gcc 7 manual.
    So, gcc5 users now have no way to know that changes in docs apply to
    gcc5 every bit as much as they apply to gcc7 and later.

    gcc has always (?) been released as gcc-X.Y.Z.tar.gz (or .bz2), which
    includes source code and documentation. If the documentation in gcc-5.5.0.tar.bz2 incorrectly describes the behavior of gcc-5.5.0,
    that's obviously a problem. I think the gcc maintainers would consider
    that to be a bug in the gcc 5.5.0 release, and they would no more release
    an updated "gcc-5.5.0.tar.bz2" to correct a documentation error than to
    correct a code error. That would cause too much confusion for users who already downloaded the old version of the tar file.

    If they considered the error important enough to justify a new release,
    they could release a new gcc-5.5.1.tar.bz2 or gcc-5.6.0.tar.bz2, perhaps
    with only documentation updates (which would be mentioned in the release notes). But the long-term solution is to fix it in a newer release (the
    latest is 11.2.0), and there's a legitimate question about how much
    effort is justified to support gcc-5.* users.

    Adding footnotes to the online versions of the manuals isn't a bad idea,
    but again there are questions about how much effort it takes to support
    something that few people are using.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips
    void Void(void) { Void(); } /* The recursive call of the void */

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Keith Thompson@21:1/5 to Manfred on Sun Sep 12 19:10:32 2021
    Manfred <noname@add.invalid> writes:
    [...]
    Yes, for the record the page that I linked is not about a specific
    version of 'make', it is part of the GNU make online manual.

    The link you posted was:

    https://www.gnu.org/software/make/manual/html_node/Automatic-Prerequisites.html

    The URL doesn't refer to a particular version of GNU make, but I believe
    it will always refer to the latest version of the manual. If you go up
    a couple of levels, it says:

    This is Edition 0.75, last updated 17 January 2020, of The GNU Make
    Manual, for GNU make version 4.3.

    I expect that when 4.4 is released the URL will refer to a newer version
    of the manual.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips
    void Void(void) { Void(); } /* The recursive call of the void */

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Manfred@21:1/5 to David Brown on Mon Sep 13 03:52:53 2021
    On 9/12/2021 11:46 AM, David Brown wrote:
    On 12/09/2021 10:29, Michael S wrote:
    On Sunday, September 12, 2021 at 2:09:30 AM UTC+3, Manfred wrote:

    (As a side note, it wouldn't hurt if the GCC people updated their docs
    from time to time...)


    Your reference here was to the "make" manual, rather than the gcc documentation. But the gcc folk could add an example like this to their manual for the "-MT" option.


    Yes, I know I linked a page from the "make" docs, and I meant to write
    the /GNU/ people (but my fingers typed gcc) - or whatever team takes
    care of GNU make.
    I think the GCC man page is OK, and so is the paragraph about -MT, but it
    would be nice for a "GNU make" page titled "Automatic Prerequisites" to
    give an example of the 1-line rule command that uses -MT instead of a
    'sed' hieroglyph sequence (which I can read, but calling it user friendly
    could be controversial) + a temp file + a couple of rogue rm -f commands,
    especially considering that -MT has been available since gcc 3.04
    (i.e. it dates back a /long/ time).
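
    For reference, the recipe on that manual page is roughly the sed-based one
    below, while -MT lets the rule name both targets directly; this is a sketch
    of the two styles (pick one), not a quote of the manual:

        # Classic "Automatic Prerequisites" style, with sed and a temp file:
        %.d: %.c
                @set -e; rm -f $@; \
                 $(CC) -M $(CPPFLAGS) $< > $@.$$$$; \
                 sed 's,\($*\)\.o[ :]*,\1.o $@ : ,g' < $@.$$$$ > $@; \
                 rm -f $@.$$$$

        # Much the same effect in one line using -MM and -MT:
        %.d: %.c
                $(CC) -MM -MT '$*.o $@' $(CPPFLAGS) $< > $@

    Both produce a fragment of the form "foo.o foo.d: foo.c foo.h ...", so the
    .d file is remade whenever the prerequisites of the object change.
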
    I guess someone from the gcc team somewhere during the last couple of
    decades could have sent a carrier pigeon to their colleague in the
    'make' maintenance team next door, with a small paper roll along the
    lines of "hey, we've got this cool feature that might make makefile
    writers' life easier, what do you think about it?" and the receiver
    might have put hand to their doc page...


    gcc maintainers have policy against updating/fixing docs.
    From their perspective, compiler and docs are inseparable parts of holy "release".

    Well, yes. The gcc manual of a particular version documents the gcc of
    that version. It seems an excellent policy to me.

    Yes, for the record the page that I linked is not about a specific
    version of 'make', it is part of the GNU make online manual.


    It would be a little different if they were publishing a tutorial on
    using gcc.

    I tried to change their mind about it few years ago, but didn't succeed.

    Thankfully. It would be rather messy if they only had one reference
    manual which was full of comments about which versions the particular
    options or features applied to, as these come and go over time.

    I suppose it would be possible to make some kind of interactive
    reference where you selected your choice of compiler version, target processor, etc., and the text adapted to suit. That could be a useful
    tool, and help people see exactly what applied to their exact toolchain.
    But it would take a good deal of work, and a rather different thing
    from the current manuals.


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to HorseyWorsey@the_stables.com on Sun Sep 12 23:13:24 2021
    On 9/12/21 5:29 AM, HorseyWorsey@the_stables.com wrote:
    ...
    Try following a thread before replying. A couple of posters were claiming the compiler could automate the entire build system

    I have been following the thread, and no one said anything of the kind.
    The claim that started this subthread was:

    On 9/8/21 1:22 PM, Paavo Helde wrote:
    Such dependencies are taken care automatically by the gcc -MD option,
    which you have to specify for both Makefile and CMake based builds

    Note that he only referred to "such dependencies"; in context, that
    refers only to the dependencies that result from #include directives. He
    made no claim that all dependencies could be automated that way.
    Secondly, he quite clearly indicated that gcc -MD was to be used with
    Makefile or CMake based builds; he did not in any way suggest that it
    replaced the need for a build system.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to HorseyWorsey@the_stables.com on Sun Sep 12 23:12:55 2021
    On 9/12/21 5:23 AM, HorseyWorsey@the_stables.com wrote:
    On Sat, 11 Sep 2021 18:40:58 +0200
    David Brown <david.brown@hesbynett.no> wrote:
    On 10/09/2021 17:47, HorseyWorsey@the_stables.com wrote:
    ...
    Do you still not understand what is being discussed here? "gcc -MD" is
    /not/ a replacement for a build system. It is a tool to help automate
    your build systems. The output of "gcc -MD" is a dependency file, which
    your makefile (or other build system) imports.

    Yes, I understand perfectly. You create huge dependency files which either have to be stored in git (or similar) and updated when appropriate, or auto

    Why in the world would you store them in git? They are
    compiler-generated files, not source files. Do you normally keep .o
    files in git? How about executables? You don't need to update them
    manually; if you set up your build system properly, they get updated automatically when needed.
    The convention I've seen is that version control systems like git are
    used only to save the source files from which other files are generated
    - they aren't used to store generated files. I know of one main
    exception to that, when using clearcase/clearmake - but the feasibility
    of doing that depends upon features of clearcase and clearmake that are
    not, to the best of my knowledge, shared by git and make, respectively.

    A dependencies file retrieved from git would have to be replaced almost immediately with a freshly generated one, which makes storing it there
    even less reasonable than storing a .o file.

    generated in the makefile and then further used in the makefile which has to be manually written anyway unless its simple so what exactly is the point?

    It greatly simplifies writing the makefile - it need only contain an
    include line referencing the dependency file, rather than containing all
    of those individual dependencies.
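
    For anyone who has never looked inside one: a dependency file is itself
    just a makefile fragment, so the whole mechanism really is one include
    line plus whatever gcc wrote. A hypothetical main.d (file names invented)
    looks like:

        # main.d, as written by "gcc -MMD -MP" while compiling main.c:
        main.o: main.c app.h config.h
        app.h:
        config.h:

    and the hand-written makefile needs only something like "-include $(DEPS)"
    to import every such fragment; the empty header rules come from -MP and
    stop make complaining if a header is later deleted or renamed.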

    Also using
    the compiler is sod all use if you need to fire off a script to auto generate

    some code first.

    No, it is not. It works fine - as long as you understand how your build

    Excuse me? Ok, please do tell me how the compiler knows which script file to run to generate the header file. This'll be interesting.

    The build system is what needs to know how to generate the header file.
    The dependency file created by gcc -MD is intended to be used by the
    build system, not as a replacement for a build system. All the compiler
    needs to know is whether any given translation unit that it compiles
    #includes the header; if so, it generates the appropriate line in a
    dependency file.

    well as lots of other C and header files). If I change the text file or
    the Python script and type "make", then first a new header and C file
    are created from the text file. Then "gcc -MD" is run on the C file,
    generating a new dependency file, since the dependency file depends on
    the header and the C file. Then this updated dependency file is
    imported by make, and shows that the object file (needed for the link)
    depends on the updated C file, so the compiler is called on the file.

    And that is supposed to be simpler than writing a Makefile yourself is it?

    It certainly is. It happens automatically when your build system is
    properly set up, without requiring user intervention to update the
    dependencies when they change, and it includes all dependencies, both
    direct and indirect, which is something so difficult that most people
    wouldn't even attempt it if forced to insert the dependencies manually.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Juha Nieminen@21:1/5 to Ian Collins on Mon Sep 13 05:28:20 2021
    Ian Collins <ian-news@hotmail.com> wrote:
    On 12/09/2021 21:10, Juha Nieminen wrote:

    There's no reason to use optimizations while writing code and testing it.

    There may be many!

    Unoptimised code being too slow or too big to run on the target is
    common in real-time or pretend (i.e. Linux) real-time systems. Getting
    more comprehensive error checking is another.

    Rather obviously you need to test that your program works when compiled
    with optimizations (there are situations where bugs manifest themselves
    only when optimizations are turned on).

    But that wasn't my point. My point is that during development, when you are
    writing, testing and debugging your code, you rarely need to turn on optimizations. You can, of course (especially if it makes little
    difference in compilation speed), but at the point where -O0 takes
    10 seconds to compile and -O3 takes 1 minute to compile, you might
    reconsider.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Juha Nieminen@21:1/5 to HorseyWorsey@the_stables.com on Mon Sep 13 05:23:16 2021
    HorseyWorsey@the_stables.com wrote:
    Yes, it was sarcasm.

    Well, good luck trying to get any more answers from me. I don't usually
    placate assholes.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Michael S on Mon Sep 13 08:38:15 2021
    On 12/09/2021 20:29, Michael S wrote:
    On Sunday, September 12, 2021 at 7:53:29 PM UTC+3, David Brown wrote:
    On 12/09/2021 15:01, Michael S wrote:
    On Sunday, September 12, 2021 at 2:55:15 PM UTC+3, David Brown
    I think it's fair to say there is scope for improvement in the way
    gcc documentation is handled, but it is still a good deal better
    than many compilers and other projects.

    Compared to the public variant of the llvm/clang docs - sure, but that's a
    pretty low bar. I never looked at Apple's and Google's releases of the
    clang docs; hopefully they are better than the public release.
    Compared to Microsoft - it depends. Some parts of the gcc docs are
    better, others are worse. However, I would think that when Microsoft's
    maintainers see a mistake in their online docs for an old compiler, or,
    more likely, are pointed to a mistake by the community, they fix it
    without hesitation.

    I don't have nearly enough experience with the documentation of MS's
    compiler to tell - I have only ever looked up a few points. (The same
    with clang.) I've read manuals for many other compilers over the years,
    which are often much worse, but none of these tools are direct
    comparisons with gcc (being commercial embedded toolchains targeting
    one or a few specific microcontroller cores).

    One especially "fun" case was a toolchain that failed to zero-initialise
    non-local objects that were not explicitly initialised - what you
    normally get by startup code clearing the ".bss" segment. This
    "feature" was documented in a footnote in the middle of the manual,
    noting that the behaviour was not standards conforming and would
    silently break existing C code.

    I suppose, you are talking about TI compilers.
    IIRC, in their old docs (around 1998 to 2002) it was documented in a relatively clear way.
    But it was quite a long time ago so it's possible that I misremember.



    I was omitting the names, to protect the guilty. Yes, it was TI
    compilers. And not just such old ones either, or just one target device
    - I have seen the same "feature" on wildly separate tools for at least
    two TI device architectures. Neither was well documented, certainly not
    with the flashing red warning lamps you would expect for such a
    pointless, unexpected and critical deviation from standard C.
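
    For anyone who has not run into it: this is the guarantee that was being
    dropped. A minimal, self-contained C illustration of what the standards
    require (nothing here is TI-specific):

        #include <stdio.h>

        /* Objects with static storage duration and no explicit initialiser
           must start out as zero - normally achieved by startup code that
           clears the .bss segment before main() runs. */
        static int counter;
        static double table[4];

        int main(void)
        {
            /* A conforming implementation must print "0 0.000000". */
            printf("%d %f\n", counter, table[0]);
            return 0;
        }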

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Vir Campestris on Mon Sep 13 08:48:44 2021
    On 12/09/2021 22:18, Vir Campestris wrote:
    On 12/09/2021 15:55, Chris Vine wrote:
    On Sun, 12 Sep 2021 21:51:32 +1200
    Ian Collins <ian-news@hotmail.com> wrote:
    On 12/09/2021 21:10, Juha Nieminen wrote:

    There's no reason to use optimizations while writing code and
    testing it.

    There may be many!

    Unoptimised code being too slow or too big to run on the target is
    common in real-time or pretend (i.e. Linux) real-time systems.  Getting
    more comprehensive error checking is another.

    Checking that code actually tests correctly when fully optimized is also
    important.

    The number of programmers who understand basic things like the strict
    aliasing rule, or when pointer arithmetic is permitted in C++, is in my
    experience low (it became obvious during consideration of P0593 for
    C++20 that many of those on the C++ standard committee didn't understand
    the second of those).  In fact I suspect the number of programmers who
    fully understand all aspects of C++ and who are fully versed in the
    standard is very small and approaching zero.  Correspondingly, I suspect
    that the number of C++ programs which do not unwittingly rely on
    undefined behaviour also approaches 0.

    Choosing -O3 on testing will at least tell you whether your particular
    compiler version in question, when optimizing code with undefined
    behaviour that you had not previously recognized as undefined, will
    give results contradicting your expectations.  Programmers who treat C
    as if it were a high level assembler language are particularly prone to
    this problem.
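
    As a concrete illustration of the strict-aliasing point (a deliberately
    contrived sketch, not code from any real project): a fragment like this
    often "works" at -O0 but can legitimately change behaviour at -O2 or -O3,
    because the compiler may assume a long and a double never alias.

        /* If p and q happen to point at the same memory, this is undefined
           behaviour.  An optimising compiler may assume they do not alias,
           keep *p in a register, and return 1 even though the store through
           q overwrote the same bytes. */
        long f(long *p, double *q)
        {
            *p = 1;
            *q = 2.0;
            return *p;
        }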


    The fun one I've had when debugging is when the compiler correctly spots
    that a bit of code really ought to be a function because it's
    duplicated. Which means you end up in the same bit of machine code from
    two different source locations.

    The relationship between source code and object code sections is far
    looser with modern optimising compilers, and C++ rather than C. It does
    pose challenges for debugging sometimes, but it is worth the cost (IMHO)
    for being able to write better code with fewer bugs in the first place!


    This is especially fun when looking at post-mortem dump files of some
    code somebody else wrote.


    Debugging someone else's code is always horrible...

    While I've only ever once or twice found a genuine
    compiler-made-bad-code bug in my entire career, UB resulting in different
    behaviour from bad source is much more common. And if you want to be
    sure about that you need to debug with the target optimisation level.


    I have hit a few bugs in compilers over the years. I've occasionally
    had to use compilers that had a fair number of bugs, which can be an interesting experience, and I've even found bugs in gcc on occasion. I
    agree with you that you need to do your debugging and testing at the
    target optimisation level (with occasional variations during
    bug-hunting, depending on the kind of bug). And when the compiler bug
    involves ordering of instructions in connection with interrupt
    disabling, you need to be examining the assembly code when fully
    optimising - there are no alternatives (certainly not testing).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Juha Nieminen on Mon Sep 13 09:23:44 2021
    On 13/09/2021 07:28, Juha Nieminen wrote:
    Ian Collins <ian-news@hotmail.com> wrote:
    On 12/09/2021 21:10, Juha Nieminen wrote:

    There's no reason to use optimizations while writing code and testing it.
    There may be many!

    Unoptimised code being too slow or too big to run on the target is
    common in real-time or pretend (i.e. Linux) real-time systems. Getting
    more comprehensive error checking is another.

    Rather obviously you need to test that your program works when compiled
    with optimizations (there are situations where bugs manifest themselves
    only when optimizations are turned on).

    But that wasn't my point. My point is that during development, when you are writing, testing and debugging your code, you rarely need to turn on optimizations. You can, of course (especially if it makes little
    difference in compilation speed), but at the point where -O0 takes
    10 seconds to compile and -O3 takes 1 minute to compile, you might reconsider.


    Such time differences are not impossible, I suppose, but very rare.

    It is not often that "gcc -O3" is a good choice - the code produced is
    seldom faster, and regularly slower, than "gcc -O2". It is usually only
    worth using if you have code that deals with large amounts of data
    where auto-vectorisation helps. And even then, most people use it
    incorrectly - without additional flags to specify the target processor
    (or "-march=native") you miss out on many of the possibilities.
    Otherwise you risk slowdowns due to the significantly larger code and
    higher cache usage, rather than speedups due to loop unrolling and the
    like. I'd only use -O3 for code that I have measured and tested to show
    that it is actually faster than using -O2. (And even then, I'd probably
    specify individual extra optimisation flags as pragmas in the code that
    benefits from them.)
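
    As a sketch of the sort of code where -O3 can actually pay off - a simple,
    data-heavy loop that gcc can auto-vectorise, provided it is also told what
    processor it may target (the file and function names here are invented):

        /* Compare, for example:
               gcc -O2 -S kernel.c
               gcc -O3 -march=native -S kernel.c
           and look for SIMD instructions in the second listing - then measure,
           rather than assuming -O3 is a win. */
        void scale(float *restrict dst, const float *restrict src,
                   float k, int n)
        {
            for (int i = 0; i < n; ++i)
                dst[i] = k * src[i];
        }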


    Just for fun, I've tested build times for a project I am working on.
    The complete build times for the 140 object files (including dependency
    generation and linking, though compilation is the major effort) at
    different -O levels are:

    -O level    real time    user time
    ===================================
        0        20.8 s      1m 32 s
        1        23.1 s      1m 43 s
        2        24.3 s      1m 48 s
        3        25.5 s      1m 52 s


    So going from -O2 down to -O0 might save about 20% of the compilation
    time - giving rubbish code, pointless testing, impenetrable assembly
    listings, and vastly weaker static analysis.

    Now, there might be unusual cases where there are extreme timing
    differences, but I believe these figures are not atypical. If you have particularly large source code files - as you might, for generated
    source code, simulation code, and a few other niche uses - then
    inter-procedural optimisations with -O1 and above will definitely slow
    you down. In such cases, I'd add pragmas to disable the optimisations
    that scale badly with code size, and then use -O2.
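
    For reference, one gcc-specific way of doing that (a sketch - the pragmas
    are real gcc pragmas, the file is hypothetical): dial the optimisation
    level back for just the oversized, generated translation unit while the
    rest of the project stays at -O2.

        /* At the top of the huge generated file, e.g. generated_tables.c: */
        #pragma GCC push_options
        #pragma GCC optimize ("O1")   /* applies to the functions that follow */

        /* ... many thousands of lines of machine-generated functions ... */

        #pragma GCC pop_options       /* back to the command-line settings */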

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bart@21:1/5 to Ian Collins on Mon Sep 13 11:03:59 2021
    On 13/09/2021 01:51, Ian Collins wrote:
    On 12/09/2021 22:17, Bart wrote:

    Most of the stuff I do is not helped with unit tests.

    So it doesn't have any functions that can be tested?  That's a new one
    on me!

    You mean some tiny leaf function that has a well-defined task with a
    known range of inputs? That would be in the minority.

    The problems I have to deal with are several levels above that and
    involve the bigger picture. A flaw in an approach might be discovered
    that means changes to global data structures and new or rewritten functions.

    Also, if you're developing languages then you might have multiple sets
    of source code where the problem might lie.

    One extreme example a few years back involved six distinct sets of
    source! (Sources of my systems language; revised sources I was testing;
    sources of my C compiler that I was testing the revised compiler on; the sources of Seed7 I was testing that rebuilt C compiler on; a test
    program Bas7.sd7 (a Basic interpreter) that I ran rebuilt Seed7 on; and
    a test program test.bas to try out.)

    Maybe unit tests could have applied to one of those sources, such as
    that C compiler, which might have inherent bugs exposed by the revised implementation language.

    But how effective are those in such a product? gcc has been going since
    1987; there are still active bugs!

    Where there are things that can be tested by ticking off entries in
    a list, you find the real problems come up with combinations or contexts
    you haven't anticipated and that can't be enumerated.

    If you find a problem, add a test for it to prove that you have fixed it
    and to make sure it does not recur.

    My 'unit tests' for language products consist of running non-trivial applications to see if they still work.

    Or running multiple generations of a compiler.

    So while I know that my C compiler bcc.exe can build Tiny C into tcc.exe
    and the result can build a range of C programs, if I take that tcc.exe
    and build Tiny C with it, that new tcc2.exe doesn't work (error in the generated binaries).

    So where do you start with that?

    (The problem is still there. I don't have 100% confidence in bcc's code generator; so I will replace it at some point with a new one, and try
    again.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From HorseyWorsey@the_stables.com@21:1/5 to David Brown on Mon Sep 13 10:55:24 2021
    On Sun, 12 Sep 2021 18:45:07 +0200
    David Brown <david.brown@hesbynett.no> wrote:
    On 12/09/2021 18:13, HorseyWorsey@the_stables.com wrote:

    I don't yet know whether you are wilfully ignorant, or trolling.

    I'm rapidly getting the impression you and the others completely missed my
    original point despite stating it numerous times.

    You had no point - you failed to read or understand someone's post,
    thought it would make you look smart or cool to mock them, and have been
    digging yourself deeper in a hole ever since.

    If writing that load of BS makes you feel better then do carry on.

    As a side effect, you might have learned something - but I am sure you
    will deny that. Other people have, which is the beauty of Usenet - even
    the worst posters can sometimes inspire a thread that is helpful or
    interesting to others.

    I've learned that you move the goalposts when you're losing the argument.
    "Cmake is better than makefiles which are ancient and useless"
    "Oh ok, makefiles are fine. You can do everything with dependency files and
    don't need to write the makefile yourself"
    "Oh ok, you can't do everything with dependency files and do need to write some
    of the makefile yourself".

    Etc etc etc.

    Frankly I can't be bothered
    to continue with this.


    I suppose that is as close to an apology and admission of error as we
    will ever get.

    No apology and I'm not wrong.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From HorseyWorsey@the_stables.com@21:1/5 to Michael S on Mon Sep 13 10:52:31 2021
    On Sun, 12 Sep 2021 09:48:22 -0700 (PDT)
    Michael S <already5chosen@yahoo.com> wrote:
    On Sunday, September 12, 2021 at 7:13:27 PM UTC+3, Horsey...@the_stables.com wrote:
    On Sun, 12 Sep 2021 12:49:14 +0200
    David Brown <david...@hesbynett.no> wrote:
    On 12/09/2021 11:23, HorseyWorsey@the_stables.com wrote:
    Yes, the main makefile is written manually (or at least, that's what I
    Exactly.
    It is /not/ the /compiler's/ job to know this! It is the /build/ system
    that says what programs are run on which files in order to create all
    the files needed.
    Exactly.
    And that is supposed to be simpler than writing a Makefile yourself is it?
    Riiiiiight.

    Who do you think wrote the makefile? A friendly goblin? /I/ wrote the
    makefile. /I/ put rules in the makefile to run "gcc -MD" as and when
    needed in order to generate the dependencies. The point is that no one
    - not me, nor anyone else - needs to keep manually updating the makefile
    to track the simple dependencies that can be calculated automatically.
    "Simple". Exactly.
    I don't yet know whether you are wilfully ignorant, or trolling.
    I'm rapidly getting the impression you and the others completely missed my
    original point despite stating it numerous times. Frankly I can't be bothered

    to continue with this.


    Frankly, from the view of the discussion I see on Google Groups it's quite
    difficult to figure out what you are arguing for. Or against.

    Are you saying that all of us have to teach ourselves cmake, even despite the
    fact that writing makefiles by hand + utilizing .d files generated by compilers
    has served our needs rather well for the last 10-20-30 years?

    No.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From HorseyWorsey@the_stables.com@21:1/5 to Juha Nieminen on Mon Sep 13 10:56:44 2021
    On Mon, 13 Sep 2021 05:23:16 -0000 (UTC)
    Juha Nieminen <nospam@thanks.invalid> wrote:
    HorseyWorsey@the_stables.com wrote:
    Yes, it was sarcasm.

    Well, good luck trying to get any more answers from me. I don't usually
    placate assholes.

    I'm not interested in what you do with your donkey or its hole.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to HorseyWorsey@the_stables.com on Mon Sep 13 14:15:19 2021
    On 13/09/2021 12:55, HorseyWorsey@the_stables.com wrote:
    On Sun, 12 Sep 2021 18:45:07 +0200

    I've learned that you move the goalposts when you're losing the argument.
    "Cmake is better than makefiles which are ancient and useless"
    "Oh ok, makefiles are fine. You can do everything with dependency files and
    don't need to write the makefile yourself"
    "Oh ok, you can't do everything with dependency files and do need to write some
    of the makefile yourself".

    Etc etc etc.

    I'd love to see a reference where I mention CMake at all. It's not a
    tool I have ever used. As for other people's posts, can you give any
    reference to posts that suggest that "gcc -MD" is anything other than an
    aid to generating dependency information that can be used by a build
    system (make, ninja, presumably CMake, and no doubt many other systems)?

    No, I am confident that you cannot.

    You have misunderstood and misrepresented others all along. It's fine
    to misunderstand or misread - it happens to everyone at times. Mocking,
    lying, sarcasm to try to hide your mistakes when they are pointed out to
    you - that is much less fine.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to David Brown on Mon Sep 13 15:09:05 2021
    David Brown <david.brown@hesbynett.no> writes:
    On 12/09/2021 22:18, Vir Campestris wrote:


    This is especially fun when looking at post-mortem dump files of some
    code somebody else wrote.


    Debugging someone else's code is always horrible...

    I've been doing that for forty years now; it's more likely than
    not that any programmer in a large organization or working on a
    legacy codebase will have to debug someone else's code.

    Frankly, it's not that difficult - although it can, at times,
    be a time-consuming slog. There was a time when a custom X.509
    certificate management system (running on Solaris) would very
    occasionally SIGSEGV. This was a large codebase that included
    several cryptographic libraries (bsafe, libssl, etc.). I finally
    tracked it down to a linkage issue - different parts of the
    application had linked against different versions of libssl and
    would call a setup function from one and try to use the resulting
    data structure in another.



    While I've only ever once or twice found a genuine
    compiler-made-bad-code bug in my entire career UB resulting in different
    behaviour from bad source is much more common. And if you want to be
    sure about that you need to debug with the target optimisation level.


    I have hit a few bugs in compilers over the years.

    My first was back in the Portable C Compiler days - Motorola
    had ported it to the M88100 processor family and we were running
    cfront to compile C++ code and compiled the resulting C with
    the Motorola PCC port (for which we had the source).

    cfront generates expressions with up to a dozen or more comma
    operators - when PCC was processing the parse tree for those
    statements, it would run out of temporary registers and the codegen
    phase would fail. I implemented a Sethi-Ullman algorithm that,
    given a tree, would compute the number of temps required and added
    code to spill the registers to the stack if needed.
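
    For readers who have not met it: the Sethi-Ullman labelling is a small
    recursive calculation over the expression tree. A minimal sketch in C
    (the node layout is illustrative, not PCC's):

        struct node {
            struct node *left, *right;   /* NULL for a missing child */
        };

        /* Registers needed to evaluate the subtree without spilling:
           a leaf needs one; if both children need the same amount, one
           extra register is unavoidable; otherwise evaluate the costlier
           side first and reuse its registers for the cheaper side. */
        static int regs_needed(const struct node *n)
        {
            if (n->left == NULL && n->right == NULL)
                return 1;
            int l = n->left  ? regs_needed(n->left)  : 0;
            int r = n->right ? regs_needed(n->right) : 0;
            if (l == r)
                return l + 1;
            return l > r ? l : r;
        }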

    More recently, we've run into several GCC bugs in the ARM64 world;
    we have one of the GCC engineers on staff who generates
    the fixes and pushes them upstream. The latest issue was related
    to bitfields in structs when compiled for big-endian on ARM64.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Juha Nieminen on Mon Sep 13 14:55:22 2021
    Juha Nieminen <nospam@thanks.invalid> writes:
    HorseyWorsey@the_stables.com wrote:
    Yes, it was sarcasm.

    Well, good luck trying to get any more answers from me. I don't usually
    placate assholes.

    Come now, its nom-de-post is sufficient to make one wary of its goals.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to HorseyWorsey@the_stables.com on Mon Sep 13 12:07:27 2021
    On 9/13/21 11:32 AM, HorseyWorsey@the_stables.com wrote:
    On Mon, 13 Sep 2021 14:15:19 +0200
    David Brown <david.brown@hesbynett.no> wrote:
    ...
    I'd love to see a reference where I mention CMake at all. It's not a
    tool I have ever used. As for other people's posts, can you give any
    reference to posts that suggest that "gcc -MD" is anything other than an
    aid to generating dependency information that can be used by a build
    system (make, ninja, presumably CMake, and no doubt many other systems)?
    ...
    The usual response from people on this group: pretend something wasn't said
    when it becomes inconvenient.

    All you have to do to prove that it was said would be to cite the
    relevant message by author, date, and time, and to quote the relevant
    text. Of course, since you misunderstood that text the first time, when
    people point out that fact, it might seem to you they are merely
    engaging in a cover-up. There's not much that anyone else can do about
    such self-delusion.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From HorseyWorsey@the_stables.com@21:1/5 to David Brown on Mon Sep 13 15:32:59 2021
    On Mon, 13 Sep 2021 14:15:19 +0200
    David Brown <david.brown@hesbynett.no> wrote:
    On 13/09/2021 12:55, HorseyWorsey@the_stables.com wrote:
    On Sun, 12 Sep 2021 18:45:07 +0200

    I've learned that you move the goalposts when you're losing the argument.
    "Cmake is better than makefiles which are ancient and useless"
    "Oh ok, makefiles are fine. You can do everything with dependency files and
    don't need to write the makefile yourself"
    "Oh ok, you can't do everything with dependency files and do need to write some
    of the makefile yourself".

    Etc etc etc.

    I'd love to see a reference where I mention CMake at all. It's not a
    tool I have ever used. As for other people's posts, can you give any
    reference to posts that suggest that "gcc -MD" is anything other than an
    aid to generating dependency information that can be used by a build
    system (make, ninja, presumably CMake, and no doubt many other systems)?

    No, I am confident that you cannot.

    You have misunderstood and misrepresented others all along. It's fine
    to misunderstand or misread - it happens to everyone at times. Mocking,
    lying, sarcasm to try to hide your mistakes when they are pointed out to
    you - that is much less fine.

    The usual response from people on this group: pretend something wasn't said
    when it becomes inconvenient.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vir Campestris@21:1/5 to James Kuyper on Mon Sep 13 22:00:38 2021
    On 13/09/2021 04:12, James Kuyper wrote:
    <snip>
    Why in the world would you store them in git? They are
    compiler-generated files, not source files. Do you normally keep .o
    files in git? How about executables? You don't need to update them
    manually; if you set up your build system properly, they get updated automatically when needed.
    </snip>

    We absolutely keep executables in a controlled archive.

    The build system produces a system image; this is in turn made from
    various files, some of which are executable; we store the image, the executables, the symbol files that go with them, and put a label in the
    source control system to show where it was built from.

    The executables and symbols can save a _lot_ of time when looking at dumps.

    Andy

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Ian Collins on Mon Sep 13 21:42:10 2021
    Ian Collins <ian-news@hotmail.com> writes:
    On 14/09/2021 09:00, Vir Campestris wrote:
    On 13/09/2021 04:12, James Kuyper wrote:
    <snip>
    Why in the world would you store them in git? They are
    compiler-generated files, not source files. Do you normally keep .o
    files in git? How about executables? You don't need to update them
    manually; if you set up your build system properly, they get updated
    automatically when needed.
    </snip>

    We absolutely keep executables in a controlled archive.

    The build system produces a system image; this is in turn made from
    various files, some of which are executable; we store the image, the
    executables, the symbol files that go with them, and put a label in the
    source control system to show where it was build from.

    The executables and symbols can save a _lot_ of time when looking at dumps.

    But they can be recreated from the source and a given source control
    hash or tag?

    Maybe. If you have the exact same compiler, assembler and linker. Maybe.

    And not if the linker uses any form of address space randomization.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ian Collins@21:1/5 to Vir Campestris on Tue Sep 14 09:24:05 2021
    On 14/09/2021 09:00, Vir Campestris wrote:
    On 13/09/2021 04:12, James Kuyper wrote:
    <snip>
    Why in the world would you store them in git? They are
    compiler-generated files, not source files. Do you normally keep .o
    files in git? How about executables? You don't need to update them
    manually; if you set up your build system properly, they get updated
    automatically when needed.
    </snip>

    We absolutely keep executables in a controlled archive.

    The build system produces a system image; this is in turn made from
    various files, some of which are executable; we store the image, the executables, the symbol files that go with them, and put a label in the source control system to show where it was build from.

    The executables and symbols can save a _lot_ of time when looking at dumps.

    But they can be recreated from the source and a given source control
    hash or tag?

    --
    Ian

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ian Collins@21:1/5 to Scott Lurndal on Tue Sep 14 10:15:48 2021
    On 14/09/2021 09:42, Scott Lurndal wrote:
    Ian Collins <ian-news@hotmail.com> writes:
    On 14/09/2021 09:00, Vir Campestris wrote:
    On 13/09/2021 04:12, James Kuyper wrote:
    <snip>
    Why in the world would you store them in git? They are
    compiler-generated files, not source files. Do you normally keep .o
    files in git? How about executables? You don't need to update them
    manually; if you set up your build system properly, they get updated
    automatically when needed.
    </snip>

    We absolutely keep executables in a controlled archive.

    The build system produces a system image; this is in turn made from
    various files, some of which are executable; we store the image, the
    executables, the symbol files that go with them, and put a label in the
    source control system to show where it was build from.

    The executables and symbols can save a _lot_ of time when looking at dumps.
    But they can be recreated from the source and a given source control
    hash or tag?

    Maybe. If you have the exact same compiler, assembler and linker. Maybe.

    We build everything in virtual machines or docker, so the build
    environment for any build can be recovered. Our CI system generates
    ~500GB of built artifacts per day, which doesn't include intermediate
    items such as object files. Not something we'd want to keep around forever!

    And not if the linker uses any form of address space randomization.

    This hasn't bitten us. Our target is (literally!) a closed black box.

    --
    Ian

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ian Collins@21:1/5 to Bart on Tue Sep 14 12:05:56 2021
    On 13/09/2021 22:03, Bart wrote:
    On 13/09/2021 01:51, Ian Collins wrote:
    On 12/09/2021 22:17, Bart wrote:

    Most of the stuff I do is not helped with unit tests.

    So it doesn't have any functions that can be tested?  That's a new one
    on me!

    You mean some tiny leaf function that has a well-defined task with a
    known range of inputs? That would be in the minority.

    Doesn't every bit of well designed code have functions, leaf or
    otherwise, with well defined tasks?

    The problems I have to deal with are several levels above that and
    involve the bigger picture. A flaw in an approach might be discovered
    that means changes to global data structures and new or rewritten functions.

    Also, if you're developing languages then you might have multiple sets
    of source code where the problem might lie.

    So just use those for higher level testing.

    <snip>

    Maybe unit tests could have applied to one of those sources, such as
    that C compiler, which might have inherent bugs exposed by the revised implementation language.

    Unit tests apply to small "units" of code, such as classes or functions,
    not whole products.

    Where there are things that can be tested by ticking off entries in
    a list, you find the real problems come up with combinations or contexts
    you haven't anticipated and that can't be enumerated.

    If you find a problem, add a test for it to prove that you have fixed it
    and to make sure it does not recur.

    My 'unit tests' for language products consist of running non-trivial applications to see if they still work.

    Those aren't by any definition unit tests. They are what would normally
    be known as acceptance tests.
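
    To make the distinction concrete with a toy example (all names invented):
    a unit test exercises one small function in isolation, with no compiler
    runs or real applications involved.

        #include <assert.h>

        /* The "unit" under test - imagine a helper from a compiler's lexer. */
        static int is_ident_start(int c)
        {
            return (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || c == '_';
        }

        static void test_is_ident_start(void)
        {
            assert(is_ident_start('a'));
            assert(is_ident_start('Z'));
            assert(is_ident_start('_'));
            assert(!is_ident_start('1'));
            assert(!is_ident_start(' '));
        }

        int main(void)
        {
            test_is_ident_start();
            return 0;   /* building a whole compiler and running programs
                           through it is an acceptance test, not a unit test */
        }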

    Or running multiple generations of a compiler.

    So while I know that my C compiler bcc.exe can build Tiny C into tcc.exe
    and the result can build a range of C programs, if I take that tcc.exe
    and build Tiny C with it, that new tcc2.exe doesn't work (error in the generated binaries).

    So where do you start with that?

    By testing the logic in your code?

    --
    Ian.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Scott Lurndal on Tue Sep 14 09:12:34 2021
    On 13/09/2021 23:42, Scott Lurndal wrote:
    Ian Collins <ian-news@hotmail.com> writes:
    On 14/09/2021 09:00, Vir Campestris wrote:
    On 13/09/2021 04:12, James Kuyper wrote:
    <snip>
    Why in the world would you store them in git? They are
    compiler-generated files, not source files. Do you normally keep .o
    files in git? How about executables? You don't need to update them
    manually; if you set up your build system properly, they get updated
    automatically when needed.
    </snip>

    We absolutely keep executables in a controlled archive.

    The build system produces a system image; this is in turn made from
    various files, some of which are executable; we store the image, the
    executables, the symbol files that go with them, and put a label in the
    source control system to show where it was build from.

    The executables and symbols can save a _lot_ of time when looking at dumps.
    But they can be recreated from the source and a given source control
    hash or tag?

    Maybe. If you have the exact same compiler, assembler and linker. Maybe.

    And not if the linker uses any form of address space randomization.


    All these things vary by project. For the kinds of things I do, I make
    a point of archiving the toolchain too (though not in a git
    repository). Reproducible builds are important for me. Other kinds of
    projects have different setups and are perhaps built using a variety of
    different tools.

    So while /I/ keep track of released binaries - and when re-opening old projects, I have something to compare so that I can check the re-created
    build environment - it is not something everyone needs or does.

    Another advantage I see of binaries in the repositories is that it makes life
    easier for people involved in testing or other work - they can pull out
    the latest binaries without having to have all the toolchain themselves.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bart@21:1/5 to Ian Collins on Tue Sep 14 10:28:17 2021
    On 14/09/2021 01:05, Ian Collins wrote:
    On 13/09/2021 22:03, Bart wrote:

    So while I know that my C compiler bcc.exe can build Tiny C into tcc.exe
    and the result can build a range of C programs, if I take that tcc.exe
    and build Tiny C with it, that new tcc2.exe doesn't work (error in the
    generated binaries).

    So where do you start with that?

    By testing the logic in your code?

    Which bit of logic out of 10s of 1000s of lines? The actual bug might be
    in bcc, or maybe in mm.exe which generated the code of bcc.exe, or it
    might be a latent bug in tcc.exe (or maybe yet another quirk of C which
    I wasn't aware of), and I'm not going to start delving into /its/ 25K
    lines of C code, because the next program might have 250K lines or 2.5M.

    I get the impression from you that, with a product like a compiler, if
    it passes all its unit tests, then it is unnecessary to test it further
    with any actual applications! Just ship it immediately.

    In actuality, you will see new bugs you didn't anticipate. The bug may
    only manifest itself in a second or subsequent generation. Or the
    application built with your compiler may only go wrong with certain inputs.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Scott Lurndal on Tue Sep 14 17:07:38 2021
    On 14/09/2021 17:01, Scott Lurndal wrote:
    David Brown <david.brown@hesbynett.no> writes:
    On 13/09/2021 23:42, Scott Lurndal wrote:
    Ian Collins <ian-news@hotmail.com> writes:
    On 14/09/2021 09:00, Vir Campestris wrote:
    On 13/09/2021 04:12, James Kuyper wrote:
    <snip>
    Why in the world would you store them in git? They are
    compiler-generated files, not source files. Do you normally keep .o
    files in git? How about executables? You don't need to update them
    manually; if you set up your build system properly, they get updated
    automatically when needed.
    </snip>

    We absolutely keep executables in a controlled archive.

    The build system produces a system image; this is in turn made from
    various files, some of which are executable; we store the image, the
    executables, the symbol files that go with them, and put a label in the
    source control system to show where it was build from.

    The executables and symbols can save a _lot_ of time when looking at dumps.

    But they can be recreated from the source and a given source control
    hash or tag?

    Maybe. If you have the exact same compiler, assembler and linker. Maybe.
    And not if the linker uses any form of address space randomization.


    All these things vary by project. For the kinds of things I do, I make
    a point of archiving the toolchain tool (though not in a git
    repository). Reproducible builds are important for me. Other kinds of
    projects have different setups and are perhaps build using a variety of
    different tools.

    In our case, the debuginfo files (DWARF data extracted from the ELF
    prior to shipping to customers) are saved for each software 'drop'
    to a customer. Much easier to deal with than finding the particular version of the toolset used to build the product.


    Debug information is never reproducible - there are always paths,
    timestamps, etc., that differ. But none of our customers are interested
    in debug information or even stripped elf files - it's the .bin or .hex
    images that must match entirely.

    The toolset version is in the makefiles - when "/opt/gcc-arm-none-eabi-10-2020-q4-major/bin/" is explicit in the
    makefile, there's never any doubt about the toolchain.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to David Brown on Tue Sep 14 15:01:14 2021
    David Brown <david.brown@hesbynett.no> writes:
    On 13/09/2021 23:42, Scott Lurndal wrote:
    Ian Collins <ian-news@hotmail.com> writes:
    On 14/09/2021 09:00, Vir Campestris wrote:
    On 13/09/2021 04:12, James Kuyper wrote:
    <snip>
    Why in the world would you store them in git? They are
    compiler-generated files, not source files. Do you normally keep .o
    files in git? How about executables? You don't need to update them
    manually; if you set up your build system properly, they get updated
    automatically when needed.
    </snip>

    We absolutely keep executables in a controlled archive.

    The build system produces a system image; this is in turn made from
    various files, some of which are executable; we store the image, the
    executables, the symbol files that go with them, and put a label in the
    source control system to show where it was build from.

    The executables and symbols can save a _lot_ of time when looking at dumps.

    But they can be recreated from the source and a given source control
    hash or tag?

    Maybe. If you have the exact same compiler, assembler and linker. Maybe.
    And not if the linker uses any form of address space randomization.


    All these things vary by project. For the kinds of things I do, I make
    a point of archiving the toolchain tool (though not in a git
    repository). Reproducible builds are important for me. Other kinds of
    projects have different setups and are perhaps build using a variety of
    different tools.

    In our case, the debuginfo files (DWARF data extracted from the ELF
    prior to shipping to customers) are saved for each software 'drop'
    to a customer. Much easier to deal with than finding the particular
    version of the toolset used to build the product.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ian Collins@21:1/5 to Bart on Wed Sep 15 08:17:24 2021
    On 14/09/2021 21:28, Bart wrote:
    On 14/09/2021 01:05, Ian Collins wrote:
    On 13/09/2021 22:03, Bart wrote:

    So while I know that my C compiler bcc.exe can build Tiny C into tcc.exe
    and the result can build a range of C programs, if I take that tcc.exe
    and build Tiny C with it, that new tcc2.exe doesn't work (error in the
    generated binaries).

    So where do you start with that?

    By testing the logic in your code?

    Which bit of logic out of 10s of 1000s of lines?

    If you've never had tests, adding a full set after the fact will be too painful. What you can do is add tests for code you are about to change
    in order to ensure you understand what the code currently does and that
    your change hasn't broken anything.

    <snip>


    I get the impression from you that, with a product like a compiler, if
    it passes all its unit tests, then it is unnecessary to test it further
    with any actual applications! Just ship it immediately.

    Where did you get that strange idea? Not from me. There should always
    be layers of testing.

    --
    Ian.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vir Campestris@21:1/5 to Ian Collins on Tue Sep 14 21:42:26 2021
    On 13/09/2021 22:24, Ian Collins wrote:
    On 14/09/2021 09:00, Vir Campestris wrote:
    On 13/09/2021 04:12, James Kuyper wrote:
    <snip>
    Why in the world would you store them in git? They are
    compiler-generated files, not source files. Do you normally keep .o
    files in git? How about executables? You don't need to update them
    manually; if you set up your build system properly, they get updated
    automatically when needed.
    </snip>

    We absolutely keep executables in a controlled archive.

    The build system produces a system image; this is in turn made from
    various files, some of which are executable; we store the image, the
    executables, the symbol files that go with them, and put a label in the
    source control system to show where it was build from.

    The executables and symbols can save a _lot_ of time when looking at
    dumps.

    But they can be recreated from the source and a given source control
    hash or tag?

    In theory, yes.

    In practice I've never had to try. But you can think of the binaries as
    a cache.

    Andy

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Manfred@21:1/5 to Ian Collins on Wed Sep 15 19:20:41 2021
    On 9/11/2021 1:42 AM, Ian Collins wrote:
    On 11/09/2021 11:33, Bart wrote:
    On 10/09/2021 12:47, David Brown wrote:
    On 10/09/2021 11:10, Juha Nieminen wrote:

    However, gcc -O0 is quite useful in development. For starters, when you
    are interactively debugging (eg. with gdb, or any of the myriads of
    debuggers in different IDEs), you usually don't want things like your
    functions being inlined, loops unrolled, compile-time arithmetic
    (other than, of course, that of constexpr/consteval functions), etc.

    I always compile with debugging information, and I regularly use
    breakpoints, stepping, assembly-level debug, etc.  I /hate/ having to
    deal with unoptimised "gcc -O0" code - it is truly awful.  You have vast
    amounts of useless extra code that hides the real action.  In the
    assembly, the code to load and store variables from the stack, instead
    of registers, often outweighs the actual interesting stuff.
    Single-stepping through your important functions becomes far harder
    because all the little calls that should be inlined out of existence,
    become layers of calls that you might have to dig through.  Most of
    what you are looking for is drowned out in the noise.

    I agree with JN. With optimised code, what you have may have little
    relationship with the original source code. If you're trying to trace a
    logic problem, how do you map machine code to the corresponding source?

    If you are trying to trace a logic problem, unit tests are your friend.
     The debugger is the last resort...

    One additional consideration is that spotting a logic problem that
    involves local variables only is usually easy just looking at the source
    code.
    Harder bugs may involve data that is spread across different TUs, which,
    unlike local variables, has a low chance of being optimized away or
    into registers by the compiler - while on the other hand the nesting
    level of calls may be reduced, which makes debugging faster, as David
    Brown was describing.

    So, even if optimized code is theoretically harder to debug, practical circumstances may come to the rescue.
    But I agree that the debugger is the last resort.


    After continually bawling me out for putting too much emphasis on
    compilation speed, are you saying for the first time that it might be
    important after all?!

    If your project has thousands of source files and builds for several
    targets, then build times are obviously important.

    However you seem to be in favour of letting off the people who write the
    tools (because it is unheard of for them to create an inefficient
    product!), and just throwing more hardware - and money - at the problem.

    Build systems are a collection of tools, one of which is the compiler.
    The collection gives you the performance you want.


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)