• Makefile or IDE?

    From pozz@21:1/5 to All on Thu Dec 2 12:46:48 2021
    When I download C source code (for example for Linux), most of the time
    I need to use make (or autoconf).

    In the embedded world (not embedded Linux), we use MCUs produced by a
    silicon vendor that gives you at least a ready-to-use IDE (Eclipse based,
    Visual Studio based, or proprietary). Recently the vendor also gives you
    a full set of libraries, middleware, and tools to create a complex
    project from scratch in a couple of minutes, compatible with and
    buildable under its IDE.

    Ok, it's a good thing to start with minimal effort and run some tests
    on an EVB and new chips. However, I'm wondering whether good-quality
    commercial/industrial-grade software is maintained under the silicon
    vendor's IDE or with a Makefile (or similar).

    I'm asking this because I just started to add some unit tests (to run
    on the host machine) to one of my projects that is built under the IDE.
    Without a Makefile it is very difficult to add a series of tests: do I
    create a different IDE project for each module's tests?

    Moreover, the build process of a project maintained under an IDE is
    manual (click on a button). Most of the time there is no way to build
    from the command line, and when there is, it isn't the "normal" way.

    Many times in the past I have tried to write a Makefile for my projects,
    but honestly the make tool is very cryptic to me (tabs instead of
    spaces?). Dependencies are a mess.

    Do you use an IDE or a Makefile? Is there a recent and much better
    alternative to make (such as CMake or SCons)?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Edwards@21:1/5 to pozz on Thu Dec 2 15:22:36 2021
    On 2021-12-02, pozz <pozzugno@gmail.com> wrote:

    Ok, it's a good thing to start with minimal effort and run some tests
    on an EVB and new chips. However, I'm wondering whether good-quality
    commercial/industrial-grade software is maintained under the silicon
    vendor's IDE or with a Makefile (or similar).

    We always use makefiles. Some people do their editing and "make"ing in
    an IDE like eclipse. Others use emacs or whatever other environment
    they like.

    In my experience, software provided by silicon vendors has always
    been utter crap. That's been true for IDEs, libraries, header files,
    debuggers -- _everything_. And it's been true for 40 years.

    Recently I tried to use the Silicon vendor's IDE and demo
    project/libraries to build the simple app that prints "hello world" on
    a serial port. This is an application, IDE, and libraries the silicon
    vendor provided _with_the_evaluation_board_.

    Following the instructions, step-by-step, did allow me to build an
    executable. It was far too large for the MCU's flash. I threw out the
    silicon vendor's "drivers" (which were absurdly bloated) and C library
    (also huge). I wrote my own bare-metal drivers and substituted the
    printf() implementation I had been using for years. The executable
    size was reduced by over 75%.

    We've also tried to use non-silicon-vendor IDEs (eclipse), and using
    the IDE's concept of "projects" is always a complete mess. The
    "project" always ends up with lots of hard-coded paths and
    host-specific junk in it. This means you can't check the project into
    git/subversion, check it out on another machine, and build it without
    days of "fixing" the project to work on the new host.

    --
    Grant

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From pozz@21:1/5 to All on Thu Dec 2 17:34:46 2021
    On 02/12/2021 16:22, Grant Edwards wrote:
    On 2021-12-02, pozz <pozzugno@gmail.com> wrote:

    Ok, it's a good thing to start with minimal effort and run some tests
    on an EVB and new chips. However, I'm wondering whether good-quality
    commercial/industrial-grade software is maintained under the silicon
    vendor's IDE or with a Makefile (or similar).

    We always use makefiles. Some people do their editing and "make"ing in
    an IDE like eclipse. Others use emacs or whatever other environment
    they like.

    In my experience, software provided by silicon vendors has always
    been utter crap. That's been true for IDEs, libraries, header files,
    debuggers -- _everything_. And it's been true for 40 years.

    Recently I tried to use the Silicon vendor's IDE and demo
    project/libraries to build the simple app that prints "hello world" on
    a serial port. This is an application, IDE, and libraries the silicon
    vendor provided _with_the_evaluation_board_.

    Following the instructions, step-by-step, did allow me to build an
    executable. It was far too large for the MCU's flash. I threw out the
    silicon vendor's "drivers" (which were absurdly bloated) and C library
    (also huge). I wrote my own bare-metal drivers and substituted the
    printf() implementation I had been using for years. The executable
    size was reduced by over 75%.

    We've also tried to use non-silicon-vendor IDEs (eclipse), and using
    the IDE's concept of "projects" is always a complete mess. The
    "project" always ends up with lots of hard-coded paths and
    host-specific junk in it. This means you can't check the project into
    git/subversion, check it out on another machine, and build it without
    days of "fixing" the project to work on the new host.

    Thank you for sharing your experiences. Anyway, my post wasn't about
    the quality (size/speed efficiency...) of the source code provided by
    silicon vendors, but about the build process: IDE vs Makefile.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to pozz on Thu Dec 2 09:48:27 2021
    On 12/2/2021 4:46 AM, pozz wrote:
    When I download C source code (for example for Linux), most of the time I need
    to use make (or autoconf).

    In the embedded world (not embedded Linux), we use MCUs produced by a
    silicon vendor that gives you at least a ready-to-use IDE (Eclipse based,
    Visual Studio based, or proprietary). Recently the vendor also gives you
    a full set of libraries, middleware, and tools to create a complex
    project from scratch in a couple of minutes, compatible with and
    buildable under its IDE.

    Ok, it's a good thing to start with minimal effort and run some tests
    on an EVB and new chips. However, I'm wondering whether good-quality
    commercial/industrial-grade software is maintained under the silicon
    vendor's IDE or with a Makefile (or similar).

    I'm asking this because I just started to add some unit tests (to run
    on the host machine) to one of my projects that is built under the IDE.
    Without a Makefile it is very difficult to add a series of tests: do I
    create a different IDE project for each module's tests?

    Moreover, the build process of a project maintained under an IDE is
    manual (click on a button). Most of the time there is no way to build
    from the command line, and when there is, it isn't the "normal" way.

    Many times in the past I have tried to write a Makefile for my projects,
    but honestly the make tool is very cryptic to me (tabs instead of
    spaces?). Dependencies are a mess.

    Do you use an IDE or a Makefile? Is there a recent and much better
    alternative to make (such as CMake or SCons)?

    Makefiles give you more control over the build/test/document process.
    They're "code" of a different sort (and purpose).

    Otherwise, you are left at the mercy of whoever designed/implemented
    the tool(chain) you are using.

    [I have makefiles that ship my sources off to other machines and
    start builds there, to verify that my code at least compiles
    without warnings on different architectures -- before getting
    too "invested" in the current state of the code]

    There are tools that will help you create makefiles.
    The biggest problem (with inherited codebases) will be if you
    have to use a particular "flavor" of make -- or some oddball
    build *system*. (Converting from something like that to
    a more "standardized" build environment can often be a
    significant effort.) So, if the foreign codebase is something
    that is still actively being maintained, you may choose to adopt
    its oddball way of doing things to save yourself from having to
    keep "porting" the build mechanism(s) over to "your" way of
    doing things -- even if that is a friendlier environment!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Reuther@21:1/5 to All on Thu Dec 2 17:34:55 2021
    On 02.12.2021 12:46, pozz wrote:
    Moreover, the build process of a project maintained under an IDE is
    manual (click on a button). Most of the time there is no way to build
    from the command line, and when there is, it isn't the "normal" way.

    So far, all the IDEs I have encountered this century use some variation
    of make under the hood, and have a somewhat standard compiler (i.e.
    responds to `whatevercc -c file.c -o file.o`).

    Many times in the past I have tried to write a Makefile for my projects,
    but honestly the make tool is very cryptic to me (tabs instead of
    spaces?). Dependencies are a mess.

    Do you use an IDE or a Makefile? Is there a recent and much better
    alternative to make (such as CMake or SCons)?

    Think of make (or ninja) as some sort of (macro-) assembler language of
    build systems, and add a high-level language on top.
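
    As a sketch of that layering, assuming CMake as the language on top
    (the file names are borrowed from later in this thread):

        # CMakeLists.txt -- the high-level description; CMake emits the
        # low-level make or ninja "assembler" from it
        cmake_minimum_required(VERSION 3.13)
        project(demo C)
        add_executable(demo src/file1.c src/module1/file2.c src/module2/file3.c)

        # then, from a shell, generate and drive the low-level build:
        #   cmake -G Ninja -B build
        #   cmake --build build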

    CMake seems to be a popular (the most popular?) choice for that language
    on top, although reasons why it sucks are abundant; the most prominent
    for me being that the Makefiles it generates violate pretty much every
    best practice and therefore are slow. Other than that, it can build
    embedded software of course.

    You'll eventually need another meta-build system on top to build the
    projects that form your system image (busybox? openssl? dropbear?
    linux?); you're not going to port their build systems into yours.


    Stefan

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From pozz@21:1/5 to All on Fri Dec 3 11:49:17 2021
    On 02/12/2021 17:34, Stefan Reuther wrote:
    On 02.12.2021 12:46, pozz wrote:
    Moreover, the build process of a project maintained under an IDE is
    manual (click on a button). Most of the time there is no way to build
    from the command line, and when there is, it isn't the "normal" way.

    So far, all the IDEs I have encountered this century use some variation
    of make under the hood, and have a somewhat standard compiler (i.e.
    responds to `whatevercc -c file.c -o file.o`).

    Many times in the past I have tried to write a Makefile for my projects,
    but honestly the make tool is very cryptic to me (tabs instead of
    spaces?). Dependencies are a mess.

    Do you use an IDE or a Makefile? Is there a recent and much better
    alternative to make (such as CMake or SCons)?

    Think of make (or ninja) as some sort of (macro-) assembler language of
    build systems, and add a high-level language on top.

    CMake seems to be a popular (the most popular?) choice for that language
    on top, although reasons why it sucks are abundant; the most prominent
    for me being that the Makefiles it generates violate pretty much every
    best practice and therefore are slow. Other than that, it can build
    embedded software of course.

    You'll eventually need another meta-build system on top to build the
    projects that form your system image (busybox? openssl? dropbear?
    linux?); you're not going to port their build systems into yours.

    It's very difficult today to choose which build system to study and use.

    make, CMake/make, CMake/ninja, Meson, SCons, ...

    What do you suggest for embedded projects? Of course I use a
    cross-compiler for the target (mainly arm-gcc), but also a host native
    compiler (mingw on Windows, gcc on Linux) for testing and simulation.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tauno Voipio@21:1/5 to pozz on Fri Dec 3 13:15:58 2021
    On 2.12.21 18.34, pozz wrote:
    On 02/12/2021 16:22, Grant Edwards wrote:
    On 2021-12-02, pozz <pozzugno@gmail.com> wrote:

    Ok, it's a good thing to start with minimal effort and run some tests
    on an EVB and new chips. However, I'm wondering whether good-quality
    commercial/industrial-grade software is maintained under the silicon
    vendor's IDE or with a Makefile (or similar).

    We always use makefiles. Some people do their editing and "make"ing in
    an IDE like eclipse. Others use emacs or whatever other environment
    they like.

    In my experience, software provided by silicon vendors has always
    been utter crap. That's been true for IDEs, libraries, header files,
    debuggers -- _everything_. And it's been true for 40 years.

    Recently I tried to use the Silicon vendor's IDE and demo
    project/libraries to build the simple app that prints "hello world" on
    a serial port. This is an application, IDE, and libraries the silicon
    vendor provided _with_the_evaluation_board_.

    Following the instructions, step-by-step, did allow me to build an
    executable. It was far too large for the MCU's flash. I threw out the
    silicon vendor's "drivers" (which were absurdly bloated) and C library
    (also huge). I wrote my own bare-metal drivers and substituted the
    printf() implementation I had been using for years. The executable
    size was reduced by over 75%.

    We've also tried to use non-silicon-vendor IDEs (eclipse), and using
    the IDE's concept of "projects" is always a complete mess. The
    "project" always ends up with lots of hard-coded paths and
    host-specific junk in it. This means you can't check the project into
    git/subversion, check it out on another machine, and build it without
    days of "fixing" the project to work on the new host.

    Thank you for sharing your experiences. Anyway, my post wasn't about
    the quality (size/speed efficiency...) of the source code provided by
    silicon vendors, but about the build process: IDE vs Makefile.


    They are not complete opposites. For example, the Eclipse CDT uses make
    as the tool to perform the build. There is a difference between the
    user writing the makefiles and the IDE creating them. Most IDEs create
    makefiles for running the generated code on the same computer that
    houses the IDE, and it is more difficult to cross-compile for embedded
    targets.

    I agree on the silicon manufacturers' code, it should be jettisoned.

    I have abandoned the code from both Atmel and ST after fighting for some
    weeks to make it perform. Instead, the manufacturers should
    concentrate on documenting the hardware properly. I had to disassemble
    Atmel's start-up code to discover that the SAM4 processor's clock
    controls must be changed only one field at a time, even if the fields
    occupy the same register. If multiple fields were changed, the clock
    set-up never became ready. This is a serious problem, as ARM breaks the
    JTAG standard and requires the processor clock to be running in order
    to respond to JTAG. The JTAG standard assumes that the only clocking
    needed comes from the JTAG clock.

    --

    -TV

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Reuther@21:1/5 to All on Fri Dec 3 18:27:05 2021
    On 03.12.2021 11:49, pozz wrote:
    On 02/12/2021 17:34, Stefan Reuther wrote:
    Think of make (or ninja) as some sort of (macro-) assembler language of
    build systems, and add a high-level language on top.

    CMake seems to be a popular (the most popular?) choice for that language
    on top, although reasons why it sucks are abundant; the most prominent
    for me being that the Makefiles it generates violate pretty much every
    best practice and therefore are slow. Other than that, it can build
    embedded software of course.

    You'll eventually need another meta-build system on top to build the
    projects that form your system image (busybox? openssl? dropbear?
    linux?); you're not going to port their build systems into yours.

    It's very difficult today to choose which build system to study and use.

    make, CMake/make, CMake/ninja, Meson, SCons, ...

    What do you suggest for embedded projects? Of course I use a
    cross-compiler for the target (mainly arm-gcc), but also a host native
    compiler (mingw on Windows, gcc on Linux) for testing and simulation.

    Same here.

    At work, we use cmake/make for building (but if you have cmake, it
    doesn't matter whether there's make or ninja below it). That's pretty
    ok'ish for turning a bunch of source code files into an executable;
    probably not so good for doing something else (e.g. rendering images for documentation and your device's UI).

    Personally, I generate my Makefiles (or build.ninja files) with a
    homegrown script; again, based on the assumption that make is an
    assembler that needs a high-level language on top.

    However, building your code isn't the whole story. Unless you have a
    huge monorepo containing everything you ever did, you'll have to check
    out different things, and you will have dependencies between projects,
    some even conditional (maybe you don't want to build your unit test infrastructure when you make a release build for your target? maybe you
    want a different kernel version when building an image for a v2 board
    vs. a v1 board?).

    I use a tool called 'bob' <https://github.com/BobBuildTool/bob> as the
    meta-build system for that, at work and personally. It started out as
    an in-house tool, so it surely isn't industry standard; it needs some
    planning, and then gets the job done nicely. It invokes the original
    build process of the original subprojects, be it cmake-based or
    autotools-based. The people who build (desktop or embedded) Linux
    distributions all have some meta-build system to do things like that,
    and I would assume none of them is easy to set up, just because the
    problem domain is pretty complex.


    Stefan

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Theo@21:1/5 to pozz on Fri Dec 3 20:51:54 2021
    pozz <pozzugno@gmail.com> wrote:
    Do you use IDE or Makefile? Is there a recent and much better
    alternative to make (such as cmake or SCons)?

    ISTM that IDEs start off as cover for the woeful state of the command line environment on Windows.

    On Unix, when you want to target a different platform, all you need is a new compiler. Just grab arm-unknown-gnueabi-gcc and you're done. Maybe you
    need some libraries as well, but that's easy. Debugging tools are all there
    - based on gdb or one of numerous frontends. Then you use the environment
    you already have - your own editor, shell, scripting language, version
    control, etc are all there.

    On Windows[*], few people develop like that because cmd.exe is an awful
    shell to work in, all this C:\Program Files\blah tends to get in the way of Unixy build tools like make, and command line editors etc aren't very good. Windows also makes it awkward to mix and match GUI tools (eg separate
    editor, compiler, debugger GUI apps).

    So instead people expect an IDE with its own editor, that does everything in house and lives in a single maximised window, and orchestrates the build pipeline.

    But then it starts bloating - the debugger gets brought in, then the
    device programmer, then sometimes it starts growing its own idea of a
    version control client. And eventually you end up with something
    extremely complicated and somewhat flaky just to build a few kilobytes
    of code.

    Not to say that there aren't some useful features of IDEs - one thing
    is explicit library integration into the editor (so you get
    documentation and expansion as you type), another is special dialogues
    for configuration options in your particular chip (eg pin mapping or
    initial clock setup) rather than expecting you to configure all these
    things from code. The first is something that existing editors can do
    given sufficient information about the API, and the second is generally
    something you only do once per project.

    But for the basic edit-build-run-test cycle, the GUI seems mostly to get in
    the way.

    Theo

    [*] Powershell and WSL have been trying to improve this. But I've not seen
    any build flows that make much use of them, beyond simply taking Linux flows and running them in WSL.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Edwards@21:1/5 to Theo on Fri Dec 3 21:28:54 2021
    On 2021-12-03, Theo <theom+news@chiark.greenend.org.uk> wrote:

    [*] Powershell and WSL have been trying to improve this. But I've not seen any build flows that make much use of them, beyond simply taking Linux flows and running them in WSL.

    I always had good luck using Cygwin and gnu "make" on Windows to run
    various Win32 .exe command line compilers (e.g. IAR). I (thankfully)
    haven't needed to do that for several years now...

    --
    Grant

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From pozz@21:1/5 to All on Fri Dec 3 23:48:16 2021
    On 02/12/2021 12:46, pozz wrote:
    When I download C source code (for example for Linux), most of the time
    I need to use make (or autoconf).

    In the embedded world (not embedded Linux), we use MCUs produced by a
    silicon vendor that gives you at least a ready-to-use IDE (Eclipse based,
    Visual Studio based, or proprietary). Recently the vendor also gives you
    a full set of libraries, middleware, and tools to create a complex
    project from scratch in a couple of minutes, compatible with and
    buildable under its IDE.

    Ok, it's a good thing to start with minimal effort and run some tests
    on an EVB and new chips. However, I'm wondering whether good-quality
    commercial/industrial-grade software is maintained under the silicon
    vendor's IDE or with a Makefile (or similar).

    I'm asking this because I just started to add some unit tests (to run
    on the host machine) to one of my projects that is built under the IDE.
    Without a Makefile it is very difficult to add a series of tests: do I
    create a different IDE project for each module's tests?

    Moreover, the build process of a project maintained under an IDE is
    manual (click on a button). Most of the time there is no way to build
    from the command line, and when there is, it isn't the "normal" way.

    Many times in the past I have tried to write a Makefile for my projects,
    but honestly the make tool is very cryptic to me (tabs instead of
    spaces?). Dependencies are a mess.

    Do you use an IDE or a Makefile? Is there a recent and much better
    alternative to make (such as CMake or SCons)?


    It's absurd how difficult it is to create a Makefile for a simple
    project with the following tree:

    Makefile
    src/
      file1.c
      module1/
        file2.c
      module2/
        file3.c
    target1/
      Release/
        src/
          file1.o
          file1.d
          module1/
            file2.o
            file2.d
          module2/
            file3.o
            file3.d
      Debug/
        src/
          file1.o
          file1.d
          ...

    Just creating the directories for the output files (objects and
    dependencies) is a mess: .PRECIOUS rules, cheating make by adding a dot
    after the trailing slash, second expansion, order-only prerequisites!!!
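
    For reference, the order-only trick I mean looks like this (a sketch
    assuming GNU make, with OBJS holding the full object paths):

        # collect the output directories; $(sort ...) removes duplicates
        OBJDIRS := $(sort $(dir $(OBJS)))

        # order-only prerequisite: the directories must exist,
        # but their timestamps are ignored
        $(OBJS): | $(OBJDIRS)

        $(OBJDIRS):
                mkdir -p $@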

    Dependencies must be created as a side effect of compilation with
    esoteric -M options for gcc.

    Is cmake simpler to configure?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Reuther@21:1/5 to All on Sat Dec 4 10:31:23 2021
    On 03.12.2021 23:48, pozz wrote:
    It's absurd how difficult it is to create a Makefile for a simple
    project with the following tree:

    Makefile
    src/
      file1.c
      module1/
        file2.c
      module2/
        file3.c
    target1/
      Release/
    [...]
      Debug/
        src/
          file1.o
          file1.d
          ...

    Just creating the directories for the output files (objects and
    dependencies) is a mess: .PRECIOUS rules, cheating make by adding a dot
    after the trailing slash, second expansion, order-only prerequisites!!!

    Hypothesis: a single Makefile that does all this is not a good idea.
    Better: make a single Makefile that turns your source code into one
    instance of object code, and give it some configuration options that say whether you want target1/Release, target2/Release, or host/Debug.

    I'm not sure what you need order-only dependencies for. For a project
    like this, with GNU make I'd most likely just do something like

    OBJ = file1.o module1/file2.o module2/file3.o
    main: $(OBJ)
            $(CC) -o $@ $(OBJ)
    $(OBJ): %.o: $(SRCDIR)/%.c
            mkdir $(dir $@)
            $(CC) $(CFLAGS) -c $< -o $@

    Dependencies must be created as a side effect of compilation with
    esoteric -M options for gcc.

    It's not too bad with sufficiently current versions.

    CFLAGS += -MMD -MP
    -include $(OBJ:.o=.d)

    Is cmake simpler to configure?

    CMake does one-configuration-per-invocation type builds like sketched
    above, i.e. to build target1/Release and target1/Debug, you invoke CMake
    on two different workspaces, once with -DCMAKE_BUILD_TYPE=Release and
    once with -DCMAKE_BUILD_TYPE=Debug.
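
    Sketched as shell commands (the toolchain file name is hypothetical):

        # one build tree per configuration; each keeps its own cache
        cmake -S . -B build/target1-release \
              -DCMAKE_TOOLCHAIN_FILE=target1.cmake -DCMAKE_BUILD_TYPE=Release
        cmake -S . -B build/target1-debug \
              -DCMAKE_TOOLCHAIN_FILE=target1.cmake -DCMAKE_BUILD_TYPE=Debug
        cmake --build build/target1-release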


    Stefan

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From pozz@21:1/5 to All on Sat Dec 4 16:23:41 2021
    On 04/12/2021 10:31, Stefan Reuther wrote:
    On 03.12.2021 23:48, pozz wrote:
    It's absurd how difficult it is to create a Makefile for a simple
    project with the following tree:

    Makefile
    src/
      file1.c
      module1/
        file2.c
      module2/
        file3.c
    target1/
      Release/
    [...]
      Debug/
        src/
          file1.o
          file1.d
          ...

    Just creating the directories for the output files (objects and
    dependencies) is a mess: .PRECIOUS rules, cheating make by adding a dot
    after the trailing slash, second expansion, order-only prerequisites!!!

    Hypothesis: a single Makefile that does all this is not a good idea.
    Better: make a single Makefile that turns your source code into one
    instance of object code, and give it some configuration options that say whether you want target1/Release, target2/Release, or host/Debug.

    Oh yes, I'm using three make variables:

    CONF=rls|dbg
    TARGET=target1|target2

    I also have

    MODEL=model1|model2

    because the same source code can be compiled to produce firmware for
    two types of products.

    Anyway, even using these three variables, the Makefile is difficult to
    write and understand (at least for me).


    I'm not sure what you need order-only dependencies for. For a project
    like this, with GNU make I'd most likely just do something like

    OBJ = file1.o module1/file2.o module2/file3.o
    main: $(OBJ)
            $(CC) -o $@ $(OBJ)
    $(OBJ): %.o: $(SRCDIR)/%.c
            mkdir $(dir $@)
            $(CC) $(CFLAGS) -c $< -o $@

    This is suboptimal. Every time an object file is created (because it is
    not present or because its prerequisites aren't satisfied), the mkdir
    command is executed, even if $(dir $@) already exists.

    A better approach is to use a dedicated rule for directories, but it's
    very complex and tricky[1].

    I think your approach is better, only because it is much more
    understandable, not because it is more efficient.


    Dependencies must be created as a side effect of compilation with
    esoteric -M options for gcc.

    It's not too bad with sufficiently current versions.

    CFLAGS += -MMD -MP
    -include $(OBJ:.o=.d)

    Are you sure you don't need -MT too, to specify exactly the target rule?


    Is cmake simpler to configure?

    CMake does one-configuration-per-invocation type builds like sketched
    above, i.e. to build target1/Release and target1/Debug, you invoke CMake
    on two different workspaces, once with -DCMAKE_BUILD_TYPE=Release and
    once with -DCMAKE_BUILD_TYPE=Debug.

    Yes, I was asking if the configuration file of CMake is simpler to write compared to a Makefile.


    [1] https://ismail.badawi.io/blog/automatic-directory-creation-in-make/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to pozz on Sat Dec 4 17:41:41 2021
    On 04/12/2021 16:23, pozz wrote:
    On 04/12/2021 10:31, Stefan Reuther wrote:

    I'm not sure what you need order-only dependencies for. For a project
    like this, with GNU make I'd most likely just do something like

         OBJ = file1.o module1/file2.o module2/file3.o
         main: $(OBJ)
              $(CC) -o $@ $(OBJ)
         $(OBJ): %.o: $(SRCDIR)/%.c
              mkdir $(dir $@)
              $(CC) $(CFLAGS) -c $< -o $@


    I almost never use makefiles that have the object files (or source
    files, or other files) specified explicitly.

    CFILES := $(foreach dir,$(ALLSOURCEDIRS),$(wildcard $(dir)/*.c))
    CXXFILES := $(foreach dir,$(ALLSOURCEDIRS),$(wildcard $(dir)/*.cpp))

    OBJSsrc := $(CFILES:.c=.o) $(CXXFILES:.cpp=.o)
    OBJS := $(addprefix $(OBJDIR), $(patsubst ../%,%,$(OBJSsrc)))

    If there is a C or C++ file in the source tree, it is part of the
    project. Combined with automatic dependency resolution (for which I use
    gcc with -M* flags) this means that the make for a project adapts
    automatically whenever you add new source or header files, or change the
    ones that are there.

    This is suboptimal. Every time an object file is created (because it is
    not present or because its prerequisites aren't satisfied), the mkdir
    command is executed, even if $(dir $@) already exists.

    Use existence-only dependencies:

    target/%.o : %.c | target
            $(CC) $(CFLAGS) -c $< -o $@

    target :
            mkdir -p target


    When you have a dependency given after a |, gnu make will ensure that it
    exists but does not care about its timestamp. So here it will check if
    the target directory is there before creating target/%.o, and if not it
    will make it. It probably doesn't matter much for directories, but it
    can be useful in some cases to avoid extra work.

    And use "mkdir -p" to make a directory including any other parts of the
    path needed, and to avoid an error if the directory already exists.


    A better approach is to use a dedicated rule for directories, but it's
    very complex and tricky[1].

    The reference you gave is okay too. Some aspects of advanced makefiles
    /are/ complex and tricky, and can be hard to debug (look out for mixes
    of spaces instead of tabs at the start of lines!) But once you've got
    them in place, you can re-use them in other projects. And you can copy examples like the reference you gave, rather than figuring it out yourself.


    I think your approach is better, only because it is much more
    understandable, not because it is more efficient.


    My version is - IMHO - understandable /and/ efficient.


    Dependencies must be created as a side effect of compilation with
    esoteric -M options for gcc.

    It's not too bad with sufficiently current versions.

         CFLAGS += -MMD -MP
         -include $(OBJ:.o=.d)

    Are you sure you don't need -MT too, to specify exactly the target rule?


    The exact choice of -M flags depends on details of your setup. I prefer
    to have the dependency creation done as a separate step from the
    compilation - it's not strictly necessary, but I have found it neater.
    However, I use two -MT flags per dependency file. One makes a rule for
    the file.o dependency, the other is for the file.d dependency. That
    way, make knows when it has to re-build the dependency file.
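
    A sketch of that separate dependency step, assuming GNU make and gcc
    (OBJDIR and OBJS are placeholder names):

        # generate foo.d next to foo.o; the two -MT flags make the emitted
        # rule name both the object and the .d file as its targets, so the
        # .d file itself is rebuilt whenever the sources change
        $(OBJDIR)/%.d: %.c
                $(CC) $(CFLAGS) -MM -MP -MT $(OBJDIR)/$*.o -MT $@ -MF $@ $<

        -include $(OBJS:.o=.d)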


    Is cmake simpler to configure?

    CMake does one-configuration-per-invocation type builds like sketched
    above, i.e. to build target1/Release and target1/Debug, you invoke CMake
    on two different workspaces, once with -DCMAKE_BUILD_TYPE=Release and
    once with -DCMAKE_BUILD_TYPE=Debug.

    Yes, I was asking if the configuration file of CMake is simpler to write compared to a Makefile.


    I've only briefly looked at CMake. It always looked a bit limited to me
    - sometimes I have a variety of extra programs or steps to run (like a
    Python script to pre-process files and generate extra C or header files,
    or extra post-processing steps). I also often need different compiler
    flags for different parts of a project. Perhaps it would work for what
    I need and I just haven't read enough.
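
    From a quick look the hooks do seem to exist - add_custom_command for
    generator steps, and per-source-file flags; a sketch (the script name
    is made up):

        # run a generator script whenever its inputs change; the generated
        # C file is then compiled like any other source
        add_custom_command(
          OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/tables.c
          COMMAND python3 ${CMAKE_CURRENT_SOURCE_DIR}/gen_tables.py
                  ${CMAKE_CURRENT_BINARY_DIR}/tables.c
          DEPENDS gen_tables.py)
        add_executable(app main.c ${CMAKE_CURRENT_BINARY_DIR}/tables.c)

        # different compiler flags for one part of the project:
        set_source_files_properties(main.c PROPERTIES COMPILE_OPTIONS "-Os")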


    [1] https://ismail.badawi.io/blog/automatic-directory-creation-in-make/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tauno Voipio@21:1/5 to David Brown on Sat Dec 4 18:49:37 2021
    On 4.12.21 18.41, David Brown wrote:
    On 04/12/2021 16:23, pozz wrote:
    On 04/12/2021 10:31, Stefan Reuther wrote:

    I'm not sure what you need order-only dependencies for. For a project
    like this, with GNU make I'd most likely just do something like

         OBJ = file1.o module1/file2.o module2/file3.o
         main: $(OBJ)
              $(CC) -o $@ $(OBJ)
         $(OBJ): %.o: $(SRCDIR)/%.c
              mkdir $(dir $@)
              $(CC) $(CFLAGS) -c $< -o $@


    I almost never use makefiles that have the object files (or source
    files, or other files) specified explicitly.

    CFILES := $(foreach dir,$(ALLSOURCEDIRS),$(wildcard $(dir)/*.c))
    CXXFILES := $(foreach dir,$(ALLSOURCEDIRS),$(wildcard $(dir)/*.cpp))

    OBJSsrc := $(CFILES:.c=.o) $(CXXFILES:.cpp=.o)
    OBJS := $(addprefix $(OBJDIR), $(patsubst ../%,%,$(OBJSsrc)))

    If there is a C or C++ file in the source tree, it is part of the
    project. Combined with automatic dependency resolution (for which I use
    gcc with -M* flags) this means that the make for a project adapts automatically whenever you add new source or header files, or change the
    ones that are there.

    This is suboptimal. Every time an object file is created (because it is
    not present or because its prerequisites aren't satisfied), the mkdir
    command is executed, even if $(dir $@) already exists.

    Use existence-only dependencies:

    target/%.o : %.c | target
            $(CC) $(CFLAGS) -c $< -o $@

    target :
            mkdir -p target


    When you have a dependency given after a |, gnu make will ensure that it exists but does not care about its timestamp. So here it will check if
    the target directory is there before creating target/%.o, and if not it
    will make it. It probably doesn't matter much for directories, but it
    can be useful in some cases to avoid extra work.

    And use "mkdir -p" to make a directory including any other parts of the
    path needed, and to avoid an error if the directory already exists.


    A better approach is to use a dedicated rule for directories, but it's
    very complex and tricky[1].

    The reference you gave is okay too. Some aspects of advanced makefiles
    /are/ complex and tricky, and can be hard to debug (look out for mixes
    of spaces instead of tabs at the start of lines!) But once you've got
    them in place, you can re-use them in other projects. And you can copy examples like the reference you gave, rather than figuring it out yourself.


    I think your approach is better, only because it is much more
    understandable, not because it is more efficient.


    My version is - IMHO - understandable /and/ efficient.


    Dependencies must be created as a side effect of compilation with
    esoteric -M options for gcc.

    It's not too bad with sufficiently current versions.

         CFLAGS += -MMD -MP
         -include $(OBJ:.o=.d)

    Are you sure you don't need -MT too, to specify exactly the target rule?


    The exact choice of -M flags depends on details of your setup. I prefer
    to have the dependency creation done as a separate step from the
    compilation - it's not strictly necessary, but I have found it neater. However, I use two -MT flags per dependency file. One makes a rule for
    the file.o dependency, the other is for the file.d dependency. That
    way, make knows when it has to re-build the dependency file.


    Is cmake simpler to configure?

    CMake does one-configuration-per-invocation type builds like sketched
    above, i.e. to build target1/Release and target1/Debug, you invoke CMake
    on two different workspaces, once with -DCMAKE_BUILD_TYPE=Release and
    once with -DCMAKE_BUILD_TYPE=Debug.

    Yes, I was asking if the configuration file of CMake is simpler to write
    compared to a Makefile.


    I've only briefly looked at CMake. It always looked a bit limited to me
    - sometimes I have a variety of extra programs or steps to run (like a
    Python script to pre-process files and generate extra C or header files,
    or extra post-processing steps). I also often need different compiler
    flags for different parts of a project. Perhaps it would work for what
    I need and I just haven't read enough.


    [1] https://ismail.badawi.io/blog/automatic-directory-creation-in-make/


    CMake is on a different level than make. CMake aims at the realm of
    autoconf, automake and friends. One of the supported back-ends for
    CMake is GNU make.

    --

    -TV

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Tauno Voipio on Sat Dec 4 18:23:40 2021
    On 04/12/2021 17:49, Tauno Voipio wrote:
    On 4.12.21 18.41, David Brown wrote:
    On 04/12/2021 16:23, pozz wrote:
    On 04/12/2021 10:31, Stefan Reuther wrote:

    Is cmake simpler to configure?

    CMake does one-configuration-per-invocation type builds like sketched
    above, i.e. to build target1/Release and target1/Debug, you invoke CMake
    on two different workspaces, once with -DCMAKE_BUILD_TYPE=Release and
    once with -DCMAKE_BUILD_TYPE=Debug.

    Yes, I was asking if the configuration file of CMake is simpler to
    write compared to a Makefile.


    I've only briefly looked at CMake.  It always looked a bit limited to me
    - sometimes I have a variety of extra programs or steps to run (like a
    Python script to pre-process files and generate extra C or header files,
    or extra post-processing steps).  I also often need different compiler
    flags for different parts of a project.  Perhaps it would work for what
    I need and I just haven't read enough.


    [1] https://ismail.badawi.io/blog/automatic-directory-creation-in-make/


    CMake is on a different level than make. CMake aims at the realm of
    autoconf, automake and friends. One of the supported back-ends for
    CMake is GNU make.


    Yes, I know. The question is, could I (or the OP, or others) use CMake
    to control their builds? It doesn't really matter if the output is a
    makefile, a ninja file, or whatever - it matters if it can do the job
    better (for some value of "better") than a hand-written makefile. I
    suspect that for projects that fit into the specific patterns it
    supports, it will be a good choice - for others, it will not.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From George Neuner@21:1/5 to stefan.news@arcor.de on Sat Dec 4 14:37:12 2021
    On Thu, 2 Dec 2021 17:34:55 +0100, Stefan Reuther
    <stefan.news@arcor.de> wrote:

    CMake seems to be a popular (the most popular?) choice for that language
    on top, although reasons why it sucks are abundant; the most prominent
    for me being that the Makefiles it generates violate pretty much every
    best practice and therefore are slow. Other than that, it can build
    embedded software of course.

    CMake is popular because it is cross-platform: with a bit of care,
    its makefiles (conditionally) can run on any platform that supports
    CMake.

    Cross-platform support can be a dealbreaker in the desktop/server world.

    YMMV,
    George

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From George Neuner@21:1/5 to invalid@invalid.invalid on Sat Dec 4 14:53:48 2021
    On Fri, 3 Dec 2021 21:28:54 -0000 (UTC), Grant Edwards <invalid@invalid.invalid> wrote:

    On 2021-12-03, Theo <theom+news@chiark.greenend.org.uk> wrote:

    [*] Powershell and WSL have been trying to improve this. But I've not
    seen any build flows that make much use of them, beyond simply taking
    Linux flows and running them in WSL.

    I always had good luck using Cygwin and gnu "make" on Windows to run
    various Win32 .exe command line compilers (e.g. IAR). I (thankfully)
    haven't needed to do that for several years now...

    The problem with Cygwin is it doesn't play well with native Windows
    GCC (MingW et al).

    Cygwin compilers produce executables that depend on the /enormous/
    Cygwin library. You can statically link the library or ship the DLL
    (or an installer that downloads it) with your program, but by doing so
    your program falls under the GPL - the terms of which are not acceptable
    to some developers.

    And the Cygwin environment is ... less than stable. Any update to
    Windows can break it.

    YMMV,
    George

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tauno Voipio@21:1/5 to David Brown on Sat Dec 4 21:25:54 2021
    On 4.12.21 19.23, David Brown wrote:
    On 04/12/2021 17:49, Tauno Voipio wrote:
    On 4.12.21 18.41, David Brown wrote:
    On 04/12/2021 16:23, pozz wrote:
    On 04/12/2021 10:31, Stefan Reuther wrote:

    Is cmake simpler to configure?

    CMake does one-configuration-per-invocation type builds like sketched
    above, i.e. to build target1/Release and target1/Debug, you invoke CMake
    on two different workspaces, once with -DCMAKE_BUILD_TYPE=Release and
    once with -DCMAKE_BUILD_TYPE=Debug.

    Yes, I was asking if the configuration file of CMake is simpler to
    write compared to a Makefile.


    I've only briefly looked at CMake. It always looked a bit limited to me
    - sometimes I have a variety of extra programs or steps to run (like a
    Python script to pre-process files and generate extra C or header files,
    or extra post-processing steps). I also often need different compiler
    flags for different parts of a project. Perhaps it would work for what
    I need and I just haven't read enough.


    [1] https://ismail.badawi.io/blog/automatic-directory-creation-in-make/


    CMake is on a different level than make. CMake aims at the realm of
    autoconf, automake and friends. One of the supported back-ends for
    CMake is GNU make.


    Yes, I know. The question is, could I (or the OP, or others) use CMake
    to control their builds? It doesn't really matter if the output is a makefile, a ninja file, or whatever - it matters if it can do the job
    better (for some value of "better") than a hand-written makefile. I
    suspect that for projects that fit into the specific patterns it
    supports, it will be a good choice - for others, it will not.


    I tried to use it for some raw-iron embedded programs. IMHO, CMake
    belongs more to the problem set than the solution set there. CMake is
    aimed at producing code for the system it runs on; cross-compilation
    creates problems.

    I succeeded in using CMake to cross-compile on PC Linux for Raspberry
    Pi OS Linux, but the compiler identification for a bare-metal target was
    not happy when the trial compilation could not link an executable using
    the Linux model for creating executables.
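
    The usual workaround (a sketch of a bare-metal toolchain file; the
    compiler name assumes arm-none-eabi-gcc) is to declare a generic target
    and have the try-compile build a static library, so no link is
    attempted:

        # toolchain-arm-baremetal.cmake (hypothetical file name)
        set(CMAKE_SYSTEM_NAME Generic)          # bare metal, no OS
        set(CMAKE_SYSTEM_PROCESSOR arm)
        set(CMAKE_C_COMPILER arm-none-eabi-gcc)
        # skip the link step during compiler identification
        set(CMAKE_TRY_COMPILE_TARGET_TYPE STATIC_LIBRARY)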

    --

    -TV

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to George Neuner on Sat Dec 4 22:17:57 2021
    On 04/12/2021 20:53, George Neuner wrote:
    On Fri, 3 Dec 2021 21:28:54 -0000 (UTC), Grant Edwards <invalid@invalid.invalid> wrote:

    On 2021-12-03, Theo <theom+news@chiark.greenend.org.uk> wrote:

    [*] Powershell and WSL have been trying to improve this. But I've not
    seen any build flows that make much use of them, beyond simply taking
    Linux flows and running them in WSL.

    I always had good luck using Cygwin and gnu "make" on Windows to run
    various Win32 .exe command line compilers (e.g. IAR). I (thankfully)
    haven't needed to do that for several years now...

    The problem with Cygwin is it doesn't play well with native Windows
    GCC (MingW et al).

    Cygwin compilers produce executables that depend on the /enormous/
    Cygwin library. You can statically link the library or ship the DLL
    (or an installer that downloads it) with your program, but by doing so
    your program falls under the GPL - the terms of which are not acceptable
    to some developers.

    And the Cygwin environment is ... less than stable. Any update to
    Windows can break it.


    I concur with that. Cygwin made sense long ago, but for the past couple
    of decades the mingw-based alternatives have been more appropriate for
    most uses of *nix stuff on Windows. In particular, Cygwin is a thick compatibility layer that has its own filesystem, process management, and
    other features to fill in the gaps where Windows doesn't fulfil the
    POSIX standards (or does so in a way that plays badly with the rest of Windows). Very often, the changes needed in open-source or
    *nix-heritage software to make them more Windows-friendly are small. A
    common example is changing old-style "fork + exec" paths to "spawn"
    calls which are more modern and more efficient (even on *nix). With
    such small changes, programs can be compiled on thin compatibility
    layers like mingw instead, with the result being a lot faster, smoother,
    better integrated with Windows, and without that special tier of
    DLL-hell reserved for cygwin1.dll and its friends.

    So the earliest gcc versions I built and used on Windows, gcc 2.95 for
    the m68k IIRC, had to be built with Cygwin. By the time I was using gcc
    4+, perhaps earlier, it was all mingw-based and I have rarely looked at
    Cygwin since.

    I strongly recommend msys2, with mingw-w64, as the way to handle *nix
    programs on Windows. You can install and use as much as you want - it's
    fine to take most simple programs and use them independently on other
    Windows systems with no or a minimum of DLLs. (If the program relies
    on many external files, it will need the msys2 file tree.) You can use
    msys2 bash if you like, or not if you don't like. The mingw-w64 gcc has
    a modern, complete and efficient C library instead of the horrendous
    MSVCRT dll. Most Windows ports of *nix software are made with either
    the older mingw or the newer mingw-w64.

    I can understand using Cygwin simply because you've always used Cygwin,
    or if you really need fuller POSIX compatibility. But these days, WSL
    is probably a better option if that's what you need.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Tauno Voipio on Sat Dec 4 22:20:41 2021
    On 04/12/2021 20:25, Tauno Voipio wrote:
    On 4.12.21 19.23, David Brown wrote:
    On 04/12/2021 17:49, Tauno Voipio wrote:
    On 4.12.21 18.41, David Brown wrote:
    On 04/12/2021 16:23, pozz wrote:
    On 04/12/2021 10:31, Stefan Reuther wrote:

    Is cmake simpler to configure?

    CMake does one-configuration-per-invocation type builds like sketched
    above, i.e. to build target1/Release and target1/Debug, you invoke CMake
    on two different workspaces, once with -DCMAKE_BUILD_TYPE=Release and
    once with -DCMAKE_BUILD_TYPE=Debug.

    Yes, I was asking if the configuration file of CMake is simpler to
    write compared to a Makefile.


    I've only briefly looked at CMake. It always looked a bit limited to me
    - sometimes I have a variety of extra programs or steps to run (like a
    Python script to pre-process files and generate extra C or header files,
    or extra post-processing steps). I also often need different compiler
    flags for different parts of a project. Perhaps it would work for what
    I need and I just haven't read enough.


    [1] https://ismail.badawi.io/blog/automatic-directory-creation-in-make/


    CMake is on a different level than make. CMake aims at the realm of
    autoconf, automake and friends. One of the supported back-ends for
    CMake is GNU make.


    Yes, I know.  The question is, could I (or the OP, or others) use CMake
    to control their builds?  It doesn't really matter if the output is a
    makefile, a ninja file, or whatever - it matters if it can do the job
    better (for some value of "better") than a hand-written makefile.  I
    suspect that for projects that fit into the specific patterns it
    supports, it will be a good choice - for others, it will not.


    I tried to use it for some raw-iron embedded programs. IMHO, CMake
    belongs more to the problem set than the solution set there. CMake is
    aimed at producing code for the system it runs on; cross-compilation
    creates problems.

    I succeeded in using CMake to cross-compile on PC Linux for Raspberry
    Pi OS Linux, but the compiler identification for a bare-metal target was
    not happy when the trial compilation could not link an executable using
    the Linux model for creating executables.


    That is kind of what I thought. CMake sounds like a good solution if
    you want to make a program that compiles on Linux with native gcc, and
    also with MSVC on Windows, and perhaps a few other native build
    combinations. But it is not really suited for microcontroller builds as
    far as I can see. (Again, I haven't tried it much, and don't want to do
    it injustice by being too categorical.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Reuther@21:1/5 to All on Sun Dec 5 11:11:36 2021
    On 04.12.2021 16:23, pozz wrote:
    On 04/12/2021 10:31, Stefan Reuther wrote:
    I'm not sure what you need order-only dependencies for. For a project
    like this, with GNU make I'd most likely just do something like

         OBJ = file1.o module1/file2.o module2/file3.o
         main: $(OBJ)
              $(CC) -o $@ $(OBJ)
         $(OBJ): %.o: $(SRCDIR)/%.c
              mkdir $(dir $@)
              $(CC) $(CFLAGS) -c $< -o $@

    This is suboptimal. Every time an object file is created (because it is
    not present or because its prerequisites aren't satisfied), the mkdir
    command is executed, even if $(dir $@) already exists.

    (did I really forget the '-p'?)

    The idea was that creating a directory and checking for its existence
    both require a path lookup, which is the expensive operation here.

    When generating the Makefile with a script, it's easy to sneak a 100%
    matching directory creation dependency into any rule that needs it

    foo/bar.o: bar.c foo/.mark
            ...
    foo/.mark:
            mkdir foo

    Dependencies must be created as a side effect of compilation with
    esoteric -M options for gcc.

    It's not too bad with sufficiently current versions.

         CFLAGS += -MMD -MP
         -include $(OBJ:.o=.d)

    Are you sure you don't need -MT too, to specify exactly the target rule?

    Documentation says you are right, but '-MMD -MP' works fine for me so far...


    Stefan

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Reuther@21:1/5 to All on Sun Dec 5 11:02:12 2021
    On 04.12.2021 22:17, David Brown wrote:
    On 04/12/2021 20:53, George Neuner wrote:
    On Fri, 3 Dec 2021 21:28:54 -0000 (UTC), Grant Edwards
    I always had good luck using Cygwin and gnu "make" on Windows to run
    various Win32 .exe command line compilers (e.g. IAR). I (thankfully)
    haven't needed to do that for several years now...

    The problem with Cygwin is it doesn't play well with native Windows
    GCC (MingW et al).
    [...]
    I concur with that. Cygwin made sense long ago, but for the past couple
    of decades the mingw-based alternatives have been more appropriate for
    most uses of *nix stuff on Windows. In particular, Cygwin is a thick compatibility layer that has its own filesystem, process management, and other features to fill in the gaps where Windows doesn't fulfil the
    POSIX standards (or does so in a way that plays badly with the rest of Windows).

    The problem is that both projects, Cygwin and MinGW/MSYS, provide much
    more than just a compiler, and in an incompatible way, which is probably incompatible with what your toolchain does, and incompatible with what
    Visual Studio does.

    "-Ic:\test" specifies one path name for Windows, but probably two for a toolchain with Unix heritage, where ":" is the separator, not a drive
    letter. Cygwin wants "-I/cygdrive/c" instead, (some versions of) MinGW
    want "-I/c". That, on the other hand, might be an option "-I" followed
    by an option "/c" for a toolchain with Windows heritage.

    The problem domain is complex, therefore solutions need to be complex.

    That aside, I found staying within one universe ("use all from Cygwin",
    "use all from MinGW") to work pretty well; when having to call into
    another universe (e.g. native Win32), be careful to not use, for
    example, any absolute paths.


    Stefan

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Stefan Reuther on Sun Dec 5 13:21:50 2021
    On 05/12/2021 11:11, Stefan Reuther wrote:
    On 04.12.2021 16:23, pozz wrote:
    On 04/12/2021 10:31, Stefan Reuther wrote:
    I'm not sure what you need order-only dependencies for. For a project
    like this, with GNU make I'd most likely just do something like

         OBJ = file1.o module1/file2.o module2/file3.o
         main: $(OBJ)
              $(CC) -o $@ $(OBJ)
         $(OBJ): %.o: $(SRCDIR)/%.c
              mkdir $(dir $@)
              $(CC) $(CFLAGS) -c $< -o $@

    This is suboptimal. Every time an object file is created (because it is
    not present or because its prerequisites aren't satisfied), the mkdir
    command is executed, even if $(dir $@) already exists.

    (did I really forget the '-p'?)

    The idea was that creating a directory and checking for its existence
    both require a path lookup, which is the expensive operation here.


    Checking for the path is not expensive - it is already necessary to have
    the path details read from the filesystem (and therefore cached, even on Windows) because you want to put a file in it. So it is free. "mkdir
    -p" is also very cheap - it only needs to do something if the path does
    not exist. (Of course, on Windows starting any process takes time and resources an order of magnitude or more greater than on *nix.) It is
    always nice to avoid unnecessary effort, as even small inefficiencies
    add up if there are enough of them. But there's no need to worry unduly
    about the small things.

    When generating the Makefile with a script, it's easy to sneak a 100% matching directory creation dependency into any rule that needs it

    foo/bar.o: bar.c foo/.mark
            ...
    foo/.mark:
            mkdir foo


    And it's easy to forget the "touch foo/.mark" command to make that work!

    But that is completely unnecessary - make is perfectly capable of
    working with a directory as a dependency and target (especially as an order-only dependency).

    Dependencies must be created as a side effect of compilation with
    esoteric -M options for gcc.

    It's not too bad with sufficiently current versions.

         CFLAGS += -MMD -MP
         -include $(OBJ:.o=.d)

    Are you sure you don't need -MT too, to specify exactly the target rule?

    Documentation says you are right, but '-MMD -MP' works fine for me so far...


    Stefan


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Stefan Reuther on Sun Dec 5 13:16:03 2021
    On 05/12/2021 11:02, Stefan Reuther wrote:
    On 04.12.2021 22:17, David Brown wrote:
    On 04/12/2021 20:53, George Neuner wrote:
    On Fri, 3 Dec 2021 21:28:54 -0000 (UTC), Grant Edwards
    I always had good luck using Cygwin and gnu "make" on Windows to run
    various Win32 .exe command line compilers (e.g. IAR). I (thankfully)
    haven't needed to do that for several years now...

    The problem with Cygwin is it doesn't play well with native Windows
    GCC (MingW et al).
    [...]
    I concur with that. Cygwin made sense long ago, but for the past couple
    of decades the mingw-based alternatives have been more appropriate for
    most uses of *nix stuff on Windows. In particular, Cygwin is a thick
    compatibility layer that has its own filesystem, process management, and
    other features to fill in the gaps where Windows doesn't fulfil the
    POSIX standards (or does so in a way that plays badly with the rest of
    Windows).

    The problem is that both projects, Cygwin and MinGW/MSYS, provide much
    more than just a compiler, and each does so in its own way - a way that
    is probably incompatible with what your toolchain does, and with what
    Visual Studio does.

    Neither Cygwin nor msys are compilers or toolchains. Nor is MSVS, for
    that matter. That would only be a problem if you misunderstood what
    they are.


    "-Ic:\test" specifies one path name for Windows, but probably two for a toolchain with Unix heritage, where ":" is the separator, not a drive
    letter. Cygwin wants "-I/cygdrive/c" instead, (some versions of) MinGW
    want "-I/c". That, on the other hand, might be an option "-I" followed
    by an option "/c" for a toolchain with Windows heritage.


    Drive letters on Windows have always been a PITA. Usually, IME, it is
    not a big issue for compilation - most of your include directories will
    be on the same drive you are working in (with "system" includes already
    handled by the compiler configuration). Use a makefile, make the base
    part a variable, then at most you only have to change one part. It's a
    good idea anyway to have things like base paths to includes as a
    variable in the makefile.
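    For example (a sketch; the variable name and path are only illustrative):

        # Keep the base path in one variable; GCC on Windows accepts forward
        # slashes, which sidesteps most drive-letter quoting trouble.
        SDK_ROOT ?= c:/tools/sdk
        CFLAGS   += -I$(SDK_ROOT)/include -I$(SDK_ROOT)/drivers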

    One of the differences between msys and cygwin is the way they handle
    paths - msys has a method that is simpler and closer to Windows, while
    cygwin is a bit more "alien" but supports more POSIX features (like
    links). In practice, with programs compiled for the "mingw" targets you
    can usually use Windows names and paths without further ado. On my
    Windows systems, I put msys2's "/usr/bin" directory on my normal PATH,
    and from the standard Windows command prompt I happily use make, grep,
    less, cp, ssh, and other *nix tools without problems or special
    consideration.


    The problem domain is complex, therefore solutions need to be complex.

    That aside, I found staying within one universe ("use all from Cygwin",
    "use all from MinGW") to work pretty well; when having to call into
    another universe (e.g. native Win32), be careful to not use, for
    example, any absolute paths.


    Stefan


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Edwards@21:1/5 to David Brown on Mon Dec 6 13:51:09 2021
    On 2021-12-04, David Brown <david.brown@hesbynett.no> wrote:

    I succeeded in using CMake for cross-compiling on PC Linux for Raspi OS
    Linux, but the compiler identification for a bare-metal target was not
    happy when the trial compilation could not link a run file using the
    Linux run file creation model.

    That is kind of what I thought. CMake sounds like a good solution if
    you want to make a program that compiles on Linux with native gcc, and
    also with MSVC on Windows, and perhaps a few other native build
    combinations. But it is not really suited for microcontroller builds as
    far as I can see. (Again, I haven't tried it much, and don't want to do
    it injustice by being too categorical.)

    I use CMake for cross-compilation for microcontroller stuff. I don't
    use it for my own code, but there are a few 3rd-party libraries that
    use it, and I don't have any problems configuring it to use a cross
    compiler.

    --
    Grant

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Edwards@21:1/5 to George Neuner on Mon Dec 6 13:52:57 2021
    On 2021-12-04, George Neuner <gneuner2@comcast.net> wrote:
    On Fri, 3 Dec 2021 21:28:54 -0000 (UTC), Grant Edwards
    <invalid@invalid.invalid> wrote:

    On 2021-12-03, Theo <theom+news@chiark.greenend.org.uk> wrote:

    [*] Powershell and WSL have been trying to improve this. But I've not
    seen any build flows that make much use of them, beyond simply taking
    Linux flows and running them in WSL.

    I always had good luck using Cygwin and gnu "make" on Windows to run
    various Win32 .exe command line compilers (e.g. IAR). I (thankfully)
    haven't needed to do that for several years now...

    The problem with Cygwin is it doesn't play well with native Windows
    GCC (MingW et al).

    It's always worked fine for me.

    Cygwin compilers produce executables that depend on the /enormous/
    Cygwin library.

    I wasn't talking about using Cygwin compilers. I was talking about
    using Cygwin to do cross-compilation using compilers like IAR.

    You can statically link the library or ship the DLL (or an installer
    that downloads it) with your program, but by doing so your program
    falls under the GPL - the terms of which are not acceptable to some
    developers.

    And the Cygwin environment is ... less than stable. Any update to
    Windows can break it.

    That's definitely true. :/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Edwards@21:1/5 to David Brown on Mon Dec 6 16:02:24 2021
    On 2021-12-06, David Brown <david.brown@hesbynett.no> wrote:
    On 06/12/2021 14:51, Grant Edwards wrote:

    I use CMake for cross-compilation for microcontroller stuff. I don't
    use it for my own code, but there are a few 3rd-party libraries that
    use it, and I don't have any problems configuring it to use a cross
    compiler.

    OK. As I said, I haven't looked in detail or tried much. Maybe I will,
    one day when I have time.

    I see no reason at all to use it for embedded code unless you want to
    use a large 3rd-party library that already uses it, and you want to use
    that library's existing cmake build process. For smaller libraries,
    it's probably easier to write a makefile from scratch.

    IMO, configuring stuff that uses cmake seems very obtuse and fragile,
    but that's probably because I don't use it much.

    --
    Grant

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Grant Edwards on Mon Dec 6 16:33:43 2021
    On 06/12/2021 14:51, Grant Edwards wrote:
    On 2021-12-04, David Brown <david.brown@hesbynett.no> wrote:

    I succeeded in using CMake for cross-compiling on PC Linux for Raspi OS
    Linux, but the compiler identification for a bare-metal target was not
    happy when the trial compilation could not link a run file using the
    Linux run file creation model.

    That is kind of what I thought. CMake sounds like a good solution if
    you want to make a program that compiles on Linux with native gcc, and
    also with MSVC on Windows, and perhaps a few other native build
    combinations. But it is not really suited for microcontroller builds as
    far as I can see. (Again, I haven't tried it much, and don't want to do
    it injustice by being too categorical.)

    I use CMake for cross-compilation for microcontroller stuff. I don't
    use it for my own code, but there are a few 3rd-party libraries that
    use it, and I don't have any problems configuring it to use a cross
    compiler.


    OK. As I said, I haven't looked in detail or tried much. Maybe I will,
    one day when I have time.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dave Nadler@21:1/5 to pozz on Mon Dec 6 15:45:56 2021
    On 12/2/2021 6:46 AM, pozz wrote:
    When I download C source code (for example for Linux), most of the time
    I need to use make (or autoconf).

    [...]

    Do you use IDE or Makefile? Is there a recent and much better
    alternative to make (such as cmake or SCons)?

    For my most recent projects, I'm using Eclipse and letting it generate
    the make files. I find it necessary to manually clean up the XML
    controlling Eclipse to ensure that there are no hard-coded paths and
    everything uses sane path variables (after starting with a
    vendor-tool-generated project).

    I have multiple projects in the workspace:
    1) target project uses the GCC ARM cross-compiler (debug and release
    targets), and
    2) for host builds of debug+test software, one or more minGW GCC projects

    It's not optimal, but it does work without too much grief. It does a
    poor job, though, of making it easy to understand and maintain the
    places where different compiler options are needed (fortunately there
    are not many).

    I read *the* book on CMake and my head hurts, plus the book is revised
    every three weeks as CMake adds or fixes numerous 'special' things.
    Haven't actually used it yet but might try (with Zephyr).

    My older big projects are primarily make (with dozens of targets
    including intermediate preprocess stuff), plus separate Visual Studio
    build for a simulator, an Eclipse build (auto-generated make) for one of
    the embedded components, and Linux Eclipse build (auto-generated make)
    for Linux versions of utilities. All this is painful to maintain and
    keep synchronized. Sure would like to see a better way to handle all the different targets and platforms (which CMake should help with but I'm
    really not sure how to wrangle the thing).

    Interesting discussion!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From pozz@21:1/5 to All on Thu Dec 9 10:43:08 2021
    Il 02/12/2021 12:46, pozz ha scritto:
    [...]
    Do you use IDE or Makefile? Is there a recent and much better
    alternative to make (such as cmake or SCons)?

    I reply to this post for a few questions on make.

    For embedded projects, we use at least one cross-compiler. I usually use
    two compilers: a cross-compiler for the embedded target and a native
    compiler for creating a "simulator" for the host, or for running some
    tests on the host.

    I'm thinking of using an environment variable to choose between the targets:

    make TARGET=embedded|host

    While the native compiler is usually already on the PATH (even if I
    prefer to avoid that), the cross-compiler usually is not. How do you
    solve this?

    I'm thinking of using environment variables again, setting them in a
    batch script path.bat that is machine-dependent (so it shouldn't be
    tracked by git).

    SET GNU_ARM=c:\nxp\MCUXpressoIDE_11.2.1_4149...
    SET MINGW_PATH=c:\mingw64

    In the makefile:

    ifeq ($(TARGET),embedded)
    CC := "$(GNU_ARM)/bin/arm-none-eabi-gcc"
    CPP...
    else ifeq ($(TARGET),host)
    CC := $(MINGW_PATH)/bin/gcc
    CPP...
    endif

    This way, I run path.bat only once on my Windows development machine,
    and then run make TARGET=... during development.
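    A possible refinement (a sketch, assuming GNU make and the variable
    names above): use ?= so the makefile still has sensible defaults when
    path.bat has not been run, and fail loudly on an unknown TARGET:

        GNU_ARM    ?= c:/nxp/gnu-arm      # hypothetical fallback locations
        MINGW_PATH ?= c:/mingw64
        TARGET     ?= embedded

        ifeq ($(TARGET),embedded)
        CC := "$(GNU_ARM)/bin/arm-none-eabi-gcc"
        else ifeq ($(TARGET),host)
        CC := $(MINGW_PATH)/bin/gcc
        else
        $(error TARGET must be 'embedded' or 'host')
        endif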


    Another issue is with the internal commands of cmd.exe. GNU make for
    Windows, ARM gcc, mingw and so on can all handle paths with Unix-like
    slashes, but Windows internal commands such as mkdir and del cannot.
    I think it's much better to use the Unix-like commands (mkdir, rm) that
    can be installed with coreutils[1] for Windows.

    So in path.bat I add the coreutils folder to PATH:

    SET COREUTILS_PATH=C:\TOOLS\COREUTILS

    and in Makefile:

    MKDIR := $(COREUTILS_PATH)/bin/mkdir
    RM := $(COREUTILS_PATH)/bin/rm


    Do you use better solutions?


    [1] http://gnuwin32.sourceforge.net/packages/coreutils.htm

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From pozz@21:1/5 to All on Thu Dec 9 11:54:36 2021
    Il 04/12/2021 17:41, David Brown ha scritto:
    [...]
    This is suboptimal. Every time an object file is created (because it is
    missing, or because it is older than its prerequisites), the mkdir
    command is executed, even if $(dir $@) already exists.

    Use existence-only dependencies:

    target/%.o : %.c | target
            $(CC) $(CFLAGS) -c $< -o $@

    target :
            mkdir -p target

    Do you replicate the source tree in the target directory for build?

    I'd prefer to have the same tree in source and build dirs:

    src/
        file1.c
        mod1/
            file1.c
        mod2/
            file1.c
    build/
        file1.o
        mod1/
            file1.o
        mod2/
            file1.o

    With your rules above, I don't think this can be done: target is only
    the main build directory, but I need to create the subdirectories too.

    I understand that for this I need to use $(@D) in the prerequisites, and
    that this can only be done with second expansion.

    .SECONDEXPANSION:

    target/%.o : %.c target/%.d | $$(@D)
            $(CC) $(CFLAGS) -c $< -o $@

    $(BUILD_DIRS):
            $(MKDIR) -p $@
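    For completeness, $(BUILD_DIRS) can be derived from the object list (a
    sketch; $(dir) leaves trailing slashes, which are stripped so the
    targets match what $(@D) expands to):

        OBJ        := build/file1.o build/mod1/file1.o build/mod2/file1.o
        BUILD_DIRS := $(patsubst %/,%,$(sort $(dir $(OBJ))))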

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Johann Klammer@21:1/5 to pozz on Fri Dec 10 09:30:46 2021
    On 12/02/2021 12:46 PM, pozz wrote:

    Do you use IDE or Makefile? Is there a recent and much better alternative to make (such as cmake or SCons)?

    Whatever will run on your box (usually that's make/automake and nothing else).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Hans-Bernhard Bröker@21:1/5 to All on Fri Dec 10 18:35:50 2021
    Am 09.12.2021 um 11:54 schrieb pozz:

    I'd prefer to have the same tree in source and build dirs:

    What on earth for?

    Subdirectories for sources are necessary to organize our work, because
    humans can't deal too well with folders filled with hundreds of files,
    and because we fare better with the project's top-down structure
    tangibly represented as a tree of subfolders.

    But let's face it: we very rarely even look at object files, much less
    work on them in any meaningful fashion. They just have to be somewhere,
    but it's no particular burden at all if they're all in a single folder,
    per primary build target. They're for the compiler and make alone to
    work on, not for humans. So they don't have to be organized for human consumption.

    That's why virtually all hand-written Makefiles I've ever seen, and a
    large portion of the auto-generated ones, too, keep all of a target's
    object, list and dependency files in a single folder. Mechanisms like
    VPATH exist for the express purpose of easing this approach, and the
    built-in rules and macros also largely rely on it.

    The major exception in this regard is CMake, which does indeed mirror
    the source tree layout --- but that's manageable for them only because
    their Makefiles, being fully machine-generated, can become almost
    arbitrarily complex, for no extra cost. Nobody in full possession of
    their mental capabilities would ever write Makefiles the way CMake does
    it, by hand.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to All on Fri Dec 10 19:44:58 2021
    On 10/12/2021 18:35, Hans-Bernhard Bröker wrote:
    Am 09.12.2021 um 11:54 schrieb pozz:

    I'd prefer to have the same tree in source and build dirs:

    What on earth for?

    Subdirectories for sources are necessary to organize our work, because
    humans can't deal too well with folders filled with hundreds of files,
    and because we fare better with the project's top-down structure
    tangibly represented as a tree of subfolders.

    But let's face it: we very rarely even look at object files, much less
    work on them in any meaningful fashion.  They just have to be somewhere,
    but it's no particular burden at all if they're all in a single folder,
    per primary build target.  They're for the compiler and make alone to
    work on, not for humans.  So they don't have to be organized for human consumption.

    That's why virtually all hand-written Makefiles I've ever seen, and a
    large portion of the auto-generated ones, too, keep all of a target's
    object, list and dependency files in a single folder.  Mechanisms like
    VPATH exist for the express purpose of easing this approach, and the
    built-in rules and macros also largely rely on it.

    The major exception in this regard is CMake, which does indeed mirror
    the source tree layout --- but that's manageable for them only because
    their Makefiles, being fully machine-generated, can become almost
    arbitrarily complex, for no extra cost.  Nobody in full possession of
    their mental capabilities would ever write Makefiles the way CMake does
    it, by hand.

    There are other automatic systems that mirror the structure of the
    source tree for object files, dependency files and list files (yes, some
    people still like these). Eclipse does it, for example, and therefore
    the majority of vendor-supplied toolkits since most are Eclipse based.
    (I don't know if NetBeans and Visual Studio / Visual Studio Code do so -
    these are the other two IDEs commonly used by manufacturer tools).

    The big advantage of having object directories that copy source
    directories is that it all works even if you have more than one file
    with the same name. Usually, of course, you want to avoid name
    conflicts - there are risks of other issues or complications such as
    header guard symbols that are not unique (they /can/ include directory information and not just the filename, but they don't always do so) and
    you have to be careful that you #include the files you meant. But with
    big projects containing SDK files, third-party libraries, RTOS's,
    network stacks, and perhaps files written by many people working
    directly on the project, conflicts happen. "timers.c" and "utils.c"
    sound great to start with, but there is a real possibility of more than
    one turning up in a project.

    It is not at all hard to make object files mirror the source tree, and
    it adds nothing to the build time. For large projects, it is clearly
    worth the effort. (For small projects, it is probably not necessary.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Reuther@21:1/5 to All on Sat Dec 11 10:01:34 2021
    Am 10.12.2021 um 18:35 schrieb Hans-Bernhard Bröker:
    Am 09.12.2021 um 11:54 schrieb pozz:
    I'd prefer to have the same tree in source and build dirs:

    What on earth for?

    Subdirectories for sources are necessary to organize our work, because
    humans can't deal too well with folders filled with hundreds of files,
    and because we fare better with the project's top-down structure
    tangibly represented as a tree of subfolders.

    But let's face it: we very rarely even look at object files, much less
    work on them in any meaningful fashion.  They just have to be somewhere,
    but it's no particular burden at all if they're all in a single folder,
    per primary build target.

    But sometimes, we do look at them. Especially in an embedded context.
    One example could be things like stack consumption analysis. Or to
    answer the question "how much code size do I pay for using this C++
    feature?". "Did the compiler correctly inline this function I expected
    it to inline?".

    And if the linker gives me a "duplicate definition" error, I prefer that
    it is located in 'editor.o', not '3d3901cdeade62df1565f9616e607f89.o'.

    The major exception in this regard is CMake, which does indeed mirror
    the source tree layout --- but that's manageable for them only because
    their Makefiles, being fully machine-generated, can become almost
    arbitrarily complex, for no extra cost.  Nobody in full possession of
    their mental capabilities would ever write Makefiles the way CMake does
    it, by hand.

    The main reason I'd never write Makefiles the way CMake does it is that
    CMake's makefiles are horribly inefficient...

    But otherwise, once you got infrastructure to place object files in SOME subdirectory in your build system, mirroring the source structure is
    easy and gives a usability win.


    Stefan

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to pozz on Sat Dec 11 17:06:37 2021
    On 11/12/2021 16:18, pozz wrote:


    Ok, it's not too hard (nothing is hard when you know how to do it), but
    it's not that simple either.


    Of course.

    And once you've got a makefile you like for one project, you copy it for
    the next. I don't think I have started writing a new makefile in 25 years!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From pozz@21:1/5 to All on Sat Dec 11 16:18:51 2021
    Il 10/12/2021 19:44, David Brown ha scritto:
    On 10/12/2021 18:35, Hans-Bernhard Bröker wrote:
    Am 09.12.2021 um 11:54 schrieb pozz:

    I'd prefer to have the same tree in source and build dirs:

    What on earth for?

    [...]

    There are other automatic systems that mirror the structure of the
    source tree for object files, dependency files and list files (yes, some people still like these). Eclipse does it, for example, and therefore
    the majority of vendor-supplied toolkits since most are Eclipse based.
    (I don't know if NetBeans and Visual Studio / Visual Studio Code do so - these are the other two IDE's commonly used by manufacturer tools).

    Atmel Studio (now Microchip Studio), which is based on Visual Studio,
    mirrors the source tree exactly in the build dir.


    The big advantage of having object directories that copy source
    directories is that it all works even if you have more than one file
    with the same name. Usually, of course, you want to avoid name
    conflicts - there are risks of other issues or complications such as
    header guard symbols that are not unique (they /can/ include directory information and not just the filename, but they don't always do so) and
    you have to be careful that you #include the files you meant. But with
    big projects containing SDK files, third-party libraries, RTOS's,
    network stacks, and perhaps files written by many people working
    directly on the project, conflicts happen. "timers.c" and "utils.c"
    sound great to start with, but there is a real possibility of more than
    one turning up in a project.

    Yes, these are the reasons why I'd like to put object files in
    subdirectories.


    It is not at all hard to make object files mirror the source tree, and
    it adds nothing to the build time. For large projects, it is clearly
    worth the effort. (For small projects, it is probably not necessary.)

    Ok, it's not too hard (nothing is hard when you know how to do it), but
    it's not that simple either.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Grant Edwards@21:1/5 to David Brown on Sat Dec 11 16:57:57 2021
    On 2021-12-11, David Brown <david.brown@hesbynett.no> wrote:
    On 11/12/2021 16:18, pozz wrote:

    Ok, it's not too hard (nothing is hard when you know how to do it), but
    it's not that simple either.

    Of course.

    And once you've got a makefile you like for one project, you copy it for
    the next. I don't think I have started writing a new makefile in 25 years!

    Too true.

    You don't write a Makefile from scratch any more than you sit down
    with some carbon, water, nitrogen, phosphorus and whatnot and make an
    apple tree.

    You look around and find an nice existing one that's closest to what
    you want, copy it, and start tweaking.

    --
    Grant

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Hans-Bernhard Bröker@21:1/5 to All on Sat Dec 11 18:53:34 2021
    Am 10.12.2021 um 19:44 schrieb David Brown:

    The big advantage of having object directories that copy source
    directories is that it all works even if you have more than one file
    with the same name.

    Setting aside the issue whether the build can actually handle that
    ("module names" in the code tend to only be based on the basename of the source, not its full path, so they would clash anyway), that should
    remain an exceptional mishap. I don't subscribe to the idea of making
    my everyday life harder to account for (usually) avoidable exceptions
    like that.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Hans-Bernhard Bröker@21:1/5 to All on Sat Dec 11 18:47:48 2021
    Am 11.12.2021 um 10:01 schrieb Stefan Reuther:
    Am 10.12.2021 um 18:35 schrieb Hans-Bernhard Bröker:

    But let's face it: we very rarely even look at object files, much less
    work on them in any meaningful fashion.  They just have to be somewhere,
    but it's no particular burden at all if they're all in a single folder,
    per primary build target.

    But sometimes, we do look at them. Especially in an embedded context.

    In my experience, looking at individual object files does not occur in
    embedded context any more often than in others.

    One example could be things like stack consumption analysis.

    That one's actually easier if you have the object files all in a single
    folder, as the tool will have to look at all of them anyway, so it helps
    if you can just pass it objdir/*.o.

    Or to
    answer the question "how much code size do I pay for using this C++ feature?". "Did the compiler correctly inline this function I expected
    it to inline?".

    Both of those are way easier to check in the debugger or in the mapfile,
    than by inspecting individual object files.


    And if the linker gives me a "duplicate definition" error, I prefer that
    it is located in 'editor.o', not '3d3901cdeade62df1565f9616e607f89.o'.

    Both are equally useless. You want to know which source file they're in,
    not which object files.

    Do you actually use a tool that obfuscates the .o file names like that?

    But otherwise, once you got infrastructure to place object files in SOME subdirectory in your build system, mirroring the source structure is
    easy and gives a usability win.

    I don't think you've actually mentioned a single one, so far. None of
    the things you mentioned had anything to do with _where_ the object
    files are.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Reuther@21:1/5 to All on Sun Dec 12 11:27:00 2021
    Am 11.12.2021 um 18:47 schrieb Hans-Bernhard Bröker:
    Am 11.12.2021 um 10:01 schrieb Stefan Reuther:
    Am 10.12.2021 um 18:35 schrieb Hans-Bernhard Bröker:
    But let's face it: we very rarely even look at object files, much less
    work on them in any meaningful fashion.  They just have to be somewhere,
    but it's no particular burden at all if they're all in a single folder,
    per primary build target.

    But sometimes, we do look at them. Especially in an embedded context.

    In my experience, looking at individual object files does not occur in embedded context any more often than in others.

    Neither in my experience - but that is because I look at individual
    object files even for desktop/server applications, and I don't expect
    that to be the rule :)

    Or to
    answer the question "how much code size do I pay for using this C++
    feature?". "Did the compiler correctly inline this function I expected
    it to inline?".

    Both of those are way easier to check in the debugger or in the mapfile,
    than by inspecting individual object files.

    For me, 'objdump -dr blah.o | less' or 'nm blah.o | awk ...' is the
    easiest way to answer such questions. The output of 'objdump | less' is
    much easier to handle than gdb's 'disas'. And how do you even get
    function sizes with a debugger?

    And if the linker gives me a "duplicate definition" error, I prefer that
    it is located in 'editor.o', not '3d3901cdeade62df1565f9616e607f89.o'.

    Both are equally useless. You want to know which source file they're in,
    not which object files.

    I want to know which translation unit they are in. It doesn't help to
    know that the duplicate definition comes from 'keys.inc' which is
    supposed to be included exactly once. I want to know which two
    translation units included it, and for that it helps to have the name of
    the translation unit - the initial *.c/cpp file - encoded in the object
    file name.

    Do you actually use a tool that obfuscates the .o file names like that?

    Encoding the command-line that generates a file (as a cryptographic
    hash) into the file name is a super-easy way to implement rebuild-on-rule-change. I use that for a number of temporary files.

    I do not use that for actual object files for the reasons given, but it
    would technically make sense.
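    A sketch of that stamp-file idea in GNU make (assuming a POSIX shell
    with md5sum on the PATH, and flags that contain no single quotes; all
    names are illustrative):

        COMPILE  = $(CC) $(CFLAGS) -c
        CMDHASH := $(shell printf '%s' '$(COMPILE)' | md5sum | cut -d' ' -f1)
        STAMP   := build/.cmd-$(CMDHASH)

        # If the compile command changes, the stamp name changes, and every
        # object that lists the stamp as a prerequisite is rebuilt.
        %.o: %.c $(STAMP)
                $(COMPILE) $< -o $@

        $(STAMP):
                mkdir -p $(dir $@)
                rm -f build/.cmd-*    # discard stamps of older rule versions
                touch $@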

    I don't think you've actually mentioned a single one, so far.  None of
    the things you mentioned had anything to do with _where_ the object
    files are.

    There are no hard technical reasons. It's all about usability, and
    that's about the things you actually do. If you got a GUI that takes you
    to the assembler code of a function with a right-click in the editor,
    you don't need 'objdump'. I don't have such a GUI and don't want it most
    of the time.


    Stefan

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to All on Sun Dec 12 14:15:08 2021
    On 11/12/2021 18:53, Hans-Bernhard Bröker wrote:
    Am 10.12.2021 um 19:44 schrieb David Brown:

    The big advantage of having object directories that copy source
    directories is that it all works even if you have more than one file
    with the same name. 

    Setting aside the issue whether the build can actually handle that
    ("module names" in the code tend to only be based on the basename of the source, not its full path, so they would clash anyway), that should
    remain an exceptional mishap.  I don't subscribe to the idea of making
    my everyday life harder to account for (usually) avoidable exceptions
    like that.


    Nor do I. But as I said, and as others know, supporting object files in
    a tree is not difficult in a makefile, and it is common practice for
    many build systems. I can't think of any that /don't/ support it (not
    that I claim to have used a sizeable proportion of build systems).

    If it is easy to avoid a particular class of problem, and have a nice,
    neat structure, then what's the problem with having object files in a tree?

    After all, the basic principle of an automatically maintained makefile
    (or other build system) is:

    1. Find all the source files - src/x/y/z.c - in whatever source paths
    you have specified.

    2. Determine all the object files you need by swapping ".c" for ".o",
    and changing the "src" directory for the "build" directory, giving you a
    list build/x/y/z.o.

    3. Figure out a set of dependency rules for these, either using
    something like "gcc -M...", or the lazy method of making all object
    files depend on all headers, or something inbetween.

    4. Make your binary file depend on all the build/x/y/z.o files.


    As I see it, it is simpler, clearer and more natural that the object
    files (and dependencies, lists files, etc.) follow the structure of the
    source files. I'd have to go out of my way to make a riskier system
    that put all the object files in one place.
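    As a concrete illustration of those four steps, a minimal sketch in GNU
    make (the src/build layout and the find invocation are assumptions):

        SRC_DIR   := src
        BUILD_DIR := build

        SRCS     := $(shell find $(SRC_DIR) -name '*.c')                 # step 1
        OBJS     := $(patsubst $(SRC_DIR)/%.c,$(BUILD_DIR)/%.o,$(SRCS))  # step 2
        OBJ_DIRS := $(patsubst %/,%,$(sort $(dir $(OBJS))))

        CFLAGS += -MMD -MP                                               # step 3
        -include $(OBJS:.o=.d)

        prog: $(OBJS)                                                    # step 4
                $(CC) -o $@ $(OBJS)

        # Mirror the tree: each object's directory is an order-only
        # prerequisite, expanded per target via second expansion.
        .SECONDEXPANSION:
        $(BUILD_DIR)/%.o: $(SRC_DIR)/%.c | $$(@D)
                $(CC) $(CFLAGS) -c $< -o $@

        $(OBJ_DIRS):
                mkdir -p $@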

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From chris@21:1/5 to Grant Edwards on Fri Dec 17 17:19:56 2021
    On 12/06/21 13:52, Grant Edwards wrote:
    On 2021-12-04, George Neuner<gneuner2@comcast.net> wrote:
    On Fri, 3 Dec 2021 21:28:54 -0000 (UTC), Grant Edwards
    <invalid@invalid.invalid> wrote:

    [...]

    The problem with Cygwin is it doesn't play well with native Windows
    GCC (MingW et al).

    It's always worked fine for me.

    Cygwin compilers produce executables that depend on the /enormous/
    Cygwin library.

    I wasn't talking about using Cygwin compilers. I was talking about
    using Cygwin to do cross-compilation using compilers like IAR.

    You can statically link the library or ship the DLL (or an installer
    that downloads it) with your program, but by doing so your program
    falls under the GPL - the terms of which are not acceptable to some
    developers.

    And the Cygwin environment is ... less than stable. Any update to
    Windows can break it.

    That's definitely true. :/


    I used Cygwin for years just to have access to the unix utils and X so
    I could run my favourite editor, nedit. I never ran compilers through
    it, but it was a hassle-free experience once set up. That was the 32-bit
    version, sadly no longer available...

    Chris

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From chris@21:1/5 to pozz on Fri Dec 17 17:27:57 2021
    On 12/02/21 11:46, pozz wrote:
    When I download C source code (for example for Linux), most of the time
    I need to use make (or autoconf).

    [...]

    Do you use IDE or Makefile? Is there a recent and much better
    alternative to make (such as cmake or SCons)?


    I have a standard Makefile template that gets edited for each new
    project or part thereof.

    IDE systems may have their attractions, but I usually don't like their
    editors, nor the plethora of config files. The more plain vanilla the
    better here, hence makefiles as the least-hassle and most productive
    route. You need full visibility from top to bottom, and some IDEs can
    be pretty opaque.

    Older versions of NetBeans looked interesting though...

    Chris

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From StateMachineCOM@21:1/5 to All on Mon Dec 20 17:18:30 2021
    Would anyone point me to a good Makefile template for building a simple embedded project with GNU-ARM?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From chris@21:1/5 to StateMachineCOM on Tue Dec 21 16:45:26 2021
    On 12/21/21 01:18, StateMachineCOM wrote:
    Would anyone point me to a good Makefile template for building a simple embedded project with GNU-ARM?

    I'm sure they must exist, but perhaps most people generate their own.
    Look at how others have done it, but the key is to keep it simple to
    start. Define a root path for sources and subdirs, and the same for
    tools and libraries. Then add some simple rules to relate the compiler
    to srcs and objects. You only need to understand the make basics to get
    started. The syntax is arcane and its error messages can seem opaque,
    but you just have to persevere until you have something working.

    Here's a snippet from a makefile here - nothing clever, but it tries to
    abstract as much as possible to aid reuse. I like to keep it neat and
    tidy, as this aids understanding, and you only need to do that once.

    # Local Project Directories

    ROOT = /export/nfs/sas8x600/system/projects/ntp-tests
    PRJROOT = $(ROOT)/ntp-client

    SRCDIR = $(PRJROOT)/src
    INCDIR = $(PRJROOT)/inc
    OBJDIR = $(PRJROOT)/obj
    BINDIR = $(PRJROOT)/bin
    LIBDIR = $(PRJROOT)/lib

    # Library & Test Rig

    PRJLIB = ntplib.a
    PRJBIN = ntpbin

    # Subsystem libraries and includes

    SYSINC = $(ROOT)/sysinc
    SYSLIB = $(ROOT)/syslib

    # Tools

    GCCROOT = /usr/local

    GCCINC = $(GCCROOT)/include
    GCCBIN = $(GCCROOT)/bin
    GCCLIB = $(GCCROOT)/lib
    GCCRTL = $(GCCLIB)/gcc/sparc-sun-solaris2.10/4.6.1

    GCCOBJS = $(GCCRTL)/crt1.o $(GCCRTL)/crti.o $(GCCRTL)/crtbegin.o $(GCCRTL)/crtend.o $(GCCRTL)/crtn.o

    CC = $(GCCBIN)/gcc
    CFLAGS = -c -O2 -ansi -Wall

    AS = as
    ASFLAGS =

    LD = ld
    LDFLAGS = -L/usr/local/lib/gcc/sparc-sun-solaris2.10/4.6.1 \
              -R/usr/local/lib -lc -lsocket -s

    AR = ar
    ARFLAGS = crvs

    INCLUDES = -I$(INCDIR) -I$(SYSINC)

    #
    # Dependency Search Paths
    #
    vpath %.h $(INCDIR)
    vpath %.c $(SRCDIR)
    vpath %.o $(OBJDIR)

    #
    # Rules
    #
    .SUFFIXES:
    .SUFFIXES: .c .o

    .c.o:
            $(CC) $(CFLAGS) $(INCLUDES) -c $(SRCDIR)/$< -o $(OBJDIR)/$@

    # Network Function Library
    #
    LIBINCS = network.h
    LIBSRCS = network.c
    LIBOBJS = network.o

    # Executable Test Harness

    SRCS = main.c network.c
    OBJS = main.o network.o

    # Default Target: Build Library and Test Harness
    #
    default: $(LIBOBJS) $(OBJS) $(SRCS)
            $(AR) $(ARFLAGS) $(LIBDIR)/$(PRJLIB) $(addprefix $(OBJDIR)/, $(LIBOBJS))
            $(LD) $(LDFLAGS) $(OBJDIR)/main.o $(GCCOBJS) $(LIBDIR)/$(PRJLIB) -o $(SRCDIR)/$(PRJBIN)
            cp $(LIBDIR)/$(PRJLIB) $(SYSLIB)/$(PRJLIB)
            cp $(addprefix $(INCDIR)/, $(LIBINCS)) $(SYSINC)

    # Start Over...
    #
    clean:
            rm -f $(BINDIR)/$(PRJBIN) $(OBJDIR)/*.o $(LIBDIR)/$(PRJLIB) $(SYSLIB)/$(PRJLIB) $(addprefix $(SYSINC)/, $(LIBINCS))


    This wasn't for an embedded project, but there's nothing clever or
    obscure in it, and it could probably be improved, so pull it to bits if
    you like...

    Chris

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jim Jackson@21:1/5 to StateMachineCOM on Tue Dec 21 17:40:06 2021
    On 2021-12-21, StateMachineCOM <statemachineguru@gmail.com> wrote:
    Would anyone point me to a good Makefile template for building a
    simple embedded project with GNU-ARM?

    I've used this one ...

    https://mithatkonar.com/wiki/doku.php/microcontrollers/avr_makefile_template

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From StateMachineCOM@21:1/5 to All on Tue Dec 21 10:55:26 2021
    Thanks a lot for the suggestions. I'll study them carefully,

    I'm already using a homegrown Makefile "template", such as this one:

    https://github.com/QuantumLeaps/qpc/blob/master/examples/arm-cm/blinky_ek-tm4c123gxl/qk/gnu/Makefile

    The Makefile supports multiple build configurations (Debug, Release, and
    "Spy" with software tracing), generation of dependencies, etc. It is
    pretty straightforward, with all source files, directories and libraries
    configurable. The Makefile uses VPATH to simplify the search for the
    sources. This really simplifies things, but requires unique file names
    for the sources.
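    The VPATH mechanism it relies on looks roughly like this (a sketch;
    the directory names are illustrative):

        VPATH = src src/drivers third_party   # searched in order for prerequisites
        OBJS  = main.o uart.o scheduler.o     # flat object list - hence the
                                              # need for unique source names

        %.o: %.c                              # make finds each .c via VPATH
                $(CC) $(CFLAGS) -c $< -o $@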

    I'm not sure if this Makefile looks "professional" enough to experts. Any constructive critique and suggestions for improvement will be welcome.

    Miro

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)