On 2021-12-02, pozz <pozzugno@gmail.com> wrote:
Ok, it's a good thing to start with a minimal effort and make some tests
on EVB and new chips. However I'm wondering if a good quality
commercial/industrial grade software is maintained under the IDE of the
silicon vendor or it is maintained with a Makefile (or similar).
We always use makefiles. Some people do their editing and "make"ing in
an IDE like eclipse. Others use emacs or whatever other environment
they like.
In my experience, software provided by silicon vendors has always
been utter crap. That's been true for IDEs, libraries, header files, debuggers -- _everything_. And it's been true for 40 years.
Recently I tried to use the Silicon vendor's IDE and demo
project/libraries to build the simple app that prints "hello world" on
a serial port. This is an application, IDE, and libraries the silicon
vendor provided _with_the_evaluation_board_.
Following the instructions, step-by-step, did allow me to build an executable. It was far too large for the MCU's flash. I threw out the
silicon vendor's "drivers" (which were absurdly bloated) and C library
(also huge). I wrote my own bare-metal drivers and substituted the
printf() implementation I had been using for years. The executable
size was reduced by over 75%.
We've also tried to use non-silicon-vendor IDEs (eclipse), and using
the IDE's concept of "projects" is always a complete mess. The
"project" always ends up with lot's of hard-coded paths and
host-specific junk in it. This means you can't check the project into git/subversion, check it out on another machine, and build it without
days of "fixing" the project to work on the new host.
When I download C source code (for example for Linux), most of the time I need
to use make (or autoconf).
In the embedded world (not embedded Linux), we use MCUs produced by a silicon vendor
that gives you at least a ready-to-use IDE (Eclipse based or Visual Studio based
or proprietary). Recently they give you a full set of libraries, middleware, and tools to create a complex project from scratch in a couple of minutes that is compatible with and buildable under their IDE.
Ok, it's a good thing to start with a minimal effort and make some tests on EVB
and new chips. However I'm wondering if a good quality commercial/industrial grade software is maintained under the IDE of the silicon vendor or it is maintained with a Makefile (or similar).
I'm asking this because I just started to add some unit tests (to run on the host machine) to one of my projects that is built under the IDE. Without a Makefile it is very difficult to add a series of tests: do I create a different IDE project for each module test?
Moreover, the build process of a project maintained under an IDE is manual (click on a button). Most of the time there isn't the possibility to build by a
command line and when it is possible, it isn't the "normal" way.
Many times in the past I tried to write a Makefile for my projects, but honestly the make tool is very cryptic to me (tabs instead of spaces?). Dependencies are a mess.
Do you use IDE or Makefile? Is there a recent and much better alternative to make (such as cmake or SCons)?
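[Editor's note: as a point of reference for how small make's core model really is, here is a minimal, hedged sketch; the file names are invented for illustration, and cp stands in for a compiler invocation. The one genuinely cryptic rule is that recipe lines must start with a literal tab.]

```shell
# Minimal make demo. The classic trap: recipe lines must begin with a
# TAB character, not spaces -- hence the \t escape in the printf below.
dir=$(mktemp -d) && cd "$dir"
printf 'hello\n' > input.txt
printf 'output.txt: input.txt\n\tcp $< $@\n' > Makefile
make        # first run: copies input.txt to output.txt
make        # second run: the target is up to date, nothing happens
```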
Am 02.12.2021 um 12:46 schrieb pozz:
Moreover, the build process of a project maintained under an IDE is
manual (click on a button). Most of the time there isn't the possibility
to build by a command line and when it is possible, it isn't the
"normal" way.
So far, all the IDEs I have encountered this century use some variation
of make under the hood, and have a somewhat standard compiler (i.e.
responds to `whatevercc -c file.c -o file.o`).
Many times in the past I tried to write a Makefile for my projects, but
honestly the make tool is very cryptic to me (tabs instead of spaces?).
Dependencies are a mess.
Do you use IDE or Makefile? Is there a recent and much better
alternative to make (such as cmake or SCons)?
Think of make (or ninja) as some sort of (macro-) assembler language of
build systems, and add a high-level language on top.
CMake seems to be a popular (the most popular?) choice for that language
on top, although reasons why it sucks are abundant; the most prominent
for me being that the Makefiles it generates violate pretty much every
best practice and therefore are slow. Other than that, it can build
embedded software of course.
You'll eventually need another meta-build system on top to build the
projects that form your system image (busybox? openssl? dropbear?
linux?), you'll not port their build systems into yours.
Il 02/12/2021 16:22, Grant Edwards ha scritto:
On 2021-12-02, pozz <pozzugno@gmail.com> wrote:
Ok, it's a good thing to start with a minimal effort and make some tests on EVB and new chips. However I'm wondering if a good quality
commercial/industrial grade software is maintained under the IDE of the
silicon vendor or it is maintained with a Makefile (or similar).
We always use makefiles. Some people do their editing and "make"ing in
an IDE like eclipse. Others use emacs or whatever other environment
they like.
In my experience, software provided by silicon vendors has always
been utter crap. That's been true for IDEs, libraries, header files,
debuggers -- _everything_. And it's been true for 40 years.
Recently I tried to use the Silicon vendor's IDE and demo
project/libraries to build the simple app that prints "hello world" on
a serial port. This is an application, IDE, and libraries the silicon
vendor provided _with_the_evaluation_board_.
Following the instructions, step-by-step, did allow me to build an
executable. It was far too large for the MCU's flash. I threw out the
silicon vendor's "drivers" (which were absurdly bloated) and C library
(also huge). I wrote my own bare-metal drivers and substituted the
printf() implementation I had been using for years. The executable
size was reduced by over 75%.
We've also tried to use non-silicon-vendor IDEs (eclipse), and using
the IDE's concept of "projects" is always a complete mess. The
"project" always ends up with lot's of hard-coded paths and
host-specific junk in it. This means you can't check the project into
git/subversion, check it out on another machine, and build it without
days of "fixing" the project to work on the new host.
Thank you for sharing your experiences. Anyway my post wasn't related to
the quality (size/speed efficiency...) of source code provided by
silicon vendors, but to the build process: IDE vs Makefile.
Il 02/12/2021 17:34, Stefan Reuther ha scritto:
Think of make (or ninja) as some sort of (macro-) assembler language of
build systems, and add a high-level language on top.
CMake seems to be a popular (the most popular?) choice for that language
on top, although reasons why it sucks are abundant; the most prominent
for me being that the Makefiles it generates violate pretty much every
best practice and therefore are slow. Other than that, it can build
embedded software of course.
You'll eventually need another meta-build system on top to build the
projects that form your system image (busybox? openssl? dropbear?
linux?), you'll not port their build systems into yours.
It's very difficult to choose which build system to study and use today:
make, CMake/make, CMake/ninja, Meson, SCons, ...
What do you suggest for embedded projects? Of course I use
cross-compiler for the target (mainly arm-gcc), but also host native
compiler (mingw on Windows and gcc on Linux) for testing and simulation.
Do you use IDE or Makefile? Is there a recent and much better
alternative to make (such as cmake or SCons)?
[*] PowerShell and WSL have been trying to improve this. But I've not seen any build flows that make much use of them, beyond simply taking Linux flows and running them in WSL.
It's absurd how difficult it is to create a Makefile for a simple project[...]
with the following tree:
Makefile
src/
file1.c
module1/
file2.c
module2/
file3.c
target1/
Release/
Debug/
src/
file1.o
file1.d
...
Just to create directories for output files (objects and dependencies)
is a mess: precious rule, cheating make adding a dot after trailing
slash, second expansion, order-only prerequisite!!!
Dependencies must be created as a side effect of compilation with
esoteric -M options for gcc.
Is cmake simpler to configure?
Am 03.12.2021 um 23:48 schrieb pozz:
It's absurd how difficult it is to create a Makefile for a simple project[...]
with the following tree:
Makefile
src/
file1.c
module1/
file2.c
module2/
file3.c
target1/
Release/
Debug/
src/
file1.o
file1.d
...
Just to create directories for output files (objects and dependencies)
is a mess: precious rule, cheating make adding a dot after trailing
slash, second expansion, order-only prerequisite!!!
Hypothesis: a single Makefile that does all this is not a good idea.
Better: make a single Makefile that turns your source code into one
instance of object code, and give it some configuration options that say whether you want target1/Release, target2/Release, or host/Debug.
I'm not sure what you need order-only dependencies for. For a project
like this, with GNU make I'd most likely just do something like
OBJ = file1.o module1/file2.o module2/file3.o
main: $(OBJ)
$(CC) -o $@ $(OBJ)
$(OBJ): %.o: $(SRCDIR)/%.c
mkdir $(dir $@)
$(CC) $(CFLAGS) -c $< -o $@
Dependencies must be created as a side effect of compilation with
esoteric -M options for gcc.
It's not too bad with sufficiently current versions.
CFLAGS += -MMD -MP
-include $(OBJ:.o=.d)
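[Editor's note: for what it's worth, with those flags a generated .d file contains roughly the following; the file names are invented for illustration.]

```make
# file1.d, as written by gcc -MMD (project headers only, no system headers):
file1.o: src/file1.c src/file1.h src/common.h
# -MP additionally emits an empty phony rule per header, so make doesn't
# fail when a header is deleted or renamed:
src/file1.h:
src/common.h:
```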
Is cmake simpler to configure?
CMake does one-configuration-per-invocation type builds like sketched
above, i.e. to build target1/Release and target1/Debug, you invoke CMake
on two different workspaces, once with -DCMAKE_BUILD_TYPE=Release and
once with -DCMAKE_BUILD_TYPE=Debug.
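[Editor's note: a sketch of that invocation flow; the directory names and the toolchain file are hypothetical, and a cross gcc is assumed. Not a definitive recipe, just the shape of one-configuration-per-workspace.]

```sh
# configure two independent build trees from the same source tree
cmake -S . -B build/target1-release -DCMAKE_BUILD_TYPE=Release \
      -DCMAKE_TOOLCHAIN_FILE=arm-gcc.cmake
cmake -S . -B build/target1-debug   -DCMAKE_BUILD_TYPE=Debug \
      -DCMAKE_TOOLCHAIN_FILE=arm-gcc.cmake
# build one of them
cmake --build build/target1-release
```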
Il 04/12/2021 10:31, Stefan Reuther ha scritto:
I'm not sure what you need order-only dependencies for. For a project
like this, with GNU make I'd most likely just do something like
OBJ = file1.o module1/file2.o module2/file3.o
main: $(OBJ)
$(CC) -o $@ $(OBJ)
$(OBJ): %.o: $(SRCDIR)/%.c
mkdir $(dir $@)
$(CC) $(CFLAGS) -c $< -o $@
This is suboptimal. Every time one object file is created (because it is
not present or because prerequisites aren't satisfied), the mkdir command is executed, even if $(dir $@) already exists.
A better approach is to use a dedicated rule for directories, but it's
very complex and tricky[1].
I think your approach is better, only because it is much more
understandable, not because it is more efficient.
Dependencies must be created as a side effect of compilation with
esoteric -M options for gcc.
It's not too bad with sufficiently current versions.
CFLAGS += -MMD -MP
-include $(OBJ:.o=.d)
Are you sure you don't need -MT too, to specify exactly the target rule?
Is cmake simpler to configure?
CMake does one-configuration-per-invocation type builds like sketched
above, i.e. to build target1/Release and target1/Debug, you invoke CMake
on two different workspaces, once with -DCMAKE_BUILD_TYPE=Release and
once with -DCMAKE_BUILD_TYPE=Debug.
Yes, I was asking whether the configuration file of CMake is simpler to write than a Makefile.
[1] https://ismail.badawi.io/blog/automatic-directory-creation-in-make/
On 04/12/2021 16:23, pozz wrote:
Il 04/12/2021 10:31, Stefan Reuther ha scritto:
I'm not sure what you need order-only dependencies for. For a project
like this, with GNU make I'd most likely just do something like
OBJ = file1.o module1/file2.o module2/file3.o
main: $(OBJ)
$(CC) -o $@ $(OBJ)
$(OBJ): %.o: $(SRCDIR)/%.c
mkdir $(dir $@)
$(CC) $(CFLAGS) -c $< -o $@
I almost never use makefiles that have the object files (or source
files, or other files) specified explicitly.
CFILES := $(foreach dir,$(ALLSOURCEDIRS),$(wildcard $(dir)/*.c))
CXXFILES := $(foreach dir,$(ALLSOURCEDIRS),$(wildcard $(dir)/*.cpp))
OBJSsrc := $(CFILES:.c=.o) $(CXXFILES:.cpp=.o)
OBJS := $(addprefix $(OBJDIR), $(patsubst ../%,%,$(OBJSsrc)))
If there is a C or C++ file in the source tree, it is part of the
project. Combined with automatic dependency resolution (for which I use
gcc with -M* flags) this means that the make for a project adapts automatically whenever you add new source or header files, or change the
ones that are there.
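[Editor's note: a runnable toy version of that idea, with invented directory and file names, simplified to plain $(wildcard) patterns rather than the $(foreach) over ALLSOURCEDIRS above.]

```shell
# Sources are discovered, not listed: adding a new .c file under src/
# makes it part of the build with no Makefile edit.
dir=$(mktemp -d) && cd "$dir"
mkdir -p src/module1
touch src/file1.c src/module1/file2.c
printf 'SRC := $(wildcard src/*.c src/*/*.c)\nall:\n\t@echo $(SRC)\n' > Makefile
make    # prints the discovered sources: src/file1.c src/module1/file2.c
touch src/module1/file3.c
make    # now also prints src/module1/file3.c
```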
This is suboptimal. Every time one object file is created (because it is
not present or because prerequisites aren't satisfied), the mkdir command is
executed, even if $(dir $@) already exists.
Use existence-only dependencies:
target/%.o : %.c | target
$(CC) $(CFLAGS) -c $< -o $@
target :
mkdir -p target
When you have a dependency given after a |, GNU make will ensure that it exists but does not care about its timestamp. So here it will check if
the target directory is there before creating target/%.o, and if not it
will make it. It probably doesn't matter much for directories, but it
can be useful in some cases to avoid extra work.
And use "mkdir -p" to make a directory including any other parts of the
path needed, and to avoid an error if the directory already exists.
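[Editor's note: to make the order-only behaviour concrete, a small self-contained demo; names are invented and cp stands in for the compile step.]

```shell
# '| target' = order-only: target/ must exist, but its timestamp is ignored,
# so touching the directory never forces a rebuild of target/main.o.
dir=$(mktemp -d) && cd "$dir"
printf 'int main(void){return 0;}\n' > main.c
printf 'target/main.o: main.c | target\n\tcp $< $@\ntarget:\n\tmkdir -p target\n' > Makefile
make     # runs mkdir -p target, then cp main.c target/main.o
touch target
make     # directory timestamp changed, but nothing is rebuilt
```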
A better approach is to use a dedicated rule for directories, but it's
very complex and tricky[1].
The reference you gave is okay too. Some aspects of advanced makefiles
/are/ complex and tricky, and can be hard to debug (look out for mixes
of spaces instead of tabs at the start of lines!) But once you've got
them in place, you can re-use them in other projects. And you can copy examples like the reference you gave, rather than figuring it out yourself.
I think your approach is better, only because is much more
understandable, not because is more efficient.
My version is - IMHO - understandable /and/ efficient.
Dependencies must be created as a side effect of compilation with
esoteric -M options for gcc.
It's not too bad with sufficiently current versions.
CFLAGS += -MMD -MP
-include $(OBJ:.o=.d)
Are you sure you don't need -MT too, to specify exactly the target rule?
The exact choice of -M flags depends on details of your setup. I prefer
to have the dependency creation done as a separate step from the
compilation - it's not strictly necessary, but I have found it neater. However, I use two -MT flags per dependency file. One makes a rule for
the file.o dependency, the other is for the file.d dependency. That
way, make knows when it has to re-build the dependency file.
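[Editor's note: a hedged sketch of such a separate dependency step, assuming GCC-style -MM/-MT flags and the variable names from the earlier examples. The two -MT options make the generated rule name both the object file and the .d file itself, which is what lets make know to regenerate the dependency file when a header changes.]

```make
# Generate foo.d from foo.c as its own rule, not as a compile side effect.
%.d: %.c
	$(CC) $(CFLAGS) -MM -MT '$(@:.d=.o)' -MT '$@' $< > $@

# Pull the generated fragments in; missing ones are silently ignored.
-include $(OBJ:.o=.d)
```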
Is cmake simpler to configure?
CMake does one-configuration-per-invocation type builds like sketched
above, i.e. to build target1/Release and target1/Debug, you invoke CMake
on two different workspaces, once with -DCMAKE_BUILD_TYPE=Release and
once with -DCMAKE_BUILD_TYPE=Debug.
Yes, I was asking if the configuration file of CMake is simpler to write
compared to a Makefile.
I've only briefly looked at CMake. It always looked a bit limited to me
- sometimes I have a variety of extra programs or steps to run (like a
Python script to pre-process files and generate extra C or header files,
or extra post-processing steps). I also often need different compiler
flags for different parts of a project. Perhaps it would work for what
I need and I just haven't read enough.
[1] https://ismail.badawi.io/blog/automatic-directory-creation-in-make/
On 4.12.21 18.41, David Brown wrote:
On 04/12/2021 16:23, pozz wrote:
Il 04/12/2021 10:31, Stefan Reuther ha scritto:
Is cmake simpler to configure?
CMake does one-configuration-per-invocation type builds like sketched
above, i.e. to build target1/Release and target1/Debug, you invoke
CMake
on two different workspaces, once with -DCMAKE_BUILD_TYPE=Release and
once with -DCMAKE_BUILD_TYPE=Debug.
Yes, I was asking if the configuration file of CMake is simpler to write
compared to a Makefile.
I've only briefly looked at CMake. It always looked a bit limited to me
- sometimes I have a variety of extra programs or steps to run (like a
Python script to pre-process files and generate extra C or header files,
or extra post-processing steps). I also often need different compiler
flags for different parts of a project. Perhaps it would work for what
I need and I just haven't read enough.
[1] https://ismail.badawi.io/blog/automatic-directory-creation-in-make/
CMake is on a different level than make. CMake aims to the realm of
autoconf, automake and friends. One of the supported tail-ends for
CMake is GNU make.
On 2021-12-03, Theo <theom+news@chiark.greenend.org.uk> wrote:
[*] PowerShell and WSL have been trying to improve this. But I've not seen
any build flows that make much use of them, beyond simply taking Linux flows
and running them in WSL.
I always had good luck using Cygwin and gnu "make" on Windows to run
various Win32 .exe command line compilers (e.g. IAR). I (thankfully)
haven't needed to do that for several years now...
On 04/12/2021 17:49, Tauno Voipio wrote:
On 4.12.21 18.41, David Brown wrote:
On 04/12/2021 16:23, pozz wrote:
Il 04/12/2021 10:31, Stefan Reuther ha scritto:
Is cmake simpler to configure?
CMake does one-configuration-per-invocation type builds like sketched
above, i.e. to build target1/Release and target1/Debug, you invoke CMake
on two different workspaces, once with -DCMAKE_BUILD_TYPE=Release and
once with -DCMAKE_BUILD_TYPE=Debug.
Yes, I was asking if the configuration file of CMake is simpler to write
compared to a Makefile.
I've only briefly looked at CMake. It always looked a bit limited to me
- sometimes I have a variety of extra programs or steps to run (like a
Python script to pre-process files and generate extra C or header files,
or extra post-processing steps). I also often need different compiler
flags for different parts of a project. Perhaps it would work for what
I need and I just haven't read enough.
[1] https://ismail.badawi.io/blog/automatic-directory-creation-in-make/
CMake is on a different level than make. CMake aims to the realm of
autoconf, automake and friends. One of the supported tail-ends for
CMake is GNU make.
Yes, I know. The question is, could I (or the OP, or others) use CMake
to control their builds? It doesn't really matter if the output is a makefile, a ninja file, or whatever - it matters if it can do the job
better (for some value of "better") than a hand-written makefile. I
suspect that for projects that fit into the specific patterns it
supports, it will be a good choice - for others, it will not.
On Fri, 3 Dec 2021 21:28:54 -0000 (UTC), Grant Edwards <invalid@invalid.invalid> wrote:
On 2021-12-03, Theo <theom+news@chiark.greenend.org.uk> wrote:
[*] PowerShell and WSL have been trying to improve this. But I've not seen
any build flows that make much use of them, beyond simply taking Linux flows
and running them in WSL.
I always had good luck using Cygwin and gnu "make" on Windows to run
various Win32 .exe command line compilers (e.g. IAR). I (thankfully)
haven't needed to do that for several years now...
The problem with Cygwin is it doesn't play well with native Windows
GCC (MingW et al).
Cygwin compilers produce executables that depend on the /enormous/
Cygwin library. You can statically link the library or ship the DLL
(or an installer that downloads it) with your program, but by doing so
your program falls under the GPL - the terms of which are not acceptable
to some developers.
And the Cygwin environment is ... less than stable. Any update to
Windows can break it.
On 4.12.21 19.23, David Brown wrote:
On 04/12/2021 17:49, Tauno Voipio wrote:
On 4.12.21 18.41, David Brown wrote:
On 04/12/2021 16:23, pozz wrote:
Il 04/12/2021 10:31, Stefan Reuther ha scritto:
Is cmake simpler to configure?
CMake does one-configuration-per-invocation type builds like sketched
above, i.e. to build target1/Release and target1/Debug, you invoke CMake
on two different workspaces, once with -DCMAKE_BUILD_TYPE=Release and
once with -DCMAKE_BUILD_TYPE=Debug.
Yes, I was asking if the configuration file of CMake is simpler to write
compared to a Makefile.
I've only briefly looked at CMake. It always looked a bit limited to me
- sometimes I have a variety of extra programs or steps to run (like a
Python script to pre-process files and generate extra C or header files,
or extra post-processing steps). I also often need different compiler
flags for different parts of a project. Perhaps it would work for what
I need and I just haven't read enough.
[1] https://ismail.badawi.io/blog/automatic-directory-creation-in-make/
CMake is on a different level than make. CMake aims to the realm of
autoconf, automake and friends. One of the supported tail-ends for
CMake is GNU make.
Yes, I know. The question is, could I (or the OP, or others) use CMake
to control their builds? It doesn't really matter if the output is a
makefile, a ninja file, or whatever - it matters if it can do the job
better (for some value of "better") than a hand-written makefile. I
suspect that for projects that fit into the specific patterns it
supports, it will be a good choice - for others, it will not.
I tried to use it for some raw-iron embedded programs. IMHO, CMake
belongs there more to the problem set than the solution set. CMake is aimed
at producing code for the system it is run on; cross-compilation creates problems.
I succeeded in using CMake for cross-compiling on PC Linux for Raspi OS
Linux, but the compiler identification for a raw metal target was not
happy when the trial compilation could not link a run file using the
Linux run file creation model.
On 04/12/2021 20:53, George Neuner wrote:[...]
On Fri, 3 Dec 2021 21:28:54 -0000 (UTC), Grant Edwards
I always had good luck using Cygwin and gnu "make" on Windows to run
various Win32 .exe command line compilers (e.g. IAR). I (thankfully)
haven't needed to do that for several years now...
The problem with Cygwin is it doesn't play well with native Windows
GCC (MingW et al).
I concur with that. Cygwin made sense long ago, but for the past couple
of decades the mingw-based alternatives have been more appropriate for
most uses of *nix stuff on Windows. In particular, Cygwin is a thick compatibility layer that has its own filesystem, process management, and other features to fill in the gaps where Windows doesn't fulfil the
POSIX standards (or does so in a way that plays badly with the rest of Windows).
Am 04.12.2021 um 16:23 schrieb pozz:
Il 04/12/2021 10:31, Stefan Reuther ha scritto:
I'm not sure what you need order-only dependencies for. For a project
like this, with GNU make I'd most likely just do something like
OBJ = file1.o module1/file2.o module2/file3.o
main: $(OBJ)
$(CC) -o $@ $(OBJ)
$(OBJ): %.o: $(SRCDIR)/%.c
mkdir $(dir $@)
$(CC) $(CFLAGS) -c $< -o $@
This is suboptimal. Every time one object file is created (because it is
not present or because prerequisites aren't satisfied), the mkdir command is
executed, even if $(dir $@) already exists.
(did I really forget the '-p'?)
The idea was that creating a directory and checking for its existence
both require a path lookup, which is the expensive operation here.
When generating the Makefile with a script, it's easy to sneak a 100% matching directory-creation dependency into any rule that needs it:
foo/bar.o: bar.c foo/.mark
...
foo/.mark:
mkdir foo
Dependencies must be created as a side effect of compilation with
esoteric -M options for gcc.
It's not too bad with sufficiently current versions.
CFLAGS += -MMD -MP
-include $(OBJ:.o=.d)
Are you sure you don't need -MT too, to specify exactly the target rule?
Documentation says you are right, but '-MMD -MP' works fine for me so far...
Stefan
Am 04.12.2021 um 22:17 schrieb David Brown:
On 04/12/2021 20:53, George Neuner wrote:[...]
On Fri, 3 Dec 2021 21:28:54 -0000 (UTC), Grant Edwards
I always had good luck using Cygwin and gnu "make" on Windows to run
various Win32 .exe command line compilers (e.g. IAR). I (thankfully)
haven't needed to do that for several years now...
The problem with Cygwin is it doesn't play well with native Windows
GCC (MingW et al).
I concur with that. Cygwin made sense long ago, but for the past couple
of decades the mingw-based alternatives have been more appropriate for
most uses of *nix stuff on Windows. In particular, Cygwin is a thick
compatibility layer that has its own filesystem, process management, and
other features to fill in the gaps where Windows doesn't fulfil the
POSIX standards (or does so in a way that plays badly with the rest of
Windows).
The problem is that both projects, Cygwin and MinGW/MSYS, provide much
more than just a compiler, and in an incompatible way, which is probably incompatible with what your toolchain does, and incompatible with what
Visual Studio does.
"-Ic:\test" specifies one path name for Windows, but probably two for a toolchain with Unix heritage, where ":" is the separator, not a drive
letter. Cygwin wants "-I/cygdrive/c" instead, (some versions of) MinGW
want "-I/c". That, on the other hand, might be an option "-I" followed
by an option "/c" for a toolchain with Windows heritage.
The problem domain is complex, therefore solutions need to be complex.
That aside, I found staying within one universe ("use all from Cygwin",
"use all from MinGW") to work pretty well; when having to call into
another universe (e.g. native Win32), be careful to not use, for
example, any absolute paths.
Stefan
I succeeded in using CMake for cross-compiling on PC Linux for Raspi OS
Linux, but the compiler identification for a raw metal target was not
happy when the trial compilation could not link a run file using the
Linux run file creation model.
That is kind of what I thought. CMake sounds like a good solution if
you want to make a program that compiles on Linux with native gcc, and
also with MSVC on Windows, and perhaps a few other native build
combinations. But it is not really suited for microcontroller builds as
far as I can see. (Again, I haven't tried it much, and don't want to do
it injustice by being too categorical.)
On 06/12/2021 14:51, Grant Edwards wrote:
I use CMake for cross-compilation for microcontroller stuff. I don't
use it for my own code, but there are a few 3rd-party libraries that
use it, and I don't have any problems configuring it to use a cross
compiler.
OK. As I said, I haven't looked in detail or tried much. Maybe I will,
one day when I have time.
On 2021-12-04, David Brown <david.brown@hesbynett.no> wrote:
I succeeded in using CMake for cross-compiling on PC Linux for Raspi OS
Linux, but the compiler identification for a raw metal target was not
happy when the trial compilation could not link a run file using the
Linux run file creation model.
That is kind of what I thought. CMake sounds like a good solution if
you want to make a program that compiles on Linux with native gcc, and
also with MSVC on Windows, and perhaps a few other native build
combinations. But it is not really suited for microcontroller builds as
far as I can see. (Again, I haven't tried it much, and don't want to do
it injustice by being too categorical.)
I use CMake for cross-compilation for microcontroller stuff. I don't
use it for my own code, but there are a few 3rd-party libraries that
use it, and I don't have any problems configuring it to use a cross
compiler.
When I download C source code (for example for Linux), most of the time
I need to use make (or autoconf).
In embedded world (no Linux embedded), we use MCUs produced by a silicon vendor that give you at least a ready-to-use IDE (Elipse based or Visual Studio based or proprietary). Recently it give you a full set of
libraries, middlewares, tools to create a complex project from scratch
in a couple of minutes that is compatibile and buildable with its IDE.
Ok, it's a good thing to start with a minimal effort and make some tests
on EVB and new chips. However I'm wondering if a good quality commercial/industrial grade software is maintained under the IDE of the silicon vendor or it is maintained with a Makefile (or similar).
I'm asking this because I just started to add some unit tests (to run
on the host machine) to one of my projects that is built under the IDE.
Without a Makefile it is very difficult to add a series of tests: do I
create a different IDE project for each module's tests?
Moreover, the build process of a project maintained under an IDE is
manual (click on a button). Most of the time there is no way to build
from the command line, and even when there is, it isn't the "normal"
way.
Many times in the past I tried to write a Makefile for my projects, but
honestly the make tool is very cryptic to me (tabs instead of spaces?),
and dependencies are a mess.
Do you use an IDE or a Makefile? Is there a more recent and much better
alternative to make (such as CMake or SCons)?
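[Editorial aside: for the "dependencies are a mess" part, GCC can generate header dependencies automatically. A minimal hedged sketch follows; the file names are hypothetical, and recipe lines must start with a tab.]

```make
# Hypothetical flat project: all .c files in the current directory.
SRCS := $(wildcard *.c)
OBJS := $(SRCS:.c=.o)

app.elf: $(OBJS)
	$(CC) -o $@ $^

# -MMD writes a .d fragment listing the headers each .c actually
# includes; -MP adds phony targets so deleted headers don't break make.
%.o: %.c
	$(CC) $(CFLAGS) -MMD -MP -c $< -o $@

# Pull in the generated fragments; '-' silences them on a clean build.
-include $(OBJS:.o=.d)
```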
This is suboptimal. Every time an object file is built (because it is
missing or its prerequisites are newer), the mkdir command is executed,
even if $(dir $@) already exists.
Use order-only prerequisites:
target/%.o : %.c | target
$(CC) $(CFLAGS) -c $< -o $@
target :
mkdir -p target
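[Editorial aside: the same order-only idea (prerequisites after the `|`, whose timestamps are ignored) extends to a mirrored object tree. A hedged sketch, with hypothetical directory names:]

```make
SRCS    := src/main.c src/drivers/uart.c
OBJS    := $(SRCS:src/%.c=build/%.o)
OBJDIRS := $(sort $(dir $(OBJS)))      # e.g. build/ build/drivers/

app.elf: $(OBJS)
	$(CC) -o $@ $^

# Directories appear only after the '|': their timestamps never trigger
# a rebuild, and mkdir runs only when a directory is actually missing.
build/%.o: src/%.c | $(OBJDIRS)
	$(CC) $(CFLAGS) -c $< -o $@

$(OBJDIRS):
	mkdir -p $@
```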
I'd prefer to have the same tree in source and build dirs:
On 09.12.2021 at 11:54, pozz wrote:
I'd prefer to have the same tree in source and build dirs:
What on earth for?
Subdirectories for sources are necessary to organize our work, because
humans can't deal too well with folders filled with hundreds of files,
and because we fare better with the project's top-down structure
tangibly represented as a tree of subfolders.
But let's face it: we very rarely even look at object files, much less
work on them in any meaningful fashion. They just have to be somewhere,
but it's no particular burden at all if they're all in a single folder,
per primary build target. They're for the compiler and make alone to
work on, not for humans. So they don't have to be organized for human consumption.
That's why virtually all hand-written Makefiles I've ever seen, and a
large portion of the auto-generated ones, too, keep all of a target's
object, list and dependency files in a single folder. Mechanisms like
VPATH exist for the express purpose of easing this approach, and the
built-in rules and macros also largely rely on it.
The major exception in this regard is CMake, which does indeed mirror
the source tree layout --- but that's manageable for them only because
their Makefiles, being fully machine-generated, can become almost
arbitrarily complex, for no extra cost. Nobody in full possession of
their mental capabilities would ever write Makefiles the way CMake does
it, by hand.
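[Editorial aside: the VPATH mechanism mentioned above can be sketched like this; the directory and file names are hypothetical, and recipes must be tab-indented:]

```make
# All objects land in one build/ dir; make finds the sources via VPATH.
VPATH := src:src/drivers:src/net

OBJS := main.o uart.o tcp.o
OBJS := $(addprefix build/,$(OBJS))

app.elf: $(OBJS)
	$(CC) -o $@ $^

# The %.c prerequisite is searched along VPATH, so uart.c may live in
# src/drivers/; $< expands to the path where it was found.
build/%.o: %.c | build
	$(CC) $(CFLAGS) -c $< -o $@

build:
	mkdir -p build
```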
Ok, it's not too hard (nothing is hard when you know how to do it), but
it's not that simple too.
On 10/12/2021 18:35, Hans-Bernhard Bröker wrote:
[...]
The major exception in this regard is CMake, which does indeed mirror
the source tree layout --- but that's manageable for them only because
their Makefiles, being fully machine-generated, can become almost
arbitrarily complex, for no extra cost. Nobody in full possession of
their mental capabilities would ever write Makefiles the way CMake does
it, by hand.
There are other automatic systems that mirror the structure of the
source tree for object files, dependency files and list files (yes,
some people still like these). Eclipse does it, for example, and
therefore so do the majority of vendor-supplied toolkits, since most
are Eclipse based. (I don't know if NetBeans and Visual Studio /
Visual Studio Code do the same - these are the other two IDEs commonly
used by manufacturer tools.)
The big advantage of having object directories that copy source
directories is that it all works even if you have more than one file
with the same name. Usually, of course, you want to avoid name
conflicts - there are risks of other issues or complications such as
header guard symbols that are not unique (they /can/ include directory information and not just the filename, but they don't always do so) and
you have to be careful that you #include the files you meant. But with
big projects containing SDK files, third-party libraries, RTOSes,
network stacks, and perhaps files written by many people working
directly on the project, conflicts happen. "timers.c" and "utils.c"
sound great to start with, but there is a real possibility of more than
one turning up in a project.
It is not at all hard to make object files mirror the source tree, and
it adds nothing to the build time. For large projects, it is clearly
worth the effort. (For small projects, it is probably not necessary.)
On 11/12/2021 16:18, pozz wrote:
Ok, it's not too hard (nothing is hard when you know how to do it), but
it's not that simple too.
Of course.
And once you've got a makefile you like for one project, you copy it for
the next. I don't think I have started writing a new makefile in 25 years!
On 10.12.2021 at 18:35, Hans-Bernhard Bröker wrote:
But let's face it: we very rarely even look at object files, much less
work on them in any meaningful fashion. They just have to be somewhere,
but it's no particular burden at all if they're all in a single folder,
per primary build target.
But sometimes, we do look at them. Especially in an embedded context.
One example could be things like stack consumption analysis. Or to
answer the question "how much code size do I pay for using this C++
feature?", or "did the compiler correctly inline this function I
expected it to inline?".
And if the linker gives me a "duplicate definition" error, I prefer that
it is located in 'editor.o', not '3d3901cdeade62df1565f9616e607f89.o'.
But otherwise, once you've got the infrastructure to place object files
in SOME subdirectory in your build system, mirroring the source
structure is easy and gives a usability win.
On 11.12.2021 at 10:01, Stefan Reuther wrote:
On 10.12.2021 at 18:35, Hans-Bernhard Bröker wrote:
But let's face it: we very rarely even look at object files, much less
work on them in any meaningful fashion. They just have to be somewhere,
but it's no particular burden at all if they're all in a single folder,
per primary build target.
But sometimes, we do look at them. Especially in an embedded context.
In my experience, looking at individual object files does not occur in
an embedded context any more often than in others.
Or to
answer the question "how much code size do I pay for using this C++
feature?". "Did the compiler correctly inline this function I expected
it to inline?".
Both of those are way easier to check in the debugger or in the mapfile,
than by inspecting individual object files.
And if the linker gives me a "duplicate definition" error, I prefer that
it is located in 'editor.o', not '3d3901cdeade62df1565f9616e607f89.o'.
Both are equally useless. You want to know which source file they're in,
not which object files.
Do you actually use a tool that obfuscates the .o file names like that?
I don't think you've actually mentioned a single one, so far. None of
the things you mentioned had anything to do with _where_ the object
files are.
On 10.12.2021 at 19:44, David Brown wrote:
The big advantage of having object directories that copy source
directories is that it all works even if you have more than one file
with the same name.
Setting aside the issue of whether the build can actually handle that
("module names" in the code tend to be based only on the basename of the source, not its full path, so they would clash anyway), that should
remain an exceptional mishap. I don't subscribe to the idea of making
my everyday life harder to account for (usually) avoidable exceptions
like that.
On 2021-12-04, George Neuner <gneuner2@comcast.net> wrote:
On Fri, 3 Dec 2021 21:28:54 -0000 (UTC), Grant Edwards
<invalid@invalid.invalid> wrote:
On 2021-12-03, Theo <theom+news@chiark.greenend.org.uk> wrote:
[*] Powershell and WSL have been trying to improve this. But I've not seen
any build flows that make much use of them, beyond simply taking Linux flows
and running them in WSL.
I always had good luck using Cygwin and gnu "make" on Windows to run
various Win32 .exe command line compilers (e.g. IAR). I (thankfully)
haven't needed to do that for several years now...
The problem with Cygwin is it doesn't play well with native Windows
GCC (MingW et al).
It's always worked fine for me.
Cygwin compilers produce executables that depend on the /enormous/
Cygwin library.
I wasn't talking about using Cygwin compilers. I was talking about
using Cygwin to do cross-compilation using compilers like IAR.
You can statically link the library or ship the DLL (or an installer
that downloads it) with your program, but by doing so your program
falls under the GPL - the terms of which are not acceptable to some
developers.
And the Cygwin environment is ... less than stable. Any update to
Windows can break it.
That's definitely true. :/
Would anyone point me to a good Makefile template for building a simple embedded project with GNU-ARM?
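[Editorial aside: not an external template, but a minimal hedged sketch of such a Makefile. The CPU flags, linker script and source list are placeholders to adapt, and recipes must be tab-indented.]

```make
# Minimal GNU-ARM bare-metal Makefile sketch -- not a drop-in template.
CROSS   := arm-none-eabi-
CC      := $(CROSS)gcc
OBJCOPY := $(CROSS)objcopy

CPUFLAGS := -mcpu=cortex-m4 -mthumb        # adapt to your MCU
CFLAGS   := $(CPUFLAGS) -Os -Wall -ffunction-sections -fdata-sections -MMD -MP
LDFLAGS  := $(CPUFLAGS) -T linker.ld -Wl,--gc-sections -Wl,-Map=app.map

SRCS := main.c startup.c                   # placeholder source list
OBJS := $(SRCS:%.c=build/%.o)

all: app.bin

app.elf: $(OBJS)
	$(CC) $(LDFLAGS) -o $@ $^

app.bin: app.elf
	$(OBJCOPY) -O binary $< $@

build/%.o: %.c | build
	$(CC) $(CFLAGS) -c $< -o $@

build:
	mkdir -p build

clean:
	rm -rf build app.elf app.bin app.map

-include $(OBJS:.o=.d)

.PHONY: all clean
```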