By 'Build System', I mean a convenient or automatic way to tell a
compiler which source and library files comprise a project, one that
doesn't involve extra dependencies.
On Mon, 29 Jan 2024 16:03:45 +0000, bart wrote:
By 'Build System', I mean a convenient or automatic way to tell a
compiler which source and library files comprise a project, one that
doesn't involve extra dependencies.
If it only works for C code, then that is going to limit its usefulness in today’s multilingual world.
On 1/29/2024 4:57 PM, Lawrence D'Oliveiro wrote:
On Mon, 29 Jan 2024 16:03:45 +0000, bart wrote:
By 'Build System', I mean a convenient or automatic way to tell a
compiler which source and library files comprise a project, one that
doesn't involve extra dependencies.
If it only works for C code, then that is going to limit its
usefulness in
today’s multilingual world.
Huh?
On 30/01/2024 00:57, Lawrence D'Oliveiro wrote:
On Mon, 29 Jan 2024 16:03:45 +0000, bart wrote:
By 'Build System', I mean a convenient or automatic way to tell a
compiler which source and library files comprise a project, one that
doesn't involve extra dependencies.
If it only works for C code, then that is going to limit its
usefulness in
today’s multilingual world.
Languages these days tend to have module schemes and built-in means of compiling assemblies of modules.
C doesn't.
The proposal would allow a project to be built using:
cc file.c
instead of cc file.c file2.c .... lib1.dll lib2.dll ...,
or instead of having to provide a makefile or an @ filelist.
That is a significant advance on what C compilers typically do.
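For illustration, here is roughly what that looks like in practice (the
#pragma module spelling and the cipher/hmac/sha2 file names are taken
from examples later in this thread; treat it as a sketch, not a spec):

/* cipher.c - lead module; it names every file in the program,
   including itself, so the compiler needs no other input */
#pragma module "cipher.c"
#pragma module "hmac.c"
#pragma module "sha2.c"

int main(void) {
    /* ... */
    return 0;
}

A compiler that understands the pragmas can then build the whole binary
from "cc cipher.c" alone; an unmodified compiler will ignore the unknown
pragmas (perhaps with a warning) and compile just the one file.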
On 30/01/2024 02:45, bart wrote:
On 30/01/2024 00:57, Lawrence D'Oliveiro wrote:
On Mon, 29 Jan 2024 16:03:45 +0000, bart wrote:
By 'Build System', I mean a convenient or automatic way to tell a
compiler which source and library files comprise a project, one that
doesn't involve extra dependencies.
If it only works for C code, then that is going to limit its
usefulness in
today’s multilingual world.
Languages these days tend to have module schemes and built-in means of
compiling assemblies of modules.
C doesn't.
The proposal would allow a project to be built using:
cc file.c
instead of cc file.c file2.c .... lib1.dll lib2.dll ...,
or instead of having to provide a makefile or an @ filelist.
That is a significant advance on what C compilers typically do.
You are absolutely right that C does not have any real kind of module
system, and that can be a big limitation compared to other languages. However, I don't think the build system is where the lack of modules is
an issue - it is the scaling of namespaces and identifier clashes that
are the key challenge for large C projects.
Building is already solved - "make" handles everything from tiny
projects to huge projects. When "make" isn't suitable, you need /more/,
not less - build server support, automated build and test systems, etc.
And for users who like simpler things and have simpler projects, IDEs
are almost certainly a better option and will handle project builds.
On 30/01/2024 01:45, bart wrote:
On 30/01/2024 00:57, Lawrence D'Oliveiro wrote:
On Mon, 29 Jan 2024 16:03:45 +0000, bart wrote:
By 'Build System', I mean a convenient or automatic way to tell a
compiler which source and library files comprise a project, one that
doesn't involve extra dependencies.
If it only works for C code, then that is going to limit its
usefulness in
today’s multilingual world.
Languages these days tend to have module schemes and built-in means of
compiling assemblies of modules.
C doesn't.
The proposal would allow a project to be built using:
cc file.c
instead of cc file.c file2.c .... lib1.dll lib2.dll ...,
or instead of having to provide a makefile or an @ filelist.
That is a significant advance on what C compilers typically do.
There's a desperate need for hierarchy.
A library like ChatGPT only needs to expose one function,
"answer_question". Maybe a few extra to give context. But of course that
one function calls masses and masses of subroutines. Which should be
private to the module, but not to the source file for the
"answer_question" function.
On 30/01/2024 11:52, bart wrote:
On 30/01/2024 04:46, Malcolm McLean wrote:
Oh, you are not adding modules.
There's a desperate need for hierarchy.
A library like ChatGPT only needs to expose one function,
"answer_question". Maybe a few extra to give context. But of course
that one function calls masses and masses of subroutines. Which
should be private to the module, but not to the source file for the
"answer_question" function.
I'm not sure what that has to do with my proposal (which is not to add
a module scheme as I said).
I've now added wildcards to my test implementation. If I go to your
resource compiler project (which I call 'BBX') and add one small C
file called bbx.c containing:
#pragma module "*.c"
#pragma module "freetype/*.c"
#pragma module "samplerate/*.c"
then I can build it like this:
c:\bbx\src>mcc bbx
Compiling bbx.c to bbx.exe
So essentially we have a path-listing and description language.
Which ironically is what the resource compiler basically does. You put a
list of paths into an XML file, and it uses that to find the resources,
and merge them together on standard output (as text, of course :-) ).
You're doing the same, except that of course you have to compile and
link rather than decode and lightly pre-process.
But I'm wondering about one file which contains all the sources for the program. Like an IDE project file but lighter weight.
On 30/01/2024 02:38, Chris M. Thomasson wrote:
On 1/29/2024 4:57 PM, Lawrence D'Oliveiro wrote:
On Mon, 29 Jan 2024 16:03:45 +0000, bart wrote:
By 'Build System', I mean a convenient or automatic way to tell a
compiler which source and library files comprise a project, one that
doesn't involve extra dependencies.
If it only works for C code, then that is going to limit its
usefulness in
today’s multilingual world.
Huh?
I assume he means it's common to use multiple programming languages,
rather than multiple human languages. (The latter may also be true, but
it's the former that is relevant.)
For my own use at least, he's right. His system is aimed at being
simpler than make for C-only projects with limited and straightforward
build requirements. That's fine for such projects, and if that suits
his needs or the needs of others, great. But it would not cover more
than a tiny proportion of my projects over the decades - at least not
without extra help (extra commands, bash/bat files, etc.)
On Tue, 30 Jan 2024 16:46:56 -0800, Tim Rentsch wrote:
* have multiple outputs (some outputs the result of
C compiles, others the result of other tools)
Just as an example, the man page for Blender is generated by a Python
script that runs the built executable with the “--help” option and wraps that output in some troff markup.
On 1/30/2024 12:06 AM, David Brown wrote:
On 30/01/2024 02:38, Chris M. Thomasson wrote:
On 1/29/2024 4:57 PM, Lawrence D'Oliveiro wrote:
On Mon, 29 Jan 2024 16:03:45 +0000, bart wrote:
By 'Build System', I mean a convenient or automatic way to tell a
compiler which source and library files comprise a project, one that
doesn't involve extra dependencies.
If it only works for C code, then that is going to limit its
usefulness in
today’s multilingual world.
Huh?
I assume he means it's common to use multiple programming languages,
rather than multiple human languages. (The latter may also be true,
but it's the former that is relevant.)
For my own use at least, he's right. His system is aimed at being
simpler than make for C-only projects with limited and straightforward
build requirements.
When you say his, you mean Bart's system, right?
That's fine for such projects, and if that suits his needs or the
needs of others, great. But it would not cover more than a tiny
proportion of my projects over the decades - at least not without
extra help (extra commands, bash/bat files, etc.)
On 2024-01-31, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 30 Jan 2024 16:46:56 -0800, Tim Rentsch wrote:
* have multiple outputs (some outputs the result of
C compiles, others the result of other tools)
Just as an example, the man page for Blender is generated by a Python
script that runs the built executable with the “--help” option and wraps
that output in some troff markup.
That's the sort of stunt why distros have given up on clean cross
compiling, and resorted to Qemu.
On Wed, 31 Jan 2024 08:47:20 +0100
David Brown <david.brown@hesbynett.no> wrote:
On 31/01/2024 04:23, Kaz Kylheku wrote:
On 2024-01-31, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 30 Jan 2024 16:46:56 -0800, Tim Rentsch wrote:
* have multiple outputs (some outputs the result of
C compiles, others the result of other tools)
Just as an example, the man page for Blender is generated by a Python
script that runs the built executable with the “--help” option and wraps
that output in some troff markup.
That's the sort of stunt why distros have given up on clean cross
compiling, and resorted to Qemu.
It is also the sort of stunt that reduces development effort and ensures
that you minimise the risk of documentation being out of sync with the
program.
I don't see how it achieves such tasks. For preventing loss of agreement
between behaviour and documentation, the developers must have the necessary
self-discipline to modify the documentation when they make changes in the
behaviour. If they have such self-discipline, then it's no harder to modify
a separate documentation file than it is to modify the part of the source
code which prints the --help output. Personally, I have the file(s) with
the documentation as additional tabs in the same vim session where other
tabs have the source code.
Also, the output of --help should be a short reminder, whereas
documentation should be longer, possibly much longer, possibly containing
a tutorial, depending on how complex the application is.
On 31/01/2024 12:02, Spiros Bousbouras wrote:
On Wed, 31 Jan 2024 08:47:20 +0100
David Brown <david.brown@hesbynett.no> wrote:
On 31/01/2024 04:23, Kaz Kylheku wrote:
On 2024-01-31, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Tue, 30 Jan 2024 16:46:56 -0800, Tim Rentsch wrote:
* have multiple outputs (some outputs the result of
C compiles, others the result of other tools)
Just as an example, the man page for Blender is generated by a Python
script that runs the built executable with the “--help” option and wraps
that output in some troff markup.
That's the sort of stunt why distros have given up on clean cross
compiling, and resorted to Qemu.
It is also the sort of stunt that reduces development effort and ensures
that you minimise the risk of documentation being out of sync with the
program.
I don't see how it achieves such tasks. For preventing loss of agreement
between behaviour and documentation, the developers must have the necessary
self-discipline to modify the documentation when they make changes in the
behaviour. If they have such self-discipline, then it's no harder to modify
a separate documentation file than it is to modify the part of the source
code which prints the --help output. Personally, I have the file(s) with
the documentation as additional tabs in the same vim session where other
tabs have the source code.
They must document the user-visible features in (at least) two places -
the "man" page, and the "--help" output. By using automation to
generate one of these from the other, they reduce the duplicated effort.
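As a sketch of that pattern in C (the real Blender generator is a Python
script; the program name, man-page title and troff details here are
invented), the whole trick is to capture the --help text once and wrap
it in man markup:

/* The help text is written once, in the program itself; this wrapper
   turns the output of "./prog --help" into a minimal man page on
   stdout. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *in = popen("./prog --help", "r");   /* POSIX popen */
    int c;
    if (!in)
        return EXIT_FAILURE;
    printf(".TH PROG 1\n.SH DESCRIPTION\n.nf\n"); /* troff header, no-fill */
    while ((c = getc(in)) != EOF)
        putchar(c);                               /* --help text verbatim */
    printf(".fi\n");
    return pclose(in) == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
}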
On 30/01/2024 16:50, Malcolm McLean wrote:
In other words: a Makefile
But I'm wondering about one file which contains all the sources for the
program. Like an IDE project file but lighter weight.
On Tue, 30 Jan 2024 19:22:00 +0000, Richard Harnden <richard.nospam@gmail.invalid> wrote in <upbi8o$14443$1@dont-email.me>:
On 30/01/2024 16:50, Malcolm McLean wrote:
In other words: a Makefile
But I'm wondering about one file which contains all the sources for the
program. Like an IDE project file but lighter weight.
Agreed; it's a solution looking for a problem.
$ make -j # how does Bart's new build manager handle this case?
("-j" engages parallel compilation.)
ObC:
$ cat try.c
#include <stdlib.h>
int main(void) {
return(system("make -j 16"));
}
_ _ _ _ _ _ _
$ cat Makefile
CFLAGS=-g -O2 -std=c90 -pedantic
_ _ _ _ _ _ _
$ make try
cc -g -O2 -std=c90 -pedantic try.c -o try
$ ./try
make: 'try' is up to date.
Working with Other Compilers
----------------------------
Clearly, my scheme will only work with a suitably modified compiler.
Without that, I considered doing something like this, adding this
block to my example from (2):
#pragma module "cipher.c"
#pragma module "hmac.c"
#pragma module "sha2.c"
#ifndef __MCC__
#include "runcc.c"
int main(void) {
runcc(__FILE__);
}
#endif
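The thread never shows runcc.c; a minimal sketch of what such a helper
might do, assuming the one-directive-per-line form above and doing no
wildcard expansion or bounds checking, is:

/* runcc.c - hypothetical: rescan the top source file for
   #pragma module "name.c" lines and hand the collected names to the
   system compiler in one command. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int runcc(const char *srcfile)
{
    char cmd[4096] = "cc", line[512], name[256];
    FILE *f = fopen(srcfile, "r");
    if (!f) { perror(srcfile); return EXIT_FAILURE; }

    while (fgets(line, sizeof line, f))
        if (sscanf(line, " #pragma module \"%255[^\"]\"", name) == 1) {
            strcat(cmd, " ");
            strcat(cmd, name);   /* the lead file lists itself too */
        }
    fclose(f);
    strcat(cmd, " -o prog");     /* output name is a placeholder */

    printf("%s\n", cmd);
    return system(cmd);          /* e.g. cc cipher.c hmac.c sha2.c -o prog */
}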
On Wed, 31 Jan 2024 15:13:20 GMT, Scott Lurndal wrote:
... and the simulator 'help' command will call system("man
${INSTALL_LOC}/man/topic.man")
Agh! Why do people feel the need to go through a shell where a shell is
not needed?
On Wed, 31 Jan 2024 15:31:29 +0100, David Brown wrote:
... but if the documentation is
longer than perhaps a dozen pages/screenfuls, "man" is unsuitable.
So it is your considered opinion, then, that the bash man page is
“unsuitable”?
ldo@theon:~> man bash | wc -l
5276
Actually I refer to it quite a lot. Being able to use search functions
helps.
How would you display a manpage using nroff markup from an application?
On Wed, 31 Jan 2024 16:41:21 -0000 (UTC), vallor wrote:
$ make -j
The last time I tried that on an FFmpeg build, it brought my machine to
its knees. ;)
On 31/01/2024 16:41, vallor wrote:
On Tue, 30 Jan 2024 19:22:00 +0000, Richard Harnden
<richard.nospam@gmail.invalid> wrote in <upbi8o$14443$1@dont-email.me>:
On 30/01/2024 16:50, Malcolm McLean wrote:
In other words: a Makefile
But I'm wondering about one file which contains all the sources for the
program. Like an IDE project file but lighter weight.
Agreed; it's a solution looking for a problem.
Why do you think languages come with modules? That allows them to
discover their own modules, rather than rely on external apps where the details are buried under appalling syntax and mixed up with a hundred
other matters.
On 31/01/2024 21:25, bart wrote:
On 31/01/2024 16:41, vallor wrote:
On Tue, 30 Jan 2024 19:22:00 +0000, Richard Harnden
<richard.nospam@gmail.invalid> wrote in <upbi8o$14443$1@dont-email.me>:
On 30/01/2024 16:50, Malcolm McLean wrote:
In other words: a Makefile
But I'm wondering about one file which contains all the sources for
the
program. Like an IDE project file but lighter weight.
Agreed; it's a solution looking for a problem.
Why do you think languages come with modules? That allows them to
discover their own modules, rather than rely on external apps where
the details are buried under appalling syntax and mixed up with a
hundred other matters.
No, that is not at all the purpose of modules in programming. Note that
there is no specific meaning of "module", and different languages use
different terms for similar concepts. There are many features that a
language's "module" system might have - some have all, some have few:
1. It lets you split the program into separate parts - generally
separate files. This is essential for scalability for large programs.
2. You can compile modules independently to allow partial builds.
3. Modules generally have some way to specify exported symbols and
facilities that can be used by other modules.
4. Modules can "import" other modules, gaining access to those modules' exported symbols.
5. Modules provide encapsulation of data, code and namespaces.
6. Modules can be used in a hierarchical system, building big modules
from smaller ones to support larger libraries with many files.
7. Modules provide a higher level concept that can be used by language
tools to see how the whole program fits together or interact with
package managers and librarian tools.
C provides 1, 2, 3, and 4 if you use a "file.c/file.h" organisation. It
provides a limited form of 5 (everything that is not exported is
"static"), but scaling to larger systems is dependent on identifier
prefixes.
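For concreteness, a minimal example of that organisation (the file and
function names are invented):

/* counter.h - the public interface: items 3 and 4, export/import */
#ifndef COUNTER_H
#define COUNTER_H
void counter_reset(void);
int  counter_next(void);
#endif

/* counter.c - a separate translation unit (item 1), independently
   compilable (item 2).  "static" gives the limited form of item 5:
   value and bump are invisible to every other translation unit. */
#include "counter.h"

static int value;                          /* private state  */
static int bump(int v) { return v + 1; }   /* private helper */

void counter_reset(void) { value = 0; }
int  counter_next(void)  { return value = bump(value); }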
You seem to be thinking purely about item 7 above. This is, I think,
common in interpreted languages (where modules have to be found at
run-time, where the user is there but the developer is not).
Compiled
languages don't usually have such a thing, because developers (as
distinct from users) have build tools available that do a better job.
On 31/01/2024 12:02, Spiros Bousbouras wrote:
They must document the user-visible features in (at least) two places -
the "man" page, and the "--help" output. By using automation to
generate one of these from the other, they reduce the duplicated effort.
Also, the output of --help should be a short reminder, whereas
documentation should be longer, possibly much longer, possibly
containing a tutorial, depending on how complex the application is.
The same applies to "man" pages. Sometimes it makes sense to have short "--help" outputs and longer "man" pages, but if the documentation is
longer than perhaps a dozen pages/screenfuls, "man" is unsuitable. And
I imagine that the documentation for blender, along with its tutorials
(as you say), is many orders of magnitude more than that. Keeping the
"man" page and "--help" output the same seems sensible here.
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Wed, 31 Jan 2024 15:31:29 +0100, David Brown wrote:
... but if the documentation is
longer than perhaps a dozen pages/screenfuls, "man" is unsuitable.
So it is your considered opinion, then, that the bash man page is
“unsuitable”?
ldo@theon:~> man bash | wc -l
5276
Actually I refer to it quite a lot. Being able to use search functions
helps.
When working with the ksh man page, I use vim.
function viman
{
a=$(mktemp absXXXXXXX)
man "$1" | col -b > ${a}
vim ${a}
rm ${a}
}
$ viman ksh
bart <bc@freeuk.com> writes:
[description of a rudimentary C build system]
What was described is what I might call the easiest and
least important part of a build system.
Looking over one of my current projects (modest in size,
a few thousand lines of C source, plus some auxiliary
files adding perhaps another thousand or two), here are
some characteristics essential for my workflow (given
in no particular order):
* have multiple outputs (some outputs the result of
C compiles, others the result of other tools)
* use different flag settings for different translation
units
* be able to express dependency information
* produce generated source files, sometimes based
on other source files
* be able to invoke arbitrary commands, including
user-written scripts or other programs
* build or rebuild some outputs only when necessary
* condition some processing steps on successful
completion of other processing steps
* deliver partially built as well as fully built
program units
* automate regression testing and project archival
(in both cases depending on completion status)
* produce sets of review locations for things like
program errors or TBD items
* express different ways of combining compiler
outputs (such as .o files) depending on what
is being combined and what output is being
produced (sometimes a particular set of inputs
will be combined in several different ways to
produce several different outputs)
Indeed it is the case that producing a complete program is one
part of my overall build process. But it is only one step out
of many, and it is easy to express without needing any special
considerations from the build system.
On Thu, 01 Feb 2024 00:29:23 GMT, Scott Lurndal wrote:
How would you display a manpage using nroff markup from an application?
Much safer:
subprocess.run \
(
args = ("man", os.path.expandvars("${INSTALL_LOC}/man/topic.man"))
)
On 01/02/2024 08:39, David Brown wrote:
On 31/01/2024 21:25, bart wrote:
On 31/01/2024 16:41, vallor wrote:
On Tue, 30 Jan 2024 19:22:00 +0000, Richard Harnden
<richard.nospam@gmail.invalid> wrote in <upbi8o$14443$1@dont-email.me>:
On 30/01/2024 16:50, Malcolm McLean wrote:
In other words: a Makefile
But I'm wondering about one file which contains all the sources
for the
program. Like an IDE project file but lighter weight.
Agreed; it's a solution looking for a problem.
Why do you think languages come with modules? That allows them to
discover their own modules, rather than rely on external apps where
the details are buried under appalling syntax and mixed up with a
hundred other matters.
No, that is not at all the purpose of modules in programming. Note
that there is no specific meaning of "module", and different languages
use different terms for similar concepts. There are many features that
a language's "module" system might have - some have all, some have few:
1. It lets you split the program into separate parts - generally
separate files. This is essential for scalability for large programs.
2. You can compile modules independently to allow partial builds.
3. Modules generally have some way to specify exported symbols and
facilities that can be used by other modules.
4. Modules can "import" other modules, gaining access to those
modules' exported symbols.
5. Modules provide encapsulation of data, code and namespaces.
6. Modules can be used in a hierarchical system, building big modules
from smaller ones to support larger libraries with many files.
7. Modules provide a higher level concept that can be used by language
tools to see how the whole program fits together or interact with
package managers and librarian tools.
C provides 1, 2, 3, and 4 if you use a "file.c/file.h" organisation.
It provides a limited form of 5 (everything that is not exported is
"static"), but scaling to larger systems is dependent on identifier
prefixes.
You seem to be thinking purely about item 7 above. This is, I think,
common in interpreted languages (where modules have to be found at
run-time, where the user is there but the developer is not).
I've been implementing languages with language-supported modules for
about 12 years.
They generally provide 1, 2, 4, 5, and 7 from your list, and partial
support of 6.
They don't provide 2 (compiling individual modules) because the aim is a
very fast, whole-program compiler.
While for 6, there is only a hierarchy between groups of modules, each forming an independent sub-program or library. I tried a strict full per-module hierarchy early on, mixed up with independent compilation; it worked poorly.
The two levels allow you to assemble one binary out of groups of modules
that each represent an independent component or library.
Compiled
languages don't usually have such a thing, because developers (as
distinct from users) have build tools available that do a better job.
Given a module scheme, the tool needed to build a whole program should
not need to be told about the names and location of every constituent
module; it should be able to determine that from what's already in the
source code, given only a start point.
Even with independent compilation, you might be able to use that info to determine dependencies, but you will need that module hierarchy if you
want to compile individual modules.
My view is that that tool only needs to be the compiler (a program that
does the 'full stack' from source files to executable binary) working
purely from the source code.
Yours is to have compilers, assemblers, linkers and make programs,
working with auxiliary data in makefiles, that themselves have to be
generated by extra tools or special options, or built by hand.
In other words, you can't retro-fit a real module scheme to C, not one
that will work with existing code.
On 31/01/2024 00:46, Tim Rentsch wrote:
Looking over one of my current projects (modest in size,
a few thousand lines of C source, plus some auxiliary
files adding perhaps another thousand or two),
So, will a specific build of such a project produce a single EXE/DLL//SO file? (The '//' covers the empty extension typical of Linux executables.)
This is all I want for a build.
BTW that 'make' only works on my machine because it happens to be part
of mingw; none of my other C compilers have make.
And as written, it only works for 'cc', which comes with 'gcc'.
On 31/01/2024 20:25, bart wrote:
BTW that 'make' only works on my machine because it happens to be part
of mingw; none of my other C compilers have make.
And as written, it only works for 'cc' which comes with 'gcc'
Doesn't dos/windows have nmake and cl?
1. It lets you split the program into separate parts - generally
separate files. This is essential for scalability for large programs.
2. You can compile modules independently to allow partial builds.
3. Modules generally have some way to specify exported symbols and
facilities that can be used by other modules.
4. Modules can "import" other modules, gaining access to those
modules' exported symbols.
5. Modules provide encapsulation of data, code and namespaces.
6. Modules can be used in a hierarchical system, building big modules
from smaller ones to support larger libraries with many files.
7. Modules provide a higher level concept that can be used by
language tools to see how the whole program fits together or interact
with package managers and librarian tools.
C provides 1, 2, 3, and 4 if you use a "file.c/file.h" organisation.
It provides a limited form of 5 (everything that is not exported is
"static"), but scaling to larger systems is dependent on identifier
prefixes.
You seem to be thinking purely about item 7 above. This is, I think,
common in interpreted languages (where modules have to be found at
run-time, where the user is there but the developer is not).
I've been implementing languages with language-supported modules for
about 12 years.
They generally provide 1, 2, 4, 5, and 7 from your list, and partial
support of 6.
Sure. Programming languages need that if they are to scale at all.
They don't provide 2 (compiling individual modules) because the aim is
a very fast, whole-program compiler.
Okay.
But what you are talking about adding to C is item 7, nothing more. That
is not adding "modules" to C. Your suggestion might be useful to some
people for some projects, but that doesn't make it "modules" in any real
sense.
Given a module scheme, the tool needed to build a whole program should
not need to be told about the names and location of every constituent
module; it should be able to determine that from what's already in the
source code, given only a start point.
Why?
You can't just take some idea that you like, and that is suitable for
the projects you use, and assume it applies to everyone else.
I have no problem telling my build system, or compilers, where the files are. In fact, I'd have a lot of problems if I couldn't do that. It is
not normal development practice to have the source files in the same directory that you use for building the object code and binaries.
Even with independent compilation, you might be able to use that info
to determine dependencies, but you will need that module hierarchy if
you want to compile individual modules.
I already have tools for determining dependencies. What can your
methods do that mine can't?
(And don't bother saying that you can do it without extra tools -
everyone who wants "make" and "gcc" has them on hand. And those who
want an IDE that figures out dependencies for them have a dozen free
options there too. These are all standard tools available to everyone.)
Perhaps I would find your tools worked for a "Hello, world" project.
Maybe they were still okay as it got slightly bigger. Then I'd have something that they could not handle, and I'd reach for make. What
would be the point of using "make" to automate - for example - post-processing of a binary to add a CRC check, but using your tools to handle the build? It's much easier just to use "make" for the whole thing.
You are offering me a fish. I am offering to teach you to fish,
including where to go to catch different kinds of fish. This is really
a no-brainer choice.
On 01/02/2024 16:09, Richard Harnden wrote:
On 31/01/2024 20:25, bart wrote:
BTW that 'make' only works on my machine because it happens to be part
of mingw; none of my other C compilers have make.
And as written, it only works for 'cc' which comes with 'gcc'
Doesn't dos/windows have nmake and cl?
No.
bart <bc@freeuk.com> writes:
On 01/02/2024 16:09, Richard Harnden wrote:
On 31/01/2024 20:25, bart wrote:
BTW that 'make' only works on my machine because it happens to be part
of mingw; none of my other C compilers have make.
And as written, it only works for 'cc' which comes with 'gcc'
Doesn't dos/windows have nmake and cl?
No.
You sure about that? They sure used to have them
as an add-on. IIRC, they're still part of visual studio.
On 01/02/2024 19:25, Scott Lurndal wrote:
bart <bc@freeuk.com> writes:
On 01/02/2024 16:09, Richard Harnden wrote:
On 31/01/2024 20:25, bart wrote:
BTW that 'make' only works on my machine because it happens to
be part of mingw; none of my other C compilers have make.
And as written, it only works for 'cc' which comes with 'gcc'
Doesn't dos/windows have nmake and cl?
No.
You sure about that? They sure used to have them
as an add-on. IIRC, they're still part of visual studio.
Visual Studio is a 10,000MB monster. It might well have it around,
but it's so complex, it's been years since I've even seen discrete
cl.exe and link.exe programs, despite scouring massive, 11-deep
directory structures.
Meanwhile my everyday compilers are 0.4MB for my language and 0.3MB
for C.
I like to keep things simple. Everybody else likes to keep things complicated, and the more the better.
Anyway, acquiring VS just to build one small program would be like
using a giant sledgehammer, 1000 times normal size, to crack a tiny
nut.
On 01/02/2024 15:11, David Brown wrote:
1. It lets you split the program into separate parts - generally
separate files. This is essential for scalability for large
programs.
2. You can compile modules independently to allow partial builds.
3. Modules generally have some way to specify exported symbols
and facilities that can be used by other modules.
4. Modules can "import" other modules, gaining access to those
modules' exported symbols.
5. Modules provide encapsulation of data, code and namespaces.
6. Modules can be used in a hierarchical system, building big
modules from smaller ones to support larger libraries with many
files.
7. Modules provide a higher level concept that can be used by
language tools to see how the whole program fits together or
interact with package managers and librarian tools.
C provides 1, 2, 3, and 4 if you use a "file.c/file.h"
organisation. It provides a limited form of 5 (everything that is
not exported is "static"), but scaling to larger systems is
dependent on identifier prefixes.
You seem to be thinking purely about item 7 above. This is, I
think, common in interpreted languages (where modules have to be
found at run-time, where the user is there but the developer is
not).
I've been implementing languages with language-supported modules
for about 12 years.
They generally provide 1, 2, 4, 5, and 7 from your list, and
partial support of 6.
Sure. Programming languages need that if they are to scale at all.
They don't provide 2 (compiling individual modules) because the
aim is a very fast, whole-program compiler.
Okay.
But what you are talking about adding to C is item 7, nothing more.
That is not adding "modules" to C. Your suggestion might be useful
to some people for some projects, but that doesn't make it
"modules" in any real sense.
Item 7 is my biggest stumbling block to building open source C projects.
While the developer (say, you) knows the necessary info and can
somehow import it into the build system, my job is trying to get it out.
I can't use the intended build system because for one reason or
another it doesn't work, or requires complex dependencies (MSYS,
CMake, MSTOOLS, .configure), or I want to run mcc on it.
That info could trivially be added to the C source code. Nobody
actually needs to use my #pragma scheme; it could simply be a block
comment on one of the modules.
I'm sure that with all your complicated tools, they could dump some
text that looks like:
// List of source files to build the binary cipher.c:
// cipher.c
// hmac.c
// sha2.c
and prepend it to one of the files. Even a README will do.
That wouldn't hurt would it?
Given a module scheme, the tool needed to build a whole program
should not need to be told about the names and location of every
constituent module; it should be able to determine that from
what's already in the source code, given only a start point.
Why?
You can't just take some idea that you like, and that is suitable
for the projects you use, and assume it applies to everyone else.
I have no problem telling my build system, or compilers, where the
files are. In fact, I'd have a lot of problems if I couldn't do
that. It is not normal development practice to have the source
files in the same directory that you use for building the object
code and binaries.
Even with independent compilation, you might be able to use that
info to determine dependencies, but you will need that module
hierarchy if you want to compile individual modules.
I already have tools for determining dependencies. What can your
methods do that mine can't?
(And don't bother saying that you can do it without extra tools -
everyone who wants "make" and "gcc" has them on hand. And those
who want an IDE that figures out dependencies for them have a dozen
free options there too. These are all standard tools available to everyone.)
So, if C were to acquire modules, so that a C compiler could
determine all that for itself (maybe even work out for itself
which modules need recompiling), would you just ignore that feature
and use the same auxiliary methods you have always done?
You don't see that the language taking over task (1) of the things
that makefiles do, and possibly (2) (of the list I posted; repeated
below), can streamline makefiles to make them shorter, simpler,
easier to write and to read, and with fewer opportunities to get
stuff wrong?
That was a rhetorical question. Obviously not.
Perhaps I would find your tools worked for a "Hello, world"
project. Maybe they were still okay as it got slightly bigger.
Then I'd have something that they could not handle, and I'd reach
for make. What would be the point of using "make" to automate -
for example - post-processing of a binary to add a CRC check, but
using your tools to handle the build? It's much easier just to use
"make" for the whole thing.
Because building one binary is a process that should be the job of the
compiler, not some random external tool that knows nothing of the
language or compiler.
Maybe you think makefiles should individually list all the 1000s of functions of a project too?
You are offering me a fish. I am offering to teach you to fish,
including where to go to catch different kinds of fish. This is
really a no-brainer choice.
That analogy makes no sense.
Let me try and explain what I do: I write whole-program compilers.
That means that, each time you do a new build, it will reprocess each
file from source. They use the language's module scheme to know which
files to process.
I tend to build C programs by recompiling all modules too. So I want
to introduce the same convenience I have elsewhere.
It works for me, and I'm sure it could work for others if they didn't
have makefiles forced down their throats and hardwired into their
brains.
----------------------------
(Repost)
I've already covered this in many posts on the subject. But 'make'
deals with three kinds of requirements:
(1) Specifying which modules are to be compiled and combined into
one binary file
(2) Specifying dependencies between all files to allow rebuilding of
that one file with minimal recompilation
(3) Everything else needed in a complex project: running processes to
generate files like config.h, creating multiple binaries,
specifying dependencies between binaries, installation, etc.
My proposal tackles only (1), which is something that many languages
now have the means to deal with themselves. I already stated that (2)
is not covered.
But you may still need makefiles to deal with (3).
If your main requirement /is/ only (1), then my idea is to move the necessary info into the source code, and tackle it with the C
compiler.
On Thu, 1 Feb 2024 18:34:08 +0000
bart <bc@freeuk.com> wrote:
I've already covered this in many posts on the subject. But 'make'
deals with three kinds of requirements:
(1) Specifying which modules are to be compiled and combined into
one binary file
(2) Specifying dependencies between all files to allow rebuilding of
that one file with minimal recompilation
(3) Everything else needed in a complex project: running processes to
generate files like config.h, creating multiple binaries,
specifying dependencies between binaries, installation, etc.
My proposal tackles only (1), which is something that many languages
now have the means to deal with themselves. I already stated that (2)
is not covered.
But you may still need makefiles to deal with (3).
If your main requirement /is/ only (1), then my idea is to move the
necessary info into the source code, and tackle it with the C
compiler.
Your proposal and the needs of David Brown are not necessarily
contradictory.
All you need to do to satisfy him is to add to your compiler an option
for export of dependencies in make-compatible format, i.e. something
very similar to the -MD option of gcc.
Then David could write in his makefile:
out/foo.elf : main_foo.c
	mcc -MD $< -o $@
-include out/foo.d
And then to proceed with automation of his pre- and post-processing needs.
David Brown <david.brown@hesbynett.no> writes:
I'd rather "make -j" (without a number) defaulted to using the number
of cpu cores, as that is a reasonable guess for most compilations.
Agreed, but there might not be a sufficiently portable way to determine
that number.
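For what it's worth, a common though not strictly portable way on
POSIX-style systems (a sketch; _SC_NPROCESSORS_ONLN is an extension
provided by Linux, the BSDs and macOS rather than a POSIX requirement):

#include <stdio.h>
#include <unistd.h>    /* sysconf() - POSIX, not ISO C */

int main(void)
{
    long n = sysconf(_SC_NPROCESSORS_ONLN);  /* -1 if unsupported */
    printf("%ld\n", n < 1 ? 1 : n);          /* fall back to 1 */
    return 0;
}

This is essentially what nproc reports; Windows would need
GetSystemInfo instead.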
On 01/02/2024 19:34, bart wrote:
You don't see that the language taking over task (1) of the things
that makefiles do, and possibly (2) (of the list I posted; repeated
below), can streamline makefiles to make them shorter, simpler, easier
to write and to read, and with fewer opportunities to get stuff wrong?
That was a rhetorical question. Obviously not.
I've nothing against shorter or simpler makefiles. But as far as I can
see, you are just moving the same information from a makefile into the C files.
Indeed, you are duplicating things - now your C files have to have
"#pragma module this, #pragma module that" in addition to having
"#include this.h, #include that.h". With my makefiles, all the "this"
and "that" is found automatically - writing the includes in the C code
is sufficient.
Perhaps I would find your tools worked for a "Hello, world" project.
Maybe they were still okay as it got slightly bigger. Then I'd have
something that they could not handle, and I'd reach for make. What
would be the point of using "make" to automate - for example -
post-processing of a binary to add a CRC check, but using your tools
to handle the build? It's much easier just to use "make" for the
whole thing.
Because building one binary is a process that should be the job of the
compiler, not some random external tool that knows nothing of the
language or compiler.
No, it is the job of the linker.
Compiling is the job of the compiler.
Controlling the build is the job of the build system. I don't see monolithic applications as an advantage.
Maybe you think makefiles should individually list all the 1000s of
functions of a project too?
You are offering me a fish. I am offering to teach you to fish,
including where to go to catch different kinds of fish. This is
really a no-brainer choice.
That analogy makes no sense.
Let me try and explain what I do: I write whole-program compilers.
That means that, each time you do a new build, it will reprocess each
file from source. They use the language's module scheme to know which
files to process.
Surely most sensibly organised projects could then be built with :
bcc *.c -o prog.exe
I mean, that's what I can do with gcc if I had something that doesn't
need other flags (which is utterly impractical for my work).
/Nobody/ has makefiles forced on them. People use "make" because it is convenient, and it works.
But I have no interest in changing to something vastly more limited and
which adds nothing at all.
On 01/02/2024 21:23, Michael S wrote:
On Thu, 1 Feb 2024 18:34:08 +0000
I've already covered this in many posts on the subject. But 'make'
deals with three kinds of requirements:
(1) Specifying which modules are to be compiled and combined into
one binary file
(2) Specifying dependencies between all files to allow rebuilding of
that one file with minimal recompilation
(3) Everything else needed in a complex project: running processes to
generate files like config.h, creating multiple binaries,
specifying dependencies between binaries, installation, etc.
My proposal tackles only (1), which is something that many
languages now have the means to deal with themselves. I already
stated that (2) is not covered.
But you may still need makefiles to deal with (3).
If your main requirement /is/ only (1), then my idea is to move the
necessary info into the source code, and tackle it with the C
compiler.
Your proposal and the needs of David Brown are not necessarily
contradictory.
All you need to do to satisfy him is to add to your compiler an
option for export of dependencies in make-compatible format, i.e.
something very similar to the -MD option of gcc.
Then David could write in his makefile:
out/foo.elf : main_foo.c
	mcc -MD $< -o $@
-include out/foo.d
And then to proceed with automation of his pre- and post-processing
needs.
But then I'd still be using "make", and Bart would not be happy.
And "gcc -MD" does not need any extra #pragmas, so presumably neither
would an implementation of that feature in bcc (or mcc or whatever).
So Bart's new system would disappear entirely.
I am, however, considering CMake (which works at a
higher level, and outputs makefiles, ninja files or other project
files).
It appears to have some disadvantages compared to my makefiles,
such as needing to be run as an extra step when files are added to
or removed from a project or dependencies are changed, but that
doesn't happen too often, and its integration with other tools and
projects might make it an overall win.
"nmake" is MS's version of "make" ...
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
nproc(1) is part of the GNU Core Utilities
<manpages.debian.org/1/nproc.1.html>.
And GNU make is not, so it's possible that a system might have make but
not nproc.
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Thu, 01 Feb 2024 00:29:23 GMT, Scott Lurndal wrote:
How would you display an manpage using nroff markup from an
application?
Much safer:
subprocess.run \
(
args = ("man", os.path.expandvars("${INSTALL_LOC}/man/topic.man"))
)
You are aware you are posting to comp.lang.c, right?
On Thu, 01 Feb 2024 15:24:00 -0800, Keith Thompson wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
nproc(1) is part of the GNU Core Utilities
<manpages.debian.org/1/nproc.1.html>.
And GNU make is not, so it's possible that a system might have make but
not nproc.
While that is theoretically possible, I somehow think such an installation would feel to the typical *nix user somewhat ... crippled.
Particularly since the “install” command is part of coreutils.
On 01/02/2024 21:34, David Brown wrote:
What?
It works for me, and I'm sure it could work for others if they didn't
have makefiles forced down their throats and hardwired into their
brains.
/Nobody/ has makefiles forced on them. People use "make" because it
is convenient, and it works. If something better comes along, and it
is better enough to overcome the familiarity momentum, people will use
that.
You have total control of your programming environment and never have to
consider anybody else? For hobby programming you do, in a way. Not if you
want other people to use your stuff. But you can always say that the fun
of doing things exactly your way outweighs the fun of getting downloads.
But for professional or academic programming, often you'll find you have
to use make. You don't have a choice. Either someone else took the
decision, or there are so many other people who expect that the build
shall be via make that you have no real alternative.
Now in one study, someone had wanted to do a survey of genetic sequence analysis software. They reported no results for half the programs,
because they had attempted to build them, and failed. They didn't say,
but it's a fair bet that most of those build systems used make. The
software distribution system is a disaster and badly needs fixing.
But there are lots of caveats. Bart's system might be better, but as
you say it needs traction. I'd be reluctant to evangelise for it and get
everyone to use it at work, because it might prove to have major
drawbacks, and then I'd get the blame.
On Thu, 1 Feb 2024 22:38:13 +0100
David Brown <david.brown@hesbynett.no> wrote:
And then to proceed with automation of his pre- and post-processing
needs.
But then I'd still be using "make", and Bart would not be happy.
And "gcc -MD" does not need any extra #pragmas, so presumably neither
would an implementation of that feature in bcc (or mcc or whatever).
So Bart's new system would disappear entirely.
Bart spares you from managing list(s) of objects in your makefile and
from writing arcane helper macros.
Yes, I know, you copy&paste arcane macros from project to project, but
you had to write them n years ago and that surely was not easy.
I do. You type:
cc prog
without knowing or caring whether the program contains that one module,
or there are 99 more.
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Thu, 01 Feb 2024 15:28:03 -0800, Keith Thompson wrote:
The C standard doesn't specify file extensions, either for source
files or for files included with #include.
It does for the standard library includes, though.
Strictly speaking, it doesn't specify that the standard library headers
are files.
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Thu, 01 Feb 2024 17:42:32 -0800, Keith Thompson wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Thu, 01 Feb 2024 15:28:03 -0800, Keith Thompson wrote:
The C standard doesn't specify file extensions, either for source
files or for files included with #include.
It does for the standard library includes, though.
Strictly speaking, it doesn't specify that the standard library headers
are files.
From the C99 spec, page 149:
6.10.2 Source file inclusion
Constraints
A #include directive shall identify a header or source file that
can be processed by the implementation.
...
3 A preprocessing directive of the form
# include "q-char-sequence" new-line
causes the replacement of that directive by the entire contents of
the source file identified by the specified sequence between the "
delimiters. The named source file is searched for in an
implementation-defined manner.
So you see, the spec very explicitly uses the term “file”.
<https://www.open-std.org/JTC1/SC22/WG14/www/docs/n869/>
Yes, but not in reference to the standard headers.
A #include directive with <> searches for a "header", which is not
stated to be a file. A #include directive with "" searches for a file
in an implementation-defined manner; if that search fails, it tries
again as if <> had been used.
References to standard headers (stdio.h et al) always use the <> syntax.
You can write `#include "stdio.h"` if you like, but it risks picking up
a file with the same name instead of the standard header (which *might*
be what you want).
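For example:

#include <stdio.h>   /* always finds the standard header */
#include "stdio.h"   /* implementation-defined search - commonly the
                        directory of the including file first - falling
                        back to <stdio.h> if that search fails */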
BTW, the n1256.pdf draft is a close approximation to the C99 standard;
it consists of the published standard with the three Technical
Corrigenda merged into it. The n1570.pdf draft is the last publicly
released draft before C11 was published, and is close enough to C11 for
most purposes.
On 01/02/2024 21:34, David Brown wrote:
What?
It works for me, and I'm sure could work for others if they didn't
have makefiles forced down their throats and hardwired into their
brains.
/Nobody/ has makefiles forced on them. People use "make" because it
is convenient, and it works. If something better comes along, and it
is better enough to overcome the familiarity momentum, people will use
that.
You have total control of your programming environment and never have to consider anybody else? For hobby programming you do, in a way. Not if you
want other people to use your stuff. But you can always say that the fun
of doing things exactly your way outweighs the fun of getting downloads.
On Thu, 1 Feb 2024 22:34:36 +0100, David Brown wrote:
I am, however, considering CMake (which works at a
higher level, and outputs makefiles, ninja files or other project
files).
Ninja was created as an alternative to Make.
Basically, if your Makefiles
are going to be generated by a meta-build system like CMake or Meson, then they don't need to support the kinds of niceties that facilitate writing them by hand. So you strip it right down to the bare-bones functionality, which makes your builds fast while consuming minimal resources, and that
is Ninja.
It appears to have some disadvantages compared to my makefiles,
such as needing to be run as an extra step when files are added to or
removed from a project or dependencies are changed, but that doesn't
happen too often, and its integration with other tools and projects
might make it an overall win.
Some are proposing Meson as an alternative to CMake. I think they are
saying that the fact that its scripting language is not fully Turing-equivalent is an advantage.
Me, while I think the CMake language can be a little clunky in places, I still think having Turing-equivalence is better than not having it. ;)
On 01/02/2024 21:34, David Brown wrote:
On 01/02/2024 19:34, bart wrote:
You don't see that the language taking over task (1) of the things
that makefiles do, and possibly (2) (of the list I posted; repeated
below), can streamline makefiles to make them shorter, simpler,
easier to write and to read, and with fewer opportunities to get
stuff wrong?
That was a rhetorical question. Obviously not.
I've nothing against shorter or simpler makefiles. But as far as I
can see, you are just moving the same information from a makefile into
the C files.
Indeed, you are duplicating things - now your C files have to have
"#pragma module this, #pragma module that" in addition to having
"#include this.h, #include that.h". With my makefiles, all the "this"
and "that" is found automatically - writing the includes in the C code
is sufficient.
I don't think so. Seeing:
#include "file.h"
doesn't necessarily mean there is a matching "file.c". It might not
exist, or the header might be for some external library, or maybe it
does exist but in a different location.
Or maybe some code may use a file "fred.c", which needs to be submitted
to the compiler, but for which there is either no header used, or a header file with a different name.
As I said, C's uses of .h and .c files are chaotic.
Did you have in mind using gcc's -MM option? For my 'cipher.c' demo,
that only gives a set of header names. Missing are hmac.c and sha2.c.
If I try it on lua.c, it gives me only 5 header files; the project
comprises 33 .c files and 27 .h files.
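To make that concrete (the output here is hypothetical, but this is its shape):

gcc -MM cipher.c
cipher.o: cipher.c hmac.h sha2.h

-MM emits a make rule listing the headers the preprocessor saw; nothing
in that output says that hmac.h is implemented by a file hmac.c which
must also be compiled and linked.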
Perhaps I would find your tools worked for a "Hello, world" project.
Maybe they were still okay as it got slightly bigger. Then I'd have
something that they could not handle, and I'd reach for make. What
would be the point of using "make" to automate - for example -
post-processing of a binary to add a CRC check, but using your tools
to handle the build? It's much easier just to use "make" for the
whole thing.
Because building one binary is a process that should be the job of the
compiler, not some random external tool that knows nothing of the
language or compiler.
No, it is the job of the linker.
There is where you're still stuck in the past.
I first got rid of a formal 'linker' about 40 years ago. I got rid of
the notion of combining independently compiled modules into an
executable a decade ago.
But I suspect you don't understand what a 'whole-program compiler' does:
* It means that for each binary, all sources are recompiled at the same
time to create it
* It doesn't mean that an application can only comprise one binary
* It moves the compilation unit granularity from a module to a single
EXE or DLL file
* Interfaces (in the case of a lower-level language) are moved from
inter-module to inter-program. The boundaries are between one program or
library and another, not between modules.
A language which claims to have a module system, but still compiles a
module at a time, will probably still have discrete inter-module
interfaces, although they may be handled automatically.
/Nobody/ has makefiles forced on them. People use "make" because it
is convenient, and it works.
BUT IT DOESN'T.
It fails a lot of the time on Windows, and the makefiles are too
complicated to figure out why.
But I have no interest in changing to something vastly more limited
and which adds nothing at all.
That's right; it adds nothing, but it takes a lot away! Like a lot of
failure points.
On 02/02/2024 01:26, Malcolm McLean wrote:
On 01/02/2024 21:34, David Brown wrote:
What?
It works for me, and I'm sure could work for others if they didn't
have makefiles forced down their throats and hardwired into their
brains.
/Nobody/ has makefiles forced on them. People use "make" because it
is convenient, and it works. If something better comes along, and it
is better enough to overcome the familiarity momentum, people will
use that.
You have total control of your programming environment and never have
to consider anybody else? For hobby programming you do, in a way. Not
if you want other people to use your stuff. But you can always say that
the fun of doing things exactly your way outweighs the fun of getting
downloads.
Okay, none of the people talking about "make" /here/ had it forced on
them for the uses they are talking about /here/.
Yes, I have a very large degree of control over my programming
environment - because I work in a company where employees get to make
the decisions that they are best qualified to make, and management's job
is to support them. One of the important factors I consider is
interaction with colleagues and customers, for which "make" works well.
And while people may be required to use make, or particular compilers,
or OS's, no one is forced to /like/ a tool or find it useful. I believe that when people here say they like make, or find it works well for
them, or that it can handle lots of different needs, or that they know
of nothing better for their requirements, they are being honest about
that. If they didn't like it, they would say.
The only person here whom we can be absolutely sure does /not/ have
"make" forced upon them for their development, is Bart. And he is the
one who complains about it.
On 2/1/24 23:29, bart wrote:
I do. You type:
cc prog
without knowing or caring whether the project contains just that one
module, or there are 99 more.
I also do. You type:
make prog
without knowing or caring whether the project contains just that one
module, or there are 51 more.
But don't hold your breath waiting for something that will replace
make, or attract users of any other build system.
You're the one who needs to first write a pile of garbage within a
makefile in order for you to do:
make prog
Below is the makefile needed to build lua 5.4, which is a project of
only 35 C modules. Simple, isn't it?
---------------------------------
# Makefile for building Lua
# See ../doc/readme.html for installation and customization instructions.
# == CHANGE THE SETTINGS BELOW TO SUIT YOUR ENVIRONMENT
On 01/02/2024 23:55, Michael S wrote:
On Thu, 1 Feb 2024 22:38:13 +0100
David Brown <david.brown@hesbynett.no> wrote:
On 01/02/2024 21:23, Michael S wrote:
On Thu, 1 Feb 2024 18:34:08 +0000
Your proposal and the needs of David Brown are not necessarily
contradictory.
All you need to do to satisfy him is to add to your compiler an
option for export of dependencies in make-compatible format, i.e.
something very similar to -MD option of gcc.
Then David could write in his makefile:
out/foo.elf : main_foo.c
mcc -MD $< -o $@
-include out/foo.d
And then to proceed with automation of his pre- and post-processing
needs.
But then I'd still be using "make", and Bart would not be happy.
And "gcc -MD" does not need any extra #pragmas, so presumably neither
would an implementation of that feature in bcc (or mcc or whatever).
So Bart's new system would disappear entirely.
Bart spares you from managing list(s) of objects in your makefile and
from writing arcane helper macros.
Yes, I know, you copy&paste arcane macros from project to project, but
you had to write them n years ago and that surely was not easy.
Google "makefile automatic dependencies", then adapt to suit your own needs. Re-use the same makefile time and again.
Yes, some of the functions I have in my makefiles are a bit hairy, and
some of the command line options for gcc are a bit complicated. They
are done now.
If there had been an easier way than this, which still let me do what I
need (Bart's system does not), which is popular enough that you can
easily google for examples, blogs, and tutorials, then I'd have been
happy to use that at the time. I won't change to something else unless
it gives me significant additional benefits.
People smarter and more experienced than Bart have been trying to invent better replacements for "make" for many decades. None have succeeded.
Some build systems are better in some ways, but nothing has come close
to covering the wide range of features and uses of make, or gaining hold outside a particular niche. Everyone who has ever made serious use of "make" knows it has many flaws, unnecessary complications, limitations
and inefficiencies. Despite that, it is the best we have.
With Bart's limited knowledge and experience,
and deeply ingrained
prejudices and misunderstandings, the best we can hope for is something
that works well enough for some simple cases of C programs.
More
realistically, it will work for Bart's use alone.
And that, of course, is absolutely fine. No one is paying Bart to write
a generic build system, or something of use to anyone else. He is free
to write exactly what he wants, in the way he wants, and if he ends up
with a tool that he finds useful himself, that is great. If he ends up
with something that at least some other people find useful, that is even
better, and I wish him luck with his work.
But don't hold your breath waiting for something that will replace make,
or attract users of any other build system.
On 01/02/2024 23:29, bart wrote:
As I said, C's uses of .h and .c files are chaotic.
My uses of .h and .c files are not chaotic.
I first got rid of a formal 'linker' about 40 years ago. I got rid of
the notion of combining independently compiled modules into an
executable a decade ago.
No, you built a monolithic tool that /included/ the linker.
That's fine
for niche tools that are not intended to work with anything else. Most people work with many tools - that's why we have standards, defined file formats, and flexible tools with wide support.
Other people got rid of monolithic tools forty years ago when they
realised it was a terrible way to organise things.
But I suspect you don't understand what a 'whole-program compiler' does:
I know exactly what it does. I am entirely without doubt that I know
the point and advantages of them better than you do - the /real/ points
and advantages, not some pathetic "it means I don't have to use that
horrible nasty make program" reason.
* It means that for each binary, all sources are recompiled at the same
time to create it
No, it does not.
* It doesn't mean that an application can only comprise one binary
Correct.
* It moves the compilation unit granularity from a module to a single
EXE or DLL file
No, it does not.
In real-world whole program compilation systems, the focus is on
inter-module optimisations. Total build times are expected to go /up/. Build complexity can be much higher, especially for large programs. It
is more often used for C++ than C.
The main point of a lot of whole-program compilation is to allow
cross-module optimisation. It means you can have "access" functions
hidden away in implementation files so that you avoid global variables
or inter-dependencies between modules, but now they can be inlined across modules so that you have no overhead or costs for this. It means you
can write code that is more structured and modular, with different teams handling different parts, and with layers of abstractions, but when you
pull it all together into one whole-program build, the run-time costs
and overhead for this all disappear. And it means lots of checks and
static analysis can be done across the whole program.
For such programs, each translation unit is still compiled separately,
but the "object" files contain internal data structures and analysis information, rather than generated code. Lots of the work is done by
this point, with inter-procedural optimisations done within the unit.
These compilations will be done as needed, in parallel, under the
control of a build system. Then they are combined for the linking and link-time optimisation which fits the parts together. Doing this in a scalable way is hard, and the subject of a lot of research, as you need
to partition it into chunks that can be handled in parallel on multiple
cpu cores (or even distributed amongst servers). Once you have parts of code that are ready, they are handed on to backend compilers that do
more optimisation and generate the object code, and this in turn is
linked (sometimes incrementally in parts, again aiming at improving
parallel building and scalability).
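A minimal C illustration of the "access function" point, using gcc's
-flto as the whole-program mechanism (file names hypothetical):

/* counter.c - the variable is hidden behind an access function */
static long count;
long counter_next(void) { return ++count; }

/* main.c */
extern long counter_next(void);
int main(void) { return (int)counter_next(); }

gcc -O2 -flto -c counter.c main.c
gcc -O2 -flto counter.o main.o -o prog

With -flto the .o files carry the compiler's intermediate representation
rather than finished code, so counter_next() can be inlined into main()
at link time even though it lives in another translation unit; without
LTO the call would survive as a real function call.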
/You/ can't work it, but you excel at failing to get things working. You
have a special gift - you just have to look at a computer with tools
that you didn't write yourself, and it collapses.
On 01/02/2024 23:29, bart wrote:
On 01/02/2024 21:34, David Brown wrote:
On 01/02/2024 19:34, bart wrote:
You don't see that the language taking over task (1) of the
things that makefiles do, and possibly (2) (of the list I posted;
repeated below), can streamline makefiles to make them shorter,
simpler, easier to write and to read, and with fewer
opportunities to get stuff wrong?
That was a rhetorical question. Obviously not.
I've nothing against shorter or simpler makefiles. But as far as
I can see, you are just moving the same information from a
makefile into the C files.
Indeed, you are duplicating things - now your C files have to have
"#pragma module this, #pragma module that" in addition to having
"#include this.h, #include that.h". With my makefiles, all the
"this" and "that" is found automatically - writing the includes in
the C code is sufficient.
I don't think so. Seeing:
#include "file.h"
doesn't necessarily mean there is a matching "file.c". It might not
exist, or the header might be for some external library, or maybe
it does exist but in a different location.
As I said, you are duplicating things.
For my builds, I do not have anywhere that I need to specify "file.c".
Or maybe some code may use a file "fred.c", which needs to be
submitted to the compiler, but for which there is either no header
used, or uses a header file with a different name.
As I said, C's uses of .h and .c files are chaotic.
My uses of .h and .c files are not chaotic.
Maybe you can't write well-structured C programs. Certainly not
everyone can. (And /please/ do not give another list of open source programs that you don't like. I didn't write them. I can tell you
how and why /I/ organise my projects and makefiles - I don't speak
for others.)
Did you have in mind using gcc's -MM option? For my 'cipher.c'
demo, that only gives a set of header names. Missing are hmac.c
and sha2.c.
I use makefiles where gcc's "-M" options are part of the solution -
not the whole solution.
If I try it on lua.c, it gives me only 5 header files; the project comprises 33 .c files and 27 .h files.
I don't care. I did not write lua.
But I /have/ integrated lua with one of my projects, long ago. It
fit into my makefile format without trouble - I added the lua
directory as a subdirectory of my source directory, and that was all
that was needed.
Perhaps I would find your tools worked for a "Hello, world"
project. Maybe they were still okay as it got slightly bigger.
Then I'd have something that they could not handle, and I'd
reach for make. What would be the point of using "make" to
automate - for example - post-processing of a binary to add a
CRC check, but using your tools to handle the build? It's much
easier just to use "make" for the whole thing.
Because building one binary is a process that should be the job of the
compiler, not some random external tool that knows nothing of the
language or compiler.
No, it is the job of the linker.
There is where you're still stuck in the past.
I first got rid of a formal 'linker' about 40 years ago. I got rid
of the notion of combining independently compiled modules into an executable a decade ago.
No, you built a monolithic tool that /included/ the linker. That's
fine for niche tools that are not intended to work with anything
else. Most people work with many tools - that's why we have
standards, defined file formats, and flexible tools with wide support.
Other people got rid of monolithic tools forty years ago when they
realised it was a terrible way to organise things.
But I suspect you don't understand what a 'whole-program compiler'
does:
I know exactly what it does. I am entirely without doubt that I know
the point and advantages of them better than you do - the /real/
points and advantages, not some pathetic "it means I don't have to
use that horrible nasty make program" reason.
* It means that for each binary, all sources are recompiled at the
same time to create it
No, it does not.
* It doesn't mean that an application can only comprise one binary
Correct.
* It moves the compilation unit granularity from a module to a
single EXE or DLL file
No, it does not.
* Interfaces (in the case of a lower-level language) are moved from
inter-module to inter-program. The boundaries are between one
program or library and another, not between modules.
Correct.
A language which claims to have a module system, but still compiles
a module at a time, will probably still have discrete inter-module interfaces, although they may be handled automatically.
Correct.
In real-world whole program compilation systems, the focus is on inter-module optimisations. Total build times are expected to go
/up/. Build complexity can be much higher, especially for large
programs. It is more often used for C++ than C.
The main point of a lot of whole-program compilation is to allow cross-module optimisation. It means you can have "access" functions
hidden away in implementation files so that you avoid global
variables or inter-dependencies between modules, but now they can be
inlined across modules so that you have no overhead or costs for this.
It means you can write code that is more structured and modular,
with different teams handling different parts, and with layers of abstractions, but when you pull it all together into one
whole-program build, the run-time costs and overhead for this all
disappear. And it means lots of checks and static analysis can be
done across the whole program.
For such programs, each translation unit is still compiled
separately, but the "object" files contain internal data structures
and analysis information, rather than generated code. Lots of the
work is done by this point, with inter-procedural optimisations done
within the unit. These compilations will be done as needed, in
parallel, under the control of a build system. Then they are
combined for the linking and link-time optimisation which fits the
parts together. Doing this in a scalable way is hard, and the
subject of a lot of research, as you need to partition it into chunks
that can be handled in parallel on multiple cpu cores (or even
distributed amongst servers). Once you have parts of code that are
ready, they are handed on to backend compilers that do more
optimisation and generate the object code, and this in turn is linked (sometimes incrementally in parts, again aiming at improving parallel building and scalability.
You go to all this effort because you are building software that is
used by millions of people, and your build effort is minor compared
to the total improvements for all users combined. Or you do it
because you are building speed-critical software. Or you want the
best static analysis you can get, and want that done across modules.
Or you are building embedded systems that need to be as efficient as possible.
You don't do it because you find "make" ugly.
It is also very useful on old-fashioned microcontrollers with
multiple banks for data ram and code memory, and no good data stack
access - the compiler can do large-scale lifetime analysis and
optimise placement and the re-use of the very limited ram.
/Nobody/ has makefiles forced on them. People use "make" because
it is convenient, and it works.
BUT IT DOESN'T.
IT DOES WORK.
People use it all the time.
It fails a lot of the time on Windows, and the makefiles are too
complicated to figure out why.
People use it all the time on Windows.
Even Microsoft ships its own version of make, "nmake.exe", and has
done for decades.
/You/ can't work it, but you excel at failing to get things working.
You have a special gift - you just have to look at a computer with
tools that you didn't write yourself, and it collapses.
But I have no interest in changing to something vastly more
limited and which adds nothing at all.
That's right; it adds nothing, but it takes a lot away! Like a lot
of failure points.
Like pretty much everything I need.
On Fri, 2 Feb 2024 09:02:15 +0100
David Brown <david.brown@hesbynett.no> wrote:
But don't hold your breath waiting for something that will replace
make, or attract users of any other build system.
It seems you have already forgotten the context of my post that started
this short sub-thread.
BTW, I would imagine that Stu Feldman, if he is still in good health,
would find talking with Bart more entertaining than talking with you.
I think you English speakers call it birds of a feather.
On 02/02/2024 09:47, David Brown wrote:
On 01/02/2024 23:29, bart wrote:
As I said, C's uses of .h and .c files are chaotic.
My uses of .h and .c files are not chaotic.
We can't write tools that only work for careful users. Any
open-source project I want to build WILL be chaotic.
We can however write languages where you are forced to be more
disciplined. Mine doesn't have the equivalent of .h files for example.
However this is about C.
I first got rid of a formal 'linker' about 40 years ago. I got rid
of the notion of combining independently compiled modules into an
executable a decade ago.
No, you built a monolithic tool that /included/ the linker.
No, I ELIMINATED the linker.
And in the past, I wrote a program called a Loader, much simpler than
a linker, and very fast (it had to be as I worked with floppies).
That's fine
for niche tools that are not intended to work with anything else.
Most people work with many tools - that's why we have standards,
defined file formats, and flexible tools with wide support.
Other people got rid of monolithic tools forty years ago when they realised it was a terrible way to organise things.
But I suspect you don't understand what a 'whole-program compiler'
does:
I know exactly what it does. I am entirely without doubt that I
know the point and advantages of them better than you do
You can't create a language devised for whole-program compilation,
and implement a full-stack compiler for it, without learning a lot
about the ins and outs.
So I suspect I know a bit more about it than you do.
Probably you're mixing this up with whole-program optimisation.
- the /real/ points
and advantages, not some pathetic "it means I don't have to use
that horrible nasty make program" reason.
* It means that for each binary, all sources are recompiled at the
same time to create it
No, it does not.
That's not a whole-program compiler then. Not if half the modules
were compiled last week!
* It doesn't mean that an application can only comprise one binary
Correct.
* It moves the compilation unit granularity from a module to a
single EXE or DLL file
No, it does not.
Again, it can't be a whole-program compiler if it can compile modules independently.
In real-world whole program compilation systems, the focus is on inter-module optimisations. Total build times are expected to go
/up/. Build complexity can be much higher, especially for large
programs. It is more often used for C++ than C.
The main point of a lot of whole-program compilation is to allow cross-module optimisation. It means you can have "access"
functions hidden away in implementation files so that you avoid
global variables or inter-dependencies between modules, but now
they can be inline across modules so that you have no overhead or
costs for this. It means you can write code that is more
structured and modular, with different teams handling different
parts, and with layers of abstractions, but when you pull it all
together into one whole-program build, the run-time costs and
overhead for this all disappear. And it means lots of checks and
static analysis can be done across the whole program.
For such programs, each translation unit is still compiled
separately, but the "object" files contain internal data structures
and analysis information, rather than generated code. Lots of the
work is done by this point, with inter-procedural optimisations
done within the unit. These compilations will be done as needed, in parallel, under the control of a build system. Then they are
combined for the linking and link-time optimisation which fits the
parts together. Doing this in a scalable way is hard, and the
subject of a lot of research, as you need to partition it into
chunks that can be handled in parallel on multiple cpu cores (or
even distributed amongst servers). Once you have parts of code
that are ready, they are handed on to backend compilers that do
more optimisation and generate the object code, and this in turn is
linked (sometimes incrementally in parts, again aiming at improving parallel building and scalability).
You've just described a tremendously complex way to do whole-program analysis.
There are easier ways. The C transpiler I use takes a project of
dozens of modules in my language, and produces a single C source file
which will form one EXE or one DLL file.
Now any ordinary optimising C compiler has a view of the entire
program and can do wider optimisations (but that view does not span
multiple EXE/DLL files.)
/You/ can't work it, but you excel at failing to get things
working. You have a special gift - you just have to look at a
computer with tools that you didn't write yourself, and it
collapses.
Yes, I do. I'm like that kid poking fun at the emperor's new clothes;
I'm just stating what I see. But in one way it is hilarious seeing
you lot defend programs like 'as' to the death.
Why not just admit that it is a POS that you've had to learn to live
with, instead of trying to make out it is somehow superior?
On 02/02/2024 14:28, Michael S wrote:
On Fri, 2 Feb 2024 09:02:15 +0100
David Brown <david.brown@hesbynett.no> wrote:
But don't hold your breath waiting for something that will replace
make, or attract users of any other build system.
It seems you have already forgotten the context of my post that started
this short sub-thread.
That is absolutely possible. It was not intentional, but the number
of posts in recent times has been overwhelming. I apologise if I
have misinterpreted what you wrote.
BTW, I would imagine that Stu Feldman, if he is still in good
health, would find talking with Bart more entertaining than talking
with you.
I have no idea who that is, so I'll take your word for it.
I think you English speakers call it birds of a feather.
On 02/02/2024 08:02, David Brown wrote:
On 01/02/2024 23:55, Michael S wrote:
On Thu, 1 Feb 2024 22:38:13 +0100
David Brown <david.brown@hesbynett.no> wrote:
On 01/02/2024 21:23, Michael S wrote:
On Thu, 1 Feb 2024 18:34:08 +0000
Your proposal and the needs of David Brown are not necessarily
contradictory.
All you need to do to satisfy him is to add to your compiler an
option for export of dependencies in make-compatible format, i.e.
something very similar to -MD option of gcc.
Then David could write in his makefile:
out/foo.elf : main_foo.c
mcc -MD $< -o $@
-include out/foo.d
And then to proceed with automation of his pre- and post-processing
needs.
But then I'd still be using "make", and Bart would not be happy.
And "gcc -MD" does not need any extra #pragmas, so presumably neither
would an implementation of that feature in bcc (or mcc or whatever).
So Bart's new system would disappear entirely.
Bart spares you from managing list(s) of objects in your makefile and
from writing arcane helper macros.
Yes, I know, you copy&paste arcane macros from project to project, but
you had to write them n years ago and that surely was not easy.
Google "makefile automatic dependencies", then adapt to suit your own
needs. Re-use the same makefile time and again.
Yes, some of the functions I have in my makefiles are a bit hairy, and
some of the command line options for gcc are a bit complicated. They
are done now.
If there had been an easier way than this, which still let me do what
I need (Bart's system does not), which is popular enough that you can
easily google for examples, blogs, and tutorials, then I'd have been
happy to use that at the time. I won't change to something else
unless it gives me significant additional benefits.
People smarter and more experienced than Bart have been trying to
invent better replacements for "make" for many decades. None have
succeeded. Some build systems are better in some ways, but nothing has
come close to covering the wide range of features and uses of make, or
gaining hold outside a particular niche. Everyone who has ever made
serious use of "make" knows it has many flaws, unnecessarily
complications, limitations and inefficiencies. Despite that, it is
the best we have.
With Bart's limited knowledge and experience,
That's true: only 47 years in computing, and 42 years of evolving, implementing and running my systems language.
What can I possibly know about compiling source files of a lower-level language into binaries?
That is another aspect you might do well to learn how to do: KISS. (Yes
I can be a patronising fuck too.)
And that, of course, is absolutely fine. No one is paying Bart to
write a generic build system, or something of use to anyone else. He
is free to write exactly what he wants, in the way he wants, and if he
ends up with a tool that he finds useful himself, that is great. If
he ends up with something that at least some other people find useful,
that is even better, and I wish him luck with his work.
But don't hold your breath waiting for something that will replace
make, or attract users of any other build system.
Jesus. And you seem determined to ignore everything I write, or have
a short memory.
I'm not suggesting replacing make, only to reduce its involvement.
Twice I posted a list of 3 things that make takes care of; I'm looking
at replacing just 1 of those things, the one which for me is most critical.
On Fri, 2 Feb 2024 14:14:31 +0000
bart <bc@freeuk.com> wrote:
You've just described a tremendously complex way to do whole-program
analysis.
But it proves that your statement above (it can't be a whole-program
compiler if it can compile modules independently) is false.
There are easier ways. The C transpiler I use takes a project of
dozens of modules in my language, and produces a single C source file
which will form one EXE or one DLL file.
Now any ordinary optimising C compiler has a view of the entire
program and can do wider optimisations (but that view does not span
multiple EXE/DLL files.)
If the program in question is really big then there is a good chance
that your method will expose internal limits of the back-end compiler.
On 02/02/2024 08:02, David Brown wrote:
With Bart's limited knowledge and experience,
That's true: only 47 years in computing, and 42 years of evolving, implementing and running my systems language.
What can I possibly know about compiling source files of a lower-level language into binaries?
How many assemblers, compilers, linkers, and interpreters have /you/
written?
It certainly won't for your stuff, or SL's, or JP's, or TR's, as you
all seem to delight in wheeling out the most complex scenarios you can find.
That is another aspect you might do well to learn how to do: KISS. (Yes
I can be a patronising fuck too.)
I'm not suggesting replacing make, only to reduce its involvement.
Actually, nowadays monolithic tools are a solid majority in programming.
I mean, programming in general, not C/C++/Fortran programming which by
itself is a [sizable] minority.
Even in C++, a majority uses non-monolithic tools well-hidden behind a
front end (IDE) that makes them indistinguishable from monolithic ones.
Yes, I do. I'm like that kid poking fun at the emperor's new clothes;
I'm just stating what I see. But in one way it is hilarious seeing you
lot defend programs like 'as' to the death.
Why not just admit that it is a POS that you've had to learn to live
with, instead of trying to make out it is somehow superior?
On 02/02/2024 15:14, bart wrote:
Yes, I do. I'm like that kid poking fun at the emperor's new clothes;
I'm just stating what I see. But in one way it is hilarious seeing you
lot defend programs like 'as' to the death.
No, /you/ are the emperor in this analogy. Well, you are actually the
kid - except you are the kid with no clothes who /thinks/ he's an emperor.
Why not just admit that it is a POS that you've had to learn to live
with, instead of trying to make out it is somehow superior?
The whole world is out of step, except Bart.
Has it never occurred to you that when you are in disagreement with
everyone, /you/ might be the one that is wrong? I think you suffer from
the "misunderstood genius" myth. It's surprisingly common amongst
people who have invested heavily in going their own way, against common knowledge or common practice. It's a sort of psychological defence mechanism against realising you've been wrong all this time.
Has it ever occurred to YOU that the world is more than Unix and make
and massive compilers like gcc and clang?
bart <bc@freeuk.com> writes:
On 02/02/2024 08:02, David Brown wrote:
With Bart's limited knowledge and experience,
That's true: only 47 years in computing, and 42 years of evolving,
implementing and running my systems language.
It's pretty clear that you have very limited knowledge
and experience with unix, make, and pretty much
anything that isn't your soi-disant compiler.
What can I possibly know about compiling source files of a lower-level
language into binaries?
Very little, it appears, outside of your toy projects.
How many assemblers, compilers, linkers, and interpreters have /you/
written?
Can't speak for David, but in my case, at least one of each, and
you can add operating systems and hypervisors to that list.
It certainly won't for your stuff, or SL's, or JP's, or TR's, as you
all seem to delight in wheeling out the most complex scenarios you can find.
The "stuff" I write is for customers. Any so-called-bart-complexity is based on
customer requirements. The customers are quite happy with the solutions
they get.
That is another aspect you might do well to learn how to do: KISS. (Yes
I can be a patronising fuck too.)
KISS is a good principle to follow, and while I cannot again speak
for David, it's a principle followed by most programmers I've worked
with. That doesn't mean throwing away perfectly usable tools
(one can easily make KISS-compliant makefiles, for example).
I'm not suggesting replacing make, only to reduce its involvement.
And to reduce its involvement, something must replace make, ipso facto.
On 02/02/2024 15:18, Scott Lurndal wrote:
You're saying that anyone not using Unix, not building 10Mloc projects,
and not a fan of make, should FOAD?
The way 'as' works IS rubbish. It is fascinating how you keep trying to
turn it round and make it about me. There can't possibly be anything
wrong with it, whoever says so must be deluded!
My definition is where you build one program (e.g. one EXE or DLL file on Windows) with ONE invocation of the compiler, which processes ALL source
and support files from scratch.
bart <bc@freeuk.com> writes:
[...]
The way 'as' works IS rubbish. It is fascinating how you keep trying
to turn it round and make it about me. There can't possibly be
anything wrong with it, whoever says so must be deluded!
"as" works. It's not perfect, but it's good enough. Its job is to
translate assembly code to object code. It does that. There is
nothing you could do with your preferred user interface (whatever that
might be) that can't be done with the existing one. "as" is rarely
invoked directly, so any slight clumsiness in its well defined user
interface hardly matters. Any changes to its interface could break
existing scripts.
Nobody is claiming that "there can't possibly be anything wrong with
it". You made that up.
Why does the way "as" works offend you?
On 2/2/24 16:18, bart wrote:
My definition is where you build one program (e.g. one EXE or DLL file
on Windows) with ONE invocation of the compiler, which processes ALL
source and support files from scratch.
And can you disclose the magic trick that lets your magic compiler know
exactly the list of "ALL source and support files" needed for a scratch
build?
On 02/02/2024 18:36, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
I saw an example today in a tutorial:
as -o filename.o filename.as
having to type the name twice again.
That always seems to be the excuse. Some half-finished test version is produced, with no proper file interface, and an output that temporarily
gets sent to a.out until it can be sorted out properly.
But it never is finished, and the same raw half-finished product works
the same way decades later, surprising every new generation who have to relearn its quirks.
I saw an example today in a tutorial:
as -o filename.o filename.as
having to type the name twice again.
It just does. I've used a few assemblers, this one is downright weird:
A #include directive with <> searches for a "header", which is not
stated to be a file. A #include directive with "" searches for a file
in an implementation-defined manner; if that search fails, it tries
again as if <> had been used.
On 02/02/2024 00:30, Lawrence D'Oliveiro wrote:
Ninja was created as an alternative to Make.
It is an alternative to some uses of make - but by no means all uses.
as -o filename.o filename.as
That's true: only 47 years in computing, and 42 years of evolving, implementing and running my systems language.
On Fri, 2 Feb 2024 19:52:45 +0000, bart wrote:
as -o filename.o filename.as
On *nix systems, we can use “cc” as kind of a “universal” compile command,
not just for C code but for assembler as well, e.g.
cc -c filename.s -o filename.o
(without preprocessor)
cc -c filename.S -o filename.o
(with preprocessor)
cc -o filename filename.S
(with preprocessor and linking stages as well).
Can your system offer these options?
On Fri, 2 Feb 2024 13:47:25 +0000, bart wrote:
That's true: only 47 years in computing, and 42 years of evolving,
implementing and running my systems language.
On how many different platforms?
Seems like your primary experience has been with beating your head against Microsoft Windows. That’s got to have health implications.
What option is that, to have one command 'cc' that can deal with N
different languages?
No. But I can offer a system where you have a choice of N different
compilers or assemblers for the same language:
On 02/02/2024 21:42, Lawrence D'Oliveiro wrote:
Seems like your primary experience has been with beating your head
against Microsoft Windows. That’s got to have health implications.
That wasn't a serious question was it; you just wanted to have a go at Windows.
Why do you consider that fair game, but people hate it when anyone
criticises Unix?
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Thu, 01 Feb 2024 19:03:38 -0800, Keith Thompson wrote:
A #include directive with <> searches for a "header", which is not
stated to be a file. A #include directive with "" searches for a file
in an implementation-defined manner; if that search fails, it tries
again as if <> had been used.
The trouble with that interpretation is, it would seem to rule out the
use of things like include libraries for user headers. Do you really
think that was the intention?
I don't know. I imagine an implementation could interpret the word
"file" to include information extracted from libraries.
On Fri, 02 Feb 2024 16:09:09 -0800, Keith Thompson wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Thu, 01 Feb 2024 19:03:38 -0800, Keith Thompson wrote:
A #include directive with <> searches for a "header", which is not
stated to be a file. A #include directive with "" searches for a file
in an implementation-defined manner; if that search fails, it tries
again as if <> had been used.
The trouble with that interpretation is, it would seem to rule out the
use of things like include libraries for user headers. Do you really
think that was the intention?
I don't know. I imagine an implementation could interpret the word
"file" to include information extracted from libraries.
Then the distinction between “headers” that are “files”, versus those that
are not, as so carefully worded in the standard (as you pointed out),
becomes meaningless.
On 02/02/2024 18:54, Kaz Kylheku wrote:
On 2024-02-02, bart <bc@freeuk.com> wrote:
It's a constant problem.
The way 'as' works IS rubbish.
Pretend a developer of "as" (say, the GNU one) is reading this thread.
What is it that is broken?
Do you have a minimal repro test case of your issue?
What is the proposed fix?
turn it round and make it about me. There can't possibly be anything
wrong with it, whoever says so must be deluded!
A vast amount of code is being compiled daily, passing through as,
without anyone noticing.
It's usually easy enough to knock up a piece of code to do something.
The problem is deploying it.
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Thu, 01 Feb 2024 19:03:38 -0800, Keith Thompson wrote:
A #include directive with <> searches for a "header", which is not
stated to be a file. A #include directive with "" searches for a file
in an implementation-defined manner; if that search fails, it tries
again as if <> had been used.
The trouble with that interpretation is, it would seem to rule out the use of things like include libraries for user headers. Do you really think
that was the intention?
I don't know. I imagine an implementation could interpret the word
"file" to include information extracted from libraries. Note that it
doesn't have to correspond to the concept of a "file" used in <stdio.h>;
that refers to files in the execution environment, not the compilation environment.
On 31/01/2024 00:46, Tim Rentsch wrote:
Looking over one of my current projects (modest in size,
a few thousand lines of C source, plus some auxiliary
files adding perhaps another thousand or two), here are
some characteristics essential for my workflow (given
in no particular order):
* have multiple outputs (some outputs the result of
C compiles, others the result of other tools)
* use different flag settings for different translation
units
* be able to express dependency information
* produce generated source files, sometimes based
on other source files
* be able to invoke arbitrary commands, including
user-written scripts or other programs
* build or rebuild some outputs only when necessary
* condition some processing steps on successful
completion of other processing steps
* deliver partially built as well as fully built
program units
* automate regression testing and project archival
(in both cases depending on completion status)
* produce sets of review locations for things like
program errors or TBD items
* express different ways of combining compiler
outputs (such as .o files) depending on what
is being combined and what output is being
produced (sometimes a particular set of inputs
will be combined in several different ways to
produce several different outputs)
Indeed it is the case that producing a complete program is one
part of my overall build process. But it is only one step out
of many, and it is easy to express without needing any special
considerations from the build system.
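For readers who want a picture of what a few of those items look like in
make terms, here is a hypothetical fragment (all names invented) covering
per-unit flags, a generated source file, and a scripted step that runs
only after a successful build:

# different flag settings for different translation units
core.o: core.c
	$(CC) -O2 -c $< -o $@
debug.o: debug.c
	$(CC) -O0 -g -c $< -o $@

# a generated source file, based on another source file
tables.c: tables.def gen_tables
	./gen_tables tables.def > tables.c

# arbitrary commands, conditioned on the build succeeding
check: prog
	./run_regressions.sh prog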
So, will a specific build of such a project produce a single
EXE/DLL//SO file? (The // includes the typical file extension of
Linux executables.)
On Fri, 2 Feb 2024 22:12:04 +0000, bart wrote:
On 02/02/2024 21:42, Lawrence D'Oliveiro wrote:
Seems like your primary experience has been with beating your head
against Microsoft Windows. That’s got to have health implications.
That wasn't a serious question was it; you just wanted to have a go at
Windows.
You yourself have complained endlessly about build setups that work fine
on *nix systems, but that give you trouble on Windows. It’s like you don’t
see the source of your difficulties right in front of your eyes.
Why do you consider that fair game, but people hate it when anyone
criticises Unix?
I don’t care about “Unix” any more, and I doubt many other people do. All
the systems legally entitled to call themselves “Unix” are dying if not already dead.
bart <bc@freeuk.com> writes:
Indeed it is the case that producing a complete program is one
part of my overall build process. But it is only one step out
of many, and it is easy to express without needing any special
considerations from the build system.
So, will a specific build of such a project produce a single
EXE/DLL//SO file? (The // includes the typical file extension of
Linux executables.)
No, there are several outputs of this kind, including object
files, static libraries, and dynamic libraries, and all for a C
environment. (There are also other outputs but of a different
kind than what you are asking about.)
I have no expectation that you will incorporate these ideas or
capabilities into a tool you are building for yourself. I gave
the list in case other readers might have an interest.
On Fri, 2 Feb 2024 21:51:43 +0000, bart wrote:
What option is that, to have one command 'cc' that can deal with N
different languages?
Hint: it uses the filename extension to determine which language, and
which flavour of the language even, it is dealing with.
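Concretely, with the usual gcc-style driver:

cc -c prog.c    # compiled as C
cc -c prog.cc   # compiled as C++
cc -c prog.S    # run through the preprocessor, then assembled
cc -c prog.s    # assembled directly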
No. But I can offer a system where you have a choice of N different
compilers or assemblers for the same language:
Are their ABIs compatible?
bart <bc@freeuk.com> writes:
It certainly won't for your stuff, or SL's, or JP's, or TR's, as you
all seem to delight in wheeling out the most complex scenarios you can find.
[...]
That is another aspect you might do well to learn how to do: KISS. [...]
KISS is a good principle to follow, and while I cannot again speak
for David, it's a principle followed by most programmers I've worked
with. That doesn't mean throwing away perfectly usable tools
(one can easily make KISS-compliant makefiles, for example).
I'm not suggesting replacing make, only to reduce its involvement.
[...]
[...] The Eclipse folk are experts at making an editor and IDE, [...]
On 02/02/2024 22:12, bart wrote:
Why do you consider that fair game, but people hate it when anyone
criticises Unix?
Because UNIX systems used to typically cost tens of thousands of
dollars, whilst a PC could be had for under a thousand dollars.
So
everyone could have a PC, but if you were given a UNIX system you were a
bit special. And that gave UNIX programmers a sense of superiority.
It's a very silly attitude of course.
On 02/02/2024 21:42, Lawrence D'Oliveiro wrote:
On Fri, 2 Feb 2024 13:47:25 +0000, bart wrote:
That's true: only 47 years in computing, and 42 years of evolving,
implementing and running my systems language.
On how many different platforms?
I started in 1976. I started using Windows for my products in 1995
because all my potential customers could buy an off-the-shelf Windows
PC.
Linux was nowhere. Unix was only in academia, I think; nowhere
relevant to me anyway.
[...]
Why do you consider that fair game, but people hate it when anyone
criticises Unix?
bart <bc@freeuk.com> writes:
[...][...]
But OK, let me drop everything and fix it for you. I can submit a patch
for "as" so it behaves the way you want. I'll also submit patches for
gcc so it invokes "as" with the new interface. It will still have to
handle the old interface at least temporarily, so there will have to be
a way to ask "as" which interface it uses. Nothing will ever generate a
file named "a.out" unless it's explicitly told to do so. I'll also send
the word out so everyone knows not to rely on the name "a.out" anymore.
And I'll convince everyone that they've been doing it wrong for the last several decades.
I'll let you know when that's done. Because nothing short of that would satisfy you.
On 02.02.2024 23:12, bart wrote:
On 02/02/2024 21:42, Lawrence D'Oliveiro wrote:
On Fri, 2 Feb 2024 13:47:25 +0000, bart wrote:
That's true: only 47 years in computing, and 42 years of evolving,
implementing and running my systems language.
On how many different platforms?
I started in 1976. I started using Windows for my products in 1995
because all my potential customers could buy an off-the-shelf Windows
PC.
And that is actually the problem that folks have tried to make clear
to you. Being so long in a bubble,
I don't know about you, whether you have an academic technical
background,
whether you had the chance to try out UNIX or the BSD
variant in those days.
Myself, I already knew a couple of OSes (for PCs, some not even worth
calling an OS, for medium-scale systems, and also for mainframes)
before I had my first contact with a Unix system. With that systems
and OS background it was easy to strive for the better ones; of
the ones I met, that was Unix. (BTW, I observed a similar enthusiasm
in a friend of mine, a long-time hardcore MS-DOS user, when he
got his fingers onto a Unix system.) You might imagine what a joy
it thus was when Linux and the GNU tools appeared: a powerful and
reliable(!) system and OS base, and even (almost) free.
(For you, I dare to say, it's obviously far too late. That ship has
sailed. If you had striven for a broader experience in your early
days, it would certainly be a different situation.)
I cannot speak for "people". Myself I name any issues I see; Unix
issues are not exempt from that. - I have even a printed version of
"The UNIX - HATERS Handbook" in my bookshelf (though a lot of its
content is meanwhile outdated, it doesn't apply any more). - And I
can certainly collect a page full of deficiencies I see with Linux.
But so what? (For the MS platforms I could probably "fill a book".)
Yet, in past decades, I haven't seen any serious competitor to Unix.
(Note: When I'm saying that I am not considering e.g. supercomputers
doing e.g. massive hydrodynamic computations. But even in this area
there are meanwhile also Linux clusters running.)
On 03/02/2024 06:52, Malcolm McLean wrote:
On 02/02/2024 22:12, bart wrote:
Why do you consider that fair game, but people hate it when anyone
criticises Unix?
Because UNIX systems used to typically cost tens of thousands of
dollars, whilst a PC could be had for under a thousand dollars.
I think you got stuck somewhere in the 1990's.
So everyone could have a PC, but if you were given a UNIX system you
were a bit special. And that gave UNIX programmers a sense of
superiority.
Anyone who has used both *nix and Windows for development work knows
*nix is superior for that. (Windows has other advantages - I use both Windows and Linux. Neither is perfect, each has their good points and
bad points.)
It's a very silly attitude of course.
Fighting against a system that works against everything you are trying
to do is a very silly attitude. Bart has choices - no one is forcing
him to compile open source software that causes him trouble, no one is forcing him to use Windows, or C, or make, or anything else. He has completely free choices. He could use Linux and compile the projects without trouble. He could use Windows and pre-built binaries. He could use other projects, other languages, other tools. He could choose to
put his feet by the fireside and do Sudokus, or to travel the world and
see other places.
On 03/02/2024 09:05, Tim Rentsch wrote:
bart <bc@freeuk.com> writes:
Indeed it is the case that producing a complete program is one
part of my overall build process. But it is only one step out
of many, and it is easy to express without needing any special
considerations from the build system.
So, will a specific build of such a project produce a single
EXE/DLL//SO file? (The // includes the typical file extension of
Linux executables.)
No, there are several outputs of this kind, including object
files, static libraries, and dynamic libraries, and all for a C
environment. (There are also other outputs but of a different
kind than what you are asking about.)
I have no expectation that you will incorporate these ideas or
capabilities into a tool you are building for yourself. I gave
the list in case other readers might have an interest.
OK. You seem fairly level-headed and calm, so I'll try this
explanation. [...]
On 02.02.2024 16:26, David Brown wrote:
[...] The Eclipse folk are experts at making an editor and IDE, [...]
I have to disagree with this bit.
bart <bc@freeuk.com> writes:
On 03/02/2024 09:05, Tim Rentsch wrote:
bart <bc@freeuk.com> writes:
Indeed it is the case that producing a complete program is one
part of my overall build process. But it is only one step out
of many, and it is easy to express without needing any special
considerations from the build system.
So, will a specific build of such a project produce a single
EXE/DLL//SO file? (The // includes the typical file extension of
Linux executables.)
No, there are several outputs of this kind, including object
files, static libraries, and dynamic libraries, and all for a C
environment. (There are also other outputs but of a different
kind than what you are asking about.)
I have no expectation that you will incorporate these ideas or
capabilities into a tool you are building for yourself. I gave
the list in case other readers might have an interest.
OK. You seem fairly level-headed and calm, so I'll try this
explanation. [...]
You have no interest in what's important to me in a build system.
On 03/02/2024 13:52, David Brown wrote:
On 03/02/2024 06:52, Malcolm McLean wrote:
On 02/02/2024 22:12, bart wrote:
Why do you consider that fair game, but people hate it when anyone
criticises Unix?
Because UNIX systems used to typically cost tens of thousands of
dollars, whilst a PC could be had for under a thousand dollars.
I think you got stuck somewhere in the 1990's.
So everyone could have a PC, but if you were given a UNIX system you
were a bit special. And that gave UNIX programmers a sense of
superiority.
Anyone who has used both *nix and Windows for development work knows
*nix is superior for that. (Windows has other advantages - I use both
Windows and Linux. Neither is perfect, each has their good points and
bad points.)
"hello world" into it, type gcc or cc *.c -lm at the shell, type
./a.out, and yove got the outout "Hello world" and a template you can
then modify to do practically anything you want.
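Spelled out, the entire "project" in that example is one file:

/* hello.c */
#include <stdio.h>

int main(void)
{
    printf("Hello world\n");
    return 0;
}

cc *.c -lm
./a.out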
On Windows you've got to fire up Visual Studio and set up a project
file. And then you've got to fiddle with it to enable the standard
library. And then it will demand you include "stdafx.h" and you've got
to fiddle with it a bit more to stop it asking for that. Then, whilst you
will get an executable, when you launch it from the IDE, the output
window will disappear before you can read it. And you have to fiddle
with it a bit more. It's much less convenient.
On the other hand, if you want a GUI, the Windows system is all set up
for you and you just have to call the right functions. On Unix you have
to configure some sort of front end to X, there's a lot more messing
about, and the GUI elements aren't consistent.
On 03/02/2024 14:39, Janis Papanagnou wrote:
On 02.02.2024 16:26, David Brown wrote:
[...] The Eclipse folk are experts at making an editor and IDE, [...]
I have to disagree with this bit.
My point is independent of whether or not you like Eclipse (people are
split on that), or what editor you think is best (people break out in
fights over that).
The point is that the editor and IDE people make the editor and IDE,
the
compiler people make the compiler, the debugger people make the
debugger, and so on - while to the user, the package looks more or less
like a complete "do everything" development tool.
On 03/02/2024 01:31, Lawrence D'Oliveiro wrote:
On Fri, 2 Feb 2024 21:51:43 +0000, bart wrote:
What option is that, to have one command 'cc' that can deal with N
different languages?
Hint: it uses the filename extension to determine which language, and
which flavour of the language even, it is dealing with.
This is the filename extension which Linux famously ignores, because you
can use any extension you like?
Hint: my tools KNOW which language they are dealing with:
root@XXX:/mnt/c/c# cp hello.c hello.x
root@XXX:/mnt/c/c# gcc hello.x
hello.x: file not recognized: file format not recognized
On 03/02/2024 13:23, Janis Papanagnou wrote:
On 02.02.2024 23:12, bart wrote:
On 02/02/2024 21:42, Lawrence D'Oliveiro wrote:
On Fri, 2 Feb 2024 13:47:25 +0000, bart wrote:
That's true: only 47 years in computing, and 42 years of evolving,
implementing and running my systems language.
On how many different platforms?
I started in 1976. I started using Windows for my products in 1995
because all my potential customers could buy an off-the-shelf Windows
PC.
And that is actually the problem that folks have tried to make clear
to you. Being so long in a bubble,
Which bubble, the one before 1995, or after?
Don't you see that using only Unix-like systems is also a bubble?
I'd say using anything but Unix /is/ a broader experience than using
only Unix.
The latter seems to give people the impression that unless an OS is
exactly like Unix, it is worthless.
On 2024-02-03, bart <bc@freeuk.com> wrote:
On 03/02/2024 01:31, Lawrence D'Oliveiro wrote:
On Fri, 2 Feb 2024 21:51:43 +0000, bart wrote:
What option is that, to have one command 'cc' that can deal with N
different languages?
Hint: it uses the filename extension to determine which language, and
which flavour of the language even, it is dealing with.
This is the filename extension which Linux famously ignores, because you
can use any extension you like?
Hint: my tools KNOW which language they are dealing with:
You're arguing for a user-unfriendly system where you have to memorize
a separate command for processing each language.
Recognizing files by suffix is obviously superior.
root@XXX:/mnt/c/c# cp hello.c hello.x
root@XXX:/mnt/c/c# gcc hello.x
hello.x: file not recognized: file format not recognized
This is good; it's one more little piece of resistance
against people using the wrong suffix.
It's not the only one. Editors won't bring up the correct syntax
formatting and coloring if the file suffix is wrong.
Tools for cross-referencing identifiers in source code may also get
things wrong due to the wrong suffix, or ignore the file entirely.
Your argument of "I can rename my C to any suffix and my compiler
still recognizes it" is completely childish.
On 2024-02-03, bart <bc@freeuk.com> wrote:
Don't you see that using only Unix-like systems is also a bubble?
Don't you see that living on Earth is literally being in bubble?
Your bubble contains only one person.
The Unix-like bubble is pretty huge, full of economic opportunities,
spanning from embedded to server.
While you were dismissing Linux in 1995, it was actually going strong, marching forward. Only fools ignored it.
A year before that, in 1994, I was doing contract work for Linux
already. My client used it for serving up pay-per-click web pages to
paying customers. I was working on the log processing and billing side
of it, and also created a text-UI (curses) admin tool.
I'd say using anything but Unix /is/ a broader experience than using
only Unix.
No, it isn't. That is fallacious. Working with anything else plus Unix
is a broader experience than only Unix. Otherwise, we can't say.
An OS that provides more or less the same semantics as POSIX, but using interfaces that are gratuitously different, and incompatible, is
worthless in this day and age.
Something that doesn't conform to compatibility standards, and isn't demonstrably better for it, is a dud.
There is good different and bad different. More or less same, but incompatible, is bad different.
On 02/02/2024 18:26, Scott Lurndal wrote:
bart <bc@freeuk.com> writes:
On 02/02/2024 15:18, Scott Lurndal wrote:
To build a "smaller, easier, nicer" make, if that is the goal (and it's
a very legitimate one),
(Unfortunately if you write C++ rather than C, even a 3.7 GHz machine
isn't going to be fast enough. But maybe your users don't use C++).
.. but if you were given a UNIX system you were a
bit special. And that gave UNIX programmers a sense of superiority.
It's a very silly attitude of course.
On 2/3/2024 8:03 AM, bart wrote:
[...]
Do you have a windows installation with a recent version of MSVC
installed? Give vcpkg a go, and see how it builds things... Then automatically integrates them into MSVC. It's pretty nice and about
time. ;^)
On 2/3/2024 3:54 AM, bart wrote:
[...]
Say I want to use your C compiler. How do I use it when I need to
assemble and link external asm code? Say, I assembled something into an
.o file, how can I make your C compiler use it, link it in, etc.?
Using the C ABI, I would create declarations for its functions.
masm version, intel syntax:
http://web.archive.org/web/20060214112539/http://appcore.home.comcast.net/appcore/src/cpu/i686/ac_i686_masm_asm.html
So, this creates some functions. How would I use your compiler to call
these functions from my C code in your system?
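For what it's worth, the conventional answer (the function and file
names below are invented for illustration, not anything from the
AppCore sources) is an extern declaration matching the assembled
symbol, with the object file handed to the compiler driver at link time:

/* main.c - calling an externally assembled routine through the C ABI.
   ac_add is a hypothetical symbol exported by asm_routines.o; the
   declaration must match the calling convention the asm code follows. */
extern long ac_add(long a, long b);   /* defined in asm_routines.o */

int main(void)
{
    /* link with something like: cc main.c asm_routines.o */
    return (int)ac_add(2, 3);
}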
On 2/3/2024 2:11 PM, bart wrote:
On 03/02/2024 20:31, Chris M. Thomasson wrote:
On 2/3/2024 8:03 AM, bart wrote:
[...]
Do you have a windows installation with a recent version of MSVC
installed? Give vcpkg a go, and see how it builds things... Then
automatically integrates them into MSVC. It's pretty nice and about
time. ;^)
You haven't followed my posts very well. I want to keep as far away
from all that stuff as possible.
Okay.
(The last time I installed VS, it took 90 minutes. Each time it
started up, usually by inadvertently because of file association, it
took 90 seconds. On the same machine, an old one, it took 0.2 seconds
to build my C compiler.)
It boots right up for me, less than two seconds, even though it is
pretty damn fat.
Everything I am about is managing to do this stuff by the simplest,
leanest means possible. If a program is written in C, then why would
you need anything other than a C compiler to build it?
Can your C compiler handle C11? If so, that would be great. This one can
do it; MSVC, well, nope. MSVC handles C11 atomics, but not threads! GRRRRR.
On 04/02/2024 00:24, Chris M. Thomasson wrote:
On 2/3/2024 2:11 PM, bart wrote:
On 03/02/2024 20:31, Chris M. Thomasson wrote:
On 2/3/2024 8:03 AM, bart wrote:
[...]
Do you have a windows installation with a recent version of MSVC
installed? Give vcpkg a go, and see how it builds things... Then
automatically integrates them into MSVC. It's pretty nice and about
time. ;^)
You haven't followed my posts very well. I want to keep as far away
from all that stuff as possible.
Okay.
(The last time I installed VS, it took 90 minutes. Each time it
started up, usually by inadvertently because of file association, it
took 90 seconds. On the same machine, an old one, it took 0.2 seconds
to build my C compiler.)
It boots right up for me, less than two seconds, even though it is
pretty damn fat.
It might be faster now on my SSD drive. However my own stuff didn't need
an SSD drive; that's part of the point of keeping things small.
Everything I am about is managing to do this stuff by the simplest,
leanest means possible. If a program is written in C, then why would
you need anything other than a C compiler to build it?
Can your C compiler handle C11? If so, that would be great. This one
can do it; MSVC, well, nope. MSVC handles C11 atomics, but not threads!
GRRRRR.
It compiles some undefined subset of C. But I haven't touched that side
of it for years. That last update of it changed the backend.
MCC is anyway now a private tool. Either programs work with it or they
don't.
But the problem being discussed at length is getting that input into the compiler in the first place!
Everybody says use makefiles; well they don't work. They tend to be
heavily skewed towards the use of gcc. My compiler isn't gcc.
AFAIK the C standard doesn't mention gcc (nor, probably, makefiles!).
So I'm disappointed there isn't a better, simpler solution to a very,
very simple problem: what exactly goes in place of ... when building any complete program:
cc ...
And after 100s of posts, still nobody gets it. Oh, just use an
invariably Linux-centric, gcc-centric script in a different language.
How about an OS-neutral, compiler-neutral solution that doesn't involve
a third-party language? (English - or Norwegian - accepted.)
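For concreteness, one shape such an answer could take - the #pragma
names below are invented for illustration, not bart's actual syntax or
any real compiler's - is build information embedded in the source
itself, so that "cc file.c" alone carries everything needed:

/* file.c - hypothetical in-source build directives (invented syntax) */
#pragma source  "file2.c"     /* further translation units to compile */
#pragma library "lib1.dll"    /* libraries to link against */

int main(void)
{
    return 0;
}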
I guess you're not curious about WHY a project that builds easily on
Unix causes problems on Windows?
I'm concerned about increasing bloat and complexity everywhere. So I'm
just making a stand by developing my own small, human-scale products.
On Sat, 3 Feb 2024 14:59:14 +0000, bart wrote:
I'm concerned about increasing bloat and complexity everywhere. So I'm
just making a stand by developing my own small, human-scale products.
And you run your stuff on what is probably the most bloated, monolithic, inflexible, unwieldy, clumsy, overcomplicated, inefficient and bug-ridden
OS in existence--Microsoft Windows.
On 03/02/2024 16:03, bart wrote:
On 03/02/2024 15:44, Malcolm McLean wrote:
On the other had, if you want a GUI, the Windows system is all set up
for you and you just have to call the right functions. On Unix you
have to configure some sort of front end to X, there's a lot more
messing about, and the GUI elements aren't consistent.
For GUI they're both a nightmare unless you use a simpler library that
sits on top. Or are you saying that X is even worse than WinAPI?
I've programmed for both and Windows GUI is quite a bit easier to use.
You have to enter a library name explicitly to get the common controls,
for some stupid reason, but once you do that the whole system is set up
for you. Just call the API more or less as you would any other C
function (except for tiny message loop interface), you've got a rich set
of controls, and they are well designed and harmonised with the rest of
the programs on the system.
X - if you try to program to Xlib directly you're messing about with
colour maps and goodness knows what just to get up a window. And if you
don't, it's dependency land and all that that entails, with some popular
widget toolsets but no real standards. And often you find that these
will break. However nowadays you can use Qt. Which is a lot better but
still not very well designed, with a non-canonical slot / message system
and poor facilities for layout. That's why I was driven to write Baby X.
A simple clean interface to Xlib that would allow you to get graphics up
quickly and easily. You shouldn't have to do that, of course.
MSDOS and Windows were intended for direct use by ordinary consumers.
Unix was intended for developers.
Few ordinary consumers directly use a Unix-like system unless it is made
to look like MacOS or Android. Or they run a GUI desktop that makes it
look a bit like Windows.
On 04/02/2024 01:19, bart wrote:
So I'm disappointed there isn't a better, simpler solution to a very,
very simple problem: what exactly goes in place of ... when building
any complete program:
No. I get it. Over complicated build systems which break. Very serious
issue. I've had builds break on me and I'm very surprised more people
haven't had the same experience and don't easily understand what you are saying.
But where David Brown is right is that it is one thing to diagnose the problem, quite another to solve it. That is extremely difficult and I
don't think we'll find the answer easily. But continue to discuss.
On 03/02/2024 15:16, bart wrote:
MSDOS and Windows were intended for direct use by ordinary consumers.
Unix was intended for developers.
There is a bit of truth in that - though Unix was also targeted at
serious computer users, workstation users (such as for CAD,
(Given your statement here, why do you find it so hard to accept that
people find Linux a much better platform for developers than Windows?)
On 02/02/2024 00:30, Lawrence D'Oliveiro wrote:
On Thu, 1 Feb 2024 22:34:36 +0100, David Brown wrote:
I am, however, considering CMake (which works at a
higher level, and outputs makefiles, ninja files or other project
files).
Ninja was created as an alternative to Make. Basically, if your
Makefiles are going to be generated by a meta-build system like CMake
or Meson, then they don’t need to support the kinds of niceties that
facilitate writing them by hand. So you strip it right down to the
bare-bones functionality, which makes your builds fast while consuming
minimal resources, and that is Ninja.
It is an alternative to some uses of make - but by no means all uses.
Yes. It is not normal to write ninja files by hand - the syntax is
relatively simple, but quite limited. So it covers the lower level bits.
Perhaps ninja is the tool that Bart is looking for? For the kinds of
things he is doing, I don't think it would be hard to write the ninja
On 2/3/24 4:51 PM, Keith Thompson wrote:
Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Thu, 01 Feb 2024 19:03:38 -0800, Keith Thompson wrote:
A #include directive with <> searches for a "header", which is
not stated to be a file. A #include directive with "" searches
for a file in an implementation-defined manner; if that search
fails, it tries again as if <> had been used.
The trouble with that interpretation is, it would seem to rule
out the use of things like include libraries for user headers.
Do you really think that was the intention?
I don't know. I imagine an implementation could interpret the
word "file" to include information extracted from libraries.
Note that it doesn't have to correspond to the concept of a
"file" used in <stdio.h>; that refers to files in the execution
environment, not the compilation environment.
To me what the C standard says is clear. A #include "whatever.h"
gets its stuff from a file (assuming of course the appropriate
file can be found, and not revert to the #include <whatever.h>
form). A #include <whatever.h> gets its stuff from a header,
said header perhaps being stored in a file or perhaps not, and if
file-stored then it could be a 1-1 relationship, or a 1-many
relationship, or a many-1 relationship. Since the C standard
doesn't define the term 'header', an implementation is allowed to
actualize it however the implementation chooses (including the
possibility of storing information inside the compiler itself).
On further thought, I tend to agree.
I was thinking that an implementation could usefully provide some
of its own headers as something other than files, as it's clearly
allowed to do for the C standard headers. But the obvious way to
do that would be to require such headers to be included with <>,
not "". POSIX-specific headers like unistd.h are already
conventionally included with <>.
An implementation probably *could* bend the meaning of "file"
enough to support having `#include "whatever.h"` load something
other than a file in the host filesystem, but it's not as useful as
I first thought it might be -- and it could interfere with
user-provided header files that happen to have the same name.
I believe an implementation doesn't need to provide a way to replace
the standard-defined headers.
The include search method is fully implementation defined,
with only
the provision that if you use " " and don't find the file, it
needs to use the < > method, but that doesn't say that the
standard headers might not be first in the " " search order.
Also, 7.1.2p4 says:
If a file with the same name as one of the above < and > delimited
sequences, not provided as part of the implementation, is placed in
any of the standard places that are searched for included source
files, the behavior is undefined.
So overriding a Standard-defined header is explicitly Undefined
Behavior. (Not sure if POSIX extends that to its headers.)
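A tiny demonstration of the fallback rule being discussed - assuming a
typical implementation and no local file actually named stdio.h, so the
"" search fails and the directive is reprocessed as if it had used <>:

/* fallback.c - per 6.10.2, if the "" search does not find a file, the
   directive is retried as if written with <>, so this resolves to the
   standard header on a typical implementation. */
#include "stdio.h"

int main(void)
{
    puts("found via the \"\" -> <> fallback");
    return 0;
}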
On 03/02/2024 18:17, Kaz Kylheku wrote:
On 2024-02-03, bart <bc@freeuk.com> wrote:
Don't you see that using only Unix-like systems is also a bubble?
Don't you see that living on Earth is literally being in bubble?
Your bubble contains only one person.
The Unix-like bubble is pretty huge, full of economic opportunities,
spanning from embedded to server.
You're missing my point. Unix imposes a certain mindset, mainly that
there is only ONE way to do things, and that is the Unix way.
On 04/02/2024 12:44, David Brown wrote:
On 04/02/2024 05:56, Malcolm McLean wrote:
On 04/02/2024 01:19, bart wrote:
So I'm disappointed there isn't a better, simpler solution to a
very, very simple problem: what exactly goes in place of ... when
building any complete program:
No. I get it. Over complicated build systems which break. Very
serious issue. I've had builds break on me and I'm very surprised
more people haven't had the same experience and don't easily
understand what you are saying.
But where David Brown is right is that it is one thing to diagnose
the problem, quite another to solve it. That is extremely difficult
and I don't think we'll find the answer easily. But continue to discuss. >>>
I'm glad you think I am right - and I agree that as a general point,
solving issues is usually harder than diagnosing them. But I did not
say anything remotely like that in any posts, as far as I am aware.
In particular, I am not aware of any "diagnosis" of fundamental issues
with build tools that need solving - certainly not "solving" by Bart's
solution. (I am aware that /Bart/ has trouble using common tools, and
that his solution might help /him/ - which is fine, and I wish him
luck with it for fixing his own issues.) Some people might use tools
badly, and some people publish projects where others find the builds
difficult on different systems. That's a matter of use, not the tools
- others find they work fine. (No tool is perfect, of course, and
there's always scope for improvement.)
So if you want to use my name, I'd rather you did it in reference to
things I have actually said.
You've said repeatedly and at great length that Bart's proposed
solutions won't work.
You haven't actually admitted that he has
diagnosed a problem which needs to be solved
and maybe I should have
made that clearer.
Where you're right is that writing a better build
system than make is hard. Bart referenced Norwegian, which obviously
meant you, and so I didn't introduce your name.
On 04/02/2024 12:53, David Brown wrote:
On 03/02/2024 15:16, bart wrote:
MSDOS and Windows were intended for direct use by ordinary consumers.
Unix was intended for developers.
There is a bit of truth in that - though Unix was also targeted at
serious computer users, workstation users (such as for CAD,
(My company specialised in low-end CAD products, one of them running on
an 8-bit computer using CP/M. I think at one CAD/CAM trade show, we had
the cheapest product by far.)
(Given your statement here, why do you find it so hard to accept that
people find Linux a much better platform for developers than Windows?)
I didn't quite say that. I meant that Unix with its abstruse interface
was more suited to technical people such as developers, but also those
in academia or industry. Who could also afford such a machine (because somebody else was paying).
Some aspects of it, such as case-sensitive commands and file system,
would have caused difficulties.
Real-life is not usually case-sensitive.
Even now, ordinary people's exposure to it seems to be mainly with
passwords.
(I did a lot of telephone support walking people through dialogs on a terminal. A case-sensitive OS would have made things considerably harder.)
But it does seem as though Unix was a breeding ground for multitudinous developer tools. Plus there was little demarcation between user
commands, C development tools, C libraries and OS.
Somebody who's used to that environment is surely going to have trouble
on an OS like MSDOS or Windows where they have to start from nothing.
Even if most of the tools are now free.
On 03/02/2024 17:59, Kaz Kylheku wrote:
On 2024-02-03, bart <bc@freeuk.com> wrote:
On 03/02/2024 01:31, Lawrence D'Oliveiro wrote:
On Fri, 2 Feb 2024 21:51:43 +0000, bart wrote:
What option is that, to have one command 'cc' that can deal with N
different languages?
Hint: it uses the filename extension to determine which language, and
which flavour of the language even, it is dealing with.
This is the filename extension which Linux famously ignores, because you
can use any extension you like?
Hint: my tools KNOW which language they are dealing with:
You're arguing for a user-unfriendly system where you have to memorize
a separate command for processing each language.
You have to impart that information to the tool in any case. It can
either be by file extension, or the name of the command.
So, 'cc' is some tool that looks at a file extension and selects a
suitable program based on that extension; well done.
But what is the point? Do you routinely invoke cc with multiple files of mixed languages?
Suppose you wanted a different C compiler on each .c
file? Oh, you then invoke it separately for each file. So you do that
anyway in that rare event.
Recognizing files by suffix is obviously superior.
root@XXX:/mnt/c/c# cp hello.c hello.x
root@XXX:/mnt/c/c# gcc hello.x
hello.x: file not recognized: file format not recognized
This is good; it's one more little piece of resistance
against people using the wrong suffix.
It's not the only one. Editors won't bring up the correct syntax
formatting and coloring if the file suffix is wrong.
Tools for cross-referencing identifiers in source code may also get
things wrong due to the wrong suffix, or ignore the file entirely.
This completely contradicts what people have been saying about Linux
where file extensions are optional and only serve as a convenience.
For example, executables can have no extension, or .exe, or even .c.
It is Windows that places more store by file extensions, which Linux
people say is a bad thing.
But above you say that is the advantage of Linux.
Your argument of "I can rename my C to any suffix and my compiler
still recognizes it" is completely childish.
It only seems to be childish when one of my programs handles this better
than one of yours!
There is only one thing my mcc program can't do, which is to compile a C
file named 'filename.'; that is, 'filename' followed by an actual '.',
not 'filename' with no extension.
And that's only when I run it under Linux. That's because under Linux, 'filename' and 'filename.' are distinct files; the "." is part of the
file name, not a notional separator.
On 03/02/2024 18:17, Kaz Kylheku wrote:
On 2024-02-03, bart <bc@freeuk.com> wrote:
Don't you see that using only Unix-like systems is also a bubble?
Don't you see that living on Earth is literally being in bubble?
Your bubble contains only one person.
The Unix-like bubble is pretty huge, full of economic opportunities,
spanning from embedded to server.
You're missing my point. Unix imposes a certain mindset, mainly that
there is only ONE way to do things, and that is the Unix way.
That is pretty obvious from the passionate posts people make about it.
And it is obvious that they struggle outside it, which is why they hate Windows - it just isn't Unix!
While you were dismissing Linux in 1995, it was actually going strong,
marching forward. Only fools ignored it.
A year before that, in 1994, I was doing contract work for Linux
already. My client used it for serving up pay-per-click web pages to
paying customers. I was working on the log processing and billing side
of it, and also created a text-UI (curses) admin tool.
Meanwhile, a decade before that, the question of OS in my first
commercial product was utterly irrelevant. It provided a file system and
it was used to launch my app.
What was it again? I can barely remember. I JUST DO NOT CARE.
Of all those OSes I have used, Windows might rank near the bottom, but
not for the reasons you think. That's because it operated in protected
mode so that lots of things which had been easy, became hard.
How would Unix have helped with that? It wouldn't.
An OS that provides more or less the same semantics as POSIX, but using
interfaces that are gratuitously different, and incompatible, is
worthless in this day and age.
Because .... you say so?
I mean, are core OSes really so hard to write that everyone in the world
has to use the same one? There seems to be plenty of amateur OS development still.
Something that doesn't conform to compatibility standards, and isn't
demonstrably better for it, is a dud.
There is good different and bad different. More or less same, but
incompatible, is bad different.
I build a box where you feed in data in the form of a byte-stream, and it
gives results in the form of a byte-stream. Or replace one of those by something physical; say the box is a printer or scanner.
There is no OS specified, you've no idea whether it uses POSIX, or even
if there's a computer inside.
But if it performs a useful task, then what is the problem?
Same thing if you are working on a self-contained function, library or
app. It may have inputs or outputs. Do you need to care what OS is
running? No, only about the job it has to do.
Really you make too much of it. The main thing I don't like is when I
have some software that is hard to build on Windows when there is no
reason for it.
On 03/02/2024 20:35, bart wrote:
It is Windows that places more store by file extensions, which Linux
people say is a bad thing.
Windows is too dependent on them, and too trusting.
But above you say that is the advantage of Linux.
Yes, it's a hands-down win for Linux (and other *nix) in this aspect.
And that's only when I run it under Linux. That's because under Linux,
'filename' and 'filename.' are distinct files; the "." is part of the
file name, not a notional separator.
Of course it is. It's simple and consistent.
In Windows, it is sometimes part of a file name (when it is not the last period in the name), sometimes a magical character that appears or
disappears (when the file ends in a period), and sometimes it delimits a
file extension.
On 04/02/2024 17:48, David Brown wrote:
On 03/02/2024 20:35, bart wrote:
It is Windows that places more store by file extensions, which Linux
people say is a bad thing.
Windows is too dependent on them, and too trusting.
But above you say that is the advantage of Linux.
Yes, it's a hands-down win for Linux (and other *nix) in this aspect.
Yet it is Linux (manifested via gcc) where it ASSUMES .x is a linker
script, and ASSUMES that .s is an assembly source file; INCORRECT assumptions.
I think I'm starting to understand the rules: whatever Windows does is
always wrong, and whatever Linux does is always right!
To be clear, this is the behaviour of /my/ applications, which work the
same way on Windows /or/ Linux, which primarily work on one type of
file, and which assume that file type no matter what the extension.
BOTH methods can be problematic if you deliberately or accidentally mix
up file types and extensions.
And that's only when I run it under Linux. That's because under
Linux, 'filename' and 'filename.' are distinct files; the "." is part
of the file name, not a notional separator.
Of course it is. It's simple and consistent.
In Windows, it is sometimes part of a file name (when it is not the
last period in the name), sometimes a magical character that appears
or disappears (when the file ends in a period), and sometimes it
delimits a file extension.
It probably still needs to be a notional dot for backwards compatibility
over decades.
The first two DEC systems I used had 6.3 filenames, storing 'sixbit'
characters in 1.5 36-bit words, or using 'radix-50' in 3 16-bit words.
You can see there is nowhere to put the dot.
That was carried over to DOS's 8.3 filename.
This dot then was really a virtual separator that did not need storing,
any more than you need to store the dot in the ieee754 representation of 73.945.
It has given very little trouble, and has the huge advantage that you
can have default extensions on input files with no ambiguity.
Let me guess: Unix allows you to have numbers like 73.945.112, while 73.
is a different value from 73? Cool.
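As a sketch of the scheme bart describes (field widths per the 8.3
convention; the struct and function names here are invented): the name
is stored as two fixed, space-padded fields, and the dot is re-inserted
only when displaying it.

#include <stdio.h>

/* 8.3 storage: 11 bytes, space-padded, no dot stored anywhere */
struct dir_name { char base[8]; char ext[3]; };

static void print_name(const struct dir_name *d)
{
    int i;
    for (i = 0; i < 8 && d->base[i] != ' '; i++)
        putchar(d->base[i]);
    if (d->ext[0] != ' ') {
        putchar('.');                 /* the "virtual" separator */
        for (i = 0; i < 3 && d->ext[i] != ' '; i++)
            putchar(d->ext[i]);
    }
    putchar('\n');
}

int main(void)
{
    struct dir_name n = { "PROG    ", "C  " };  /* exact-fit initializers */
    print_name(&n);                             /* prints PROG.C */
    return 0;
}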
On 04/02/2024 21:18, bart wrote:
BOTH methods can be problematic if you deliberately or accidentally
mix up file types and extensions.
So stop deliberately being a screw-up.
That was carried over to DOS's 8.3 filename.
At a time when real OS's had moved beyond that.
- it's what you expect when you remember that MS DOS was written as a
quick hack on a system called "quick and dirty OS" as a way for MS to
con its customers.
This dot then was really a virtual separator that did not need
storing, any more than you need to store the dot in the ieee754
representation of 73.945.
It has given very little trouble, and has the huge advantage that you
can have default extensions on input files with no ambiguity.
Let me guess: Unix allows you to have numbers like 73.945.112, while
73. is a different value from 73? Cool.
Um, you remember this is comp.lang.c ? "73" is an integer constant,
"73." is a double.
On Sun, 4 Feb 2024 14:01:08 +0000, bart wrote:
But it does seem as though Unix was a breeding ground for multitudinous
developer tools. Plus there was little demarcation between user
commands, C development tools, C libraries and OS.
Somebody who's used to that environment is surely going to have trouble
on an OS like MSDOS or Windows where they have to start from nothing.
Even if most of the tools are now free.
Yet it seems like even someone like you, who is supposed to be “used to” Windows rather than *nix, still has the same trouble.
On Windows you can't assume that the end user will be interested in development or have any development tools available.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
On 2/4/2024 5:45 PM, Lawrence D'Oliveiro wrote:
On Mon, 5 Feb 2024 00:07:33 +0000, Malcolm McLean wrote:
On Windows you can't assume that the end user will be interested in
development or have any development tools available.
Worse than that, the assumption is that development will be done in a
proprietary, self-contained IDE, primarily sourced from a single
vendor.
https://youtu.be/i_6zPIWQaUI ;^)
If you must post random YouTube links, can you at least include a 1-line
description so we don't waste *too* much time?
Better yet, if you could cut down on the followups that don't add
anything relevant, I for one would appreciate it.
On 04/02/2024 22:46, Lawrence D'Oliveiro wrote:
On Sun, 4 Feb 2024 14:01:08 +0000, bart wrote:
But it does seem as though Unix was a breeding ground for multitudinous
developer tools. Plus there was little demarcation between user
commands, C development tools, C libraries and OS.
Somebody who's used to that environment is surely going to have trouble
on an OS like MSDOS or Windows where they have to start from nothing.
Even if most of the tools are now free.
Yet it seems like even someone like you, who is supposed to be “used to”
Windows rather than *nix, still has the same trouble. So maybe it’s not
about being “used to” *nix at all, there really is something inherent in
the fundamental design of that environment that makes development work
easier.
On Windows you can't assume that the end user will be interested in
development or have any development tools available. Or that he'll be
able to do anything other than the most basic installation. It's a
consumer platform.
On 04/02/2024 21:51, David Brown wrote:
On 04/02/2024 21:18, bart wrote:
BOTH methods can be problematic if you deliberately or accidentally
mix up file types and extensions.
So stop deliberately being a screw-up.
That was carried over to DOS's 8.3 filename.
At a time when real OS's had moved beyond that.
When was that? The IBM PC came out in 1981. The DEC machines I mentioned
were still in use. Oh, you mean Unix was the One and Only Real OS? I get
it.
What a stupid decision
- it's what you expect when you remember that MS DOS was written as a
quick hack on a system called "quick and dirty OS" as a way for MS to
con its customers.
Funny you should fixate on that, and not on the idea of a business
computer running on a 4.8MHz 8088 processor with a crappy 'CGA' video
board design that would barely pass as a student assignment. (Oh, that
was IBM and not MS, and it is only MS you want to shit all over.)
However it brought business computing to the masses. Where were the
machines running your beloved Unix?
I believe you were working on Spectrums then or some such machines; what filenames did /they/ allow, or did they not actually have a file system?
You're being unjust on the people working on all this stuff at that
period, trying to make things work with small processors, tiny amounts
of memory and limited storage.
This dot then was really a virtual separator that did not need
storing, any more than you need to store the dot in the ieee754
representation of 73.945.
It has given very little trouble, and has the huge advantage that you
can have default extensions on input files with no ambiguity.
Let me guess: Unix allows you to have numbers like 73.945.112, while
73. is a different value from 73? Cool.
Um, you remember this is comp.lang.c ? "73" is an integer constant,
"73." is a double.
Yes. But the question is whether the "." separating out the two parts of
a filename should be actually stored, as a '.' character taking up extra space.
It made perfect sense not to store it at the time. But Unix made a decision
at the time to store it literally, which could also have been thought
crass.
In hindsight, with filenames now allowing arbitrary dots, they made the
right decision. But that was more due to luck. And probably not having
to make concessions to running on low-end hardware.
You however would try and argue that some great foresight was
deliberately exercised and that the people behind those other systems
made a dumb decision.
I'm sorry but you weren't there.
On 05/02/2024 00:11, bart wrote:
[...] Oh, you mean Unix was the One and Only Real OS? I get it.
There have been lots of OS's. MS DOS was - from the beginning - a hack
on a simple limited OS.
[...]
The all-caps names (which then led to the silly case insensitive
behaviour) had no excuse at all.
And /relying/ on file extensions for
critical things like executable type was never smart. (File extensions
for user convenience is fine as a useful convention.)
[...]
Is it "funny" that in discussion about operating systems, I talked about
the operating system - not the hardware? I agree that the IBM PC
hardware was pathetic for its time - for a start, it should have been,
as the designers wanted, built around a 68000 cpu.
You're being unjust on the people working on all this stuff at that
period, trying to make things work with small processors, tiny amounts
of memory and limited storage.
No, I just think they could have done a lot better with what they had.
Let me guess: Unix allows you to have numbers like 73.945.112, while
73. is a different value from 73? Cool.
Um, you remember this is comp.lang.c ? "73" is an integer constant,
"73." is a double.
Yes. But the question is whether the "." separating out the two parts
of a filename should be actually stored, as a '.' character taking up
extra space.
I understand how DOS and its descendants handle this. I understand how almost every other file system and OS handles this. I know which is
better.
[...]
In hindsight, with filenames now allowing arbitrary dots, they made
the right decision.
But that was more due to luck. And probably not
having to make concessions to running on low-end hardware.
[...]
On 04/02/2024 21:51, David Brown wrote:
On 04/02/2024 21:18, bart wrote:
BOTH methods can be problematic if you deliberately or accidentally
mix up file types and extensions.
So stop deliberately being a screw-up.
I was replying initially to somebody claiming that being able to do:
cc prog.a
cc prog.b
cc prog.c
and marshalling the file into the right tool was not only some great
achievement only possible on Linux, but also desirable.
On 2/4/2024 8:41 PM, Keith Thompson wrote:
[...]
Better yet, if you could cut down on the followups that don't add
anything relevant, I for one would appreciate it.
On 30/01/2024 08:17, David Brown wrote:
The build system isn't really about specifying an executable from
sources. If that was all there was to it, I'd probably have been told to set
it up myself. It's more about giving people access to sources and
ensuring that they are consistent and the right version is being used,
On 05.02.2024 13:42, David Brown wrote:
On 05/02/2024 00:11, bart wrote:
[...] Oh, you mean Unix was the One and Only Real OS? I get it.
(Obviously not.)
There have been lots of OS's. MS DOS was - from the beginning - a hack
on a simple limited OS.
And MS marketing was able to foster a community who could easily be brainwashed to find it natural that SW is so buggy and unreliable.
And a few (of the many) flaws, deficiencies, and bugs could be clumsily
worked around. Countless "experts" arose from that, with specialized
"guru wisdom" about the magic needed to work around some of these
well-known flaws. Blue screens were common. A standard tip - and even
still in use nowadays! - was and is "Reboot your system.", and if that
doesn't help then "Reinstall the software.", or "Reinstall the OS" if
nothing helped, and finally "Wait for version N+1 of this OS, it will
all be good then." - and of course it never was.
[...]
The all-caps names (which then led to the silly case insensitive
behaviour) had no excuse at all.
All caps was initially a historic restriction of many OSes due to the
limited character sets. At some point working case sensitivity became possible and supported; MS was not amongst the first here. Later the
need for non-ASCII and internationalization became prevalent and it
became technically possible to support that. Meanwhile we have multi-
lingual computing. For certain user front-ends of applications it is
more useful to not distinguish case; see Google search for a prominent example.
Filenames consisting of "two parts" is a fundamental misconception.
I understand how DOS and its descendants handle this. I understand how
almost every other file system and OS handles this. I know which is
better.
[...]
In hindsight, with filenames now allowing arbitrary dots, they made
the right decision.
(What a bright enlightenment. Great.)
But that was more due to luck. And probably not
having to make concessions to running on low-end hardware.
(And again some stupid continuation; random guesses based on opinion.)
[...]
Janis
On 2/4/2024 4:07 PM, Malcolm McLean wrote:
On 04/02/2024 22:46, Lawrence D'Oliveiro wrote:
On Sun, 4 Feb 2024 14:01:08 +0000, bart wrote:
But it does seem as though Unix was a breeding ground for multitudinous
developer tools. Plus there was little demarcation between user
commands, C development tools, C libraries and OS.
Somebody who's used to that environment is surely going to have trouble
on an OS like MSDOS or Windows where they have to start from nothing.
Even if most of the tools are now free.
Yet it seems like even someone like you, who is supposed to be “used to”
Windows rather than *nix, still has the same trouble. So maybe it’s not
about being “used to” *nix at all, there really is something inherent in
the fundamental design of that environment that makes development work
easier.
On Windows you can't assume that the end user will be interested in
development or have any development tools available.
Fwiw, I have seen Linux users that have no intent to program anything at
all.
Or that he'll be able to do anything other than the most basic
installation. It's a consumer platform.
On 2/4/2024 9:48 AM, David Brown wrote:
[...]
In Windows, it is sometimes part of a file name (when it is not the
last period in the name), sometimes a magical character that appears
or disappears (when the file ends in a period), and sometimes it
delimits a file extension.
picture_of_a_cow____________________this_is_not_a_virus_really.jpeg.gif.exe
lol.
On 05/02/2024 16:48, candycanearter07 wrote:
On 2/4/24 16:02, Chris M. Thomasson wrote:
On 2/4/2024 9:48 AM, David Brown wrote:
[...]
In Windows, it is sometimes part of a file name (when it is not the
last period in the name), sometimes a magical character that appears
or disappears (when the file ends in a period), and sometimes it
delimits a file extension.
picture_of_a_cow____________________this_is_not_a_virus_really.jpeg.gif.exe >>>
lol.
Windows making such a big deal over file extensions and outright
hiding them is silly IMO
Hiding the extension is a complete nightmare. Unless the automatic recognition system works perfectly, you can end up with a file you can't
use.
On 04/02/2024 22:46, Lawrence D'Oliveiro wrote:
On Sun, 4 Feb 2024 14:01:08 +0000, bart wrote:
But it does seem as though Unix was a breeding ground for multitudinous
developer tools. Plus there was little demarcation between user
commands, C development tools, C libraries and OS.
Somebody who's used to that environment is surely going to have trouble
on an OS like MSDOS or Windows where they have to start from nothing.
Even if most of the tools are now free.
Yet it seems like even someone like you, who is supposed to be “used to”
Windows rather than *nix, still has the same trouble.
*I* don't have trouble. Only with other people's projects originating
from Linux.
Apparently, on that OS, nobody knows how to build a program given only
the C source files, and a C compiler.
Or if they do, they are unwilling to part with that information. It is encrypted into a makefile, or worse.
But the tools are *still preinstalled*, so installers can definitely
rely on compiling stuff.
On 05/02/2024 17:37, Jim Jackson wrote:
On 2024-02-04, bart <bc@freeuk.com> wrote:
On 04/02/2024 22:46, Lawrence D'Oliveiro wrote:
On Sun, 4 Feb 2024 14:01:08 +0000, bart wrote:
But it does seem as though Unix was a breeding ground for
multitudinous
developer tools. Plus there was little demarcation between user
commands, C development tools, C libraries and OS.
Somebody who's used to that environment is surely going to have
trouble
on an OS like MSDOS or Windows where they have to start from nothing.
Even if most of the tools are now free.
Yet it seems like even someone like you, who is supposed to be
“used to”
Windows rather than *nix, still has the same trouble.
*I* don't have trouble. Only with other people's projects originating
from Linux.
Apparently, on that OS, nobody knows how to build a program given only
the C source files, and a C compiler.
Programmers and Developers do.
Or if they do, they are unwilling to part with that information. It is
encrypted into a makefile, or worse.
Encrypted? I always thought makefiles were plain text? You can read them
with less^H^H^H^H "more" - which if memory serves, is also a DOS command?
Here's one on my machine I selected almost at random:
!ifndef BCROOT
BCROOT=$(MAKEDIR)\..
!endif
BCC32 = $(BCROOT)\bin\Bcc32.exe
IDE_LinkFLAGS32 = -L$(BCROOT)\LIB
COMPOPTS= -O2 -tWC -tWM- -Vx -Ve -D_NO_VCL; -I../../../../; -L..\..\build\bcb5
timer.exe : regex_timer.cpp
$(BCC32) @&&|
$(COMPOPTS) -e$@ regex_timer.cpp
|
Whilst some of this is pretty clear, it's not all obvious what the
second half of the line
$(BCC32) @&&|
is meant to mean.
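If I remember Borland's make correctly - treat this as a best guess
rather than gospel - `&&` followed by a delimiter character starts an
inline response file: make copies the following lines up to the closing
`|` into a temporary file and substitutes that file's name on the
command line, and the leading `@` is bcc32's "read options from a file"
syntax. So the rule expands to roughly `bcc32 @sometempfile`, where the
temp file holds the expanded $(COMPOPTS) line - a dodge around
command-line length limits.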
On Sun, 4 Feb 2024 01:19:53 +0000, bart <bc@freeuk.com> wrote:
Everybody says use makefiles; well they don't work. They tend to be
heavily skewed towards the use of gcc. My compiler isn't gcc.
By default a lot of builtin "implicit rules" determine which
program to use to make a .o from a .c etc. etc., and yes, that
is GCC-centric.
bart <bc@freeuk.com> writes:
[...]
There was also 'configure' of 11,000 lines, so I switched to WSL. Now
typing ./configure shows:
-bash: ./configure: /bin/sh^M: bad interpreter: No such file or directory
It looks like you've downloaded the source as a .zip file, which was
packaged incorrectly. I've reported this to their mailing list. Try downloading the .tar.gz file instead.
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
On 2/5/2024 6:48 AM, Tim Rentsch wrote:
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
On 2/4/2024 8:41 PM, Keith Thompson wrote:
[...]
Better yet, if you could cut down on the followups that don't add
anything relevant, I for one would appreciate it.
For what it's worth, I second Keith's request, and strenuously
support it.
I was trying to lighten the mood, so to speak. Well, it backfired on me. ;^o
Does that mean you're going to stop? You're just about to land in my
killfile, but I'm willing to reconsider. You do sometimes post relevant
content, but it's just not worth digging through the noise.
Pretty much every front-end not aimed at technical users is
case-insensitive.
-bash: ./configure: /bin/sh^M: bad interpreter: No such file or directory
This is all quite typical.
It /is/ a consumer platform, yes. And because it has no standard ways
to build software, and no one (approximately) using it wants to build software on it, the norm is to distribute code in binary form for
Windows. That works out fine for almost all Windows users. That
includes libraries - even C programmers on Windows don't want to build "libjpeg" or whatever, they want a DLL.
On 2024-02-05, bart <bc@freeuk.com> wrote:
-bash: ./configure: /bin/sh^M: bad interpreter: No such file or directory
This indicates that the thing you're trying to build was converted
to Windows format. See that ^M? It's a carriage return; what's that
doing in a POSIX shell script? Someone likely did that on purpose.
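If you just want to press on, stripping the carriage returns yourself is
a one-minute job. A throwaway C filter, assuming nothing beyond the
standard library:

/* stripcr.c - copy stdin to stdout, dropping carriage returns.
   Usage, roughly: ./stripcr < configure > configure.fixed */
#include <stdio.h>

int main(void)
{
    int c;
    while ((c = getchar()) != EOF)
        if (c != '\r')
            putchar(c);
    return 0;
}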
Firstly, projects with ./configure shell scripts are often not ported to Windows at all. If that is the case, you could be the first one trying
that. In that situation, the best bet is Cygwin. (Or WSL2, but that's basically not Windows.)
The .zip file containing files converted to Windows format suggests
that the package is ported to Windows, using some build environment that
uses CR-LF files like MinGW.
Your best bet is to consult the project and ask them, how is it ported
to Windows? Then do it their way. Otherwise you're on your own.
Another pattern that occurs is that FOSS projects which port their code
to Windows themselves provide binaries for Windows, so they don't expect users to build those. Thus their procedure for building on Windows might not be well documented.
This is all quite typical.
You not knowing where to get a clue and generally being lost
at sea with no rudder or sail?
Don't you have some nephew or niece in the fifth grade who could
help with this?
When I go to the NASM site (https://www.nasm.us) there is a clear
Download link.
In the download link, there are versioned and dated release
directories.
In the most recent one, there are Win32 and Win64 subdirectories.
There is a file
nasm-2.16.02rc9-installer-x64.exe
Doh?
They've gone out of their way to support Windows users with an executable installer.
If you want to know how they built that, they may have documentation elsewhere. There might be instructions in the accompanying .zip or else
you just have to ask in the mailing list.
On Mon, 5 Feb 2024 13:02:52 +0100, David Brown wrote:
It /is/ a consumer platform, yes. And because it has no standard ways
to build software, and no one (approximately) using it wants to build
software on it, the norm is to distribute code in binary form for
Windows. That works out fine for almost all Windows users. That
includes libraries - even C programmers on Windows don't want to build
"libjpeg" or whatever, they want a DLL.
But without integrated package management, how do you keep it all up to
date? If two separate apps use the same library, do they each end up with their own version, or do they share one version? Does each app have to run its own periodic background updater task to tell you there’s a new version available?
I am hoping Chris is lucky enough to be given this honor.
Ah. So of course Keith couldn't understand what I was saying.
On 05/02/2024 22:51, Lawrence D'Oliveiro wrote:
On Mon, 5 Feb 2024 13:02:52 +0100, David Brown wrote:
It /is/ a consumer platform, yes. And because it has no standard ways
to build software, and no one (approximately) using it wants to build
software on it, the norm is to distribute code in binary form for
Windows. That works out fine for almost all Windows users. That
includes libraries - even C programmers on Windows don't want to build
"libjpeg" or whatever, they want a DLL.
But without integrated package management, how do you keep it all up to
date? If two separate apps use the same library, do they each end up with
their own version, or do they share one version? Does each app have to run
its own periodic background updater task to tell you there’s a new version
available?
The term is DLL hell.
If a DLL changes, does that mean that apps which called the old DLL and
were buggy should call the new DLL and will now be fixed?
On 2024-02-05, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Mon, 5 Feb 2024 13:02:52 +0100, David Brown wrote:
It /is/ a consumer platform, yes. And because it has no standard ways
to build software, and no one (approximately) using it wants to build
software on it, the norm is to distribute code in binary form for
Windows. That works out fine for almost all Windows users. That
includes libraries - even C programmers on Windows don't want to build
"libjpeg" or whatever, they want a DLL.
But without integrated package management, how do you keep it all up to
date? If two separate apps use the same library, do they each end up with
their own version, or do they share one version? Does each app have to run
its own periodic background updater task to tell you there’s a new version
available?
Windows has solved this problem. Executables find .DLL libraries in
their own directory.
You ship a program with the exact libraries it needs which you
tested with and those are the ones it will use.
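For instance (a sketch only; "libjpeg.dll" and "jpeg_func" are placeholder
names), a program can locate a DLL next to its own .exe explicitly, although
LoadLibrary already searches the executable's directory by default:
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    char path[MAX_PATH];

    /* Full path of this executable, then swap the file name part. */
    GetModuleFileNameA(NULL, path, MAX_PATH);
    char *slash = strrchr(path, '\\');
    if (slash)
        strcpy(slash + 1, "libjpeg.dll");   /* placeholder DLL name */

    HMODULE lib = LoadLibraryA(path);
    if (lib == NULL) {
        fprintf(stderr, "cannot load %s\n", path);
        return 1;
    }
    FARPROC f = GetProcAddress(lib, "jpeg_func");  /* placeholder symbol */
    printf("loaded %s; jpeg_func %s\n", path, f ? "found" : "not found");
    FreeLibrary(lib);
    return 0;
}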
On 02/02/2024 14:45, Michael S wrote:
Actually, nowadays monolithic tools are solid majority in
programming. I mean, programming in general, not C/C++/Fortran
programming which by itself is a [sizable] minority.
Even in C++, a majority uses non-monolithic tools well-hidden behind
front end (IDE) that makes them indistinguishable from monolithic.
It can often be helpful to have a single point of interaction - a
front-end that combines tools. But usually these are made of parts.
For many of the microcontrollers I work with, the manufacturer's
standard development toolset is based around Eclipse and gcc. From
the user point of view, it looks a lot like one monolithic IDE that
lets you write your code, compile and link it, and download and debug
it on the microcontroller. Under the hood, it is far from a
monolithic application. Different bits come from many different
places. This means the microcontroller manufacturer is only making
the bits that are specific to /their/ needs - such as special views
while debugging, or "wizards" for configuring chip pins. The Eclipse
folk are experts at making an editor and IDE, the gcc folks are
experts at the compiler, the openocd folks know about jtag debugging,
and so on. And to a fair extent, advanced users can use the bits
they want and leave out other bits. I sometimes use other editors,
but might still use the toolchain provided with the manufacturer's
tools. I might swap out the debugger connection. I might use the
IDE for something completely different. I might install additional
features in the IDE. I might use different toolchains.
Manufacturers, when putting things together, might change where they
get their toolchains, or what debugging connectors they use. It's
even been known for them to swap out the base IDE while keeping most
of the rest the same (VS Code has become a popular choice now, and a
few use NetBeans rather than Eclipse).
(Oh, and for those that don't believe "make" and "gcc" work on
Windows, these development tools invariably have "make" and almost
invariably use gcc as their toolchain, all working in almost exactly
the same way on Linux and Windows. The only difference is builds are
faster on Linux.)
This is getting the best (or at least, trying to) from all worlds.
It gives people the ease-of-use advantages of monolithic tools
without the key disadvantages of real monolithic tools - half-arsed
editors, half-arsed project managers, half-arsed compilers, and poor extensibility because the suppliers are trying to do far too much
themselves.
I don't think it is common now to have /real/ monolithic development
tools. But it is common to have front-ends aimed at making the
underlying tools easier and more efficient to use, and to provide
all-in-one base packages.
On Fri, 2 Feb 2024 16:26:12 +0100
David Brown <david.brown@hesbynett.no> wrote:
On 02/02/2024 14:45, Michael S wrote:
Actually, nowadays monolithic tools are solid majority in
programming. I mean, programming in general, not C/C++/Fortran
programming which by itself is a [sizable] minority.
Even in C++, a majority uses non-monolithic tools well-hidden behind
front end (IDE) that makes them indistinguishable from monolithic.
It can often be helpful to have a single point of interaction - a
front-end that combines tools. But usually these are made of parts.
For many of the microcontrollers I work with, the manufacturer's
standard development toolset is based around Eclipse and gcc. From
the user point of view, it looks a lot like one monolithic IDE that
lets you write your code, compile and link it, and download and debug
it on the microcontroller. Under the hood, it is far from a
monolithic application. Different bits come from many different
places. This means the microcontroller manufacturer is only making
the bits that are specific to /their/ needs - such as special views
while debugging, or "wizards" for configuring chip pins. The Eclipse
folk are experts at making an editor and IDE, the gcc folks are
experts at the compiler, the openocd folks know about jtag debugging,
and so on. And to a fair extent, advanced users can use the bits
they want and leave out other bits. I sometimes use other editors,
but might still use the toolchain provided with the manufacturer's
tools. I might swap out the debugger connection. I might use the
IDE for something completely different. I might install additional
features in the IDE. I might use different toolchains.
Manufacturers, when putting things together, might change where they
get their toolchains, or what debugging connectors they use. It's
even been known for them to swap out the base IDE while keeping most
of the rest the same (VS Code has become a popular choice now, and a
few use NetBeans rather than Eclipse).
(Oh, and for those that don't believe "make" and "gcc" work on
Windows, these development tools invariably have "make" and almost
invariably use gcc as their toolchain, all working in almost exactly
the same way on Linux and Windows. The only difference is builds are
faster on Linux.)
This is getting the best (or at least, trying to) from all worlds.
It gives people the ease-of-use advantages of monolithic tools
without the key disadvantages of real monolithic tools - half-arse
editors, half-arsed project managers, half-arsed compilers, and poor
extensibility because the suppliers are trying to do far too much
themselves.
I don't think it is common now to have /real/ monolithic development
tools. But it is common to have front-ends aimed at making the
underlying tools easier and more efficient to use, and to provide
all-in-one base packages.
First, you moved the goal posts from monolithic compilers to monolithic
IDEs. Second, you are still talking about C/C++/Fortran.
That's not where the majority of software development happens these days.
The most used language is JavaScript. Where exactly does a JavaScript dev
see a separate compiler and linker?
The second most used language is Python. The same question applies there.
Even in more traditional compiled/JITted and mostly statically typed
programming environments like Java/Kotlin, .NET, Swift, Go, and Rust, even
if they use separate tools for compiling, assembling, linking and build
management, it is all integrated in a way that even a die-hard command-line
user does not notice the separation.
It is not monolithic by any means - but it /looks/ that way for user convenience.
On 2024-02-05, Malcolm McLean <malcolm.arthur.mclean@gmail.com> wrote:
The Glibc shared library loading mechanism doesn't implement the nice
strategy of finding libraries in the same directory as the executable.
Kaz Kylheku <433-929-6894@kylheku.com> writes:
On 2024-02-05, Malcolm McLean <malcolm.arthur.mclean@gmail.com> wrote:
The Glibc shared library loading mechanism doesn't implement the nice
strategy of finding libraries in the same directory as the executable.
Sure it does, if you tell it to. viz. LD_LIBRARY_PATH.
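(It can also be done by hand from inside the program. A Linux-specific
sketch, with "libfoo.so" as a placeholder name, that dlopen()s a library
sitting next to the executable -- link with -ldl:
#include <dlfcn.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char path[4096];

    /* Resolve our own executable's path, then swap the file name part. */
    ssize_t n = readlink("/proc/self/exe", path, sizeof path - 1);
    if (n < 0)
        return 1;
    path[n] = '\0';
    char *slash = strrchr(path, '/');
    if (slash)
        strcpy(slash + 1, "libfoo.so");   /* placeholder library name */

    void *lib = dlopen(path, RTLD_NOW);
    printf("%s: %s\n", path, lib ? "loaded" : dlerror());
    return lib ? 0 : 1;
}
)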
On Tue, 6 Feb 2024 13:14:14 +0100
David Brown <david.brown@hesbynett.no> wrote:
It is not monolithic by any means - but it /looks/ that way for user
convenience.
And Bart wants the same for a slightly extended variant of C, that's
all. As I understand it, he does not care deeply about the
distinction between "truly monolithic" and an integrated compiler + linker
+ build system, as long as it looks monolithic.
Or maybe I should say that he will certainly express his unhappiness
about the size and speed of the monolithic-looking tool, and about the
fact that it has to be installed, if it has to be installed, at least 20
times per week, but at least he will be reasonably satisfied with the
functionality.
There are also products like Pico C, an interpreter, about 130KB self-contained in one file, although it has limitations and is very slow
even for an interpreter. It could be adequate though for scripting builds.
I know David Brown doesn't like 'toy' implementations of C, but if you
need to bundle something for example, then the smaller and more self-contained the better.
On 2024-02-06, Scott Lurndal <scott@slp53.sl.home> wrote:
Kaz Kylheku <433-929-6894@kylheku.com> writes:
On 2024-02-05, Malcolm McLean <malcolm.arthur.mclean@gmail.com> wrote:
The Glibc shared library loading mechanism doesn't implement the nice
strategy of finding libraries in the same directory as the executable.
Sure it does, if you tell it to. viz. LD_LIBRARY_PATH.
Ah, that has this $ORIGIN mechanism now.
Even if the distro doesn't have that in its LD_LIBRARY_PATH,
you can put that into your executable's rpath.
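With the GNU toolchain that looks something like this ('foo' being a
placeholder library name):
gcc main.c -L. -lfoo -Wl,-rpath,'$ORIGIN'
The single quotes stop the shell expanding $ORIGIN; the dynamic linker
substitutes the directory containing the executable at load time.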
LD_LIBRARY_PATH isn't a distro thing, it's a shell thing
interpreted by the dynamic linker. The dynamic linker has
a set of default paths that it uses, set by the distro,
which can be overridden in LD_LIBRARY_PATH by each user.
There is a trend now for newer languages to come as one giant
executable ...
On Tue, 06 Feb 2024 19:20:06 GMT, Scott Lurndal wrote:
LD_LIBRARY_PATH isn't a distro thing, its a shell thing
interpreted by the dynamic linker. The dynamic linker has
a set of default paths that it uses, set by the distro,
which can be overridden in LD_LIBRARY_PATH by each user.
It’s a GNU thing, I think.
On Tue, 06 Feb 2024 20:32:49 +0000, Lawrence D'Oliveiro wrote:
On Tue, 06 Feb 2024 19:20:06 GMT, Scott Lurndal wrote:
LD_LIBRARY_PATH isn't a distro thing, its a shell thing
interpreted by the dynamic linker. The dynamic linker has
a set of default paths that it uses, set by the distro,
which can be overridden in LD_LIBRARY_PATH by each user.
It’s a GNU thing, I think.
It's a UNIX thing. GNU supports it, as it supports other
UNIX requirements.
Lew Pitcher <lew.pitcher@digitalfreehold.ca> writes:
On Tue, 06 Feb 2024 20:32:49 +0000, Lawrence D'Oliveiro wrote:
On Tue, 06 Feb 2024 19:20:06 GMT, Scott Lurndal wrote:
LD_LIBRARY_PATH isn't a distro thing, its a shell thing
interpreted by the dynamic linker. The dynamic linker has
a set of default paths that it uses, set by the distro,
which can be overridden in LD_LIBRARY_PATH by each user.
It’s a GNU thing, I think.
It's a UNIX thing. GNU supports it, as it supports other
UNIX requirements.
Where is it documented as a UNIX requirement? POSIX doesn't seem to
mention it.
On 2024-02-06, Lew Pitcher <lew.pitcher@digitalfreehold.ca> wrote:
On Tue, 06 Feb 2024 20:32:49 +0000, Lawrence D'Oliveiro wrote:
On Tue, 06 Feb 2024 19:20:06 GMT, Scott Lurndal wrote:
LD_LIBRARY_PATH isn't a distro thing, its a shell thing
interpreted by the dynamic linker. The dynamic linker has
a set of default paths that it uses, set by the distro,
which can be overridden in LD_LIBRARY_PATH by each user.
It’s a GNU thing, I think.
It's a UNIX thing. GNU supports it, as it supports other
UNIX requirements.
I can't find any mention of LD_LIBRARY_PATH in SuS.
Not under dlopen or anywhere else.
I'm looking at (pretty old) Solaris documentation. It has the $ORIGIN
variable supported in both LD_LIBRARY_PATH and the internal path you can
set in executables.
I also found a 1998-08 commit from Ulrich Drepper adding the expansion
support with ORIGIN.
I think the documentation of it may have lagged behind, that's all,
but we have had it "forever".
On 06.02.2024 21:32, Lawrence D'Oliveiro wrote:
On Tue, 06 Feb 2024 19:20:06 GMT, Scott Lurndal wrote:
LD_LIBRARY_PATH isn't a distro thing, its a shell thing
interpreted by the dynamic linker. The dynamic linker has
a set of default paths that it uses, set by the distro,
which can be overridden in LD_LIBRARY_PATH by each user.
It’s a GNU thing, I think.
I think we've used it on AIX and HP-UX already.
bart <bc@freeuk.com> writes:
On 04/02/2024 17:48, David Brown wrote:
On 03/02/2024 20:35, bart wrote:
It is Windows that places more store by file extensions, which Linux
people say is a bad thing.
Windows is too dependent on them, and too trusting.
But above you say that is the advantage of Linux.
Yes, it's a hands-down win for Linux (and other *nix) in this aspect.
Yet it is Linux (manifested via gcc) where it ASSUMES .x is a linker script
I've never seen a '.x ' suffix. Ever. And I use linker scripts
regularly.
On 2024-02-06, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
On 06.02.2024 21:32, Lawrence D'Oliveiro wrote:
On Tue, 06 Feb 2024 19:20:06 GMT, Scott Lurndal wrote:
LD_LIBRARY_PATH isn't a distro thing, its a shell thing
interpreted by the dynamic linker. The dynamic linker has
a set of default paths that it uses, set by the distro,
which can be overridden in LD_LIBRARY_PATH by each user.
It’s a GNU thing, I think.
I think we've used it on AIX and HP-UX already.
Some IBM documentation I was able to dig up on the web says that AIX 5.3 [2004] introduced LD_LIBRARY_PATH; before that it was LIBPATH in AIX
5.1, which continues to work. Nothing about the $ORIGIN expansion.
[...]
Also, there are linker scripts that end in ".x"
which on my system live here:
/usr/lib/x86_64-linux-gnu/ldscripts/
Fascinating to read -- and way over my head. (The man page for GNU ld
says they are "AT&T's Link Editor Command Language syntax".) I'm not
sure how often an average programmer would look around in there.
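(You don't have to dig around in that directory, incidentally: running
"ld --verbose" makes GNU ld print the internal default linker script it
is using.)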
On 2024-02-05, candycanearter07 <no@thanks.net> wrote:
But the tools are *still preinstalled*, so installers can definitely
rely on compiling stuff.
No, they aren't. It's common for devel tools not to be part of the base system of a distro.
On 2024-02-06, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
On 06.02.2024 21:32, Lawrence D'Oliveiro wrote:
On Tue, 06 Feb 2024 19:20:06 GMT, Scott Lurndal wrote:
LD_LIBRARY_PATH isn't a distro thing, its a shell thing
interpreted by the dynamic linker. The dynamic linker has
a set of default paths that it uses, set by the distro,
which can be overridden in LD_LIBRARY_PATH by each user.
It’s a GNU thing, I think.
I think we've used it on AIX and HP-UX already.
Some IBM documentation I was able to dig up on the web says that AIX 5.3 [2004] introduced LD_LIBRARY_PATH; before that it was LIBPATH in AIX
5.1, which continues to work. Nothing about the $ORIGIN expansion.
The GCC Compile Farm Project has an AIX machine. I'm logging in there
now. Looks like the "load" and "dlopen" man pages reference
LD_LIBRARY_PATH. None of them mention any interpolation of parameters
being supported. It probably doesn't exist.
On Sun, 04 Feb 2024 20:55:12 GMT, scott@slp53.sl.home (Scott Lurndal)
wrote in <QQSvN.294647$Wp_8.94897@fx17.iad>:
bart <bc@freeuk.com> writes:
On 04/02/2024 17:48, David Brown wrote:
On 03/02/2024 20:35, bart wrote:
It is Windows that places more store by file extensions, which Linux
people say is a bad thing.
Windows is too dependent on them, and too trusting.
But above you say that is the advantage of Linux.
Yes, it's a hands-down win for Linux (and other *nix) in this aspect.
Yet it is Linux (manifested via gcc) where it ASSUMES .x is a linker script
I've never seen a '.x ' suffix. Ever. And I use linker scripts
regularly.
This was the first I'd heard about them in this context, but Open
Network Computing's RPC (ONCRPC, was SunRPC) does use .x files
for its RPC specifications.
ONCRPC is a system for generating C stubs for network
services, and it is (was?) also used to specify
UNIX services like NFS and NIS. The Sun of yore
were, indeed, good denizens of the Net. (So, crossposting
conditions satisfied...I think?)
Anyway, if you have the "standard" .x files
installed on Linux Mint, they live in
/usr/include/rpcsvc/
Also, there are linker scripts that end in ".x"
which on my system live here:
/usr/lib/x86_64-linux-gnu/ldscripts/
Fascinating to read -- and way over my head. (The
man page for GNU ld says they are
"AT&T's Link Editor Command Language syntax".) I'm
not sure how often an average programmer would look
around in there.
In any event, the ".x" files in that directory are in
the minority...
On 07/02/2024 03:57, vallor wrote:
On Sun, 04 Feb 2024 20:55:12 GMT, scott@slp53.sl.home (Scott Lurndal)
wrote in <QQSvN.294647$Wp_8.94897@fx17.iad>:
bart <bc@freeuk.com> writes:
On 04/02/2024 17:48, David Brown wrote:
On 03/02/2024 20:35, bart wrote:
It is Windows that places more store by file extensions, which Linux
people say is a bad thing.
Windows is too dependent on them, and too trusting.
But above you say that is the advantage of Linux.
Yes, it's a hands-down win for Linux (and other *nix) in this aspect.
Yet it is Linux (manifested via gcc) where it ASSUMES .x is a linker script
I've never seen a '.x ' suffix. Ever. And I use linker scripts
regularly.
This was the first I'd heard about them in this context, but Open
Network Computing's RPC (ONCRPC, was SunRPC) does use .x files
for its RPC specifications.
ONCRPC is a system for generating C stubs for network
services, and it is (was?) also used to specify
UNIX services like NFS and NIS. The Sun of yore
were, indeed, good denizens of the Net. (So, crossposting
conditions satisfied...I think?)
Anyway, if you have the "standard" .x files
installed on Linux Mint, they live in
/usr/include/rpcsvc/
Also, there are linker scripts that end in ".x"
which on my system live here:
/usr/lib/x86_64-linux-gnu/ldscripts/
Fascinating to read -- and way over my head. (The
man page for GNU ld says they are
"AT&T's Link Editor Command Language syntax".) I'm
not sure how often an average programmer would look
around in there.
In any event, the ".x" files in that directory are in
the minority...
If you look in that directory, you'll see all the files are ".x<flags>", where <flags> are letters. So you get ".x", ".xbn", ".xc", ".xce", and
a dozen other combinations. I don't know the details of the flags, but
they generally refer to different arrangements of code and data (for
example, merging read-only data and executable code, or keeping them separate).
There's no doubt that ".x" and ".x<flags>" are common extensions for
linker scripts, but they do not act as file extensions in the same
way as for other source code; the letters are sets of flags. (That's why
gcc treats any unknown extension as a linker script and passes it to the
linker.)
On 07/02/2024 05:41, candycanearter07 wrote:
On 2/5/24 12:13, Kaz Kylheku wrote:
On 2024-02-05, candycanearter07 <no@thanks.net> wrote:
But the tools are *still preinstalled*, so installers can definitely
rely on compiling stuff.
No, they aren't. It's common for devel tools not to be part of the base
system of a distro.
Wait really?
If you install Windows you don't get Visual Studio and you have to install
it separately. If you install Linux you get gcc and other development
tools, and I don't think there's even a way of setting up the install to
say you don't want them.
Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:
On 07/02/2024 05:41, candycanearter07 wrote:
On 2/5/24 12:13, Kaz Kylheku wrote:
On 2024-02-05, candycanearter07 <no@thanks.net> wrote:
But the tools are *still preinstalled*, so installers can definitely
rely on compiling stuff.
No, they aren't. It's common for devel tools not to be part of the base
system of a distro.
Wait really?
If you install Windows you don't get Visual Studio and you have to install
it separately. If you install Linux you get gcc and other development
tools, and I don't think there's even a way of setting up the install to
say you don't want them.
Why do you say these things without checking? It's not uncommon to have Linux installs without gcc.
On 07/02/2024 02:18, Kaz Kylheku wrote:
On 2024-02-06, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
On 06.02.2024 21:32, Lawrence D'Oliveiro wrote:
On Tue, 06 Feb 2024 19:20:06 GMT, Scott Lurndal wrote:
LD_LIBRARY_PATH isn't a distro thing, its a shell thing
interpreted by the dynamic linker. The dynamic linker has
a set of default paths that it uses, set by the distro,
which can be overridden in LD_LIBRARY_PATH by each user.
It’s a GNU thing, I think.
I think we've used it on AIX and HP-UX already.
Some IBM documentation I was able to dig up on the web says that AIX 5.3
[2004] introduced LD_LIBRARY_PATH; before that it was LIBPATH in AIX
5.1, which continues to work. Nothing about the $SOURCE expansion.
The GCC Compile Farm Project has an AIX machine. I'm logging in there
now. Looks like the "load" and "dlopen" man pages reference
LD_LIBRARY_PATH. None of them mention any interpolation of parameters
being supported. It probably doesn't exist.
Wasn't it SHLIB_PATH on HP/UX?
On 07/02/2024 18:17, Richard Harnden wrote:
On 07/02/2024 02:18, Kaz Kylheku wrote:
On 2024-02-06, Janis Papanagnou <janis_papanagnou+ng@hotmail.com>
wrote:
On 06.02.2024 21:32, Lawrence D'Oliveiro wrote:
On Tue, 06 Feb 2024 19:20:06 GMT, Scott Lurndal wrote:
LD_LIBRARY_PATH isn't a distro thing, its a shell thing
interpreted by the dynamic linker. The dynamic linker has
a set of default paths that it uses, set by the distro,
which can be overridden in LD_LIBRARY_PATH by each user.
It’s a GNU thing, I think.
I think we've used it on AIX and HP-UX already.
Some IBM documentation I was able to dig up on the web says that
AIX 5.3 [2004] introduced LD_LIBRARY_PATH; before that it was
LIBPATH in AIX 5.1, which continues to work. Nothing about the
$SOURCE expansion.
The GCC Compile Farm Project has an AIX machine. I'm logging in
there now. Looks like the "load" and "dlopen" man pages reference
LD_LIBRARY_PATH. None of them mention any interpolation of
parameters being supported. It probably doesn't exist.
Wasn't it SHLIB_PATH on HP/UX?
It still is. (Yes, some of us have to maintain these boxes because,
although they were all amortised a decade or two ago, someone in a
bank/taxation department/insurance company/&c knows that replacing
them will be an expensive and time-consuming process. So they'll be
replaced - after they collapse into a pile of rust - in a mad panic
with Linux boxes running something written in a mad rush in
Python/PHP/Perl - by people who don't understand the requirements,
briefed by people who don't understand the requirements - that sort
of does the same job the old machines did, if you squint really,
really hard. And /don't/ get audited by anyone competent. However,
that one's *really* unlikely. :-) )
Cheers,
Gary B-)
On 07/02/2024 10:56, Malcolm McLean wrote:
On 07/02/2024 05:41, candycanearter07 wrote:
On 2/5/24 12:13, Kaz Kylheku wrote:
On 2024-02-05, candycanearter07 <no@thanks.net> wrote:
But the tools are *still preinstalled*, so installers can definitely
rely on compiling stuff.
No, they aren't. It's common for devel tools not to be part of the base
system of a distro.
Wait really?
If you install Windows you don't get Visual Studio and you have to
install it separately. If you install Linux you get gcc and other
development tools, and I don't think there's even a way of setting up
the install to say you don't want them.
There are several hundred Linux distributions, not including the niche
ones or outdated ones. Have you tried them all?
Most "normal user" oriented distros do not have gcc or related tools installed by default, nor do most server systems, or firewall systems,
or small installations. Installing the tools is usually very simple ("apt-get install build-essentials", or equivalent), but they are not included by default in the installation.
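(The exact incantation varies by distro family - from memory, so check
your own system: "apt-get install build-essential" on Debian/Ubuntu,
"dnf install gcc make" on Fedora, "pacman -S base-devel" on Arch.)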
On 07/02/2024 08:42, David Brown wrote:
On 07/02/2024 03:57, vallor wrote:
On Sun, 04 Feb 2024 20:55:12 GMT, scott@slp53.sl.home (Scott Lurndal)
wrote in <QQSvN.294647$Wp_8.94897@fx17.iad>:
bart <bc@freeuk.com> writes:
On 04/02/2024 17:48, David Brown wrote:
On 03/02/2024 20:35, bart wrote:
It is Windows that places more store by file extensions, which Linux
people say is a bad thing.
Windows is too dependent on them, and too trusting.
But above you say that is the advantage of Linux.
Yes, it's a hands-down win for Linux (and other *nix) in this aspect.
Yet it is Linux (manifested via gcc) where it ASSUMES .x is a linker script
I've never seen a '.x ' suffix. Ever. And I use linker scripts
regularly.
This was the first I'd heard about them in this context, but Open
Network Computing's RPC (ONCRPC, was SunRPC) does use .x files
for its RPC specifications.
ONCRPC is a system for generating C stubs for network
services, and it is (was?) also used to specify
UNIX services like NFS and NIS. The Sun of yore
were, indeed, good denizens of the Net. (So, crossposting
conditions satisfied...I think?)
Anyway, if you have the "standard" .x files
installed on Linux Mint, they live in
/usr/include/rpcsvc/
Also, there are linker scripts that end in ".x"
which on my system live here:
/usr/lib/x86_64-linux-gnu/ldscripts/
Fascinating to read -- and way over my head. (The
man page for GNU ld says they are
"AT&T's Link Editor Command Language syntax".) I'm
not sure how often an average programmer would look
around in there.
In any event, the ".x" files in that directory are in
the minority...
If you look in that directory, you'll see all the files are
".x<flags>", where <flags> are letters. So you get ".x", ".xbn",
".xc", ".xce", and a dozen other combinations. I don't know the
details of the flags, but they generally refer to different
arrangements of code and data (for example, merging read-only data and
executable code, or keeping them separate).
There's no doubt that ".x", and ".x<flags>", are common extensions for
linker files, but that they do not act as file extensions in the same
way as for other source code. Instead, they are sets of flags.
(That's why gcc treats any unknown extension as a linker file.)
A bit like my tools treat an unknown extension as a file of whatever
language the tool primarily works with?
Cool. But is gcc primarily used for linker files? I'm not even sure what
a linker file is!
On Wed, 7 Feb 2024 02:57:39 -0000 (UTC), vallor wrote:
Also, there are linker scripts that end in ".x"
which on my system live here:
/usr/lib/x86_64-linux-gnu/ldscripts/
Fascinating to read -- and way over my head. (The man page for GNU ld
says they are "AT&T's Link Editor Command Language syntax".) I'm not
sure how often an average programmer would look around in there.
Documentation on the script language here:
<https://sourceware.org/binutils/docs/ld/Scripts.html>.
An obvious example of the need for a custom linker script would be
building the Linux kernel, where you need a special format for the
resulting binary that can be loaded by a bootloader.
On 2/5/24 12:13, Kaz Kylheku wrote:
On 2024-02-05, candycanearter07 <no@thanks.net> wrote:
But the tools are *still preinstalled*, so installers can definitely
rely on compiling stuff.
No, they aren't. It's common for devel tools not to be part of the base
system of a distro.
Wait really?
On 07/02/2024 14:09, David Brown wrote:
On 07/02/2024 10:56, Malcolm McLean wrote:
On 07/02/2024 05:41, candycanearter07 wrote:
On 2/5/24 12:13, Kaz Kylheku wrote:
On 2024-02-05, candycanearter07 <no@thanks.net> wrote:
But the tools are *still preinstalled*, so installers can definitely
rely on compiling stuff.
No, they aren't. It's common for devel tools not to be part of the base
system of a distro.
Wait really?
If you install Windows you don't get Visual Studio and you have to
install it separately. If you install Linux you get gcc and other
development tools, and I don't think there's even a way of setting up
the install to say you don't want them.
There are several hundred Linux distributions, not including the niche
ones or outdated ones. Have you tried them all?
Most "normal user" oriented distros do not have gcc or related tools
installed by default, nor do most server systems, or firewall systems,
or small installations. Installing the tools is usually very simple
("apt-get install build-essentials", or equivalent), but they are not
included by default in the installation.
I've tried a fair number. The ones that used to come on CDs for you to boot
on a Windows PC. Ones installed on one or two crummy Linux notebooks.
The ones you downloaded to use with VirtualBox. The various versions
you downloaded and burned onto an SD card to plug into RPis. And most
recently the ones that come with WSL.
I think pretty much all of them that I remember came with a C compiler.
But isn't this also supposed to be one big advantage of Linux over
Windows that this stuff is built-in?
Installing the tools is usually very simple
("apt-get install build-essentials", or equivalent),
Is 'apt-get' always available?
candycanearter07 <no@thanks.net> writes:
On 2/5/24 12:13, Kaz Kylheku wrote:
On 2024-02-05, candycanearter07 <no@thanks.net> wrote:
But the tools are *still preinstalled*, so installers can definitely
rely on compiling stuff.
No, they aren't. It's common for devel tools not to be part of the base
system of a distro.
Wait really?
Yes, most distros won't install -devel packages, just the binary packages.
Including ubuntu, just some of the packages we need to install
on a clean ubuntu:
apt-get update -y
apt -y install ksh
apt -y install csh
apt -y install tcsh
apt -y install nis
apt -y install autofs
apt -y install make
apt -y install libedit
apt -y install libedit-dev
apt -y install zlib1g
apt -y install zlib1g-dev
apt -y install ghostscript
apt -y install python3
apt -y install python3-config
apt -y install libelf-dev
apt -y install libboost-all-dev
apt -y install libpcap-dev
apt -y install libssl-dev
apt -y install libgmp-dev
apt -y install libattr1-dev
apt -y install environment-modules
apt -y install tclsh
apt -y install xterm
apt -y install libnss3-dev
apt -y install libatk1.0-0
apt -y install libatk-bridge-2.0-0-udeb
apt -y install libatk-bridge-2.0-0-udeb
apt -y install libatk-bridge-2.0
apt -y install libatk-bridge2.0-0
apt -y install libgtk2.0-0
apt -y install libgtk-3-0
apt -y install libgbm-dev
apt -y install libasound2
apt -y install yum-utils
apt -y install python-requests
apt -y install python-pexpect
apt -y install emacs
apt -y install vim-gtk
apt -y install numactl
apt -y install libmotif-dev
apt -y install tightvncserver
apt -y install patchelf
apt -y install p7zip-full
apt -y install meld
apt -y install ctags
apt -y install clang-format
apt -y install xfce4 xfce4-goodies
On 07/02/2024 14:09, David Brown wrote:
On 07/02/2024 10:56, Malcolm McLean wrote:
On 07/02/2024 05:41, candycanearter07 wrote:
On 2/5/24 12:13, Kaz Kylheku wrote:
On 2024-02-05, candycanearter07 <no@thanks.net> wrote:
But the tools are *still preinstalled*, so installers can definitely
rely on compiling stuff.
No, they aren't. It's common for devel tools not to be part of the base
system of a distro.
Wait really?
If you install Windows you don't get Visual Studio and you have to
install it separately. If you install Linux you get gcc and other
development tools, and I don't think there's even a way of setting up
the install to say you don't want them.
There are several hundred Linux distributions, not including the niche
ones or outdated ones. Have you tried them all?
Most "normal user" oriented distros do not have gcc or related tools
installed by default, nor do most server systems, or firewall systems, or
small installations. Installing the tools is usually very simple
("apt-get install build-essential", or equivalent), but they are not
included by default in the installation.
I've installed Linux several times on a desktop machine. I can never
remember being given an option to not install gcc.
Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:
On 07/02/2024 14:09, David Brown wrote:
On 07/02/2024 10:56, Malcolm McLean wrote:
On 07/02/2024 05:41, candycanearter07 wrote:
On 2/5/24 12:13, Kaz Kylheku wrote:
On 2024-02-05, candycanearter07 <no@thanks.net> wrote:
But the tools are *still preinstalled*, so installers can definitely
rely on compiling stuff.
No, they aren't. It's common for devel tools not to be part of the base
system of a distro.
Wait really?
If you install Windows you don't get Visual Studio and you have to
install it separately. If you install Linux you get gcc and other
development tools, and I don't think there's even a way of setting up
the install to say you don't want them.
There are several hundred Linux distributions, not including the niche
ones or outdated ones. Have you tried them all?
Most "normal user" oriented distros do not have gcc or related tools
installed by default, nor do most server systems, or firewall systems, or
small installations. Installing the tools is usually very simple
("apt-get install build-essential", or equivalent), but they are not
included by default in the installation.
I've installed Linux several times on a desktop machine. I can never
remember being given an option to not install gcc.
Which is beside the point. You said you "get gcc and other development
tools". Which distribution(s) did you install?
candycanearter07 <no@thanks.net> writes:
On 2/7/24 09:30, Scott Lurndal wrote:
candycanearter07 <no@thanks.net> writes:
On 2/5/24 12:13, Kaz Kylheku wrote:
On 2024-02-05, candycanearter07 <no@thanks.net> wrote:
But the tools are *still preinstalled*, so installers can definitely
rely on compiling stuff.
No, they aren't. It's common for devel tools not to be part of the base
system of a distro.
Wait really?
Yes, most distros won't install -devel packages, just the binary packages.
Including ubuntu, just some of the packages we need to install
on a clean ubuntu:
apt-get update -y
apt -y install ksh
[43 lines deleted]
apt -y install clang-format
apt -y install xfce4 xfce4-goodies
Weird. IG I haven't reinstalled in a while..
When you post a followup to a long article, please delete any quoted
material that isn't relevant to your followup, as I've done here.
Thanks.
LD_LIBRARY_PATH is not a GNUism, but part of the Unix heritage.
And my point here is that, when "shared
objects" became popular, Unix system authors/vendors tried to mitigate
"DLL hell", often by "inventing" the same mechanism under different
names.
On Wed, 7 Feb 2024 15:02:07 -0000 (UTC), Lew Pitcher wrote:
LD_LIBRARY_PATH is not a GNUism, but part of the Unix heritage.
This is why we say “*nix”, not “Unix”. We leave the “Unix” trademark to
the lawyers and those with enough money to pay them. We just get on and do
our work on “*nix” systems.
On 07/02/2024 15:27, Scott Lurndal wrote:
<snip>
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Wed, 7 Feb 2024 02:57:39 -0000 (UTC), vallor wrote:
Also, there are linker scripts that end in ".x"
which on my system live here:
/usr/lib/x86_64-linux-gnu/ldscripts/
Fascinating to read -- and way over my head. (The man page for GNU ld
says they are "AT&T's Link Editor Command Language syntax".) I'm not
sure how often an average programmer would look around in there.
Documentation on the script language here
<https://sourceware.org/binutils/docs/ld/Scripts.html>.
An obvious example of the need for a custom linker script would be
building the Linux kernel, where you need a special format for the
resulting binary that can be loaded by a bootloader.
Indeed, that's been my primary use of custom linker scripts since
1989. Various operating systems, hypervisors, and even today for
processor firmware. Mainly we used the .ld suffix for such
scripts.
partial example for a bare-metal hypervisor written in C++:
OUTPUT_FORMAT("elf64-x86-64", "elf64-x86-64", "elf64-x86-64")
OUTPUT_ARCH(i386:x86-64)
ENTRY(dvmmstart)                  /* entry-point symbol of the image */
SECTIONS
{
  . = 0xffff808000000000;         /* location counter: per-CPU data address */
  percpu.data : {
    *(percpu.data)
  }
  . = 0xffff830000100000;         /* main image is linked at this address */
  _start = .;
  . = ALIGN(16);
  _stext = .;                     /* _stext/_etext bracket the code */
  .text : {
    *(inittext)                   /* collect code sections from all inputs */
    *(.text)
    *(.text.*)
    *(.gnu.linkonce.t*)
  }
  _etext = .;
So what the hell is that? What does it mean? How am I supposed to fix it
if it goes wrong?
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Wed, 7 Feb 2024 15:02:07 -0000 (UTC), Lew Pitcher wrote:
LD_LIBRARY_PATH is not a GNUism, but part of the Unix heritage.
This is why we say “*nix”, not “Unix”. We leave the “Unix” trademark to
the lawyers and those with enough money to pay them. We just get on and
do our work on “*nix” systems.
That's why 'you' say it. Don't speak for others.
On Wed, 07 Feb 2024 20:48:56 GMT, Scott Lurndal wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Wed, 7 Feb 2024 15:02:07 -0000 (UTC), Lew Pitcher wrote:
LD_LIBRARY_PATH is not a GNUism, but part of the Unix heritage.
This is why we say “*nix”, not “Unix”. We leave the “Unix” trademark to
the lawyers and those with enough money to pay them. We just get on and
do our work on “*nix” systems.
That's why 'you' say it. Don't speak for others.
I certainly wouldn’t speak for those who weren’t even alive when I first
started using a *nix system.
I've long since used "Unix" as a generic name ...
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Wed, 07 Feb 2024 20:48:56 GMT, Scott Lurndal wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
This is why we say “*nix”, not “Unix”. We leave the “Unix” trademark
to the lawyers and those with enough money to pay them. We just get on
and do our work on “*nix” systems.
That's why 'you' say it. Don't speak for others.
I certainly wouldn’t speak for those who weren’t even alive when I first
started using a *nix system.
I doubt you'll find many of those here. I was using computers in 1974
and unix in 1979.
Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:
On 07/02/2024 15:27, Scott Lurndal wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Wed, 7 Feb 2024 02:57:39 -0000 (UTC), vallor wrote:
So what the hell is that? What does it mean? How am I supposed to fix it
if it goes wrong?
I suspect you've been on the internet long enough to have seen the
phrase RTFM...
On Wed, 07 Feb 2024 23:15:43 GMT, Scott Lurndal wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Wed, 07 Feb 2024 20:48:56 GMT, Scott Lurndal wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
This is why we say “*nix”, not “Unix”. We leave the “Unix” trademark
to the lawyers and those with enough money to pay them. We just get on
and do our work on “*nix” systems.
That's why 'you' say it. Don't speak for others.
I certainly wouldn’t speak for those who weren’t even alive when I first
started using a *nix system.
I doubt you'll find many of those here. I was using computers in 1974
and unix in 1979.
With such a long history of being so cavalier about the term, you must
have been cautioned at some point about the legal implications of such
trademark usage. It would have been mentioned in just about every AT&T
publication.
None of those ramifications matter in casual usage, such as here in this newsgroup.
On Wed, 7 Feb 2024 23:53:22 +1100
"Gary R. Schmidt" <grschmidt@acm.org> wrote:
[SNIP]
On 07/02/2024 18:17, Richard Harnden wrote:
Wasn't it SHLIB_PATH on HP/UX?
It still is. (Yes, some of us have to maintain these boxes because,
although they were all amortised a decade or two ago, someone in a
bank/taxation department/insurance company/&c knows that replacing
them will be an expensive and time consuming process. So they'll be
replaced
- after they collapse into a pile of rust - in a mad panic with
Linux boxes with something written in a mad rush in Python/PHP/Perl -
by people who don't understand the requirements, briefed by people
who don't understand the requirements - that sort of does the same
job the old machines did, if you squint really, really hard. And
/don't/ get audited by anyone competent. However, that one's
*really* unlikely. :-) )
Cheers,
Gary B-)
It does not have to be replaced with a new solution even after the
original hardware dies.
https://www.stromasys.com/solution/charon-par/
For those that are currently on the IPF variant of HP-UX, working hardware
is still easily available. However, when it isn't, I'd expect
the same company to provide an emulation solution. My theory is
that they already have it done, but as long as "real" HW is available
they are afraid to sell IPF emulators because of legal concerns.
As Janis hints at elsethread: at some point it was decided
(adjudicated?) that "Unix"
is a generic term, and UNIX(R) is the actual trademark.
So Linux is a Unix but not UNIX(R)...
(macOS, with its Darwin/FreeBSD heritage, might be UNIX(R) -- is it
certified, and do they pay the licensing fee?)
On Wed, 07 Feb 2024 11:10:09 +0000, Ben Bacarisse wrote:
It's not uncommon to have Linux installs without gcc.
The very first non-Apple PC I bought was a Shuttle small-form-factor unit that came with a copy of Mandrake 9.1 “Discovery Edition” in the box. (Go on, look up that name and version. That should give you an idea of how
long ago it was.)
On 07/02/2024 17:34, Ben Bacarisse wrote:
Ben Bacarisse <ben.usenet@bsb.me.uk> writes:
Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:
On 07/02/2024 14:09, David Brown wrote:
On 07/02/2024 10:56, Malcolm McLean wrote:
On 07/02/2024 05:41, candycanearter07 wrote:
On 2/5/24 12:13, Kaz Kylheku wrote:
On 2024-02-05, candycanearter07 <no@thanks.net> wrote:
But the tools are *still preinstalled*, so installers can definitely
rely on compiling stuff.
No, they aren't. It's common for devel tools not to be part of the base
system of a distro.
Wait really?
If you install Windows you don't get Visual Studio and you have to
install it separately. If you install Linux you get gcc and other
development tools, and I don't think there's even a way of setting up
the install to say you don't want them.
There are several hundred Linux distributions, not including the niche
ones or outdated ones. Have you tried them all?
Most "normal user" oriented distros do not have gcc or related tools
installed by default, nor do most server systems, or firewall systems, or
small installations. Installing the tools is usually very simple
("apt-get install build-essential", or equivalent), but they are not
included by default in the installation.
I've installed Linux several times on a desktop machine. I can never
remember being given an option to not install gcc.
Which is beside the point. You said you "get gcc and other development
tools". Which distribution(s) did you install?
Did you reply via email by accident, or would you rather not answer
here?
Me. Yes sorry.
I've lost Google Groups. Thunderbird has a "reply" button which means
"email" and it's too easy to press "reply" if you're not terribly used to
it. I did that to KT as well and he wondered why I was replying via
email.
On Mon, 29 Jan 2024 16:03:45 +0000, bart wrote:
By 'Build System', I mean a convenient or automatic way to tell a
compiler which source and library files comprise a project, one that
doesn't involve extra dependencies.
This proposal comes under 'convenient' rather than 'automatic'. (I did
try an automatic scheme in the past, but that only worked for specially
written projects.)
We already had some similar topics here. I think I have suggested
#pragma source.
I am using a build system that is a C program.
This is the "build" file I use to build cake. It works on
Windows and Linux, gcc etc.
https://github.com/thradams/cake/blob/main/src/build.c
I would call "#pragma module" automatic source discovery.
We can break the build into sub-problems; one of them is source
code discovery.
The build I am using has a manual list of sources.
#define SOURCE_FILES \
" file1.c " \
" file2.c " \
...
The other problems are, for instance, settings like flags etc.
I also have "#pragma directory" to say where the include dirs are.
I think everything should be controlled with pragmas; then we have a
choice to use a separate file, for instance a file with just
#pragma module, or to include #pragma module inside normal source code.
I am not sure you realized this, but it is possible to create a tool,
with a C preprocessor, that can scan source and discover all the
sources automatically.
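For illustration, a minimal sketch of the "build file as a C program"
idea (hypothetical file names; not cake's actual build.c):
#include <stdio.h>
#include <stdlib.h>

/* Manual list of sources, as described above; placeholder names. */
#define SOURCE_FILES " file1.c" " file2.c"

int main(void)
{
    /* Compose one compiler command line and hand it to the shell. */
    const char *cmd = "cc" SOURCE_FILES " -o app";
    puts(cmd);
    return system(cmd) == 0 ? 0 : 1;  /* nonzero if the compile fails */
}
You bootstrap it once with "cc build.c -o build" and from then on just
run "./build".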
On 08/02/2024 11:55, Ben Bacarisse wrote:
An easy mistake to make. So what Linux distributions did you install
that gave you gcc by default? The ones I've used, don't (though it's
trivial to add build tools later).
Whilst I've installed Linux many times, the names of the distributions
aren't very meaningful to me, the machines are mostly long since
discarded, and I couldn't rightly tell you. But one name I remember is
"Ubuntu". You take what is usually an old machine which has come to the
end of its useful life as a Windows computer, but still has a bit of kick
in it and can become a Linux box. So I try to go for a lightweight
distribution which won't stress it out. It chugs through and gives an
install. And I don't think there is any tick box or option which says
"don't install gcc". Now other people have said I'm wrong about this,
and of course as a programmer I need gcc and wouldn't be interested in
that tick box anyway. But I'm pretty sure you do get gcc by default and
if you had to take special action I would have remembered it.
Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:
Whilst I've installed Linux many times the names of the distributions
aren't very meaningful to me, the machines are mostly long since
discarded, and I couldn't rightly tell you. But one name I remember
is "Ubuntu".
Ubuntu does not, as far as I can tell, install gcc by default.
On 08/02/2024 11:55, Ben Bacarisse wrote:
Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:
An easy mistake to make. So what Linux distributions did you install
that gave you gcc by default? The ones I've used, don't (though it's
trivial to add build tools later).
Whilst I've installed Linux many times, the names of the distributions
aren't very meaningful to me, the machines are mostly long since
discarded, and I couldn't rightly tell you. But one name I remember is
"Ubuntu".
But I'm pretty sure you do
get gcc by default and if you had to take special action I would have
remembered it.
Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:
But I'm pretty sure you do
get gcc by default and if you had to take special action I would have
remembered it.
You remember that gcc was installed by default often enough that you
were prepared to claim it as a general rule about Linux, but you can't remember any of the distributions that did it... Oh well, we'll never
know now.
On 08/02/2024 16:50, Ben Bacarisse wrote:
Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:
But I'm pretty sure you do
get gcc by default and if you had to take special action I would have
remembered it.
You remember that gcc was installed by default often enough that you
were prepared to claim it as a general rule about Linux, but you can't
remember any of the distributions that did it... Oh well, we'll never
know now.
You're being unfair.
Let's say I've used a dozen versions of prepackaged Linux (e.g. as a
monolithic image, or already installed), which have always had gcc, and
another dozen that I've had to install myself.
If those asked whether I wanted gcc added, then I really can't remember.
Usually there were 1000 packages to install; you just let it get on with
it and install the lot.
BTW if gcc /isn't/ installed, do you still get a bunch of standard C
headers in /usr/include? If so, what do you have to select to not
install them?
On 08/02/2024 15:35, David Brown wrote:
On 08/02/2024 13:32, Malcolm McLean wrote:Baby X was developed for Linux. I've used it seriously and not just
On 08/02/2024 11:55, Ben Bacarisse wrote:
An easy mistake to make. So what Linux distributions did you install >>>> that gave you gcc by default? The ones I've used, don't (though it's >>>> trivial to add build tools later).Whilst I've installed Linux many times the names of the distributions
aren't very meaningful to me, the machines are mostly long since
discarded, and I couldn't rightly tell you. But one name I remember
is "Ubuntu". You take what is usually an old machine which has come
to the end of its useful life as Windows computer, but still has a
bit of kick in it and can become a Linux box. So I try to go for a
lightweight distribution which won't stress it out. It chugs through
and gives an install. And don't think there is any tick box or option
which says "don't install gcc". Now other people have said I'm wrong
about this, and of course as programmer I need gcc and wouldn't be
interested in that tick box anyway. But I'm pretty sure you do get
gcc by default and if you had to take special action I would have
remembered it.
If you install Ubuntu desktop, then it might have gcc by default (it's
a long time since I've used "pure" Ubuntu). Other distributions may
be different.
People who use Linux as their preferred system usually pick their
distributions with a bit of care and thought, and use it on
appropriate computers. While it is certainly true that an old and
outdated Windows machine can be given new life when the Windows
installation is scraped and replaced by Linux, for developers using
Linux it is normally an active choice. The last three main
development machines I have had at work have never had Windows on them
- they were bought for Linux, and used only with Linux.
Basically, what you are saying is that your entire Linux experience is
a few installations long ago, to briefly play around with it on
throw-away machines. And you think that is sufficient to insist that
/you/ know details when actual long-term Linux users tell you
differently?
played around. But whilst I've been given powerful Linux machines to use
at university, I've never felt the need for a powerful Linux system for
hobby use. But you can run a lot of extremely interesting programs on
fairly low powered machines.
I don't often install Linux. Usually only when I retire a Windows
machine, though I have tried virtual Linux installations under Windows.
Sadly this doesn't work well. I don't have no experience at all. Because
of the realities of UK economic life, whilst I can easily afford to buy
a second computer I can't easily afford to buy a bigger house, and I've
only got room for one computer, and I find I can't work on laptops. So
whilst I have a Linux machine, it's not currently set up and usable.
So what Linux distributions did you install that gave you gcc by
default?
$ rpm -q -f /usr/include/stdio.h
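On an rpm-based distro, that command answers the question by naming the
package that owns the header (some flavour of glibc development
package). The equivalent query for anyone on Debian or Ubuntu (my
addition, not a command anyone ran upthread) is:

    $ dpkg -S /usr/include/stdio.h

which typically reports libc6-dev. So the standard C headers belong to
the C library's development package, and that package can be absent even
on a system that otherwise runs C programs fine.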
bart <bc@freeuk.com> writes:
BTW if gcc /isn't/ installed, do you still get a bunch of standard C
headers in /usr/include? If so, what do you have to select to not install
them?
I can't say what happens without specifics. There are hundreds of Linux distributions.
This is exactly why I was curious about what prompted Malcolm's
confident statement about what comes with "Linux" -- it runs contrary to
my limited experience.
Kaz Kylheku <433-929-6894@kylheku.com> writes:
On 2024-02-05, Malcolm McLean <malcolm.arthur.mclean@gmail.com> wrote:
[...]
The Glibc shared library loading mechanism doesn't implement the nice
strategy of finding libraries in the same directory as the executable.
Sure it does, if you tell it to. viz. LD_LIBRARY_PATH.
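For the record, two concrete ways to get that Windows-style "look next
to the executable" behaviour with glibc. Both are standard mechanisms,
though the exact command lines here are my own illustration:

    # at run time: put the program's directory on the search path
    $ LD_LIBRARY_PATH="$PWD" ./myprog

    # at link time: bake in an $ORIGIN-relative rpath
    $ gcc main.c -o myprog -L. -lmylib -Wl,-rpath,'$ORIGIN'

With '$ORIGIN' in the rpath, the dynamic linker resolves shared
libraries relative to the directory containing the executable itself,
wherever that directory is later moved.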
On 09/02/2024 00:58, Ben Bacarisse wrote:
bart <bc@freeuk.com> writes:
BTW if gcc /isn't/ installed, do you still get a bunch of standard C
headers in /usr/include? If so, what do you have to select to not install
them?
I can't say what happens without specifics. There are hundreds of Linux
distributions.
This is exactly why I was curious about what prompted Malcolm's
confident statement about what comes with "Linux" -- it runs contrary to
my limited experience.
Not to mine. I thought the big deal with Linux compared with Windows was
that it came with compilers, headers and libraries at least for C.
Now that advantage may be just by chance?
On Tue, 06 Feb 2024 20:32:49 +0000, Lawrence D'Oliveiro wrote:
On Tue, 06 Feb 2024 19:20:06 GMT, Scott Lurndal wrote:
LD_LIBRARY_PATH isn't a distro thing, its a shell thing
interpreted by the dynamic linker. The dynamic linker has
a set of default paths that it uses, set by the distro,
which can be overridden in LD_LIBRARY_PATH by each user.
It’s a GNU thing, I think.
It's a UNIX thing. GNU supports it, as it supports other
UNIX requirements.
bart <bc@freeuk.com> writes:
On 09/02/2024 00:58, Ben Bacarisse wrote:
bart <bc@freeuk.com> writes:
BTW if gcc /isn't/ installed, do you still get a bunch of standard C
headers in /usr/include? If so, what do you have to select to not install
them?
I can't say what happens without specifics. There are hundreds of Linux
distributions.
This is exactly why I was curious about what prompted Malcolm's
confident statement about what comes with "Linux" -- it runs contrary to
my limited experience.
Not to mine.
I thought you said you can't remember either? Frankly, I don't think
you can recall enough of your experience to be able to say this
honestly.
But if you can recall, then tell me -- what distributions
install gcc and the other development tools by default? Malcolm can't
help, but maybe you can.
On 05/02/2024 01:07, Malcolm McLean wrote:
On 04/02/2024 22:46, Lawrence D'Oliveiro wrote:
On Sun, 4 Feb 2024 14:01:08 +0000, bart wrote:
But it does seem as though Unix was a breeding ground for multitudinous
developer tools. Plus there was little demarcation between user
commands, C development tools, C libraries and OS.
Somebody who's used to that environment is surely going to have trouble
on an OS like MSDOS or Windows where they have to start from nothing.
Even if most of the tools are now free.
Yet it seems like even someone like you, who is supposed to be “used to”
Windows rather than *nix, still has the same trouble. So maybe it’s not
about being “used to” *nix at all; there really is something inherent in
the fundamental design of that environment that makes development work
easier.
On Windows you can't assume that the end user will be interested in
development or have any development tools available. Or that he'll be
able to do anything other than the most basic installation. It's a
consumer platform.
It /is/ a consumer platform, yes. And because it has no standard ways
to build software, and no one (approximately) using it wants to build
software on it, the norm is to distribute code in binary form for
Windows. That works out fine for almost all Windows users. That
includes libraries - even C programmers on Windows don't want to build
"libjpeg" or whatever, they want a DLL.
And thus there is much less effort put into making projects easy to
build on Windows. People on Windows fall mostly into two categories -
those that neither know nor care about building software and want
ready-to-use binaries (that's almost all of them), and people who do
development work and are willing and able to invest time and effort
reading the readmes and install.txt files, looking at the structure of
the code, running the makefiles or CMakes, importing the project into
their favourite IDE, and whatever else.
It's not that Linux software developers go out of their way to annoy
Windows developers (well, /some/ do, but not many). But on Linux, and
widening to other modern *nix systems, there are standard ways to build
software. You know the people building it will have make, and gcc (or a
compatible compiler with many of the same extensions and flags, like
clang or icc), and development versions of countless libraries either
installed or a quick apt-get away. On Windows, however, they might have
MSVC, or cygwin, or mingw64, or TDM gcc, or lccwin, or tcc, or Borland
C++ builder. They might have a "make", but it could be MS's more
limited "nmake" version.
On 09/02/2024 01:30, Ben Bacarisse wrote:
[...]
I thought you said you can't remember either? Frankly, I don't think
you can recall enough of your experience to be able to say this
honestly.
"In MY limited experience" - the bits I can remember - whenever I needed
to compile any C code on Linux, then gcc was always there.
But if you can recall, then tell me -- what distributions
install gcc and the other development tools by default? Malcolm can't
help, but maybe you can.
All the various Linuxes I used on RPi1 and RPi4, 32-bit and the odd
64-bit, had gcc. I know because that was the primary reason for using
those boards.
Those OSes were downloaded in one lump, or sometimes came as plug-in SD
cards.
The same for all the various Linuxes I used on my PC via VirtualBox.
The same with the pre-installed OS on a Linux notebook I once tried.
Further back, I can't remember if the Linuxes I used to install on my PC
via CDs, which were done a package at a time, definitely had gcc, since
I can't remember if I ever tried to compile C code on them. (I had
enough trouble just doing the basics, like a working screen and
keyboard.)
But don't ask me exactly which distributions they were; to me Linux is
Linux and they are all a blur.
So, /this/ is my limited experience. Why are you trying to accuse me of
pulling a fast one?
On Fri, 9 Feb 2024 01:14:57 +0000, bart wrote:
I thought the big deal with Linux compared with Windows was
that it came with compilers, headers and libraries at least for C.
Now that advantage may be just by chance?
Even when you add these for Windows, you do seem to have trouble building
C programs though, don’t you, as evidenced by your past complaints? So clearly there must be a bit more to it than that.
On 09/02/2024 02:07, Lawrence D'Oliveiro wrote:
On Fri, 9 Feb 2024 01:14:57 +0000, bart wrote:
I thought the big deal with Linux compared with Windows was
that it came with compilers, headers and libraries at least for C.
Now that advantage may be just by chance?
Even when you add these for Windows, you do seem to have trouble building
C programs though, don’t you, as evidenced by your past complaints? So
clearly there must be a bit more to it than that.
Linux (by which I mean all such Unix-related OSes) is a C machine.
Not only is it all implemented in C, but it won't let you forget that,
with little demarcation between the OS, C libraries, C headers, C
compilers, and a myriad assorted routines that are all amazingly chummy.
Some people here seem to think that POSIX is an essential part of C, yet
windows.h is not considered part of C on Windows.
On 2024-02-09, bart <bc@freeuk.com> wrote:
On 09/02/2024 02:07, Lawrence D'Oliveiro wrote:
On Fri, 9 Feb 2024 01:14:57 +0000, bart wrote:
I thought the big deal with Linux compared with Windows was
that it came with compilers, headers and libraries at least for C.
Now that advantage may be just by chance?
Even when you add these for Windows, you do seem to have trouble building >>> C programs though, don’t you, as evidenced by your past complaints? So >>> clearly there must be a bit more to it than that.
Linux (by which I mean all such Unix-related OSes) is a C machine.
Not only is it all implemented in C, but it won't let you forget that,
with little demarcation between the OS, C libraries, C headers, C
compilers, and a myriad assorted routines that are all amazingly chummy.
Windows is built on C: APIs expressed in C, with function prototypes
and data structures.
I have worked in several Windows shops as a C++ developer. The culture
was just as steeped in C and C++ as development on Unix.
The feeling
was almost as if Microsoft had invented C and Unix didn't exist.
Modern Windows now even has the equivalent of a C library: the UCRT (universal C run time), a public library, okay to use by applications,
which has your malloc, printf and all that.
On 09/02/2024 17:13, Kaz Kylheku wrote:
The feeling
was almost as if Microsoft had invented C and Unix didn't exist.
There's Windows. There's Linux. And then there are MS development tools.
People tend to mix up those tools with Windows.
Note that MS also have
languages such as VB, F# and C#, all working on top of CLI/CIL
(whichever it is, perhaps both), and something called .NET. I'm not sure
they even acknowledge the existence of C anymore.
Modern Windows now even has the equivalent of a C library: the UCRT
(universal C run time), a public library, okay to use by applications,
which has your malloc, printf and all that.
It's now called MSVCRT.DLL.
I've used that since the 90s, simply because
it was simpler than WinAPI. I was only vaguely aware then that it was
also to do with C.
On 2024-02-09, bart <bc@freeuk.com> wrote:
On 09/02/2024 17:13, Kaz Kylheku wrote:
The feeling
was almost as if Microsoft had invented C and Unix didn't exist.
There's Windows. There's Linux. And then there are MS development tools.
People tend to mix up those tools with Windows.
Yes, just like people tend to post to newsgroups saying that
everything screams "C" in Linux.
Note that MS also have
languages such as VB, F# and C#, all working on top of CLI/CIL
(whichever it is, perhaps both), and something called .NET. I'm not sure
they even acknowledge the existence of C anymore.
Modern Windows now even has the equivalent of a C library: the UCRT
(universal C run time), a public library, okay to use by applications,
which has your malloc, printf and all that.
It's called now MSVCRT.DLL.
Umm, no.
I've used that since the 90s, simply because
it was simpler than WinAPI. I was only vaguely aware then that it was
also to do with C.
You didn't use /that/ since the 90s. UCRT is a new thing that ships with
Windows 10, and is available as an add-on for as far back as Windows 7,
and that's it.
MSVCRT.DLL is not documented for public use; when you link to it,
you're sticking a fork into the proverbial toaster. UCRT is different.
Then [Intel] decided to withdraw it; you couldn't find binaries
anywhere, although I had copies. Its replacement was buried inside a
massive 75MB developer's package (at a time when modems worked at
14.4Kbaud), and I think had to be built from source.
I remember that it was totally impractical and highly inconvenient.
And thus there is much less effort put into making projects easy to
build on Windows.
Windows works on binaries. There is a format called 'DLL' that will work
on any Windows OS and for any language that has a suitable FFI.
bart <bc@freeuk.com> writes:
On 09/02/2024 02:07, Lawrence D'Oliveiro wrote:
On Fri, 9 Feb 2024 01:14:57 +0000, bart wrote:
I thought the big deal with Linux compared with Windows was
that it came with compilers, headers and libraries at least for C.
Now that advantage may be just by chance?
Even when you add these for Windows, you do seem to have trouble building
C programs though, don’t you, as evidenced by your past complaints? So
clearly there must be a bit more to it than that.
Linux (by which I mean all such Unix-related OSes) is a C machine.
The linux operating system is written in a mix of assembler, C (and now
Rust) and supports a large and varied set of processor architectures.
The linux desktop and server applications are written in a mix of
languages, including C, C++, Python, Java, APL, Ada, COBOL, Fortran,
Haskell, Pascal, C#, D, and a host of other languages for which linux
development environments exist.
Some people here seem to think that POSIX is an essential part of C,
I don't recall anyone other than you thinking that.
Some people here seem to think that POSIX is an essential part of C, yet
windows.h is not considered part of C on Windows.
That looks quite chummy to me, and even nepotic.
It's mentioned a LOT. Half the open source programs I try seem to use
calls like 'open' instead of 'fopen', suggesting that the author seemed
to think such a function is standard C, or that it can be used as though
it was standard. (There is a whole list of such headers and functions at
https://en.wikipedia.org/wiki/C_POSIX_library.)
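The difference is easy to show (a minimal sketch; the file name is
invented). fopen is ISO C and exists everywhere; open is POSIX, so code
using it compiles only where the POSIX headers are provided:

    #include <stdio.h>     /* fopen, fclose: ISO C */
    #include <fcntl.h>     /* open: POSIX, not ISO C */
    #include <unistd.h>    /* close: POSIX, not ISO C */

    int main(void)
    {
        FILE *f = fopen("data.txt", "r");     /* portable standard C */
        int fd = open("data.txt", O_RDONLY);  /* POSIX-only call */
        if (f) fclose(f);
        if (fd >= 0) close(fd);
        return 0;
    }

Under MSVC the two POSIX includes simply don't exist, which is exactly
the build failure being complained about.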
MSVCRT.DLL is not documented for public use; when you link to it,
you're sticking a fork into the proverbial toaster. UCRT is different.
What exactly do you mean by UCRT; ucrtbase.dll?
That's missing a few useful things, like '_getmainargs' (used to get
argn, argv for main()), and obscure functions like 'printf'.
Meanwhile, if I look at programs such as gcc.exe, as.exe, ld.exe, not
only do THEY import msvcrt.dll, but the EXEs produced by gcc.exe do so too.
So they didn't get the memo.
On 2024-02-09, bart <bc@freeuk.com> wrote:
MSVCRT.DLL is not documented for public use; when you link to it,
you're sticking a fork into the proverbial toaster. UCRT is different.
What exactly to you mean by UCRT; ucrtbase.dll?
That's missing a few useful things, like '_getmainargs' (used to get
argn, argv for main()), and obscure functions like 'printf'.
I believe printf is in there.
_getmainargs isn't; that's in a VC run time library.
Meanwhile, if I look at programs such as gcc.exe, as.exe, ld.exe, not
only do THEY import msvcrt.dll, but the EXEs produced by gcc.exe do so too.
Umm, no; you must be talking specifically about the MinGW ones.
So they didn't get the memo.
They got the memo. The issue is that even though MSVCRT.DLL is
undocumented, it constitutes a "system library". This is important.
The GNU General Public License prohibits programs from being linked to
proprietary code --- but it has an exception for system libraries
(libraries that are part of the target platform where the program runs).
Using MSVCRT.DLL is like sticking a fork in the toaster, but all those programs being linked to MSVCRT.DLL means the GPL isn't violated.
Compilers under Cygwin don't link to MSVCRT.DLL --- including the ones
in the Cygwin MingW package. (Yes, Cygwin has a package of MinGW
compilers. If you have Cygwin, you just install that, and then you can
build MinGW programs. The built programs probably still link to
MSVCRT.DLL as far as I know. Cygwin itself uses this MinGW compiler
package for compiling some of its components, like the setup.exe program
and I think the cygwin1.dll also.)
On 09/02/2024 21:56, Kaz Kylheku wrote:
On 2024-02-09, bart <bc@freeuk.com> wrote:
MSVCRT.DLL is not documented for public use; when you link to it,
you're sticking a fork into the proverbial toaster. UCRT is different. >>>>
What exactly to you mean by UCRT; ucrtbase.dll?
That's missing a few useful things, like '_getmainargs' (used to get
argn, argv for main()), and obscure functions like 'printf'.
I believe printf is in there.
Not under my ucrtbase.dll if that's the right file. If it was there, it
would go somewhere in here:
2332 00031F90 204688 Fun pow
2333 00033630 210480 Fun powf
2334 00058740 362304 Fun putc
2335 00080BE0 527328 Fun putchar
2336 00059660 366176 Fun puts
I suspect this is used by MS programs which may have their own wrappers around 'printf'.
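That guess matches what the public UCRT headers show (this explanation
is mine, not something established upthread): ucrtbase.dll does not
export printf at all. Instead the UCRT's stdio.h defines printf as an
inline function that forwards to an exported workhorse,
__stdio_common_vfprintf. The shape of it, heavily simplified:

    /* Sketch of the UCRT approach. The real header forwards to the
       exported __stdio_common_vfprintf() with an options word and a
       locale; portable vfprintf stands in for that workhorse here. */
    #include <stdarg.h>
    #include <stdio.h>

    static inline int sketch_printf(const char *fmt, ...)
    {
        va_list ap;
        va_start(ap, fmt);
        int n = vfprintf(stdout, fmt, ap);  /* forward to the export */
        va_end(ap);
        return n;
    }

Which would explain why printf is missing from the export dump above:
a UCRT-linked EXE imports __stdio_common_vfprintf rather than printf.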
only do THEY import msvcrt.dll, but the EXEs produced by gcc.exe do so too.
Umm, no; you must be talking specifically about the MinGW ones.
I'm talking about lots of binaries, for example:
raylib.dll
opengl.dll
sdl2.dll
sqlite3_32.dll
s7.exe
tcc.exe
nim.exe
nasm.exe
So they didn't get the memo.
They got the memo. The issue is that even though MSVCRT.DLL is
undocumented, it constitutes a "system library". This is important.
If they got rid of it, half the programs that run under Windows would
stop working.
I've no idea what the point of CYGWIN is; never have had. What
does it bring to the table? Presumably to make some programs that
originate on Linux feel more at home, instead of making the effort to
make them more portable.
On 2024-02-09, bart <bc@freeuk.com> wrote:
I've no idea what the point of CYGWIN is; never have had. What
does it bring to the table? Presumably to make some programs that
originate on Linux feel more at home, instead of making the effort to
make them more portable.
The effort is significant. Many things have to be coded twice.
bart <bc@freeuk.com> writes:
On 09/02/2024 21:56, Kaz Kylheku wrote:
[...]
So they didn't get the memo.
They got the memo. The issue is that even though MSVCRT.DLL is
undocumented, it constitutes a "system library". This is important.
If they got rid of it, half the programs that run under Windows would
stop working.
Who suggested getting rid of it?
[...]
I've no idea what the point of CYGWIN is; never have had.
Are you asking?
What does it bring to the table? Presumably to make some programs that
originate on Linux feel more at home, instead of making the effort to
make them more portable.
It provides an environment, running under Windows, that resembles a
typical Linux desktop environment. I use it every day myself, because
that's a valuable thing for me. If it's not valuable for you, that's
fine. (I also use WSL for some things.)
I use a lot of programs that happen to rely on a POSIX interface.
Cygwin lets those programs run under Windows. Modifying those
programs to work more directly under Windows would be a tremendous
amount of work that nobody is going to do.
On 09/02/2024 23:41, Keith Thompson wrote:
I use a lot of programs that happen to rely on a POSIX interface.
Cygwin lets those programs run under Windows.
Run or build? If you have a binary, then having a way to run that on a different OS under some emulation layer is fair enough.
Some people run Windows programs under Linux, or even Macs (although
I've never had much luck with 'wine' myself).
On 09/02/2024 23:12, Kaz Kylheku wrote:
On 2024-02-09, bart <bc@freeuk.com> wrote:
I've no idea what the point of CYGWIN is; never have had. What
does it bring to the table? Presumably to make some programs that
originate on Linux feel more at home, instead of making the effort to
make them more portable.
The effort is significant. Many things have to be coded twice.
Well, you need to support both OSes.
My stuff uses one module in the language library which is OS-specific.
At one point I had three versions for three OS targets. It is
effectively a mini cross-platform wrapper around some OS functions.
So for example, to get the address of a function in a shared library
given an instance handle to the library, the Windows version is:
export func os_getdllprocaddr(int hinst, ichar name)ref void=
GetProcAddress(cast(hinst), name)
end
The Linux version is this:
export func os_getdllprocaddr(int hlib, ichar name)ref void=
dlsym(cast(int(hlib)), name)
end
As I understand your view, you want the Linux program to just call
dlsym(), and need all these subsystems on Windows to make it appear as
though that was natively available.
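For comparison, the same wrapper in C under the usual #ifdef (my sketch,
not bart's actual code; only the function name mirrors his):

    #ifdef _WIN32
    #include <windows.h>
    #else
    #include <dlfcn.h>   /* may need -ldl when linking with older glibc */
    #endif

    /* Look up a named function in an already-loaded shared library. */
    void *os_getdllprocaddr(void *hlib, const char *name)
    {
    #ifdef _WIN32
        return (void *)GetProcAddress((HMODULE)hlib, name);
    #else
        return dlsym(hlib, name);
    #endif
    }

Wrapping at this level keeps the per-OS difference down to a few lines
per function, rather than requiring a whole emulation layer on one side.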
Cygwin ... [is] a POSIX environment running under Windows.
On Fri, 09 Feb 2024 16:33:04 -0800, Keith Thompson wrote:
Cygwin ... [is] a POSIX environment running under Windows.
And in some ways, more capable than Microsoft’s native-based efforts along the same lines.
On Sat, 10 Feb 2024 02:26:55 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Fri, 09 Feb 2024 16:33:04 -0800, Keith Thompson wrote:
Cygwin ... [is] a POSIX environment running under Windows.
And in some ways, more capable than Microsoft’s native-based efforts
along the same lines.
Many basic things, like file I/O and creation of processes, are several
times slower under cygwin than under native Windows.
By 'Build System', I mean a convenient or automatic way to tell a
compiler which source and library files comprise a project, one that
doesn't involve extra dependencies.
This proposal comes under 'convenient' rather than 'automatic'. (I did
try an automatic scheme in the past, but that only worked for specially written projects.)