• Experimental C Build System

    From bart@21:1/5 to All on Mon Jan 29 16:03:45 2024
    By 'Build System', I mean a convenient or automatic way to tell a
    compiler which source and library files comprise a project, one that
    doesn't involve extra dependencies.

    This proposal comes under 'convenient' rather than 'automatic'. (I did
    try an automatic scheme in the past, but that only worked for specially
    written projects.)

    Here, the method is straightforward: the necessary info is simply listed
    in the designated lead module, within a set of #pragma lines.

    For my go-to small project demo, which comprises the three source
    files cipher.c, hmac.c and sha2.c, there are two ways to do it:

    (1) Add the info to top of the lead module cipher.c:

    #pragma module "hmac.c"
    #pragma module "sha2.c"
    ....

    I wasn't intending to actually implement it, but it didn't take long,
    and it seems to work:

    c:\cx>mcc cipher
    Compiling cipher.c to cipher.exe
    Adding module hmac.c
    Adding module sha2.c

    (2) Create an extra lead module and add it to the project.

    This allows the scheme to be superimposed on an existing codebase
    without having to modify it. If I try that on the above cipher project
    in a new module demo.c, it will contain:

    #pragma module "cipher.c"
    #pragma module "hmac.c"
    #pragma module "sha2.c"

    It works like this (in the real version those "Adding" lines will be
    silent):

    c:\cx>mcc demo
    Compiling demo.c to demo.exe
    Adding module cipher.c
    Adding module hmac.c
    Adding module sha2.c

    To get the original cipher.exe output, an override option is needed;
    but see below.

    Method (2) is attractive as it provides a means to easily set up
    different configurations of an application by mixing and matching
    modules.

    Pragma Directives
    -----------------

    These are the ones I had in mind:

    module "file.c" As used above. Possibly, wildcards can work here

    import "file.c Incorporate a separate project which has its own
    set of pragma directives

    link "file.dll" Any binary libraries

    header "file.h" Specify a program-wide shared header

    Possibly the 'import' one can be dispensed with; it is simple enough
    to manually copy and paste the necessary info. However, that means it
    is listed in more than one place, and the original can change.

    The idea of 'header' is to specify big headers (windows.h, sdl2.h, etc)
    which are independent of anything else, which are then processed just
    once in the compiler, rather than once for each module that includes
    them. The usual '#include's are still needed.
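    For instance, a lead module might pair the two like this (just a
    sketch of the intent; untested):

        #pragma header "windows.h"    /* processed once for the program */

        #include <windows.h>          /* still needed in each module    */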

    (The intention is not to create a whole-program compiler, or to
    introduce a module scheme, although this provides some of the benefits.
    The C language is unchanged.)

    Possibly, there can be a directive called 'name' to specify an
    executable file name.
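    Putting the proposed directives together, a hypothetical lead module
    (all file names invented) might start like this:

        #pragma name   "myapp.exe"
        #pragma module "main.c"
        #pragma module "util.c"
        #pragma link   "sdl2.dll"
        #pragma header "windows.h"
        #pragma import "sublib/sublib.c"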

    Working with Other Compilers
    ----------------------------

    Clearly, my scheme will only work with a suitably modified compiler.
    Without that, I considered doing something like this, adding this
    block to my example from (2):

    #pragma module "cipher.c"
    #pragma module "hmac.c"
    #pragma module "sha2.c"

    #ifndef __MCC__
    #include "runcc.c"

    int main(void) {
        runcc(__FILE__);
    }
    #endif

    When compiled with a compiler that is not MCC, this builds a small
    program (here still called demo.exe), which calls a function to read
    this file, process the relevant #pragma lines, and use that info to
    invoke a conventional compiler.

    I haven't tested it, but it would mean a two-step process that looks
    something like this (possibly, it can pick up the name of the compiler
    that /is/ used, and invoke that on the actual program):

    c:\cx\tcc demo.c
    c:\cx\demo
    ... invoke tcc to build cipher.c hmac.c sha2.c ...

    (Tcc of course also has the -run option to save that second line)

    For this to work, the pragma lines must be cleanly written: the
    runcc() function only does basic string processing; it is not a C
    compiler.
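    A minimal sketch of the sort of string processing runcc() might do
    (an illustration only; the hard-coded compiler and output name are
    placeholders):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        void runcc(const char* leadfile) {
            char cmd[4096] = "tcc -o demo.exe";   /* assumed defaults */
            char line[512], name[256];
            FILE* f = fopen(leadfile, "r");
            if (f == NULL) { perror(leadfile); exit(1); }

            /* collect file names from cleanly written pragma lines */
            while (fgets(line, sizeof line, f)) {
                if (sscanf(line, " #pragma module \"%255[^\"]\"", name) == 1) {
                    strcat(cmd, " ");
                    strcat(cmd, name);
                }
            }
            fclose(f);

            printf("Invoking compiler: %s\n", cmd);
            system(cmd);                          /* hand over to tcc */
        }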


    Using a Makefile
    ----------------

    One use-case for this would be if /I/ supplied a multi-module C program,
    or packaged someone else's.

    But people are mad about makefiles so, sure, I can also supply a
    two-line makefile to do the above.
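    For the cipher demo it might be no more than this (assuming mcc; the
    recipe line tab-indented as make requires):

        cipher.exe: cipher.c hmac.c sha2.c
                mcc cipher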

    Dependencies and Incremental Compilation
    ----------------------------------------

    This project is not about that, and is for cases where compiling all
    sources in one go is viable, or where a one-off build time is not relevant.

    That can mean when using a fast compiler, and/or when the scale of the
    project allows.

    The 'header' directive will also help in cases where the application
    itself is small, but has dependencies on large, complex headers. (I
    haven't quite figured out how it might work though.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to bart on Tue Jan 30 00:57:07 2024
    On Mon, 29 Jan 2024 16:03:45 +0000, bart wrote:

    By 'Build System', I mean a convenient or automatic way to tell a
    compiler which source and library files comprise a project, one that
    doesn't involve extra dependencies.

    If it only works for C code, then that is going to limit its usefulness in today’s multilingual world.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Lawrence D'Oliveiro on Tue Jan 30 01:45:43 2024
    On 30/01/2024 00:57, Lawrence D'Oliveiro wrote:
    On Mon, 29 Jan 2024 16:03:45 +0000, bart wrote:

    By 'Build System', I mean a convenient or automatic way to tell a
    compiler which source and library files comprise a project, one that
    doesn't involve extra dependencies.

    If it only works for C code, then that is going to limit its usefulness in today’s multilingual world.

    Languages these days tend to have module schemes and built-in means of compiling assemblies of modules.

    C doesn't.

    The proposal would allow a project to be built using:

    cc file.c

    instead of cc file.c file2.c .... lib1.dll lib2.dll ...,

    or instead of having to provide a makefile or an @ filelist.

    That is a significant advance on what C compilers typically do.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Chris M. Thomasson on Tue Jan 30 09:06:43 2024
    On 30/01/2024 02:38, Chris M. Thomasson wrote:
    On 1/29/2024 4:57 PM, Lawrence D'Oliveiro wrote:
    On Mon, 29 Jan 2024 16:03:45 +0000, bart wrote:

    By 'Build System', I mean a convenient or automatic way to tell a
    compiler which source and library files comprise a project, one that
    doesn't involve extra dependencies.

    If it only works for C code, then that is going to limit its
    usefulness in
    today’s multilingual world.

    Huh?

    I assume he means it's common to use multiple programming languages,
    rather than multiple human languages. (The latter may also be true, but
    it's the former that is relevant.)

    For my own use at least, he's right. His system is aimed at being
    simpler than make for C-only projects with limited and straightforward
    build requirements. That's fine for such projects, and if that suits
    his needs or the needs of others, great. But it would not cover more
    than a tiny proportion of my projects over the decades - at least not
    without extra help (extra commands, bash/bat files, etc.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to bart on Tue Jan 30 09:17:51 2024
    On 30/01/2024 02:45, bart wrote:
    On 30/01/2024 00:57, Lawrence D'Oliveiro wrote:
    On Mon, 29 Jan 2024 16:03:45 +0000, bart wrote:

    By 'Build System', I mean a convenient or automatic way to tell a
    compiler which source and library files comprise a project, one that
    doesn't involve extra dependencies.

    If it only works for C code, then that is going to limit its
    usefulness in
    today’s multilingual world.

    Languages these days tend to have module schemes and built-in means of compiling assemblies of modules.

    C doesn't.

    The proposal would allow a project to be built using:

       cc file.c

    instead of cc file.c file2.c .... lib1.dll lib2.dll ...,

    or instead of having to provide a makefile or an @ filelist.

    That is a significant advance on what C compilers typically do.

    You are absolutely right that C does not have any real kind of module
    system, and that can be a big limitation compared to other languages.
    However, I don't think the build system is where the lack of modules is
    an issue - it is the scaling of namespaces and identifier clashes that
    are the key challenge for large C projects.

    Building is already solved - "make" handles everything from tiny
    projects to huge projects. When "make" isn't suitable, you need /more/,
    not less - build server support, automated build and test systems, etc.
    And for users who like simpler things and have simpler projects, IDE's
    are almost certainly a better option and will handle project builds.

    I don't doubt that your build system is simpler and easier for the type
    of project for which it can work - but I doubt that there are many
    people who work with such limited scope projects and who don't already
    have a build method that works for their needs. Still, if it is useful
    for you, and useful for some other people, then that makes it useful.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to David Brown on Tue Jan 30 12:09:01 2024
    On 30/01/2024 08:17, David Brown wrote:
    On 30/01/2024 02:45, bart wrote:
    On 30/01/2024 00:57, Lawrence D'Oliveiro wrote:
    On Mon, 29 Jan 2024 16:03:45 +0000, bart wrote:

    By 'Build System', I mean a convenient or automatic way to tell a
    compiler which source and library files comprise a project, one that
    doesn't involve extra dependencies.

    If it only works for C code, then that is going to limit its
    usefulness in
    today’s multilingual world.

    Languages these days tend to have module schemes and built-in means of
    compiling assemblies of modules.

    C doesn't.

    The proposal would allow a project to be built using:

        cc file.c

    instead of cc file.c file2.c .... lib1.dll lib2.dll ...,

    or instead of having to provide a makefile or an @ filelist.

    That is a significant advance on what C compilers typically do.

    You are absolutely right that C does not have any real kind of module
    system, and that can be a big limitation compared to other languages. However, I don't think the build system is where the lack of modules is
    an issue - it is the scaling of namespaces and identifier clashes that
    are the key challenge for large C projects.

    Building is already solved - "make" handles everything from tiny
    projects to huge projects.  When "make" isn't suitable, you need /more/,
    not less - build server support, automated build and test systems, etc.
    And for users who like simpler things and have simpler projects, IDE's
    are almost certainly a better option and will handle project builds.

    I've already covered this in many posts on the subject. But 'make' deals
    with three kinds of requirements:

    (1) Specifying which modules are to be compiled and combined into one
    binary file

    (2) Specifying dependencies between all files to allow rebuilding of
    that one file with minimal recompilation

    (3) Everything else needed in a complex project: running processes to
    generate files like config.h, creating multiple binaries, specifying
    dependencies between binaries, installation etc

    My proposal tackles only (1), which is something that many languages
    now have the means to deal with themselves. I already stated that (2)
    is not covered.

    But you may still need makefiles to deal with (3).

    If your main requirement /is/ only (1), then my idea is to move the
    necessary info into the source code, and tackle it with the C compiler.

    Then no separate script or 'make' utility is needed.

    I also outlined a way to make this work with any existing compiler.
    (Needs an extra C module. Effectively the list of #pragmas becomes a
    script which is processed by this module. But no extra language is
    needed; only C.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Malcolm McLean on Tue Jan 30 11:52:11 2024
    On 30/01/2024 04:46, Malcolm McLean wrote:
    On 30/01/2024 01:45, bart wrote:
    On 30/01/2024 00:57, Lawrence D'Oliveiro wrote:
    On Mon, 29 Jan 2024 16:03:45 +0000, bart wrote:

    By 'Build System', I mean a convenient or automatic way to tell a
    compiler which source and library files comprise a project, one that
    doesn't involve extra dependencies.

    If it only works for C code, then that is going to limit its
    usefulness in
    today’s multilingual world.

    Languages these days tend to have module schemes and built-in means of
    compiling assemblies of modules.

    C doesn't.

    The proposal would allow a project to be built using:

        cc file.c

    instead of cc file.c file2.c .... lib1.dll lib2.dll ...,

    or instead of having to provide a makefile or an @ filelist.

    That is a significant advance on what C compilers typically do.

    There's a desperate need for hierarchy.
    A library like ChatGPT only needs to expose one function,
    "answer_question". Maybe a few extra to give context. But of course
    that one function calls masses and masses of subroutines. Which should
    be private to the module, but not to the source file for the
    "answer_question" function.


    I'm not sure what that has to do with my proposal (which is not to add a
    module scheme as I said).

    I've now added wildcards to my test implementation. If I go to your
    resource compiler project (which I call 'BBX') and add one small C file
    called bbx.c containing:

    #pragma module "*.c"
    #pragma module "freetype/*.c"
    #pragma module "samplerate/*.c"

    then I can build it like this:

    c:\bbx\src>mcc bbx
    Compiling bbx.c to bbx.exe

    The file also provides the name of the executable:

    c:\bbx\src>bbx
    The Baby X resource compiler v1.1
    by Malcolm Mclean
    ....

    Without this feature, building wasn't exactly onerous; I used an @ file
    called 'bbx' which contained:

    *.c freetype/*.c samplerate/*.c

    and built using:

    c:\bbx\src>mcc @bbx -out:bbx
    Compiling 44 files to bbx.exe

    But this requires an extra, non-C file (effectively a script), and a
    special invocation (the @). The EXE name can be put in there as well,
    but the option for that depends on the compiler. (gcc can't use this
    @ file as it contains wildcards.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Malcolm McLean on Tue Jan 30 17:57:29 2024
    On 30/01/2024 16:50, Malcolm McLean wrote:
    On 30/01/2024 11:52, bart wrote:
    On 30/01/2024 04:46, Malcolm McLean wrote:

    There's a desperate need for hierarchy.
    A library like ChatGPT only needs to expose one function,
    "answer_question". Maybe a few extra to give context. But of course
    that one function calls masses and masses of subroutines. Which
    should be private to the module, but not to the source file for the
    "answer_question" function.


    I'm not sure what that has to do with my proposal (which is not to add
    a module scheme as I said).

    Oh you are not adding modules

    In my other language with modules, it specifically does not have a
    hierarchy of modules. It causes all sorts of problems, since it's hard
    to get away from cycles.

    And sometimes you just want to split one module M into modules A and B;
    there is no dominant one.

    But it also means it doesn't do anything clever to determine the set of
    modules comprising a project, starting from one module.

    Some languages traverse a tree of import statements. In mine, I don't
    have import statements littered across the program at all. There is
    just a shopping list of modules stated at the start of the lead module.

    That is the model I used for this C experiment.

    I've now added wildcards to my test implementation. If I go to your
    resource compiler project (which I call 'BBX') and add one small C
    file called bbx.c containing:

         #pragma module "*.c"
         #pragma module "freetype/*.c"
         #pragma module "samplerate/*.c"

    then I can build it like this:

         c:\bbx\src>mcc bbx
         Compiling bbx.c to bbx.exe

    So essentially we have a path listing and description language.
    Which ironically is what the resource compiler basically does. You put a
    list of paths into an XML file, and it uses that to find the resources,
    and merge them together on standard output (as text, of course :-) ).

    You're doing the same, except that of course you have to compile and
    link rather than decode and lightly pre-process.

    Yes, the requirements are very simple: it's a list of files! The same
    list is usually encoded cryptically inside a makefile, or submitted to
    CMake, or put inside an @ file, or given to the compiler on one long
    command line, or in multiple invocations.

    Here it's tidily contained within the C source code.

    But I'm wondering about one file which contains all the sources for the program. Like an IDE project file but lighter weight.

    That occurred to me too. I gave an outline to invoke a special C module
    to scan those #pragma entries in cases where my compiler was not used.

    Such an approach could also be used to unpack a set of source files concatenated into one big source file. This is tidier than having a
    sprawling set of files perhaps split across directories.

    It means you can just supply one text file.
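    A sketch of such an unpacker, assuming each embedded file is
    introduced by an invented marker line such as '//// file: cipher.c':

        #include <stdio.h>

        int unpack(const char* bigfile) {
            char line[1024], name[256];
            FILE* out = NULL;
            FILE* in = fopen(bigfile, "r");
            if (in == NULL) return 1;

            while (fgets(line, sizeof line, in)) {
                if (sscanf(line, "//// file: %255s", name) == 1) {
                    if (out) fclose(out);
                    out = fopen(name, "w");   /* start the next file */
                } else if (out) {
                    fputs(line, out);         /* copy ordinary lines */
                }
            }
            if (out) fclose(out);
            fclose(in);
            return 0;
        }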

    But there are other ways that people do the same job of turning
    multi-module C into a single file.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Harnden@21:1/5 to Malcolm McLean on Tue Jan 30 19:22:00 2024
    On 30/01/2024 16:50, Malcolm McLean wrote:

    But I'm wondering about one file which contains all the sources for the program. Like an IDE project file but lighter weight.


    In other words: a Makefile

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to bart on Tue Jan 30 16:46:56 2024
    bart <bc@freeuk.com> writes:

    [description of a rudimentary C build system]

    What was described is what I might call the easiest and
    least important part of a build system.

    Looking over one of my current projects (modest in size,
    a few thousand lines of C source, plus some auxiliary
    files adding perhaps another thousand or two), here are
    some characteristics essential for my workflow (given
    in no particular order):

    * have multiple outputs (some outputs the result of
    C compiles, others the result of other tools)

    * use different flag settings for different translation
    units

    * be able to express dependency information

    * produce generated source files, sometimes based
    on other source files

    * be able to invoke arbitrary commands, including
    user-written scripts or other programs

    * build or rebuild some outputs only when necessary

    * condition some processing steps on successful
    completion of other processing steps

    * deliver partially built as well as fully built
    program units

    * automate regression testing and project archival
    (in both cases depending on completion status)

    * produce sets of review locations for things like
    program errors or TBD items

    * express different ways of combining compiler
    outputs (such as .o files) depending on what
    is being combined and what output is being
    produced (sometimes a particular set of inputs
    will be combined in several different ways to
    produce several different outputs)

    Indeed it is the case that producing a complete program is one
    part of my overall build process. But it is only one step out
    of many, and it is easy to express without needing any special
    considerations from the build system.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to David Brown on Wed Jan 31 00:44:57 2024
    On 30/01/2024 08:06, David Brown wrote:
    On 30/01/2024 02:38, Chris M. Thomasson wrote:
    On 1/29/2024 4:57 PM, Lawrence D'Oliveiro wrote:
    On Mon, 29 Jan 2024 16:03:45 +0000, bart wrote:

    By 'Build System', I mean a convenient or automatic way to tell a
    compiler which source and library files comprise a project, one that
    doesn't involve extra dependencies.

    If it only works for C code, then that is going to limit its
    usefulness in
    today’s multilingual world.

    Huh?

    I assume he means it's common to use multiple programming languages,
    rather than multiple human languages.  (The latter may also be true, but
    it's the former that is relevant.)

    For my own use at least, he's right.  His system is aimed at being
    simpler than make for C-only projects with limited and straightforward
    build requirements.  That's fine for such projects, and if that suits
    his needs or the needs of others, great.  But it would not cover more
    than a tiny proportion of my projects over the decades - at least not
    without extra help (extra commands, bash/bat files, etc.)

    It would cover most open source C projects that I have tried to build.
    All the following examples came down to a list of C files to be
    submitted to a compiler:

    Lua
    Seed7* (a version from 5 years ago)
    Tcc*
    PicoC
    LibJPEG*
    'BBX' (Malcolm's resource compiler)
    A68G (An older version; current one is Linux-only)

    The ones marked * I believe required some process first to synthesise
    some essential header, eg. config.h, often only a few lines long. But
    once that was out of the way, then yes, it was just N C files to
    plough through.

    Tcc also had other things going on (once tcc.exe was built, it was used
    to prepare some libraries).

    LibJPEG had more than one executable, which shared a lot of common
    modules. The makefile put those into one .a file, which was then
    included in all programs. But since it was statically linked, it did not
    save space.

    Once I knew what was going on, I just put the common modules in the list
    for each program. Or /I/ could choose to put those into a shared library.

    It's a question of extracting the easy parts of a project. Once I know
    that, I can work my way around anything else and devise my own solutions.

    --------------------

    In terms of my own real applications, they involved compiled modules; interpreted modules (that needed compiling to bytecode); processing
    source files to derive/update message files for internationalisation;
    packaging the numerous files into tidy containers; uploading to
    distribution disks, or via FTP; scripts to generate the new index.html
    for downloads...

    I understand all that part of it. The necessary scripting is utterly
    trivial. The above was a process to go through for each release. It
    wasn't time-critical, and there were no dependencies to deal with. It
    wasn't makefile territory.

    The build system described in my OP is that needed to build one binary
    file in one language, which is 95% of what I had trouble with in that
    list above, /because/ the supplied build process revolved around
    makefiles and configure scripts.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Tim Rentsch on Wed Jan 31 03:13:20 2024
    On Tue, 30 Jan 2024 16:46:56 -0800, Tim Rentsch wrote:

    * have multiple outputs (some outputs the result of
    C compiles, others the result of other tools)

    Just as an example, the man page for Blender is generated by a Python
    script that runs the built executable with the “--help” option and wraps that output in some troff markup.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to David Brown on Wed Jan 31 03:14:34 2024
    On Tue, 30 Jan 2024 09:17:51 +0100, David Brown wrote:

    You are absolutely right that C does not have any real kind of module
    system ...

    Guess which language, which was already considered a bit ancient when C
    became popular, has a module system now?

    Fortran.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Lawrence D'Oliveiro on Wed Jan 31 03:23:46 2024
    On 2024-01-31, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Tue, 30 Jan 2024 16:46:56 -0800, Tim Rentsch wrote:

    * have multiple outputs (some outputs the result of
    C compiles, others the result of other tools)

    Just as an example, the man page for Blender is generated by a Python
    script that runs the built executable with the “--help” option and wraps that output in some troff markup.

    That's the sort of stunt that explains why distros have given up on
    clean cross compiling, and resorted to Qemu.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Chris M. Thomasson on Wed Jan 31 08:36:36 2024
    On 31/01/2024 00:23, Chris M. Thomasson wrote:
    On 1/30/2024 12:06 AM, David Brown wrote:
    On 30/01/2024 02:38, Chris M. Thomasson wrote:
    On 1/29/2024 4:57 PM, Lawrence D'Oliveiro wrote:
    On Mon, 29 Jan 2024 16:03:45 +0000, bart wrote:

    By 'Build System', I mean a convenient or automatic way to tell a
    compiler which source and library files comprise a project, one that
    doesn't involve extra dependencies.

    If it only works for C code, then that is going to limit its
    usefulness in
    today’s multilingual world.

    Huh?

    I assume he means it's common to use multiple programming languages,
    rather than multiple human languages.  (The latter may also be true,
    but it's the former that is relevant.)

    For my own use at least, he's right.  His system is aimed at being
    simpler than make for C-only projects with limited and straightforward
    build requirements.

    When you say his, you mean, Bart's system, right?


    Yes.


    That's fine for such projects, and if that suits his needs or the
    needs of others, great.  But it would not cover more than a tiny
    proportion of my projects over the decades - at least not without
    extra help (extra commands, bash/bat files, etc.)



    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Kaz Kylheku on Wed Jan 31 08:47:20 2024
    On 31/01/2024 04:23, Kaz Kylheku wrote:
    On 2024-01-31, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Tue, 30 Jan 2024 16:46:56 -0800, Tim Rentsch wrote:

    * have multiple outputs (some outputs the result of
    C compiles, others the result of other tools)

    Just as an example, the man page for Blender is generated by a Python
    script that runs the built executable with the “--help” option and
    wraps that output in some troff markup.

    That's the sort of stunt that explains why distros have given up on
    clean cross compiling, and resorted to Qemu.


    It is also the sort of stunt that reduces development effort and ensures
    that you minimise the risk of documentation being out of sync with the
    program. I have never tried to build Blender, so I can't comment on
    this particular project, but if it is done right then I don't see a
    big problem. (If it is done wrong, requiring multiple "make"
    invocations or something like that, then it can be annoying.)

    For distros trying to make good meta-build systems, something like that
    is minor compared to C source files using __DATE__ and __TIME__ (or even
    worse, $Id$) to generate version numbers.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Lawrence D'Oliveiro on Wed Jan 31 12:19:10 2024
    On 31/01/2024 03:13, Lawrence D'Oliveiro wrote:
    On Tue, 30 Jan 2024 16:46:56 -0800, Tim Rentsch wrote:

    * have multiple outputs (some outputs the result of
    C compiles, others the result of other tools)

    Just as an example, the man page for Blender is generated by a Python
    script that runs the built executable with the “--help” option and wraps that output in some troff markup.

    I do things a bit more simply. The '-help' text for my MCC program is implemented with this line in the original source:

    println strinclude("help.txt")

    The help text is just a regular text file. So often you see reams of
    printf statements containing strings full of escape codes...
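    (For what it's worth, C23's #embed now allows much the same thing in
    plain C; a sketch, assuming a C23 compiler and the same help.txt:)

        static const char help_text[] = {
        #embed "help.txt"
            , '\0'             /* terminate, then e.g. puts(help_text) */
        };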


    But I can do complex too, if this is what we're trying to show off.

    I needed to produce (some time ago...) a printed manual for my
    application (CAD software), which contained a mix of text, tables,
    images and vector diagrams:

    - The text was created with an ordinary text editor

    - It incorporated runoff-like commands that I'd devised

    - It was processed by a program I'd written in a scripting language.

    - That program ran under the application in question

    - It rendered it a page at a time, which was then processed by the
    app's PostScript driver, then sent to an actual PostScript printer

    - I also wrote the manual

    - I also wrote the whole CAD application

    - I created both the language used for the app, and the scripting language

    - I /implemented/ both languages, one of them in itself

    - Oh, and I wrote the text editor!

    - I believe this was done pre-Windows, which meant also writing various
    drivers for graphics adaptors, all the libraries needed to draw stuff
    including GUI, providing bitmap and vector fonts, writing printer and
    plotter drivers (of which the PS/EPS driver was one), etc etc


    So this was not just orchestrating various bits of pre-existing
    software, which makes Unix people feel so superior because they can do:

    x | a | b | c > y

    instead of (using default file extensions):

    a x
    b x
    c x

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Spiros Bousbouras on Wed Jan 31 15:31:29 2024
    On 31/01/2024 12:02, Spiros Bousbouras wrote:
    On Wed, 31 Jan 2024 08:47:20 +0100
    David Brown <david.brown@hesbynett.no> wrote:
    On 31/01/2024 04:23, Kaz Kylheku wrote:
    On 2024-01-31, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Tue, 30 Jan 2024 16:46:56 -0800, Tim Rentsch wrote:

    * have multiple outputs (some outputs the result of
    C compiles, others the result of other tools)

    Just as an example, the man page for Blender is generated by a Python
    script that runs the built executable with the “--help” option and wraps
    that output in some troff markup.

    That's the sort of stunt that explains why distros have given up on
    clean cross compiling, and resorted to Qemu.


    It is also the sort of stunt that reduces development effort and ensures
    that you minimise the risk of documentation being out of sync with the
    program.

    I don't see how it achieves such tasks. For preventing loss of
    agreement between behaviour and documentation, the developers must
    have the necessary self-discipline to modify the documentation when
    they make changes in the behaviour. If they have such self-discipline
    then it's no harder to modify a separate documentation file than it is
    to modify the part of the source code which prints the --help output.
    Personally, I have the file(s) with the documentation as additional
    tabs in the same vim session where other tabs have the source code.

    They must document the user-visible features in (at least) two places -
    the "man" page, and the "--help" output. By using automation to
    generate one of these from the other, they reduce the duplicated effort.


    Also, the output of --help should be a short reminder whereas
    documentation should be longer, possibly much longer, possibly
    containing a tutorial, depending on how complex the application is.

    The same applies to "man" pages. Sometimes it makes sense to have short "--help" outputs and longer "man" pages, but if the documentation is
    longer than perhaps a dozen pages/screenfuls, "man" is unsuitable. And
    I imagine that the documentation for blender, along with its tutorials
    (as you say), is many orders of magnitude more than that. Keeping the
    "man" page and "--help" output the same seems sensible here.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to David Brown on Wed Jan 31 15:13:20 2024
    David Brown <david.brown@hesbynett.no> writes:
    On 31/01/2024 12:02, Spiros Bousbouras wrote:
    On Wed, 31 Jan 2024 08:47:20 +0100
    David Brown <david.brown@hesbynett.no> wrote:
    On 31/01/2024 04:23, Kaz Kylheku wrote:
    On 2024-01-31, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Tue, 30 Jan 2024 16:46:56 -0800, Tim Rentsch wrote:

    * have multiple outputs (some outputs the result of
    C compiles, others the result of other tools)

    Just as an example, the man page for Blender is generated by a Python
    script that runs the built executable with the “--help” option and
    wraps that output in some troff markup.

    That's the sort of stunt why distros have given up on clean cross
    compiling, and resorted to Qemu.


    It is also the sort of stunt that reduces development effort and
    ensures that you minimise the risk of documentation being out of sync
    with the program.

    I don't see how it achieves such tasks. For preventing loss of
    agreement between behaviour and documentation, the developers must
    have the necessary self-discipline to modify the documentation when
    they make changes in the behaviour. If they have such self-discipline
    then it's no harder to modify a separate documentation file than it is
    to modify the part of the source code which prints the --help output.
    Personally, I have the file(s) with the documentation as additional
    tabs in the same vim session where other tabs have the source code.

    They must document the user-visible features in (at least) two places -
    the "man" page, and the "--help" output. By using automation to
    generate one of these from the other, they reduce the duplicated effort.

    Indeed. In our case, we generate the manpages using nroff, and the
    simulator 'help' command will call
    system("man ${INSTALL_LOC}/man/topic.man") to display the help text.
    We also process the manpage source files with troff to generate pages
    appended to the end of the user's guide (troff MOM macro set) PDF.

    Only one place (the manpage source file) need be updated.
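    In C, such a 'help' command can be little more than this sketch (the
    topic name parameterised; INSTALL_LOC and the layout as above):

        #include <stdio.h>
        #include <stdlib.h>

        void help_cmd(const char* topic) {
            char cmd[512];
            /* system() goes through a shell, which also expands
               ${INSTALL_LOC} for us */
            snprintf(cmd, sizeof cmd, "man ${INSTALL_LOC}/man/%s.man", topic);
            system(cmd);
        }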

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From vallor@21:1/5 to richard.nospam@gmail.invalid on Wed Jan 31 16:41:21 2024
    XPost: comp.unix.programmer

    On Tue, 30 Jan 2024 19:22:00 +0000, Richard Harnden <richard.nospam@gmail.invalid> wrote in <upbi8o$14443$1@dont-email.me>:

    On 30/01/2024 16:50, Malcolm McLean wrote:

    But I'm wondering about one file which contains all the sources for the
    program. Like an IDE project file but lighter weight.


    In other words: a Makefile

    Agreed; it's a solution looking for a problem.

    $ make -j # how does Bart's new build manager handle this case?

    ("-j" engages parallel compilation.)

    ObC:
    $ cat try.c
    #include <stdlib.h>
    int main(void) {
    return(system("make -j 16"));
    }
    _ _ _ _ _ _ _

    $ cat Makefile
    CFLAGS=-g -O2 -std=c90 -pedantic
    _ _ _ _ _ _ _

    $ make try
    cc -g -O2 -std=c90 -pedantic try.c -o try

    $ ./try
    make: 'try' is up to date.

    --
    -v

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From vallor@21:1/5 to All on Wed Jan 31 19:01:01 2024
    XPost: comp.unix.programmer

    On Wed, 31 Jan 2024 16:41:21 -0000 (UTC), vallor <vallor@cultnix.org>
    wrote in <updt7h$1jc8a$1@dont-email.me>:

    On Tue, 30 Jan 2024 19:22:00 +0000, Richard Harnden <richard.nospam@gmail.invalid> wrote in <upbi8o$14443$1@dont-email.me>:

    On 30/01/2024 16:50, Malcolm McLean wrote:

    But I'm wondering about one file which contains all the sources for the
    program. Like an IDE project file but lighter weight.


    In other words: a Makefile

    Agreed; it's a solution looking for a problem.

    $ make -j # how does Bart's new build manager handle this case?

    ("-j" engages parallel compilation.)

    ObC:
    $ cat try.c
    #include <stdlib.h>
    int main(void) {
    return(system("make -j 16"));
    }
    _ _ _ _ _ _ _

    $ cat Makefile
    CFLAGS=-g -O2 -std=c90 -pedantic
    _ _ _ _ _ _ _

    $ make try
    cc -g -O2 -std=c90 -pedantic try.c -o try

    $ ./try
    make: 'try' is up to date.

    I also had "try:" in my Makefile.

    _ _ _ _ _ _ _
    CFLAGS=-g -O2 -std=c90 -pedantic

    try:
    _ _ _ _ _ _ _

    But I changed the source to name the target
    explicitly:

    $ cat try.c
    #include <stdlib.h>
    int main(void) {
    return(system("make -j 16 try"));
    }

    $ ./try
    cc -g -O2 -std=c90 -pedantic try.c -o try

    $ ./try
    make: 'try' is up to date.

    (Beats trying to learn COBOL to keep up with
    comp.lang.c... ;)
    --
    -v

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to vallor on Wed Jan 31 20:25:07 2024
    XPost: comp.unix.programmer

    On 31/01/2024 16:41, vallor wrote:
    On Tue, 30 Jan 2024 19:22:00 +0000, Richard Harnden <richard.nospam@gmail.invalid> wrote in <upbi8o$14443$1@dont-email.me>:

    On 30/01/2024 16:50, Malcolm McLean wrote:

    But I'm wondering about one file which contains all the sources for the
    program. Like an IDE project file but lighter weight.


    In other words: a Makefile

    Agreed; it's a solution looking for a problem.

    Why do you think languages come with modules? That allows them to
    discover their own modules, rather than rely on external apps where the
    details are buried under appalling syntax and mixed up with a hundred
    other matters.



    $ make -j # how does Bart's new build manager handle this case?

    ("-j" engages parallel compilation.)

    ObC:
    $ cat try.c
    #include <stdlib.h>
    int main(void) {
    return(system("make -j 16"));
    }
    _ _ _ _ _ _ _

    $ cat Makefile
    CFLAGS=-g -O2 -std=c90 -pedantic
    _ _ _ _ _ _ _

    $ make try
    cc -g -O2 -std=c90 -pedantic try.c -o try

    $ ./try
    make: 'try' is up to date.


    This on the other hand looks EXACTLY like a solution looking for a
    problem.


    BTW that 'make' only works on my machine because it happens to be part
    of mingw; none of my other C compilers have make.

    And as written, it only works for 'cc' which comes with 'gcc'. If I use
    CC to set another compiler, then the -o option is wrong for tcc. The
    other options are not recognised with two other compilers.

    Look at the follow-up to my OP that I will shortly post.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to bart on Wed Jan 31 20:36:22 2024
    On 29/01/2024 16:03, bart wrote:

    Working with Other Compilers
    ----------------------------

    Clearly, my scheme will only work with a suitably modified compiler.
    Without that, I considered doing something like this, adding this
    block to my example from (2):

        #pragma module "cipher.c"
        #pragma module "hmac.c"
        #pragma module "sha2.c"

        #ifndef __MCC__
            #include "runcc.c"

            int main(void) {
                runcc(__FILE__);
            }
        #endif

    I tried to do a proof of concept today. But there's one problem I'm not
    sure how to get around yet. However, the odd behaviour of gcc comes to
    the rescue here.

    Going with the same 3-file test project, I created this version of the
    above:

    #pragma module "cipher.c"
    #pragma module "hmac.c"
    #pragma module "sha2.c"

    #ifndef __MCC__
    #include "runcc.c"

    int main(int n, char** args) {
        char* compiler = (n >= 2 ? args[1] : "tcc");

        runcc(compiler, __FILE__);
    }
    #endif

    runcc.c is 100 lines of code, but it is only there to test that the
    idea works.


    First build this short program with any compiler, here using gcc:

    c:\c>gcc demo.c

    Now run the a.exe file produced, here shown in two different ways:

    c:\c>a
    Invoking compiler: tcc -o demo.exe cipher.c hmac.c sha2.c
    Finished building: demo.exe

    c:\c>a gcc
    Invoking compiler: gcc -o demo.exe cipher.c hmac.c sha2.c
    Finished building: demo.exe

    c:\c>demo
    argument count incorrect! ...

    It defaults to using tcc to build, but a compiler can be provided as
    shown. It wasn't possible to pick up the compiler used to build 'demo.c'.

    The main problem is that if demo.c is compiled to demo.exe (the stub
    program that reads the #pragmas from demo.c and invokes the compiler),
    it is not possible for demo.exe to then build the application as
    'demo.exe': the two would clash, and Windows doesn't allow it anyway.

    So gcc's a.exe helps for this demo.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to vallor on Wed Jan 31 21:17:37 2024
    XPost: comp.unix.programmer

    On Wed, 31 Jan 2024 16:41:21 -0000 (UTC), vallor wrote:

    $ make -j

    The last time I tried that on an FFmpeg build, it brought my machine to
    its knees. ;)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to David Brown on Wed Jan 31 23:02:32 2024
    On Wed, 31 Jan 2024 15:31:29 +0100, David Brown wrote:

    ... but if the documentation is
    longer than perhaps a dozen pages/screenfuls, "man" is unsuitable.

    So it is your considered opinion, then, that the bash man page is “unsuitable”?

    ldo@theon:~> man bash | wc -l
    5276

    Actually I refer to it quite a lot. Being able to use search functions
    helps.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Scott Lurndal on Wed Jan 31 23:00:58 2024
    On Wed, 31 Jan 2024 15:13:20 GMT, Scott Lurndal wrote:

    ... and the simulator 'help' command will call system("man ${INSTALL_LOC}/man/topic.man")

    Agh! Why do people feel the need to go through a shell where a shell is
    not needed?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Lawrence D'Oliveiro on Thu Feb 1 00:29:23 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Wed, 31 Jan 2024 15:13:20 GMT, Scott Lurndal wrote:

    ... and the simulator 'help' command will call system("man
    ${INSTALL_LOC}/man/topic.man")

    Agh! Why do people feel the need to go through a shell where a shell is
    not needed?

    Because 'system()' works and it is a lot less code than
    fork and exec?

    How would you display a manpage using nroff markup
    from an application?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Lawrence D'Oliveiro on Thu Feb 1 00:33:52 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Wed, 31 Jan 2024 15:31:29 +0100, David Brown wrote:

    ... but if the documentation is
    longer than perhaps a dozen pages/screenfuls, "man" is unsuitable.

    So it is your considered opinion, then, that the bash man page is
    “unsuitable”?

    ldo@theon:~> man bash | wc -l
    5276

    Actually I refer to it quite a lot. Being able to use search functions
    helps.

    When working with the ksh man page, I use vim.

    function viman
    {
    a=$(mktemp absXXXXXXX)
    man "$1" | col -b > ${a}
    vim ${a}
    rm ${a}
    }


    $ viman ksh

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Scott Lurndal on Thu Feb 1 03:07:04 2024
    On Thu, 01 Feb 2024 00:29:23 GMT, Scott Lurndal wrote:

    How would you display an manpage using nroff markup from an application?

    Much safer:

    subprocess.run \
    (
    args = ("man", os.path.expandvars("${INSTALL_LOC}/man/topic.man"))
    )
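    A C equivalent without the shell would be along these lines (a
    sketch, POSIX assumed, with the path already expanded):

        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        void show_man(const char* manfile) {
            pid_t pid = fork();
            if (pid == 0) {                   /* child: run man directly */
                execlp("man", "man", manfile, (char*)0);
                _exit(127);                   /* exec failed */
            } else if (pid > 0) {
                int status;
                waitpid(pid, &status, 0);     /* wait for the pager */
            }
        }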

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Lawrence D'Oliveiro on Thu Feb 1 09:48:45 2024
    XPost: comp.unix.programmer

    On 31/01/2024 22:17, Lawrence D'Oliveiro wrote:
    On Wed, 31 Jan 2024 16:41:21 -0000 (UTC), vallor wrote:

    $ make -j

    The last time I tried that on an FFmpeg build, it brought my machine to
    its knees. ;)

    Sometimes "make -j" can be a bit enthusiastic about the number of
    processes it starts. If there are many operations it /could/ do, trying
    to run them all can chew through a lot more memory than you'd like. I
    usually use something like "make -j 8", though the ideal number of
    parallel tasks depends on the number of cpu cores you have, their type
    (SMT threads or real cores, "big" cores or "little" cores), memory,
    speed of disks, additional tools like ccache or distcc, etc.

    I'd rather "make -j" (without a number) defaulted to using the number of
    cpu cores, as that is a reasonable guess for most compilations.
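    A build tool could query that number itself; a sketch using the
    widely available _SC_NPROCESSORS_ONLN extension:

        #include <stdio.h>
        #include <unistd.h>

        int main(void) {
            long n = sysconf(_SC_NPROCESSORS_ONLN);   /* online cpu cores */
            printf("would default to -j %ld\n", n > 0 ? n : 1);
            return 0;
        }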

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to bart on Thu Feb 1 09:39:15 2024
    XPost: comp.unix.programmer

    On 31/01/2024 21:25, bart wrote:
    On 31/01/2024 16:41, vallor wrote:
    On Tue, 30 Jan 2024 19:22:00 +0000, Richard Harnden
    <richard.nospam@gmail.invalid> wrote in <upbi8o$14443$1@dont-email.me>:

    On 30/01/2024 16:50, Malcolm McLean wrote:

    But I'm wondering about one file which contains all the sources for the >>>> program. Like an IDE project file but lighter weight.


    In other words: a Makefile

    Agreed; it's a solution looking for a problem.

    Why do you think languages come with modules? That allows them to
    discover their own modules, rather than rely on external apps where the details are buried under appalling syntax and mixed up with a hundred
    other matters.


    No, that is not at all the purpose of modules in programming. Note that
    there is no specific meaning of "module", and different languages use
    different terms for similar concepts. There are many features that a
    language's "module" system might have - some have all, some have few:

    1. It lets you split the program into separate parts - generally
    separate files. This is essential for scalability for large programs.

    2. You can compile modules independently to allow partial builds.

    3. Modules generally have some way to specify exported symbols and
    facilities that can be used by other modules.

    4. Modules can "import" other modules, gaining access to those modules' exported symbols.

    5. Modules provide encapsulation of data, code and namespaces.

    6. Modules can be used in a hierarchical system, building big modules
    from smaller ones to support larger libraries with many files.

    7. Modules provide a higher level concept that can be used by language
    tools to see how the whole program fits together or interact with
    package managers and librarian tools.


    C provides 1, 2, 3, and 4 if you use a "file.c/file.h" organisation.
    It provides a limited form of 5 (everything that is not exported is
    "static"), but scaling to larger systems is dependent on identifier
    prefixes.
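    As a minimal illustration of that organisation (names invented):

        /* counter.h - the exported interface (items 3 and 4) */
        #ifndef COUNTER_H
        #define COUNTER_H
        void counter_inc(void);
        int  counter_get(void);
        #endif

        /* counter.c - the implementation (items 1, 2 and 5) */
        #include "counter.h"

        static int count;     /* "static": invisible to other modules */

        void counter_inc(void) { count++; }
        int  counter_get(void) { return count; }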

    You seem to be thinking purely about item 7 above. This is, I think,
    common in interpreted languages (where modules have to be found at
    run-time, where the user is there but the developer is not). Compiled languages don't usually have such a thing, because developers (as
    distinct from users) have build tools available that do a better job.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to David Brown on Thu Feb 1 11:31:14 2024
    XPost: comp.unix.programmer

    On 01/02/2024 08:39, David Brown wrote:
    On 31/01/2024 21:25, bart wrote:
    On 31/01/2024 16:41, vallor wrote:
    On Tue, 30 Jan 2024 19:22:00 +0000, Richard Harnden
    <richard.nospam@gmail.invalid> wrote in <upbi8o$14443$1@dont-email.me>:

    On 30/01/2024 16:50, Malcolm McLean wrote:

    But I'm wondering about one file which contains all the sources for
    the
    program. Like an IDE project file but lighter weight.


    In other words: a Makefile

    Agreed; it's a solution looking for a problem.

    Why do you think languages come with modules? That allows them to
    discover their own modules, rather than rely on external apps where
    the details are buried under appalling syntax and mixed up with a
    hundred other matters.


    No, that is not at all the purpose of modules in programming.  Note
    that there is no specific meaning of "module", and different languages
    use different terms for similar concepts.  There are many features
    that a
    language's "module" system might have - some have all, some have few:

    1. It lets you split the program into separate parts - generally
    separate files.  This is essential for scalability for large programs.

    2. You can compile modules independently to allow partial builds.

    3. Modules generally have some way to specify exported symbols and
    facilities that can be used by other modules.

    4. Modules can "import" other modules, gaining access to those modules' exported symbols.

    5. Modules provide encapsulation of data, code and namespaces.

    6. Modules can be used in a hierarchical system, building big modules
    from smaller ones to support larger libraries with many files.

    7. Modules provide a higher level concept that can be used by language
    tools to see how the whole program fits together or interact with
    package managers and librarian tools.


    C provides 1, 2, 3, and 4 if you use a "file.c/file.h" organisation.  It provides a limited form of 5 (everything that is not exported is
    "static"), but scaling to larger systems is dependent on identifier
    prefixes.

    You seem to be thinking purely about item 7 above.  This is, I think,
    common in interpreted languages (where modules have to be found at
    run-time, where the user is there but the developer is not).
    I've been implementing languages with language-supported modules for
    about 12 years.

    They generally provide 1, 3, 4, 5, and 7 from your list, and partial
    support of 6.

    They don't provide 2 (compiling individual modules) because the aim is
    a very fast, whole-program compiler.

    While for 6, there is only a hierarchy between groups of modules, each
    forming an independent sub-program or library. I tried a strict full
    per-module hierarchy early on, mixed up with independent compilation; it
    worked poorly.

    The two levels allow you to assemble one binary out of groups of modules
    that each represent an independent component or library.

    Compiled
    languages don't usually have such a thing, because developers (as
    distinct from users) have build tools available that do a better job.

    Given a module scheme, the tool needed to build a whole program should
    not need to be told about the names and location of every constituent
    module; it should be able to determine that from what's already in the
    source code, given only a start point.

    Even with independent compilation, you might be able to use that info to determine dependencies, but you will need that module hierarchy if you
    want to compile individual modules.

    My view is that that tool only needs to be the compiler (a program that
    does the 'full stack' from source files to executable binary) working
    purely from the source code.

    Yours is to have compilers, assemblers, linkers and make programs,
    working with auxiliary data in makefiles, which themselves have to be
    generated by extra tools or special options, or built by hand.

    I see that as old-fashioned and error-prone. Also complex and limited
    (eg. they will not support my compiler.)

    The experiment in my OP is intended to bring part of my module scheme to C.

    However, that will of course be poorly received. Why? Because when a
    language doesn't provide a certain feature (eg. module management), then
    people are free to do all sorts of wild and whacky things to achieve
    some result.

    Approaches that don't fit into the disciplined requirements of a
    language-stipulated module scheme.

    A good example is the header-based module scheme of my BCC compiler;
    this required modules to be implemented as tidy .h/.c pairs of files. Of course, real C code is totally chaotic in its use of headers.

    In other words, you can't retro-fit a real module scheme to C, not one
    that will work with existing code.

    But for all my projects and all the ones /I/ want to build, they do come
    down to just knowing what source files need to be submitted to the
    compiler. It really can be that simple. That CAN be trivially
    retro-fitted to existing projects.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Spiros Bousbouras on Thu Feb 1 14:42:13 2024
    On 31.01.2024 12:02, Spiros Bousbouras wrote:
    On Wed, 31 Jan 2024 08:47:20 +0100
    David Brown <david.brown@hesbynett.no> wrote:

    It is also the sort of stunt that reduces development effort and ensures
    that you minimise the risk of documentation being out of sync with the
    program.

    This statement also set some neurons firing for me, and made me recall
    the development processes we used (in some large projects, not in
    one-man-shows)...


    I don't see how it achieves such tasks. For preventing loss of
    agreement between behaviour and documentation, the developers must
    have the necessary self-discipline to modify the documentation when
    they make changes in the behaviour.

    This self-discipline can be supported by tools. If we had to change
    things (due to feature request or bug) we opened a 'feature or bug
    request', established a 'track' and associated some 'fix-records'
    to the track. The 'request' contained the description, requirements,
    or specification, the individual 'fix-records' were, e.g., for code,
    or documentation, or dependent parts, and separately assigned.
    A (typical?) method to organize things and not forget an important
    step or product part.

    (Note: The used 'keywords' are approximations and may not actually
    match the tool's literals.)

    If they have such self-discipline then it's no harder to modify a
    separate documentation file than it is to modify the part of the
    source code which prints the --help output. Personally, I have the
    file(s) with the documentation as additional tabs in the same vim
    session where other tabs have the source code.

    [...]

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to David Brown on Thu Feb 1 14:55:20 2024
    On 31.01.2024 15:31, David Brown wrote:
    On 31/01/2024 12:02, Spiros Bousbouras wrote:

    They must document the user-visible features in (at least) two places -
    the "man" page, and the "--help" output. By using automation to
    generate one of these from the other, they reduce the duplicated effort.

    Also, the output of --help should be a short reminder whereas
    documentation should be longer, possibly much longer, possibly
    containing a tutorial, depending on how complex the application is.

    The same applies to "man" pages. Sometimes it makes sense to have short "--help" outputs and longer "man" pages, but if the documentation is
    longer than perhaps a dozen pages/screenfuls, "man" is unsuitable. And
    I imagine that the documentation for blender, along with its tutorials
    (as you say), is many orders of magnitude more than that. Keeping the
    "man" page and "--help" output the same seems sensible here.

Ksh93 has chosen an interesting path here; it has a powerful getopts
command (to parse the command line options), and has extended the well
known simple option-string format into what is effectively a small
language of its own for describing everything about the options (type,
defaults, forms, purpose, etc.). This allows automated generation of
output in several forms (HTML, man, etc.) for every command that uses
ksh93 getopts to parse its options
(try 'getopts --man' [in ksh93] for details).
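
As a rough sketch from memory (the command and option names are made
up, and the authoritative description of the syntax is what 'getopts
--man' prints), a self-documenting ksh93 command looks something like:

USAGE=$'[-?\n@(#)mytool 1.0\n]
[+NAME?mytool --- example of self-documenting options]
[v:verbose?Produce more output.]
[n:count?Repeat the action.]#[count]
\n\nfile ...\n'

while getopts "$USAGE" opt
do  case $opt in
    v)  verbose=1 ;;
    n)  count=$OPTARG ;;
    esac
done
shift $((OPTIND-1))

With that in place, 'mytool --man' renders a man-style page and
'mytool --help' prints the short usage summary, all generated from the
one option description.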

There are a couple of comparable approaches elsewhere (Eiffel extracts
some properties inherent to the language from the source, Java
extracts user-defined javadoc entries, etc.).

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Scott Lurndal on Thu Feb 1 15:11:48 2024
    On 01.02.2024 01:33, Scott Lurndal wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Wed, 31 Jan 2024 15:31:29 +0100, David Brown wrote:

    ... but if the documentation is
    longer than perhaps a dozen pages/screenfuls, "man" is unsuitable.

    So it is your considered opinion, then, that the bash man page is
    “unsuitable”?

    ldo@theon:~> man bash | wc -l
    5276

    Actually I refer to it quite a lot. Being able to use search functions
    helps.

    When working with the ksh man page, I use vim.

function viman
{
# put the rendered man page in a temporary file (col -b strips the
# backspace overstriking), view it in vim, then clean up
a=$(mktemp absXXXXXXX)
man "$1" | col -b > "$a"
vim "$a"
rm "$a"
}


    $ viman ksh


    In some modern shells (ksh, bash, zsh) you may use process substitution
    and avoid creating a temporary file (it simplifies things)...

    vim <(man "$1" | col -b)


    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Tim Rentsch on Thu Feb 1 14:29:06 2024
    On 31/01/2024 00:46, Tim Rentsch wrote:
    bart <bc@freeuk.com> writes:

    [description of a rudimentary C build system]

    What was described is what I might call the easiest and
    least important part of a build system.

    Looking over one of my current projects (modest in size,
    a few thousand lines of C source, plus some auxiliary
    files adding perhaps another thousand or two), here are
    some characteristics essential for my workflow (given
    in no particular order):

    * have multiple outputs (some outputs the result of
    C compiles, others the result of other tools)

    * use different flag settings for different translation
    units

    * be able to express dependency information

* produce generated source files, sometimes based
    on other source files

    * be able to invoke arbitrary commands, including
    user-written scripts or other programs

    * build or rebuild some outputs only when necessary

    * condition some processing steps on successful
    completion of other processing steps

    * deliver partially built as well as fully built
    program units

    * automate regression testing and project archival
    (in both cases depending on completion status)

    * produce sets of review locations for things like
    program errors or TBD items

    * express different ways of combining compiler
    outputs (such as .o files) depending on what
    is being combined and what output is being
    produced (sometimes a particular set of inputs
    will be combined in several different ways to
    produce several different outputs)

    Indeed it is the case that producing a complete program is one
    part of my overall build process. But it is only one step out
    of many, and it is easy to express without needing any special
    considerations from the build system.


    Looking over one of my current projects (modest in size,
    a few thousand lines of C source, plus some auxiliary
    files adding perhaps another thousand or two),

    So, will a specific build of such a project produce a single EXE/DLL//SO
    file? (The // includes the typical file extension of Linux executables.)

    This is all I want for a build.

    I guess if you wrote your program in a language XXX that provided this
build process, for example:

    xxxc -build leadmodule.xxx

    you would find it equally unusable because it doesn't provide the
    flexibility you're accustomed to from the chaotic, DIY nature of your
    current methods.

The idea is that you have a tool that provides the basic build process
as illustrated with the xxxc example, and you superimpose any custom
requirements on top of that, making use of whatever customisation
abilities it does provide.

    An analogy would be switching to a language that doesn't have C's
    preprocessor. If your coding style depends on macros that yield random
bits of syntax, or on using conditional blocks to arbitrarily choose
which lines to process, then you can likewise dismiss it as unusable.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Lawrence D'Oliveiro on Thu Feb 1 15:00:03 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Thu, 01 Feb 2024 00:29:23 GMT, Scott Lurndal wrote:

How would you display a man page using nroff markup from an application?

    Much safer:

subprocess.run \
  (
    args = ("man", os.path.expandvars("${INSTALL_LOC}/man/topic.man"))
  )

    WTF?

    You are aware you are posting to comp.lang.c, right?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to bart on Thu Feb 1 16:11:50 2024
    XPost: comp.unix.programmer

    On 01/02/2024 12:31, bart wrote:
    On 01/02/2024 08:39, David Brown wrote:
    On 31/01/2024 21:25, bart wrote:
    On 31/01/2024 16:41, vallor wrote:
    On Tue, 30 Jan 2024 19:22:00 +0000, Richard Harnden
<richard.nospam@gmail.invalid> wrote in <upbi8o$14443$1@dont-email.me>:
    On 30/01/2024 16:50, Malcolm McLean wrote:

    But I'm wondering about one file which contains all the sources
    for the
    program. Like an IDE project file but lighter weight.


    In other words: a Makefile

    Agreed; it's a solution looking for a problem.

    Why do you think languages come with modules? That allows them to
    discover their own modules, rather than rely on external apps where
    the details are buried under appalling syntax and mixed up with a
    hundred other matters.


No, that is not at all the purpose of modules in programming.  Note
that there is no specific meaning of "module", and different languages
use different terms for similar concepts.  There are many features
that a language's "module" system might have - some have all, some
have few:

    1. It lets you split the program into separate parts - generally
    separate files.  This is essential for scalability for large programs.

    2. You can compile modules independently to allow partial builds.

    3. Modules generally have some way to specify exported symbols and
    facilities that can be used by other modules.

    4. Modules can "import" other modules, gaining access to those
    modules' exported symbols.

    5. Modules provide encapsulation of data, code and namespaces.

    6. Modules can be used in a hierarchical system, building big modules
    from smaller ones to support larger libraries with many files.

    7. Modules provide a higher level concept that can be used by language
    tools to see how the whole program fits together or interact with
    package managers and librarian tools.


    C provides 1, 2, 3, and 4 if you use a "file.c/file.h" organisation.
    It provides a limited form of 5 (everything that is not exported is
    "static"), but scaling to larger systems is dependent on identifier
    prefixes.
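
For instance, a minimal sketch of that organisation (file and
identifier names invented for illustration):

/* counter.h - the exported interface; other modules "import" this
   module simply by including it */
#ifndef COUNTER_H
#define COUNTER_H
void counter_reset(void);
int counter_next(void);
#endif

/* counter.c - compiled independently; everything marked static stays
   private to this translation unit, and the counter_ prefix stands in
   for a namespace */
#include "counter.h"
static int count;
static int step(void) { return 1; }
void counter_reset(void) { count = 0; }
int counter_next(void) { return count += step(); }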

    You seem to be thinking purely about item 7 above.  This is, I think,
    common in interpreted languages (where modules have to be found at
    run-time, where the user is there but the developer is not).
    I've been implementing languages with language-supported modules for
    about 12 years.

    They generally provide 1, 2, 4, 5, and 7 from your list, and partial
    support of 6.

    Sure. Programming languages need that if they are to scale at all.


    They don't provide 2 (compiling individual modules) because the aim is a
very fast, whole-program compiler.

    Okay.


    But what you are talking about to add to C is item 7, nothing more.
    That is not adding "modules" to C. Your suggestion might be useful to
    some people for some projects, but that doesn't make it "modules" in any
    real sense.



While for 6, there is only a hierarchy between groups of modules, each
forming an independent sub-program or library. I tried a strict full
per-module hierarchy early on, mixed up with independent compilation;
it worked poorly.

    The two levels allow you to assemble one binary out of groups of modules
    that each represent an independent component or library.

    Compiled
    languages don't usually have such a thing, because developers (as
    distinct from users) have build tools available that do a better job.

    Given a module scheme, the tool needed to build a whole program should
    not need to be told about the names and location of every constituent
    module; it should be able to determine that from what's already in the
    source code, given only a start point.

    Why?

    You can't just take some idea that you like, and that is suitable for
    the projects you use, and assume it applies to everyone else.

    I have no problem telling my build system, or compilers, where the files
    are. In fact, I'd have a lot of problems if I couldn't do that. It is
    not normal development practice to have the source files in the same
    directory that you use for building the object code and binaries.


    Even with independent compilation, you might be able to use that info to determine dependencies, but you will need that module hierarchy if you
    want to compile individual modules.

    I already have tools for determining dependencies. What can your
    methods do that mine can't?

    (And don't bother saying that you can do it without extra tools -
    everyone who wants "make" and "gcc" has them on hand. And those who
    want an IDE that figures out dependencies for them have a dozen free
    options there too. These are all standard tools available to everyone.)


    My view is that that tool only needs to be the compiler (a program that
    does the 'full stack' from source files to executable binary) working
    purely from the source code.

Yours is to have compilers, assemblers, linkers and make programs,
working with auxiliary data in makefiles, which themselves have to be
generated by extra tools or special options, or built by hand.


    You want a limited little built-in tool. I want a toolbox that I can
    use in all sorts of ways - for things you have never imagined. I can
    see how your personal tools can be useful for you, as a single developer
    on your own - if you want something else you can add it to those tools.
    For others, they are useless.

    Perhaps I would find your tools worked for a "Hello, world" project.
    Maybe they were still okay as it got slightly bigger. Then I'd have
    something that they could not handle, and I'd reach for make. What
    would be the point of using "make" to automate - for example -
    post-processing of a binary to add a CRC check, but using your tools to
    handle the build? It's much easier just to use "make" for the whole thing.

    You are offering me a fish. I am offering to teach you to fish,
    including where to go to catch different kinds of fish. This is really
    a no-brainer choice.



In other words, you can't retrofit a real module scheme to C, not one
that will work with existing code.


    We know that. Otherwise it would have happened, long ago.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to bart on Thu Feb 1 16:43:58 2024
    On 01/02/2024 15:29, bart wrote:
    On 31/01/2024 00:46, Tim Rentsch wrote:


    Looking over one of my current projects (modest in size,
    a few thousand lines of C source, plus some auxiliary
    files adding perhaps another thousand or two),

    So, will a specific build of such a project produce a single EXE/DLL//SO file? (The // includes the typical file extension of Linux executables.)

    This is all I want for a build.

In my current project, when I run "make" it builds 5 different
    executables, each in three formats with different post-processing by
    other programs (not the compiler or linker). Most of my projects have
    fewer, but four or five outputs is not at all uncommon. It is also
    common that a few of the source files are generated by other programs as
    part of the build. So if I have an embedded web server in the program,
    I can change an html file and "make" will result in that being in the
    encrypted download image ready for deployment.

    Your tools can't do what I need for a lot of my work. Maybe they could
    be useable for some projects or programs. But why would I bother with
    them when I already need more serious and flexible tools for other
    things, already have these better tools, and those better tools work
    simply and easily for the simple and easy projects that your ones could
    handle?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Harnden@21:1/5 to bart on Thu Feb 1 16:09:53 2024
    XPost: comp.unix.programmer

    On 31/01/2024 20:25, bart wrote:
    BTW that 'make' only works on my machine because it happens to be part
    of mingw; none of my other C compilers have make.

    And as written, it only works for 'cc' which comes with 'gcc'

    Doesn't dos/windows have nmake and cl?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to David Brown on Thu Feb 1 16:20:24 2024
    XPost: comp.unix.programmer

    On 2024-02-01, David Brown <david.brown@hesbynett.no> wrote:
    5. Modules provide encapsulation of data, code and namespaces.

Case study: C++ originally had only classes, which provide this. Then
it acquired the namespace construct, which also provides it. In spite
of that, someone decided it needs modules also.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Richard Harnden on Thu Feb 1 17:32:01 2024
    XPost: comp.unix.programmer

    On 01/02/2024 16:09, Richard Harnden wrote:
    On 31/01/2024 20:25, bart wrote:
    BTW that 'make' only works on my machine because it happens to be part
    of mingw; none of my other C compilers have make.

    And as written, it only works for 'cc' which comes with 'gcc'

    Doesn't dos/windows have nmake and cl?

    No.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to David Brown on Thu Feb 1 18:34:08 2024
    XPost: comp.unix.programmer

    On 01/02/2024 15:11, David Brown wrote:

    1. It lets you split the program into separate parts - generally
    separate files.  This is essential for scalability for large programs.

    2. You can compile modules independently to allow partial builds.

    3. Modules generally have some way to specify exported symbols and
    facilities that can be used by other modules.

    4. Modules can "import" other modules, gaining access to those
    modules' exported symbols.

    5. Modules provide encapsulation of data, code and namespaces.

    6. Modules can be used in a hierarchical system, building big modules
    from smaller ones to support larger libraries with many files.

    7. Modules provide a higher level concept that can be used by
    language tools to see how the whole program fits together or interact
    with package managers and librarian tools.


    C provides 1, 2, 3, and 4 if you use a "file.c/file.h" organisation.
    It provides a limited form of 5 (everything that is not exported is
    "static"), but scaling to larger systems is dependent on identifier
    prefixes.

    You seem to be thinking purely about item 7 above.  This is, I think,
    common in interpreted languages (where modules have to be found at
    run-time, where the user is there but the developer is not).
    I've been implementing languages with language-supported modules for
    about 12 years.

    They generally provide 1, 2, 4, 5, and 7 from your list, and partial
    support of 6.

    Sure.  Programming languages need that if they are to scale at all.


    They don't provide 2 (compiling individual modules) because the aim is
a very fast, whole-program compiler.

    Okay.


    But what you are talking about to add to C is item 7, nothing more. That
    is not adding "modules" to C.  Your suggestion might be useful to some people for some projects, but that doesn't make it "modules" in any real sense.

Item 7 is my biggest stumbling block in building open source C projects.

While the developer (say, you) knows the necessary info and can somehow
import it into the build system, my job is trying to get it out.

    I can't use the intended build system because for one reason or another
    it doesn't work, or requires complex dependencies (MSYS, CMake, MSTOOLS, .configure), or I want to run mcc on it.

    That info could trivially be added to the C source code. Nobody actually
    needs to use my #pragma scheme; it could simply be a block comment on
    one of the modules.

    I'm sure with all your complicated tools, they could surely dump some
    text that looks like:

// List of source files to build the binary cipher.exe:
    // cipher.c
    // hmac.c
    // sha2.c

    and prepend it to one of the files. Even a README will do.

    That wouldn't hurt would it?

    Given a module scheme, the tool needed to build a whole program should
    not need to be told about the names and location of every constituent
    module; it should be able to determine that from what's already in the
    source code, given only a start point.

    Why?

    You can't just take some idea that you like, and that is suitable for
    the projects you use, and assume it applies to everyone else.

    I have no problem telling my build system, or compilers, where the files are.  In fact, I'd have a lot of problems if I couldn't do that.  It is
    not normal development practice to have the source files in the same directory that you use for building the object code and binaries.


    Even with independent compilation, you might be able to use that info
    to determine dependencies, but you will need that module hierarchy if
    you want to compile individual modules.

    I already have tools for determining dependencies.  What can your
    methods do that mine can't?

    (And don't bother saying that you can do it without extra tools -
    everyone who wants "make" and "gcc" has them on hand.  And those who
    want an IDE that figures out dependencies for them have a dozen free
    options there too.  These are all standard tools available to everyone.)

So, if C were to acquire modules, so that a C compiler could determine
all that for itself (maybe even work out for itself which modules need
recompiling), would you just ignore that feature and use the same
auxiliary methods you have always done?

    You don't see that the language taking over task (1) of the things that makefiles do, and possibly (2) (of the list I posted; repeated below),
    can streamline makefiles to make them shorter, simpler, easier to write
    and to read, and with fewer opportunities to get stuff wrong?

    That was a rhetorical question. Obviously not.


    Perhaps I would find your tools worked for a "Hello, world" project.
    Maybe they were still okay as it got slightly bigger.  Then I'd have something that they could not handle, and I'd reach for make.  What
    would be the point of using "make" to automate - for example - post-processing of a binary to add a CRC check, but using your tools to handle the build?  It's much easier just to use "make" for the whole thing.


Because building one binary is a process that should be the job of the
compiler, not of some random external tool that knows nothing of the
language or the compiler.

    Maybe you think makefiles should individually list all the 1000s of
    functions of a project too?

    You are offering me a fish.  I am offering to teach you to fish,
    including where to go to catch different kinds of fish.  This is really
    a no-brainer choice.

    That analogy makes no sense.

    Let me try and explain what I do: I write whole-program compilers. That
    means that, each time you do a new build, it will reprocess each file
    from source. They use the language's module scheme to know which files
    to process.

    I tend to build C programs by recompiling all modules too. So I want to introduce the same convenience I have elsewhere.

    It works for me, and I'm sure could work for others if they didn't have makefiles forced down their throats and hardwired into their brains.

    ----------------------------
    (Repost)

    I've already covered this in many posts on the subject. But 'make' deals
    with three kinds of requirements:

    (1) Specifying what the modules are to be compiled and combined into one
    binary file

    (2) Specifying dependences between all files to allow rebuilding of that
    one file with minimal recompilation

(3) Everything else needed in a complex project: running processes to
generate files like config.h, creating multiple binaries, specifying
dependencies between binaries, installation etc

    My proposal tackles only (1), which is something that many languages now
    have the means to deal with themselves. I already stated that (2) is not covered.

    But you may still need makefiles to deal with (3).

    If your main requirement /is/ only (1), then my idea is to move the
    necessary info into the source code, and tackle it with the C compiler.
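
For comparison, the whole of (1) for the cipher demo, expressed as a
makefile, is just this (a sketch; note that make requires the command
line to be indented with a tab):

cipher.exe: cipher.c hmac.c sha2.c
        cc -o cipher.exe cipher.c hmac.c sha2.c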

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to bart on Thu Feb 1 19:25:12 2024
    XPost: comp.unix.programmer

    bart <bc@freeuk.com> writes:
    On 01/02/2024 16:09, Richard Harnden wrote:
    On 31/01/2024 20:25, bart wrote:
    BTW that 'make' only works on my machine because it happens to be part
    of mingw; none of my other C compilers have make.

    And as written, it only works for 'cc' which comes with 'gcc'

    Doesn't dos/windows have nmake and cl?

    No.

    You sure about that? They sure used to have them
    as an add-on. IIRC, they're still part of visual studio.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Scott Lurndal on Thu Feb 1 19:51:53 2024
    XPost: comp.unix.programmer

    On 01/02/2024 19:25, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 01/02/2024 16:09, Richard Harnden wrote:
    On 31/01/2024 20:25, bart wrote:
BTW that 'make' only works on my machine because it happens to be part
of mingw; none of my other C compilers have make.

    And as written, it only works for 'cc' which comes with 'gcc'

    Doesn't dos/windows have nmake and cl?

    No.

    You sure about that? They sure used to have them
    as an add-on. IIRC, they're still part of visual studio.

    Visual Studio is a 10,000MB monster. It might well have it around, but
    it's so complex, it's been years since I've even seen discrete cl.exe
    and link.exe programs, despite scouring massive, 11-deep directory
    structures.

    Meanwhile my everyday compilers are 0.4MB for my language and 0.3MB for C.

    I like to keep things simple. Everybody else likes to keep things
    complicated, and the more the better.

Anyway, acquiring VS just to build one small program would be like
using a giant sledgehammer, 1000 times normal size, to crack a tiny nut.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to bart on Thu Feb 1 22:36:47 2024
    XPost: comp.unix.programmer

    On Thu, 1 Feb 2024 19:51:53 +0000
    bart <bc@freeuk.com> wrote:

    On 01/02/2024 19:25, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 01/02/2024 16:09, Richard Harnden wrote:
    On 31/01/2024 20:25, bart wrote:
    BTW that 'make' only works on my machine because it happens to
    be part of mingw; none of my other C compilers have make.

    And as written, it only works for 'cc' which comes with 'gcc'

    Doesn't dos/windows have nmake and cl?

    No.

    You sure about that? They sure used to have them
    as an add-on. IIRC, they're still part of visual studio.

    Visual Studio is a 10,000MB monster. It might well have it around,
    but it's so complex, it's been years since I've even seen discrete
    cl.exe and link.exe programs, despite scouring massive, 11-deep
    directory structures.


If you only download the command-line build tools then it's somewhat
less huge.
The 2022 version is 3,152,365,436 bytes.
I don't know the size of the installation package. It looks like on my
home PC I used the online installer.

    Meanwhile my everyday compilers are 0.4MB for my language and 0.3MB
    for C.

    I like to keep things simple. Everybody else likes to keep things complicated, and the more the better.

Anyway, acquiring VS just to build one small program would be like
using a giant sledgehammer, 1000 times normal size, to crack a tiny
nut.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Michael S on Thu Feb 1 20:55:53 2024
    XPost: comp.unix.programmer

    Michael S <already5chosen@yahoo.com> writes:
    On Thu, 1 Feb 2024 18:34:08 +0000
    bart <bc@freeuk.com> wrote:

    On 01/02/2024 15:11, David Brown wrote:

    But you may still need makefiles to deal with (3).

If your main requirement /is/ only (1), then my idea is to move the
necessary info into the source code, and tackle it with the C
compiler.


Your proposal and the needs of David Brown are not necessarily
contradictory.

    Although David (and I) aren't particularly interested in
    changing something that already works quite well.

All you need to do to satisfy him is to add to your compiler an option
for exporting dependencies in make-compatible format, i.e. something
very similar to gcc's -MD option.

    I suspect he may be much more difficult to satisfy on this topic.

    Nobody is going to switch production software to a one-off
    unsupported compiler.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to bart on Thu Feb 1 22:23:28 2024
    XPost: comp.unix.programmer

    On Thu, 1 Feb 2024 18:34:08 +0000
    bart <bc@freeuk.com> wrote:

    On 01/02/2024 15:11, David Brown wrote:

    1. It lets you split the program into separate parts - generally
    separate files. This is essential for scalability for large
    programs.

    2. You can compile modules independently to allow partial builds.

    3. Modules generally have some way to specify exported symbols
    and facilities that can be used by other modules.

    4. Modules can "import" other modules, gaining access to those
    modules' exported symbols.

    5. Modules provide encapsulation of data, code and namespaces.

    6. Modules can be used in a hierarchical system, building big
    modules from smaller ones to support larger libraries with many
    files.

    7. Modules provide a higher level concept that can be used by
    language tools to see how the whole program fits together or
    interact with package managers and librarian tools.


    C provides 1, 2, 3, and 4 if you use a "file.c/file.h"
    organisation. It provides a limited form of 5 (everything that is
    not exported is "static"), but scaling to larger systems is
    dependent on identifier prefixes.

    You seem to be thinking purely about item 7 above. This is, I
    think, common in interpreted languages (where modules have to be
    found at run-time, where the user is there but the developer is
    not).
    I've been implementing languages with language-supported modules
    for about 12 years.

    They generally provide 1, 2, 4, 5, and 7 from your list, and
    partial support of 6.

    Sure. Programming languages need that if they are to scale at all.


    They don't provide 2 (compiling individual modules) because the
aim is a very fast, whole-program compiler.

    Okay.


    But what you are talking about to add to C is item 7, nothing more.
    That is not adding "modules" to C. Your suggestion might be useful
    to some people for some projects, but that doesn't make it
    "modules" in any real sense.

Item 7 is my biggest stumbling block in building open source C projects.

While the developer (say, you) knows the necessary info and can
somehow import it into the build system, my job is trying to get it out.

    I can't use the intended build system because for one reason or
    another it doesn't work, or requires complex dependencies (MSYS,
    CMake, MSTOOLS, .configure), or I want to run mcc on it.

    That info could trivially be added to the C source code. Nobody
    actually needs to use my #pragma scheme; it could simply be a block
    comment on one of the modules.

    I'm sure with all your complicated tools, they could surely dump some
    text that looks like:

// List of source files to build the binary cipher.exe:
    // cipher.c
    // hmac.c
    // sha2.c

    and prepend it to one of the files. Even a README will do.

    That wouldn't hurt would it?

    Given a module scheme, the tool needed to build a whole program
    should not need to be told about the names and location of every
    constituent module; it should be able to determine that from
    what's already in the source code, given only a start point.

    Why?

    You can't just take some idea that you like, and that is suitable
    for the projects you use, and assume it applies to everyone else.

    I have no problem telling my build system, or compilers, where the
    files are. In fact, I'd have a lot of problems if I couldn't do
    that. It is not normal development practice to have the source
    files in the same directory that you use for building the object
    code and binaries.

    Even with independent compilation, you might be able to use that
    info to determine dependencies, but you will need that module
    hierarchy if you want to compile individual modules.

    I already have tools for determining dependencies. What can your
    methods do that mine can't?

    (And don't bother saying that you can do it without extra tools -
    everyone who wants "make" and "gcc" has them on hand. And those
    who want an IDE that figures out dependencies for them have a dozen
    free options there too. These are all standard tools available to everyone.)

So, if C were to acquire modules, so that a C compiler could
determine all that for itself (maybe even work out for itself which
modules need recompiling), would you just ignore that feature and use
the same auxiliary methods you have always done?

    You don't see that the language taking over task (1) of the things
    that makefiles do, and possibly (2) (of the list I posted; repeated
    below), can streamline makefiles to make them shorter, simpler,
    easier to write and to read, and with fewer opportunities to get
    stuff wrong?

    That was a rhetorical question. Obviously not.


    Perhaps I would find your tools worked for a "Hello, world"
    project. Maybe they were still okay as it got slightly bigger.
    Then I'd have something that they could not handle, and I'd reach
    for make. What would be the point of using "make" to automate -
    for example - post-processing of a binary to add a CRC check, but
    using your tools to handle the build? It's much easier just to use
    "make" for the whole thing.


Because building one binary is a process that should be the job of the
compiler, not of some random external tool that knows nothing of the
language or the compiler.

    Maybe you think makefiles should individually list all the 1000s of functions of a project too?

    You are offering me a fish. I am offering to teach you to fish,
    including where to go to catch different kinds of fish. This is
    really a no-brainer choice.

    That analogy makes no sense.

    Let me try and explain what I do: I write whole-program compilers.
    That means that, each time you do a new build, it will reprocess each
    file from source. They use the language's module scheme to know which
    files to process.

    I tend to build C programs by recompiling all modules too. So I want
    to introduce the same convenience I have elsewhere.

    It works for me, and I'm sure could work for others if they didn't
    have makefiles forced down their throats and hardwired into their
    brains.

    ----------------------------
    (Repost)

    I've already covered this in many posts on the subject. But 'make'
    deals with three kinds of requirements:

    (1) Specifying what the modules are to be compiled and combined into
    one binary file

    (2) Specifying dependences between all files to allow rebuilding of
    that one file with minimal recompilation

(3) Everything else needed in a complex project: running processes to
generate files like config.h, creating multiple binaries,
specifying dependencies between binaries, installation etc

    My proposal tackles only (1), which is something that many languages
    now have the means to deal with themselves. I already stated that (2)
    is not covered.

    But you may still need makefiles to deal with (3).

    If your main requirement /is/ only (1), then my idea is to move the necessary info into the source code, and tackle it with the C
    compiler.



Your proposal and the needs of David Brown are not necessarily
contradictory.
All you need to do to satisfy him is to add to your compiler an option
for exporting dependencies in make-compatible format, i.e. something
very similar to gcc's -MD option.

    Then David could write in his makefile:

out/foo.elf : main_foo.c
        mcc -MD $< -o $@

-include out/foo.d

And then proceed with automating his pre- and post-processing needs.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to David Brown on Thu Feb 1 21:34:31 2024
    XPost: comp.unix.programmer

    On Thu, 1 Feb 2024 09:39:15 +0100, David Brown wrote:

    2. You can compile modules independently to allow partial builds.

In our Comp Sci classes we were careful to draw a distinction between
“separate” and “independent” compilation. The latter is exemplified by
(old-style) Fortran and C, where the same name may be declared in
multiple units, and the linker will happily tie them together, but
without any actual checking that the declarations match.

    “Separate” compilation, on the other hand, means that there is some consistency checking done between the declarations, and the program will
    fail to build if there are mismatches. Ada has this. And it looks like
    Fortran has acquired it, too, since the Fortran 90 spec.
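
A minimal C sketch of the hazard (file names invented): the linker
resolves the name without ever checking that the two declarations
agree.

/* a.c */
int value = 42;

/* b.c - same name, different type */
#include <stdio.h>
extern double value;

int main(void)
{
    /* "cc a.c b.c" links cleanly on typical toolchains, but the
       behaviour is undefined: nothing compared the declarations */
    printf("%f\n", value);
    return 0;
}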

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Michael S on Thu Feb 1 22:38:13 2024
    XPost: comp.unix.programmer

    On 01/02/2024 21:23, Michael S wrote:
    On Thu, 1 Feb 2024 18:34:08 +0000

    I've already covered this in many posts on the subject. But 'make'
    deals with three kinds of requirements:

    (1) Specifying what the modules are to be compiled and combined into
    one binary file

    (2) Specifying dependences between all files to allow rebuilding of
    that one file with minimal recompilation

(3) Everything else needed in a complex project: running processes to
generate files like config.h, creating multiple binaries,
specifying dependencies between binaries, installation etc

    My proposal tackles only (1), which is something that many languages
    now have the means to deal with themselves. I already stated that (2)
    is not covered.

    But you may still need makefiles to deal with (3).

    If your main requirement /is/ only (1), then my idea is to move the
    necessary info into the source code, and tackle it with the C
    compiler.



Your proposal and the needs of David Brown are not necessarily
contradictory.
All you need to do to satisfy him is to add to your compiler an option
for exporting dependencies in make-compatible format, i.e. something
very similar to gcc's -MD option.

    Then David could write in his makefile:

out/foo.elf : main_foo.c
        mcc -MD $< -o $@

-include out/foo.d

And then proceed with automating his pre- and post-processing needs.


    But then I'd still be using "make", and Bart would not be happy.

    And "gcc -MD" does not need any extra #pragmas, so presumably neither
    would an implementation of that feature in bcc (or mcc or whatever). So
    Bart's new system would disappear entirely.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Keith Thompson on Thu Feb 1 21:39:40 2024
    XPost: comp.unix.programmer

    On Thu, 01 Feb 2024 11:49:36 -0800, Keith Thompson wrote:

    David Brown <david.brown@hesbynett.no> writes:

    I'd rather "make -j" (without a number) defaulted to using the number
    of cpu cores, as that is a reasonable guess for most compilations.

    Agreed, but there might not be a sufficiently portable way to determine
    that number.

    nproc(1) is part of the GNU Core Utilities <manpages.debian.org/1/nproc.1.html>.
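
So, where coreutils is available, one can write, for example

    make -j"$(nproc)"

to match the job count to the number of available cores.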

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to bart on Thu Feb 1 22:34:36 2024
    XPost: comp.unix.programmer

    On 01/02/2024 19:34, bart wrote:
    On 01/02/2024 15:11, David Brown wrote:

    1. It lets you split the program into separate parts - generally
separate files.  This is essential for scalability for large programs.
    2. You can compile modules independently to allow partial builds.

    3. Modules generally have some way to specify exported symbols and
    facilities that can be used by other modules.

    4. Modules can "import" other modules, gaining access to those
    modules' exported symbols.

    5. Modules provide encapsulation of data, code and namespaces.

    6. Modules can be used in a hierarchical system, building big
modules from smaller ones to support larger libraries with many files.
    7. Modules provide a higher level concept that can be used by
    language tools to see how the whole program fits together or
    interact with package managers and librarian tools.


    C provides 1, 2, 3, and 4 if you use a "file.c/file.h" organisation.
    It provides a limited form of 5 (everything that is not exported is
    "static"), but scaling to larger systems is dependent on identifier
    prefixes.

    You seem to be thinking purely about item 7 above.  This is, I
    think, common in interpreted languages (where modules have to be
    found at run-time, where the user is there but the developer is not).
    I've been implementing languages with language-supported modules for
    about 12 years.

    They generally provide 1, 2, 4, 5, and 7 from your list, and partial
    support of 6.

    Sure.  Programming languages need that if they are to scale at all.


    They don't provide 2 (compiling individual modules) because the aim
is a very fast, whole-program compiler.

    Okay.


    But what you are talking about to add to C is item 7, nothing more.
    That is not adding "modules" to C.  Your suggestion might be useful to
    some people for some projects, but that doesn't make it "modules" in
    any real sense.

Item 7 is my biggest stumbling block in building open source C projects.

While the developer (say, you) knows the necessary info and can somehow
import it into the build system, my job is trying to get it out.

    I can't use the intended build system because for one reason or another
    it doesn't work, or requires complex dependencies (MSYS, CMake, MSTOOLS, .configure), or I want to run mcc on it.

    That info could trivially be added to the C source code. Nobody actually needs to use my #pragma scheme; it could simply be a block comment on
    one of the modules.

    I'm sure with all your complicated tools, they could surely dump some
    text that looks like:

   // List of source files to build the binary cipher.exe:
       // cipher.c
       // hmac.c
       // sha2.c

    and prepend it to one of the files.  Even a README will do.

    That wouldn't hurt would it?

    Complain to the people that made that open source software, not me. But
    don't be surprised if they tell you "There's a makefile. It works for
    everyone else." Or maybe they will say they can't cater for every
    little problem with everyone's unusual computer setup. Maybe they will
    try to be helpful, maybe they will be rude and arrogant. Maybe they
    will point out that their makefile /is/ just a list of the files needed,
along with the compiler options.  Usually projects of any size /do/ have
READMEs and build instructions - but some won't.

    No matter what, it is not the fault of anyone here, it is not the fault
    of "make" or Linux or C, and there is nothing that any of us can do to
    help you. (And $DEITY knows, we have tried.)


    I already have tools for determining dependencies.  What can your
    methods do that mine can't?

    (And don't bother saying that you can do it without extra tools -
    everyone who wants "make" and "gcc" has them on hand.  And those who
    want an IDE that figures out dependencies for them have a dozen free
    options there too.  These are all standard tools available to everyone.)

So, if C were to acquire modules, so that a C compiler could determine
all that for itself (maybe even work out for itself which modules need
recompiling), would you just ignore that feature and use the same
auxiliary methods you have always done?

    That's not unlikely. Why would I change? You still haven't given any
    reasons why your tools would be /better/. Even if they could do all I
    needed to do for a particular project, "just as good" is not "better",
    and does not encourage change.

    I would still need "make" for everything else. I would, however, be
    quite happy if there were some standard way to get the list of include
    files needed by a C file, rather than using gcc-specific flags.


    You don't see that the language taking over task (1) of the things that makefiles do, and possibly (2) (of the list I posted; repeated below),
    can streamline makefiles to make them shorter, simpler, easier to write
    and to read, and with fewer opportunities to get stuff wrong?

    That was a rhetorical question. Obviously not.

    I've nothing against shorter or simpler makefiles. But as far as I can
    see, you are just moving the same information from a makefile into the C
    files.

    Indeed, you are duplicating things - now your C files have to have
    "#pragma module this, #pragma module that" in addition to having
    "#include this.h, #include that.h". With my makefiles, all the "this"
    and "that" is found automatically - writing the includes in the C code
    is sufficient.
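
One common way of doing that automatic discovery with gcc and make is
something like this sketch (not necessarily my exact setup; OBJS is
assumed to hold the project's object files):

# write a .d dependency file beside each object as a side effect of
# compilation; -MP adds phony targets so deleted headers don't break
# the build
%.o: %.c
        $(CC) -MMD -MP -c $< -o $@

-include $(OBJS:.o=.d)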



    Perhaps I would find your tools worked for a "Hello, world" project.
    Maybe they were still okay as it got slightly bigger.  Then I'd have
    something that they could not handle, and I'd reach for make.  What
    would be the point of using "make" to automate - for example -
    post-processing of a binary to add a CRC check, but using your tools
    to handle the build?  It's much easier just to use "make" for the
    whole thing.


Because building one binary is a process that should be the job of the
compiler, not of some random external tool that knows nothing of the
language or the compiler.

    No, it is the job of the linker. Compiling is the job of the compiler. Controlling the build is the job of the build system. I don't see
    monolithic applications as an advantage.


    Maybe you think makefiles should individually list all the 1000s of
    functions of a project too?

    You are offering me a fish.  I am offering to teach you to fish,
    including where to go to catch different kinds of fish.  This is
    really a no-brainer choice.

    That analogy makes no sense.

    Let me try and explain what I do: I write whole-program compilers. That
    means that, each time you do a new build, it will reprocess each file
    from source. They use the language's module scheme to know which files
    to process.

Surely most sensibly organised projects could then be built with:

    bcc *.c -o prog.exe

    I mean, that's what I can do with gcc if I had something that doesn't
    need other flags (which is utterly impractical for my work).

Or if I had lots of programs, each with its own .c file:

for f in *.c; do gcc "$f" -o "${f%.c}"; done


    It works for me, and I'm sure could work for others if they didn't have makefiles forced down their throats and hardwired into their brains.

    /Nobody/ has makefiles forced on them. People use "make" because it is convenient, and it works. If something better comes along, and it is
    better enough to overcome the familiarity momentum, people will use that.

    I do a round of checking the state of the art of build tools on a
    regular basis - perhaps every year or so. I look at what's popular and
    what's new, to see if there's anything that would work for me and be a
    step up from what I have. So far, I've not found anything that comes
    very close to "make" for my needs. There's some tools that are pretty
    good in many ways, but none that I can see as being a better choice for
    me than "make". I am, however, considering CMake (which works at a
    higher level, and outputs makefiles, ninja files or other project
    files). It appears to have some disadvantages compared to my makefiles,
such as needing to be run as an extra step when files are added to or
removed from a project or dependencies are changed, but that doesn't
happen too often, and its integration with other tools and projects
might make it an overall win.  I'll need some time to investigate and
    study it.
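
For a sense of scale, the complete CMake description of something like
the cipher demo would be about this long (a sketch):

cmake_minimum_required(VERSION 3.16)
project(cipher C)
add_executable(cipher cipher.c hmac.c sha2.c)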

    So I will happily move from "make" when I find something better - enough
    better to make it worth the effort. I'll happily move from gcc, or
    Linux, if I find something enough better to make it worth changing. I regularly look at alternatives and consider them - clang is the key
    challenger to gcc for my purposes.

    But I have no interest in changing to something vastly more limited and
    which adds nothing at all.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Richard Harnden on Thu Feb 1 23:09:48 2024
    XPost: comp.unix.programmer

    On 01/02/2024 17:09, Richard Harnden wrote:
    On 31/01/2024 20:25, bart wrote:
    BTW that 'make' only works on my machine because it happens to be part
    of mingw; none of my other C compilers have make.

    And as written, it only works for 'cc' which comes with 'gcc'

    Doesn't dos/windows have nmake and cl?

    Those are part of MSVC, which runs on Windows but does not come with it.
    "nmake" is MS's version of "make", and has been shipped with most MS development tools for many decades.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Keith Thompson on Thu Feb 1 23:14:05 2024
    XPost: comp.unix.programmer

    On 01/02/2024 20:49, Keith Thompson wrote:
    David Brown <david.brown@hesbynett.no> writes:
    [...]
    I'd rather "make -j" (without a number) defaulted to using the number
    of cpu cores, as that is a reasonable guess for most compilations.

    Agreed, but there might not be a sufficiently portable way to determine
    that number.


    gcc manages to figure it out for parallel tasks, such as LTO linking. I
    think it would be reasonable enough to have it use the number of cores
    when it was able to figure it out, and a default (say, 4) when it could not.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to David Brown on Thu Feb 1 22:29:13 2024
    XPost: comp.unix.programmer

    On 01/02/2024 21:34, David Brown wrote:
    On 01/02/2024 19:34, bart wrote:

    You don't see that the language taking over task (1) of the things
    that makefiles do, and possibly (2) (of the list I posted; repeated
    below), can streamline makefiles to make them shorter, simpler, easier
    to write and to read, and with fewer opportunities to get stuff wrong?

    That was a rhetorical question. Obviously not.

    I've nothing against shorter or simpler makefiles.  But as far as I can
    see, you are just moving the same information from a makefile into the C files.

    Indeed, you are duplicating things - now your C files have to have
    "#pragma module this, #pragma module that" in addition to having
    "#include this.h, #include that.h".  With my makefiles, all the "this"
    and "that" is found automatically - writing the includes in the C code
    is sufficient.

    I don't think so. Seeing:

    #include "file.h"

    doesn't necessarily mean there is a matching "file.c". It might not
    exist, or the header might be for some external library, or maybe it
    does exist but in a different location.

Or maybe some code may use a file "fred.c", which needs to be submitted
to the compiler, but for which there is either no header at all, or a
header file with a different name.

    As I said, C's uses of .h and .c files are chaotic.

    Did you have in mind using gcc's -MM option? For my 'cipher.c' demo,
    that only gives a set of header names. Missing are hmac.c and sha2.c.

    If I try it on lua.c, it gives me only 5 header files; the project
    comprises 33 .c files and 27 .h files.
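
For illustration, this is the shape of the -MM output for the cipher
demo (the header names here are my guess at the project layout):

c:\cx>gcc -MM cipher.c
cipher.o: cipher.c hmac.h sha2.h

Only headers; nothing tells you that hmac.c and sha2.c must also be
compiled.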



    Perhaps I would find your tools worked for a "Hello, world" project.
    Maybe they were still okay as it got slightly bigger.  Then I'd have
    something that they could not handle, and I'd reach for make.  What
    would be the point of using "make" to automate - for example -
    post-processing of a binary to add a CRC check, but using your tools
    to handle the build?  It's much easier just to use "make" for the
    whole thing.


Because building one binary is a process that should be the job of the
compiler, not of some random external tool that knows nothing of the
language or the compiler.

    No, it is the job of the linker.

That is where you're still stuck in the past.

    I first got rid of a formal 'linker' about 40 years ago. I got rid of
    the notion of combining independently compiled modules into an
    executable a decade ago.

Linking would only come up for me if I wanted to statically combine the
outputs of several languages. Since I can't process object files, I'd
need to generate an object file (in my case, one representing ALL my
modules) and hand it to a traditional linker. That would be someone
else's job.


      Compiling is the job of the compiler.
    Controlling the build is the job of the build system.  I don't see monolithic applications as an advantage.

    I do. You type:

    cc prog

without knowing or caring whether it contains just that one module, or
99 more.

    In any case, your linker will generate a monolithic binary whether you
    like it or not.

    But I suspect you don't understand what a 'whole-program compiler' does:

    * It means that for each binary, all sources are recompiled at the same
    time to create it

    * It doesn't mean that an application can only comprise one binary

    * It moves the compilation unit granularity from a module to a single
    EXE or DLL file

* Interfaces (in the case of a lower-level language) are moved from
inter-module to inter-program. The boundaries are between one program
or library and another, not between modules.

    A language which claims to have a module system, but still compiles a
    module at a time, will probably still have discrete inter-module
    interfaces, although they may be handled automatically.



    Maybe you think makefiles should individually list all the 1000s of
    functions of a project too?

    You are offering me a fish.  I am offering to teach you to fish,
    including where to go to catch different kinds of fish.  This is
    really a no-brainer choice.

    That analogy makes no sense.

    Let me try and explain what I do: I write whole-program compilers.
    That means that, each time you do a new build, it will reprocess each
    file from source. They use the language's module scheme to know which
    files to process.

    Surely most sensibly organised projects could then be built with :

        bcc *.c -o prog.exe

    I mean, that's what I can do with gcc if I had something that doesn't
    need other flags (which is utterly impractical for my work).

Yes, that's one technique that can be used. But few projects are like
that one. For one or two, you can try *.c and it will work.

    Malcolm's resource compiler is like that, but it still benefits from a
    file like this:

    #pragma module "*.c"
    #pragma module "freetype/*.c"
    #pragma module "samplerate/*.c"

    here called bbx.c. I can build it like this:

    c:\bbx\src>mcc bbx
    Compiling bbx.c to bbx.exe


    /Nobody/ has makefiles forced on them.  People use "make" because it is convenient, and it works.

BUT IT DOESN'T. It fails a lot of the time on Windows, and the
makefiles are too complicated to figure out why. In a recent thread I
made about trying to build piet.c, it failed on extra programs that
weren't needed (that was on Linux; it didn't work at all on Windows).

    This is a program which actually only needed:

    cc piet.c

    (Here cc *.c wouldn't work.) This mirrors pretty much what I see in most
    C projects; needless complexity that muddies the waters and creates
    failures.

    ALL I WANT IS A LIST OF FILES. Why doesn't anybody get that? And why is
    it so hard?

    Apparently makefiles are superior because you don't even need to know
    the name of the program (and will have to hunt for where it put the
    executable because it won't tell you!).

    But I have no interest in changing to something vastly more limited and
    which adds nothing at all.

    That's right; it adds nothing, but it takes a lot away! Like a lot of
    failure points.

(Look at the Monty Hall problem, but instead of 3 doors, try it with
100, of which 98 will be opened. Then it will be easy to make the right
decision because nearly all the wrong ones have been eliminated.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to David Brown on Fri Feb 2 00:55:38 2024
    XPost: comp.unix.programmer

    On Thu, 1 Feb 2024 22:38:13 +0100
    David Brown <david.brown@hesbynett.no> wrote:

    On 01/02/2024 21:23, Michael S wrote:
    On Thu, 1 Feb 2024 18:34:08 +0000

    I've already covered this in many posts on the subject. But 'make'
    deals with three kinds of requirements:

    (1) Specifying what the modules are to be compiled and combined
    into one binary file

    (2) Specifying dependences between all files to allow rebuilding of
    that one file with minimal recompilation

(3) Everything else needed in a complex project: running processes
to generate files like config.h, creating multiple binaries,
specifying dependencies between binaries, installation etc

    My proposal tackles only (1), which is something that many
    languages now have the means to deal with themselves. I already
    stated that (2) is not covered.

    But you may still need makefiles to deal with (3).

    If your main requirement /is/ only (1), then my idea is to move the
    necessary info into the source code, and tackle it with the C
    compiler.
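For illustration, a makefile that covers only (1) for the three-file
cipher demo need be no more than this sketch (no dependency tracking,
so everything is recompiled every time):

    cipher.exe: cipher.c hmac.c sha2.c
    	gcc cipher.c hmac.c sha2.c -o cipher.exe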



Your proposal and the needs of David Brown are not necessarily
contradictory.
All you need to do to satisfy him is to add to your compiler an
option for exporting dependencies in make-compatible format, i.e.
something very similar to the -MD option of gcc.

    Then David could write in his makefile:

out/foo.elf : main_foo.c
	mcc -MD $< -o $@

-include out/foo.d

And then to proceed with automation of his pre- and post-processing
needs.
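(For a sense of what that would give him: the file written by -MD is
itself a makefile fragment. For a whole-program compiler like mcc, a
hypothetical out/foo.d might list every source and header the build
consumed, something like:

    out/foo.elf: main_foo.c module1.c module1.h module2.c module2.h

The exact naming and contents would be up to the tool; the module names
here are made up.)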

    But then I'd still be using "make", and Bart would not be happy.

    And "gcc -MD" does not need any extra #pragmas, so presumably neither
    would an implementation of that feature in bcc (or mcc or whatever).
    So Bart's new system would disappear entirely.




Bart spares you from managing list(s) of objects in your makefile and
from writing arcane helper macros.
Yes, I know, you copy&paste arcane macros from project to project, but
you had to write them n years ago and that surely was not easy.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to David Brown on Thu Feb 1 23:30:14 2024
    XPost: comp.unix.programmer

    On Thu, 1 Feb 2024 22:34:36 +0100, David Brown wrote:

    I am, however, considering CMake (which works at a
    higher level, and outputs makefiles, ninja files or other project
    files).

Ninja was created as an alternative to Make. Basically, if your Makefiles
are going to be generated by a meta-build system like CMake or Meson, then
they don’t need to support the kinds of niceties that facilitate writing
them by hand. So you strip it right down to the bare-bones functionality,
which makes your builds fast while consuming minimal resources, and that
is Ninja.
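For a flavour of how bare-bones that is, a complete hand-written ninja
file for a small C program might look like this (a sketch; ninja files
are normally generated rather than written by hand, and the file names
are the cipher demo's):

    rule cc
      command = gcc $in -o $out

    build cipher.exe: cc cipher.c hmac.c sha2.c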

It appears to have some disadvantages compared to my makefiles,
such as needing to be run as an extra step when files are added or
removed to a project or dependencies are changed, but that doesn't
happen too often, and its integration with other tools and projects
might make it an overall win.

Some are proposing Meson as an alternative to CMake. I think they are
saying that the fact that its scripting language is not fully
Turing-equivalent is an advantage.

    Me, while I think the CMake language can be a little clunky in places, I
    still think having Turing-equivalence is better than not having it. ;)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Michael S on Thu Feb 1 23:31:36 2024
    XPost: comp.unix.programmer

    On Fri, 2 Feb 2024 00:55:38 +0200, Michael S wrote:

Yes, I know, you copy&paste arcane macros from project to project, but you
had to write them n years ago and that surely was not easy.

    And maybe you discover bugs in them in certain situations, and have to
    track down all the places you copied/pasted them and fix them.

    My code-reuse OCD reflex is twitching at this point.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to David Brown on Thu Feb 1 23:32:46 2024
    XPost: comp.unix.programmer

    On Thu, 1 Feb 2024 23:09:48 +0100, David Brown wrote:

    "nmake" is MS's version of "make" ...

    I think they did originally have a tool called “make”. But this was so
    crap in comparison to the GNU/POSIX equivalent that they changed the name
    in the new version to try to distance themselves from the bad taste the
    old version left in people’s mouths.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Keith Thompson on Thu Feb 1 23:38:17 2024
    XPost: comp.unix.programmer

    On Thu, 01 Feb 2024 15:24:00 -0800, Keith Thompson wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    nproc(1) is part of the GNU Core Utilities
    <manpages.debian.org/1/nproc.1.html>.

    And GNU make is not, so it's possible that a system might have make but
    not nproc.

    While that is theoretically possible, I somehow think such an installation would feel to the typical *nix user somewhat ... crippled.

    Particularly since the “install” command is part of coreutils.

    Also imagine trying to do builds, or any kind of development, on a system without the “mkdir” command--another component of coreutils.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Scott Lurndal on Thu Feb 1 23:40:21 2024
    On Thu, 01 Feb 2024 15:00:03 GMT, Scott Lurndal wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    On Thu, 01 Feb 2024 00:29:23 GMT, Scott Lurndal wrote:

How would you display a manpage using nroff markup from an
application?

    Much safer:

import os, subprocess

subprocess.run(
    args = ("man", os.path.expandvars("${INSTALL_LOC}/man/topic.man"))
)

    You are aware you are posting to comp.lang.c, right?

    Yes. Nevertheless, this is the clearest and most concise (read: least work involved for me) way of explaining what I mean; I will leave it to the C experts to translate it into their preferred lower-level way of doing
    things.
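One possible C rendering, for the record: a sketch assuming a POSIX
system, using the same hypothetical ${INSTALL_LOC}/man/topic.man path
as above:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    /* Run "man" on a page under $INSTALL_LOC, roughly what the
       subprocess.run() call above does. Returns -1 on failure,
       otherwise the child's wait status. */
    int show_manpage(void)
    {
        const char *loc = getenv("INSTALL_LOC");
        char path[4096];
        pid_t pid;
        int status;

        if (loc == NULL)
            return -1;
        snprintf(path, sizeof path, "%s/man/topic.man", loc);

        pid = fork();
        if (pid < 0)
            return -1;
        if (pid == 0) {              /* child: replace with "man" */
            execlp("man", "man", path, (char *)NULL);
            _exit(127);              /* only reached if exec failed */
        }
        if (waitpid(pid, &status, 0) < 0)
            return -1;
        return status;
    }

    int main(void)
    {
        return show_manpage() == 0 ? 0 : 1;
    }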

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Lawrence D'Oliveiro on Thu Feb 1 23:53:03 2024
    XPost: comp.unix.programmer

    On 2024-02-01, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Thu, 01 Feb 2024 15:24:00 -0800, Keith Thompson wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    nproc(1) is part of the GNU Core Utilities
    <manpages.debian.org/1/nproc.1.html>.

    And GNU make is not, so it's possible that a system might have make but
    not nproc.

    While that is theoretically possible, I somehow think such an installation would feel to the typical *nix user somewhat ... crippled.

    Selected GNU programs can be individually installed on Unix-like systems
    which already have other tools of their own.

    Particularly since the “install” command is part of coreutils.

    The install utility appeared in 4.2 BSD, which was released in
    August 1983.

    The GNU Project was announced in September 1983.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to David Brown on Fri Feb 2 01:46:36 2024
    XPost: comp.unix.programmer

    On 01.02.2024 22:34, David Brown wrote:

    I've nothing against shorter or simpler makefiles. [...]

During the mid/late 1990s someone at our site looked for an alternative
to Make. After some evaluation of tools it was decided not to replace
Make. I've just googled for what at that time appeared to be the most
promising candidate (it's obviously still there) and the description
of Jam reads as if it would fulfill some of the requirements that have
been mentioned by various people here (see https://freetype.org/jam/
for details).

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Malcolm McLean on Fri Feb 2 00:35:23 2024
    XPost: comp.unix.programmer

    On 02/02/2024 00:26, Malcolm McLean wrote:
    On 01/02/2024 21:34, David Brown wrote:

    It works for me, and I'm sure could work for others if they didn't
    have makefiles forced down their throats and hardwired into their
    brains.

    /Nobody/ has makefiles forced on them.  People use "make" because it
    is convenient, and it works.  If something better comes along, and it
    is better enough to overcome the familiarity momentum, people will use
    that.

What?
You have total control of your programming environment and never have to
consider anybody else? For hobby programming you do, in a way. Not if you
want other people to use your stuff. But you can always say that the fun
of doing things exactly your way outweighs the fun of getting downloads.

But for professional or academic programming, often you'll find you have
to use make. You don't have a choice. Either someone else took the
decision, or there are so many other people who expect that builds shall
be via make that you have no real alternative.

    Now in one study, someone had wanted to do a survey of genetic sequence analysis software. They reported no results for half the programs,
    because they had attempted to build them, and failed. They didn't say,
    but it's a fair bet that most of those build systems used make. The
    software distribution system is a disaster and badly needs fixing.

But there are lots of caveats. Bart's system might be better, but as you
say it needs traction. I'd be reluctant to evangelise for it and get
everyone to use it at work, because it might prove to have major
drawbacks, and then I'd get the blame.

There's a lite, flexible version of it, which doesn't interfere with any
existing uses of 'make'.

That is to also provide a simple list of the C files somewhere, in a
comment, or a text file. Plus any other notes needed to build the project
(written in English or Norwegian, I don't care; Norwegian would be easier
to decode than a typical makefile).

    This is exactly what you did with the resource compiler, specifying the
    three lots of *.c files needed to build it; no makefiles or CMake needed
    (which failed if you remember).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Keith Thompson on Fri Feb 2 01:03:09 2024
    XPost: comp.unix.programmer

    On Thu, 01 Feb 2024 15:28:03 -0800, Keith Thompson wrote:

    The C standard doesn't specify file
    extensions, either for source files or for files included with #include.

    It does for the standard library includes, though.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Michael S on Fri Feb 2 02:08:14 2024
    XPost: comp.unix.programmer

    Michael S <already5chosen@yahoo.com> writes:
    On Thu, 1 Feb 2024 22:38:13 +0100
    David Brown <david.brown@hesbynett.no> wrote:


And then to proceed with automation of his pre- and post-processing
needs.

    But then I'd still be using "make", and Bart would not be happy.

    And "gcc -MD" does not need any extra #pragmas, so presumably neither
    would an implementation of that feature in bcc (or mcc or whatever).
    So Bart's new system would disappear entirely.




Bart spares you from managing list(s) of objects in your makefile and
from writing arcane helper macros.
Yes, I know, you copy&paste arcane macros from project to project, but
you had to write them n years ago and that surely was not easy.

    "Not easy for you" doesn't automatically translate to "not easy for
    everyone else".

    Difficult is the configuration file for sendmail processed by m4.

    Make is easy.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From tTh@21:1/5 to bart on Fri Feb 2 03:22:46 2024
    XPost: comp.unix.programmer

    On 2/1/24 23:29, bart wrote:
    I do. You type:

       cc prog

without knowing or caring whether the program contains that one module,
or there are 99 more.


    I also do. You type:

    make prog

without knowing or caring whether the program contains that one module,
or there are 51 more.


    --
+---------------------------------------------------------------------+
| https://tube.interhacker.space/a/tth/video-channels                 |
+---------------------------------------------------------------------+

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Keith Thompson on Fri Feb 2 02:43:51 2024
    XPost: comp.unix.programmer

    On Thu, 01 Feb 2024 17:42:32 -0800, Keith Thompson wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    On Thu, 01 Feb 2024 15:28:03 -0800, Keith Thompson wrote:

    The C standard doesn't specify file extensions, either for source
    files or for files included with #include.

    It does for the standard library includes, though.

    Strictly speaking, it doesn't specify that the standard library headers
    are files.

    From the C99 spec, page 149:

    6.10.2 Source file inclusion
    Constraints
    A #include directive shall identify a header or source file that
    can be processed by the implementation.

    ...

    3 A preprocessing directive of the form
    # include "q-char-sequence" new-line
    causes the replacement of that directive by the entire contents of
    the source file identified by the specified sequence between the "
    delimiters. The named source file is searched for in an
    implementation-defined manner.

    So you see, the spec very explicitly uses the term “file”.

    <https://www.open-std.org/JTC1/SC22/WG14/www/docs/n869/>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Michael S on Fri Feb 2 09:02:15 2024
    XPost: comp.unix.programmer

    On 01/02/2024 23:55, Michael S wrote:
    On Thu, 1 Feb 2024 22:38:13 +0100
    David Brown <david.brown@hesbynett.no> wrote:

    On 01/02/2024 21:23, Michael S wrote:
    On Thu, 1 Feb 2024 18:34:08 +0000



Your proposal and the needs of David Brown are not necessarily
contradictory.
All you need to do to satisfy him is to add to your compiler an
option for exporting dependencies in make-compatible format, i.e.
something very similar to the -MD option of gcc.

    Then David could write in his makefile:

out/foo.elf : main_foo.c
	mcc -MD $< -o $@

-include out/foo.d

And then to proceed with automation of his pre- and post-processing
needs.

    But then I'd still be using "make", and Bart would not be happy.

    And "gcc -MD" does not need any extra #pragmas, so presumably neither
    would an implementation of that feature in bcc (or mcc or whatever).
    So Bart's new system would disappear entirely.




Bart spares you from managing list(s) of objects in your makefile and
from writing arcane helper macros.
Yes, I know, you copy&paste arcane macros from project to project, but
you had to write them n years ago and that surely was not easy.


    Google "makefile automatic dependencies", then adapt to suit your own
    needs. Re-use the same makefile time and again.
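The pattern that search turns up is usually a variation on this sketch:
have each compilation write a .d dependency fragment as a side effect,
then pull in whatever fragments exist:

    %.o: %.c
    	$(CC) $(CFLAGS) -MMD -MP -c $< -o $@

    -include $(wildcard *.d)

Here -MMD makes gcc write each object's prerequisites to a .d file
(omitting system headers), and -MP adds phony targets so that deleted
headers don't break the build.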

    Yes, some of the functions I have in my makefiles are a bit hairy, and
    some of the command line options for gcc are a bit complicated. They
    are done now.

    If there had been an easier way than this, which still let me do what I
    need (Bart's system does not), which is popular enough that you can
    easily google for examples, blogs, and tutorials, then I'd have been
    happy to use that at the time. I won't change to something else unless
    it gives me significant additional benefits.

People smarter and more experienced than Bart have been trying to invent
better replacements for "make" for many decades. None have succeeded.
Some build systems are better in some ways, but nothing has come close
to covering the wide range of features and uses of make, or gained hold
outside a particular niche. Everyone who has ever made serious use of
"make" knows it has many flaws, unnecessary complications, limitations
and inefficiencies. Despite that, it is the best we have.

    With Bart's limited knowledge and experience, and deeply ingrained
    prejudices and misunderstandings, the best we can hope for is something
    that works well enough for some simple cases of C programs. More realistically, it will work for Bart's use alone.

And that, of course, is absolutely fine. No one is paying Bart to write
a generic build system, or something of use to anyone else. He is free
to write exactly what he wants, in the way he wants, and if he ends up
with a tool that he finds useful himself, that is great. If he ends up
with something that at least some other people find useful, that is even
better, and I wish him luck with his work.

    But don't hold your breath waiting for something that will replace make,
    or attract users of any other build system.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Keith Thompson on Fri Feb 2 10:54:21 2024
    XPost: comp.unix.programmer

    On 02/02/2024 04:03, Keith Thompson wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Thu, 01 Feb 2024 17:42:32 -0800, Keith Thompson wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Thu, 01 Feb 2024 15:28:03 -0800, Keith Thompson wrote:
    The C standard doesn't specify file extensions, either for source
    files or for files included with #include.

    It does for the standard library includes, though.

    Strictly speaking, it doesn't specify that the standard library headers
    are files.

    From the C99 spec, page 149:

    6.10.2 Source file inclusion
    Constraints
    A #include directive shall identify a header or source file that
    can be processed by the implementation.

    ...

    3 A preprocessing directive of the form
    # include "q-char-sequence" new-line
    causes the replacement of that directive by the entire contents of
    the source file identified by the specified sequence between the "
    delimiters. The named source file is searched for in an
    implementation-defined manner.

    So you see, the spec very explicitly uses the term “file”.

    <https://www.open-std.org/JTC1/SC22/WG14/www/docs/n869/>

    Yes, but not in reference to the standard headers.

    A #include directive with <> searches for a "header", which is not
    stated to be a file. A #include directive with "" searches for a file
    in an implementation-defined manner; if that search fails, it tries
    again as if <> had been used.

    References to standard headers (stdio.h et al) always use the <> syntax.
    You can write `#include "stdio.h"` if you like, but it risks picking up
    a file with the same name instead of the standard header (which *might*
    be what you want).
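A two-line illustration of the difference:

    #include <stdio.h>    /* always searched for as the standard header */
    #include "stdio.h"    /* may pick up a local file named stdio.h,
                             falling back to the <> search if none exists */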

BTW, the n1256.pdf draft is a close approximation to the C99 standard;
it consists of the published standard with the three Technical
Corrigenda merged into it. The n1570.pdf draft is the last publicly
released draft before C11 was published, and is close enough to C11 for
most purposes.


    In 7.1.2 "Standard headers", it says:

    """
    Each library function is declared, with a type that includes a
    prototype, in a header, 188) whose contents are made available by the
    #include preprocessing directive.
    """

    "Header" here is in italics, meaning it is a definition of the term.
    And footnote 188 has :

    """
A header is not necessarily a source file, nor are the < and > delimited
sequences in header names necessarily valid source file names.
    """

    (I am quoting from n2346, the final C18 draft. The section numbering is generally consistent between standard versions, but footnote numbers
    change, in case anyone is looking this up.)


    I have personally used a toolchain where the standard library headers
    did not exist as files, but were internal to the compiler (and the implementations were internal to the linker). I think the toolchain
    company was a bit paranoid that others would copy their proprietary library.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Malcolm McLean on Fri Feb 2 11:13:42 2024
    XPost: comp.unix.programmer

    On 02/02/2024 01:26, Malcolm McLean wrote:
    On 01/02/2024 21:34, David Brown wrote:

    It works for me, and I'm sure could work for others if they didn't
    have makefiles forced down their throats and hardwired into their
    brains.

    /Nobody/ has makefiles forced on them.  People use "make" because it
    is convenient, and it works.  If something better comes along, and it
    is better enough to overcome the familiarity momentum, people will use
    that.

What?
You have total control of your programming environment and never have to
consider anybody else? For hobby programming you do, in a way. Not if you
want other people to use your stuff. But you can always say that the fun
of doing things exactly your way outweighs the fun of getting downloads.


    Okay, none of the people talking about "make" /here/ had it forced on
    them for the uses they are talking about /here/.

    Yes, I have a very large degree of control over my programming
    environment - because I work in a company where employees get to make
    the decisions that they are best qualified to make, and management's job
    is to support them. One of the important factors I consider is
    interaction with colleagues and customers, for which "make" works well.

    And while people may be required to use make, or particular compilers,
    or OS's, no one is forced to /like/ a tool or find it useful. I believe
    that when people here say they like make, or find it works well for
    them, or that it can handle lots of different needs, or that they know
    of nothing better for their requirements, they are being honest about
    that. If they didn't like it, they would say.

    The only person here whom we can be absolutely sure does /not/ have
    "make" forced upon them for their development, is Bart. And he is the
    one who complains about it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Lawrence D'Oliveiro on Fri Feb 2 11:05:22 2024
    XPost: comp.unix.programmer

    On 02/02/2024 00:30, Lawrence D'Oliveiro wrote:
    On Thu, 1 Feb 2024 22:34:36 +0100, David Brown wrote:

    I am, however, considering CMake (which works at a
    higher level, and outputs makefiles, ninja files or other project
    files).

    Ninja was created as an alternative to Make.

    It is an alternative to some uses of make - but by no means all uses.

Basically, if your Makefiles are going to be generated by a meta-build
system like CMake or Meson, then they don’t need to support the kinds of
niceties that facilitate writing them by hand. So you strip it right down
to the bare-bones functionality, which makes your builds fast while
consuming minimal resources, and that
is Ninja.

    Yes.

    It is not normal to write ninja files by hand - the syntax is relatively simple, but quite limited. So it covers the lower level bits of "make",
    but not the higher level bits.


    Perhaps ninja is the tool that Bart is looking for? For the kinds of
    things he is doing, I don't think it would be hard to write the ninja
    files by hand.



    So it won't work for my needs, as I want to work at a higher level
    (without manually detailing file lists and dependencies).

    But if I find that CMake supports all I need at that level, then I
    expect I could just as easily generate ninja files as makefiles. The
    only issue that I know of is that ninja does not have full jobserver
    support, which could be important if the build involves other parallel
    tasks (like gcc LTO linking).


It appears to have some disadvantages compared to my makefiles,
such as needing to be run as an extra step when files are added or
removed to a project or dependencies are changed, but that doesn't
happen too often, and its integration with other tools and projects
might make it an overall win.

Some are proposing Meson as an alternative to CMake. I think they are
saying that the fact that its scripting language is not fully
Turing-equivalent is an advantage.

Me, while I think the CMake language can be a little clunky in places, I
still think having Turing-equivalence is better than not having it. ;)

    For many reasons, CMake is the prime candidate as an alternative to make
    for my use.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to bart on Fri Feb 2 10:47:12 2024
    XPost: comp.unix.programmer

    On 01/02/2024 23:29, bart wrote:
    On 01/02/2024 21:34, David Brown wrote:
    On 01/02/2024 19:34, bart wrote:

    You don't see that the language taking over task (1) of the things
    that makefiles do, and possibly (2) (of the list I posted; repeated
    below), can streamline makefiles to make them shorter, simpler,
    easier to write and to read, and with fewer opportunities to get
    stuff wrong?

    That was a rhetorical question. Obviously not.

    I've nothing against shorter or simpler makefiles.  But as far as I
    can see, you are just moving the same information from a makefile into
    the C files.

    Indeed, you are duplicating things - now your C files have to have
    "#pragma module this, #pragma module that" in addition to having
    "#include this.h, #include that.h".  With my makefiles, all the "this"
    and "that" is found automatically - writing the includes in the C code
    is sufficient.

    I don't think so. Seeing:

        #include "file.h"

    doesn't necessarily mean there is a matching "file.c". It might not
    exist, or the header might be for some external library, or maybe it
    does exist but in a different location.

    As I said, you are duplicating things.

    For my builds, I do not have anywhere that I need to specify "file.c".


Or maybe some code may use a file "fred.c", which needs to be submitted
to the compiler, but which either has no header, or uses a header file
with a different name.

    As I said, C's uses of .h and .c files are chaotic.

    My uses of .h and .c files are not chaotic.

    Maybe you can't write well-structured C programs. Certainly not
    everyone can. (And /please/ do not give another list of open source
    programs that you don't like. I didn't write them. I can tell you how
    and why /I/ organise my projects and makefiles - I don't speak for others.)


    Did you have in mind using gcc's -MM option? For my 'cipher.c' demo,
    that only gives a set of header names.  Missing are hmac.c and sha2.c.


    I use makefiles where gcc's "-M" options are part of the solution - not
    the whole solution.

    If I try it on lua.c, it gives me only 5 header files; the project
    comprises 33 .c files and 27 .h files.


    I don't care. I did not write lua.

    But I /have/ integrated lua with one of my projects, long ago. It fit
    into my makefile format without trouble - I added the lua directory as a subdirectory of my source directory, and that was all that was needed.



    Perhaps I would find your tools worked for a "Hello, world" project.
    Maybe they were still okay as it got slightly bigger.  Then I'd have
    something that they could not handle, and I'd reach for make.  What
    would be the point of using "make" to automate - for example -
    post-processing of a binary to add a CRC check, but using your tools
    to handle the build?  It's much easier just to use "make" for the
    whole thing.


Because building one binary is a process that should be the job of the
compiler, not some random external tool that knows nothing of the
language or compiler.

    No, it is the job of the linker.

    There is where you're still stuck in the past.

    I first got rid of a formal 'linker' about 40 years ago. I got rid of
    the notion of combining independently compiled modules into an
    executable a decade ago.

    No, you built a monolithic tool that /included/ the linker. That's fine
    for niche tools that are not intended to work with anything else. Most
    people work with many tools - that's why we have standards, defined file formats, and flexible tools with wide support.

    Other people got rid of monolithic tools forty years ago when they
    realised it was a terrible way to organise things.



    But I suspect you don't understand what a 'whole-program compiler' does:


    I know exactly what it does. I am entirely without doubt that I know
    the point and advantages of them better than you do - the /real/ points
    and advantages, not some pathetic "it means I don't have to use that
    horrible nasty make program" reason.

    * It means that for each binary, all sources are recompiled at the same
      time to create it

    No, it does not.


    * It doesn't mean that an application can only comprise one binary

    Correct.


    * It moves the compilation unit granularity from a module to a single
      EXE or DLL file

    No, it does not.


    * Interfaces (in the case of a lower level language), are moved inter-
      module to inter-program. The boundaries are between one program or
      library and another, not between modules.

    Correct.


    A language which claims to have a module system, but still compiles a
    module at a time, will probably still have discrete inter-module
    interfaces, although they may be handled automatically.


    Correct.


    In real-world whole program compilation systems, the focus is on
    inter-module optimisations. Total build times are expected to go /up/.
    Build complexity can be much higher, especially for large programs. It
    is more often used for C++ than C.

The main point of a lot of whole-program compilation is to allow
cross-module optimisation. It means you can have "access" functions
hidden away in implementation files so that you avoid global variables
or inter-dependencies between modules, but now they can be inlined
across modules so that you have no overhead or costs for this. It means
you can write code that is more structured and modular, with different
teams handling different parts, and with layers of abstractions, but
when you pull it all together into one whole-program build, the run-time
costs and overhead for this all disappear. And it means lots of checks
and static analysis can be done across the whole program.


For such programs, each translation unit is still compiled separately,
but the "object" files contain internal data structures and analysis
information, rather than generated code. Lots of the work is done by
this point, with inter-procedural optimisations done within the unit.
These compilations will be done as needed, in parallel, under the
control of a build system. Then they are combined for the linking and
link-time optimisation which fits the parts together. Doing this in a
scalable way is hard, and the subject of a lot of research, as you need
to partition it into chunks that can be handled in parallel on multiple
cpu cores (or even distributed amongst servers). Once you have parts of
code that are ready, they are handed on to backend compilers that do
more optimisation and generate the object code, and this in turn is
linked (sometimes incrementally in parts, again aiming at improving
parallel building and scalability).
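As a concrete sketch of what that buys, using gcc's -flto option on a
made-up two-file project:

    /* counter.c */
    static int count;
    int counter_next(void) { return ++count; }

    /* main.c */
    int counter_next(void);
    int main(void) { return counter_next(); }

    gcc -O2 -flto -c counter.c     (the "object" file carries compiler IR)
    gcc -O2 -flto -c main.c
    gcc -O2 -flto counter.o main.o -o prog

At the final link step the call to counter_next() can be inlined into
main(), even though it is defined in a different translation unit and
not visible in any header.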


    You go to all this effort because you are building software that is used
    by millions of people, and your build effort is minor compared to the
    total improvements for all users combined. Or you do it because you are building speed-critical software. Or you want the best static analysis
    you can get, and want that done across modules. Or you are building
    embedded systems that need to be as efficient as possible.

    You don't do it because you find "make" ugly.


    It is also very useful on old-fashioned microcontrollers with multiple
    banks for data ram and code memory, and no good data stack access - the compiler can do large-scale lifetime analysis and optimise placement and
    the re-use of the very limited ram.



    /Nobody/ has makefiles forced on them.  People use "make" because it
    is convenient, and it works.

    BUT IT DOESN'T.

    IT DOES WORK.

    People use it all the time.

It fails a lot of the time on Windows, and the makefiles are too
complicated to figure out why.

    People use it all the time on Windows.

    Even Microsoft ships its own version of make, "nmake.exe", and has done
    for decades.

    /You/ can't work it, but you excel at failing to get things working.
    You have a special gift - you just have to look at a computer with tools
    that you didn't write yourself, and it collapses.


    But I have no interest in changing to something vastly more limited
    and which adds nothing at all.

    That's right; it adds nothing, but it takes a lot away! Like a lot of
    failure points.

    Like pretty much everything I need.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to David Brown on Fri Feb 2 10:54:22 2024
    XPost: comp.unix.programmer

    On 02/02/2024 10:13, David Brown wrote:
    On 02/02/2024 01:26, Malcolm McLean wrote:
    On 01/02/2024 21:34, David Brown wrote:

    It works for me, and I'm sure could work for others if they didn't
    have makefiles forced down their throats and hardwired into their
    brains.

    /Nobody/ has makefiles forced on them.  People use "make" because it
    is convenient, and it works.  If something better comes along, and it
    is better enough to overcome the familiarity momentum, people will
    use that.

What?
You have total control of your programming environment and never have
to consider anybody else? For hobby programming you do, in a way. Not
if you want other people to use your stuff. But you can always say that
the fun of doing things exactly your way outweighs the fun of getting
downloads.


    Okay, none of the people talking about "make" /here/ had it forced on
    them for the uses they are talking about /here/.

    Yes, I have a very large degree of control over my programming
    environment - because I work in a company where employees get to make
    the decisions that they are best qualified to make, and management's job
    is to support them.  One of the important factors I consider is
    interaction with colleagues and customers, for which "make" works well.

    And while people may be required to use make, or particular compilers,
    or OS's, no one is forced to /like/ a tool or find it useful.  I believe that when people here say they like make, or find it works well for
    them, or that it can handle lots of different needs, or that they know
    of nothing better for their requirements, they are being honest about
    that.  If they didn't like it, they would say.

    The only person here whom we can be absolutely sure does /not/ have
    "make" forced upon them for their development, is Bart.  And he is the
    one who complains about it.


Not for my own development, no. Unless that includes having to build
external dependencies from source, which are written in C.

    Or just things I want to test my C compiler on.

If I want to build Seed7, for example, that comes with 19 different
makefiles. LibJPEG has 15 different makefiles. GMP has one makefile,
but a 30,000-line configure script that depends on Linux.

I could, and have, spent a lot of time on many of those, manually
discovering the C files necessary to build the project.

    Once done, the process was beautifully streamlined and simple.

But I know this is a waste of time and nobody's mind is going to be changed.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to tTh on Fri Feb 2 11:13:13 2024
    XPost: comp.unix.programmer

    On 02/02/2024 02:22, tTh wrote:
    On 2/1/24 23:29, bart wrote:
    I do. You type:

        cc prog

without knowing or caring whether the program contains that one module,
or there are 99 more.


    I also do. You type:

       make prog

without knowing or caring whether the program contains that one module,
or there are 51 more.


    Really? OK, let's try it:

    c:\c>make cipher
    cc cipher.c -o cipher
    C:\tdm\bin\ld.exe: C:\Users\44775\AppData\Local\Temp\ccRvFIdY.o:cipher.c:(.text+0x55a):
    undefined reference to `hmac_sha256_final'

    It seems I do need to care after all!

    Oh, you mean I don't need to care AFTER I've created a complicated
    makefile containing all those details that you claim I don't need to
    bother with?

    Let's try with a real solution:

    c:\c>mcc cipher
    Compiling cipher.c to cipher.exe


    Or here's one where I don't need to add anything to the C code:

    c:\c>bcc -auto cipher
    1 Compiling cipher.c to cipher.asm (Pass 1)
    * 2 Compiling hmac.c to hmac.asm (Pass 2)
    * 3 Compiling sha2.c to sha2.asm (Pass 2)
    Assembling to cipher.exe

    I'm the one who's trying innovative approaches to minimise the extra
    gumph you need to provide to build programs.

    You're the one who needs to first write a pile of garbage within a
    makefile in order for you to do:

    make prog

    Below is the makefile needed to build lua 5.4, which is a project of
    only 35 C modules. Simple, isn't it?

    ---------------------------------
    # Makefile for building Lua
    # See ../doc/readme.html for installation and customization instructions.

# == CHANGE THE SETTINGS BELOW TO SUIT YOUR ENVIRONMENT =======================

    # Your platform. See PLATS for possible values.
    PLAT= guess

    CC= gcc -std=gnu99
    CFLAGS= -O2 -Wall -Wextra -DLUA_COMPAT_5_3 $(SYSCFLAGS) $(MYCFLAGS)
    LDFLAGS= $(SYSLDFLAGS) $(MYLDFLAGS)
    LIBS= -lm $(SYSLIBS) $(MYLIBS)

    AR= ar rcu
    RANLIB= ranlib
    RM= rm -f
    UNAME= uname

    SYSCFLAGS=
    SYSLDFLAGS=
    SYSLIBS=

    MYCFLAGS=
    MYLDFLAGS=
    MYLIBS=
    MYOBJS=

    # Special flags for compiler modules; -Os reduces code size.
    CMCFLAGS=

# == END OF USER SETTINGS -- NO NEED TO CHANGE ANYTHING BELOW THIS LINE =======

PLATS= guess aix bsd c89 freebsd generic ios linux linux-readline macosx mingw posix solaris

    LUA_A= liblua.a
CORE_O= lapi.o lcode.o lctype.o ldebug.o ldo.o ldump.o lfunc.o lgc.o \
	llex.o lmem.o lobject.o lopcodes.o lparser.o lstate.o lstring.o \
	ltable.o ltm.o lundump.o lvm.o lzio.o
LIB_O= lauxlib.o lbaselib.o lcorolib.o ldblib.o liolib.o lmathlib.o \
	loadlib.o loslib.o lstrlib.o ltablib.o lutf8lib.o linit.o
BASE_O= $(CORE_O) $(LIB_O) $(MYOBJS)

    LUA_T= lua
    LUA_O= lua.o

    LUAC_T= luac
    LUAC_O= luac.o

    ALL_O= $(BASE_O) $(LUA_O) $(LUAC_O)
    ALL_T= $(LUA_A) $(LUA_T) $(LUAC_T)
    ALL_A= $(LUA_A)

    # Targets start here.
    default: $(PLAT)

    all: $(ALL_T)

    o: $(ALL_O)

    a: $(ALL_A)

$(LUA_A): $(BASE_O)
	$(AR) $@ $(BASE_O)
	$(RANLIB) $@

$(LUA_T): $(LUA_O) $(LUA_A)
	$(CC) -o $@ $(LDFLAGS) $(LUA_O) $(LUA_A) $(LIBS)

$(LUAC_T): $(LUAC_O) $(LUA_A)
	$(CC) -o $@ $(LDFLAGS) $(LUAC_O) $(LUA_A) $(LIBS)

test:
	./$(LUA_T) -v

clean:
	$(RM) $(ALL_T) $(ALL_O)

depend:
	@$(CC) $(CFLAGS) -MM l*.c

echo:
	@echo "PLAT= $(PLAT)"
	@echo "CC= $(CC)"
	@echo "CFLAGS= $(CFLAGS)"
	@echo "LDFLAGS= $(LDFLAGS)"
	@echo "LIBS= $(LIBS)"
	@echo "AR= $(AR)"
	@echo "RANLIB= $(RANLIB)"
	@echo "RM= $(RM)"
	@echo "UNAME= $(UNAME)"

    # Convenience targets for popular platforms.
    ALL= all

help:
	@echo "Do 'make PLATFORM' where PLATFORM is one of these:"
	@echo " $(PLATS)"
	@echo "See doc/readme.html for complete instructions."

guess:
	@echo Guessing `$(UNAME)`
	@$(MAKE) `$(UNAME)`

AIX aix:
	$(MAKE) $(ALL) CC="xlc" CFLAGS="-O2 -DLUA_USE_POSIX -DLUA_USE_DLOPEN" SYSLIBS="-ldl" SYSLDFLAGS="-brtl -bexpall"

bsd:
	$(MAKE) $(ALL) SYSCFLAGS="-DLUA_USE_POSIX -DLUA_USE_DLOPEN" SYSLIBS="-Wl,-E"

c89:
	$(MAKE) $(ALL) SYSCFLAGS="-DLUA_USE_C89" CC="gcc -std=c89"
	@echo ''
	@echo '*** C89 does not guarantee 64-bit integers for Lua.'
	@echo '*** Make sure to compile all external Lua libraries'
	@echo '*** with LUA_USE_C89 to ensure consistency'
	@echo ''

FreeBSD NetBSD OpenBSD freebsd:
	$(MAKE) $(ALL) SYSCFLAGS="-DLUA_USE_LINUX -DLUA_USE_READLINE -I/usr/include/edit" SYSLIBS="-Wl,-E -ledit" CC="cc"

generic: $(ALL)

ios:
	$(MAKE) $(ALL) SYSCFLAGS="-DLUA_USE_IOS"

Linux linux: linux-noreadline

linux-noreadline:
	$(MAKE) $(ALL) SYSCFLAGS="-DLUA_USE_LINUX" SYSLIBS="-Wl,-E -ldl"

linux-readline:
	$(MAKE) $(ALL) SYSCFLAGS="-DLUA_USE_LINUX -DLUA_USE_READLINE" SYSLIBS="-Wl,-E -ldl -lreadline"

Darwin macos macosx:
	$(MAKE) $(ALL) SYSCFLAGS="-DLUA_USE_MACOSX -DLUA_USE_READLINE" SYSLIBS="-lreadline"

mingw:
	$(MAKE) "LUA_A=lua54.dll" "LUA_T=lua.exe" \
	"AR=$(CC) -shared -o" "RANLIB=strip --strip-unneeded" \
	"SYSCFLAGS=-DLUA_BUILD_AS_DLL" "SYSLIBS=" "SYSLDFLAGS=-s" lua.exe
	$(MAKE) "LUAC_T=luac.exe" luac.exe

posix:
	$(MAKE) $(ALL) SYSCFLAGS="-DLUA_USE_POSIX"

SunOS solaris:
	$(MAKE) $(ALL) SYSCFLAGS="-DLUA_USE_POSIX -DLUA_USE_DLOPEN -D_REENTRANT" SYSLIBS="-ldl"

    # Targets that do not create files (not all makes understand .PHONY).
    .PHONY: all $(PLATS) help test clean default o a depend echo

    # Compiler modules may use special flags.
llex.o:
	$(CC) $(CFLAGS) $(CMCFLAGS) -c llex.c

lparser.o:
	$(CC) $(CFLAGS) $(CMCFLAGS) -c lparser.c

lcode.o:
	$(CC) $(CFLAGS) $(CMCFLAGS) -c lcode.c

    # DO NOT DELETE

    lapi.o: lapi.c lprefix.h lua.h luaconf.h lapi.h llimits.h lstate.h \
    lobject.h ltm.h lzio.h lmem.h ldebug.h ldo.h lfunc.h lgc.h lstring.h \
    ltable.h lundump.h lvm.h
    lauxlib.o: lauxlib.c lprefix.h lua.h luaconf.h lauxlib.h
    lbaselib.o: lbaselib.c lprefix.h lua.h luaconf.h lauxlib.h lualib.h
    lcode.o: lcode.c lprefix.h lua.h luaconf.h lcode.h llex.h lobject.h \
    llimits.h lzio.h lmem.h lopcodes.h lparser.h ldebug.h lstate.h ltm.h \
    ldo.h lgc.h lstring.h ltable.h lvm.h
    lcorolib.o: lcorolib.c lprefix.h lua.h luaconf.h lauxlib.h lualib.h
    lctype.o: lctype.c lprefix.h lctype.h lua.h luaconf.h llimits.h
    ldblib.o: ldblib.c lprefix.h lua.h luaconf.h lauxlib.h lualib.h
    ldebug.o: ldebug.c lprefix.h lua.h luaconf.h lapi.h llimits.h lstate.h \
    lobject.h ltm.h lzio.h lmem.h lcode.h llex.h lopcodes.h lparser.h \
    ldebug.h ldo.h lfunc.h lstring.h lgc.h ltable.h lvm.h
    ldo.o: ldo.c lprefix.h lua.h luaconf.h lapi.h llimits.h lstate.h \
    lobject.h ltm.h lzio.h lmem.h ldebug.h ldo.h lfunc.h lgc.h lopcodes.h \
    lparser.h lstring.h ltable.h lundump.h lvm.h
    ldump.o: ldump.c lprefix.h lua.h luaconf.h lobject.h llimits.h lstate.h \
    ltm.h lzio.h lmem.h lundump.h
    lfunc.o: lfunc.c lprefix.h lua.h luaconf.h ldebug.h lstate.h lobject.h \
    llimits.h ltm.h lzio.h lmem.h ldo.h lfunc.h lgc.h
    lgc.o: lgc.c lprefix.h lua.h luaconf.h ldebug.h lstate.h lobject.h \
    llimits.h ltm.h lzio.h lmem.h ldo.h lfunc.h lgc.h lstring.h ltable.h
    linit.o: linit.c lprefix.h lua.h luaconf.h lualib.h lauxlib.h
    liolib.o: liolib.c lprefix.h lua.h luaconf.h lauxlib.h lualib.h
    llex.o: llex.c lprefix.h lua.h luaconf.h lctype.h llimits.h ldebug.h \
    lstate.h lobject.h ltm.h lzio.h lmem.h ldo.h lgc.h llex.h lparser.h \
    lstring.h ltable.h
    lmathlib.o: lmathlib.c lprefix.h lua.h luaconf.h lauxlib.h lualib.h
    lmem.o: lmem.c lprefix.h lua.h luaconf.h ldebug.h lstate.h lobject.h \
    llimits.h ltm.h lzio.h lmem.h ldo.h lgc.h
    loadlib.o: loadlib.c lprefix.h lua.h luaconf.h lauxlib.h lualib.h
    lobject.o: lobject.c lprefix.h lua.h luaconf.h lctype.h llimits.h \
    ldebug.h lstate.h lobject.h ltm.h lzio.h lmem.h ldo.h lstring.h lgc.h \
    lvm.h
    lopcodes.o: lopcodes.c lprefix.h lopcodes.h llimits.h lua.h luaconf.h
    loslib.o: loslib.c lprefix.h lua.h luaconf.h lauxlib.h lualib.h
    lparser.o: lparser.c lprefix.h lua.h luaconf.h lcode.h llex.h lobject.h \
    llimits.h lzio.h lmem.h lopcodes.h lparser.h ldebug.h lstate.h ltm.h \
    ldo.h lfunc.h lstring.h lgc.h ltable.h
    lstate.o: lstate.c lprefix.h lua.h luaconf.h lapi.h llimits.h lstate.h \
    lobject.h ltm.h lzio.h lmem.h ldebug.h ldo.h lfunc.h lgc.h llex.h \
    lstring.h ltable.h
    lstring.o: lstring.c lprefix.h lua.h luaconf.h ldebug.h lstate.h \
    lobject.h llimits.h ltm.h lzio.h lmem.h ldo.h lstring.h lgc.h
    lstrlib.o: lstrlib.c lprefix.h lua.h luaconf.h lauxlib.h lualib.h
    ltable.o: ltable.c lprefix.h lua.h luaconf.h ldebug.h lstate.h lobject.h \
    llimits.h ltm.h lzio.h lmem.h ldo.h lgc.h lstring.h ltable.h lvm.h
    ltablib.o: ltablib.c lprefix.h lua.h luaconf.h lauxlib.h lualib.h
    ltm.o: ltm.c lprefix.h lua.h luaconf.h ldebug.h lstate.h lobject.h \
    llimits.h ltm.h lzio.h lmem.h ldo.h lgc.h lstring.h ltable.h lvm.h
    lua.o: lua.c lprefix.h lua.h luaconf.h lauxlib.h lualib.h
    luac.o: luac.c lprefix.h lua.h luaconf.h lauxlib.h ldebug.h lstate.h \
lobject.h llimits.h ltm.h lzio.h lmem.h lopcodes.h lopnames.h lundump.h
lundump.o: lundump.c lprefix.h lua.h luaconf.h ldebug.h lstate.h \
    lobject.h llimits.h ltm.h lzio.h lmem.h ldo.h lfunc.h lstring.h lgc.h \
    lundump.h
    lutf8lib.o: lutf8lib.c lprefix.h lua.h luaconf.h lauxlib.h lualib.h
    lvm.o: lvm.c lprefix.h lua.h luaconf.h ldebug.h lstate.h lobject.h \
    llimits.h ltm.h lzio.h lmem.h ldo.h lfunc.h lgc.h lopcodes.h lstring.h \
    ltable.h lvm.h ljumptab.h
    lzio.o: lzio.c lprefix.h lua.h luaconf.h llimits.h lmem.h lstate.h \
    lobject.h ltm.h lzio.h

    # (end of Makefile)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Gary R. Schmidt@21:1/5 to All on Sat Feb 3 00:25:23 2024
    XPost: comp.unix.programmer

    On 02/02/2024 22:13, bart wrote:
    [Bitching about "make" snipped]

    Try "cake", Zoltan wrote it many decades ago, when we were at $GOSHWHATAUNIVERSITY, because he thought "make" was too prolix.

    Cheers,
    Gary B-)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to David Brown on Fri Feb 2 15:28:49 2024
    XPost: comp.unix.programmer

    On Fri, 2 Feb 2024 09:02:15 +0100
    David Brown <david.brown@hesbynett.no> wrote:

    But don't hold your breath waiting for something that will replace
    make, or attract users of any other build system.



It seems you have already forgotten the context of my post that started
this short sub-thread.

BTW, I would imagine that Stu Feldman, if he is still in good health,
would find talking with Bart more entertaining than talking with you.
I think you English speakers call it 'birds of a feather'.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to bart on Fri Feb 2 13:29:53 2024
    XPost: comp.unix.programmer

    On 02/02/2024 11:13, bart wrote:

    You're the one who needs to first write a pile of garbage within a
    makefile in order for you to do:

    make prog

    Below is the makefile needed to build lua 5.4, which is a project of
    only 35 C modules. Simple, isn't it?

    ---------------------------------
    # Makefile for building Lua
    # See ../doc/readme.html for installation and customization instructions.

    # == CHANGE THE SETTINGS BELOW TO SUIT YOUR ENVIRONMENT

Now this is an interesting comment. The makefile is set up for gcc. For
another compiler it won't work.

If I try to switch to 'tcc', there are a number of problems. First,
unless you do 'make clean', the .o files lying about (I guess a
consequence of being able to do incremental builds) are incompatible.

At this point I discovered a bug in the makefile for Lua (you might say
it's not a bug, it's one of the settings that need changing, but I've no
idea how or where):

    Although this makefile works with gcc on Windows, it thinks the
    executable is called 'lua', not 'lua.exe'. It will produce 'lua.exe'
    with gcc, but it checks for the existence of 'lua'.

    That is never present, so it always links; it never says 'is up-to-date'.

    With tcc however, there's another issue: tcc requires the .exe extension
    in the -o option, otherwise it writes the executable as 'lua'. Now, at
    last, make sees 'lua' and deems it up-to-date. Unfortunately that won't
    run under Windows.

    Either not at all, or it will use the lua.exe left over from gcc. I can
    bodge this by using '-o $@.exe', producing lua.exe from tcc, but make is
    still checking 'lua'.

    There are some minor things: tcc doesn't like the -lm option for example.

    But what it comes down to is that it seems I need a separate makefile
    for each compiler. As supplied, it didn't even work 100% for gcc on Windows.

    That means duplicating all that file info.

    This is a solution I used before, using this @ file:

    ------------------------------
    -O2 -s -o lua.exe
    lua.c lapi.c lcode.c lctype.c ldebug.c ldo.c ldump.c lfunc.c lgc.c
    llex.c lmem.c lobject.c lopcodes.c lparser.c lstate.c lstring.c
    ltable.c ltm.c lundump.c lvm.c lzio.c lauxlib.c lbaselib.c lcorolib.c
    ldblib.c liolib.c lmathlib.c loadlib.c loslib.c lstrlib.c ltablib.c
    lutf8lib.c linit.c
    ------------------------------


    If I run it like this:

    gcc @luafiles

    it produces a 260KB executable. Which is another interesting thing:
    using 'make lua' set up for gcc produces a 360KB executable.

    But I can also run it like this:

    tcc @luafiles

    The same file works for both gcc and tcc.

    It won't work for mcc unless I split it into two, as that first line of
    options doesn't work there. However with mcc I can now just do this:

    mcc lua

    So two solutions for this project that (1) don't involve a makefile; (2)
    work better than the makefile.

    It's true that it involved recompiling every module. But tcc still
    builds this project in 0.3 seconds.

This project contains 34 C files, of which 33 are needed (not 35 as I
said). That means that using *.c is not possible, unless that extra file
(I believe used when building a shared library) is renamed.

    If that is done, then all compilers just need "*.c" plus whatever other
    options are needed.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to David Brown on Fri Feb 2 13:47:25 2024
    XPost: comp.unix.programmer

    On 02/02/2024 08:02, David Brown wrote:
    On 01/02/2024 23:55, Michael S wrote:
    On Thu, 1 Feb 2024 22:38:13 +0100
    David Brown <david.brown@hesbynett.no> wrote:

    On 01/02/2024 21:23, Michael S wrote:
    On Thu, 1 Feb 2024 18:34:08 +0000



Your proposal and the needs of David Brown are not necessarily
contradictory.
All you need to do to satisfy him is to add to your compiler an
option for exporting dependencies in make-compatible format, i.e.
something very similar to the -MD option of gcc.

    Then David could write in his makefile:

    out/foo.elf : main_foo.c
        mcc -MD $< -o $@

    -include out/foo.d

And then to proceed with automation of his pre- and post-processing
needs.

    But then I'd still be using "make", and Bart would not be happy.

    And "gcc -MD" does not need any extra #pragmas, so presumably neither
    would an implementation of that feature in bcc (or mcc or whatever).
    So Bart's new system would disappear entirely.




Bart spares you from managing list(s) of objects in your makefile and
from writing arcane helper macros.
Yes, I know, you copy&paste arcane macros from project to project, but
you had to write them n years ago and that surely was not easy.


    Google "makefile automatic dependencies", then adapt to suit your own needs.  Re-use the same makefile time and again.

    Yes, some of the functions I have in my makefiles are a bit hairy, and
    some of the command line options for gcc are a bit complicated.  They
    are done now.

    If there had been an easier way than this, which still let me do what I
    need (Bart's system does not), which is popular enough that you can
    easily google for examples, blogs, and tutorials, then I'd have been
    happy to use that at the time.  I won't change to something else unless
    it gives me significant additional benefits.

People smarter and more experienced than Bart have been trying to invent
better replacements for "make" for many decades.  None have succeeded.
Some build systems are better in some ways, but nothing has come close
to covering the wide range of features and uses of make, or gained hold
outside a particular niche.  Everyone who has ever made serious use of
"make" knows it has many flaws, unnecessary complications, limitations
and inefficiencies.  Despite that, it is the best we have.

    With Bart's limited knowledge and experience,

That's true: only 47 years in computing, and 42 years of evolving,
implementing and running my systems language.

What can I possibly know about compiling source files of a lower-level
language into binaries?

    How many assemblers, compilers, linkers, and interpreters have /you/
    written?

    and deeply ingrained
    prejudices and misunderstandings, the best we can hope for is something
    that works well enough for some simple cases of C programs.

    With the proposal outlined in my OP, any of MY C programs, if I was to
    write or port multi-module projects in that language, could be trivially
    built by giving only the name of the compiler, and the name of one module.

      More
    realistically, it will work for Bart's use alone.

It certainly won't work for your stuff, or SL's, or JP's, or TR's, as you
all seem to delight in wheeling out the most complex scenarios you can find.

    That is another aspect you might do well to learn how to do: KISS. (Yes
    I can be a patronising fuck too.)


And that, of course, is absolutely fine.  No one is paying Bart to write
a generic build system, or something of use to anyone else.  He is free
to write exactly what he wants, in the way he wants, and if he ends up
with a tool that he finds useful himself, that is great.  If he ends up
with something that at least some other people find useful, that is even
better, and I wish him luck with his work.

    But don't hold your breath waiting for something that will replace make,
    or attract users of any other build system.

Jesus. And you seem determined to ignore everything I write, or have
a short memory.

    I'm not suggesting replacing make, only to reduce its involvement.

Twice I posted a list of 3 things that make takes care of; I'm looking
at replacing just 1 of those things, the one which for me is most critical.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to David Brown on Fri Feb 2 14:14:31 2024
    XPost: comp.unix.programmer

    On 02/02/2024 09:47, David Brown wrote:
    On 01/02/2024 23:29, bart wrote:

    As I said, C's uses of .h and .c files are chaotic.

    My uses of .h and .c files are not chaotic.

    We can't write tools that only work for careful users. Any open-source
    project I want to build WILL be chaotic.

    We can however write languages where you are forced to be more
    disciplined. Mine doesn't have the equivalent of .h files for example.

    However this is about C.


    I first got rid of a formal 'linker' about 40 years ago. I got rid of
    the notion of combining independently compiled modules into an
    executable a decade ago.

    No, you built a monolithic tool that /included/ the linker.

    No, I ELIMINATED the linker.

    And in the past, I wrote a program called a Loader, much simpler than a
    linker, and very fast (it had to be as I worked with floppies).

      That's fine
    for niche tools that are not intended to work with anything else.  Most people work with many tools - that's why we have standards, defined file formats, and flexible tools with wide support.

    Other people got rid of monolithic tools forty years ago when they
    realised it was a terrible way to organise things.





    But I suspect you don't understand what a 'whole-program compiler' does:


    I know exactly what it does.  I am entirely without doubt that I know
    the point and advantages of them better than you do

    You can't create a language devised for whole-program compilation, and implement a full-stack compiler for it, without learning a lot about the
    ins and outs.

    So I suspect I know a bit more about it than you do.

    Probably you're mixing this up with whole-program optimisation.

    - the /real/ points
    and advantages, not some pathetic "it means I don't have to use that
    horrible nasty make program" reason.

    * It means that for each binary, all sources are recompiled at the same
       time to create it

    No, it does not.

    That's not a whole-program compiler then. Not if half the modules were
    compiled last week!


    * It doesn't mean that an application can only comprise one binary

    Correct.


    * It moves the compilation unit granularity from a module to a single
       EXE or DLL file

    No, it does not.

    Again, it can't be a whole-program compiler if it can compile modules independently.

    In real-world whole program compilation systems, the focus is on
    inter-module optimisations.  Total build times are expected to go /up/. Build complexity can be much higher, especially for large programs.  It
    is more often used for C++ than C.

    The main point of a lot of whole-program compilation is to allow
    cross-module optimisation.  It means you can have "access" functions
    hidden away in implementation files so that you avoid global variables
    or inter-dependencies between modules, but now they can be inline across modules so that you have no overhead or costs for this.  It means you
    can write code that is more structured and modular, with different teams handling different parts, and with layers of abstractions, but when you
    pull it all together into one whole-program build, the run-time costs
    and overhead for this all disappear.  And it means lots of checks and
    static analysis can be done across the whole program.
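
A minimal sketch of the kind of hidden "access" function being described
here; the file and function names are invented for illustration:

    /* counter.c -- the variable is private to this translation unit */
    static long counter;

    long counter_get(void)  { return counter; }
    void counter_bump(void) { counter += 1; }

    /* main.c -- another translation unit, using only the accessors */
    long counter_get(void);
    void counter_bump(void);

    int main(void) {
        counter_bump();
        return (int)counter_get();
    }

Compiled conventionally, every access pays for a call across the module
boundary; under a whole-program or link-time-optimising build the calls
can be inlined away, so the abstraction costs nothing at run time.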


    For such programs, each translation unit is still compiled separately,
    but the "object" files contain internal data structures and analysis information, rather than generated code.  Lots of the work is done by
    this point, with inter-procedural optimisations done within the unit.
    These compilations will be done as needed, in parallel, under the
    control of a build system.  Then they are combined for the linking and link-time optimisation which fits the parts together.  Doing this in a scalable way is hard, and the subject of a lot of research, as you need
    to partition it into chunks that can be handled in parallel on multiple
    cpu cores (or even distributed amongst servers).  Once you have parts of code that are ready, they are handed on to backend compilers that do
    more optimisation and generate the object code, and this in turn is
    linked (sometimes incrementally in parts, again aiming at improving
parallel building and scalability).
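
As a concrete, much-simplified illustration, gcc exposes this pipeline
through its -flto option (file names follow the sketch above):

    gcc -O2 -flto -c counter.c
    gcc -O2 -flto -c main.c
    gcc -O2 -flto counter.o main.o -o app

With -flto the object files carry the compiler's internal representation
as well as, or instead of, generated code, and the cross-module
optimisation happens at the final, link step.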

    You've just described a tremendously complex way to do whole-program
    analysis.

    There are easier ways. The C transpiler I use takes a project of dozens
    of modules in my language, and produces a single C source file which
    will form one EXE or one DLL file.

    Now any ordinary optimising C compiler has a view of the entire program
    and can do wider optimisations (but that view does not span multiple
    EXE/DLL files.)

    /You/ can't work it, but you excel at failing to get things working. You
    have a special gift - you just have to look at a computer with tools
    that you didn't write yourself, and it collapses.

    Yes, I do. I'm like that kid poking fun at the emperor's new clothes;
    I'm just stating what I see. But in one way it is hilarious seeing you
    lot defend programs like 'as' to the death.

    Why not just admit that it is a POS that you've had to learn to live
    with, instead of trying to make out it is somehow superior?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to David Brown on Fri Feb 2 15:45:31 2024
    XPost: comp.unix.programmer

    On Fri, 2 Feb 2024 10:47:12 +0100
    David Brown <david.brown@hesbynett.no> wrote:

    On 01/02/2024 23:29, bart wrote:
    On 01/02/2024 21:34, David Brown wrote:
    On 01/02/2024 19:34, bart wrote:

    You don't see that the language taking over task (1) of the
    things that makefiles do, and possibly (2) (of the list I posted;
    repeated below), can streamline makefiles to make them shorter,
    simpler, easier to write and to read, and with fewer
    opportunities to get stuff wrong?

    That was a rhetorical question. Obviously not.

    I've nothing against shorter or simpler makefiles. But as far as
    I can see, you are just moving the same information from a
    makefile into the C files.

    Indeed, you are duplicating things - now your C files have to have
    "#pragma module this, #pragma module that" in addition to having
    "#include this.h, #include that.h". With my makefiles, all the
    "this" and "that" is found automatically - writing the includes in
    the C code is sufficient.

    I don't think so. Seeing:

    #include "file.h"

    doesn't necessarily mean there is a matching "file.c". It might not
    exist, or the header might be for some external library, or maybe
    it does exist but in a different location.

    As I said, you are duplicating things.

    For my builds, I do not have anywhere that I need to specify "file.c".


Or maybe some code may use a file "fred.c", which needs to be
submitted to the compiler, but for which there is either no header
used, or a header with a different name.

    As I said, C's uses of .h and .c files are chaotic.

    My uses of .h and .c files are not chaotic.

    Maybe you can't write well-structured C programs. Certainly not
    everyone can. (And /please/ do not give another list of open source programs that you don't like. I didn't write them. I can tell you
    how and why /I/ organise my projects and makefiles - I don't speak
    for others.)


    Did you have in mind using gcc's -MM option? For my 'cipher.c'
    demo, that only gives a set of header names. Missing are hmac.c
    and sha2.c.

    I use makefiles where gcc's "-M" options are part of the solution -
    not the whole solution.
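
For readers who haven't used them, the -M family prints a make-style
rule listing the headers a file depends on; for the cipher demo the
output would be something like this (the exact header names are assumed):

    $ gcc -MM cipher.c
    cipher.o: cipher.c hmac.h sha2.h

Note that nothing in that output says hmac.c and sha2.c must also be
compiled and linked - which is the gap being pointed out above.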

    If I try it on lua.c, it gives me only 5 header files; the project comprises 33 .c files and 27 .h files.


    I don't care. I did not write lua.

    But I /have/ integrated lua with one of my projects, long ago. It
    fit into my makefile format without trouble - I added the lua
    directory as a subdirectory of my source directory, and that was all
    that was needed.



    Perhaps I would find your tools worked for a "Hello, world"
    project. Maybe they were still okay as it got slightly bigger.
    Then I'd have something that they could not handle, and I'd
    reach for make. What would be the point of using "make" to
    automate - for example - post-processing of a binary to add a
    CRC check, but using your tools to handle the build? It's much
    easier just to use "make" for the whole thing.


Because building one binary is a process that should be the job of a
compiler, not some random external tool that knows nothing of the
language or compiler.

    No, it is the job of the linker.

That is where you're still stuck in the past.

    I first got rid of a formal 'linker' about 40 years ago. I got rid
    of the notion of combining independently compiled modules into an executable a decade ago.

    No, you built a monolithic tool that /included/ the linker. That's
    fine for niche tools that are not intended to work with anything
    else. Most people work with many tools - that's why we have
    standards, defined file formats, and flexible tools with wide support.

    Other people got rid of monolithic tools forty years ago when they
    realised it was a terrible way to organise things.


Actually, nowadays monolithic tools are a solid majority in programming.
I mean, programming in general, not C/C++/Fortran programming, which by
itself is a [sizable] minority.
Even in C++, a majority uses non-monolithic tools well hidden behind a
front end (IDE) that makes them indistinguishable from monolithic ones.



    But I suspect you don't understand what a 'whole-program compiler'
    does:

    I know exactly what it does. I am entirely without doubt that I know
    the point and advantages of them better than you do - the /real/
    points and advantages, not some pathetic "it means I don't have to
    use that horrible nasty make program" reason.

    * It means that for each binary, all sources are recompiled at the
    same time to create it

    No, it does not.


    * It doesn't mean that an application can only comprise one binary

    Correct.


    * It moves the compilation unit granularity from a module to a
    single EXE or DLL file

    No, it does not.


* Interfaces (in the case of a lower-level language) are moved from
inter-module to inter-program. The boundaries are between one
program or library and another, not between modules.

    Correct.


    A language which claims to have a module system, but still compiles
    a module at a time, will probably still have discrete inter-module interfaces, although they may be handled automatically.


    Correct.


    In real-world whole program compilation systems, the focus is on inter-module optimisations. Total build times are expected to go
    /up/. Build complexity can be much higher, especially for large
    programs. It is more often used for C++ than C.

    The main point of a lot of whole-program compilation is to allow cross-module optimisation. It means you can have "access" functions
    hidden away in implementation files so that you avoid global
    variables or inter-dependencies between modules, but now they can be
    inline across modules so that you have no overhead or costs for this.
    It means you can write code that is more structured and modular,
    with different teams handling different parts, and with layers of abstractions, but when you pull it all together into one
    whole-program build, the run-time costs and overhead for this all
    disappear. And it means lots of checks and static analysis can be
    done across the whole program.


    For such programs, each translation unit is still compiled
    separately, but the "object" files contain internal data structures
    and analysis information, rather than generated code. Lots of the
    work is done by this point, with inter-procedural optimisations done
    within the unit. These compilations will be done as needed, in
    parallel, under the control of a build system. Then they are
    combined for the linking and link-time optimisation which fits the
    parts together. Doing this in a scalable way is hard, and the
    subject of a lot of research, as you need to partition it into chunks
    that can be handled in parallel on multiple cpu cores (or even
    distributed amongst servers). Once you have parts of code that are
    ready, they are handed on to backend compilers that do more
    optimisation and generate the object code, and this in turn is linked (sometimes incrementally in parts, again aiming at improving parallel building and scalability.


    You go to all this effort because you are building software that is
    used by millions of people, and your build effort is minor compared
    to the total improvements for all users combined. Or you do it
    because you are building speed-critical software. Or you want the
    best static analysis you can get, and want that done across modules.
    Or you are building embedded systems that need to be as efficient as possible.

    You don't do it because you find "make" ugly.


    It is also very useful on old-fashioned microcontrollers with
    multiple banks for data ram and code memory, and no good data stack
    access - the compiler can do large-scale lifetime analysis and
    optimise placement and the re-use of the very limited ram.



    /Nobody/ has makefiles forced on them. People use "make" because
    it is convenient, and it works.

    BUT IT DOESN'T.

    IT DOES WORK.

    People use it all the time.

It fails a lot of the time on Windows, but the makefiles are too
complicated to figure out why.

    People use it all the time on Windows.

    Even Microsoft ships its own version of make, "nmake.exe", and has
    done for decades.

    /You/ can't work it, but you excel at failing to get things working.
    You have a special gift - you just have to look at a computer with
    tools that you didn't write yourself, and it collapses.


    But I have no interest in changing to something vastly more
    limited and which adds nothing at all.

    That's right; it adds nothing, but it takes a lot away! Like a lot
    of failure points.

    Like pretty much everything I need.



    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Michael S on Fri Feb 2 15:49:20 2024
    XPost: comp.unix.programmer

    On 02/02/2024 14:28, Michael S wrote:
    On Fri, 2 Feb 2024 09:02:15 +0100
    David Brown <david.brown@hesbynett.no> wrote:

    But don't hold your breath waiting for something that will replace
    make, or attract users of any other build system.



    It seems, you already forgot the context of my post that started this
    short sub-thread.


    That is absolutely possible. It was not intentional, but the number of
    posts in recent times has been overwhelming. I apologise if I have misinterpreted what you wrote.

    BTW, I would imagine that Stu Feldman, if he is still in good health,
would find talking with Bart more entertaining than talking with you.

    I have no idea who that is, so I'll take your word for it.

I think you English speakers call it birds of a feather.


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to bart on Fri Feb 2 16:43:51 2024
    XPost: comp.unix.programmer

    On Fri, 2 Feb 2024 14:14:31 +0000
    bart <bc@freeuk.com> wrote:

    On 02/02/2024 09:47, David Brown wrote:
    On 01/02/2024 23:29, bart wrote:

    As I said, C's uses of .h and .c files are chaotic.

    My uses of .h and .c files are not chaotic.

    We can't write tools that only work for careful users. Any
    open-source project I want to build WILL be chaotic.

    We can however write languages where you are forced to be more
    disciplined. Mine doesn't have the equivalent of .h files for example.

    However this is about C.


    I first got rid of a formal 'linker' about 40 years ago. I got rid
    of the notion of combining independently compiled modules into an
    executable a decade ago.

    No, you built a monolithic tool that /included/ the linker.

    No, I ELIMINATED the linker.

    And in the past, I wrote a program called a Loader, much simpler than
    a linker, and very fast (it had to be as I worked with floppies).

    That's fine
    for niche tools that are not intended to work with anything else.
    Most people work with many tools - that's why we have standards,
    defined file formats, and flexible tools with wide support.

    Other people got rid of monolithic tools forty years ago when they realised it was a terrible way to organise things.





    But I suspect you don't understand what a 'whole-program compiler'
    does:

    I know exactly what it does. I am entirely without doubt that I
    know the point and advantages of them better than you do

    You can't create a language devised for whole-program compilation,
    and implement a full-stack compiler for it, without learning a lot
    about the ins and outs.

    So I suspect I know a bit more about it than you do.

    Probably you're mixing this up with whole-program optimisation.

    - the /real/ points
    and advantages, not some pathetic "it means I don't have to use
    that horrible nasty make program" reason.

    * It means that for each binary, all sources are recompiled at the
    same time to create it

    No, it does not.

    That's not a whole-program compiler then. Not if half the modules
    were compiled last week!


    * It doesn't mean that an application can only comprise one binary


    Correct.


    * It moves the compilation unit granularity from a module to a
    single EXE or DLL file

    No, it does not.

    Again, it can't be a whole-program compiler if it can compile modules independently.

    In real-world whole program compilation systems, the focus is on inter-module optimisations. Total build times are expected to go
    /up/. Build complexity can be much higher, especially for large
    programs. It is more often used for C++ than C.

    The main point of a lot of whole-program compilation is to allow cross-module optimisation. It means you can have "access"
    functions hidden away in implementation files so that you avoid
    global variables or inter-dependencies between modules, but now
    they can be inline across modules so that you have no overhead or
    costs for this. It means you can write code that is more
    structured and modular, with different teams handling different
    parts, and with layers of abstractions, but when you pull it all
    together into one whole-program build, the run-time costs and
    overhead for this all disappear. And it means lots of checks and
    static analysis can be done across the whole program.


    For such programs, each translation unit is still compiled
    separately, but the "object" files contain internal data structures
    and analysis information, rather than generated code. Lots of the
    work is done by this point, with inter-procedural optimisations
    done within the unit. These compilations will be done as needed, in parallel, under the control of a build system. Then they are
    combined for the linking and link-time optimisation which fits the
    parts together. Doing this in a scalable way is hard, and the
    subject of a lot of research, as you need to partition it into
    chunks that can be handled in parallel on multiple cpu cores (or
    even distributed amongst servers). Once you have parts of code
    that are ready, they are handed on to backend compilers that do
    more optimisation and generate the object code, and this in turn is
linked (sometimes incrementally in parts, again aiming at improving parallel building and scalability).

    You've just described a tremendously complex way to do whole-program analysis.


    But it proves that your statement above (it can't be a whole-program
    compiler if it can compile modules independently) is false.


    There are easier ways. The C transpiler I use takes a project of
    dozens of modules in my language, and produces a single C source file
    which will form one EXE or one DLL file.

    Now any ordinary optimising C compiler has a view of the entire
    program and can do wider optimisations (but that view does not span
    multiple EXE/DLL files.)


    If the program in question is really big then there is a good chance
    that your method will expose internal limits of the back-end compiler.
I think that's one of the reasons (not the only one) why Mozilla didn't
re-write the whole of Firefox in Rust. According to my understanding, Rust
does something similar to your approach, except that it outputs LLVM IR
instead of C, and there were real concerns that the LLVM back end would
have trouble with input as big as the whole of FF.


    /You/ can't work it, but you excel at failing to get things
    working. You have a special gift - you just have to look at a
    computer with tools that you didn't write yourself, and it
    collapses.

    Yes, I do. I'm like that kid poking fun at the emperor's new clothes;
    I'm just stating what I see. But in one way it is hilarious seeing
    you lot defend programs like 'as' to the death.

    Why not just admit that it is a POS that you've had to learn to live
    with, instead of trying to make out it is somehow superior?



I never run gnu as directly. Running it by means of a driver program
(personally I prefer clang for that task, but gcc will do the job as
well) isolates me from all peculiarities.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to David Brown on Fri Feb 2 16:53:35 2024
    XPost: comp.unix.programmer

    On Fri, 2 Feb 2024 15:49:20 +0100
    David Brown <david.brown@hesbynett.no> wrote:

    On 02/02/2024 14:28, Michael S wrote:
    On Fri, 2 Feb 2024 09:02:15 +0100
    David Brown <david.brown@hesbynett.no> wrote:

    But don't hold your breath waiting for something that will replace
    make, or attract users of any other build system.



    It seems, you already forgot the context of my post that started
    this short sub-thread.


    That is absolutely possible. It was not intentional, but the number
    of posts in recent times has been overwhelming. I apologise if I
    have misinterpreted what you wrote.

    BTW, I would imagine that Stu Feldman, if he is still in good
health, would find talking with Bart more entertaining than talking
    with you.

    I have no idea who that is, so I'll take your word for it.


    Inventor of make

I think you English speakers call it birds of a feather.



    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to bart on Fri Feb 2 15:57:28 2024
    XPost: comp.unix.programmer

    On 02/02/2024 14:47, bart wrote:
    On 02/02/2024 08:02, David Brown wrote:
    On 01/02/2024 23:55, Michael S wrote:
    On Thu, 1 Feb 2024 22:38:13 +0100
    David Brown <david.brown@hesbynett.no> wrote:

    On 01/02/2024 21:23, Michael S wrote:
    On Thu, 1 Feb 2024 18:34:08 +0000



Your proposal and the needs of David Brown are not necessarily
contradictory.
All you need to do to satisfy him is to add to your compiler an
option for exporting dependencies in a make-compatible format, i.e.
something very similar to gcc's -MD option.

    Then David could write in his makefile:

    out/foo.elf : main_foo.c
        mcc -MD $< -o $@

    -include out/foo.d
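
(For reference: $< expands to the first prerequisite, main_foo.c, and $@
to the target, out/foo.elf. The -MD option - behaviour borrowed from gcc
- would write out/foo.d as a side effect of compilation, and the -include
line feeds that file back into make on the next run.)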

And then to proceed with automation of his pre- and post-processing
needs.

    But then I'd still be using "make", and Bart would not be happy.

    And "gcc -MD" does not need any extra #pragmas, so presumably neither
    would an implementation of that feature in bcc (or mcc or whatever).
    So Bart's new system would disappear entirely.




    Bart spares you from managing list(s) of objects in your makefile and
from writing arcane helper macros.
Yes, I know, you copy&paste arcane macros from project to project, but
    you had to write them n years ago and that surely was not easy.


    Google "makefile automatic dependencies", then adapt to suit your own
    needs.  Re-use the same makefile time and again.
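
The usual pattern that search turns up looks roughly like this (the
variable names are illustrative; -MMD and -MP are real gcc options, and
the recipe lines must be indented with tabs):

    OBJS := $(patsubst %.c,%.o,$(wildcard *.c))

    app: $(OBJS)
            $(CC) $(CFLAGS) $(OBJS) -o $@

    %.o: %.c
            $(CC) $(CFLAGS) -MMD -MP -c $< -o $@

    -include $(OBJS:.o=.d)

-MMD writes a .d fragment naming the headers each .c file was seen to
include, and -MP adds dummy targets so that deleting a header doesn't
break the next build.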

    Yes, some of the functions I have in my makefiles are a bit hairy, and
    some of the command line options for gcc are a bit complicated.  They
    are done now.

    If there had been an easier way than this, which still let me do what
    I need (Bart's system does not), which is popular enough that you can
    easily google for examples, blogs, and tutorials, then I'd have been
    happy to use that at the time.  I won't change to something else
    unless it gives me significant additional benefits.

    People smarter and more experienced than Bart have been trying to
    invent better replacements for "make" for many decades.  None have
    succeeded. Some build systems are better in some ways, but nothing has
    come close to covering the wide range of features and uses of make, or
    gaining hold outside a particular niche.  Everyone who has ever made
    serious use of "make" knows it has many flaws, unnecessarily
    complications, limitations and inefficiencies.  Despite that, it is
    the best we have.

    With Bart's limited knowledge and experience,

    That's true: only 47 years in computing, and 42 years of evolving, implementing and running my systems language.

    Yes. Most of it using your languages, your tools, your programs, and determinedly refusing to learn or use anything else more than the barest minimum, and so completely convinced of your own superiority and the
    failings of everyone else and all other languages and software that you
    are unable to learn things properly or consider anything from a
    viewpoint other than your own.

    You have experience - but it is limited by the walls you put up around yourself.


What can I possibly know about compiling source files of a lower-level language into binaries?

    You know how /you/ do it, and how /you/ want to do it. You know sod all
    about anyone else.


    That is another aspect you might do well to learn how to do: KISS. (Yes
    I can be a patronising fuck too.)


    KISS is great. It's what encourages people to use existing standard
    tools like "make" and "C", instead of trying to re-invent their own
    personal wheels all the time. /Sometimes/ it is useful to re-invent
    something from scratch. Most of the time, it is not.


    And that, of course, is absolutely fine.  No one is paying Bart to
    write a generic build system, or something of use to anyone else.  He
    is free to write exactly what he wants, in the way he wants, and if
    ends up with a tool that he finds useful himself, that is great.  If
    he ends up with something that at least some other people find useful,
    that is even better, and I wish him luck with his work.

    But don't hold your breath waiting for something that will replace
    make, or attract users of any other build system.

Jesus. And you seem determined to ignore everything I write, or have
    a short memory.

I'm not suggesting replacing make, only reducing its involvement.

    I didn't say you were trying to replace make, or even thought you were.
    I said you were not replacing make. There's a difference.


Twice I posted a list of 3 things that make takes care of; I'm looking
at replacing just 1 of those things, the one which for me is most critical.


    And I have repeatedly said that if you are making a tool that is useful
    for you, then great - make your tool and use it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Michael S on Fri Feb 2 15:18:51 2024
    XPost: comp.unix.programmer

    On 02/02/2024 14:43, Michael S wrote:
    On Fri, 2 Feb 2024 14:14:31 +0000
    bart <bc@freeuk.com> wrote:

    You've just described a tremendously complex way to do whole-program
    analysis.


    But it proves that your statement above (it can't be a whole-program
    compiler if it can compile modules independently) is false.

    Then /every/ compiler can be regarded as a whole-program one, since the
    end result, even if the modules were randomly compiled over the last
    month, will always be a whole program.

    So it comes down to what is meant by a whole-program compiler.

My definition is where you build one program (eg. one EXE or DLL file on Windows) with ONE invocation of the compiler, which processes ALL source
    and support files from scratch.

    The output (from my compiler) is a single file, usually an EXE or DLL,
    that may use external shared libraries. Or, rarely, it may generate a
    single OBJ file for more exotic requirements, but it will need external
    tools. Then it may end up as a component of a larger program.

    Or sometimes the output is fixed up in memory and run immediately.

    That describes the compiler I use for my systems language.

    My C compiler is not a whole-program one. Although you can submit all
    modules and it can produce one EXE/DLL file, so that the behaviour can
    appear similar, internally they are compiled independently.

    I have thought about using real whole-program techniques (so that all
    modules share a global symbol table for example, and common headers are processed only once), but I don't use C enough to make that interesting
    to attempt.


    There are easier ways. The C transpiler I use takes a project of
    dozens of modules in my language, and produces a single C source file
    which will form one EXE or one DLL file.

    Now any ordinary optimising C compiler has a view of the entire
    program and can do wider optimisations (but that view does not span
    multiple EXE/DLL files.)


    If the program in question is really big then there is a good chance
    that your method will expose internal limits of the back-end compiler.

    My personal whole-program projects impose some limitations.

    One is the scale of the application being compiled. However they are
    designed for use with a fast compiler. That puts an upper limit of about
    0.5M lines per project, if you want to keep build time below, say, 1 second.

    (Figures pertain to my slowish PC, running an unoptimised compiler, so
    are conservative. An optimised compiler might be 40% faster.)

    0.5M lines of code means about a 5MB executable, which is a pretty hefty project. The vast majority of executables and libraries on my PC are
    smaller than that.

    Another is that whole-program compilation is harder to parallelise (the
    above figures are for a single core). But you can of course compile
    multiple programs at the same time.

    The killer is that most professional compilers are hugely complex: they
    are big, and they take considerable machine resources. These are ones
    like gcc or any using LLVM.

    So to get around that in order to do whole-program stuff, things get
    very, very complicated.

    I can't help that.

    But I can't remember how we got here. The thread subject is the far-simpler-to-realise topic of discovering the modules for a
    non-whole-program C compiler, which seems to give the big boys a lot
    more trouble!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to bart on Fri Feb 2 15:18:11 2024
    XPost: comp.unix.programmer

    bart <bc@freeuk.com> writes:
    On 02/02/2024 08:02, David Brown wrote:

    With Bart's limited knowledge and experience,

That's true: only 47 years in computing, and 42 years of evolving, implementing and running my systems language.

It's pretty clear that you have very limited knowledge
and experience with unix, make, and pretty much
anything that isn't your soi-disant compiler.


What can I possibly know about compiling source files of a lower-level language into binaries?

    Very little, it appears, outside of your toy projects.


    How many assemblers, compilers, linkers, and interpreters have /you/
    written?

    Can't speak for David, but in my case, at least one of each, and
    you can add operating systems and hypervisors to that list.


It certainly won't work for your stuff, or SL's, or JP's, or TR's, as you
    all seem to delight in wheeling out the most complex scenarios you can find.

    The "stuff" I write is for customers. Any so-called-bart-complexity is based on
    customer requirements. The customers are quite happy with the solutions
    they get.


    That is another aspect you might do well to learn how to do: KISS. (Yes
    I can be a patronising fuck too.)

    KISS is a good principle to follow, and while I cannot again speak
    for David, it's a principle followed by most programmers I've worked
    with. That doesn't mean throwing away perfectly usable tools
    (one can easily make KISS-compliant makefiles, for example).


I'm not suggesting replacing make, only reducing its involvement.

And to reduce its involvement, something must replace make, ipso facto.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Michael S on Fri Feb 2 16:26:12 2024
    XPost: comp.unix.programmer

    On 02/02/2024 14:45, Michael S wrote:

Actually, nowadays monolithic tools are a solid majority in programming.
I mean, programming in general, not C/C++/Fortran programming, which by
itself is a [sizable] minority.
Even in C++, a majority uses non-monolithic tools well hidden behind a
front end (IDE) that makes them indistinguishable from monolithic ones.


    It can often be helpful to have a single point of interaction - a
    front-end that combines tools. But usually these are made of parts.

    For many of the microcontrollers I work with, the manufacturer's
    standard development toolset is based around Eclipse and gcc. From the
    user point of view, it looks a lot like one monolithic IDE that lets you
    write your code, compile and link it, and download and debug it on the microcontroller. Under the hood, it is far from a monolithic
    application. Different bits come from many different places. This
    means the microcontroller manufacturer is only making the bits that are specific to /their/ needs - such as special views while debugging, or
    "wizards" for configuring chip pins. The Eclipse folk are experts at
    making an editor and IDE, the gcc folks are experts at the compiler, the openocd folks know about jtag debugging, and so on. And to a fair
    extent, advanced users can use the bits they want and leave out other
    bits. I sometimes use other editors, but might still use the toolchain provided with the manufacturer's tools. I might swap out the debugger connection. I might use the IDE for something completely different. I
    might install additional features in the IDE. I might use different toolchains. Manufacturers, when putting things together, might change
    where they get their toolchains, or what debugging connectors they use.
    It's even been known for them to swap out the base IDE while keeping
    most of the rest the same (VS Code has become a popular choice now, and
    a few use NetBeans rather than Eclipse).

    (Oh, and for those that don't believe "make" and "gcc" work on Windows,
    these development tools invariably have "make" and almost invariably use
    gcc as their toolchain, all working in almost exactly the same way on
    Linux and Windows. The only difference is builds are faster on Linux.)

    This is getting the best (or at least, trying to) from all worlds. It
    gives people the ease-of-use advantages of monolithic tools without the
key disadvantages of real monolithic tools - half-arsed editors,
    half-arsed project managers, half-arsed compilers, and poor
    extensibility because the suppliers are trying to do far too much
    themselves.

    I don't think it is common now to have /real/ monolithic development
    tools. But it is common to have front-ends aimed at making the
    underlying tools easier and more efficient to use, and to provide
    all-in-one base packages.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to bart on Fri Feb 2 16:31:46 2024
    XPost: comp.unix.programmer

    On 02/02/2024 15:14, bart wrote:

    Yes, I do. I'm like that kid poking fun at the emperor's new clothes;
    I'm just stating what I see. But in one way it is hilarious seeing you
    lot defend programs like 'as' to the death.


    No, /you/ are the emperor in this analogy. Well, you are actually the
    kid - except you are the kid with no clothes who /thinks/ he's an emperor.

    Why not just admit that it is a POS that you've had to learn to live
    with, instead of trying to make out it is somehow superior?


    The whole world is out of step, except Bart.

    Has it never occurred to you that when you are in disagreement with
    everyone, /you/ might be the one that is wrong? I think you suffer from
    the "misunderstood genius" myth. It's surprisingly common amongst
    people who have invested heavily in going their own way, against common knowledge or common practice. It's a sort of psychological defence
    mechanism against realising you've been wrong all this time.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to bart on Fri Feb 2 16:26:12 2024
    XPost: comp.unix.programmer

    On 2024-02-02, bart <bc@freeuk.com> wrote:
    disciplined. Mine doesn't have the equivalent of .h files for example.

    My musical instrument has frets for easy intonation, you silly violin
    people, in your silly violin newsgroup.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to David Brown on Fri Feb 2 17:00:12 2024
    XPost: comp.unix.programmer

    On 02/02/2024 15:31, David Brown wrote:
    On 02/02/2024 15:14, bart wrote:

    Yes, I do. I'm like that kid poking fun at the emperor's new clothes;
    I'm just stating what I see. But in one way it is hilarious seeing you
    lot defend programs like 'as' to the death.


    No, /you/ are the emperor in this analogy.  Well, you are actually the
    kid - except you are the kid with no clothes who /thinks/ he's an emperor.

    Why not just admit that it is a POS that you've had to learn to live
    with, instead of trying to make out it is somehow superior?


    The whole world is out of step, except Bart.

    Has it ever occurred to YOU that the world is more than Unix and make
    and massive compilers like gcc and clang?


    Has it never occurred to you that when you are in disagreement with
    everyone, /you/ might be the one that is wrong?  I think you suffer from
    the "misunderstood genius" myth.  It's surprisingly common amongst
    people who have invested heavily in going their own way, against common knowledge or common practice.  It's a sort of psychological defence mechanism against realising you've been wrong all this time.

    This is a newsgroup about C. That is a language that can be fairly
    adequately implemented with a 180KB program, the size of Tiny C. Tiny C
    itself can turn C source into binary at about 10MB per second.

    So, a toy language, really, and a toy implementation that nevertheless
    does the job: in most cases, a user of the resulting program will not be
    able to tell how it was compiled.

And yet there is this massive collection of huge, complex tools built
    around a toy language, dwarfing it by 1000:1, that you insist is what
    it's really all about, and you want to put down anyone who disagrees.

    It's like saying that the only businesses worth having are huge
    corporations, or the only form of transport must be a jetliner.

    The way 'as' works IS rubbish. It is fascinating how you keep trying to
    turn it round and make it about me. There can't possibly be anything
    wrong with it, whoever says so must be deluded!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to bart on Fri Feb 2 17:31:40 2024
    XPost: comp.unix.programmer

    On 2024-02-02, bart <bc@freeuk.com> wrote:
    Has it ever occurred to YOU that the world is more than Unix and make
    and massive compilers like gcc and clang?

    There is more, but from your perspective, it's just more stuff to
    shake your fist at and avoid learning about.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Scott Lurndal on Fri Feb 2 17:44:26 2024
    XPost: comp.unix.programmer

    On 02/02/2024 15:18, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 02/02/2024 08:02, David Brown wrote:

    With Bart's limited knowledge and experience,

    That's true: only 47 years in computing, and 42 years of evolving,
    implementing and running my systems language.

    It's pretty clear that you have very limited knowledge
and experience with unix, make, and pretty much
anything that isn't your soi-disant compiler.

    Yes. And?



What can I possibly know about compiling source files of a lower-level
    language into binaries?

    Very little, it appears, outside of your toy projects.

    That's right, I only have experience of the stuff I've done. And?

    Most stuff I want to build is on a similar scale, so you'd probably
    consider all that as toys too.

    You're saying that anyone not using Unix, not building 10Mloc projects,
    and not a fan of make, should FOAD?



    How many assemblers, compilers, linkers, and interpreters have /you/
    written?

    OK. How do I know these aren't just toys, or is it only you who is
    allowed to judge?

    BTW what exactly is a toy project?


    Can't speak for David, but in my case, at least one of each, and
    you can add operating systems and hypervisors to that list.

    I don't do OSes. If I did, you probably have a good idea of what mine
    would look like!


It certainly won't work for your stuff, or SL's, or JP's, or TR's, as you
    all seem to delight in wheeling out the most complex scenarios you can find.

    The "stuff" I write is for customers. Any so-called-bart-complexity is based on
    customer requirements. The customers are quite happy with the solutions
    they get.


    That is another aspect you might do well to learn how to do: KISS. (Yes
    I can be a patronising fuck too.)

    KISS is a good principle to follow, and while I cannot again speak
    for David, it's a principle followed by most programmers I've worked
    with. That doesn't mean throwing away perfectly usable tools
    (one can easily make KISS-compliant makefiles, for example).


I'm not suggesting replacing make, only reducing its involvement.

And to reduce its involvement, something must replace make, ipso facto.

No. I'm saying make should be less involved in specifying which files are
to be submitted to a compiler toolchain.

Especially for a makefile specifying a production or distribution build,
such as one done at a remote site by someone who is not the developer, but
just wants a working binary.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to bart on Fri Feb 2 18:26:53 2024
    XPost: comp.unix.programmer

    bart <bc@freeuk.com> writes:
    On 02/02/2024 15:18, Scott Lurndal wrote:


    You're saying that anyone not using Unix, not building 10Mloc projects,
    and not a fan of make, should FOAD?

    No. I'm saying that your dislike of make is personal. If you
    don't like it, don't use it. Make your own, nobody is stopping
    you. Just don't brag about how "small", "easy" or "nice" it is
    until it can handle the same job as make.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to bart on Fri Feb 2 18:54:06 2024
    XPost: comp.unix.programmer

    On 2024-02-02, bart <bc@freeuk.com> wrote:
    The way 'as' works IS rubbish.

    Pretend a developer of "as" (say, the GNU one) is reading this thread.

    What is it that is broken?

    Do you have a minimal repro test case of your issue?

    What is the proposed fix?

    turn it round and make it about me. There can't possibly be anything
    wrong with it, whoever says so must be deluded!

    A vast amount of code is being compiled daily, passing through as,
    without anyone noticing.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From tTh@21:1/5 to bart on Fri Feb 2 20:43:59 2024
    XPost: comp.unix.programmer

    On 2/2/24 16:18, bart wrote:

My definition is where you build one program (eg. one EXE or DLL file on Windows) with ONE invocation of the compiler, which processes ALL source
    and support files from scratch.

And can you disclose the magic trick that lets your magic
compiler know exactly the list of "ALL source and support
files" needed for a build from scratch?


    --
+---------------------------------------------------------------------+
| https://tube.interhacker.space/a/tth/video-channels                  |
+---------------------------------------------------------------------+

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Keith Thompson on Fri Feb 2 19:52:45 2024
    XPost: comp.unix.programmer

    On 02/02/2024 18:36, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    [...]
    The way 'as' works IS rubbish. It is fascinating how you keep trying
    to turn it round and make it about me. There can't possibly be
    anything wrong with it, whoever says so must be deluded!

    "as" works. It's not perfect, but it's good enough. Its job is to
translate assembly code to object code. It does that. There is
    nothing you could do with your preferred user interface (whatever that
    might be) that can't be done with the existing one. "as" is rarely
    invoked directly, so any slight clumsiness in its well defined user
    interface hardly matters. Any changes to its interface could break
    existing scripts.

    That always seems to be the excuse. Some half-finished test version is produced, with no proper file interface, and an output that temporarily
    gets sent to a.out until it can be sorted out properly.

    But it never is finished, and the same raw half-finished product works
    the same way decades later, surprising every new generation who have to
    relearn its quirks.

I saw an example today in a tutorial:

    as -o filename.o filename.as

    having to type the name twice again.


    Nobody is claiming that "there can't possibly be anything wrong with
    it". You made that up.

    Why does the way "as" works offend you?


    It just does. I've used a few assemblers, this one is downright weird:

    * The output is odd: it ALWAYS goes to a.out. Hmm, that's the executable
    file produced by gcc, so is it performing linking? No, this a.out
is an object file; gcc's a.out is an executable! (And if you try to link
a.out with gcc, it will go wrong unless you use -o)

* You have to specify the output file name and extension, so you end up
writing the input name twice

    * If you start it with no params, instead of a usage message, it sits
    silently waiting for you to type assembly code live at the terminal

    * It accepts multiple input files. OK, presumably it will write each to
    a corresponding .o file? No it writes all the code to one output.

    * So, it is combining multiple, distinct assembler files to one single
    object? That's actually quite neat. Except no, the input files are
    effectively just concatenated into one big ASM file. Any symbol L3
    in one, will clash with an L3 in another.

* All these behaviours are quite different to those of gcc. Pass
a.s, b.s, c.s to gcc, and it will produce a.o, b.o, c.o. 'as' will
concatenate and produce one object file called a.out.

    This is apparently the flagship assembler provided with Unix systems.
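
In transcript form (GNU as; behaviour as described in the list above):

    $ as                     # no arguments: silently reads assembly from stdin
    $ as one.s two.s         # both inputs assembled into a single a.out
    $ as -o one.o one.s      # the output name must be spelled out in full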

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to tTh on Fri Feb 2 20:16:27 2024
    XPost: comp.unix.programmer

    On 02/02/2024 19:43, tTh wrote:
    On 2/2/24 16:18, bart wrote:

    My definition is where you build one program (eg. one EXE or DLL file
on Windows) with ONE invocation of the compiler, which processes ALL
    source and support files from scratch.

   And can you disclose the magic trick that lets your magic
   compiler know exactly the list of "ALL source and support
   files" needed for a build from scratch?

    If talking about the language used by my whole-program compiler, the
    info comes from the module scheme. And /that/ involves simply listing
    the modules the project comprises, in one place.

    Other languages with module schemes tend to have ragged collections of
    'import' statements at the top of every module.

    Below is the lead module of my C compiler (cc.m) which is not C code,
    but in my language. This module only contains project info. This allows
    a choice of module for different configurations, but I sometimes just
    comment lines in and out.

    No other project info, import statements etc appear anywhere else.

    5 modules are not shown in that list, which belong to the language's
    standard library. Those are included automatically.


    -------------------------------
    cc.m
    -------------------------------

    !CLI
    module cc_cli

    !Global Data and Tables

    module cc_decls
    module cc_tables

    !Lexing and Parsing
    module cc_lex
    module cc_parse

    !General

    module cc_lib
    module cc_support

    !PCL handling
    module cc_blockPCL
    module cc_genPCL
    module cc_libPCL
    module cc_pcl

    !MCL handling

    module cc_mcldecls
    module cc_genmcl
    module cc_libmcl
    module cc_optim
    module cc_stackmcl

    !Bundled headers

    module cc_headers
    ! module cc_headersx

    module cc_export

    !Diagnostics
    module cc_show
    ! module cc_showdummy

    !x64 and exe backend

    ! module mc_genss
    ! module mc_objdecls
    ! module mc_writeexe
    ! module mc_writess
    ! module mc_disasm

    -------------------------------

    ! lines are comments. PCL is the IL. MCL is native code. The mc_ files
    provide direct EXE generation, rather than ASM (as normally built it
invokes my external assembler 'as'. I'm joking, it is called 'aa'). But
    I haven't bothered with that backend.

This is it in action:

c:\cx>mm cc                    (build the above app)
    Compiling cc.m---------- to cc.exe

c:\cx>cc sql                   (test cc on sql.c)
    Compiling sql.c to sql.exe

c:\cx>sql                      (see if it works)
    SQLite version 3.25.3 2018-11-05 20:37:38
    ...

    Building the compiler with 'mm cc' takes 80ms; that's why I have no need
    of dependency graphs or incremental compilation.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to bart on Fri Feb 2 20:21:50 2024
    XPost: comp.unix.programmer

    bart <bc@freeuk.com> writes:
    On 02/02/2024 18:36, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:

I saw an example today in a tutorial:

    as -o filename.o filename.as

    having to type the name twice again.

As has been pointed out numerous times, you don't need to
type the name twice.

    All I have to type to build my entire project is 'mr'.

I very seldom ever need to use 'cc' or 'as' directly except
    for one-off examples to post here.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to bart on Fri Feb 2 21:09:08 2024
    XPost: comp.unix.programmer

    On 2024-02-02, bart <bc@freeuk.com> wrote:
    That always seems to be the excuse. Some half-finished test version is produced, with no proper file interface, and an output that temporarily
    gets sent to a.out until it can be sorted out properly.

    Historically, GCC had to work with proprietary "as" programs; it
    couldn't redefine the API.

    But it never is finished, and the same raw half-finished product works
    the same way decades later, surprising every new generation who have to relearn its quirks.

    Most people using GCC do not learn anything about as.

    Even those programming in assembler, because the gcc driver
    recognizes .s and .S suffixes.

    But we've been through this?

I saw an example today in a tutorial:

    as -o filename.o filename.as

    having to type the name twice again.

    You want:

    gcc -c filename.S

    The .S file is preprocessed for you so you can use #define macros and
    #include, and handed off to an assembler, somehow resulting in
    filename.o.
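
A tiny hypothetical example of what that preprocessing buys you (x86-64
AT&T syntax; built with 'gcc count.S -o count'):

    /* count.S -- the capital .S suffix makes gcc run cpp over it first */
    #define RETVAL 42

            .text
            .globl  main
    main:
            movl    $RETVAL, %eax   /* return value in eax (SysV x86-64) */
            ret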

    There are lots of tutorials out there written by people who are
    "writing to learn".

    It just does. I've used a few assemblers, this one is downright weird:

    I've used lots of electric stove elements and room heaters! But this
    ceramic resistor is weird. It has no on/off switch or temperature
    control, and just bare wires?! No plug. Sure, if I pass current through
it, it warms up, so I guess that's what these electrical engineers call "working"
    in their little world.

    "as" is not an application designed for programming; it's an internal
    toolchain component. Why can't you understand that?

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Keith Thompson on Fri Feb 2 21:16:02 2024
    XPost: comp.unix.programmer

    On Thu, 01 Feb 2024 19:03:38 -0800, Keith Thompson wrote:

    A #include directive with <> searches for a "header", which is not
    stated to be a file. A #include directive with "" searches for a file
    in an implementation-defined manner; if that search fails, it tries
    again as if <> had been used.

    The trouble with that interpretation is, it would seem to rule out the use
    of things like include libraries for user headers. Do you really think
    that was the intention?
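
For reference, the two forms under discussion look like this (file name
illustrative):

    #include <stdio.h>   /* <>: searched for as a "header"; need not be a file */
    #include "myutil.h"  /* "": searched for as a file first; if that search
                            fails, retried as if <myutil.h> had been written */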

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to David Brown on Fri Feb 2 21:18:31 2024
    XPost: comp.unix.programmer

    On Fri, 2 Feb 2024 11:05:22 +0100, David Brown wrote:

    On 02/02/2024 00:30, Lawrence D'Oliveiro wrote:

    Ninja was created as an alternative to Make.

    It is an alternative to some uses of make - but by no means all uses.

    It gets rid of the overlap when you have a meta-build system generating
    the lowest-level build control files (Makefiles).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to bart on Fri Feb 2 21:23:59 2024
    XPost: comp.unix.programmer

    On Fri, 2 Feb 2024 19:52:45 +0000, bart wrote:

    as -o filename.o filename.as

    On *nix systems, we can use “cc” as kind of a “universal” compile command,
    not just for C code but for assembler as well, e.g.

cc -o filename.o -c filename.s

    (without preprocessor)

cc -o filename.o -c filename.S

    (with preprocessor)

    cc -o filename filename.S

    (with preprocessor and linking stages as well).

    Can your system offer these options?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to bart on Fri Feb 2 21:42:06 2024
    XPost: comp.unix.programmer

    On Fri, 2 Feb 2024 13:47:25 +0000, bart wrote:

    That's true: only 47 years in computing, and 42 years of evolving, implementing and running my systems language.

    On how many different platforms?

    Seems like your primary experience has been with beating your head against Microsoft Windows. That’s got to have health implications.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Lawrence D'Oliveiro on Fri Feb 2 21:51:43 2024
    XPost: comp.unix.programmer

    On 02/02/2024 21:23, Lawrence D'Oliveiro wrote:
    On Fri, 2 Feb 2024 19:52:45 +0000, bart wrote:

    as -o filename.o filename.as

    On *nix systems, we can use “cc” as kind of a “universal” compile command,
    not just for C code but for assembler as well, e.g.

cc -o filename.o -c filename.s

    (without preprocessor)

cc -o filename.o -c filename.S

    (with preprocessor)

    cc -o filename filename.S

    (with preprocessor and linking stages as well).

    Can your system offer these options?

    What option is that, to have one command 'cc' that can deal with N
    different languages?

    No. But I can offer a system where you have a choice of N different
    compilers or assemblers for the same language:

    lc64 filename.c
    tcc filename.c
    gcc filename.c -ofilename.exe
    mcc filename

    nasm -fwin64 filename.asm
    yasm -fwin64 filename.asm
    aa filename -obj
    as filename.s -ofilename.o

    How do you choose different compilers or assemblers with 'cc'?

    I also offer a scheme with my tools where the tool magically knows what language it is dealing with:

    mm filename (filename.m => filename.exe)
    mcc filename (filename.c => filename.exe)
    aa filename (filename.asm => filename.exe)
    qq filename (run filename.q)
    pci filename (run filename.pcl)
    ms filename (run filename.m)

    .m and .q files are lead modules of an application.
    .c files are standalone modules

    One .asm file normally represents a whole program unless generated from mcc

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Lawrence D'Oliveiro on Fri Feb 2 22:12:04 2024
    XPost: comp.unix.programmer

    On 02/02/2024 21:42, Lawrence D'Oliveiro wrote:
    On Fri, 2 Feb 2024 13:47:25 +0000, bart wrote:

    That's true: only 47 years in computing, and 42 years of evolving,
    implementing and running my systems language.

    On how many different platforms?

    I started in 1976. I started using Windows for my products in 1995
    because all my potential customers could buy an off-the-shelf Windows
    PC. Linux was nowhere. Unix was only in academia, I think; nowhere
    relevant to me anyway.

    My first compiler (not for my language) targeted the PDP-10. Subsequent ones generated code for the Z80 plus 3 generations of x86.

    I started using the C library in 1995 too (via my FFI), as it was
    simpler than WinAPI for file-handling.

    Seems like your primary experience has been with beating your head against Microsoft Windows. That’s got to have health implications.

    That wasn't a serious question was it; you just wanted to have a go at
    Windows.

    Why do you consider that fair game, but people hate it when anyone
    criticises Unix?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to bart on Sat Feb 3 01:31:18 2024
    XPost: comp.unix.programmer

    On Fri, 2 Feb 2024 21:51:43 +0000, bart wrote:

    What option is that, to have one command 'cc' that can deal with N
    different languages?

    Hint: it uses the filename extension to determine which language, and
    which flavour of the language even, it is dealing with.
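
    Something like this toy dispatcher - an illustration only, not
    gcc's actual driver code:

        #include <stdio.h>
        #include <string.h>

        /* Map a filename suffix to the language a driver would assume. */
        static const char *lang_for(const char *name) {
            const char *dot = strrchr(name, '.');
            if (dot == NULL)            return "linker input";
            if (strcmp(dot, ".c") == 0) return "C";
            if (strcmp(dot, ".s") == 0) return "assembly";
            if (strcmp(dot, ".S") == 0) return "assembly, with preprocessing";
            return "unrecognized";
        }

        int main(int argc, char **argv) {
            for (int i = 1; i < argc; i++)
                printf("%s: %s\n", argv[i], lang_for(argv[i]));
            return 0;
        }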

    No. But I can offer a system where you have a choice of N different
    compilers or assemblers for the same language:

    Are their ABIs compatible?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to bart on Sat Feb 3 01:29:26 2024
    XPost: comp.unix.programmer

    On Fri, 2 Feb 2024 22:12:04 +0000, bart wrote:

    On 02/02/2024 21:42, Lawrence D'Oliveiro wrote:

    Seems like your primary experience has been with beating your head
    against Microsoft Windows. That’s got to have health implications.

    That wasn't a serious question was it; you just wanted to have a go at Windows.

    You yourself have complained endlessly about build setups that work fine
    on *nix systems, but that give you trouble on Windows. It’s like you don’t see the source of your difficulties right in front of your eyes.

    Why do you consider that fair game, but people hate it when anyone
    criticises Unix?

    I don’t care about “Unix” any more, and I doubt many other people do. All the systems legally entitled to call themselves “Unix” are dying if not already dead.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Keith Thompson on Sat Feb 3 01:32:28 2024
    XPost: comp.unix.programmer

    On Fri, 02 Feb 2024 16:09:09 -0800, Keith Thompson wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    On Thu, 01 Feb 2024 19:03:38 -0800, Keith Thompson wrote:

    A #include directive with <> searches for a "header", which is not
    stated to be a file. A #include directive with "" searches for a file
    in an implementation-defined manner; if that search fails, it tries
    again as if <> had been used.

    The trouble with that interpretation is, it would seem to rule out the
    use of things like include libraries for user headers. Do you really
    think that was the intention?

    I don't know. I imagine an implementation could interpret the word
    "file" to include information extracted from libraries.

    Then the distinction between “headers” that are “files”, versus those that
    are not, as so carefully worded in the standard (as you pointed out),
    becomes meaningless.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Lawrence D'Oliveiro on Sat Feb 3 02:36:38 2024
    XPost: comp.unix.programmer

    On 2024-02-03, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Fri, 02 Feb 2024 16:09:09 -0800, Keith Thompson wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    On Thu, 01 Feb 2024 19:03:38 -0800, Keith Thompson wrote:

    A #include directive with <> searches for a "header", which is not
    stated to be a file. A #include directive with "" searches for a file
    in an implementation-defined manner; if that search fails, it tries
    again as if <> had been used.

    The trouble with that interpretation is, it would seem to rule out the
    use of things like include libraries for user headers. Do you really
    think that was the intention?

    I don't know. I imagine an implementation could interpret the word
    "file" to include information extracted from libraries.

    Then the distinction between “headers” that are “files”, versus those that
    are not, as so carefully worded in the standard (as you pointed out),
    becomes meaningless.

    Not any more than the distinction between shell functions, built-in
    commands and external commands.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Malcolm McLean on Sat Feb 3 06:30:20 2024
    XPost: comp.unix.programmer

    On 2024-02-03, Malcolm McLean <malcolm.arthur.mclean@gmail.com> wrote:
    On 02/02/2024 18:54, Kaz Kylheku wrote:
    On 2024-02-02, bart <bc@freeuk.com> wrote:
    The way 'as' works IS rubbish.

    Pretend a developer of "as" (say, the GNU one) is reading this thread.

    What is it that is broken?

    Do you have a minimal repro test case of your issue?

    What is the proposed fix?

    turn it round and make it about me. There can't possibly be anything
    wrong with it, whoever says so must be deluded!

    A vast amount of code is being compiled daily, passing through as,
    without anyone noticing.

    It's a constant problem.

    It's usually easy enough to knock up a piece of code to do something.
    The problem is deploying it.

    I am not sure what you're referring to. The "as" program is successfully deployed. It meets its requirements and does its job behind the scenes,
    without attracting attention.


    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Keith Thompson on Sat Feb 3 00:53:05 2024
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    On Thu, 01 Feb 2024 19:03:38 -0800, Keith Thompson wrote:

    A #include directive with <> searches for a "header", which is not
    stated to be a file. A #include directive with "" searches for a file
    in an implementation-defined manner; if that search fails, it tries
    again as if <> had been used.

    The trouble with that interpretation is, it would seem to rule out the use
    of things like include libraries for user headers. Do you really think
    that was the intention?

    I don't know. I imagine an implementation could interpret the word
    "file" to include information extracted from libraries. Note that it
    doesn't have to correspond to the concept of a "file" used in <stdio.h>;
    that refers to files in the execution environment, not the compilation environment.

    To me what the C standard says is clear. A #include "whatever.h"
    gets its stuff from a file (assuming of course the appropriate
    file can be found, and not revert to the #include <whatever.h>
    form). A #include <whatever.h> gets its stuff from a header,
    said header perhaps being stored in a file or perhaps not, and if
    file-stored then it could be a 1-1 relationship, or a 1-many
    relationship, or a many-1 relationship. Since the C standard
    doesn't define the term 'header', an implementation is allowed to
    actualize it however the implementation chooses (including the
    possibility of storing information inside the compiler itself).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to bart on Sat Feb 3 01:05:15 2024
    bart <bc@freeuk.com> writes:

    On 31/01/2024 00:46, Tim Rentsch wrote:

    Looking over one of my current projects (modest in size,
    a few thousand lines of C source, plus some auxiliary
    files adding perhaps another thousand or two), here are
    some characteristics essential for my workflow (given
    in no particular order):

    * have multiple outputs (some outputs the result of
    C compiles, others the result of other tools)

    * use different flag settings for different translation
    units

    * be able to express dependency information

    * produce generated source files, sometimes based
    on other source files

    * be able to invoke arbitrary commands, including
    user-written scripts or other programs

    * build or rebuild some outputs only when necessary

    * condition some processing steps on successful
    completion of other processing steps

    * deliver partially built as well as fully built
    program units

    * automate regression testing and project archival
    (in both cases depending on completion status)

    * produce sets of review locations for things like
    program errors or TBD items

    * express different ways of combining compiler
    outputs (such as .o files) depending on what
    is being combined and what output is being
    produced (sometimes a particular set of inputs
    will be combined in several different ways to
    produce several different outputs)

    Indeed it is the case that producing a complete program is one
    part of my overall build process. But it is only one step out
    of many, and it is easy to express without needing any special
    considerations from the build system.

    So, will a specific build of such a project produce a single
    EXE/DLL//SO file? (The // includes the typical file extension of
    Linux executables.)

    No, there are several outputs of this kind, including object
    files, static libraries, and dynamic libraries, and all for a C
    environment. (There are also other outputs but of a different
    kind than what you are asking about.)

    I have no expectation that you will incorporate these ideas or
    capabilities into a tool you are building for yourself. I gave
    the list in case other readers might have an interest.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Lawrence D'Oliveiro on Sat Feb 3 12:02:48 2024
    XPost: comp.unix.programmer

    On 03/02/2024 01:29, Lawrence D'Oliveiro wrote:
    On Fri, 2 Feb 2024 22:12:04 +0000, bart wrote:

    On 02/02/2024 21:42, Lawrence D'Oliveiro wrote:

    Seems like your primary experience has been with beating your head
    against Microsoft Windows. That’s got to have health implications.

    That wasn't a serious question was it; you just wanted to have a go at
    Windows.

    You yourself have complained endlessly about build setups that work fine
    on *nix systems, but that give you trouble on Windows. It’s like you don’t
    see the source of your difficulties right in front of your eyes.

    I guess you're not curious about WHY a project that builds easily on
    Unix causes problems on Windows?

    Please think about it. You are keen to just totally dismiss the problem
    and suggest it is Windows that is at fault.

    But maybe there IS a dependency on some Unix feature that is
    unnecessary. Why force Windows users to pointlessly use MSYS2 or go
    through a nightmare of cross-compiling via WSL?

    Apparently being considerate is not in a Unix programmer's toolkit.

    Why do you consider that fair game, but people hate it when anyone
    criticises Unix?

    I don’t care about “Unix” any more, and I doubt many other people do. All
    the systems legally entitled to call themselves “Unix” are dying if not already dead.

    See, I don't even know the difference. This is a collection of OSes
    without even a name. I was trying to avoid saying 'Linux' because of the preponderance of other Unix-related OSes. But I would rather shoot
    myself than resort to using '*nixes'.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Tim Rentsch on Sat Feb 3 11:54:43 2024
    On 03/02/2024 09:05, Tim Rentsch wrote:
    bart <bc@freeuk.com> writes:

    Indeed it is the case that producing a complete program is one
    part of my overall build process. But it is only one step out
    of many, and it is easy to express without needing any special
    considerations from the build system.

    So, will a specific build of such a project produce a single
    EXE/DLL//SO file? (The // includes the typical file extension of
    Linux executables.)

    No, there are several outputs of this kind, including object
    files, static libraries, and dynamic libraries, and all for a C
    environment. (There are also other outputs but of a different
    kind than what you are asking about.)

    I have no expectation that you will incorporate these ideas or
    capabilities into a tool you are building for yourself. I gave
    the list in case other readers might have an interest.

    OK. You seem fairly level-headed and calm, so I'll try this explanation.

    Let's say that 'ccc' (not 'cc') is a C compiler; it only processes C
    source files.

    All that I am attempting is that for a full build for this 3-file project:

    ccc one two three

    which takes one.c, two.c and three.c and produces (on Windows say)
    one.exe, that you can instead do:

    ccc one

    for the same job. That would take that particular task (together with
    endless dependency lists that you have to determine with extra tools)
    out of makefiles.
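
    A minimal sketch of what the lead module one.c might contain under
    the #pragma scheme - 'run' here is just a stand-in for whatever
    two.c and three.c actually provide:

        /* one.c -- lead module; compilers ignore unknown pragmas,
           so this remains plain C everywhere else */
        #pragma module "two.c"
        #pragma module "three.c"

        extern int run(void);    /* assumed defined in two.c or three.c */

        int main(void) {
            return run();
        }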

    Here I'm not interested in incremental compilation; this is typically
    for a remote build (away from the developer's setup) of a finished,
    working program.

    It doesn't mean doing away with 'make'; that might still be needed to orchestrate everything else.

    Although I'd prefer that those needs were also listed separately, in
    comments or readme files, so that people can devise their own solutions
    if needed (see, that is now one extra level of flexibility above just
    supplying a complex, busy makefile which you just have to blindly trust
    will work).

    But in simple cases then yes, you just need 'ccc' and a bunch of C
    source files.

    With my experimental compiler using my #pragma method, I can do exactly
    that:

    mcc lua        33 files    prepended to lua.c
    mcc pico       23 files    created pico.c
    mcc bbx        45 files    created bbx.c
    mcc cjpeg      54 files    prepended to each
    mcc djpeg      54 files

    In the last example, the two programs shared 46 common .c files. I put
    those #pragma lines in a common file that was then #included.

    (This makes processing those lines via a script harder, but I've given
    up on that idea. In the original makefile, those 46 were compiled into a
    .a archive file.)
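
    A sketch of that layout, with made-up module names standing in for
    the real jpeg sources:

        /* shared_mods.h -- the common module list, #included by both
           lead modules */
        #pragma module "common1.c"
        #pragma module "common2.c"
        /* ... 44 more shared modules ... */

        /* cjpeg.c -- lead module for one of the two programs */
        #include "shared_mods.h"
        #pragma module "cjpeg_only.c"    /* modules unique to cjpeg */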

    I created a special lead module when there wasn't a clear candidate to
    take that role. That module provides the name of the EXE.

    OK, the idea works. I will leave it in my compiler, and use it as a
    secret weapon for my personal use.

    Nobody else cares, so everyone please pretend this thread was never
    posted, and sorry for wasting your time.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Lawrence D'Oliveiro on Sat Feb 3 12:16:17 2024
    XPost: comp.unix.programmer

    On 03/02/2024 01:31, Lawrence D'Oliveiro wrote:
    On Fri, 2 Feb 2024 21:51:43 +0000, bart wrote:

    What option is that, to have one command 'cc' that can deal with N
    different languages?

    Hint: it uses the filename extension to determine which language, and
    which flavour of the language even, it is dealing with.

    This is the filename extension which Linux famously ignores, because you
    can use any extension you like?

    Hint: my tools KNOW which language they are dealing with:

    c:\c>copy hello.c hello.x
    c:\c>mcc hello.x
    Compiling hello.x to hello.exe

    c:\c>copy hello.c hello.s
    c:\c>mcc hello.s
    Compiling hello.s to hello.exe

    Now let's see what the famous gcc does:

    root@XXX:/mnt/c/c# cp hello.c hello.x
    root@XXX:/mnt/c/c# gcc hello.x
    hello.x: file not recognized: file format not recognized

    root@XXX:/mnt/c/c# cp hello.c hello.s
    root@XXX:/mnt/c/c# gcc hello.s
    hello.s: Assembler messages:
    hello.s:2: Error: unknown vector operation: `{'
    ...

    No. But I can offer a system where you have a choice of N different
    compilers or assemblers for the same language:

    Are their ABIs compatible?

    Huh? They will all use the platform ABI.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Scott Lurndal on Sat Feb 3 13:19:42 2024
    XPost: comp.unix.programmer

    On 02.02.2024 16:18, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:

    (Ah, I spotted my person being addressed again by Bart.)


    It certainly won't for your stuff, or SL's, or JP's, or TR's, as you
    all seem to delight in wheeling out the most complex scenarios you can find.

    No. The "complex" scenarios - I called them professional projects as
    opposed to toy projects - I mentioned explicitly only because _you_
    (as an obvious argumentation manoeuvre) asked me what my projects would
    look like. (You continuously expose such a lousy argumentation habit
    that it hurts.)

    Actually, all professional projects I did myself, or that I observed
    others were doing, were non-trivial (some even complex, yes). One
    point of professional software engineering and project management is
    managing any existing complexity. (There are many means for that; one
    has been talked about already: modularization, or other divide-and-conquer techniques.)

    [...]


    That is another aspect you might do well to learn how to do: KISS. [...]

    I suppose we have a different view about what KISS is. In my book
    it is (for example) to use existing, simple and established tools
    instead of reinventing the wheel and (even needlessly) implement
    my own version before I can start with my project.


    KISS is a good principle to follow, and while I cannot again speak
    for David, it's a principle followed by most programmers I've worked
    with. That doesn't mean throwing away perfectly usable tools
    (one can easily make KISS-compliant makefiles, for example).

    Not all projects can be kept simple. But certainly we try to not
    make it more complex than necessary.

    I'm not suggesting replacing make, only to reduce its involvement.

    (And yet I've seen no sensible argument from you why - only that
    you don't know it, and that you don't like it, and that it doesn't
    fit you.)

    [...]

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to David Brown on Sat Feb 3 14:39:12 2024
    XPost: comp.unix.programmer

    On 02.02.2024 16:26, David Brown wrote:

    [...] The Eclipse folk are experts at making an editor and IDE, [...]

    I have to disagree with this bit.

    At the time I used it[*] they were even incapable of integrating
    an existing Vim editor (even a standard Vi would have been okay
    for me, but no). - Instead they supported/embedded their own trashy
    vi clone with a small subset of the vi features (not to speak of
    the vim features).

    Janis

    [*] Disclaimer: my Eclipse/Java episode was around 2005 and I
    don't know the current state, whether that has changed or got
    better in the meantime.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Malcolm McLean on Sat Feb 3 14:52:05 2024
    XPost: comp.unix.programmer

    On 03/02/2024 06:52, Malcolm McLean wrote:
    On 02/02/2024 22:12, bart wrote:

    Why do you consider that fair game, but people hate it when anyone
    criticises Unix?

    Because UNIX systems used to typically cost tens of thousands of
    dollars, whilst a PC could be had for under a thousand dollars.

    I think you got stuck somewhere in the 1990's.

    So
    everyone could have a PC, but if you were given a UNIX system you were a
    bit special. And that gave UNIX programmers a sense of superiority.


    Anyone who has used both *nix and Windows for development work knows
    *nix is superior for that. (Windows has other advantages - I use both
    Windows and Linux. Neither is perfect, each has their good points and
    bad points.)

    It's a very silly attitude of course.

    Fighting against a system that works against everything you are trying
    to do is a very silly attitude. Bart has choices - no one is forcing
    him to compile open source software that causes him trouble, no one is
    forcing him to use Windows, or C, or make, or anything else. He has
    completely free choices. He could use Linux and compile the projects
    without trouble. He could use Windows and pre-built binaries. He could
    use other projects, other languages, other tools. He could choose to
    put his feet by the fireside and do Sudokus, or to travel the world and
    see other places. He has worked hard, made his money, and has earned
    the right to relax a bit. But he actively chooses to fight against
    software tools that cause him nothing but anguish and frustration, and
    then whines about them here. /That/ is a very silly attitude.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to bart on Sat Feb 3 14:23:29 2024
    XPost: comp.unix.programmer

    On 02.02.2024 23:12, bart wrote:
    On 02/02/2024 21:42, Lawrence D'Oliveiro wrote:
    On Fri, 2 Feb 2024 13:47:25 +0000, bart wrote:

    That's true: only 47 years in computing, and 42 years of evolving,
    implementing and running my systems language.

    On how many different platforms?

    I started in 1976. I started using Windows for my products in 1995
    because all my potential customers could buy an off-the-shelf Windows
    PC.

    And that is actually the problem that folks have tried to make clear
    to you. Being so long in a bubble, working on a technically lousy,
    inferior system that for decades did not even manage (maybe not even
    intend) to catch up with any state-of-the-art techniques; what do you
    expect?

    Linux was nowhere. Unix was only in academia, I think; nowhere
    relevant to me anyway.

    There was a special license that made it possible for UNIX to
    spread quickly through academia. But that is just one aspect of
    why it disseminated so fast.

    I don't know about you, whether you have an academic technical
    background, whether you had the chance to try out UNIX or the BSD
    variant in those days.

    Myself I already knew a couple of OSes (for PCs - some not even worth
    calling an OS - for medium-scale systems, and also for mainframes)
    before I had my first contact with a Unix system. With that systems
    and OS background it was easy to strive for the better ones; from
    the ones I met it was Unix. (BTW, I observed a similar enthusiasm
    with a friend of mine, a long-time hardcore MS-DOS user, when he
    got his fingers onto a Unix system.) You might imagine what a joy
    it thus was when Linux and the GNU tools appeared, a powerful and
    reliable(!) system and OS base, and even (almost) for free.

    (For you, I dare to say, it's obviously far too late. That ship has
    sailed. If you had striven for a broader experience in your early
    days it would certainly be a different situation. And I am not only
    speaking about systems and OSes, also all project management, tools,
    methods, and strategy themes. Just continue your way in your scope.)

    [...]

    Why do you consider that fair game, but people hate it when anyone
    criticises Unix?

    I cannot speak for "people". Myself I name any issues I see; Unix
    issues are not exempt from that. - I have even a printed version of
    "The UNIX - HATERS Handbook" in my bookshelf (though a lot of its
    content is meanwhile outdated, it doesn't apply any more). - And I
    can certainly collect a page full of deficiencies I see with Linux.
    But so what? (For the MS platforms I could probably "fill a book".)
    Yet, in past decades, I haven't seen any serious competitor to Unix.
    (Note: When I'm saying that I am not considering e.g. supercomputers
    doing e.g. massive hydrodynamic computations. But even in this area
    there's meanwhile also Linux clusters running.)

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Keith Thompson on Sat Feb 3 15:13:13 2024
    XPost: comp.unix.programmer

    On 02.02.2024 22:15, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    [...]
    [...]

    But OK, let me drop everything and fix it for you. I can submit a patch
    for "as" so it behaves the way you want. I'll also submit patches for
    gcc so it invokes "as" with the new interface. It will still have to
    handle the old interface at least temporarily, so there will have to be
    a way to ask "as" which interface it uses. Nothing will ever generate a
    file named "a.out" unless it's explicitly told to do so. I'll also send
    the word out so everyone knows not to rely on the name "a.out" anymore.
    And I'll convince everyone that they've been doing it wrong for the last several decades.

    I'll let you know when that's done. Because nothing short of that would satisfy you.

    He would still find something that is different from his tool so
    that he anyway won't accept it.

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Janis Papanagnou on Sat Feb 3 14:16:25 2024
    XPost: comp.unix.programmer

    On 03/02/2024 13:23, Janis Papanagnou wrote:
    On 02.02.2024 23:12, bart wrote:
    On 02/02/2024 21:42, Lawrence D'Oliveiro wrote:
    On Fri, 2 Feb 2024 13:47:25 +0000, bart wrote:

    That's true: only 47 years in computing, and 42 years of evolving,
    implementing and running my systems language.

    On how many different platforms?

    I started in 1976. I started using Windows for my products in 1995
    because all my potential customers could buy an off-the-shelf Windows
    PC.

    And that is actually the problem that folks have tried to make clear
    to you. Being so long in a bubble,

    Which bubble, the one before 1995, or after?

    Don't you see that using only Unix-like systems is also a bubble?


    I don't know about you, whether you have an academic technical
    background,

    I have a CS bachelor's degree.

    whether you had the chance to try out UNIX or the BSD
    variant these days.

    I've tinkered only with various kinds of Linuxes. It was fun if your
    idea of fun was spending 80% of your time getting things configured to
    work properly, and doing apt-gets every 5 minutes for yet another
    essential dependency.

    But if absolutely necessary I could switch my bubble to Linux, after an
    initial painful period of converting my tools. Then the bubble would
    eventually look very similar to the Windows one. As I say below, I don't
    care about OSes.

    Myself I already knew a couple OSes (for PCs, some not even worth
    to call them OS, for medium scale systems, and also for mainframes)
    before I had my first contact with a Unix system. With that systems
    and OS background it was easy to strive for the better ones; from
    the ones I met it was Unix. (BTW, I observed a similar enthusiasm
    with a friend of mine, a long year hardcore MS DOS user, when he
    got his fingers onto a Unix system.) You might imagine what a joy
    it thus was when Linux and the GNU tools appeared, a powerful and
    reliable(!) system and OS base, and even (almost) for free.

    (For you, I dare to say, it's obviously far too late. That ship has
    sailed. If you'd have strived for a broader experience in your early
    days it would certainly be a different situation.

    I'd say using anything but Unix /is/ a broader experience than using
    only Unix. That latter seems to give people the impression that unless
    every OS should either be exactly like Unix, then it is worthless.

    I used whatever OS DECSystem10/20 came with; MultiJob on ICL; RSX11M on
    PDP11; a CP/M clone (that my company, not me, developed); then the
    progression from MSDOS.

    Personally I have no interest in OSes other than that they provide a file
    system. For anything involving graphical apps in the 80s, I had to program everything down to the last pixel - using, naturally, my language and my compiler.

    I also worked for a few years in hardware development where the products I
    made didn't have an OS at all when I was working on them, but I needed
    the means to put test programs into them.

    There, you had to be resourceful. How would a Unix-like system (did they
    even run on 8-bit machines?) have benefited me?

    I cannot speak for "people". Myself I name any issues I see; Unix
    issues are not exempt from that. - I have even a printed version of
    "The UNIX - HATERS Handbook" in my bookshelf (though a lot of its
    content is meanwhile outdated, it doesn't apply any more). - And I
    can certainly collect a page full of deficiencies I see with Linux.
    But so what? (For the MS platforms I could probably "fill a book".)
    Yet, in past decades, I haven't seen any serious competitor to Unix.
    (Note: When I'm saying that I am not considering e.g. supercomputers
    doing e.g. massive hydrodynamic computations. But even in this area
    there's meanwhile also Linux clusters running.)

    MSDOS and Windows were intended for direct use by ordinary consumers.
    Unix was intended for developers.

    Few ordinary consumers directly use a Unix-like system unless it is made
    to look like MacOS or Android. Or they run a GUI desktop that makes it
    look a bit like Windows.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to David Brown on Sat Feb 3 14:59:14 2024
    XPost: comp.unix.programmer

    On 03/02/2024 13:52, David Brown wrote:
    On 03/02/2024 06:52, Malcolm McLean wrote:
    On 02/02/2024 22:12, bart wrote:

    Why do you consider that fair game, but people hate it when anyone
    criticises Unix?

    Because UNIX systems used to typically cost tens of thousands of
    dollars, whilst a PC could be had for under a thousand dollars.

    I think you got stuck somewhere in the 1990's.

    So everyone could have a PC, but if you were given a UNIX system you
    were a bit special. And that gave UNIX programmers a sense of
    superiority.


    Anyone who has used both *nix and Windows for development work knows
    *nix is superior for that.  (Windows has other advantages - I use both Windows and Linux.  Neither is perfect, each has their good points and
    bad points.)

    It's a very silly attitude of course.

    Fighting against a system that works against everything you are trying
    to do is a very silly attitude.  Bart has choices - no one is forcing
    him to compile open source software that causes him trouble, no one is forcing him to use Windows, or C, or make, or anything else.  He has completely free choices.  He could use Linux and compile the projects without trouble.  He could use Windows and pre-built binaries.  He could use other projects, other languages, other tools.  He could choose to
    put his feet by the fireside and do Sudokus, or to travel the world and
    see other places.

    I've done all that of course. At one point I was taking 30-40 holidays a
    year (since there are only 52 weeks in a year, some of them had to be
    short breaks!).

    But when relaxing, even abroad, I /had/ to work on some project. I was
    so used to years of working all day, 7 days a week, to meet some
    deadline, that I couldn't switch off.

    I'm concerned about increasing bloat and complexity everywhere. So I'm
    just making a stand by developing my own small, human-scale products.

    No, I don't /need/ to use some library. But I won't get far these days
    without using any libraries; this is no longer the 80s/90s where you did everything yourself. And you can't do so now anyway; the hardware is too hard to
    access.

    Then I can rail about products that are unnecessarily hard to use, be
    they APIs expressed in C, or stuff I need to build from source, just
    because developers are based on Linux and they expect all users to use
    Linux too.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to bart on Sat Feb 3 07:17:17 2024
    bart <bc@freeuk.com> writes:

    On 03/02/2024 09:05, Tim Rentsch wrote:

    bart <bc@freeuk.com> writes:

    Indeed it is the case that producing a complete program is one
    part of my overall build process. But it is only one step out
    of many, and it is easy to express without needing any special
    considerations from the build system.

    So, will a specific build of such a project produce a single
    EXE/DLL//SO file? (The // includes the typical file extension of
    Linux executables.)

    No, there are several outputs of this kind, including object
    files, static libraries, and dynamic libraries, and all for a C
    environment. (There are also other outputs but of a different
    kind than what you are asking about.)

    I have no expectation that you will incorporate these ideas or
    capabilities into a tool you are building for yourself. I gave
    the list in case other readers might have an interest.

    OK. You seem fairly level-headed and calm, so I'll try this
    explanation. [...]

    You have no interest in what's important to me in a build system.

    I have no interest in what's important to you in a build system.

    And in any case the recent discussion of build systems has gone
    beyond the bounds of comp.lang.c and should be conducted in some
    other newsgroup, or perhaps some other venue.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Janis Papanagnou on Sat Feb 3 16:26:21 2024
    XPost: comp.unix.programmer

    On 03/02/2024 14:39, Janis Papanagnou wrote:
    On 02.02.2024 16:26, David Brown wrote:

    [...] The Eclipse folk are experts at making an editor and IDE, [...]

    I have to disagree with this bit.


    My point is independent of whether or not you like Eclipse (people are
    split on that), or what editor you think is best (people break out in
    fights over that).

    The point is that the editor and IDE people make the editor and IDE, the compiler people make the compiler, the debugger people make the
    debugger, and so on - while to the user, the package looks more or less
    like a complete "do everything" development tool.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Tim Rentsch on Sat Feb 3 16:05:40 2024
    On 03/02/2024 15:17, Tim Rentsch wrote:
    bart <bc@freeuk.com> writes:

    On 03/02/2024 09:05, Tim Rentsch wrote:

    bart <bc@freeuk.com> writes:

    Indeed it is the case that producing a complete program is one
    part of my overall build process. But it is only one step out
    of many, and it is easy to express without needing any special
    considerations from the build system.

    So, will a specific build of such a project produce a single
    EXE/DLL//SO file? (The // includes the typical file extension of
    Linux executables.)

    No, there are several outputs of this kind, including object
    files, static libraries, and dynamic libraries, and all for a C
    environment. (There are also other outputs but of a different
    kind than what you are asking about.)

    I have no expectation that you will incorporate these ideas or
    capabilities into a tool you are building for yourself. I gave
    the list in case other readers might have an interest.

    OK. You seem fairly level-headed and calm, so I'll try this
    explanation. [...]

    You have no interest in what's important to me in a build system.

    Determining which files to submit to a compiler is not important to you?

    OK... (I think the compiler will say otherwise!).

    But whether or not that is actually the case, that's the only thing I was
    addressing.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Malcolm McLean on Sat Feb 3 16:03:12 2024
    XPost: comp.unix.programmer

    On 03/02/2024 15:44, Malcolm McLean wrote:
    On 03/02/2024 13:52, David Brown wrote:
    On 03/02/2024 06:52, Malcolm McLean wrote:
    On 02/02/2024 22:12, bart wrote:

    Why do you consider that fair game, but people hate it when anyone
    criticises Unix?

    Because UNIX systems used to typically cost tens of thousands of
    dollars, whilst a PC could be had for under a thousand dollars.

    I think you got stuck somewhere in the 1990's.

    So everyone could have a PC, but if you were given a UNIX system you
    were a bit special. And that gave UNIX programmers a sense of
    superiority.


    Anyone who has used both *nix and Windows for development work knows
    *nix is superior for that.  (Windows has other advantages - I use both
    Windows and Linux.  Neither is perfect, each has their good points and
    bad points.)

    You're right. On Unix you can fire up the system, launch an editor, type
    "hello world" into it, type gcc or cc *.c -lm at the shell, type
    ./a.out, and you've got the output "Hello world" and a template you can
    then modify to do practically anything you want.

    On Windows you've got to fire up Visual Studio, and set up a project
    file. And then you've got to fiddle with it to enable the standard
    library. And then it will demand you include "stdafx.h" and you've got
    to fiddle with it a bit more to stop it asking for that. Then, whilst you
    will get an executable, when you launch it from the IDE, the output
    window will disappear before you can read it. And you have to fiddle
    with it a bit more. It's much less convenient.

    You're assuming VS has been installed. How about assuming Tiny C has
    been installed instead? Then:

    `On Windows, 'fire up the system', launch an editor, type "Hello world"
    into it, type "tcc *.c" at the shell, type hello and you've got the
    output "Hello world" and a simpler template.`

    So there is little difference (eg. gcc is preinstalled on one, but if
    you want to use tcc, it must be installed on both OSes).

    You're not comparing like with like.

    DB is saying Unix is superior because it comes with a million developer
    tools either built-in or instantly available via apt-get or whatever.

    That cuts no ice with me because I can work with a very spartan set of
    tools.

    On the other hand, if you want a GUI, the Windows system is all set up
    for you and you just have to call the right functions. On Unix you have
    to configure some sort of front end to X, there's a lot more messing
    about, and the GUI elements aren't consistent.

    For GUI they're both a nightmare unless you use a simpler library that
    sits on top. Or are you saying that X is even worse than WinAPI?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to David Brown on Sat Feb 3 17:11:57 2024
    XPost: comp.unix.programmer

    On 03.02.2024 16:26, David Brown wrote:
    On 03/02/2024 14:39, Janis Papanagnou wrote:
    On 02.02.2024 16:26, David Brown wrote:

    [...] The Eclipse folk are experts at making an editor and IDE, [...]

    I have to disagree with this bit.


    My point is independent of whether or not you like Eclipse (people are
    split on that), or what editor you think is best (people break out in
    fights over that).

    The point is that the editor and IDE people make the editor and IDE,

    Yes, and my point was that they had no integration concept for that.
    (Thus it wouldn't occur to me to call that "experts". But they likely
    had just another agenda.) And this is of course completely independent
    of anyone's opinion on Eclipse or on any specific editor.

    the
    compiler people make the compiler, the debugger people make the
    debugger, and so on - while to the user, the package looks more or less
    like a complete "do everything" development tool.

    Right. (Only they ignored the modularization of the "editor and IDE" component.)

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to bart on Sat Feb 3 17:59:32 2024
    XPost: comp.unix.programmer

    On 2024-02-03, bart <bc@freeuk.com> wrote:
    On 03/02/2024 01:31, Lawrence D'Oliveiro wrote:
    On Fri, 2 Feb 2024 21:51:43 +0000, bart wrote:

    What option is that, to have one command 'cc' that can deal with N
    different languages?

    Hint: it uses the filename extension to determine which language, and
    which flavour of the language even, it is dealing with.

    This is the filename extension which Linux famously ignores, because you
    can use any extension you like?

    Hint: my tools KNOW which language they are dealing with:

    You're arguing for a user-unfriendly system where you have to memorize
    a separate command for processing each language.

    Recognizing files by suffix is obviously superior.

    root@XXX:/mnt/c/c# cp hello.c hello.x
    root@XXX:/mnt/c/c# gcc hello.x
    hello.x: file not recognized: file format not recognized

    This is good; it's one more little piece of resistance
    against people using the wrong suffix.

    It's not the only one. Editors won't bring up the correct syntax
    formatting and coloring if the file suffix is wrong.

    Tools for cross-referencing identifiers in source code may also get
    things wrong due to the wrong suffix, or ignore the file entirely.

    Your argument of "I can rename my C to any suffix and my compiler
    still recognizes it" is completely childish.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to bart on Sat Feb 3 18:17:01 2024
    XPost: comp.unix.programmer

    On 2024-02-03, bart <bc@freeuk.com> wrote:
    On 03/02/2024 13:23, Janis Papanagnou wrote:
    On 02.02.2024 23:12, bart wrote:
    On 02/02/2024 21:42, Lawrence D'Oliveiro wrote:
    On Fri, 2 Feb 2024 13:47:25 +0000, bart wrote:

    That's true: only 47 years in computing, and 42 years of evolving,
    implementing and running my systems language.

    On how many different platforms?

    I started in 1976. I started using Windows for my products in 1995
    because all my potential customers could buy an off-the-shelf Windows
    PC.

    And that is actually the problem that folks have tried to make clear
    to you. Being so long in a bubble,

    Which bubble, the one before 1995, or after?

    Don't you see that using only Unix-like systems is also a bubble?

    Don't you see that living on Earth is literally being in bubble?

    Your bubble contains only one person.

    The Unix-like bubble is pretty huge, full of economic opportunities,
    spanning from embedded to server.

    A lot of embedded systems are Unix-like (and those that aren't often
    have Unix-like development environments). Phones, tablets, routers, set-top
    boxes, IoT, remote sensing, robotics, yadda yadda.

    While you were dismissing Linux in 1995, it was actually going strong,
    marching forward. Only fools ignored it.

    A year before that, in 1994, I was doing contract work for Linux
    already. My client used it for serving up pay-per-click web pages to
    paying customers. I was working on the log processing and billing side
    of it, and also created a text-UI (curses) admin tool.

    I'd say using anything but Unix /is/ a broader experience than using
    only Unix.

    No, it isn't. That is fallacious. Working with anything else plus Unix
    is a broader experience than only Unix. Otherwise, we can't say.

    That latter seems to give people the impression that unless
    every OS should either be exactly like Unix, then it is worthless.

    Actually there are ways in which that is objectively true.

    An OS that provides more or less the same semantics as POSIX, but using interfaces that are gratuitously different, and incompatible, is
    worthless in this day and age.

    Something that doesn't conform to compatibility standards, and isn't demonstrably better for it, is a dud.

    There is good different and bad different. More or less same, but
    incompatible, is bad different.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Kaz Kylheku on Sat Feb 3 19:35:37 2024
    XPost: comp.unix.programmer

    On 03/02/2024 17:59, Kaz Kylheku wrote:
    On 2024-02-03, bart <bc@freeuk.com> wrote:
    On 03/02/2024 01:31, Lawrence D'Oliveiro wrote:
    On Fri, 2 Feb 2024 21:51:43 +0000, bart wrote:

    What option is that, to have one command 'cc' that can deal with N
    different languages?

    Hint: it uses the filename extension to determine which language, and
    which flavour of the language even, it is dealing with.

    This is the filename extension which Linux famously ignores, because you
    can use any extension you like?

    Hint: my tools KNOW which language they are dealing with:

    You're arguing for a user-unfriendly system where you have to memorize
    a separate command for processing each language.

    You have to impart that information to the tool in any case. It can
    either be by file extension, or the name of the command.

    So, 'cc' is some tool that looks at a file extension and selects a
    suitable program based on that extension; well done.

    But what is the point? Do you routinely invoke cc with multiple files of
    mixed languages? Suppose you wanted a different C compiler on each .c
    file? Oh, you then invoke it separately for each file. So you do that
    anyway in that rare event.



    Recognizing files by suffix is obviously superior.

    root@XXX:/mnt/c/c# cp hello.c hello.x
    root@XXX:/mnt/c/c# gcc hello.x
    hello.x: file not recognized: file format not recognized

    This is good; it's one more little piece of resistance
    against people using the wrong suffix.

    It's not the only one. Editors won't bring up the correct syntax
    formatting and coloring if the file suffix is wrong.

    Tools for cross-referencing identifiers in source code may also get
    things wrong due to the wrong suffix, or ignore the file entirely.

    This completely contradicts what people have been saying about Linux
    where file extensions are optional and only serve as a convenience.

    For example, executables can have no extension, or .exe, or even .c.

    It is Windows that places more store by file extensions, which Linux
    people say is a bad thing.

    But above you say that is the advantage of Linux.

    Your argument of "I can rename my C to any suffix and my compiler
    still recognizes it" is completely childish.

    It only seems to be childish when one of my programs handles this better
    than one of yours!

    There is only one thing my mcc program can't do, which is to compile a C
    file named 'filename.'; that is, 'filename' followed by an actual '.',
    not 'filename' with no extension.

    And that's only when I run it under Linux. That's because under Linux, 'filename' and 'filename.' are distinct files; the "." is part of the
    file name, not a notional separator.


    So here's a summary of what I've recently learnt about Linux in the form
    of an FAQ:

    Q Why do Linux executables usually not have extensions?

    A Because if they had an extension like .exe or .elf, you'd have to
    invoke them as prog.exe or ./prog.elf, which would rapidly get tedious


    Q Why do filenames specified in shell command lines tend to not have
    default extensions?

    A Because 'abc.' and 'abc' are distinct files, it can't tell whether
    'abc' means 'abc.x' (with the default extension) or just 'abc'. You can't use
    'abc.' to indicate no extension, because that is an actual file name


    Q Why do a lot of programs on Linux that default to stdin and stdout
    for i/o never display helpful prompts or messages?

    A Because that output would screw up any data output that is sent to
    stdout. So they are silent. (Imagine if Bash displayed no prompt!)
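
    To be fair, the usual workaround is to write prompts to stderr,
    often only when stdin is a terminal. A sketch, assuming POSIX
    isatty():

        #include <stdio.h>
        #include <unistd.h>    /* isatty, STDIN_FILENO - POSIX */

        int main(void) {
            int x;
            if (isatty(STDIN_FILENO))                 /* interactive run? */
                fprintf(stderr, "enter a number: ");  /* stdout untouched */
            if (scanf("%d", &x) == 1)
                printf("%d\n", x);                    /* data on stdout only */
            return 0;
        }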

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Kaz Kylheku on Sat Feb 3 20:12:02 2024
    XPost: comp.unix.programmer

    On 03/02/2024 18:17, Kaz Kylheku wrote:
    On 2024-02-03, bart <bc@freeuk.com> wrote:

    Don't you see that using only Unix-like systems is also a bubble?

    Don't you see that living on Earth is literally being in bubble?

    Your bubble contains only one person.

    The Unix-like bubble is pretty huge, full of economic opportunities,
    spanning from embedded to server.

    You're missing my point. Unix imposes a certain mindset, mainly that
    there is only ONE way to do things, and that is the Unix way.

    That is pretty obvious from the passionate posts people make about it.
    And it is obvious that they struggle outside it, which is why they hate
    Windows - it just isn't Unix!


    While you were dismissing Linux in 1995, it was actually going strong, marching forward. Only fools ignored it.

    A year before that, in 1994, I was doing contract work for Linux
    already. My client used it for serving up pay-per-click web pages to
    paying customers. I was working on the log processing and billing side
    of it, and also created a text-UI (curses) admin tool.

    Meanwhile, a decade before that, the question of OS in my first
    commercial product was utterly irrelevant. It provided a file system and
    it was used to launch my app.

    What was it again? I can barely remember. I JUST DO NOT CARE.

    Of all those OSes I have used, Windows might rank near the bottom, but
    not for the reasons you think. That's because it operated in protected
    mode so that lots of things which had been easy, became hard.

    And now it is just monstrous. As I'm typing this, there are 240
    processes and 2700 threads. In 1984, a machine running my app would have exactly one process: itself!

    (I've just looked up that machine I used c.1985; it was a PCW 8256, an
    8-bit machine running CP/M+. Why not Unix? I've no idea. Maybe it was
    just not practical. Maybe CP/M+ was cheaper.

    More pressing to me was how to get programs inside it, given its
    non-standard 3" floppies, and how to display graphics, given that it was
    a word processor with a text-mode display.

    How would Unix have helped with that? It wouldn't.)

    I'd say using anything but Unix /is/ a broader experience than using
    only Unix.

    No, it isn't. That is fallacious. Working with anything else plus Unix
    is a broader experience than only Unix. Otherwise, we can't say.

    Well, I've tinkered with Linux on and off since the late 90s. So yes, I
    have used it. I have ported a few of my tools to it in the past.

    So I have some experience of it. It's not as though I first used it last
    week. I just don't care for it. But if I had a gun pointed at my head I
    could use it.

    An OS that provides more or less the same semantics as POSIX, but using interfaces that are gratuitously different, and incompatible, is
    worthless in this day and age.

    Because .... you say so?

    I mean, are core OSes really so hard to write that everyone in the world
    has to use the same one? There seems to be plenty of amateur OS development
    still.

    Something that doesn't conform to compatibility standards, and isn't demonstrably better for it, is a dud.

    There is good different and bad different. More or less same, but incompatible, is bad different.

    I build a box where you feed in data in the form of a byte-stream, and it
    gives results in the form of a byte-stream. Or replace one of those by something physical; say the box is a printer or scanner.

    There is no OS specified, you've no idea whether it uses POSIX, or even
    if there's a computer inside.

    But if it performs a useful task, then what is the problem?

    Same thing if you are working on a self-contained function, library or
    app. It may have inputs or outputs. Do you need to care what OS is
    running? No, only about the job it has to do.

    Really you make too much of it. The main thing I don't like is when I
    have some software that is hard to build on Windows when there is no
    reason for it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From tTh@21:1/5 to bart on Sat Feb 3 21:57:15 2024
    XPost: comp.unix.programmer

    On 2/3/24 20:35, bart wrote:

    But what is the point? Do you routinely invoke cc with multiple files of mixed languages? Suppose you wanted a different C compiler on each .c
    file? Oh, you then invoke it separately for each file. So you do that
    anyway in that rare event.

    And this is better done with a short Makefile.

    --
    +---------------------------------------------------------------------+
    | https://tube.interhacker.space/a/tth/video-channels                 |
    +---------------------------------------------------------------------+

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Malcolm McLean on Sat Feb 3 21:24:35 2024
    XPost: comp.unix.programmer

    Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:
    On 02/02/2024 18:26, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 02/02/2024 15:18, Scott Lurndal wrote:


    To build a "smaller, easier, nicer" make, if that is the goal (and it's
    a very legitimate one),

    No, it's not really a legitimate one, since make supports the
    single file cases -without a makefile-.

    And it supports the largest projects as well (the linux kernel,
    for example, is built with make).

    (Unfortunately if you write C++ rather than C, even a 3.7 GHz machine
    isn't going to be fast enough. But maybe your users don't use C++).


    FYI, on my 64-core 3.4GHz build machine, it takes an hour
    to build our C++ application on a single core. With -j32,
    it finishes in a bit under 5 minutes. (32 because it is
    a system shared by a number of users, and the compiles
    end up I/O bound).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Malcolm McLean on Sat Feb 3 21:39:54 2024
    XPost: comp.unix.programmer

    On Sat, 3 Feb 2024 05:52:18 +0000, Malcolm McLean wrote:

    .. but if you were given a UNIX system you were a
    bit special. And that gave UNIX programmers a sense of superiority.

    It's a very silly attitude of course.

    Naturally. And a company like Microsoft, that is right now trying to turn Windows into Linux, is simply behaving like a very silly company.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Chris M. Thomasson on Sat Feb 3 22:11:45 2024
    XPost: comp.unix.programmer

    On 03/02/2024 20:31, Chris M. Thomasson wrote:
    On 2/3/2024 8:03 AM, bart wrote:
    [...]

    Do you have a windows installation with a recent version of MSVC
    installed? Give vcpkg a go, and see how it builds things... Then automatically integrates them into MSVC. It's pretty nice and about
    time. ;^)

    You haven't followed my posts very well. I want to keep as far away from
    all that stuff as possible.

    (The last time I installed VS, it took 90 minutes. Each time it started
    up, usually inadvertently because of a file association, it took 90
    seconds. On the same machine, an old one, it took 0.2 seconds to build
    my C compiler.)

    Everything I am about is managing to do this stuff by the simplest,
    leanest means possible. If a program is written in C, then why would you
    need anything other than a C compiler to build it?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Malcolm McLean on Sat Feb 3 21:15:52 2024
    XPost: comp.unix.programmer

    Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:
    [...] a sense of superiority.

    It's a very silly attitude of course.


    Ah, self-awareness after all...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to Keith Thompson on Sat Feb 3 17:56:25 2024
    On 2/3/24 4:51 PM, Keith Thompson wrote:
    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Thu, 01 Feb 2024 19:03:38 -0800, Keith Thompson wrote:
    A #include directive with <> searches for a "header", which is not
    stated to be a file. A #include directive with "" searches for a file
    in an implementation-defined manner; if that search fails, it tries
    again as if <> had been used.

    The trouble with that interpretation is, it would seem to rule out the
    use of things like include libraries for user headers. Do you really
    think that was the intention?

    I don't know. I imagine an implementation could interpret the word
    "file" to include information extracted from libraries. Note that it
    doesn't have to correspond to the concept of a "file" used in <stdio.h>;
    that refers to files in the execution environment, not the compilation
    environment.

    To me what the C standard says is clear. A #include "whatever.h"
    gets its stuff from a file (assuming of course the appropriate
    file can be found, and not revert to the #include <whatever.h>
    form). A #include <whatever.h> gets its stuff from a header,
    said header perhaps being stored in a file or perhaps not, and if
    file-stored then it could be a 1-1 relationship, or a 1-many
    relationship, or a many-1 relationship. Since the C standard
    doesn't define the term 'header', an implementation is allowed to
    actualize it however the implementation chooses (including the
    possibility of storing information inside the compiler itself).

    On further thought, I tend to agree.

    I was thinking that an implementation could usefully provide some of its
    own headers as something other than files, as it's clearly allowed to do
    for the C standard headers. But the obvious way to do that would be to require such headers to be included with <>, not "". POSIX-specific
    headers like unistd.h are already conventionally included with <>.

    An implementation probably *could* bend the meaning of "file" enough to support having `#include "whatever.h"` load something other than a file
    in the host filesystem, but it's not as useful as I first thought it
    might be -- and it could interfere with user-provided header files that happen to have the same name.


    I believe an implementation doesn't need to provide a way to replace the standard-defined headers.

    The include search method is fully implementation defined, with only the provision that if you use " " and don't find the file, it needs to use
    the < > method, but that doesn't say that the standard headers might not
    be first in the " " search order.

    Also 7.1.2p4 says:

    If a file with the same name as one of the above < and > delimited
    sequences, not provided as part of the implementation, is placed in any
    of the standard places that are searched for included source files, the behavior is undefined.

    So overriding a Standard-defined header is explicitly Undefined
    Behavior. (Not sure if POSIX extends that to its headers).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Chris M. Thomasson on Sat Feb 3 22:19:25 2024
    On 03/02/2024 20:39, Chris M. Thomasson wrote:
    On 2/3/2024 3:54 AM, bart wrote:
    [...]

    Say I want to use your C compiler. How do I use it when I need to
    assemble and link external asm code? Say, I assembled something into an
    .o file, how can I make your C compiler use it, link it in, ect...

    Using the C ABI, I would create declarations for its functions.


    masm version, intel syntax:

    http://web.archive.org/web/20060214112539/http://appcore.home.comcast.net/appcore/src/cpu/i686/ac_i686_masm_asm.html


    So, this creates some functions. How would I use your compiler to call
    these functions from my C code in your system?

    My compilers are written as self-contained products. Inputs are source
    files in the language; the output is an EXE or DLL binary.

    External code from other languages is usually dynamically linked. You
    provide a C header file, and a DLL, say yourlib.dll:

    mcc prog.c yourlib.dll

    For anything different, then you do this:

    mcc prog.c -c

    This produces a standard prog.obj object file. Now you can use regular
    tools to statically link with code from other languages.
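
    For illustration, the C side is just ordinary declarations (a minimal
    sketch; 'asm_xchg' is a hypothetical name, not taken from the masm
    source linked above):

        /* C view of an externally assembled routine; thanks to the
           C ABI it is declared and called like any C function. */
        extern long asm_xchg(long *dest, long value);

        int main(void) {
            long x = 1;
            long old = asm_xchg(&x, 2);   /* old == 1, x == 2 */
            return (int)old;
        }

        /* compile with:  mcc prog.c -c
           then link prog.obj with the assembler's .obj file using
           regular tools */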

    I don't have my own tool to read .obj files and statically link them.
    That could be done, but it is not a priority for my stuff.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Chris M. Thomasson on Sun Feb 4 01:19:53 2024
    XPost: comp.unix.programmer

    On 04/02/2024 00:24, Chris M. Thomasson wrote:
    On 2/3/2024 2:11 PM, bart wrote:
    On 03/02/2024 20:31, Chris M. Thomasson wrote:
    On 2/3/2024 8:03 AM, bart wrote:
    [...]

    Do you have a windows installation with a recent version of MSVC
    installed? Give vcpkg a go, and see how it builds things... Then
    automatically integrates them into MSVC. It's pretty nice and about
    time. ;^)

    You haven't followed my posts very well. I want to keep as far away
    from all that stuff as possible.

    Okay.


    (The last time I installed VS, it took 90 minutes. Each time it
    started up, usually by inadvertently because of file association, it
    took 90 seconds. On the same machine, an old one, it took 0.2 seconds
    to build my C compiler.)

    It boots right up for me, less than two seconds, even though it is
    pretty damn fat.

    It might be faster now on my SSD drive. However my own stuff didn't need
    an SSD drive; that's part of the point of keeping things small.



    Everything I am about is managing to do this stuff by the simplest,
    leanest means possible. If a program is written in C, then why would
    you need anything other than a C compiler to build it?

    Can your C compiler handle C11? If so, that would be great. This one can
    do it, MSVC well, nope. MSVC handles C11 atomics, but not threads! GRRRRR.

    It compiles some undefined subset of C. But I haven't touched that side
    of it for years. The last update of it changed the backend.

    MCC is anyway now a private tool. Either programs work with it or they
    don't.

    But the problem being discussed at length is getting that input into the compiler in the first place!

    Everybody says use makefiles; well they don't work. They tend to be
    heavily skewed towards the use of gcc. My compiler isn't gcc.

    AFAIK the C standard doesn't mention gcc (nor, probably, makefiles!).

    So I'm disappointed there isn't a better, simpler solution to a very,
    very simple problem: what exactly goes in place of ... when building any complete program:

    cc ...

    And after 100s of posts, still nobody gets it. Oh, just use an
    invariably Linux-centric, gcc-centric script in a different language.
    How about an OS-neutral, compiler-neutral solution that doesn't involve
    a third-party language? (English - or Norwegian - accepted.)
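
    (For a self-contained three-file program, the answer is just the list
    of its .c files - a sketch with hypothetical names:

        cc main.c util.c io.c -o prog

    The complaint is that, for a typical project, nothing OS-neutral and
    compiler-neutral tells you what that list is.)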

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Gary R. Schmidt@21:1/5 to bart on Sun Feb 4 14:07:48 2024
    XPost: comp.unix.programmer

    On 04/02/2024 12:19, bart wrote:
    [...]

    So I'm disappointed there isn't a better, simpler solution to a very,
    very simple problem: what exactly goes in place of ... when building any complete program:

        cc ...

    And after 100s of posts, still nobody gets it. Oh, just use an
    invariably Linux-centric, gcc-centric script in a different language.
    How about an OS-neutral, compiler-neutral solution that doesn't involve
    a third-party language? (English - or Norwegian - accepted.)


    Oh, you mean FORTRAN-IV?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Malcolm McLean on Sun Feb 4 06:43:58 2024
    XPost: comp.unix.programmer

    On Sat, 3 Feb 2024 15:44:29 +0000, Malcolm McLean wrote:

    On the other hand, if you want a GUI, the Windows system is all set up
    for you and you just have to call the right functions.

    Except the Win32 GUI functions are pretty low-level, so everybody uses
    some kind of toolkit. Only it’s not clear which toolkit is Microsoft’s official recommendation this week--is it MAUI? Dotnet? WinForms? Something
    else I haven’t even heard of?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to bart on Sun Feb 4 06:47:33 2024
    XPost: comp.unix.programmer

    On Sat, 3 Feb 2024 12:02:48 +0000, bart wrote:

    I guess you're not curious about WHY a project that builds easily on
    Unix causes problems on Windows?

    Fundamental NT kernel limitations, going back decades and seemingly
    unfixable. Like poll(2)/select(2) not being able to work on pipes. Like
    why WSL1 had to be abandoned (in spite of Microsoft’s best efforts), and a proper Linux kernel brought in with WSL2.

    The irony is, the Cygwin folk have been able to build a better POSIX
    emulation layer than Microsoft has been able to manage.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to bart on Sun Feb 4 06:51:41 2024
    XPost: comp.unix.programmer

    On Sat, 3 Feb 2024 14:59:14 +0000, bart wrote:

    I'm concerned about increasing bloat and complexity everywhere. So I'm
    just making a stand by developing my own small, human-scale products.

    And you run your stuff on what is probably the most bloated, monolithic, inflexible, unwieldy, clumsy, overcomplicated, inefficient and bug-ridden
    OS in existence--Microsoft Windows.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Lawrence D'Oliveiro on Sun Feb 4 11:08:10 2024
    XPost: comp.unix.programmer

    On 04/02/2024 06:51, Lawrence D'Oliveiro wrote:
    On Sat, 3 Feb 2024 14:59:14 +0000, bart wrote:

    I'm concerned about increasing bloat and complexity everywhere. So I'm
    just making a stand by developing my own small, human-scale products.

    And you run your stuff on what is probably the most bloated, monolithic, inflexible, unwieldy, clumsy, overcomplicated, inefficient and bug-ridden
    OS in existence--Microsoft Windows.

    I'm sure others can give it a run for its money too. I remember my
    Android phone taking 3 minutes to do a restart.

    However that vast complexity doesn't get in the way of building C
    programs, which is what this is about.

    But the complexity of Linux CAN get in the way of that, since many build processes like to utilise half of its features, and that complexity then
    needs to be replicated on Windows if I want to build something there.

    So while your hatred of Windows is irrational, my dislike of Linux is
    rational, since it directly affects the subject of the thread.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Malcolm McLean on Sun Feb 4 13:29:45 2024
    XPost: comp.unix.programmer

    On 03/02/2024 18:02, Malcolm McLean wrote:
    On 03/02/2024 16:03, bart wrote:
    On 03/02/2024 15:44, Malcolm McLean wrote:

    On the other hand, if you want a GUI, the Windows system is all set up
    for you and you just have to call the right functions. On Unix you
    have to configure some sort of front end to X, there's a lot more
    messing about, and the GUI elements aren't consistent.

    For GUI they're both a nightmare unless you use a simpler library that
    sits on top. Or are you saying that X is even worse than WinAPI?

    I've programmed for both and Windows GUI is quite a bit easier to use.
    You have to enter a library name explicitly to get the common controls,
    for some stupid reason, but once you do that the whole system is set up
    for you. Just call the API more or less as you would any other C
    function (except for tiny message loop interface), you've got a rich set
    of controls, and they are well designed and harmonised with the rest of
    the programs on the system.

    X - if you try to program to Xlib directly you're messing about with
    colour maps and goodness knows what just to get up a window. And if you
    don't, it's dependency land and all that that entails, with some popular
    widget toolsets but no real standards. And often you find that these
    will break. However nowadays you can use Qt, which is a lot better but
    still not very well designed, with a non-canonical slot / message system
    and poor facilities for layout. That's why I was driven to write Baby X:
    a simple, clean interface to Xlib that would allow you to get graphics up
    quickly and easily. You shouldn't have to do that, of course.
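
    As a minimal sketch of that "tiny message loop interface", here is a
    complete Win32 program in C (class and title names are illustrative;
    assumes an ANSI build):

        #include <windows.h>

        static LRESULT CALLBACK WndProc(HWND h, UINT m, WPARAM w, LPARAM l)
        {
            if (m == WM_DESTROY) { PostQuitMessage(0); return 0; }
            return DefWindowProc(h, m, w, l);    /* default handling */
        }

        int WINAPI WinMain(HINSTANCE inst, HINSTANCE prev, LPSTR cmd, int show)
        {
            WNDCLASS wc = {0};
            wc.lpfnWndProc   = WndProc;
            wc.hInstance     = inst;
            wc.lpszClassName = "Demo";
            RegisterClass(&wc);
            ShowWindow(CreateWindow("Demo", "Demo", WS_OVERLAPPEDWINDOW,
                                    CW_USEDEFAULT, CW_USEDEFAULT, 400, 300,
                                    NULL, NULL, inst, NULL), show);

            MSG msg;                             /* the message loop */
            while (GetMessage(&msg, NULL, 0, 0) > 0) {
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
            return 0;
        }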


    No sane person ever programs directly using Xlib or WinAPI for a gui
    unless they have extremely niche requirements. People program using Qt, wxWidgets, GTK, SDL, or a range of other toolkits - almost all of which
    are cross-platform and also support a range of language bindings. (Few
    people program gui apps in C.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to bart on Sun Feb 4 13:53:51 2024
    XPost: comp.unix.programmer

    On 03/02/2024 15:16, bart wrote:

    MSDOS and Windows were intended for direct use by ordinary consumers.
    Unix was intended for developers.


    There is a bit of truth in that - though Unix was also targeted at
    serious computer users, workstation users (such as for CAD, simulations,
    and other demanding work), server usage, and other "big" things. DOS
    and Windows were targeted at simpler consumer usage, office applications
    and games.

    As Linux gained traction, it became perfectly good for "ordinary
    consumers" too.

    My mother-in-law has used Linux for the last 15 years or so, for email, browsing, writing letters, banking, and suchlike. And she is as far
    from being a "computer nerd" as you can get!

    (Given your statement here, why do you find it so hard to accept that
    people find Linux a much better platform for developers than Windows?)


    Few ordinary consumers directly use a Unix-like system unless it is made
    to look like MacOS or Android. Or they run a GUI desktop that makes it
    look a bit like Windows.


    I run a gui that makes my computer look like a computer - instead of
    Windows which tries to make it look like a giant schizophrenic telephone
    (after having gone through stages such as XP's Teletubbies interface).

    Of course, with Linux you have a choice - you can pick a giant telephone
    gui if you like.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Malcolm McLean on Sun Feb 4 13:44:53 2024
    XPost: comp.unix.programmer

    On 04/02/2024 05:56, Malcolm McLean wrote:
    On 04/02/2024 01:19, bart wrote:

    So I'm disappointed there isn't a better, simpler solution to a very,
    very simple problem: what exactly goes in place of ... when building
    any complete program:


    No. I get it. Over complicated build systems which break. Very serious
    issue. I've had builds break on me and I'm very surprised more people
    haven't had the same experience and don't easily understand what you are saying.

    But where David Brown is right is that it is one thing to diagnose the problem, quite another to solve it. That is extremely difficult and I
    don't think we'll find the answer easily. But continue to discuss.


    I'm glad you think I am right - and I agree that as a general point,
    solving issues is usually harder than diagnosing them. But I did not
    say anything remotely like that in any posts, as far as I am aware.

    In particular, I am not aware of any "diagnosis" of fundamental issues
    with build tools that need solving - certainly not "solving" by Bart's solution. (I am aware that /Bart/ has trouble using common tools, and
    that his solution might help /him/ - which is fine, and I wish him luck
    with it for fixing his own issues.) Some people might use tools badly,
    and some people publish projects where others find the builds difficult
    on different systems. That's a matter of use, not the tools - others
    find they work fine. (No tool is perfect, of course, and there's always
    scope for improvement.)

    So if you want to use my name, I'd rather you did it in reference to
    things I have actually said.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to David Brown on Sun Feb 4 14:01:08 2024
    XPost: comp.unix.programmer

    On 04/02/2024 12:53, David Brown wrote:
    On 03/02/2024 15:16, bart wrote:

    MSDOS and Windows were intended for direct use by ordinary consumers.
    Unix was intended for developers.


    There is a bit of truth in that - though Unix was also targeted at
    serious computer users, workstation users (such as for CAD,

    (My company specialised in low-end CAD products, one of them running on
    an 8-bit computer using CP/M. I think at one CAD/CAM trade show, we had
    the cheapest product by far.)

    (Given your statement here, why do you find it so hard to accept that
    people find Linux a much better platform for developers than Windows?)

    I didn't quite say that. I meant that Unix with its abstruse interface
    was more suited to technical people such as developers, but also those
    in academia or industry. Who could also afford such a machine (because
    somebody else was paying).

    Some aspects of it, such as case-sensitive commands and file system,
    would have caused difficulties. Real-life is not usually case-sensitive.
    Even now, ordinary people's exposure to it seems to be mainly with
    passwords.

    (I did a lot of telephone support walking people through dialogs on a
    terminal. A case-sensitive OS would have made things considerably harder.)

    But it does seem as though Unix was a breeding ground for multitudinous developer tools. Plus there was little demarcation between user
    commands, C development tools, C libraries and OS.

    Somebody who's used to that environment is surely going to have trouble
    on an OS like MSDOS or Windows where they have to start from nothing.
    Even if most of the tools are now free.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Keith Thompson on Sun Feb 4 06:18:08 2024
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:

    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:

    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    On Thu, 01 Feb 2024 19:03:38 -0800, Keith Thompson wrote:

    A #include directive with <> searches for a "header", which is
    not stated to be a file. A #include directive with "" searches
    for a file in an implementation-defined manner; if that search
    fails, it tries again as if <> had been used.

    The trouble with that interpretation is, it would seem to rule
    out the use of things like include libraries for user headers.
    Do you really think that was the intention?

    I don't know. I imagine an implementation could interpret the
    word "file" to include information extracted from libraries. Note
    that it doesn't have to correspond to the concept of a "file" used
    in <stdio.h>; that refers to files in the execution environment,
    not the compilation environment.

    To me what the C standard says is clear. A #include "whatever.h"
    gets its stuff from a file (assuming of course the appropriate
    file can be found, and not revert to the #include <whatever.h>
    form). A #include <whatever.h> gets its stuff from a header,
    said header perhaps being stored in a file or perhaps not, and if
    file-stored then it could be a 1-1 relationship, or a 1-many
    relationship, or a many-1 relationship. Since the C standard
    doesn't define the term 'header', an implementation is allowed to
    actualize it however the implementation chooses (including the
    possibility of storing information inside the compiler itself).

    On further thought, I tend to agree.

    I was thinking that an implementation could usefully provide some of
    its own headers as something other than files, as it's clearly
    allowed to do for the C standard headers. But the obvious way to do
    that would be to require such headers to be included with <>, not
    "". POSIX-specific headers like unistd.h are already conventionally
    included with <>.

    An implementation probably *could* bend the meaning of "file" enough
    to support having `#include "whatever.h"` load something other than
    a file in the host filesystem, but it's not as useful as I first
    thought it might be -- and it could interfere with user-provided
    header files that happen to have the same name.

    A correction of my earlier statement: actually the C standard does
    define the word header, via the convention of using italics, in the
    first sentence of section 7.1.2, paragraph 1 (and it is "defined" in
    a way that IMO is singularly useless as a definition).

    It seems clear that any implementation-provided "include units" are
    meant to be supplied as 'headers', so that they may be accessed by
    using a #include <name.h> form of inclusion. Similarly it seems
    clear that any compilation-specific or project-specific "include
    units" are meant to be supplied as files, so that they may be
    accessed by using a #include "foo.h" form of inclusion. I don't see
    any place in the C standard that expresses this distinction, but
    surely it is meant to reflect the normal use pattern.

    Also I don't see anything in the C standard that would preclude
    having system-wide (but not tied to the implementation) "include
    units" be available as headers, and so accessible using the <> form
    of inclusion. Third-party libraries are often made available in
    this way.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to David Brown on Sun Feb 4 15:50:41 2024
    XPost: comp.unix.programmer

    On 02/02/2024 10:05, David Brown wrote:
    On 02/02/2024 00:30, Lawrence D'Oliveiro wrote:
    On Thu, 1 Feb 2024 22:34:36 +0100, David Brown wrote:

    I am, however, considering CMake (which works at a
    higher level, and outputs makefiles, ninja files or other project
    files).

    Ninja was created as an alternative to Make.

    It is an alternative to some uses of make - but by no means all uses.

    Basically, if your Makefiles
    are going to be generated by a meta-build system like CMake or
    Meson, then
    they don’t need to support the kinds of niceties that facilitate writing
    them by hand. So you strip it right down to the bare-bones
    functionality,
    which makes your builds fast while consuming minimal resources, and that
    is Ninja.

    Yes.

    It is not normal to write ninja files by hand - the syntax is
    relatively simple, but quite limited. So it covers the lower level bits
    of "make", but not the higher level bits.


    Perhaps ninja is the tool that Bart is looking for? For the kinds of
    things he is doing, I don't think it would be hard to write the ninja
    files by hand.
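
    For a small project, a hand-written build.ninja might look like this
    (a minimal sketch; file names are illustrative):

        rule cc
          command = gcc -c $in -o $out

        rule link
          command = gcc $in -o $out

        build main.o: cc main.c
        build util.o: cc util.c
        build prog: link main.o util.o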

    I've had a look. It doesn't look much simpler to me. But even if it was
    (in that I could trivially extract the necessary info), open source
    projects would need to use it.

    This is from its manual:

    "Ninja is yet another build system. It takes as input the
    interdependencies of files (typically source code and output
    executables) and orchestrates building them, quickly.

    Ninja joins a sea of other build systems. Its distinguishing goal is to
    be fast. It is born from my work on the Chromium browser project, which
    has over 30,000 source files ... "

    The projects I'm into are 100 to 1000 times smaller than that.

    (On my machine, the binaries for Chrome are 20 files totalling 320MB.
    But 230MB of that is in one giant DLL file. I wouldn't have taken that approach. There are ways to split it up into more discrete binaries, and
    yet still present one monolithic DLL.

    BTW that 230MB DLL exports these 6 functions:

    0 00CF1480 13571200 Fun ChromeMain
    1 06DBE4B0 115074224 Fun CrashForExceptionInNonABICompliantCodeRange
    2 02423320 37892896 Fun GetHandleVerifier
    3 031AB2F0 52081392 Fun IsSandboxedProcess
    4 0240DCD0 37805264 Fun RelaunchChromeBrowserWithNewCommandLineIfNeeded
    5 08EF8C40 149916736 Fun sqlite3_dbdata_init
    )

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Richard Damon on Sun Feb 4 07:52:22 2024
    Richard Damon <richard@damon-family.org> writes:

    On 2/3/24 4:51 PM, Keith Thompson wrote:

    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:

    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    On Thu, 01 Feb 2024 19:03:38 -0800, Keith Thompson wrote:

    A #include directive with <> searches for a "header", which is
    not stated to be a file. A #include directive with "" searches
    for a file in an implementation-defined manner; if that search
    fails, it tries again as if <> had been used.

    The trouble with that interpretation is, it would seem to rule
    out the use of things like include libraries for user headers.
    Do you really think that was the intention?

    I don't know. I imagine an implementation could interpret the
    word "file" to include information extracted from libraries.
    Note that it doesn't have to correspond to the concept of a
    "file" used in <stdio.h>; that refers to files in the execution
    environment, not the compilation environment.

    To me what the C standard says is clear. A #include "whatever.h"
    gets its stuff from a file (assuming of course the appropriate
    file can be found, and not revert to the #include <whatever.h>
    form). A #include <whatever.h> gets its stuff from a header,
    said header perhaps being stored in a file or perhaps not, and if
    file-stored then it could be a 1-1 relationship, or a 1-many
    relationship, or a many-1 relationship. Since the C standard
    doesn't define the term 'header', an implementation is allowed to
    actualize it however the implementation chooses (including the
    possibility of storing information inside the compiler itself).

    On further thought, I tend to agree.

    I was thinking that an implementation could usefully provide some
    of its own headers as something other than files, as it's clearly
    allowed to do for the C standard headers. But the obvious way to
    do that would be to require such headers to be included with <>,
    not "". POSIX-specific headers like unistd.h are already
    conventionally included with <>.

    An implementation probably *could* bend the meaning of "file"
    enough to support having `#include "whatever.h"` load something
    other than a file in the host filesystem, but it's not as useful as
    I first thought it might be -- and it could interfere with
    user-provided header files that happen to have the same name.

    I believe an implementation doesn't need to provide a way to replace the standard-defined headers.

    The include search method is fully implementation defined,

    Not exactly. The <> form of #include searches "a sequence of implementation-defined places" for a header, whereas the "" form
    of #include searches "in an implementation-defined manner" for
    "the named source file". The sequence of places (for headers) is
    fixed for each implementation, but where a "" form of #include
    searches (for a file) is not fixed but may vary from, for example,
    compilation to compilation. Of course there is nothing stopping
    an implementation from defining those two searches so that there
    is some overlap, but surely that possibility is not expected to be
    realized in any normal configuration (except perhaps by accident),
    either by the C standard's authors or by developers.

    with only
    the provision that if you use " " and don't find the file, it
    needs to use the < > method, but that doesn't say that the
    standard headers might not be first in the " " search order.

    The "" form of #include doesn't have a "search order", only a
    statement that a search is done "in an implementation-defined
    manner". Normally the set of places searched for "" files is
    not fixed, and typically can be changed by the developer using
    compilation options such as -I.

    Also 7.1.2p4 says:

    If a file with the same name as one of the above < and > delimited
    sequences, not provided as part of the implementation, is placed in
    any of the standard places that are searched for included source
    files, the behavior is undefined.

    So overriding a Standard-defined header is explicitly Undefined
    Behavior. (Not sure if POSIX extends that to its headers).

    I believe this conclusion is a misreading. The C standard uses the
    word "places" only in connection with the <> form of #include, and
    not with the "" form of #include. There is nothing wrong with, for
    example, #include "stdio.h", and having a local stdio.h file (which
    may do a #include <stdio.h> internally). Indeed it appears that
    part of the point of the rule that the <> form is used if the ""
    form fails is so that a #include "stdio.h" may be used and simply
    fall back to the <stdio.h> header if the file stdio.h is not
    present. There is no undefined behavior for having a file named
    "stdio.h" (or any other standard header name), as long as such
    files are not in one of the fixed set of places defined by the
    implementation for where headers (as opposed to regular #include
    source files) might reside.
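
    As a sketch of that pattern (the added declaration is hypothetical),
    a project can keep its own stdio.h that wraps the standard header:

        /* project-local stdio.h, found by #include "stdio.h" */
        #ifndef PROJECT_STDIO_H
        #define PROJECT_STDIO_H
        #include <stdio.h>      /* falls through to the real header */
        int debug_printf(const char *fmt, ...);
        #endif

    On a system with no such local file, the same #include "stdio.h"
    directive simply falls back to the <stdio.h> header.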

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jim Jackson@21:1/5 to bart on Sun Feb 4 16:13:45 2024
    XPost: comp.unix.programmer

    On 2024-02-03, bart <bc@freeuk.com> wrote:
    On 03/02/2024 18:17, Kaz Kylheku wrote:
    On 2024-02-03, bart <bc@freeuk.com> wrote:

    Don't you see that using only Unix-like systems is also a bubble?

    Don't you see that living on Earth is literally being in bubble?

    Your bubble contains only one person.

    The Unix-like bubble is pretty huge, full of economic opportunities,
    spanning from embedded to server.

    You're missing my point. Unix imposes a certain mindset, mainly that
    there is only ONE way to do things, and that is the Unix way.
    ^^^

    I had to laugh. Others criticise the linux/unix world for having TOO
    many ways of doing things, which makes things difficult :-)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Malcolm McLean on Sun Feb 4 18:27:21 2024
    XPost: comp.unix.programmer

    On 04/02/2024 16:50, Malcolm McLean wrote:
    On 04/02/2024 12:44, David Brown wrote:
    On 04/02/2024 05:56, Malcolm McLean wrote:
    On 04/02/2024 01:19, bart wrote:

    So I'm disappointed there isn't a better, simpler solution to a
    very, very simple problem: what exactly goes in place of ... when
    building any complete program:


    No. I get it. Over complicated build systems which break. Very
    serious issue. I've had builds break on me and I'm very surprised
    more people haven't had the same experience and don't easily
    understand what you are saying.

    But where David Brown is right is that it is one thing to diagnose
    the problem, quite another to solve it. That is extremely difficult
    and I don't think we'll find the answer easily. But continue to discuss.

    I'm glad you think I am right - and I agree that as a general point,
    solving issues is usually harder than diagnosing them.  But I did not
    say anything remotely like that in any posts, as far as I am aware.

    In particular, I am not aware of any "diagnosis" of fundamental issues
    with build tools that need solving - certainly not "solving" by Bart's
    solution.  (I am aware that /Bart/ has trouble using common tools, and
    that his solution might help /him/ - which is fine, and I wish him
    luck with it for fixing his own issues.)  Some people might use tools
    badly, and some people publish projects where others find the builds
    difficult on different systems.  That's a matter of use, not the tools
    - others find they work fine.  (No tool is perfect, of course, and
    there's always scope for improvement.)

    So if you want to use my name, I'd rather you did it in reference to
    things I have actually said.


    You've said repeatedly and at great length that Bart's proposed
    solutions won't work.

    No, I have said repeatedly that it would not work for /me/.

    You haven't actually admitted that he has
    diagnosed a problem which needs to be solved

    Why would I "admit" something I don't believe?

    and maybe I should have
    made that clearer.

    It's not a matter of clarity - you said explicitly and clearly that I
    had talked about "diagnosing the problem".

    Where you're right is that writing a better build
    system than make is hard. Bart referenced Norwegian, which obviously
    meant you, and so I didn't introduce your name.


    I think Bart referenced Norwegian as something of a joke, as a pun on programming languages and human languages and a reference to the
    wandering topics we've had here recently.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to bart on Sun Feb 4 18:36:48 2024
    XPost: comp.unix.programmer

    On 04/02/2024 15:01, bart wrote:
    On 04/02/2024 12:53, David Brown wrote:
    On 03/02/2024 15:16, bart wrote:

    MSDOS and Windows were intended for direct use by ordinary consumers.
    Unix was intended for developers.


    There is a bit of truth in that - though Unix was also targeted at
    serious computer users, workstation users (such as for CAD,

    (My company specialised in low-end CAD products, one of them running on
    an 8-bit computer using CP/M. I think at one CAD/CAM trade show, we had
    the cheapest product by far.)


    OK. There's always been a market for low-end, low-price products on
    low-price hardware. (That's not a criticism in any way.) At the
    high-end, it was all Unix at least until Windows NT came out. And since
    then, it's still primarily *nix for the top end stuff - mostly running
    on Linux, of course. (Number two choice would be Macs, especially in
    some areas.)

    (Given your statement here, why do you find it so hard to accept that
    people find Linux a much better platform for developers than Windows?)

    I didn't quite say that. I meant that Unix with its abstruse interface
    was more suited to technical people such as developers, but also those
    in academia or industry. Who could also afford such a machine (because somebody else was paying).


    That used to be the case, yes.


    Some aspects of it, such as case-sensitive commands and file system,
    would have caused difficulties.

    I know that's a hobby-horse of yours, but it is completely irrelevant to
    most people. People who use gui's don't care about case sensitivity.

    Real-life is not usually case-sensitive.
    Even now, ordinary people's exposure to it seems to be mainly with
    passwords.


    I've only ever heard of it being an issue when someone has left their
    caps lock on by mistake.

    (I did a lot of telephone support walking people through dialogs on a terminal. A case-sensitive OS would have made things considerably harder.)


    Ordinary users don't use terminals.

    But it does seem as though Unix was a breeding ground for multitudinous developer tools. Plus there was little demarcation between user
    commands, C development tools, C libraries and OS.

    Somebody who's used to that environment is surely going to have trouble
    on an OS like MSDOS or Windows where they have to start from nothing.
    Even if most of the tools are now free.


    I am very used to both environments. I would consider myself as a
    "power user" of Linux /and/ a "power user" of Windows. (I admit that my advanced usage on Windows is getting a bit out of date. I've tried to
    avoid anything after Windows 7.) This is why I can give a qualified
    opinion comparing the OS'es - though it is still obviously an opinion.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to bart on Sun Feb 4 18:48:12 2024
    XPost: comp.unix.programmer

    On 03/02/2024 20:35, bart wrote:
    On 03/02/2024 17:59, Kaz Kylheku wrote:
    On 2024-02-03, bart <bc@freeuk.com> wrote:
    On 03/02/2024 01:31, Lawrence D'Oliveiro wrote:
    On Fri, 2 Feb 2024 21:51:43 +0000, bart wrote:

    What option is that, to have one command 'cc' that can deal with N
    different languages?

    Hint: it uses the filename extension to determine which language, and
    which flavour of the language even, it is dealing with.

    This is the filename extension which Linux famously ignores, because you
    can use any extension you like?

    Hint: my tools KNOW which language they are dealing with:

    You're arguing for a user-unfriendly system where you have to memorize
    a separate command for processing each language.

    You have to impart that information to the tool in any case. It can
    either be by file extension, or the name of the command.

    So, 'cc' is some tool that looks at a file extension and selects a
    suitable program based on that extension; well done.


    "cc" is not a tool in itself - it's just a standard name for the
    system's C compiler. It might be a pure C compiler. On most Linux
    systems, it is a symbolic link to a version of gcc. And the "gcc"
    binary is a front-end, and will handle C, C++, assembly, linking, and -
    if you have installed the compilers - FORTRAN, Ada, and possibly other languages.

    But what is the point? Do you routinely invoke cc with multiple files of mixed languages?

    I certainly invoke it with C, C++ and the occasional assembly file.
    (Though I do so explicitly as gcc rather than cc.)

    Suppose you wanted a different C compiler on each .c
    file? Oh, you then invoke it separately for each file. So you do that
    anyway in that rare event.

    Yes.




    Recognizing files by suffix is obviously superior.

        root@XXX:/mnt/c/c# cp hello.c hello.x
        root@XXX:/mnt/c/c# gcc hello.x
        hello.x: file not recognized: file format not recognized

    This is good; it's one more little piece of resistance
    against people using the wrong suffix.

    It's not the only one.  Editors won't bring up the correct syntax
    formatting and coloring if the file suffix is wrong.

    Usually the suffix is used for the initial language type selection, and
    you can override it if you want.


    Tools for cross-referencing identifiers in source code may also get
    things wrong due to the wrong suffix, or ignore the file entirely.

    This completely contradicts what people have been saying about Linux
    where file extensions are optional and only serve as a convenience.


    No, it fits it perfectly.

    File extensions are a convenience. You can override them if you want.

    For example, executables can have no extension, or .exe, or even .c.

    Yes. It is common, for example, to have ".py" as the extension for
    Python source files - /and/ for executable Python files. But sometimes
    it is also convenient to have no extension for executable Python files
    intended to be used as convenient command-line programs.


    It is Windows that places more store by file extensions, which Linux
    people say is a bad thing.


    Windows is too dependent on them, and too trusting.

    But above you say that is the advantage of Linux.

    Yes, it's a hands-down win for Linux (and other *nix) in this aspect.


    Your argument of "I can rename my C to any suffix and my compiler
    still recognizes it" is completely childish.

    It only seems to be childish when one of my programs handles this better
    than one of yours!

    But it /doesn't/ handle it better. It does worse. gcc acts the way
    users expect, based on sensible choices of file extensions. Yours can't
    cope with any other kind of file and instead acts contrary to user
    expectation.


    There is only one thing my mcc program can't do, which is to compile a C
    file named 'filename.'; that is, 'filename' followed by an actual '.',
    not 'filename' with no extension.

    And that's only when I run it under Linux. That's because under Linux, 'filename' and 'filename.' are distinct files; the "." is part of the
    file name, not a notional separator.


    Of course it is. It's simple and consistent.

    In Windows, it is sometimes part of a file name (when it is not the last
    period in the name), sometimes a magical character that appears or
    disappears (when the file ends in a period), and sometimes it delimits a
    file extension.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to bart on Sun Feb 4 18:22:57 2024
    XPost: comp.unix.programmer

    On 2024-02-03, bart <bc@freeuk.com> wrote:
    On 03/02/2024 18:17, Kaz Kylheku wrote:
    On 2024-02-03, bart <bc@freeuk.com> wrote:

    Don't you see that using only Unix-like systems is also a bubble?

    Don't you see that living on Earth is literally being in bubble?

    Your bubble contains only one person.

    The Unix-like bubble is pretty huge, full of economic opportunities,
    spanning from embedded to server.

    You're missing my point. Unix imposes a certain mindset, mainly that
    there is only ONE way to do things, and that is the Unix way.

    In the world of Unix-like systems, there are many, many ways of working.
    There isn't one way.

    Can you name a software system which doesn't impose a mindset?

    For instance, doesn't a spreadsheet impose the mindset that data are
    visually represented by entries in tables, linked by formulas?

    That is pretty obvious from the passionate posts people make about it.

    Mostly, what people are passionate about here is pointing out the
    mistakes which make others wrong. That's it.

    In the world of Unix and Linux, there is a lot of criticism from the
    inside. Improvements are made.

    The free environments we have now are collectively a huge improvement
    over the original proprietary Unix, or even BSD. As a GNU/Linux user,
    when you step inside a proprietary Unix with no free software installed
    in it, it's like taking a time machine to the dark ages.

    And it is obvious that they struggle outside it, which is why they hate Windows - it just isn't Unix!

    Many users of GNU/Linux who are critics of Windows have previous
    experience with Windows. Users with no Windows exposure, whose first
    system was some kind of Linux

    While you were dismissing Linux in 1995, it was actually going strong,
    marching forward. Only fools ignored it.

    A year before that, in 1994, I was doing contract work for Linux
    already. My client used it for serving up pay-per-click web pages to
    paying customers. I was working on the log processing and billing side
    of it, and also created a text-UI (curses) admin tool.

    Meanwhile, a decade before that, the question of OS in my first
    commercial product was utterly irrelevant. It provided a file system and
    it was used to launch my app.

    Many applications critically depend on application programming
    interfaces (APIs) in the operating system.

    What was it again? I can barely remember. I JUST DO NOT CARE.

    If you talk about it, you care.

    Of all those OSes I have used, Windows might rank near the bottom, but
    not for the reasons you think. That's because it operated in protected
    mode so that lots of things which had been easy, became hard.

    Sure, easy things like trashing another application, or using
    hardware rudely while something else is accessing it!

    It's harder to be polite and request things, than to just barge
    in and take what you need.

    In a protected system, you have to use an operating system API to do
    some of those things. Want to put pixels directly into a frame buffer?
    Because there is a windowing system you have to arrange that with the OS
    and follow certain rules. You can't keep putting pixels there if your
    program loses focus and another window is put over your screen area.

    Tough luck!

    How would Unix have helped with that? It wouldn't.)

    The question of how Unix would have helped on an 8-bit CP/M machine forty
    years ago isn't very interesting today. At that time, Unix probably ran
    best on machines with at least perhaps five to ten times the resources
    of that, or thereabouts.

    An OS that provides more or less the same semantics as POSIX, but using
    interfaces that are gratuitously different, and incompatible, is
    worthless in this day and age.

    Because .... you say so?

    The economic marketplace says so, mainly.

    I mean, are core OSes really so hard to write that everyone in the world
    has to use the same one? There seems to be plenty of amateur OS development still.

    Yes, developing a production OS is very, very hard.

    Writing and debugging just one damned driver for an OS (e.g. USB host controller) can be very hard and take a lot of time.

    Something that doesn't conform to compatibility standards, and isn't
    demonstrably better for it, is a dud.

    There is good different and bad different. More or less same, but
    incompatible, is bad different.

    I build a box where you feed in data in the form of a byte-stream, and it
    gives results in the form of a byte-stream. Or replace one of those with something physical; say the box is a printer or scanner.

    There is no OS specified, you've no idea whether it uses POSIX, or even
    if there's a computer inside.

    If that byte stream is incompatible with the one used by every other
    such box in the commercial marketplace, it's a dud. More so if it is undocumented and changes monthly.

    But if it performs a useful task, then what is the problem?

    Vendor lock-in.

    Same thing if you are working on a self-contained function, library or
    app. It may have inputs or outputs. Do you need to care what OS is
    running? No, only on the job it has to do.

    An app can hide the system from the user, but the app itself
    certainly cares about what OS it's running on.

    Really you make too much of it. The main thing I don't like is when I
    have some software that is hard to build on Windows when there is no
    reason for it.

    There is a reason; mainly, some of the program depends on a different OS
    which is not compatible with Windows.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to bart on Sun Feb 4 20:55:12 2024
    XPost: comp.unix.programmer

    bart <bc@freeuk.com> writes:
    On 04/02/2024 17:48, David Brown wrote:
    On 03/02/2024 20:35, bart wrote:

    It is Windows that places more store by file extensions, which Linux
    people say is a bad thing.


    Windows is too dependent on them, and too trusting.

    But above you say that is the advantage of Linux.

    Yes, it's a hands-down win for Linux (and other *nix) in this aspect.

    Yet it is Linux (manifested via gcc) where it ASSUMES .x is a linker

    I've never seen a '.x' suffix. Ever. And I use linker scripts
    regularly.

    script, and ASSUMES that .s is an assembly source file;

    The command is well documented. It assumes nothing.
    It (cc, the compiler driver command) will simply pass files with a .s suffix to the
    assembler, and the assembler will, correctly, treat it as
    assembler source. If it's not, that is a failure of the user
    to RTFM.

    It is definitely not the problem of the toolset (cc) or the
    assembler (which doesn't care what suffix is used).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to David Brown on Sun Feb 4 20:18:20 2024
    XPost: comp.unix.programmer

    On 04/02/2024 17:48, David Brown wrote:
    On 03/02/2024 20:35, bart wrote:

    It is Windows that places more store by file extensions, which Linux
    people say is a bad thing.


    Windows is too dependent on them, and too trusting.

    But above you say that is the advantage of Linux.

    Yes, it's a hands-down win for Linux (and other *nix) in this aspect.

    Yet it is Linux (manifested via gcc) where it ASSUMES .x is a linker
    script, and ASSUMES that .s is an assembly source file; INCORRECT
    assumptions.

    I think I'm starting to understand the rules: whatever Windows does is
    always wrong, and whatever Linux does is always right!

    To be clear, this is the behaviour of /my/ applications, which work the
    same way on Windows /or/ Linux, that primarily work on one type of
    file, and that assume that file type no matter what the extension.

    BOTH methods can be problematic if you deliberately or accidentally mix
    up file types and extensions.

    And that's only when I run it under Linux. That's because under Linux,
    'filename' and 'filename.' are distinct files; the "." is part of the
    file name, not a notional separator.


    Of course it is.  It's simple and consistent.

    In Windows, it is sometimes part of a file name (when it is not the last period in the name), sometimes a magical character that appears or
    disappears (when the file ends in a period), and sometimes it delimits a
    file extension.

    It probably still needs to be a notional dot for backwards compatibility
    over decades.

    The first two DEC systems I used had 6.3 filenames, storing the nine
    'sixbit' characters in 1.5 36-bit words, or using 'radix-50' to pack them
    into three 16-bit words. You can see there is nowhere to put the dot.

    That was carried over to DOS's 8.3 filename.

    This dot then was really a virtual separator that did not need storing,
    any more than you need to store the dot in the IEEE 754 representation of 73.945.

    It has given very little trouble, and has the huge advantage that you
    can have default extensions on input files with no ambiguity.

    Let me guess: Unix allows you to have numbers like 73.945.112, while 73.
    is a different value from 73? Cool.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to bart on Sun Feb 4 22:51:29 2024
    XPost: comp.unix.programmer

    On 04/02/2024 21:18, bart wrote:
    On 04/02/2024 17:48, David Brown wrote:
    On 03/02/2024 20:35, bart wrote:

    It is Windows that places more store by file extensions, which Linux
    people say is a bad thing.


    Windows is too dependent on them, and too trusting.

    But above you say that is the advantage of Linux.

    Yes, it's a hands-down win for Linux (and other *nix) in this aspect.

    Yet it is Linux (manifested via gcc) where it ASSUMES .x is a linker
    script, and ASSUMES that .s is an assembly source file; INCORRECT assumptions.


    No, they are almost always /correct/ assumptions. If you want to use .s
    for a C file, that is allowed - but it is so unusual that you have to
    tell gcc about it ("gcc -x c cfile.s" will work).

    Would you prefer a system where the compiler just guesses and makes up
    the rules as it goes along? (Here's a hint - a file can be valid C and
    also valid C++, but compiling it in the different languages will give
    different results.)

    What works for little hobby tools does not always work at scale for
    serious tools.

    I think I'm starting to understand the rules: whatever Windows does is
    always wrong, and whatever Linux does is always right!


    You've claimed that many times over the years. If you were to stop
    merely /starting/ to think that, and take it as the basic assumption,
    then you would not always be correct - but you'd be wrong a lot less often!

    To be clear, this is the behaviour of /my/ applications, which work the
    same way on Windows /or/ Linux, that primarily work on one type of
    file, and that assume that file type no matter what the extension.


    Exactly. For your little program that can't deal with more than one
    type of file, you can do this. And since it is for your own little tool
    that no one else uses, you can do it exactly as you like.

    BOTH methods can be problematic if you deliberately or accidentally mix
    up file types and extensions.

    So stop deliberately being a screw-up. You'll find life vastly easier. Accidents can happen on occasion, but you're a lot less likely to shoot yourself in the foot if you stop aiming at your foot and squeezing the
    trigger.


    And that's only when I run it under Linux. That's because under
    Linux, 'filename' and 'filename.' are distinct files; the "." is part
    of the file name, not a notional separator.


    Of course it is.  It's simple and consistent.

    In Windows, it is sometimes part of a file name (when it is not the
    last period in the name), sometimes a magical character that appears
    or disappears (when the file ends in a period), and sometimes it
    delimits a file extension.

    It probably still needs to be a notional dot for backwards compatibility
    over decades.

The first two DEC systems I used had 6.3 filenames, storing 'sixbit'
characters in 1.5 words of 36 bits, or using 'radix-50' in 3 words of
    16 bits. You can see there is nowhere to put the dot.

    That was carried over to DOS's 8.3 filename.

    At a time when real OS's had moved beyond that. What a stupid decision
    - it's what you expect when you remember that MS DOS was written as a
    quick hack on a system called "quick and dirty OS" as a way for MS to
    con its customers.


    This dot then was really a virtual separator that did not need storing,
any more than you need to store the dot in the IEEE 754 representation of 73.945.

    It has given very little trouble, and has the huge advantage that you
    can have default extensions on input files with no ambiguity.

    Let me guess: Unix allows you to have numbers like 73.945.112, while 73.
    is a different value from 73? Cool.


    Um, you remember this is comp.lang.c ? "73" is an integer constant,
    "73." is a double.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to bart on Sun Feb 4 22:42:54 2024
    XPost: comp.unix.programmer

    On Sun, 4 Feb 2024 20:18:20 +0000, bart wrote:

    I think I'm starting to understand the rules: whatever Windows does is
    always wrong, and whatever Linux does is always right!

    You said it, we didn’t.

    Remember that *nix systems were being used for “real” programming before Windows was even a gleam in Bill Gates’ eye.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to bart on Sun Feb 4 22:46:19 2024
    XPost: comp.unix.programmer

    On Sun, 4 Feb 2024 14:01:08 +0000, bart wrote:

    But it does seem as though Unix was a breeding ground for multitudinous developer tools. Plus there was little demarcation between user
    commands, C development tools, C libraries and OS.

    Somebody who's used to that environment is surely going to have trouble
    on an OS like MSDOS or Windows where they have to start from nothing.
    Even if most of the tools are now free.

    Yet it seems like even someone like you, who is supposed to be “used to” Windows rather than *nix, still has the same trouble. So maybe it’s not
about being “used to” *nix at all; there really is something inherent in the fundamental design of that environment that makes development work
    easier.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to David Brown on Sun Feb 4 23:11:37 2024
    XPost: comp.unix.programmer

    On 04/02/2024 21:51, David Brown wrote:
    On 04/02/2024 21:18, bart wrote:

    BOTH methods can be problematic if you deliberately or accidentally
    mix up file types and extensions.

    So stop deliberately being a screw-up.

    I was replying initially to somebody claiming that being able to do:

    cc prog.a
    cc prog.b
    cc prog.c

    and marshalling the file into the right tool was not only some great achievement only possible on Linux, but also desirable.

    I think using dedicated tools instead is a better idea.



    That was carried over to DOS's 8.3 filename.

    At a time when real OS's had moved beyond that.

    When was that? The IBM PC came out in 1981. The DEC machines I mentioned
    were still in use. Oh, you mean Unix was the One and Only Real OS? I get it.

      What a stupid decision
    - it's what you expect when you remember that MS DOS was written as a
    quick hack on a system called "quick and dirty OS" as a way for MS to
    con its customers.

    Funny you should fixate on that, and not on the idea of a business
    computer running on a 4.8MHz 8088 processor with a crappy 'CGA' video
    board design that would barely pass as a student assignment. (Oh, that
    was IBM and not MS, and it is only MS you want to shit all over.)

    However it brought business computing to the masses. Where were the
    machines running your beloved Unix?

    I believe you were working on Spectrums then or some such machines; what filenames did /they/ allow, or did they not actually have a file system?

    You're being unjust on the people working on all this stuff at that
    period, trying to make things work with small processors, tiny amounts
    of memory and limited storage.



    This dot then was really a virtual separator that did not need
    storing, any more than you need to store the dot in the ieee754
    representation of 73.945.

    It has given very little trouble, and has the huge advantage that you
    can have default extensions on input files with no ambiguity.

    Let me guess: Unix allows you to have numbers like 73.945.112, while
    73. is a different value from 73? Cool.


    Um, you remember this is comp.lang.c ?  "73" is an integer constant,
    "73." is a double.


    Yes. But the question is whether the "." separating out the two parts of
    a filename should be actually stored, as a '.' character taking up extra
    space.

It made perfect sense not to store it at the time. But Unix made a decision
    at the time to store it literally, which could also have been thought crass.

    In hindsight, with filenames now allowing arbitrary dots, they made the
    right decision. But that was more due to luck. And probably not having
    to make concessions to running on low-end hardware.

    You however would try and argue that some great foresight was
    deliberately exercised and that the people behind those other systems
    made a dumb decision.

    I'm sorry but you weren't there.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Lawrence D'Oliveiro on Sun Feb 4 23:29:01 2024
    XPost: comp.unix.programmer

    On 04/02/2024 22:46, Lawrence D'Oliveiro wrote:
    On Sun, 4 Feb 2024 14:01:08 +0000, bart wrote:

    But it does seem as though Unix was a breeding ground for multitudinous
    developer tools. Plus there was little demarcation between user
    commands, C development tools, C libraries and OS.

    Somebody who's used to that environment is surely going to have trouble
    on an OS like MSDOS or Windows where they have to start from nothing.
    Even if most of the tools are now free.

    Yet it seems like even someone like you, who is supposed to be “used to” Windows rather than *nix, still has the same trouble.


    *I* don't have trouble. Only with other people's projects originating
    from Linux.

    Apparently, on that OS, nobody knows how to build a program given only
    the C source files, and a C compiler.

    Or if they do, they are unwilling to part with that information. It is encrypted into a makefile, or worse.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Malcolm McLean on Mon Feb 5 01:45:38 2024
    XPost: comp.unix.programmer

    On Mon, 5 Feb 2024 00:07:33 +0000, Malcolm McLean wrote:

On Windows you can't assume that the end user will be interested in development or have any development tools available.

    Worse than that, the assumption is that development will be done in a proprietary, self-contained IDE, primarily sourced from a single vendor.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kenny McCormack@21:1/5 to Keith.S.Thompson+u@gmail.com on Mon Feb 5 06:00:59 2024
    XPost: comp.unix.programmer

    In article <87jznjle00.fsf@nosuchdomain.example.com>,
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
    On 2/4/2024 5:45 PM, Lawrence D'Oliveiro wrote:
    On Mon, 5 Feb 2024 00:07:33 +0000, Malcolm McLean wrote:
    On Windows you can't assume that the end user will be interested in
    development or have any develoment tools available.
    Worse than that, the assumption is that development will be done in a
    proprietary, self-contained IDE, primarily sourced from a single
    vendor.

    https://youtu.be/i_6zPIWQaUI ;^)

If you must post random YouTube links, can you at least include a 1-line description so we don't waste *too* much time?

    Better yet, if you could cut down on the followups that don't add
    anything relevant, I for one would appreciate it.

    Nice to see you back, Keith. I've been worried about you.

    "14 A View to a Kill Opening Theme 1985"

    --
    One should not believe everything posted to USENET.

    - Aharon (Arnold) Robbins arnold AT skeeve DOT com -
    - 4/15/19 -

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Malcolm McLean on Mon Feb 5 13:02:52 2024
    XPost: comp.unix.programmer

    On 05/02/2024 01:07, Malcolm McLean wrote:
    On 04/02/2024 22:46, Lawrence D'Oliveiro wrote:
    On Sun, 4 Feb 2024 14:01:08 +0000, bart wrote:

    But it does seem as though Unix was a breeding ground for multitudinous
    developer tools. Plus there was little demarcation between user
    commands, C development tools, C libraries and OS.

    Somebody who's used to that environment is surely going to have trouble
    on an OS like MSDOS or Windows where they have to start from nothing.
    Even if most of the tools are now free.

Yet it seems like even someone like you, who is supposed to be “used to”
Windows rather than *nix, still has the same trouble. So maybe it’s not
about being “used to” *nix at all; there really is something inherent in
the fundamental design of that environment that makes development work
    easier.
On Windows you can't assume that the end user will be interested in development or have any development tools available. Or that he'll be
    able to do anything other than the most basic installation. It's a
    consumer platform.

    It /is/ a consumer platform, yes. And because it has no standard ways
    to build software, and no one (approximately) using it wants to build
    software on it, the norm is to distribute code in binary form for
    Windows. That works out fine for almost all Windows users. That
    includes libraries - even C programmers on Windows don't want to build "libjpeg" or whatever, they want a DLL.

    And thus there is much less effort put into making projects easy to
    build on Windows. People on Windows fall mostly into two categories -
    those that neither know nor care about building software and want
    ready-to-use binaries (that's almost all of them), and people who do development work and are willing and able to invest time and effort
    reading the readmes and install.txt files, looking at the structure of
    the code, running the makefiles or CMakes, importing the project into
    their favourite IDE, and whatever else.

    It's not that Linux software developers go out of their way to annoy
    Windows developers (well, /some/ do, but not many). But on Linux, and
    widening to other modern *nix systems, there are standard ways to build software. You know the people building it will have make, and gcc (or a compatible compiler with many of the same extensions and flags, like
    clang or icc), and development versions of countless libraries either
    installed or a quick apt-get away. On Windows, however, they might have
    MSVC, or cygwin, or mingw64, or TDM gcc, or lccwin, or tcc, or Borland
    C++ builder. They might have a "make", but it could be MS's more
    limited "nmake" version.

    People who do their work and development on Linux can't be expected to
    try to support every Windows setup. People who are making open source
    software voluntarily (as distinct from people paid to do so) certainly
    can't. It makes more sense for groups who specialise in porting and
    building software in Windows to do that work for many projects, rather
    than the original project developers doing that work. Thus groups like
    msys2, TDM, and others take open source projects and make Windows
    binaries for them.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to bart on Mon Feb 5 13:42:24 2024
    XPost: comp.unix.programmer

    On 05/02/2024 00:11, bart wrote:
    On 04/02/2024 21:51, David Brown wrote:
    On 04/02/2024 21:18, bart wrote:

    BOTH methods can be problematic if you deliberately or accidentally
    mix up file types and extensions.

    So stop deliberately being a screw-up.


    That was carried over to DOS's 8.3 filename.

    At a time when real OS's had moved beyond that.

    When was that? The IBM PC came out in 1981. The DEC machines I mentioned
    were still in use. Oh, you mean Unix was the One and Only Real OS? I get
    it.


    There have been lots of OS's. MS DOS was - from the beginning - a hack
    on a simple limited OS.

    Older systems, or systems for more limited hardware, had limits on their filenames - that is reasonable and makes sense. By the time of the IBM
    PC, that should not have been necessary - at least not /so/ short names.
    The all-caps names (which then led to the silly case insensitive
    behaviour) had no excuse at all. And /relying/ on file extensions for
    critical things like executable type was never smart. (File extensions
    for user convenience is fine as a useful convention.)

      What a stupid decision
    - it's what you expect when you remember that MS DOS was written as a
    quick hack on a system called "quick and dirty OS" as a way for MS to
    con its customers.

    Funny you should fixate on that, and not on the idea of a business
    computer running on a 4.8MHz 8088 processor with a crappy 'CGA' video
    board design that would barely pass as a student assignment. (Oh, that
    was IBM and not MS, and it is only MS you want to shit all over.)

    Is it "funny" that in discussion about operating systems, I talked about
    the operating system - not the hardware? I agree that the IBM PC
    hardware was pathetic for its time - for a start, it should have been,
as the designers wanted, built around a 68000 CPU.


    However it brought business computing to the masses. Where were the
    machines running your beloved Unix?

    They were doing all the important work. They still are.

(And I certainly don't think Unix - either of that time or its modern descendants - is perfect. But you only see everything as black or
    white, which is quite sad and pathetic.)


    I believe you were working on Spectrums then or some such machines; what filenames did /they/ allow, or did they not actually have a file system?


    There was some file system on microdrives - otherwise, no, no file system.

    I also worked with BBC Micros - now there was an OS that was extremely
    well designed.

    You're being unjust on the people working on all this stuff at that
    period, trying to make things work with small processors, tiny amounts
    of memory and limited storage.


    No, I just think they could have done a lot better with what they had.



    This dot then was really a virtual separator that did not need
    storing, any more than you need to store the dot in the ieee754
    representation of 73.945.

    It has given very little trouble, and has the huge advantage that you
    can have default extensions on input files with no ambiguity.

    Let me guess: Unix allows you to have numbers like 73.945.112, while
    73. is a different value from 73? Cool.


    Um, you remember this is comp.lang.c ?  "73" is an integer constant,
    "73." is a double.


    Yes. But the question is whether the "." separating out the two parts of
    a filename should be actually stored, as a '.' character taking up extra space.

    I understand how DOS and its descendants handle this. I understand how
    almost every other file system and OS handles this. I know which is better.


It made perfect sense not to store it at the time. But Unix made a decision
    at the time to store it literally, which could also have been thought
    crass.

    In hindsight, with filenames now allowing arbitrary dots, they made the
    right decision. But that was more due to luck. And probably not having
    to make concessions to running on low-end hardware.

    You however would try and argue that some great foresight was
    deliberately exercised and that the people behind those other systems
    made a dumb decision.

    I'm sorry but you weren't there.


    I appreciate that many decisions were the best choice at the time, and afterwards you are stuck with the consequences of that. Most of what I
    think is bad in C falls into that category.

    But some decisions were also clearly inferior at the time they were made.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to David Brown on Mon Feb 5 14:59:52 2024
    XPost: comp.unix.programmer

    On 05.02.2024 13:42, David Brown wrote:
    On 05/02/2024 00:11, bart wrote:

    [...] Oh, you mean Unix was the One and Only Real OS? I get it.

    (Obviously not.)

    There have been lots of OS's. MS DOS was - from the beginning - a hack
    on a simple limited OS.

    And MS marketing was able to foster a community who could easily be
    brainwashed to find it natural that SW is so buggy and unreliable.
And a few (of the many) flaws, deficiencies, and bugs can be clumsily
    worked around. Countless "experts" were arising from that who have
    specialized "guru wisdom" about the magic to work around some of these
    well known flaws. Blue screens were common. A standard tip - and even
    still in use nowadays! - was and is "Reboot your system.", and if that
    doesn't help then "Reinstall the software.", or the "Reinstall the OS"
    if nothing helped, and finally "Wait for version N+1 of this OS, there
    will be all good then." - and of course it never was.


    [...]
    The all-caps names (which then led to the silly case insensitive
    behaviour) had no excuse at all.

    All caps was initially a historic restriction of many OSes due to the
    limited character sets. At some point working case sensitivity became
    possible and supported; MS was not amongst the first here. Later the
    need for non-ASCII and internationalization became prevalent and it
    became technically possible to support that. Meanwhile we have multi-
    lingual computing. For certain user front-ends of applications it is
    more useful to not distinguish case; see Google search for a prominent
    example. For other application (or OS) interfaces it is necessary (or
    at least much desired) to support not only case sensitivity but also
    regular expression searches. Unix systems supported that inherently.
    In other contexts it needed decades to even consider supporting a
switch to activate such a feature. Later, applications supported their own
    methods, for example to include or exclude words in searches.

    And /relying/ on file extensions for
    critical things like executable type was never smart. (File extensions
    for user convenience is fine as a useful convention.)

    [...]

    Is it "funny" that in discussion about operating systems, I talked about
    the operating system - not the hardware? I agree that the IBM PC
    hardware was pathetic for its time - for a start, it should have been,
    as the designers wanted, built around an 68000 cpu.

    One of the best and outstanding pieces of hardware from that time
    was (IMO) the IBM PC's "Model M" keyboard. (I'm still typing on a
    Model M clone.)


    You're being unjust on the people working on all this stuff at that
    period, trying to make things work with small processors, tiny amounts
    of memory and limited storage.

    (And I heard at that time that 640k would be more than enough. LOL.)

    No, I just think they could have done a lot better with what they had.

    Indeed. (But they refused. It's easier to manipulate a user base by
    the marketing division than fix inherently broken things.)


    Let me guess: Unix allows you to have numbers like 73.945.112, while
    73. is a different value from 73? Cool.

    Again "guessing"? Or just making up things? Or creating a straw man?"

    Frankly, I don't understand what argument you want to construct here,
    Bart.

    73.945.112 seems obviously to be a standard representation of a number
    with eight figures, using one of many internationally used separators.
While some computer languages indeed allow you to process "73 945 112" and
also "73945112", you cannot expect such legibility support. Mostly, if
    at all, you may have the option to choose decimals after the "comma"
    only, as in 123.34$ or 123,45€.

    (But your intention here was most likely anyway just a red herring.)


    Um, you remember this is comp.lang.c ? "73" is an integer constant,
    "73." is a double.


    Yes. But the question is whether the "." separating out the two parts
    of a filename should be actually stored, as a '.' character taking up
    extra space.

    Filenames consisting of "two parts" is a fundamental misconception.


    I understand how DOS and its descendants handle this. I understand how almost every other file system and OS handles this. I know which is
    better.

    [...]

    In hindsight, with filenames now allowing arbitrary dots, they made
    the right decision.

    (What a bright enlightenment. Great.)

    But that was more due to luck. And probably not
    having to make concessions to running on low-end hardware.

    (And again some stupid continuation; random guesses based on opinion.)

    [...]

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to bart on Mon Feb 5 14:43:50 2024
    XPost: comp.unix.programmer

    bart <bc@freeuk.com> writes:
    On 04/02/2024 21:51, David Brown wrote:
    On 04/02/2024 21:18, bart wrote:

    BOTH methods can be problematic if you deliberately or accidentally
    mix up file types and extensions.

    So stop deliberately being a screw-up.

    I was replying initially to somebody claiming that being able to do:

    cc prog.a
    cc prog.b
    cc prog.c

    and marshalling the file into the right tool was not only some great >achievement only possible on Linux, but also desirable.

Nobody other than you has made such a claim.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Chris M. Thomasson on Mon Feb 5 06:48:16 2024
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:

    On 2/4/2024 8:41 PM, Keith Thompson wrote:

    [...]

    Better yet, if you could cut down on the followups that don't add
    anything relevant, I for one would appreciate it.

    For what it's worth, I second Keith's request, and strenuously
    support it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Malcolm McLean on Mon Feb 5 14:48:33 2024
    Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:
    On 30/01/2024 08:17, David Brown wrote:

    The build system isn't really about specifying an executable from
sources. If that was all there was to it, I'd probably have been told to set
    it up myself. It's more about giving people access to sources and
    ensuring that they are consistent and the right version is being used,

    Now you are changing the topic from build system to source code control
    system, git.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Janis Papanagnou on Mon Feb 5 15:45:07 2024
    XPost: comp.unix.programmer

    On 05/02/2024 13:59, Janis Papanagnou wrote:
    On 05.02.2024 13:42, David Brown wrote:
    On 05/02/2024 00:11, bart wrote:

    [...] Oh, you mean Unix was the One and Only Real OS? I get it.

    (Obviously not.)

    There have been lots of OS's. MS DOS was - from the beginning - a hack
    on a simple limited OS.

    And MS marketing was able to foster a community who could easily be brainwashed to find it natural that SW is so buggy and unreliable.
And a few (of the many) flaws, deficiencies, and bugs can be clumsily
    worked around. Countless "experts" were arising from that who have specialized "guru wisdom" about the magic to work around some of these
    well known flaws. Blue screens were common. A standard tip - and even
    still in use nowadays! - was and is "Reboot your system.", and if that doesn't help then "Reinstall the software.", or the "Reinstall the OS"
    if nothing helped, and finally "Wait for version N+1 of this OS, there
    will be all good then." - and of course it never was.

    Yeah, because no other OS has ever required a hard reboot. I've had to
    do a hard power-off and power-on cycle endless times on smart TVs,
    phones and tablets. None of them ran Windows.



    [...]
    The all-caps names (which then led to the silly case insensitive
    behaviour) had no excuse at all.

    All caps was initially a historic restriction of many OSes due to the
    limited character sets. At some point working case sensitivity became possible and supported; MS was not amongst the first here. Later the
    need for non-ASCII and internationalization became prevalent and it
    became technically possible to support that. Meanwhile we have multi-
    lingual computing. For certain user front-ends of applications it is
    more useful to not distinguish case; see Google search for a prominent example.

    Pretty much every front-end not aimed at technical users is
    case-insensitive.

    Most people will also come across case-sensitive filenames simply
    because the underlying *nix file system is exposed.

    Even then, sensible steps have been taken to ensure that main parts of
    URLs and email addresses are case-insensitive. There it is easy to see
    what chaos could ensue otherwise.


    Filenames consisting of "two parts" is a fundamental misconception.

    File specs can consist of multiple parts. On OSes that used drive
    letters like:

    A:filename.ext

then that has 3 parts. It would be ludicrous to store that "A:" inside a directory. Especially on media that then ends up as drive B:.

    Even in a file-spec like this:

    /a/b/c/filename.ext

    Is the full string "/a/b/c/filename.ext" stored in the directory entry
    for this file, or is it split up into different components?

    I don't know; you tell me. The former looks unwieldy.

    On some OSes the filetype was an attribute, stored separately from the filename, and displayed with a "." separator.

    In the same way, with these qualified names in some language source code:

    a.b.c
    a::b::c

    it is extremely unlikely that those "." and "::" symbols actually form
    part of the identifier for each.

    Meanwhile I need to use a small library of routines to split filespecs
    up into path, base file, and extension.
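
A sketch of what such routines can look like in C (the name and buffer
conventions are my own illustration; note it treats only the
right-most dot after the last separator as starting the extension):

    #include <stdio.h>
    #include <string.h>

    /* Split a filespec into path, base name and extension.
       Caller supplies buffers at least as long as 'spec'. */
    static void splitspec(const char *spec, char *path, char *base, char *ext)
    {
        const char *sep1 = strrchr(spec, '/');
        const char *sep2 = strrchr(spec, '\\');
        const char *sep  = sep1;
        if (sep2 != NULL && (sep == NULL || sep2 > sep))
            sep = sep2;                    /* last separator of either kind */

        const char *name = sep ? sep + 1 : spec;
        const char *dot  = strrchr(name, '.');    /* right-most dot only */

        sprintf(path, "%.*s", (int)(name - spec), spec);
        if (dot != NULL) {
            sprintf(base, "%.*s", (int)(dot - name), name);
            strcpy(ext, dot + 1);
        } else {
            strcpy(base, name);
            ext[0] = '\0';
        }
    }

    int main(void)
    {
        char path[260], base[260], ext[260];
        splitspec("/a/b/c/abc.def.ghi", path, base, ext);
        printf("path=%s base=%s ext=%s\n", path, base, ext);
        /* prints: path=/a/b/c/ base=abc.def ext=ghi */
        return 0;
    }

On this reading "abc.def.ghi" has the extension "ghi", which also
bears on the question asked further down.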


    I understand how DOS and its descendants handle this. I understand how
    almost every other file system and OS handles this. I know which is
    better.

    [...]

    In hindsight, with filenames now allowing arbitrary dots, they made
    the right decision.

    (What a bright enlightenment. Great.)

    But that was more due to luck. And probably not
    having to make concessions to running on low-end hardware.

    (And again some stupid continuation; random guesses based on opinion.)

    But it might well be perfectly true; you don't know either. So it is
    plausible.

    Based on my examples above, having notional "." and "/" symbols seemed
    the sensible thing to do. It is quite possible that Unix (remember this
was part of the same group that made all those wise decisions about C)
    really did make that crass decision to actually store dots as part of
    the filename.

    BTW on Unix-like file systems, is a filename like "abc.def.ghi"
    considered to have the extension "def.ghi", or "ghi"? If the latter,
    then I take it that extensions can't have embedded dots?

    On Windows, the extension is "ghi". If that is the case on Linux too,
    then that treats the right-most dot specially.

    But I get it: you deeply despise Windows, MSDOS, MS, and you hate me for
    being an upstart.





    [...]

    Janis


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From candycanearter07@21:1/5 to Chris M. Thomasson on Mon Feb 5 10:41:11 2024
    XPost: comp.unix.programmer

    On 2/4/24 18:10, Chris M. Thomasson wrote:
    On 2/4/2024 4:07 PM, Malcolm McLean wrote:
    On 04/02/2024 22:46, Lawrence D'Oliveiro wrote:
    On Sun, 4 Feb 2024 14:01:08 +0000, bart wrote:

But it does seem as though Unix was a breeding ground for multitudinous
developer tools. Plus there was little demarcation between user
commands, C development tools, C libraries and OS.

Somebody who's used to that environment is surely going to have trouble
on an OS like MSDOS or Windows where they have to start from nothing.
Even if most of the tools are now free.

Yet it seems like even someone like you, who is supposed to be “used to”
Windows rather than *nix, still has the same trouble. So maybe it’s not
about being “used to” *nix at all; there really is something inherent in
the fundamental design of that environment that makes development work
easier.
On Windows you can't assume that the end user will be interested in
development or have any development tools available.

    Fwiw, I have seen Linux users that have no intent to program anything at
    all.


    But the tools are *still preinstalled*, so installers can definitely
    rely on compiling stuff.


    Or that he'll be able to do anything other than the most basic
    installation. It's a consumer platform.


    --
    user <candycane> is generated from /dev/urandom

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From candycanearter07@21:1/5 to Chris M. Thomasson on Mon Feb 5 10:48:03 2024
    XPost: comp.unix.programmer

    On 2/4/24 16:02, Chris M. Thomasson wrote:
    On 2/4/2024 9:48 AM, David Brown wrote:
    [...]
    In Windows, it is sometimes part of a file name (when it is not the
    last period in the name), sometimes a magical character that appears
    or disappears (when the file ends in a period), and sometimes it
    delimits a file extension.

    picture_of_a_cow____________________this_is_not_a_virus_really.jpeg.gif.exe

    lol.

    Windows making such a big deal over file extensions and outright hiding
    them is silly IMO
    --
    user <candycane> is generated from /dev/urandom

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kees Nuyt@21:1/5 to bart on Mon Feb 5 17:57:53 2024
    XPost: comp.unix.programmer

    On Sun, 4 Feb 2024 01:19:53 +0000, bart <bc@freeuk.com> wrote:

    Everybody says use makefiles; well they don't work. They tend to be
    heavily skewed towards the use of gcc. My compiler isn't gcc.

    By default a lot of builtin "implicit rules" determine which
    program to use to make a .o from a .c etc. etc., and yes, that
    is GCC-centric.

    However, it is possible to remove all of those rules by calling
    make as
    make -rR
    meaning:
    -r, --no-builtin-rules
    -R, --no-builtin-variables
or by writing an empty
    .SUFFIXES:
    section in the Makefile.

    Then, provide an include file "myrules.mk" with your own rules.

    <https://www.gnu.org/software/make/manual/make.html#Old_002dFashioned-Suffix-Rules>

Something like:

    %.o : %.c
            mcc $< -o $@

(the recipe line must begin with a tab character) etc., and include
that in your Makefile with

    include myrules.mk

    <https://www.gnu.org/software/make/manual/make.html#Including-Other-Makefiles>

    I apologize in advance if I missed a post in this huge thread
    that already hinted you for that.

    --
    Regards,
    Kees Nuyt

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From candycanearter07@21:1/5 to Malcolm McLean on Mon Feb 5 11:36:32 2024
    XPost: comp.unix.programmer

    On 2/5/24 11:29, Malcolm McLean wrote:
    On 05/02/2024 16:48, candycanearter07 wrote:
    On 2/4/24 16:02, Chris M. Thomasson wrote:
    On 2/4/2024 9:48 AM, David Brown wrote:
    [...]
    In Windows, it is sometimes part of a file name (when it is not the
    last period in the name), sometimes a magical character that appears
    or disappears (when the file ends in a period), and sometimes it
    delimits a file extension.

picture_of_a_cow____________________this_is_not_a_virus_really.jpeg.gif.exe
    lol.

    Windows making such a big deal over file extensions and outright
    hiding them is silly IMO

    Hiding the extension is a complete nightmare. Unless the automatic recognition system works perfectly, you can end up with a file you can't
    use.

Or they could just use the magic number as a fallback.
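
A hedged sketch of such a fallback in C - the signature table is my
own small selection of well-known magic numbers, not any particular
tool's:

    #include <stdio.h>
    #include <string.h>

    /* Return a type name based on the first bytes of the file,
       or NULL if no known signature matches. */
    static const char *sniff_type(const char *filename)
    {
        static const struct {
            const char *magic; size_t len; const char *type;
        } sigs[] = {
            { "MZ",                 2, "Windows executable" },
            { "\x7f" "ELF",         4, "ELF executable"     },
            { "\x89PNG\r\n\x1a\n",  8, "PNG image"          },
            { "GIF8",               4, "GIF image"          },
            { "%PDF",               4, "PDF document"       },
        };
        unsigned char buf[8];
        FILE *f = fopen(filename, "rb");
        if (f == NULL)
            return NULL;
        size_t n = fread(buf, 1, sizeof buf, f);
        fclose(f);
        for (size_t i = 0; i < sizeof sigs / sizeof sigs[0]; i++)
            if (n >= sigs[i].len && memcmp(buf, sigs[i].magic, sigs[i].len) == 0)
                return sigs[i].type;
        return NULL;    /* fall back to the extension, or give up */
    }

    int main(int argc, char **argv)
    {
        if (argc > 1) {
            const char *t = sniff_type(argv[1]);
            printf("%s: %s\n", argv[1], t ? t : "unknown");
        }
        return 0;
    }

This is roughly what file(1) does on *nix, with a much larger table.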
    --
    user <candycane> is generated from /dev/urandom

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jim Jackson@21:1/5 to bart on Mon Feb 5 17:37:02 2024
    XPost: comp.unix.programmer

    On 2024-02-04, bart <bc@freeuk.com> wrote:
    On 04/02/2024 22:46, Lawrence D'Oliveiro wrote:
    On Sun, 4 Feb 2024 14:01:08 +0000, bart wrote:

    But it does seem as though Unix was a breeding ground for multitudinous
    developer tools. Plus there was little demarcation between user
    commands, C development tools, C libraries and OS.

    Somebody who's used to that environment is surely going to have trouble
    on an OS like MSDOS or Windows where they have to start from nothing.
    Even if most of the tools are now free.

Yet it seems like even someone like you, who is supposed to be “used to” Windows rather than *nix, still has the same trouble.


    *I* don't have trouble. Only with other people's projects originating
    from Linux.

    Apparently, on that OS, nobody knows how to build a program given only
    the C source files, and a C compiler.

    Programmers and Developers do.

    Or if they do, they are unwilling to part with that information. It is encrypted into a makefile, or worse.

    Encrypted? I always thought makefiles were plain text? You can read them
    with less^H^H^H^H "more" - which if memory serves, is also a DOS command?


    As an aside I skip most of this rubbish and just dip in occasionally.
    But I think I have some measure of where bart is coming from. Some
    people come at things not as they are, but as they wish they were given
    their background.

    I'll go back to lurking and just dipping into this if I've time to waste.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to no@thanks.net on Mon Feb 5 18:13:58 2024
    XPost: comp.unix.programmer

    On 2024-02-05, candycanearter07 <no@thanks.net> wrote:
    But the tools are *still preinstalled*, so installers can definitely
    rely on compiling stuff.

    No, they aren't. It's common for devel tools not to be part of the base
    system of a distro.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Malcolm McLean on Mon Feb 5 18:42:30 2024
    XPost: comp.unix.programmer

    On 05/02/2024 18:03, Malcolm McLean wrote:
    On 05/02/2024 17:37, Jim Jackson wrote:
    On 2024-02-04, bart <bc@freeuk.com> wrote:
    On 04/02/2024 22:46, Lawrence D'Oliveiro wrote:
    On Sun, 4 Feb 2024 14:01:08 +0000, bart wrote:

But it does seem as though Unix was a breeding ground for multitudinous
developer tools. Plus there was little demarcation between user
commands, C development tools, C libraries and OS.

Somebody who's used to that environment is surely going to have trouble
on an OS like MSDOS or Windows where they have to start from nothing.
Even if most of the tools are now free.

Yet it seems like even someone like you, who is supposed to be “used to”
Windows rather than *nix, still has the same trouble.


    *I* don't have trouble. Only with other people's projects originating
    from Linux.

    Apparently, on that OS, nobody knows how to build a program given only
    the C source files, and a C compiler.

    Programmers and Developers do.

    Or if they do, they are unwilling to part with that information. It is
    encrypted into a makefile, or worse.

    Encrypted? I always thought makefiles were plain text? You can read them
    with less^H^H^H^H "more" - which if memory serves, is also a DOS command?

    Here's one on my machine I selected almost at random

    !ifndef BCROOT
    BCROOT=$(MAKEDIR)\..
    !endif

    BCC32   = $(BCROOT)\bin\Bcc32.exe

    IDE_LinkFLAGS32 =  -L$(BCROOT)\LIB
    COMPOPTS= -O2 -tWC -tWM- -Vx -Ve -D_NO_VCL; -I../../../../; -L..\..\build\bcb5


    timer.exe : regex_timer.cpp
      $(BCC32) @&&|
     $(COMPOPTS) -e$@ regex_timer.cpp
    |


Whilst some of this is pretty clear, it's not at all obvious what the
    second half of the line
    $(BCC32) @&&|
    is meant to mean.



    I thought of some project and decided to look at NASM sources, choosing
    2.15 from a few years ago as it might be simpler.

    There was no makefile, only makefile.in of 1000 lines. If I type 'make',
    it says no targets found.

    There was also 'configure' of 11,000 lines, so I switched to WSL. Now
    typing ./configure shows:

    -bash: ./configure: /bin/sh^M: bad interpreter: No such file or directory

    So it doesn't work on Linux either. If I look at INSTALL, it actually
    says use "sh configure". That now says:

    : not found14:
    configure: 30: Syntax error: newline unexpected (expecting ")")

    The entire project is only 106 .c files.

    If I try compiling a random .c file, it complains of a missing header.

    This is all quite typical.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Kees Nuyt on Mon Feb 5 19:11:51 2024
    XPost: comp.unix.programmer

    Kees Nuyt <k.nuyt@nospam.demon.nl> writes:
    On Sun, 4 Feb 2024 01:19:53 +0000, bart <bc@freeuk.com> wrote:

    Everybody says use makefiles; well they don't work. They tend to be
    heavily skewed towards the use of gcc. My compiler isn't gcc.

    By default a lot of builtin "implicit rules" determine which
    program to use to make a .o from a .c etc. etc., and yes, that
    is GCC-centric.

I wouldn't call them GCC-centric, for the most part it is POSIX-centric,
    i.e.
    CC = cc

    Although there is
    CXX = g++

    The built-in recipes are pretty generic.

    %.o: %.c
    # recipe to execute (built-in):
    $(COMPILE.c) $(OUTPUT_OPTION) $<

    %.cc:

    %: %.cc
    # recipe to execute (built-in):
    $(LINK.cc) $^ $(LOADLIBES) $(LDLIBS) -o $@

    %.o: %.cc
    # recipe to execute (built-in):
    $(COMPILE.cc) $(OUTPUT_OPTION) $<

    %.C:

    %: %.C
    # recipe to execute (built-in):
    $(LINK.C) $^ $(LOADLIBES) $(LDLIBS) -o $@

    %.o: %.C
    # recipe to execute (built-in):
    $(COMPILE.C) $(OUTPUT_OPTION) $<

    %.cpp:

    %: %.cpp
    # recipe to execute (built-in):
    $(LINK.cpp) $^ $(LOADLIBES) $(LDLIBS) -o $@

    %.o: %.cpp
    # recipe to execute (built-in):
    $(COMPILE.cpp) $(OUTPUT_OPTION) $<


    You can always override the variable on the make command line

    $ make CC=bcc

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Malcolm McLean on Mon Feb 5 19:16:42 2024
    XPost: comp.unix.programmer

    Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:
    On 05/02/2024 17:37, Jim Jackson wrote:
    On 2024-02-04, bart <bc@freeuk.com> wrote:
    On 04/02/2024 22:46, Lawrence D'Oliveiro wrote:
    On Sun, 4 Feb 2024 14:01:08 +0000, bart wrote:

But it does seem as though Unix was a breeding ground for multitudinous
developer tools. Plus there was little demarcation between user
commands, C development tools, C libraries and OS.

Somebody who's used to that environment is surely going to have trouble
on an OS like MSDOS or Windows where they have to start from nothing.
Even if most of the tools are now free.

Yet it seems like even someone like you, who is supposed to be “used to”
Windows rather than *nix, still has the same trouble.


    *I* don't have trouble. Only with other people's projects originating
    from Linux.

    Apparently, on that OS, nobody knows how to build a program given only
    the C source files, and a C compiler.

    Programmers and Developers do.

    Or if they do, they are unwilling to part with that information. It is
    encrypted into a makefile, or worse.

    Encrypted? I always thought makefiles were plain text? You can read them
    with less^H^H^H^H "more" - which if memory serves, is also a DOS command?

    Here's one on my machine I selected almost at random

    !ifndef BCROOT
    BCROOT=$(MAKEDIR)\..
    !endif

    BCC32 = $(BCROOT)\bin\Bcc32.exe

    IDE_LinkFLAGS32 = -L$(BCROOT)\LIB
COMPOPTS= -O2 -tWC -tWM- -Vx -Ve -D_NO_VCL; -I../../../../; -L..\..\build\bcb5


    timer.exe : regex_timer.cpp
    $(BCC32) @&&|
    $(COMPOPTS) -e$@ regex_timer.cpp
    |

    The recipes are executed using the host shell.

    That must be one of barts makefiles.

    As it is shown, it is not a valid make recipe for any unix or
    linux shell.

    timer.exe: regex_timer.cpp
    $(BCC32) $(COMPOPTS) -e $@ regex_timer.cpp

    would be more likely, but the parameter to '-e' is completely
interpreted by whatever program is specified by the BCC32 variable.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Keith Thompson on Mon Feb 5 21:31:34 2024
    XPost: comp.unix.programmer

    On 05/02/2024 21:25, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    [...]
    There was also 'configure' of 11,000 lines, so I switched to WSL. Now
    typing ./configure shows:

    -bash: ./configure: /bin/sh^M: bad interpreter: No such file or directory

    It looks like you've downloaded the source as a .zip file, which was
    packaged incorrectly. I've reported this to their mailing list. Try downloading the .tar.gz file instead.


Well, this was the 2020 version (I thought it might be slightly less challenging than the latest, and not so old that it had its own problems).

    Maybe a newer one is fixed.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kenny McCormack@21:1/5 to Keith.S.Thompson+u@gmail.com on Mon Feb 5 21:57:46 2024
    In article <8734u6lhop.fsf@nosuchdomain.example.com>,
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
    On 2/5/2024 6:48 AM, Tim Rentsch wrote:
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> writes:
    On 2/4/2024 8:41 PM, Keith Thompson wrote:
    [...]

    Better yet, if you could cut down on the followups that don't add
    anything relevant, I for one would appreciate it.
    For what it's worth, I second Keith's request, and strenuously
    support it.

    I was trying to lighten the mood, so to speak. Well, it backfired on me. ;^o

Does that mean you're going to stop? You're just about to land in my killfile, but I'm willing to reconsider. You do sometimes post relevant content, but it's just not worth digging through the noise.

    The best possible thing that can happen to a CLC poster is to be killfiled
    by Keith - as I was (and still am) long ago. Then you are spared Keith's incessant bitching and whining about your posts.

    I am hoping Chris is lucky enough to be given this honor.

    --
    Meatball Ron wants to replace the phrase "climate change" with the phrase "energy dominance" in policy discussions.

    Yeah, like that makes a lot of sense...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Keith Thompson on Mon Feb 5 22:29:02 2024
    XPost: comp.unix.programmer

    On 2024-02-05, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    bart <bc@freeuk.com> writes:
    [...]
    There was also 'configure' of 11,000 lines, so I switched to WSL. Now
    typing ./configure shows:

    -bash: ./configure: /bin/sh^M: bad interpreter: No such file or directory

    It looks like you've downloaded the source as a .zip file, which was
    packaged incorrectly.

    Or, no?

    I'm guessing that a .zip file is intended for Windows users and so it has
    text files in CR-LF format.

    It might be intended to be built in the MinGW environment.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to bart on Mon Feb 5 22:46:01 2024
    XPost: comp.unix.programmer

    On Mon, 5 Feb 2024 15:45:07 +0000, bart wrote:

    Pretty much every front-end not aimed at technical users is
    case-insensitive.

    Some Linux filesystems offer this option, should you want to enable it <https://lwn.net/Articles/784041/>.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to bart on Mon Feb 5 22:47:52 2024
    XPost: comp.unix.programmer

    On 2024-02-05, bart <bc@freeuk.com> wrote:
    -bash: ./configure: /bin/sh^M: bad interpreter: No such file or directory

    This indicates that the thing you're trying to build was converted
    to Windows format. See that ^M? It's a carriage return; what's that
    doing in a POSIX shell script? Someone likely did that on purpose.
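
Stripping those carriage returns needs only a trivial filter; a
minimal dos2unix-style sketch in C (real conversion tools handle edge
cases this ignores):

    #include <stdio.h>

    /* Copy stdin to stdout, dropping every carriage return.
       Usage: fixcr < configure.crlf > configure */
    int main(void)
    {
        int c;
        while ((c = getchar()) != EOF)
            if (c != '\r')
                putchar(c);
        return 0;
    }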

    Firstly, projects with ./configure shell scripts are often not ported to Windows at all. If that is the case, you could be the first one trying
    that. In that situation, the best bet is Cygwin. (Or WSL2, but that's
    basically not Windows.)

    The .zip file containing files converted to Windows format suggests
    that the package is ported to Windows, using some build environment that
    uses CR-LF files like MinGW.

    Your best bet is to consult the project and ask them, how is it ported
    to Windows? Then do it their way. Otherwise you're on your own.

    Another pattern that occurs is that FOSS projects which port their code
    to Windows themselves provide binaries for Windows, so they don't expect
    users to build those. Thus their procedure for building on Windows might not
    be well documented.

    This is all quite typical.

You not knowing where to get a clue and generally being lost
    at sea with no rudder or sail?

    Don't you have some nephew or niece in the fifth grade who could
    help with this?

    When I go to the NASM site (https://www.nasm.us) there is a clear
    Download link.

    In the download link, there are versioned and dated release
    directories.

    In the most recent one, there are Win32 and Win64 subdirectories.

    There is a file

    nasm-2.16.02rc9-installer-x64.exe

    Doh?

They've gone out of their way to support Windows users with an executable installer.

    If you want to know how they built that, they may have documentation
    elsewhere. There might be instructions in the accompanying .zip or else
you just have to ask in the mailing list.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to David Brown on Mon Feb 5 22:51:12 2024
    XPost: comp.unix.programmer

    On Mon, 5 Feb 2024 13:02:52 +0100, David Brown wrote:

    It /is/ a consumer platform, yes. And because it has no standard ways
    to build software, and no one (approximately) using it wants to build software on it, the norm is to distribute code in binary form for
    Windows. That works out fine for almost all Windows users. That
    includes libraries - even C programmers on Windows don't want to build "libjpeg" or whatever, they want a DLL.

    But without integrated package management, how do you keep it all up to
    date? If two separate apps use the same library, do they each end up with
    their own version, or do they share one version? Does each app have to run
    its own periodic background updater task to tell you there’s a new version available?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Kaz Kylheku on Tue Feb 6 00:03:50 2024
    XPost: comp.unix.programmer

    On 05/02/2024 22:47, Kaz Kylheku wrote:
    On 2024-02-05, bart <bc@freeuk.com> wrote:
    -bash: ./configure: /bin/sh^M: bad interpreter: No such file or directory

    This indicates that the thing you're trying to build was converted
    to Windows format. See that ^M? It's a carriage return; what's that
    doing in a POSIX shell script? Someone likely did that on purpose.

    Firstly, projects with ./configure shell scripts are often not ported to Windows at all. If that is the case, you could be the first one trying
    that. In that situation, the best bet is Cygwin. (Or WSL2, but that's basically not Windows.)

    The .zip file containing files converted to Windows format suggests
    that the package is ported to Windows, using some build environment that
    uses CR-LF files like MinGW.

    Your best bet is to consult the project and ask them, how is it ported
    to Windows? Then do it their way. Otherwise you're on your own.

    Another pattern that occurs is that FOSS projects which port their code
    to Windows themselves provide binaries for Windows, so they don't expect users to build those. Thus their procedure for building on Windows might not be well documented.

    This is all quite typical.

You not knowing where to get a clue and generally being lost
    at sea with no rudder or sail?

    Don't you have some nephew or niece in the fifth grade who could
    help with this?

    When I go to the NASM site (https://www.nasm.us) there is a clear
    Download link.

    In the download link, there are versioned and dated release
    directories.

    In the most recent one, there are Win32 and Win64 subdirectories.

    There is a file

    nasm-2.16.02rc9-installer-x64.exe

    Doh?

They've gone out of their way to support Windows users with an executable installer.

    If you want to know how they built that, they may have documentation elsewhere. There might be instructions in the accompanying .zip or else
you just have to ask in the mailing list.

    Yes I know there is an executable available for Windows. I first used
    Nasm at least 20 years ago, perhaps 25.

    But since it's supposed to be open source, I thought I'd have a go at
    building it. And of course I tried it under Windows first.

    I'm surprised that they went to the trouble of supplying configure
    scripts with CRLF line-endings, given that you can't actually run it
    under Windows.

    In any case, LF line endings on Windows tend not to be a problem. My
    editors generate LF not CRLF.

    Anyway, if I fix those line endings in 'configure', then it says:

    configure: error: cannot run /bin/bash autoconf/helpers/config.sub

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Lawrence D'Oliveiro on Tue Feb 6 00:07:08 2024
    XPost: comp.unix.programmer

    On 2024-02-05, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Mon, 5 Feb 2024 13:02:52 +0100, David Brown wrote:

    It /is/ a consumer platform, yes. And because it has no standard ways
    to build software, and no one (approximately) using it wants to build
    software on it, the norm is to distribute code in binary form for
    Windows. That works out fine for almost all Windows users. That
    includes libraries - even C programmers on Windows don't want to build
    "libjpeg" or whatever, they want a DLL.

    But without integrated package management, how do you keep it all up to
    date? If two separate apps use the same library, do they each end up with their own version, or do they share one version? Does each app have to run its own periodic background updater task to tell you there’s a new version available?

    Windows has solved this problem. Executables find .DLL libraries in
    their own directory.

    You ship a program with the exact libraries it needs which you
    tested with and those are the ones it will use.
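
The application's own directory is already first in the loader's
default search order, but the same effect can be had explicitly. A
minimal Win32 sketch (the DLL name "mylib.dll" is hypothetical):

    #include <windows.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char path[MAX_PATH];

        /* Full path of this executable... */
        DWORD n = GetModuleFileNameA(NULL, path, sizeof path);
        if (n == 0 || n >= sizeof path)
            return 1;

        /* ...with the file name replaced by the DLL beside it. */
        char *slash = strrchr(path, '\\');
        if (slash == NULL ||
            (size_t)(slash - path) + strlen("\\mylib.dll") >= sizeof path)
            return 1;
        strcpy(slash + 1, "mylib.dll");    /* hypothetical DLL name */

        HMODULE lib = LoadLibraryA(path);
        if (lib == NULL) {
            printf("could not load %s\n", path);
            return 1;
        }
        printf("loaded %s\n", path);
        FreeLibrary(lib);
        return 0;
    }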

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kenny McCormack@21:1/5 to malcolm.arthur.mclean@gmail.com on Tue Feb 6 00:18:22 2024
    In article <uprqfk$gcv2$2@dont-email.me>,
    Malcolm McLean <malcolm.arthur.mclean@gmail.com> wrote:
    ...
    I am hoping Chris is lucky enough to be given this honor.


    Ah. So of course Keith couldn't understand what I was saying.

    As you have correctly noted, Keith is clearly of a different psychological
    type than you (and me). So, it is unlikely that useful communication will
    ever happen between the two of you. Thus, it is best if he KF's you.

    --
    Many people in the American South think that DJT is, and will be remembered
    as, one of the best presidents in US history. They are absolutely correct.

    He is currently at number 46 on the list. High praise, indeed!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Malcolm McLean on Tue Feb 6 00:16:02 2024
    XPost: comp.unix.programmer

    On 2024-02-05, Malcolm McLean <malcolm.arthur.mclean@gmail.com> wrote:
    On 05/02/2024 22:51, Lawrence D'Oliveiro wrote:
    On Mon, 5 Feb 2024 13:02:52 +0100, David Brown wrote:

    It /is/ a consumer platform, yes. And because it has no standard ways
    to build software, and no one (approximately) using it wants to build
    software on it, the norm is to distribute code in binary form for
    Windows. That works out fine for almost all Windows users. That
    includes libraries - even C programmers on Windows don't want to build
    "libjpeg" or whatever, they want a DLL.

    But without integrated package management, how do you keep it all up to
    date? If two separate apps use the same library, do they each end up with
their own version, or do they share one version? Does each app have to run its own periodic background updater task to tell you there’s a new version available?

    The term is DLL hell.

DLL hell is a mostly historic term.

DLL hell occurred on Windows when programs installed DLL files in the
System folder - in particular, common DLLs such as the Microsoft Visual
Studio run-time files.

    Today, the situation that most closely fits the term is the situation on
    Linux distributions.

    The Glibc shared library loading mechanism doesn't implement the nice
    strategy of finding libraries in the same directory as the executable.

Libraries must be dumped into a common repository, where you get issues
    when programs need different versions of the same library and those
    library versions are not cleanly distinguished by having a different
    soname.
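
    (For illustration, this is roughly how distinct sonames let two
    incompatible versions coexist - a sketch assuming gcc/binutils, with a
    made-up "libfoo":)

    # each version is built with its own soname recorded in it
    gcc -shared -fPIC -Wl,-soname,libfoo.so.1 -o libfoo.so.1.0 foo_v1.c
    gcc -shared -fPIC -Wl,-soname,libfoo.so.2 -o libfoo.so.2.0 foo_v2.c

    # the loader resolves each program against the soname it was linked
    # with, so both versions can live in the same repository
    ln -s libfoo.so.1.0 libfoo.so.1
    ln -s libfoo.so.2.0 libfoo.so.2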

    If a DLL changes, does that mean that apps which called the old DLL and
    were buggy should call the new DLL and will now be fixed?

    DLLs like kernel32.dll and user32.dll simply don't change, or not in
    such a way.

    Non-system things, you ship with the app, so they don't change unless
    you issue an update.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Kaz Kylheku on Tue Feb 6 10:08:32 2024
    XPost: comp.unix.programmer

    On 06/02/2024 01:07, Kaz Kylheku wrote:
    On 2024-02-05, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Mon, 5 Feb 2024 13:02:52 +0100, David Brown wrote:

    It /is/ a consumer platform, yes. And because it has no standard ways
    to build software, and no one (approximately) using it wants to build
    software on it, the norm is to distribute code in binary form for
    Windows. That works out fine for almost all Windows users. That
    includes libraries - even C programmers on Windows don't want to build
    "libjpeg" or whatever, they want a DLL.

    But without integrated package management, how do you keep it all up to
    date? If two separate apps use the same library, do they each end up
    with their own version, or do they share one version? Does each app
    have to run its own periodic background updater task to tell you
    there’s a new version available?

    Windows has solved this problem. Executables find .DLL libraries in
    their own directory.

    You ship a program with the exact libraries it needs which you
    tested with and those are the ones it will use.


    The two methods - a repository for common libraries, and individual
    copies of the libraries for each program - have their advantages and
    disadvantages.

    If you have copies of the libraries for each program, that is
    inefficient - bigger downloads and installs (which don't bother me, but
    do bother some people), and extra copies in RAM when running (which can
    sometimes slow things down). The main problem, however, is that when a
    serious bug is fixed, you need to wait for every individual program that
    uses the library to be updated and provide a new release, then you have
    to download them all anew.

    If you have a common place for common libraries, updates of that library
    are easy and only need to be done once when there is a fix. But now the
    programs that are using it are not tested with the same version that you
    have on your system, and there can be trouble if the API details change.
    And you have the "DLL hell" possibility, though careful version
    numbering and symbolic links reduce that risk significantly.

    There is no single "best" solution here.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to David Brown on Tue Feb 6 13:59:50 2024
    XPost: comp.unix.programmer

    On Fri, 2 Feb 2024 16:26:12 +0100
    David Brown <david.brown@hesbynett.no> wrote:

    On 02/02/2024 14:45, Michael S wrote:

    Actually, nowadays monolithic tools are a solid majority in
    programming. I mean, programming in general, not C/C++/Fortran
    programming, which by itself is a [sizable] minority.
    Even in C++, a majority uses non-monolithic tools well-hidden behind a
    front end (IDE) that makes them indistinguishable from monolithic ones.


    It can often be helpful to have a single point of interaction - a
    front-end that combines tools. But usually these are made of parts.

    For many of the microcontrollers I work with, the manufacturer's
    standard development toolset is based around Eclipse and gcc. From
    the user point of view, it looks a lot like one monolithic IDE that
    lets you write your code, compile and link it, and download and debug
    it on the microcontroller. Under the hood, it is far from a
    monolithic application. Different bits come from many different
    places. This means the microcontroller manufacturer is only making
    the bits that are specific to /their/ needs - such as special views
    while debugging, or "wizards" for configuring chip pins. The Eclipse
    folk are experts at making an editor and IDE, the gcc folks are
    experts at the compiler, the openocd folks know about jtag debugging,
    and so on. And to a fair extent, advanced users can use the bits
    they want and leave out other bits. I sometimes use other editors,
    but might still use the toolchain provided with the manufacturer's
    tools. I might swap out the debugger connection. I might use the
    IDE for something completely different. I might install additional
    features in the IDE. I might use different toolchains.
    Manufacturers, when putting things together, might change where they
    get their toolchains, or what debugging connectors they use. It's
    even been known for them to swap out the base IDE while keeping most
    of the rest the same (VS Code has become a popular choice now, and a
    few use NetBeans rather than Eclipse).

    (Oh, and for those that don't believe "make" and "gcc" work on
    Windows, these development tools invariably have "make" and almost
    invariably use gcc as their toolchain, all working in almost exactly
    the same way on Linux and Windows. The only difference is builds are
    faster on Linux.)

    This is getting the best (or at least, trying to) from all worlds.
    It gives people the ease-of-use advantages of monolithic tools
    without the key disadvantages of real monolithic tools - half-arsed
    editors, half-arsed project managers, half-arsed compilers, and poor
    extensibility because the suppliers are trying to do far too much
    themselves.

    I don't think it is common now to have /real/ monolithic development
    tools. But it is common to have front-ends aimed at making the
    underlying tools easier and more efficient to use, and to provide
    all-in-one base packages.




    First, you moved the goalposts from monolithic compiler to monolithic
    IDE. Second, you are still talking about C/C++/Fortran.
    That's not where the majority of software development goes these days.

    The most used language is JavaScript. Where exactly does a JavaScript
    dev see a separate compiler and linker?
    The second most used language is Python. The same question here.
    Even in more traditional compiled/jitted and mostly statically typed
    programming environments like Java/Kotlin, .NET, Swift, Go, Rust, even
    if they use separate tools for compiling, assembling, linking and build
    management, it's all integrated in a way that even a die-hard
    command-line user does not know about the separation.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Michael S on Tue Feb 6 13:14:14 2024
    XPost: comp.unix.programmer

    On 06/02/2024 12:59, Michael S wrote:
    On Fri, 2 Feb 2024 16:26:12 +0100
    David Brown <david.brown@hesbynett.no> wrote:

    On 02/02/2024 14:45, Michael S wrote:

    Actually, nowadays monolithic tools are a solid majority in
    programming. I mean, programming in general, not C/C++/Fortran
    programming, which by itself is a [sizable] minority.
    Even in C++, a majority uses non-monolithic tools well-hidden behind a
    front end (IDE) that makes them indistinguishable from monolithic ones.


    It can often be helpful to have a single point of interaction - a
    front-end that combines tools. But usually these are made of parts.

    For many of the microcontrollers I work with, the manufacturer's
    standard development toolset is based around Eclipse and gcc. From
    the user point of view, it looks a lot like one monolithic IDE that
    lets you write your code, compile and link it, and download and debug
    it on the microcontroller. Under the hood, it is far from a
    monolithic application. Different bits come from many different
    places. This means the microcontroller manufacturer is only making
    the bits that are specific to /their/ needs - such as special views
    while debugging, or "wizards" for configuring chip pins. The Eclipse
    folk are experts at making an editor and IDE, the gcc folks are
    experts at the compiler, the openocd folks know about jtag debugging,
    and so on. And to a fair extent, advanced users can use the bits
    they want and leave out other bits. I sometimes use other editors,
    but might still use the toolchain provided with the manufacturer's
    tools. I might swap out the debugger connection. I might use the
    IDE for something completely different. I might install additional
    features in the IDE. I might use different toolchains.
    Manufacturers, when putting things together, might change where they
    get their toolchains, or what debugging connectors they use. It's
    even been known for them to swap out the base IDE while keeping most
    of the rest the same (VS Code has become a popular choice now, and a
    few use NetBeans rather than Eclipse).

    (Oh, and for those that don't believe "make" and "gcc" work on
    Windows, these development tools invariably have "make" and almost
    invariably use gcc as their toolchain, all working in almost exactly
    the same way on Linux and Windows. The only difference is builds are
    faster on Linux.)

    This is getting the best (or at least, trying to) from all worlds.
    It gives people the ease-of-use advantages of monolithic tools
    without the key disadvantages of real monolithic tools - half-arsed
    editors, half-arsed project managers, half-arsed compilers, and poor
    extensibility because the suppliers are trying to do far too much
    themselves.

    I don't think it is common now to have /real/ monolithic development
    tools. But it is common to have front-ends aimed at making the
    underlying tools easier and more efficient to use, and to provide
    all-in-one base packages.




    First, you moved the goalposts from monolithic compiler to monolithic
    IDE. Second, you are still talking about C/C++/Fortran.

    I was thinking of compiled languages, yes.

    That's not where the majority of software development goes these days.

    Agreed.


    The most used language is JavaScript. Where exactly does a JavaScript
    dev see a separate compiler and linker?
    The second most used language is Python. The same question here.

    Interpreted, byte-code compiled or JIT languages have a different model entirely. But again, you have a front-end that appears monolithic,
    hiding back-ends that are very far from monolithic. The language
    front-end can come from one place, libraries from somewhere else, the VM
    may be totally independent, and the JIT could be separate again. And if
    you are using an IDE for it, as many do (regardless of the language),
    you've got all the editors, revision control system, gui designer, HTML previewer, and whatever else from another dozen independent sources and
    all acting as one.

    It is not monolithic by any means - but it /looks/ that way for user convenience.

    Even in more traditional compiled/jitted and mostly statically typed
    programming environments like Java/Kotlin, .NET, Swift, Go, Rust, even
    if they use separate tools for compiling, assembling, linking and build
    management, it's all integrated in a way that even a die-hard
    command-line user does not know about the separation.


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to David Brown on Tue Feb 6 14:32:29 2024
    XPost: comp.unix.programmer

    On Tue, 6 Feb 2024 13:14:14 +0100
    David Brown <david.brown@hesbynett.no> wrote:

    It is not monolithic by any means - but it /looks/ that way for user convenience.


    And Bart wants the same for a slightly extended variant of C, that's
    all. According to my understanding, he does not care deeply about the
    distinction between "true monolithic" and integrated compiler + linker
    + build system as long as it looks monolithic.
    Or maybe I should say that he will certainly express his unhappiness
    about the size and speed of the looks-monolithic tool, and about the
    fact that they have to be installed (if they have to be installed) at
    least 20 times per week, but at least he will be reasonably satisfied
    with the functionality.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Kaz Kylheku on Tue Feb 6 14:32:13 2024
    XPost: comp.unix.programmer

    Kaz Kylheku <433-929-6894@kylheku.com> writes:
    On 2024-02-05, Malcolm McLean <malcolm.arthur.mclean@gmail.com> wrote:


    The Glibc shared library loading mechanism doesn't implement the nice
    strategy of finding libraries in the same directory as the executable.

    Sure it does, if you tell it to. viz. LD_LIBRARY_PATH.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Scott Lurndal on Tue Feb 6 14:40:58 2024
    XPost: comp.unix.programmer

    scott@slp53.sl.home (Scott Lurndal) writes:
    Kaz Kylheku <433-929-6894@kylheku.com> writes:
    On 2024-02-05, Malcolm McLean <malcolm.arthur.mclean@gmail.com> wrote:


    The Glibc shared library loading mechanism doesn't implement the nice
    strategy of finding libraries in the same directory as the executable.

    Sure it does, if you tell it to. viz. LD_LIBRARY_PATH.


    The tool we build consists of 157 shared objects. The libraries
    are stored in a tool-specific library directory; the main application
    consists of a shared object and a very small executable containing
    'main' (or a python 'shim' built using swig(1)).

    The remaining shared objects are dynamically loaded if and as
    necessary. There is no possibility of a library clash with either
    other applications or different versions of the same application, yet
    all active instances of the same version of the tool executing
    on a given host will share the memory-resident library text pages.
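
    (The loading pattern is just standard POSIX dlopen; a minimal sketch,
    with an invented plugin path and symbol name - link with -ldl on
    glibc:)

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        /* Load one optional component on demand. */
        void *h = dlopen("/opt/tool/lib/libfeature.so", RTLD_NOW);
        if (!h) {
            fprintf(stderr, "%s\n", dlerror());
            return 1;
        }

        /* Look up an entry point and call it, if present. */
        void (*init)(void) = (void (*)(void))dlsym(h, "feature_init");
        if (init)
            init();

        dlclose(h);
        return 0;
    }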

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Michael S on Tue Feb 6 14:16:59 2024
    XPost: comp.unix.programmer

    On 06/02/2024 12:32, Michael S wrote:
    On Tue, 6 Feb 2024 13:14:14 +0100
    David Brown <david.brown@hesbynett.no> wrote:

    It is not monolithic by any means - but it /looks/ that way for user
    convenience.


    And Bart wants the same for a slightly extended variant of C, that's
    all. According to my understanding, he does not care deeply about the
    distinction between "true monolithic" and integrated compiler + linker
    + build system as long as it looks monolithic.
    Or maybe I should say that he will certainly express his unhappiness
    about the size and speed of the looks-monolithic tool, and about the
    fact that they have to be installed (if they have to be installed) at
    least 20 times per week, but at least he will be reasonably satisfied
    with the functionality.



    The packaging of language installations is certainly one aspect that I'm interested in. And my preference is to have as few files involved as
    possible.

    There is a trend now for newer languages to come as one giant
    executable, although in practice they need a few more bits.

    My own language projects do each come in one self-contained executable,
    and are from 0.1MB to 0.6MB.

    My original C compiler was also a single file, about 1MB.

    My current one, because it is now private, is in 2-3 parts (mcc.exe,
    aa.exe, about 360KB, and a discrete windows.h which was what took up
    most of that 1MB). It behaves as though it is monolithic.

    Regarding Tiny C, as it seems to be distributed, it requires a minimum
    of 3 binaries (tcc.exe, libtcc.dll, libtcc1-64.a, about 220KB) in order
    to build any of my generated-C applications. But for general use, it
    needs a bunch of C header files too.

    There are also products like Pico C, an interpreter, about 130KB
    self-contained in one file, although it has limitations and is very slow
    even for an interpreter. It could be adequate though for scripting builds.

    I know David Brown doesn't like 'toy' implementations of C, but if you
    need to bundle something for example, then the smaller and more
    self-contained the better.

    (FWIW, if I apply UPX compression to those examples, then Pico C reduces
    to 32KB; my mcc/aa combo to 123KB, and tcc exe/lib/a combo to 141KB. But
    the latter still needs discrete std header files.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to bart on Tue Feb 6 17:02:12 2024
    XPost: comp.unix.programmer

    On 06/02/2024 15:16, bart wrote:
    There are also products like Pico C, an interpreter, about 130KB self-contained in one file, although it has limitations and is very slow
    even for an interpreter. It could be adequate though for scripting builds.

    I know David Brown doesn't like 'toy' implementations of C, but if you
    need to bundle something for example, then the smaller and more self-contained the better.


    Just to be clear - you can have as many "toy" implementations of C as
    you like. And sometimes small, fast, limited tools are useful - such as
    if you want to have a C "interpreter".

    (I can't see the point myself - there are better languages for
    scripting, and they are not difficult to learn to the level you need for scripting. Even your own scripting languages are a better choice than interpreted C for pretty much any use.)

    What I don't agree with is the idea that such small C implementations
    are a viable replacement for, or even better than, serious tools like gcc,
    clang, icc, MSVC, Green Hills, Metrowerks, and other major compilers.

    I am quite happy to accept that "small and fast" is a good thing - I
    just don't give it anything like the weighting you do, at least for
    normal compiler use.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Scott Lurndal on Tue Feb 6 16:59:51 2024
    XPost: comp.unix.programmer

    On 2024-02-06, Scott Lurndal <scott@slp53.sl.home> wrote:
    Kaz Kylheku <433-929-6894@kylheku.com> writes:
    On 2024-02-05, Malcolm McLean <malcolm.arthur.mclean@gmail.com> wrote:


    The Glibc shared library loading mechanism doesn't implement the nice
    strategy of finding libraries in the same directory as the executable.

    Sure it does, if you tell it to. viz. LD_LIBRARY_PATH.

    Ah, that has this $ORIGIN mechanism now.

    Even if the distro doesn't have that in its LD_LIBRARY_PATH,
    you can put that into your executable's rpath.
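
    (For example - a sketch assuming gcc and GNU binutils; "app" and
    "libfoo" are made-up names, and the single quotes keep $ORIGIN away
    from the shell:)

    $ gcc -o app app.c -L. -lfoo -Wl,-rpath,'$ORIGIN'
    $ readelf -d app | grep -Ei 'rpath|runpath'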

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Kaz Kylheku on Tue Feb 6 19:20:06 2024
    XPost: comp.unix.programmer

    Kaz Kylheku <433-929-6894@kylheku.com> writes:
    On 2024-02-06, Scott Lurndal <scott@slp53.sl.home> wrote:
    Kaz Kylheku <433-929-6894@kylheku.com> writes:
    On 2024-02-05, Malcolm McLean <malcolm.arthur.mclean@gmail.com> wrote:


    The Glibc shared library loading mechanism doesn't implement the nice
    strategy of finding libraries in the same directory as the executable.

    Sure it does, if you tell it to. viz. LD_LIBRARY_PATH.

    Ah, that has this $ORIGIN mechanism now.

    Even if the distro doesn't have that in its LD_LIBRARY_PATH,
    you can put that into your executable's rpath.

    LD_LIBRARY_PATH isn't a distro thing, it's a shell thing
    interpreted by the dynamic linker. The dynamic linker has
    a set of default paths that it uses, set by the distro,
    which can be overridden in LD_LIBRARY_PATH by each user.
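
    (E.g., with a hypothetical install under /opt/myapp:)

    $ LD_LIBRARY_PATH=/opt/myapp/lib ./myapp
    $ LD_LIBRARY_PATH=/opt/myapp/lib ldd ./myapp   # see what resolves where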

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Scott Lurndal on Tue Feb 6 20:32:49 2024
    XPost: comp.unix.programmer

    On Tue, 06 Feb 2024 19:20:06 GMT, Scott Lurndal wrote:

    LD_LIBRARY_PATH isn't a distro thing, its a shell thing
    interpreted by the dynamic linker. The dynamic linker has
    a set of default paths that it uses, set by the distro,
    which can be overridden in LD_LIBRARY_PATH by each user.

    It’s a GNU thing, I think.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to bart on Tue Feb 6 20:31:23 2024
    XPost: comp.unix.programmer

    On Tue, 6 Feb 2024 14:16:59 +0000, bart wrote:

    There is a trend now for newer languages to come as one giant
    executable ...

    I don’t know where you get that idea:

    ldo@theon:~> dpkg-query -L python3.11
    /.
    /usr
    /usr/bin
    /usr/bin/pydoc3.11
    /usr/bin/pygettext3.11
    /usr/lib
    /usr/lib/python3
    /usr/lib/python3/dist-packages
    /usr/lib/python3.11
    /usr/lib/python3.11/lib-dynload
    /usr/share
    /usr/share/applications
    /usr/share/applications/python3.11.desktop
    /usr/share/doc
    /usr/share/doc/python3.11
    /usr/share/doc/python3.11/ACKS.gz
    /usr/share/doc/python3.11/NEWS.gz
    /usr/share/doc/python3.11/README.Debian
    /usr/share/doc/python3.11/README.rst.gz
    /usr/share/doc/python3.11/README.venv
    /usr/share/doc/python3.11/changelog.Debian.gz
    /usr/share/doc/python3.11/copyright
    /usr/share/lintian
    /usr/share/lintian/overrides
    /usr/share/lintian/overrides/python3.11
    /usr/share/man
    /usr/share/man/man1
    /usr/share/man/man1/pdb3.11.1.gz
    /usr/share/man/man1/pydoc3.11.1.gz
    /usr/share/man/man1/pygettext3.11.1.gz
    /usr/share/man/man1/pysetup3.11.1.gz
    /usr/share/pixmaps
    /usr/share/pixmaps/python3.11.xpm
    /usr/bin/pdb3.11
    /usr/share/doc/python3.11/changelog.gz

    Of course, that isn’t all of it:

    ldo@theon:~> apt-cache depends python3.11
    python3.11
    Depends: python3.11-minimal
    Depends: libpython3.11-stdlib
    |Depends: media-types
    Depends: mime-support
    Breaks: python3-all
    Breaks: python3-dev
    Breaks: python3-venv
    Recommends: ca-certificates
    Suggests: python3.11-venv
    Suggests: python3.11-doc
    Suggests: binutils

    The actual “python3” executable is in here:

    ldo@theon:~> dpkg-query -L python3.11-minimal
    /.
    /usr
    /usr/bin
    /usr/bin/python3.11
    /usr/lib
    /usr/lib/binfmt.d
    /usr/lib/binfmt.d/python3.11.conf
    /usr/share
    /usr/share/binfmts
    /usr/share/binfmts/python3.11
    /usr/share/doc
    /usr/share/doc/python3.11-minimal
    /usr/share/doc/python3.11-minimal/README.Debian.gz
    /usr/share/doc/python3.11-minimal/changelog.Debian.gz
    /usr/share/doc/python3.11-minimal/copyright
    /usr/share/lintian
    /usr/share/lintian/overrides
    /usr/share/lintian/overrides/python3.11-minimal
    /usr/share/man
    /usr/share/man/man1
    /usr/share/man/man1/python3.11.1.gz

    Is this your idea of a “giant” executable?

    ldo@theon:~> ls -lL /usr/bin/python3
    -rwxr-xr-x 1 root root 6784920 Dec 9 03:22 /usr/bin/python3

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Lawrence D'Oliveiro on Tue Feb 6 20:34:31 2024
    XPost: comp.unix.programmer

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Tue, 06 Feb 2024 19:20:06 GMT, Scott Lurndal wrote:

    LD_LIBRARY_PATH isn't a distro thing, its a shell thing
    interpreted by the dynamic linker. The dynamic linker has
    a set of default paths that it uses, set by the distro,
    which can be overridden in LD_LIBRARY_PATH by each user.

    It’s a GNU thing, I think.

    LD_LIBRARY_PATH was originally a "sunos" thing, and was adopted
    by SVR4 when they added support for shared objects.

    Predates GNU.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lew Pitcher@21:1/5 to Lawrence D'Oliveiro on Tue Feb 6 20:49:16 2024
    XPost: comp.unix.programmer

    On Tue, 06 Feb 2024 20:32:49 +0000, Lawrence D'Oliveiro wrote:

    On Tue, 06 Feb 2024 19:20:06 GMT, Scott Lurndal wrote:

    LD_LIBRARY_PATH isn't a distro thing, its a shell thing
    interpreted by the dynamic linker. The dynamic linker has
    a set of default paths that it uses, set by the distro,
    which can be overridden in LD_LIBRARY_PATH by each user.

    It’s a GNU thing, I think.

    It's a UNIX thing. GNU supports it, as it supports other
    UNIX requirements.


    --
    Lew Pitcher
    "In Skills We Trust"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Lew Pitcher on Tue Feb 6 21:09:14 2024
    XPost: comp.unix.programmer

    On 2024-02-06, Lew Pitcher <lew.pitcher@digitalfreehold.ca> wrote:
    On Tue, 06 Feb 2024 20:32:49 +0000, Lawrence D'Oliveiro wrote:

    On Tue, 06 Feb 2024 19:20:06 GMT, Scott Lurndal wrote:

    LD_LIBRARY_PATH isn't a distro thing, its a shell thing
    interpreted by the dynamic linker. The dynamic linker has
    a set of default paths that it uses, set by the distro,
    which can be overridden in LD_LIBRARY_PATH by each user.

    It’s a GNU thing, I think.

    It's a UNIX thing. GNU supports it, as it supports other
    UNIX requirements.

    I can't find any mention of LD_LIBRARY_PATH in SuS.
    Not under dlopen or anywhere else.

    I'm looking at (pretty old) Solaris documentation. It has the $ORIGIN
    variable supported in both LD_LIBRARY_PATH and the internal path you can
    set in executables.

    I also found a 1998-08 commit from Ulrich Drepper adding the expansion
    support with ORIGIN.

    I think the documentation of it may have lagged behind, that's all,
    but we have had it "forever".

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Keith Thompson on Tue Feb 6 21:39:24 2024
    XPost: comp.unix.programmer

    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
    Lew Pitcher <lew.pitcher@digitalfreehold.ca> writes:
    On Tue, 06 Feb 2024 20:32:49 +0000, Lawrence D'Oliveiro wrote:
    On Tue, 06 Feb 2024 19:20:06 GMT, Scott Lurndal wrote:
    LD_LIBRARY_PATH isn't a distro thing, its a shell thing
    interpreted by the dynamic linker. The dynamic linker has
    a set of default paths that it uses, set by the distro,
    which can be overridden in LD_LIBRARY_PATH by each user.

    It’s a GNU thing, I think.

    It's a UNIX thing. GNU supports it, as it supports other
    UNIX requirements.

    Where is it documented as a UNIX requirement? POSIX doesn't seem to
    mention it.

    I suspect Lew meant lower-case unix, from whence it originated
    (sunos -> svr4 -> linux), rather than the Single Unix Specification
    from which the trademark derives and which has been folded in with
    POSIX 1003.

    The SuS doesn't discuss the details of run-time linking outside
    of specifying dlopen/dlsym/dlclose/dlerror:

    "The class of executable object files eligible for this operation
    and the manner of their construction are implementation-defined,
    though typically such files are shared libraries or programs."

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Kaz Kylheku on Tue Feb 6 21:43:18 2024
    XPost: comp.unix.programmer

    Kaz Kylheku <433-929-6894@kylheku.com> writes:
    On 2024-02-06, Lew Pitcher <lew.pitcher@digitalfreehold.ca> wrote:
    On Tue, 06 Feb 2024 20:32:49 +0000, Lawrence D'Oliveiro wrote:

    On Tue, 06 Feb 2024 19:20:06 GMT, Scott Lurndal wrote:

    LD_LIBRARY_PATH isn't a distro thing, its a shell thing
    interpreted by the dynamic linker. The dynamic linker has
    a set of default paths that it uses, set by the distro,
    which can be overridden in LD_LIBRARY_PATH by each user.

    It’s a GNU thing, I think.

    It's a UNIX thing. GNU supports it, as it supports other
    UNIX requirements.

    I can't find any mention of LD_LIBRARY_PATH in SuS.
    Not under dlopen or anywhere else.

    I'm looking at (pretty old) Solaris documentation. It has the $ORIGIN
    variable supported in both LD_LIBRARY_PATH and the internal path you can
    set in executables.

    I also found a 1998-08 commit from Ulrich Drepper adding the expansion
    support with ORIGIN.

    I think the documentation of it may have lagged behind, that's all,
    but we have had it "forever".

    At least since circa 1989, when SunOS added it. SVR4 was a merge between
    the follow-on to SVR3 and SunOS (which became Solaris), thus SVR4
    inherited LD_LIBRARY_PATH from SunOS along with the Sun dynamic linking
    capability (SVR3 had static shared libraries - very painful to use, as
    each library had to be linked at a fixed VA, unique amongst all other
    shared libraries). Given a 2GB user VA space, choosing an address for a
    new library became very difficult.

    Whether linux got it from Solaris or SVR4 probably doesn't much matter.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Lawrence D'Oliveiro on Wed Feb 7 00:51:02 2024
    XPost: comp.unix.programmer

    On 06.02.2024 21:32, Lawrence D'Oliveiro wrote:
    On Tue, 06 Feb 2024 19:20:06 GMT, Scott Lurndal wrote:

    LD_LIBRARY_PATH isn't a distro thing, its a shell thing
    interpreted by the dynamic linker. The dynamic linker has
    a set of default paths that it uses, set by the distro,
    which can be overridden in LD_LIBRARY_PATH by each user.

    It’s a GNU thing, I think.

    I think we've used it on AIX and HP-UX already.

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Janis Papanagnou on Wed Feb 7 02:18:08 2024
    XPost: comp.unix.programmer

    On 2024-02-06, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
    On 06.02.2024 21:32, Lawrence D'Oliveiro wrote:
    On Tue, 06 Feb 2024 19:20:06 GMT, Scott Lurndal wrote:

    LD_LIBRARY_PATH isn't a distro thing, its a shell thing
    interpreted by the dynamic linker. The dynamic linker has
    a set of default paths that it uses, set by the distro,
    which can be overridden in LD_LIBRARY_PATH by each user.

    It’s a GNU thing, I think.

    I think we've used it on AIX and HP-UX already.

    Some IBM documentation I was able to dig up on the web says that AIX 5.3
    [2004] introduced LD_LIBRARY_PATH; before that it was LIBPATH in AIX
    5.1, which continues to work. Nothing about the $ORIGIN expansion.

    The GCC Compile Farm Project has an AIX machine. I'm logging in there
    now. Looks like the "load" and "dlopen" man pages reference
    LD_LIBRARY_PATH. None of them mention any interpolation of parameters
    being supported. It probably doesn't exist.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From vallor@21:1/5 to All on Wed Feb 7 02:57:39 2024
    XPost: comp.unix.programmer

    On Sun, 04 Feb 2024 20:55:12 GMT, scott@slp53.sl.home (Scott Lurndal)
    wrote in <QQSvN.294647$Wp_8.94897@fx17.iad>:

    bart <bc@freeuk.com> writes:
    On 04/02/2024 17:48, David Brown wrote:
    On 03/02/2024 20:35, bart wrote:

    It is Windows that places more store by file extensions, which Linux
    people say is a bad thing.


    Windows is too dependent on them, and too trusting.

    But above you say that is the advantage of Linux.

    Yes, it's a hands-down win for Linux (and other *nix) in this aspect.

    Yet it is Linux (manifested via gcc) where it ASSUMES .x is a linker

    I've never seen a '.x ' suffix. Ever. And I use linker scripts
    regularly.

    This was the first I'd heard about them in this context, but Open
    Network Computing's RPC (ONCRPC, was SunRPC) does use .x files
    for its RPC specifications.

    ONCRPC is a system for generating C stubs for network
    services, and it is (was?) also used to specify
    UNIX services like NFS and NIS. The Sun of yore
    were, indeed, good denizens of the Net. (So, crossposting
    conditions satisfied...I think?)

    Anyway, if you have the "standard" .x files
    installed on Linux Mint, they live in

    /usr/include/rpcsvc/
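
    (A .x file is just an interface description fed to rpcgen; a toy
    example, with an invented program number:)

    program PING_PROG {
        version PING_VERS {
            int PING(void) = 1;    /* procedure number */
        } = 1;                     /* version number */
    } = 0x20000001;                /* program number */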

    Also, there are linker scripts that end in ".x"
    which on my system live here:

    /usr/lib/x86_64-linux-gnu/ldscripts/

    Fascinating to read -- and way over my head. (The
    man page for GNU ld says they are
    "AT&T's Link Editor Command Language syntax".) I'm
    not sure how often an average programmer would look
    around in there.

    In any event, the ".x" files in that directory are in
    the minority...

    --
    -v
    (cue music for "The X Files")
    $ locate -r "\.x$"

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Kaz Kylheku on Wed Feb 7 04:21:42 2024
    XPost: comp.unix.programmer

    On 07.02.2024 03:18, Kaz Kylheku wrote:
    On 2024-02-06, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
    On 06.02.2024 21:32, Lawrence D'Oliveiro wrote:
    On Tue, 06 Feb 2024 19:20:06 GMT, Scott Lurndal wrote:

    LD_LIBRARY_PATH isn't a distro thing, its a shell thing
    interpreted by the dynamic linker. The dynamic linker has
    a set of default paths that it uses, set by the distro,
    which can be overridden in LD_LIBRARY_PATH by each user.

    It’s a GNU thing, I think.

    I think we've used it on AIX and HP-UX already.

    Some IBM documentation I was able to dig up on the web says that AIX 5.3 [2004] introduced LD_LIBRARY_PATH; before that it was LIBPATH in AIX
    5.1, which continues to work. Nothing about the $SOURCE expansion.

    My contact with AIX had been earlier, since the early 1990's, starting
    with 3.5/3.6 (IIRC) to 4.1/4.3. A quick search did not turn up much, but
    a later document (though earlier than yours) from 2001 explains:

    "The LIBPATH environment variable is a colon-separated list of
    directory paths, with the same syntax as the PATH environment
    variable and indicates the search path for libraries. It has
    the same function as the LD_LIBRARY_PATH environment variable
    on SystemV." [ AIX Linking and Loading Mechanisms ]


    Janis

    [...]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to vallor on Wed Feb 7 03:18:45 2024
    XPost: comp.unix.programmer

    On Wed, 7 Feb 2024 02:57:39 -0000 (UTC), vallor wrote:

    Also, there are linker scripts that end in ".x"
    which on my system live here:

    /usr/lib/x86_64-linux-gnu/ldscripts/

    Fascinating to read -- and way over my head. (The man page for GNU ld
    says they are "AT&T's Link Editor Command Language syntax".) I'm not
    sure how often an average programmer would look around in there.

    Documentation on the script language here <https://sourceware.org/binutils/docs/ld/Scripts.html>.

    An obvious example of the need for a custom linker script would be
    building the Linux kernel, where you need a special format for the
    resulting binary that can be loaded by a bootloader.

    I had a look through the Linux sources, and there is (no big surprise) a
    different version of this script for each architecture, which is
    supposed to have the name arch/«architecture»/kernel/vmlinux.lds. I
    think this is generated from the corresponding vmlinux.lds.S file in the
    source tree.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From candycanearter07@21:1/5 to Kaz Kylheku on Tue Feb 6 23:41:56 2024
    XPost: comp.unix.programmer

    On 2/5/24 12:13, Kaz Kylheku wrote:
    On 2024-02-05, candycanearter07 <no@thanks.net> wrote:
    But the tools are *still preinstalled*, so installers can definitely
    rely on compiling stuff.

    No, they aren't. It's common for devel tools not to be part of the base system of a distro.

    Wait really?
    --
    user <candycane> is generated from /dev/urandom

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Harnden@21:1/5 to Kaz Kylheku on Wed Feb 7 07:17:29 2024
    XPost: comp.unix.programmer

    On 07/02/2024 02:18, Kaz Kylheku wrote:
    On 2024-02-06, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
    On 06.02.2024 21:32, Lawrence D'Oliveiro wrote:
    On Tue, 06 Feb 2024 19:20:06 GMT, Scott Lurndal wrote:

    LD_LIBRARY_PATH isn't a distro thing, its a shell thing
    interpreted by the dynamic linker. The dynamic linker has
    a set of default paths that it uses, set by the distro,
    which can be overridden in LD_LIBRARY_PATH by each user.

    It’s a GNU thing, I think.

    I think we've used it on AIX and HP-UX already.

    Some IBM documentation I was able to dig up on the web says that AIX 5.3 [2004] introduced LD_LIBRARY_PATH; before that it was LIBPATH in AIX
    5.1, which continues to work. Nothing about the $SOURCE expansion.

    The GCC Compile Farm Project has an AIX machine. I'm logging in there
    now. Looks like the "load" and "dlopen" man pages reference
    LD_LIBRARY_PATH. None of them mention any interpolation of parameters
    being supported. It probably doesn't exist.


    Wasn't it SHLIB_PATH on HP/UX?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to vallor on Wed Feb 7 09:42:52 2024
    XPost: comp.unix.programmer

    On 07/02/2024 03:57, vallor wrote:
    On Sun, 04 Feb 2024 20:55:12 GMT, scott@slp53.sl.home (Scott Lurndal)
    wrote in <QQSvN.294647$Wp_8.94897@fx17.iad>:

    bart <bc@freeuk.com> writes:
    On 04/02/2024 17:48, David Brown wrote:
    On 03/02/2024 20:35, bart wrote:

    It is Windows that places more store by file extensions, which Linux
    people say is a bad thing.


    Windows is too dependent on them, and too trusting.

    But above you say that is the advantage of Linux.

    Yes, it's a hands-down win for Linux (and other *nix) in this aspect.

    Yet it is Linux (manifested via gcc) where it ASSUMES .x is a linker

    I've never seen a '.x ' suffix. Ever. And I use linker scripts
    regularly.

    This was the first I'd heard about them in this context, but Open
    Network Computing's RPC (ONCRPC, was SunRPC) does use .x files
    for its RPC specifications.

    ONCRPC is a system for generating C stubs for network
    services, and it is (was?) also used to specify
    UNIX services like NFS and NIS. The Sun of yore
    were, indeed, good denizens of the Net. (So, crossposting
    conditions satisfied...I think?)

    Anyway, if you have the "standard" .x files
    installed on Linux Mint, they live in

    /usr/include/rpcsvc/

    Also, there are linker scripts that end in ".x"
    which on my system live here:

    /usr/lib/x86_64-linux-gnu/ldscripts/

    Fascinating to read -- and way over my head. (The
    man page for GNU ld says they are
    "AT&T's Link Editor Command Language syntax".) I'm
    not sure how often an average programmer would look
    around in there.

    In any event, the ".x" files in that directory are in
    the minority...


    If you look in that directory, you'll see all the files are ".x<flags>",
    where <flags> are letters. So you get ".x", ".xbn", ".xc", ".xce", and
    a dozen other combinations. I don't know the details of the flags, but
    they generally refer to different arrangements of code and data (for
    example, merging read-only data and executable code, or keeping them
    separate).

    There's no doubt that ".x", and ".x<flags>", are common extensions for
    linker files, but they do not act as file extensions in the same way
    as for other source code.  Instead, the suffix letters are sets of flags.
    (That's why gcc treats any unknown extension as a linker file.)

    (Note to Bart - I am not saying I think this is a good idea - I am
    saying how it is.)

    I think most people writing their own linker scripts use different file extensions - I use ".ld" myself.
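
    (An explicit script is normally passed through the driver, e.g.

    gcc -o prog main.c -T custom.ld

    and a bare "custom.ld" argument would also reach ld and be read as a
    script, per the unknown-extension rule above.)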

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to David Brown on Wed Feb 7 10:40:40 2024
    XPost: comp.unix.programmer

    On 07/02/2024 08:42, David Brown wrote:
    On 07/02/2024 03:57, vallor wrote:
    On Sun, 04 Feb 2024 20:55:12 GMT, scott@slp53.sl.home (Scott Lurndal)
    wrote in <QQSvN.294647$Wp_8.94897@fx17.iad>:

    bart <bc@freeuk.com> writes:
    On 04/02/2024 17:48, David Brown wrote:
    On 03/02/2024 20:35, bart wrote:

    It is Windows that places more store by file extensions, which Linux
    people say is a bad thing.


    Windows is too dependent on them, and too trusting.

    But above you say that is the advantage of Linux.

    Yes, it's a hands-down win for Linux (and other *nix) in this aspect.
    Yet it is Linux (manifested via gcc) where it ASSUMES .x is a linker

    I've never seen a '.x ' suffix.  Ever.  And I use linker scripts
    regularly.

    This was the first I'd heard about them in this context, but Open
    Network Computing's RPC (ONCRPC, was SunRPC) does use .x files
    for its RPC specifications.

    ONCRPC is a system for generating C stubs for network
    services, and it is (was?) also used to specify
    UNIX services like NFS and NIS.  The Sun of yore
    were, indeed, good denizens of the Net.  (So, crossposting
    conditions satisfied...I think?)

    Anyway, if you have the "standard" .x files
    installed on Linux Mint, they live in

    /usr/include/rpcsvc/

    Also, there are linker scripts that end in ".x"
    which on my system live here:

    /usr/lib/x86_64-linux-gnu/ldscripts/

    Fascinating to read -- and way over my head.  (The
    man page for GNU ld says they are
    "AT&T's Link Editor Command Language syntax".)  I'm
    not sure how often an average programmer would look
    around in there.

    In any event, the ".x" files in that directory are in
    the minority...


    If you look in that directory, you'll see all the files are ".x<flags>", where <flags> are letters.  So you get ".x", ".xbn", ".xc", ".xce", and
    a dozen other combinations.  I don't know the details of the flags, but
    they generally refer to different arrangements of code and data (for
    example, merging read-only data and executable code, or keeping them separate).

    There's no doubt that ".x", and ".x<flags>", are common extensions for
    linker files, but they do not act as file extensions in the same way
    as for other source code.  Instead, the suffix letters are sets of flags.
    (That's why gcc treats any unknown extension as a linker file.)

    A bit like my tools treat an unknown extension as a file of whatever
    language the tool primarily works with?

    Cool. But is gcc primarily used for linker files? I'm not even sure what
    a linker file is!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to Malcolm McLean on Wed Feb 7 11:10:09 2024
    XPost: comp.unix.programmer

    Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:

    On 07/02/2024 05:41, candycanearter07 wrote:
    On 2/5/24 12:13, Kaz Kylheku wrote:
    On 2024-02-05, candycanearter07 <no@thanks.net> wrote:
    But the tools are *still preinstalled*, so installers can definitely
    rely on compiling stuff.

    No, they aren't. It's common for devel tools not to be part of the base
    system of a distro.
    Wait really?

    If you install Windows you don't get Visual Studio and you have to install
    it separately. If you install Linux you get gcc and other development
    tools, and I don't think there's even a way of setting up the install to
    say you don't want them.

    Why do you say these things without checking? It's not uncommon to have
    Linux installs without gcc.

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Purgert@21:1/5 to Ben Bacarisse on Wed Feb 7 11:13:24 2024
    On 2024-02-07, Ben Bacarisse wrote:
    Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:

    On 07/02/2024 05:41, candycanearter07 wrote:
    On 2/5/24 12:13, Kaz Kylheku wrote:
    On 2024-02-05, candycanearter07 <no@thanks.net> wrote:
    But the tools are *still preinstalled*, so installers can definitely
    rely on compiling stuff.

    No, they aren't. It's common for devel tools not to be part of the base
    system of a distro.
    Wait really?

    If you install Windows you don't get Visual Studio and you have to
    install it separately. If you install Linux you get gcc and other
    development
    tools, and I don't think there's even a way of setting up the install to
    say you don't want them.

    Why do you say these things without checking? It's not uncommon to have
    Linux installs without gcc.

    In fact, I haven't had Debian install build-essential (etc.) by default
    in at least the past decade. It *might* be offered during the
    installation phase, but the last time I reinstalled, I used their
    netinst image (so certainly stripped down).


    --
    |_|O|_|
    |_|_|O| Github: https://github.com/dpurgert
    |O|O|O| PGP: DDAB 23FB 19FA 7D85 1CC1 E067 6D65 70E5 4CE7 2860

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Richard Harnden on Wed Feb 7 12:59:13 2024
    XPost: comp.unix.programmer

    On 07.02.2024 08:17, Richard Harnden wrote:
    On 07/02/2024 02:18, Kaz Kylheku wrote:
    On 2024-02-06, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
    On 06.02.2024 21:32, Lawrence D'Oliveiro wrote:
    On Tue, 06 Feb 2024 19:20:06 GMT, Scott Lurndal wrote:

    LD_LIBRARY_PATH isn't a distro thing, its a shell thing
    interpreted by the dynamic linker. The dynamic linker has
    a set of default paths that it uses, set by the distro,
    which can be overridden in LD_LIBRARY_PATH by each user.

    It’s a GNU thing, I think.

    I think we've used it on AIX and HP-UX already.

    Some IBM documentation I was able to dig up on the web says that AIX 5.3
    [2004] introduced LD_LIBRARY_PATH; before that it was LIBPATH in AIX
    5.1, which continues to work. Nothing about the $SOURCE expansion.

    The GCC Compile Farm Project has an AIX machine. I'm logging in there
    now. Looks like the "load" and "dlopen" man pages reference
    LD_LIBRARY_PATH. None of them mention any interpolation of parameters
    being supported. It probably doesn't exist.


    Wasn't it SHLIB_PATH on HP/UX?

    Maybe, I don't recall. My point was not so much the concrete name
    of the environment variable but the availability of the functionality
    connected with the respective variables.

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Gary R. Schmidt@21:1/5 to Malcolm McLean on Wed Feb 7 23:46:42 2024
    XPost: comp.unix.programmer

    On 07/02/2024 20:56, Malcolm McLean wrote:
    On 07/02/2024 05:41, candycanearter07 wrote:
    On 2/5/24 12:13, Kaz Kylheku wrote:
    On 2024-02-05, candycanearter07 <no@thanks.net> wrote:
    But the tools are *still preinstalled*, so installers can definitely
    rely on compiling stuff.

    No, they aren't. It's common for devel tools not to be part of the base
    system of a distro.

    Wait really?

    If you install Windows you don't get Visual Studio and you have to
    install it separately. If you install Linux you get gcc and other
    development tools, and I don't think there's even a way of setting up
    the install to say you don't want them.

    If you install one of the Enterprise-aimed Linuxes, like RHEL or SLES,
    the default is basically a machine with a console and some basic
    services.

    Cheers,
    Gary B-)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Gary R. Schmidt@21:1/5 to Richard Harnden on Wed Feb 7 23:53:22 2024
    XPost: comp.unix.programmer

    On 07/02/2024 18:17, Richard Harnden wrote:
    On 07/02/2024 02:18, Kaz Kylheku wrote:
    On 2024-02-06, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
    On 06.02.2024 21:32, Lawrence D'Oliveiro wrote:
    On Tue, 06 Feb 2024 19:20:06 GMT, Scott Lurndal wrote:

    LD_LIBRARY_PATH isn't a distro thing, its a shell thing
    interpreted by the dynamic linker.   The dynamic linker has
    a set of default paths that it uses, set by the distro,
    which can be overridden in LD_LIBRARY_PATH by each user.

    It’s a GNU thing, I think.

    I think we've used it on AIX and HP-UX already.

    Some IBM documentation I was able to dig up on the web says that AIX 5.3
    [2004] introduced LD_LIBRARY_PATH; before that it was LIBPATH in AIX
    5.1, which continues to work. Nothing about the $SOURCE expansion.

    The GCC Compile Farm Project has an AIX machine. I'm logging in there
    now. Looks like the "load" and "dlopen" man pages reference
    LD_LIBRARY_PATH. None of them mention any interpolation of parameters
    being supported. It probably doesn't exist.


    Wasn't it SHLIB_PATH on HP/UX?

    It still is. (Yes, some of us have to maintain these boxes because,
    although they were all amortised a decade or two ago, someone in a bank/taxation department/insurance company/&c knows that replacing them
    will be an expensive and time consuming process. So they'll be replaced
    - after they collapse into a pile of rust - in a mad panic with Linux
    boxes with something written in a mad rush in Python/PHP/Perl - by
    people who don't understand the requirements, briefed by people who
    don't understand the requirements - that sort of does the same job the
    old machines did, if you squint really, really hard. And /don't/ get
    audited by anyone competent. However, that one's *really* unlikely. :-) )

    Cheers,
    Gary B-)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Gary R. Schmidt on Wed Feb 7 15:45:09 2024
    XPost: comp.unix.programmer

    On Wed, 7 Feb 2024 23:53:22 +1100
    "Gary R. Schmidt" <grschmidt@acm.org> wrote:

    On 07/02/2024 18:17, Richard Harnden wrote:
    On 07/02/2024 02:18, Kaz Kylheku wrote:
    On 2024-02-06, Janis Papanagnou <janis_papanagnou+ng@hotmail.com>
    wrote:
    On 06.02.2024 21:32, Lawrence D'Oliveiro wrote:
    On Tue, 06 Feb 2024 19:20:06 GMT, Scott Lurndal wrote:

    LD_LIBRARY_PATH isn't a distro thing, its a shell thing
    interpreted by the dynamic linker.   The dynamic linker has
    a set of default paths that it uses, set by the distro,
    which can be overridden in LD_LIBRARY_PATH by each user.

    It’s a GNU thing, I think.

    I think we've used it on AIX and HP-UX already.

    Some IBM documentation I was able to dig up on the web says that
    AIX 5.3 [2004] introduced LD_LIBRARY_PATH; before that it was
    LIBPATH in AIX 5.1, which continues to work. Nothing about the
    $SOURCE expansion.

    The GCC Compile Farm Project has an AIX machine. I'm logging in
    there now. Looks like the "load" and "dlopen" man pages reference
    LD_LIBRARY_PATH. None of them mention any interpolation of
    parameters being supported. It probably doesn't exist.


    Wasn't it SHLIB_PATH on HP/UX?

    It still is. (Yes, some of us have to maintain these boxes because, although they were all amortised a decade or two ago, someone in a bank/taxation department/insurance company/&c knows that replacing
    them will be an expensive and time consuming process. So they'll be
    replaced
    - after they collapse into a pile of rust - in a mad panic with
    Linux boxes with something written in a mad rush in Python/PHP/Perl -
    by people who don't understand the requirements, briefed by people
    who don't understand the requirements - that sort of does the same
    job the old machines did, if you squint really, really hard. And
    /don't/ get audited by anyone competent. However, that one's
    *really* unlikely. :-) )

    Cheers,
    Gary B-)

    It does not have to be replaced with a new solution even after the
    original hardware dies.
    https://www.stromasys.com/solution/charon-par/

    For those that are currently on the IPF variant of HP-UX, working
    hardware is still easily available. However, when it no longer is, I'd
    expect the same company to provide an emulation solution. My theory is
    that they have already done it, but as long as "real" HW is available
    they are afraid to sell IPF emulators because of legal concerns.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Malcolm McLean on Wed Feb 7 15:09:57 2024
    XPost: comp.unix.programmer

    On 07/02/2024 10:56, Malcolm McLean wrote:
    On 07/02/2024 05:41, candycanearter07 wrote:
    On 2/5/24 12:13, Kaz Kylheku wrote:
    On 2024-02-05, candycanearter07 <no@thanks.net> wrote:
    But the tools are *still preinstalled*, so installers can definitely
    rely on compiling stuff.

    No, they aren't. It's common for devel tools not to be part of the base
    system of a distro.

    Wait really?

    If you install Windows you don't get Visual Studio and you have to
    install it separately. If you install Linux you get gcc and other
    development tools, and I don't think there's even a way of setting up
    the install to say you don't want them.


    There are several hundred Linux distributions, not including the niche
    ones or outdated ones. Have you tried them all?

    Most "normal user" oriented distros do not have gcc or related tools
    installed by default, nor do most server systems, or firewall systems,
    or small installations. Installing the tools is usually very simple
    ("apt-get install build-essentials", or equivalent), but they are not
    included by default in the installation.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to David Brown on Wed Feb 7 14:21:44 2024
    XPost: comp.unix.programmer

    On 07/02/2024 14:09, David Brown wrote:
    On 07/02/2024 10:56, Malcolm McLean wrote:
    On 07/02/2024 05:41, candycanearter07 wrote:
    On 2/5/24 12:13, Kaz Kylheku wrote:
    On 2024-02-05, candycanearter07 <no@thanks.net> wrote:
    But the tools are *still preinstalled*, so installers can definitely
    rely on compiling stuff.

    No, they aren't. It's common for devel tools not to be part of the base
    system of a distro.

    Wait really?

    If you install Windows you don't get Visual Studio and you have to
    install it separately. If you install Linux you get gcc and other
    development tools, and I don't think there's even a way of setting up
    the install to say you don't want them.


    There are several hundred Linux distributions, not including the niche
    ones or outdated ones.  Have you tried them all?

    Most "normal user" oriented distros do not have gcc or related tools installed by default, nor do most server systems, or firewall systems,
    or small installations.  Installing the tools is usually very simple ("apt-get install build-essentials", or equivalent), but they are not included by default in the installation.



    I've tried a fair number. The ones that used to come on CDs for you to
    boot on a Windows PC. Ones installed on one or two crummy Linux
    notebooks. The ones you downloaded to use with VirtualBox. The various
    versions you downloaded and burned onto an SD card to plug into RPIs.
    And most recently the ones that come with WSL.

    I think pretty much all of them that I remember came with a C compiler.

    So it is easy to make the assumption that gcc is always available.

    But isn't this also supposed to be one big advantage of Linux over
    Windows that this stuff is built-in?

    Installing the tools is usually very simple
    ("apt-get install build-essentials", or equivalent),

    Is 'apt-get' always available?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to bart on Wed Feb 7 15:37:09 2024
    XPost: comp.unix.programmer

    On 07/02/2024 11:40, bart wrote:
    On 07/02/2024 08:42, David Brown wrote:
    On 07/02/2024 03:57, vallor wrote:
    On Sun, 04 Feb 2024 20:55:12 GMT, scott@slp53.sl.home (Scott Lurndal)
    wrote in <QQSvN.294647$Wp_8.94897@fx17.iad>:

    bart <bc@freeuk.com> writes:
    On 04/02/2024 17:48, David Brown wrote:
    On 03/02/2024 20:35, bart wrote:

    It is Windows that places more store by file extensions, which Linux
    people say is a bad thing.


    Windows is too dependent on them, and too trusting.

    But above you say that is the advantage of Linux.

    Yes, it's a hands-down win for Linux (and other *nix) in this aspect.
    Yet it is Linux (manifested via gcc) where it ASSUMES .x is a linker script.

    I've never seen a '.x ' suffix.  Ever.  And I use linker scripts
    regularly.

    This was the first I'd heard about them in this context, but Open
    Network Computing's RPC (ONCRPC, was SunRPC) does use .x files
    for its RPC specifications.

    ONCRPC is a system for generating C stubs for network
    services, and it is (was?) also used to specify
    UNIX services like NFS and NIS.  The Sun of yore
    were, indeed, good denizens of the Net.  (So, crossposting
    conditions satisfied...I think?)

    Anyway, if you have the "standard" .x files
    installed on Linux Mint, they live in

    /usr/include/rpcsvc/

    Also, there are linker scripts that end in ".x"
    which on my system live here:

    /usr/lib/x86_64-linux-gnu/ldscripts/

    Fascinating to read -- and way over my head.  (The
    man page for GNU ld says they are
    "AT&T's Link Editor Command Language syntax".)  I'm
    not sure how often an average programmer would look
    around in there.

    In any event, the ".x" files in that directory are in
    the minority...


    If you look in that directory, you'll see all the files are
    ".x<flags>", where <flags> are letters.  So you get ".x", ".xbn",
    ".xc", ".xce", and a dozen other combinations.  I don't know the
    details of the flags, but they generally refer to different
    arrangements of code and data (for example, merging read-only data and
    executable code, or keeping them separate).

    There's no doubt that ".x", and ".x<flags>", are common extensions for
    linker files, but that they do not act as file extensions in the same
    way as for other source code.  Instead, they are sets of flags.
    (That's why gcc treats any unknown extension as a linker file.)

    A bit like my tools treat an unknown extension as a file of whatever
    language the tool primarily works with?

    Cool. But is gcc primarily used for linker files? I'm not even sure what
    a linker file is!


    gcc (the program, as distinct from GCC the project) is a front-end - a "driver". It passes its input files and flags on to the configured
    tools, adding to or changing flags as appropriate, to run the C
    compiler, C pre-processor, assembler, C++ compiler, other compilers
    (Fortran, Ada, etc.), the linker, and so on. (The assembler and linker
    are not part of the GCC project, but any given gcc build will usually be configured to use an appropriate assembler and linker.)

    Use of specific linker files is not common for building code on PCs -
    the standard linker setups are usually fine. But it is quite common in
    embedded development and other more specialised builds. Then you pass
    one or more linker files to the linker, generally via the gcc front-end.

  • From Lew Pitcher@21:1/5 to Keith Thompson on Wed Feb 7 15:02:07 2024
    XPost: comp.unix.programmer

    On Tue, 06 Feb 2024 13:07:31 -0800, Keith Thompson wrote:

    Lew Pitcher <lew.pitcher@digitalfreehold.ca> writes:
    On Tue, 06 Feb 2024 20:32:49 +0000, Lawrence D'Oliveiro wrote:
    On Tue, 06 Feb 2024 19:20:06 GMT, Scott Lurndal wrote:
    LD_LIBRARY_PATH isn't a distro thing, it's a shell thing
    interpreted by the dynamic linker. The dynamic linker has
    a set of default paths that it uses, set by the distro,
    which can be overridden in LD_LIBRARY_PATH by each user.

    It’s a GNU thing, I think.

    It's a UNIX thing. GNU supports it, as it supports other
    UNIX requirements.

    Where is it documented as a UNIX requirement? POSIX doesn't seem to
    mention it.

    You are correct; it's not mentioned in any of the POSIX or SUS
    documentation that I could get my hands on.

    However, it /is/ mentioned in the Solaris and SysV documentation,
    so it comes from (some specific) Unix (system).

    My (poorly made) point was that GNU didn't "invent" LD_LIBRARY_PATH
    (as Lawrence's post implied to me), but copied it from existing Unix
    systems. LD_LIBRARY_PATH is not a GNUism, but part of the Unix
    heritage.

    FWIW, GNU /could have/ used SHLIB_PATH (the HPUX equivalent of
    LD_LIBRARY_PATH) instead. And my point here is that, when "shared
    objects" became popular, Unix system authors/vendors tried to
    mitigate "DLL hell", often by "inventing" the same mechanism
    under different names.
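
    As a quick illustration of what the variable does - a minimal sketch,
    not from the post above; on older glibc you must link with -ldl, since
    dlopen() lived in libdl before glibc 2.34:

    #include <stdio.h>
    #include <stdlib.h>
    #include <dlfcn.h>

    int main(void)
    {
        /* Directories listed here are searched by the dynamic linker
           before the system defaults (e.g. those in /etc/ld.so.conf). */
        const char *path = getenv("LD_LIBRARY_PATH");
        printf("LD_LIBRARY_PATH = %s\n", path ? path : "(unset)");

        /* The same search order applies to libraries loaded at run time. */
        void *h = dlopen("libm.so.6", RTLD_NOW);
        if (!h) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }
        dlclose(h);
        return 0;
    }

    Running it as "LD_LIBRARY_PATH=/opt/mylibs ./a.out" shows the override
    taking effect for that one process only.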

    --
    Lew Pitcher
    "In Skills We Trust"

  • From Scott Lurndal@21:1/5 to Lawrence D'Oliveiro on Wed Feb 7 15:27:14 2024
    XPost: comp.unix.programmer

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Wed, 7 Feb 2024 02:57:39 -0000 (UTC), vallor wrote:

    Also, there are linker scripts that end in ".x"
    which on my system live here:

    /usr/lib/x86_64-linux-gnu/ldscripts/

    Fascinating to read -- and way over my head. (The man page for GNU ld
    says they are "AT&T's Link Editor Command Language syntax".) I'm not
    sure how often an average programmer would look around in there.

    Documentation on the script language here:
    <https://sourceware.org/binutils/docs/ld/Scripts.html>.

    An obvious example of the need for a custom linker script would be
    building the Linux kernel, where you need a special format for the
    resulting binary that can be loaded by a bootloader.

    Indeed, that's been my primary use of custom linker scripts since
    1989. Various operating systems, hypervisors, and even today for
    processor firmware. Mainly we used the .ld suffix for such
    scripts.

    partial example for a bare-metal hypervisor written in C++:

    OUTPUT_FORMAT("elf64-x86-64", "elf64-x86-64", "elf64-x86-64") OUTPUT_ARCH(i386:x86-64)

    ENTRY(dvmmstart)

    SECTIONS
    {
    . = 0xffff808000000000;
    percpu.data : {
    *(percpu.data)
    }
    . = 0xffff830000100000;

    _start = .;

    . = ALIGN(16);
    _stext = .;
    .text : {
    *(inittext)
    *(.text)
    *(.text.*)
    *(.gnu.linkonce.t*)
    }
    _etext = .;

    . = ALIGN(32);
    _srodata = .;
    .rodata : {
    *(.rodata)
    *(.rodata.*)
    *(.gnu.linkonce.r*)
    *(.got)
    *(.got.*)

    __CTOR_LIST__ = .;
    LONG((__CTOR_END__ - __CTOR_LIST__) / 8 - 2)
    *(.ctors)
    LONG(0)
    __CTOR_END__ = .;

    __DTOR_LIST__ = .;
    LONG((__DTOR_END__ - __DTOR_LIST__) / 8 - 2)
    *(.dtors)
    LONG(0)
    __DTOR_END__ = .;
    }
    _erodata = .;

    . = ALIGN(32);
    _sdata = .;
    .data : {
    *(.data)
    *(.data.*)
    *(.gnu.linkonce.d*)
    }
    _edata = .;

  • From Scott Lurndal@21:1/5 to no@thanks.net on Wed Feb 7 15:30:19 2024
    XPost: comp.unix.programmer

    candycanearter07 <no@thanks.net> writes:
    On 2/5/24 12:13, Kaz Kylheku wrote:
    On 2024-02-05, candycanearter07 <no@thanks.net> wrote:
    But the tools are *still preinstalled*, so installers can definitely
    rely on compiling stuff.

    No, they aren't. It's common for devel tools not to be part of the base
    system of a distro.

    Wait really?


    Yes, most distros won't install -devel packages, just the binary packages.

    Including ubuntu, just some of the packages we need to install
    on a clean ubuntu:

    apt-get update -y
    apt -y install ksh
    apt -y install csh
    apt -y install tcsh
    apt -y install nis
    apt -y install autofs
    apt -y install make
    apt -y install libedit
    apt -y install libedit-dev
    apt -y install zlib1g
    apt -y install zlib1g-dev
    apt -y install ghostscript
    apt -y install python3
    apt -y install python3-config
    apt -y install libelf-dev
    apt -y install libboost-all-dev
    apt -y install libpcap-dev
    apt -y install libssl-dev
    apt -y install libgmp-dev
    apt -y install libattr1-dev
    apt -y install environment-modules
    apt -y install tclsh
    apt -y install xterm
    apt -y install libnss3-dev
    apt -y install libatk1.0-0
    apt -y install libatk-bridge-2.0-0-udeb
    apt -y install libatk-bridge-2.0-0-udeb
    apt -y install libatk-bridge-2.0
    apt -y install libatk-bridge2.0-0
    apt -y install libgtk2.0-0
    apt -y install libgtk-3-0
    apt -y install libgbm-dev
    apt -y install libasound2
    apt -y install yum-utils
    apt -y install python-requests
    apt -y install python-pexpect
    apt -y install emacs
    apt -y install vim-gtk
    apt -y install numactl
    apt -y install libmotif-dev
    apt -y install tightvncserver
    apt -y install patchelf
    apt -y install p7zip-full
    apt -y install meld
    apt -y install ctags
    apt -y install clang-format
    apt -y install xfce4 xfce4-goodies
    ...

  • From Scott Lurndal@21:1/5 to Malcolm McLean on Wed Feb 7 15:31:28 2024
    XPost: comp.unix.programmer

    Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:
    On 07/02/2024 05:41, candycanearter07 wrote:
    On 2/5/24 12:13, Kaz Kylheku wrote:
    On 2024-02-05, candycanearter07 <no@thanks.net> wrote:
    But the tools are *still preinstalled*, so installers can definitely
    rely on compiling stuff.

    No, they aren't. It's common for devel tools not to be part of the base
    system of a distro.

    Wait really?

    If you install Windows you don't get Visual Studio and you have to
    install it separately. If you install Linux you get gcc and other
    development tools, and I don't think there's even a way of setting up
    the install to say you don't want them.

    That's 100% incorrect.

    "install Linux" is ambiguous, since there are dozens of different
    linux distributions, each with their own installation behavior.

  • From candycanearter07@21:1/5 to bart on Wed Feb 7 10:11:33 2024
    XPost: comp.unix.programmer

    On 2/7/24 08:21, bart wrote:
    On 07/02/2024 14:09, David Brown wrote:
    On 07/02/2024 10:56, Malcolm McLean wrote:
    On 07/02/2024 05:41, candycanearter07 wrote:
    On 2/5/24 12:13, Kaz Kylheku wrote:
    On 2024-02-05, candycanearter07 <no@thanks.net> wrote:
    But the tools are *still preinstalled*, so installers can definitely
    rely on compiling stuff.

    No, they aren't. It's common for devel tools not to be part of the
    base
    system of a distro.

    Wait really?

    If you install Windows you don't get Visual Studio and you have to
    install it separately. If you install Linux you get gcc and other
    development tools, and I don't think there's even a way of setting up
    the install to say you don't want them.


    There are several hundred Linux distributions, not including the niche
    ones or outdated ones.  Have you tried them all?

    Most "normal user" oriented distros do not have gcc or related tools
    installed by default, nor do most server systems, or firewall systems,
    or small installations.  Installing the tools is usually very simple
    ("apt-get install build-essentials", or equivalent), but they are not
    included by default in the installation.



    I've tried a fair number. The ones that used to come on CDs for you to
    boot on a Windows PC. Ones installed on one or two crummy Linux notebooks.
    The ones you downloaded to use with Virtual Box. The various versions
    you downloaded and burned onto an SD card to plug into RPIs. And most
    recently the ones that come with WSL.

    I think pretty much all of them that I remember came with a C compiler.

    So it is easy to make the assumption that gcc is always available.

    But isn't this also supposed to be one big advantage of Linux over
    Windows that this stuff is built-in?

    Installing the tools is usually very simple
    ("apt-get install build-essentials", or equivalent),

    Is 'apt-get' always available?

    It depends on the distro. apt is exclusive to Debian/Ubuntu. Other
    distros do have their own package managers though.
    --
    user <candycane> is generated from /dev/urandom

  • From candycanearter07@21:1/5 to Scott Lurndal on Wed Feb 7 10:12:41 2024
    XPost: comp.unix.programmer

    On 2/7/24 09:30, Scott Lurndal wrote:
    candycanearter07 <no@thanks.net> writes:
    On 2/5/24 12:13, Kaz Kylheku wrote:
    On 2024-02-05, candycanearter07 <no@thanks.net> wrote:
    But the tools are *still preinstalled*, so installers can definitely
    rely on compiling stuff.

    No, they aren't. It's common for devel tools not to be part of the base
    system of a distro.

    Wait really?


    Yes, most distros won't install -devel packages, just the binary packages.

    Including ubuntu, just some of the packages we need to install
    on a clean ubuntu:

    apt-get update -y
    apt -y install ksh
    apt -y install csh
    apt -y install tcsh
    apt -y install nis
    apt -y install autofs
    apt -y install make
    apt -y install libedit
    apt -y install libedit-dev
    apt -y install zlib1g
    apt -y install zlib1g-dev
    apt -y install ghostscript
    apt -y install python3
    apt -y install python3-config
    apt -y install libelf-dev
    apt -y install libboost-all-dev
    apt -y install libpcap-dev
    apt -y install libssl-dev
    apt -y install libgmp-dev
    apt -y install libattr1-dev
    apt -y install environment-modules
    apt -y install tclsh
    apt -y install xterm
    apt -y install libnss3-dev
    apt -y install libatk1.0-0
    apt -y install libatk-bridge-2.0-0-udeb
    apt -y install libatk-bridge-2.0-0-udeb
    apt -y install libatk-bridge-2.0
    apt -y install libatk-bridge2.0-0
    apt -y install libgtk2.0-0
    apt -y install libgtk-3-0
    apt -y install libgbm-dev
    apt -y install libasound2
    apt -y install yum-utils
    apt -y install python-requests
    apt -y install python-pexpect
    apt -y install emacs
    apt -y install vim-gtk
    apt -y install numactl
    apt -y install libmotif-dev
    apt -y install tightvncserver
    apt -y install patchelf
    apt -y install p7zip-full
    apt -y install meld
    apt -y install ctags
    apt -y install clang-format
    apt -y install xfce4 xfce4-goodies

    Weird. I guess I haven't reinstalled in a while...
    --
    user <candycane> is generated from /dev/urandom

  • From Ben Bacarisse@21:1/5 to Malcolm McLean on Wed Feb 7 16:15:15 2024
    XPost: comp.unix.programmer

    Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:

    On 07/02/2024 14:09, David Brown wrote:
    On 07/02/2024 10:56, Malcolm McLean wrote:
    On 07/02/2024 05:41, candycanearter07 wrote:
    On 2/5/24 12:13, Kaz Kylheku wrote:
    On 2024-02-05, candycanearter07 <no@thanks.net> wrote:
    But the tools are *still preinstalled*, so installers can definitely
    rely on compiling stuff.

    No, they aren't. It's common for devel tools not to be part of the base
    system of a distro.

    Wait really?

    If you install Windows you don't get Visual Studio and you have to
    install it separately. If you install Linux you get gcc and other
    development tools, and I don't think there's even a way of setting up
    the install to say you don't want them.

    There are several hundred Linux distributions, not including the niche
    ones or outdated ones. Have you tried them all?
    Most "normal user" oriented distros do not have gcc or related tools
    installed by default, nor do most server systems, or firewall systems, or
    small installations. Installing the tools is usually very simple
    ("apt-get install build-essentials", or equivalent), but they are not
    included by default in the installation.

    I've installed Linux several times on a desktop machine. I can never
    remember being given an option to not install gcc.

    Which is beside the point. You said you "get gcc and other development
    tools". Which distribution(s) did you install?

    --
    Ben.

  • From Scott Lurndal@21:1/5 to Malcolm McLean on Wed Feb 7 16:30:02 2024
    XPost: comp.unix.programmer

    Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:
    On 07/02/2024 15:27, Scott Lurndal wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Wed, 7 Feb 2024 02:57:39 -0000 (UTC), vallor wrote:



    So what the hell is that? What does it mean? How am I supposed to fix it
    if it goes wrong?

    I suspect you've been on the internet long enough to have seen the
    phrase RTFM...

  • From Ben Bacarisse@21:1/5 to Ben Bacarisse on Wed Feb 7 17:34:52 2024
    XPost: comp.unix.programmer

    Ben Bacarisse <ben.usenet@bsb.me.uk> writes:

    Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:

    On 07/02/2024 14:09, David Brown wrote:
    On 07/02/2024 10:56, Malcolm McLean wrote:
    On 07/02/2024 05:41, candycanearter07 wrote:
    On 2/5/24 12:13, Kaz Kylheku wrote:
    On 2024-02-05, candycanearter07 <no@thanks.net> wrote:
    But the tools are *still preinstalled*, so installers can definitely
    rely on compiling stuff.

    No, they aren't. It's common for devel tools not to be part of the base
    system of a distro.

    Wait really?

    If you install Windows you don't get Visual Studio and you have to
    install it separately. If you install Linux you get gcc and other
    development tools, and I don't think there's even a way of setting up
    the install to say you don't want them.

    There are several hundred Linux distributions, not including the niche
    ones or outdated ones. Have you tried them all?
    Most "normal user" oriented distros do not have gcc or related tools
    installed by default, nor do most server systems, or firewall systems, or
    small installations. Installing the tools is usually very simple
    ("apt-get install build-essential", or equivalent), but they are not
    included by default in the installation.

    I've installed Linux several times on a desktop machine. I can never
    remember being given an option to not install gcc.

    Which is beside the point. You said you "get gcc and other development tools". Which distribution(s) did you install?

    Did you reply via email by accident, or would you rather not answer
    here?

    --
    Ben.

  • From candycanearter07@21:1/5 to Keith Thompson on Wed Feb 7 12:24:07 2024
    XPost: comp.unix.programmer

    On 2/7/24 10:40, Keith Thompson wrote:
    candycanearter07 <no@thanks.net> writes:
    On 2/7/24 09:30, Scott Lurndal wrote:
    candycanearter07 <no@thanks.net> writes:
    On 2/5/24 12:13, Kaz Kylheku wrote:
    On 2024-02-05, candycanearter07 <no@thanks.net> wrote:
    But the tools are *still preinstalled*, so installers can definitely
    rely on compiling stuff.

    No, they aren't. It's common for devel tools not to be part of the base
    system of a distro.

    Wait really?

    Yes, most distros won't install -devel packages, just the binary packages.
    Including ubuntu, just some of the packages we need to install
    on a clean ubuntu:
    apt-get update -y
    apt -y install ksh
    [43 lines deleted]
    apt -y install clang-format
    apt -y install xfce4 xfce4-goodies

    Weird. I guess I haven't reinstalled in a while...

    When you post a followup to a long article, please delete any quoted
    material that isn't relevant to your followup, as I've done here.
    Thanks.

    Sorry.
    --
    user <candycane> is generated from /dev/urandom

  • From Kaz Kylheku@21:1/5 to Malcolm McLean on Wed Feb 7 19:24:36 2024
    XPost: comp.unix.programmer

    On 2024-02-07, Malcolm McLean <malcolm.arthur.mclean@gmail.com> wrote:
    On 07/02/2024 05:41, candycanearter07 wrote:
    On 2/5/24 12:13, Kaz Kylheku wrote:
    On 2024-02-05, candycanearter07 <no@thanks.net> wrote:
    But the tools are *still preinstalled*, so installers can definitely
    rely on compiling stuff.

    No, they aren't. It's common for devel tools not to be part of the base
    system of a distro.

    Wait really?

    If you install Windows you don't get Visual Studio and you have to
    install it separately. If you install Linux you get gcc and other

    Which "Linux"?

    There are numerous distros which have their own install systems
    with their own rules.

    development tools, and I don't think there's even a way of setting up
    the install to say you don't want them.

    In 27 years, I don't remember a major Linux distro foisting the compiler
    on me as a required base package.

    Typically it's an opt-in during package selection.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

  • From Lawrence D'Oliveiro@21:1/5 to Lew Pitcher on Wed Feb 7 20:36:36 2024
    XPost: comp.unix.programmer

    On Wed, 7 Feb 2024 15:02:07 -0000 (UTC), Lew Pitcher wrote:

    LD_LIBRARY_PATH is not a GNUism, but part of the Unix heritage.

    This is why we say “*nix”, not “Unix”. We leave the “Unix” trademark to
    the lawyers and those with enough money to pay them. We just get on and do
    our work on “*nix” systems.

    And my point here is that, when "shared
    objects" became popular, Unix system authors/vendors tried to mitigate
    "DLL hell", often by "inventing" the same mechanism under different
    names.

    We don’t have “DLL hell” because we don’t have “DLLs”, we have “shared
    objects” which can be versioned.

  • From Lawrence D'Oliveiro@21:1/5 to Malcolm McLean on Wed Feb 7 20:44:05 2024
    XPost: comp.unix.programmer

    On Wed, 7 Feb 2024 09:56:03 +0000, Malcolm McLean wrote:

    If you install Linux you get gcc and other development tools ...

    Normally, no. Considering a distro like Debian, essentially you don’t
    get anything unless you ask for it.

    Also on distros based on prebuilt binaries, there is a distinction
    between a “runtime” library package (needed for running programs that
    use the library) and a corresponding “development” package (needed for building such programs).

    So, for example, looking at the Cairo 2D graphics library, the runtime
    package (called “libcairo2”) provides the actual shareable library:

    lrwxrwxrwx 1 root root 21 Jan 6 08:05 /usr/lib/x86_64-linux-gnu/libcairo.so.2 -> libcairo.so.2.11800.0
    -rw-r--r-- 1 root root 1325768 Jan 6 08:05 /usr/lib/x86_64-linux-gnu/libcairo.so.2.11800.0

    while the development package (called “libcairo-dev”) provides the C include files and other stuff, as well as this additional symlink to
    the same shareable library:

    lrwxrwxrwx 1 root root 13 Jan 6 08:05 /usr/lib/x86_64-linux-gnu/libcairo.so -> libcairo.so.2
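
    To illustrate the split - a minimal sketch, not from the post, which
    assumes the libcairo2 runtime package is installed - a program can load
    the versioned library by its SONAME with no -dev package present at
    all, whereas building with "gcc demo.c -lcairo" needs the unversioned
    libcairo.so symlink from the development package:

    #include <stdio.h>
    #include <dlfcn.h>

    int main(void)
    {
        /* libcairo.so.2 comes from the runtime package (libcairo2);
           no development package is needed to load it at run time. */
        void *h = dlopen("libcairo.so.2", RTLD_NOW);
        if (!h) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }
        puts("loaded libcairo.so.2 from the runtime package");
        dlclose(h);
        return 0;
    }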

  • From Scott Lurndal@21:1/5 to Lawrence D'Oliveiro on Wed Feb 7 20:48:56 2024
    XPost: comp.unix.programmer

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Wed, 7 Feb 2024 15:02:07 -0000 (UTC), Lew Pitcher wrote:

    LD_LIBRARY_PATH is not a GNUism, but part of the Unix heritage.

    This is why we say “*nix”, not “Unix”. We leave the “Unix” trademark to
    the lawyers and those with enough money to pay them. We just get on and do
    our work on “*nix” systems.

    That's why 'you' say it. Don't speak for others.

  • From Lawrence D'Oliveiro@21:1/5 to bart on Wed Feb 7 20:46:18 2024
    XPost: comp.unix.programmer

    On Wed, 7 Feb 2024 14:21:44 +0000, bart wrote:

    The ones that used to come CDs for you to boot
    on a Windows PC.

    Those would be “live CDs”. They might be trying to cater to a wider
    audience of both techy and non-techy types.

    Normal distro installers can assume the user is non-techy, and set up
    defaults accordingly, since the techy ones would know how to ask for more.

  • From Lawrence D'Oliveiro@21:1/5 to Ben Bacarisse on Wed Feb 7 20:50:51 2024
    XPost: comp.unix.programmer

    On Wed, 07 Feb 2024 11:10:09 +0000, Ben Bacarisse wrote:

    It's not uncommon to have Linux installs without gcc.

    The very first non-Apple PC I bought was a Shuttle small-form-factor unit
    that came with a copy of Mandrake 9.1 “Discovery Edition” in the box. (Go on, look up that name and version. That should give you an idea of how
    long ago it was.)

    I soon discovered that “Discovery Edition” meant it was lacking the third CD containing GCC and other development tools. So my first lesson in
    hacking my new Linux system was figuring out a) where to find the relevant packages online, and b) how to download and install them. Preferably by
    not having to do so one at a time.

  • From David Brown@21:1/5 to bart on Wed Feb 7 21:53:35 2024
    XPost: comp.unix.programmer

    On 07/02/2024 15:21, bart wrote:
    On 07/02/2024 14:09, David Brown wrote:
    On 07/02/2024 10:56, Malcolm McLean wrote:
    On 07/02/2024 05:41, candycanearter07 wrote:
    On 2/5/24 12:13, Kaz Kylheku wrote:
    On 2024-02-05, candycanearter07 <no@thanks.net> wrote:
    But the tools are *still preinstalled*, so installers can definitely
    rely on compiling stuff.

    No, they aren't. It's common for devel tools not to be part of the
    base
    system of a distro.

    Wait really?

    If you install Windows you don't get Visual Studio and you have to
    install it separately. If you install Linux you get gcc and other
    development tools, and I don't think there's even a way of setting up
    the install to say you don't want them.


    There are several hundred Linux distributions, not including the niche
    ones or outdated ones.  Have you tried them all?

    Most "normal user" oriented distros do not have gcc or related tools
    installed by default, nor do most server systems, or firewall systems,
    or small installations.  Installing the tools is usually very simple
    ("apt-get install build-essentials", or equivalent), but they are not
    included by default in the installation.



    I've tried a fair number. The ones that used to come on CDs for you to
    boot on a Windows PC. Ones installed on one or two crummy Linux notebooks.
    The ones you downloaded to use with Virtual Box. The various versions
    you downloaded and burned onto an SD card to plug into RPIs. And most
    recently the ones that come with WSL.

    I think pretty much all of them that I remember came with a C compiler.

    So it is easy to make the assumption that gcc is always available.

    But isn't this also supposed to be one big advantage of Linux over
    Windows that this stuff is built-in?

    No, but it is all very easily available. I don't know of any distros
    that don't have an easy way to install a C compiler.


    Installing the tools is usually very simple
    ("apt-get install build-essentials", or equivalent),

    Is 'apt-get' always available?

    It is on any Debian-based distros. There are plenty of other package
    managers, with different details in their commands (and the names of
    packages), which is why I wrote "or equivalent".

  • From David Brown@21:1/5 to Malcolm McLean on Wed Feb 7 21:59:16 2024
    XPost: comp.unix.programmer

    On 07/02/2024 16:48, Malcolm McLean wrote:
    On 07/02/2024 15:27, Scott Lurndal wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Wed, 7 Feb 2024 02:57:39 -0000 (UTC), vallor wrote:

    Also, there are linker scripts that end in ".x"
    which on my system live here:

    /usr/lib/x86_64-linux-gnu/ldscripts/

    Fascinating to read -- and way over my head.  (The man page for GNU ld
    says they are "AT&T's Link Editor Command Language syntax".)  I'm not
    sure how often an average programmer would look around in there.

    Documentation on the script language here
    <https://sourceware.org/binutils/docs/ld/Scripts.html>.

    An obvious example of the need for a custom linker script would be
    building the Linux kernel, where you need a special format for the
    resulting binary that can be loaded by a bootloader.

    Indeed, that's been my primary use of custom linker scripts since
    1989.   Various operating systems, hypervisors, and even today for
    processor firmware.     Mainly we used the .ld suffix for such
    scripts.

    partial example for a bare-metal hypervisor written in C++:

    OUTPUT_FORMAT("elf64-x86-64", "elf64-x86-64", "elf64-x86-64")
    OUTPUT_ARCH(i386:x86-64)

    ENTRY(dvmmstart)

    SECTIONS
    {
         . = 0xffff808000000000;
         percpu.data : {
             *(percpu.data)
         }
         . = 0xffff830000100000;

         _start = .;

         . = ALIGN(16);
         _stext = .;
         .text : {
             *(inittext)
             *(.text)
             *(.text.*)
             *(.gnu.linkonce.t*)
         }
         _etext = .;
    <snip>
    So what the hell is that? What does it mean? How am I supposed to fix it
    if it goes wrong?

    It's all pretty straightforward if you are willing to spend a little
    effort learning it.

    But you can also consider it as just "under the bonnet magic" that the
    compiler and linker know about and get right - that will be fine for the
    kind of things you do. (If it were not, then you would already know
    about linker scripts.) You don't need to know how /everything/ works in
    order to use a tool.

    And if you are curious, the binutils ld manual is online.

  • From Lawrence D'Oliveiro@21:1/5 to Scott Lurndal on Wed Feb 7 21:15:16 2024
    XPost: comp.unix.programmer

    On Wed, 07 Feb 2024 20:48:56 GMT, Scott Lurndal wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    On Wed, 7 Feb 2024 15:02:07 -0000 (UTC), Lew Pitcher wrote:

    LD_LIBRARY_PATH is not a GNUism, but part of the Unix heritage.

    This is why we say “*nix”, not “Unix”. We leave the “Unix” trademark to
    the lawyers and those with enough money to pay them. We just get on and
    do our work on “*nix” systems.

    That's why 'you' say it. Don't speak for others.

    I certainly wouldn’t speak for those who weren’t even alive when I first started using a *nix system.

  • From Janis Papanagnou@21:1/5 to Lawrence D'Oliveiro on Wed Feb 7 22:48:52 2024
    XPost: comp.unix.programmer

    On 07.02.2024 21:36, Lawrence D'Oliveiro wrote:
    On Wed, 7 Feb 2024 15:02:07 -0000 (UTC), Lew Pitcher wrote:

    LD_LIBRARY_PATH is not a GNUism, but part of the Unix heritage.

    This is why we say “*nix”, not “Unix”. We leave the “Unix” trademark to
    the lawyers and those with enough money to pay them. We just get on and do our work on “*nix” systems.

    The trademark is UNIX.

    I've long used "Unix" as a generic name, and meanwhile (for
    quite some time now) "Unix" also seems to have got used much more
    widely for that; I see it even described in common sources, like
    in various Wikipedias.

    (Some folks prefer using a '*' in a descriptive name. Feel free.
    Not worth a dispute, IMO.)

    Janis

  • From Scott Lurndal@21:1/5 to Lawrence D'Oliveiro on Wed Feb 7 23:15:43 2024
    XPost: comp.unix.programmer

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Wed, 07 Feb 2024 20:48:56 GMT, Scott Lurndal wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    On Wed, 7 Feb 2024 15:02:07 -0000 (UTC), Lew Pitcher wrote:

    LD_LIBRARY_PATH is not a GNUism, but part of the Unix heritage.

    This is why we say “*nix”, not “Unix”. We leave the “Unix” trademark to
    the lawyers and those with enough money to pay them. We just get on and
    do our work on “*nix” systems.

    That's why 'you' say it. Don't speak for others.

    I certainly wouldn’t speak for those who weren’t even alive when I first
    started using a *nix system.

    I doubt you'll find many of those here. I was using computers in 1974 and unix in 1979.

  • From Lawrence D'Oliveiro@21:1/5 to Janis Papanagnou on Wed Feb 7 23:44:22 2024
    XPost: comp.unix.programmer

    On Wed, 7 Feb 2024 22:48:52 +0100, Janis Papanagnou wrote:

    I've long used "Unix" as a generic name ...

    Many do. For example, CMake, the build tool we have discussed elsewhere.
    As a result of which you see CMake directives like this in some cross-
    platform software:

    if(UNIX AND NOT APPLE)

    Anybody surprised that the one system still in common use that is entitled
    to use the “Unix” trademark does not behave sufficiently like that generic sense you were talking about?

  • From Lawrence D'Oliveiro@21:1/5 to Scott Lurndal on Wed Feb 7 23:58:29 2024
    XPost: comp.unix.programmer

    On Wed, 07 Feb 2024 23:15:43 GMT, Scott Lurndal wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    On Wed, 07 Feb 2024 20:48:56 GMT, Scott Lurndal wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    This is why we say “*nix”, not “Unix”. We leave the “Unix” trademark
    to the lawyers and those with enough money to pay them. We just get on
    and do our work on “*nix” systems.

    That's why 'you' say it. Don't speak for others.

    I certainly wouldn’t speak for those who weren’t even alive when I first
    started using a *nix system.

    I doubt you'll find many of those here. I was using computers in 1974
    and unix in 1979.

    With such a long history of being so cavalier about the term, you must
    have been cautioned at some point about the legal implications of such trademark usage. It would have been mentioned in just about every AT&T publication.

  • From vallor@21:1/5 to All on Thu Feb 8 00:39:54 2024
    XPost: comp.unix.programmer

    On Wed, 07 Feb 2024 16:30:02 GMT, scott@slp53.sl.home (Scott Lurndal)
    wrote in <eeOwN.308695$7sbb.80772@fx16.iad>:

    Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:
    On 07/02/2024 15:27, Scott Lurndal wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Wed, 7 Feb 2024 02:57:39 -0000 (UTC), vallor wrote:



    So what the hell is that? What does it mean? How am I supposed to fix it
    if it goes wrong?

    I suspect you've been on the internet long enough to have seen the
    phrase RTFM...

    He quoted the link to the documentation that Lawrence provided (thank
    you, Lawrence).

    --
    -v

  • From Scott Lurndal@21:1/5 to Lawrence D'Oliveiro on Thu Feb 8 01:33:53 2024
    XPost: comp.unix.programmer

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Wed, 07 Feb 2024 23:15:43 GMT, Scott Lurndal wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    On Wed, 07 Feb 2024 20:48:56 GMT, Scott Lurndal wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    This is why we say “*nix”, not “Unix”. We leave the “Unix” trademark
    to the lawyers and those with enough money to pay them. We just get on
    and do our work on “*nix” systems.

    That's why 'you' say it. Don't speak for others.

    I certainly wouldn’t speak for those who weren’t even alive when I first
    started using a *nix system.

    I doubt you'll find many of those here. I was using computers in 1974
    and unix in 1979.

    With such a long history of being so cavalier about the term, you must
    have been cautioned at some point about the legal implications of such
    trademark usage. It would have been mentioned in just about every AT&T
    publication.

    I spent four years working with the USL engineers directly, and another six
    or so years working on the XPG working group. I'm quite aware of the
    legal ramifications of the use of the trademark.

    None of those ramifications matter in casual usage, such as here in this newsgroup.

  • From Lawrence D'Oliveiro@21:1/5 to Scott Lurndal on Thu Feb 8 01:34:48 2024
    XPost: comp.unix.programmer

    On Thu, 08 Feb 2024 01:33:53 GMT, Scott Lurndal wrote:

    None of those ramifications matter in casual usage, such as here in this newsgroup.

    You won’t object if some of us feel otherwise.

  • From vallor@21:1/5 to ldo@nz.invalid on Thu Feb 8 01:50:54 2024
    XPost: comp.unix.programmer

    On Wed, 7 Feb 2024 23:58:29 -0000 (UTC), Lawrence D'Oliveiro
    <ldo@nz.invalid> wrote in <uq15f5$1l0eh$1@dont-email.me>:

    On Wed, 07 Feb 2024 23:15:43 GMT, Scott Lurndal wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    On Wed, 07 Feb 2024 20:48:56 GMT, Scott Lurndal wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    This is why we say “*nix”, not “Unix”. We leave the “Unix” trademark
    to the lawyers and those with enough money to pay them. We just get on
    and do our work on “*nix” systems.

    That's why 'you' say it. Don't speak for others.

    I certainly wouldn’t speak for those who weren’t even alive when I first
    started using a *nix system.

    I doubt you'll find many of those here. I was using computers in 1974
    and unix in 1979.

    With such a long history of being so cavalier about the term, you must
    have been cautioned at some point about the legal implications of such trademark usage. It would have been mentioned in just about every AT&T publication.

    As Janis hints at elsethread: at some point
    it was decided (adjudicated?) that "Unix"
    is a generic term, and UNIX(R) is
    the actual trademark.

    So Linux is a Unix but not UNIX(R)...

    (MacOS Darwin FreeBSD might be
    UNIX(R) -- is it certified, and
    do they pay the licensing fee?)

    [ I feel dirty posting this without any
    actual C topics in comp.lang.c, so I've set
    the followup to comp.unix.programmer... ]

    --
    -v

  • From Gary R. Schmidt@21:1/5 to Michael S on Thu Feb 8 12:56:21 2024
    XPost: comp.unix.programmer

    On 08/02/2024 00:45, Michael S wrote:
    On Wed, 7 Feb 2024 23:53:22 +1100
    "Gary R. Schmidt" <grschmidt@acm.org> wrote:

    On 07/02/2024 18:17, Richard Harnden wrote:
    [SNIP]

    Wasn't it SHLIB_PATH on HP/UX?

    It still is. (Yes, some of us have to maintain these boxes because,
    although they were all amortised a decade or two ago, someone in a
    bank/taxation department/insurance company/&c knows that replacing
    them will be an expensive and time consuming process. So they'll be
    replaced
    - after they collapse into a pile of rust - in a mad panic with
    Linux boxes with something written in a mad rush in Python/PHP/Perl -
    by people who don't understand the requirements, briefed by people
    who don't understand the requirements - that sort of does the same
    job the old machines did, if you squint really, really hard. And
    /don't/ get audited by anyone competent. However, that one's
    *really* unlikely. :-) )

    Cheers,
    Gary B-)

    It does not have to be replaced with a new solution even after the original hardware dies.
    https://www.stromasys.com/solution/charon-par/

    For those that are currently on the IPF variant of HP-UX, working hardware
    is still easily available. However, when it no longer is, I'd expect
    that the same company will provide an emulation solution. My theory is
    that they already have it done, but as long as "real" HW is available
    they are afraid to sell IPF emulators because of legal concerns.

    Oh, we know about Charon. It's not that great, and does not give what
    is needed in the enterprise space.

    Also, it costs real money, not the sort of monopoly money that can be
    hidden away in a few more VMs and paying consluttants. :-)

    And while there are still sources of used metal out there, the stuff is
    getting very long in the tooth; I expect that once what is in use starts
    to die, most of what can be sourced will have the same problems.
    And it's not like they're planning ahead for the inevitable failures,
    that's far too expensive to be considered, this year, at least[1].

    Hopefully I'll be well-recovered before the rust hits the floor. ;-)

    Cheers,
    Gary B-)

    1 - FVO "this year" that equals "every year".

  • From Lawrence D'Oliveiro@21:1/5 to vallor on Thu Feb 8 02:17:07 2024
    XPost: comp.unix.programmer

    On Thu, 8 Feb 2024 01:50:54 -0000 (UTC), vallor wrote:

    As Janis hints at elsethread: at some point it was decided
    (adjudicated?) that "Unix"
    is a generic term, and UNIX(R) is the actual trademark.

    Too similar to get away with that excuse.

    So Linux is a Unix but not UNIX(R)...

    I prefer “*nix”, or alternatively as a recognition that Linux is very much an ecosystem unto itself now, just define “Linux-compatible” as the new standard.

    (MacOS Darwin FreeBSD might be UNIX(R) -- is it certified, and do they
    pay the licensing fee?)

    No. Only Apple does. So the BSDs are no more “Unix” (or “UNIX®”) than Linux is.

  • From Ben Bacarisse@21:1/5 to Lawrence D'Oliveiro on Thu Feb 8 11:05:36 2024
    XPost: comp.unix.programmer

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    On Wed, 07 Feb 2024 11:10:09 +0000, Ben Bacarisse wrote:

    It's not uncommon to have Linux installs without gcc.

    The very first non-Apple PC I bought was a Shuttle small-form-factor unit that came with a copy of Mandrake 9.1 “Discovery Edition” in the box. (Go on, look up that name and version. That should give you an idea of how
    long ago it was.)

    No need to look. I had been using Mandrake for a while at that time!

    --
    Ben.

  • From Ben Bacarisse@21:1/5 to Malcolm McLean on Thu Feb 8 11:55:02 2024
    XPost: comp.unix.programmer

    Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:

    On 07/02/2024 17:34, Ben Bacarisse wrote:
    Ben Bacarisse <ben.usenet@bsb.me.uk> writes:

    Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:

    On 07/02/2024 14:09, David Brown wrote:
    On 07/02/2024 10:56, Malcolm McLean wrote:
    On 07/02/2024 05:41, candycanearter07 wrote:
    On 2/5/24 12:13, Kaz Kylheku wrote:
    On 2024-02-05, candycanearter07 <no@thanks.net> wrote:
    But the tools are *still preinstalled*, so installers can definitely
    rely on compiling stuff.

    No, they aren't. It's common for devel tools not to be part of the base
    system of a distro.

    Wait really?

    If you install Windows you don't get Visual Studio and you have to
    install it separately. If you install Linux you get gcc and other
    development tools, and I don't think there's even a way of setting up
    the install to say you don't want them.

    There are several hundred Linux distributions, not including the niche
    ones or outdated ones. Have you tried them all?
    Most "normal user" oriented distros do not have gcc or related tools
    installed by default, nor do most server systems, or firewall systems, or
    small installations. Installing the tools is usually very simple
    ("apt-get install build-essential", or equivalent), but they are not
    included by default in the installation.

    I've installed Linux several times on a desktop machine. I can never
    remember being given an option to not install gcc.

    Which is beside the point. You said you "get gcc and other development
    tools". Which distribution(s) did you install?
    Did you reply via email by accident, or would you rather not answer
    here?

    Me. Yes sorry.
    I've lost Google groups. Thunderbird has a "reply" button which means
    "email" and it's too easy to press "reply" if you're not terribly used to
    it. I did that to KT as well and he wondered why I was replying via
    email.

    An easy mistake to make. So what Linux distributions did you install
    that gave you gcc by default? The ones I've used don't (though it's
    trivial to add build tools later).

    --
    Ben.

  • From bart@21:1/5 to thiago on Thu Feb 8 13:50:15 2024
    On 07/02/2024 20:36, thiago wrote:
    On Mon, 29 Jan 2024 16:03:45 +0000, bart wrote:

    By 'Build System', I mean a convenient or automatic way to tell a
    compiler which source and library files comprise a project, one that
    doesn't involve extra dependencies.

    This proposal comes under 'convenient' rather than 'automatic'. (I did
    try an automatic scheme in the past, but that only worked for specially
    written projects.)

    We already had some similar topics here. I think I have suggested
    pragma source.




    I am using a build system that is a C program.

    This is the "build" file I use to build cake. It works on
    Windows and Linux, with gcc etc.

    https://github.com/thradams/cake/blob/main/src/build.c


    That certainly looks easier on the eye than most makefiles I've seen!

    You seem to have solved the problem I had where here:

    xcc build.c
    build

    I had to transfer the name of the compiler used (xcc) and make it known
    to build.exe.

    I tried to do it by looking at args[0]. You use compiler-specific macros
    that each compiler exposes, and bake the results into the generated
    build.exe.

    But this limits the supported compilers to the ones you enumerated.

    (I tried adding mine, but I found a bug where its identifier, __MCC__,
    which is a predefined macro, doesn't work with 'defined'. I got around
    that temporarily, but I haven't yet tried it out on a project.)
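
    (For reference: a conforming preprocessor must give #if defined(__MCC__)
    the same answer as #ifdef __MCC__ for any macro, predefined or not. A
    minimal test of the kind that would expose such a bug - with __MCC__
    standing in for the compiler's own identifying macro - might be:

    #include <stdio.h>

    int main(void)
    {
    #ifdef __MCC__
        puts("#ifdef __MCC__: set");
    #endif
    #if defined(__MCC__)
        /* Must agree with the #ifdef above; the reported bug is
           this form failing while the #ifdef form works. */
        puts("#if defined(__MCC__): set");
    #endif
        return 0;
    }

    A compiler that predefines __MCC__ should print both lines.)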

    You've also easily turned what looked to me a two-step process into one
    by using &&:

    xcc build.c && build

    with variations depending on compiler.

    Of course makefile diehards will say this doesn't beat just typing:

    make

    but that doesn't really have cross-compiler support unless it's
    built-in, somehow, to each makefile. (You can use any compiler you like
    so long as it's called 'gcc'!)

    The answer here is to just supply a 2-line makefile that contains
    something like 'xcc build.c && build'.




    I would describe "pragma module" as automatic source discovery.
    We can break the build into subproblems; one of them is source
    code discovery.
    The build I am using has a manual list of sources.

    #define SOURCE_FILES \
    " file1.c " \
    " file2.c " \
    ...


    The other problems are, for instance, settings like flags, etc.


    I also have "#pragma directory" to say where the include dirs are.

    I think everything should be controlled with pragmas; then we have a
    choice to use a separate file, for instance a file with just
    pragma module directives, or to include pragma module inside normal
    source code.

    I am not sure you realized this, but it is possible to create a tool
    with a C preprocessor that can scan source and discover all the
    sources automatically.
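
    (A minimal sketch of that idea, using a simple line scan rather than a
    full preprocessor; the default file name "demo.c" and the pragma form
    follow the examples earlier in the thread, not thiago's actual tool:

    #include <stdio.h>

    int main(int argc, char **argv)
    {
        char line[512], name[256];
        FILE *f = fopen(argc > 1 ? argv[1] : "demo.c", "r");
        if (!f) { perror("fopen"); return 1; }

        while (fgets(line, sizeof line, f)) {
            /* Matches lines such as: #pragma module "hmac.c" */
            if (sscanf(line, " #pragma module \"%255[^\"]\"", name) == 1)
                printf("module: %s\n", name);
        }
        fclose(f);
        return 0;
    }

    A real tool would recurse into each discovered file to build the full
    source list.)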

    My experimental code posted in the other Build thread used an array like
    this instead of #defines:

    char* source_files[] = {
    "file1.c",
    ...

    What's the advantage of using the preprocessor? (Where you have to be
    more careful with syntax.)
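
    (For comparison, a complete sketch of that array-based driver; the
    compiler command 'cc', the output name and the file list are
    placeholders for illustration, not code from either project:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static char *source_files[] = { "file1.c", "file2.c" };

    int main(void)
    {
        /* Fixed-size buffer is fine for a sketch this small. */
        char cmd[1024] = "cc -o demo";
        size_t n = sizeof source_files / sizeof *source_files;

        for (size_t i = 0; i < n; i++) {
            strcat(cmd, " ");
            strcat(cmd, source_files[i]);
        }
        printf("running: %s\n", cmd);
        return system(cmd);
    }

    Compile the driver once, then run it to build the whole project in a
    single compiler invocation.)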

  • From David Brown@21:1/5 to Malcolm McLean on Thu Feb 8 16:35:17 2024
    XPost: comp.unix.programmer

    On 08/02/2024 13:32, Malcolm McLean wrote:
    On 08/02/2024 11:55, Ben Bacarisse wrote:

    An easy mistake to make.  So what Linux distributions did you install
    that gave you gcc by default?  The ones I've used, don't (though it's
    trivial to add build tools later).

    Whilst I've installed Linux many times the names of the distributions
    aren't very meaningful to me, the machines are mostly long since
    discarded, and I couldn't rightly tell you. But one name I remember is "Ubuntu". You take what is usually an old machine which has come to the
    end of its useful life as a Windows computer, but still has a bit of kick
    in it and can become a Linux box. So I try to go for a lightweight
    distribution which won't stress it out. It chugs through and gives an
    install. And I don't think there is any tick box or option which says
    "don't install gcc". Now other people have said I'm wrong about this,
    and of course as a programmer I need gcc and wouldn't be interested in
    that tick box anyway. But I'm pretty sure you do get gcc by default and
    if you had to take special action I would have remembered it.

    If you install Ubuntu desktop, then it might have gcc by default (it's a
    long time since I've used "pure" Ubuntu). Other distributions may be different.

    People who use Linux as their preferred system usually pick their
    distributions with a bit of care and thought, and use it on appropriate computers. While it is certainly true that an old and outdated Windows
    machine can be given new life when the Windows installation is scrapped
    and replaced by Linux, for developers using Linux it is normally an
    active choice. The last three main development machines I have had at
    work have never had Windows on them - they were bought for Linux, and
    used only with Linux.

    Basically, what you are saying is that your entire Linux experience is a
    few installations long ago, to briefly play around with it on throw-away machines. And you think that is sufficient to insist that /you/ know
    details when actual long-term Linux users tell you differently?

  • From Dan Purgert@21:1/5 to Ben Bacarisse on Thu Feb 8 17:04:31 2024
    XPost: comp.unix.programmer

    On 2024-02-08, Ben Bacarisse wrote:
    Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:
    Whilst I've installed Linux many times the names of the distributions
    aren't very meaningful to me, the machines are mostly long since
    discarded, and I couldn't rightly tell you. But one name I remember
    is "Ubuntu".

    Ubuntu does not, as far as I can tell, install gcc by default.

    It doesn't. At least not in the past decade (14.04 forward). It may
    have at one point in the past; but I only started using it *maybe* at
    10.04 (more likely 12.04; certainly by 14.04).

    Now, that is not to say you might not get the option at the tail end of
    the installation process to select it (amongst other tools), ala
    Debian's tasksel step.

    --
    |_|O|_|
    |_|_|O| Github: https://github.com/dpurgert
    |O|O|O| PGP: DDAB 23FB 19FA 7D85 1CC1 E067 6D65 70E5 4CE7 2860

  • From Ben Bacarisse@21:1/5 to Malcolm McLean on Thu Feb 8 16:50:06 2024
    XPost: comp.unix.programmer

    Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:

    On 08/02/2024 11:55, Ben Bacarisse wrote:
    Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:

    On 07/02/2024 17:34, Ben Bacarisse wrote:
    Ben Bacarisse <ben.usenet@bsb.me.uk> writes:

    Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:

    On 07/02/2024 14:09, David Brown wrote:
    On 07/02/2024 10:56, Malcolm McLean wrote:
    On 07/02/2024 05:41, candycanearter07 wrote:
    On 2/5/24 12:13, Kaz Kylheku wrote:
    On 2024-02-05, candycanearter07 <no@thanks.net> wrote:
    But the tools are *still preinstalled*, so installers can definitely
    rely on compiling stuff.

    No, they aren't. It's common for devel tools not to be part of the base
    system of a distro.

    Wait really?

    If you install Windows you don't get Visual Studio and you have to
    install it separately. If you install Linux you get gcc and other
    development tools, and I don't think there's even a way of setting up
    the install to say you don't want them.

    There are several hundred Linux distributions, not including the niche
    ones or outdated ones. Have you tried them all?
    Most "normal user" oriented distros do not have gcc or related tools
    installed by default, nor do most server systems, or firewall systems, or
    small installations. Installing the tools is usually very simple
    ("apt-get install build-essential", or equivalent), but they are not
    included by default in the installation.

    I've installed Linux several times on a desktop machine. I can never
    remember being given an option to not install gcc.

    Which is beside the point. You said you "get gcc and other development
    tools". Which distribution(s) did you install?
    Did you reply via email by accident, or would you rather not answer
    here?

    Me. Yes sorry.
    I've lost Google groups. Thunderbird has a "reply" button which means
    "email" and it's too easy to press "reply" if you're not terribly used to
    it. I did that to KT as well and he wondered why I was replying via
    email.
    An easy mistake to make. So what Linux distributions did you install
    that gave you gcc by default? The ones I've used don't (though it's
    trivial to add build tools later).

    Whilst I've installed Linux many times the names of the distributions
    aren't very meaningful to me, the machines are mostly long since discarded, and I couldn't rightly tell you. But one name I remember is "Ubuntu".

    Ubuntu does not, as far as I can tell, install gcc by default.

    But I'm pretty sure you do
    get gcc by default and if you had to take special action I would have remembered it.

    You remember that gcc was installed by default often enough that you
    were prepared to claim it as a general rule about Linux, but you can't
    remember any of the distributions that did it... Oh well, we'll never
    know now.

    --
    Ben.

  • From bart@21:1/5 to Ben Bacarisse on Thu Feb 8 17:10:21 2024
    XPost: comp.unix.programmer

    On 08/02/2024 16:50, Ben Bacarisse wrote:
    Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:

    But I'm pretty sure you do
    get gcc by default and if you had to take special action I would have
    remembered it.

    You remember that gcc was installed by default often enough that you
    were prepared to claim it as a general rule about Linux, but you can't remember any of the distributions that did it... Oh well, we'll never
    know now.

    You're being unfair.

    Let's say I've used a dozen versions of prepackaged Linux (e.g. as
    monolithic image, or already installed), which have always had gcc. And
    another dozen that I've had to install myself.

    If those asked whether I wanted gcc added, then I really can't remember. Usually there were 1000 packages to install; you just let it get on with
    it and install the lot.

    BTW if gcc /isn't/ installed, do you still get a bunch of standard C
    headers in /usr/include? If so, what do you have to select to not
    install them?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Ben Bacarisse on Thu Feb 8 17:15:03 2024
    XPost: comp.unix.programmer

    Ben Bacarisse <ben.usenet@bsb.me.uk> writes:
    Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:


    Whilst I've installed Linux many times the names of the distributions
    aren't very meaningful to me, the machines are mostly long since discarded,
    and I couldn't rightly tell you. But one name I remember is "Ubuntu".

    Ubuntu does not, as far as I can tell, install gcc by default.

    You are correct, Ubuntu does not install gcc by default.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to bart on Thu Feb 8 17:25:34 2024
    XPost: comp.unix.programmer

    bart <bc@freeuk.com> writes:

    BTW if gcc /isn't/ installed, do you still get a bunch of standard C
    headers in /usr/include? If so, what do you have to select to not
    install them?

    $ rpm -q -f /usr/include/stdio.h
    glibc-headers-2.18-19.fc20.x86_64

    An optional package. glibc will be installed, of course,
    but the development package(s) (containing headers, link-time
    libraries, etc) are optional.
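
    For illustration, the Debian/Ubuntu equivalent of that query (a
    sketch; the owning package can vary by release) would be:

        $ dpkg -S /usr/include/stdio.h
        libc6-dev:amd64: /usr/include/stdio.h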

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to bart on Thu Feb 8 17:38:28 2024
    XPost: comp.unix.programmer

    On 2024-02-08, bart <bc@freeuk.com> wrote:
    On 08/02/2024 16:50, Ben Bacarisse wrote:
    Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:

    But I'm pretty sure you do
    get gcc by default and if you had to take special action I would have
    remembered it.

    You remember that gcc was installed by default often enough that you
    were prepared to claim it as a general rule about Linux, but you can't
    remember any of the distributions that did it... Oh well, we'll never
    know now.

    You're being unfair.

    Let's say I've used a dozen versions of prepackaged Linux (eg. as
    monolithic image, or already installed), which have always had gcc. And another dozen that I've had to install myself.

    One of the main points of GNU/Linux distros having developed binary
    packaging system was so that users didn't have to get sources and build
    stuff themselves.

    That was the big thing: use our distro and everything is prebuilt
    and easy!

    The distros made a point of the compiler being unnecessary, and just
    being another set of packages.

    Finding a dozen different binary distros that package the compiler
    as a non-optional base package seems like a historic impossibility.

    Malcolm probably said yes when prompted for dev tools and doesn't
    remember.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to bart on Thu Feb 8 21:30:22 2024
    XPost: comp.unix.programmer

    On 08/02/2024 18:10, bart wrote:
    On 08/02/2024 16:50, Ben Bacarisse wrote:
    Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:

    But I'm pretty sure you do
    get gcc by default and if you had to take special action I would have
    remembered it.

    You remember that gcc was installed by default often enough that you
    were prepared to claim it as a general rule about Linux, but you can't
    remember any of the distributions that did it...  Oh well, we'll never
    know now.

    You're being unfair.

    Let's say I've used a dozen versions of prepackaged Linux (eg. as
    monolithic image, or already installed), which have always had gcc. And another dozen that I've had to install myself.

    If those asked whether I wanted gcc added, then I really can't remember. Usually there were 1000 packages to install; you just let it get on with
    it and install the lot.


    It's fine to say you can't remember - or that you don't care. It is not
    fine to say I've installed Linux a couple of times in the past, and gcc
    is always included in the default install - and repeat the assertion
    when others (who know better) say differently.

    BTW if gcc /isn't/ installed, do you still get a bunch of standard C
    headers in /usr/include? If so, what do you have to select to not
    install them?


    You might get some headers, but not many - most come as part of
    development tool packages, or "development" versions of libraries, or as
    a "kernel headers" package. You might accidentally install some of
    these, or you might install something else that has a dependency on a
    compiler.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Malcolm McLean on Thu Feb 8 21:24:19 2024
    XPost: comp.unix.programmer

    On 08/02/2024 17:31, Malcolm McLean wrote:
    On 08/02/2024 15:35, David Brown wrote:
    On 08/02/2024 13:32, Malcolm McLean wrote:
    On 08/02/2024 11:55, Ben Bacarisse wrote:

    An easy mistake to make. So what Linux distributions did you install
    that gave you gcc by default? The ones I've used, don't (though it's
    trivial to add build tools later).

    Whilst I've installed Linux many times the names of the distributions
    aren't very meaningful to me, the machines are mostly long since
    discarded, and I couldn't rightly tell you. But one name I remember
    is "Ubuntu". You take what is usually an old machine which has come
    to the end of its useful life as Windows computer, but still has a
    bit of kick in it and can become a Linux box. So I try to go for a
    lightweight distribution which won't stress it out. It chugs through
    and gives an install. And don't think there is any tick box or option
    which says "don't install gcc". Now other people have said I'm wrong
    about this, and of course as programmer I need gcc and wouldn't be
    interested in that tick box anyway. But I'm pretty sure you do get
    gcc by default and if you had to take special action I would have
    remembered it.

    If you install Ubuntu desktop, then it might have gcc by default (it's
    a long time since I've used "pure" Ubuntu).  Other distributions may
    be different.

    People who use Linux as their preferred system usually pick their
    distributions with a bit of care and thought, and use it on
    appropriate computers.  While it is certainly true that an old and
    outdated Windows machine can be given new life when the Windows
    installation is scrapped and replaced by Linux, for developers using
    Linux it is normally an active choice.  The last three main
    development machines I have had at work have never had Windows on them
    - they were bought for Linux, and used only with Linux.

    Basically, what you are saying is that your entire Linux experience is
    a few installations long ago, to briefly play around with it on
    throw-away machines.  And you think that is sufficient to insist that
    /you/ know details when actual long-term Linux users tell you
    differently?

    Baby X was developed for Linux. I've used it seriously and not just
    played around. But whilst I've been given powerful Linux machines to use
    at university, I've never felt the need for a powerful Linux system for
    hobby use. But you can run a lot of extremely interesting programs on
    fairly low powered machines.

    Certainly you can.

    I don't often install Linux. Usually only when I retire a Windows
    machine, though I have tried virtual Linux installations under Windows.
    Sadly this doesn't work well. It's not that I have no experience at all. Because
    of the realities of UK economic life, whilst I can easily afford to buy
    a second computer I can't easily afford to buy a bigger house, and I've
    only got room for one computer, and I find I can't work on laptops. So
    whilst I have a Linux machine, it's not currently set up and usable.

    OK.

    Of course it is completely up to you what you use, and how you use it.
    But it is wrong to view /your/ ways of doing things as though they are
    the "normal" way. You can freely install Linux on an older machine and
    use Windows on newer ones, and you would not be the only one doing that
    - but it is not by any means the normal situation for people who do
    serious development on Linux.

    And it is absolutely fine if you only install Linux very occasionally,
    and only one or two distros, and only long ago - but it is /not/ fine
    for you to think that you are right, and others are wrong, about the
    process.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Ben Bacarisse on Thu Feb 8 23:29:02 2024
    XPost: comp.unix.programmer

    On Thu, 08 Feb 2024 11:55:02 +0000, Ben Bacarisse wrote:

    So what Linux distributions did you install that gave you gcc by
    default?

    An obvious one would be something like Gentoo, where you build everything
    you install from source. So development tools would naturally be
    considered an essential part of the base install.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Scott Lurndal on Thu Feb 8 23:30:22 2024
    XPost: comp.unix.programmer

    On Thu, 08 Feb 2024 17:25:34 GMT, Scott Lurndal wrote:

    $ rpm -q -f /usr/include/stdio.h

    Hah. Never typed that often enough to discover that “rpm -q -f” can be shortened to “rpm -qf”? ;)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to bart on Fri Feb 9 00:58:47 2024
    XPost: comp.unix.programmer

    bart <bc@freeuk.com> writes:

    On 08/02/2024 16:50, Ben Bacarisse wrote:
    Malcolm McLean <malcolm.arthur.mclean@gmail.com> writes:

    But I'm pretty sure you do
    get gcc by default and if you had to take special action I would have
    remembered it.
    You remember that gcc was installed by default often enough that you
    were prepared to claim it as a general rule about Linux, but you can't
    remember any of the distributions that did it... Oh well, we'll never
    know now.

    You're being unfair.

    Let's say I've used a dozen versions of prepackaged Linux (eg. as
    monolithic image, or already installed), which have always had gcc. And another dozen that I've had to install myself.

    If those asked whether I wanted gcc added, then I really can't
    remember. Usually there were 1000 packages to install; you just let it get
    on with it and install the lot.

    Would you then claim, knowing that you can't really remember, that (and
    you cut this part) "If you install Linux you get gcc and other
    development tools, and I don't think there's even a way of setting up
    the install to say you don't want them"? I contend you probably
    shouldn't.

    And if, in the face of quite a few responses pointing out that this is
    not usual, would you simply say "sorry, I may be misremembering"? I
    contend you probably should.

    BTW if gcc /isn't/ installed, do you still get a bunch of standard C
    headers in /usr/include? If so, what do you have to select to not install them?

    I can't say what happens without specifics. There are hundreds of Linux distributions.

    This is exactly why I was curious about what prompted Malcolm's
    confident statement about what comes with "Linux" -- it runs contrary to
    my limited experience.

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Ben Bacarisse on Fri Feb 9 01:14:57 2024
    XPost: comp.unix.programmer

    On 09/02/2024 00:58, Ben Bacarisse wrote:
    bart <bc@freeuk.com> writes:

    BTW if gcc /isn't/ installed, do you still get a bunch of standard C
    headers in /usr/include? If so, what do you have to select to not install
    them?

    I can't say what happens without specifics. There are hundreds of Linux distributions.

    This is exactly why I was curious about what prompted Malcolm's
    confident statement about what comes with "Linux" -- it runs contrary to
    my limited experience.

    Not to mine. I thought the big deal with Linux compared with Windows was
    that it came with compilers, headers and libraries at least for C.

    Now that advantage may be just by chance?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Scott Lurndal on Thu Feb 8 17:22:47 2024
    XPost: comp.unix.programmer

    scott@slp53.sl.home (Scott Lurndal) writes:

    Kaz Kylheku <433-929-6894@kylheku.com> writes:

    On 2024-02-05, Malcolm McLean <malcolm.arthur.mclean@gmail.com> wrote:
    [...]
    The Glibc shared library loading mechanism doesn't implement the nice
    strategy of finding libraries in the same directory as the executable.

    Sure it does, if you tell it to. viz. LD_LIBRARY_PATH.
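
    For illustration, two sketches with hypothetical file names: point
    the loader at a directory at run time, or bake the lookup in at link
    time with an $ORIGIN rpath, which makes the dynamic linker search the
    directory containing the executable itself:

        $ LD_LIBRARY_PATH=/opt/myapp /opt/myapp/myprog
        $ gcc -o myprog main.c -L. -lfoo -Wl,-rpath,'$ORIGIN'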

    I would appreciate if folks posting stuff that pertains
    almost exclusively to comp.unix.programmer would take
    comp.lang.c off of those postings. Thank you.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to bart on Fri Feb 9 01:18:55 2024
    XPost: comp.unix.programmer

    bart <bc@freeuk.com> writes:
    On 09/02/2024 00:58, Ben Bacarisse wrote:
    bart <bc@freeuk.com> writes:

    BTW if gcc /isn't/ installed, do you still get a bunch of standard C
    headers in /usr/include? If so, what do you have to select to not install
    them?

    I can't say what happens without specifics. There are hundreds of Linux
    distributions.

    This is exactly why I was curious about what prompted Malcolm's
    confident statement about what comes with "Linux" -- it runs contrary to
    my limited experience.

    Not to mine. I thought the big deal with Linux compared with Windows was
    that it came with compilers, headers and libraries at least for C.

    Now that advantage may be just by chance?


    Now, you are just being disingenuous. It's on the installation media
    and the user installing it selects whether to include it or not. Yes,
    linux distributions come with compilers, headers and libraries.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Lew Pitcher on Thu Feb 8 17:23:33 2024
    XPost: comp.unix.programmer

    Lew Pitcher <lew.pitcher@digitalfreehold.ca> writes:

    On Tue, 06 Feb 2024 20:32:49 +0000, Lawrence D'Oliveiro wrote:

    On Tue, 06 Feb 2024 19:20:06 GMT, Scott Lurndal wrote:

    LD_LIBRARY_PATH isn't a distro thing, it's a shell thing
    interpreted by the dynamic linker. The dynamic linker has
    a set of default paths that it uses, set by the distro,
    which can be overridden in LD_LIBRARY_PATH by each user.

    It's a GNU thing, I think.

    It's a UNIX thing. GNU supports it, as it supports other
    UNIX requirements.

    I would appreciate if folks posting stuff that pertains
    almost exclusively to comp.unix.programmer would take
    comp.lang.c off of those postings. Thank you.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to bart on Fri Feb 9 01:27:01 2024
    XPost: comp.unix.programmer

    On 2024-02-09, bart <bc@freeuk.com> wrote:
    On 09/02/2024 00:58, Ben Bacarisse wrote:
    bart <bc@freeuk.com> writes:

    BTW if gcc /isn't/ installed, do you still get a bunch of standard C
    headers in /usr/include? If so, what do you have to select to not install
    them?

    I can't say what happens without specifics. There are hundreds of Linux
    distributions.

    This is exactly why I was curious about what prompted Malcolm's
    confident statement about what comes with "Linux" -- it runs contrary to
    my limited experience.

    Not to mine. I thought the big deal with Linux compared with Windows was
    that it came with compilers, headers and libraries at least for C.

    Desktop GNU/Linux distributions come with thousands of packages.
    You don't get all of them by default. That's what the thread is about.

    Depending on the distro, getting the compilers that could be as little
    as answering one question: like for instance that your installation
    type is that of a development workstation.

    People just looking to surf the web and use e-mail don't require
    compilers.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to bart on Fri Feb 9 01:30:51 2024
    XPost: comp.unix.programmer

    bart <bc@freeuk.com> writes:

    On 09/02/2024 00:58, Ben Bacarisse wrote:
    bart <bc@freeuk.com> writes:

    BTW if gcc /isn't/ installed, do you still get a bunch of standard C
    headers in /usr/include? If so, what do you have to select to not install
    them?
    I can't say what happens without specifics. There are hundreds of Linux
    distributions.
    This is exactly why I was curious about what prompted Malcolm's
    confident statement about what comes with "Linux" -- it runs contrary to
    my limited experience.

    Not to mine.

    I thought you said you can't remember either? Frankly, I don't think
    you can recall enough of your experience to be able to say this
    honestly. But if you can recall, then tell me -- what distributions
    install gcc and the other development tools by default? Malcolm can't
    help, but maybe you can.

    (As has been pointed out, some distributions are built from source by
    the user doing the install. These are not the kind we are discussing.)

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to bart on Fri Feb 9 02:07:25 2024
    XPost: comp.unix.programmer

    On Fri, 9 Feb 2024 01:14:57 +0000, bart wrote:

    I thought the big deal with Linux compared with Windows was
    that it came with compilers, headers and libraries at least for C.

    Now that advantage may be just by chance?

    Even when you add these for Windows, you do seem to have trouble building
    C programs though, don’t you, as evidenced by your past complaints? So clearly there must be a bit more to it than that.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to bart on Fri Feb 9 09:21:50 2024
    XPost: comp.unix.programmer

    On 09/02/2024 02:14, bart wrote:
    On 09/02/2024 00:58, Ben Bacarisse wrote:
    bart <bc@freeuk.com> writes:

    BTW if gcc /isn't/ installed, do you still get a bunch of standard C
    headers in /usr/include? If so, what do you have to select to not
    install
    them?

    I can't say what happens without specifics.  There are hundreds of Linux
    distributions.

    This is exactly why I was curious about what prompted Malcolm's
    confident statement about what comes with "Linux" -- it runs contrary to
    my limited experience.

    Not to mine. I thought the big deal with Linux compared with Windows was
    that it came with compilers, headers and libraries at least for C.

    Now that advantage may be just by chance?



    The key distinction is that Linux /distributions/ invariably (or almost invariably - I haven't checked them all) come with compilers, headers
    and libraries. But Linux /installations/ often don't.

    Distributions typically contain enormous quantities of software, and any
    one user will only want a fraction of that. Default installs will have
    a common base (bigger or smaller, depending on the flavour of the distribution), and may have options for conveniently installing bunches
    of software.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Ben Bacarisse on Fri Feb 9 10:32:23 2024
    XPost: comp.unix.programmer

    On 09/02/2024 01:30, Ben Bacarisse wrote:
    bart <bc@freeuk.com> writes:

    On 09/02/2024 00:58, Ben Bacarisse wrote:
    bart <bc@freeuk.com> writes:

    BTW if gcc /isn't/ installed, do you still get a bunch of standard C
    headers in /usr/include? If so, what do you have to select to not install
    them?
    I can't say what happens without specifics. There are hundreds of Linux
    distributions.
    This is exactly why I was curious about what prompted Malcolm's
    confident statement about what comes with "Linux" -- it runs contrary to
    my limited experience.

    Not to mine.

    I thought you said you can't remember either? Frankly, I don't think
    you can recall enough of your experience to be able to say this
    honestly.

    "In MY limited experience" - the bits I can remember - whenever I needed
    to compile any C code on Linux, then gcc was always there.

    But if you can recall, then tell me -- what distributions
    install gcc and the other development tools by default? Malcolm can't
    help, but maybe you can.

    All the various Linuxes I used on RPi1 and RPi4, 32-bit and the odd
    64-bit, had gcc. I know because that was the primary reason for using
    those boards.

    Those OSes were downloaded in one lump, or sometimes came as plug-in SD
    cards.

    The same for all the various Linuxes I used on my PC via VirtualBox. The
    same with the pre-installed OS on a Linux notebook I once tried.

    Further back, I can't remember if the Linuxes I used to install on my PC
    via CDs, which were done a package at a time, definitely had gcc since I
    can't remember if I ever tried to compile C code on them. (I had enough
    trouble just doing the basics, like a working screen and keyboard.)

    But don't ask me exactly which distributions they are; to me Linux is
    Linux and they are all a blur.

    So, /this/ is my limited experience. Why are you trying to accuse me of
    pulling a fast one?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to David Brown on Fri Feb 9 11:44:13 2024
    XPost: comp.unix.programmer

    On 05/02/2024 12:02, David Brown wrote:
    On 05/02/2024 01:07, Malcolm McLean wrote:
    On 04/02/2024 22:46, Lawrence D'Oliveiro wrote:
    On Sun, 4 Feb 2024 14:01:08 +0000, bart wrote:

    But it does seem as though Unix was a breeding ground for multitudinous
    developer tools. Plus there was little demarcation between user
    commands, C development tools, C libraries and OS.

    Somebody who's used to that environment is surely going to have trouble
    on an OS like MSDOS or Windows where they have to start from nothing.
    Even if most of the tools are now free.

    Yet it seems like even someone like you, who is supposed to be “used to”
    Windows rather than *nix, still has the same trouble. So maybe it’s not
    about being “used to” *nix at all, there really is something inherent in
    the fundamental design of that environment that makes development work
    easier.
    On Windows you can't assume that the end user will be interested in
    development or have any development tools available. Or that he'll be
    able to do anything other than the most basic installation. It's a
    consumer platform.

    It /is/ a consumer platform, yes.  And because it has no standard ways
    to build software, and no one (approximately) using it wants to build software on it, the norm is to distribute code in binary form for
    Windows.  That works out fine for almost all Windows users.  That
    includes libraries - even C programmers on Windows don't want to build "libjpeg" or whatever, they want a DLL.

    20+ years ago I depended on an Intel JPEG library called IJL15.DLL, a
    32-bit binary of 370KB.

    Then they decided to withdraw it; you couldn't find binaries anywhere,
    although I had copies. Its replacement was buried inside a massive 75MB developer's package (at a time when modems worked at 14.4Kbaud), and I
    think had to be built from source.

    I remember that it was totally impractical and highly inconvenient. And
    later on I needed a 64-bit version, which is when I started looking at libraries like LibJPEG.

    As it turned out, these libraries were over-the-top anyway. (The 64-bit JPEG-load libraries I use now are about 20KB, and JPEG-save about 15KB)

    And thus there is much less effort put into making projects easy to
    build on Windows.  People on Windows fall mostly into two categories -
    those that neither know nor care about building software and want ready-to-use binaries (that's almost all of them), and people who do development work and are willing and able to invest time and effort
    reading the readmes and install.txt files, looking at the structure of
    the code, running the makefiles or CMakes, importing the project into
    their favourite IDE, and whatever else.

    I literally ran my fingers through my hair and groaned aloud just
    reading about makefiles and CMake.

    It's not that Linux software developers go out of their way to annoy
    Windows developers (well, /some/ do, but not many).  But on Linux, and widening to other modern *nix systems, there are standard ways to build software.  You know the people building it will have make, and gcc (or a compatible compiler with many of the same extensions and flags, like
    clang or icc), and development versions of countless libraries either installed or a quick apt-get away.  On Windows, however, they might have MSVC, or cygwin, or mingw64, or TDM gcc, or lccwin, or tcc, or Borland
    C++ builder.  They might have a "make", but it could be MS's more
    limited "nmake" version.

    Windows works on binaries. There is a format called 'DLL' that will work
    on any Windows OS and for any language that has a suitable FFI.

    At worst there will be 32-bit and 64-bit versions, but these days you
    only need one.

    Even if a developer wanted to make it available as C source code only,
    then that is easy: just write it in portable C code. Here, nano.c is
    such a library (to decode jpeg) with an accompanying test function in
    main():

    c:\cx>mcc nano.c
    Compiling nano.c to nano.exe

    c:\cx>wsl
    ...
    root@XXX:/mnt/c/cx# gcc nano.c

    And it runs on both:

    root@XXX:/mnt/c/cx# ./a.out /mnt/c/jpeg/card2.jpg
    root@DESKTOP-11:/mnt/c/cx# ls *.ppm
    nanojpeg_out.ppm
    ...

    c:\cx>nano \jpeg\card2.jpg
    c:\cx>dir *.ppm
    09/02/2024 11:31 6,220,817 nanojpeg_out.ppm

    So, what's the problem? This just needs ANY C compiler, and doesn't need
    make or anything else:

    c:\cx>tcc nano.c
    c:\cx>mcc nano.c
    Compiling nano.c to nano.exe
    c:\cx>gcc nano.c
    c:\cx>\dm\bin\dmc nano.c

    Why is everyone so intent on making this harder than necessary? Forget
    CYGWIN, MSYS2, WSL, mingw64, or make. Either supply one DLL (that will
    work on every 64-bit Windows machine in the world), or supply portable C
    code.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to bart on Fri Feb 9 13:16:10 2024
    XPost: comp.unix.programmer

    bart <bc@freeuk.com> writes:

    On 09/02/2024 01:30, Ben Bacarisse wrote:
    bart <bc@freeuk.com> writes:

    On 09/02/2024 00:58, Ben Bacarisse wrote:
    bart <bc@freeuk.com> writes:

    BTW if gcc /isn't/ installed, do you still get a bunch of standard C
    headers in /usr/include? If so, what do you have to select to not install
    them?
    I can't say what happens without specifics. There are hundreds of Linux
    distributions.
    This is exactly why I was curious about what prompted Malcolm's
    confident statement about what comes with "Linux" -- it runs contrary to
    my limited experience.

    Not to mine.
    I thought you said you can't remember either? Frankly, I don't think
    you can recall enough of your experience to be able to say this
    honestly.

    "In MY limited experience" - the bits I can remember - whenever I needed to compile any C code on Linux, then gcc was always there.

    That's not the point. The question is whether it got there by default.
    That's not what most people are saying.

    But if you can recall, then tell me -- what distributions
    install gcc and the other development tools by default? Malcolm can't
    help, but maybe you can.

    All the various Linuxes I used on RPi1 and RPi4, 32-bit and the odd 64-bit, had gcc. I know because that was the primary reason for using those
    boards.

    That's probably not what Malcolm was using, but they may well install
    gcc by default.

    Those OSes were downloaded in one lump, or sometimes came as plug-in SD cards.

    The same for all the various Linuxes I used on my PC via VirtualBox.

    Which ones installed gcc by default?

    The same with the pre-installed OS on a Linux notebook I once tried.

    How can that be relevant to what is installed by default? Someone else
    chose what to install.

    Further back, I can't remember if the Linuxes I used to install on my PC
    via CDs, which were done a package at a time, definitely had gcc since I can't remember if I ever tried to compile C code on them. (I had enough trouble just doing the basics, like a working screen and keyboard.)

    But don't ask me exactly which distributions they are; to me Linux is Linux and they are all a blur.

    Yes, that's why I wondered how you could be so sure that I was being
    unfair to Malcolm.

    So, /this/ is my limited experience. Why are you trying to accuse me of pulling a fast one?

    No. I was taking you at your word -- that you did not remember: "If
    those asked whether I wanted gcc added, then I really can't remember."

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Lawrence D'Oliveiro on Fri Feb 9 15:49:45 2024
    XPost: comp.unix.programmer

    On 09/02/2024 02:07, Lawrence D'Oliveiro wrote:
    On Fri, 9 Feb 2024 01:14:57 +0000, bart wrote:

    I thought the big deal with Linux compared with Windows was
    that it came with compilers, headers and libraries at least for C.

    Now that advantage may be just by chance?

    Even when you add these for Windows, you do seem to have trouble building
    C programs though, don’t you, as evidenced by your past complaints? So clearly there must be a bit more to it than that.


    Linux (by which I mean all such Unix-related OSes) is a C machine.

    Not only is it all implemented in C, but it won't let you forget that,
    with little demarcation between the OS, C libraries, C headers, C
    compilers, and a myriad assorted routines that are all amazingly chummy.

    Some people here seem to think that POSIX is an essential part of C, yet windows.h is not considered part of C on Windows.

    So, having a C compiler on Windows is not enough, if an application has
    an over-reliance on that ecosystem as part of its build process.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to bart on Fri Feb 9 17:13:14 2024
    XPost: comp.unix.programmer

    On 2024-02-09, bart <bc@freeuk.com> wrote:
    On 09/02/2024 02:07, Lawrence D'Oliveiro wrote:
    On Fri, 9 Feb 2024 01:14:57 +0000, bart wrote:

    I thought the big deal with Linux compared with Windows was
    that it came with compilers, headers and libraries at least for C.

    Now that advantage may be just by chance?

    Even when you add these for Windows, you do seem to have trouble building
    C programs though, don’t you, as evidenced by your past complaints? So
    clearly there must be a bit more to it than that.

    Linux (by which I mean all such Unix-related OSes) is a C machine.

    Not only is it all implemented in C, but it won't let you forget that,
    with little demarcation between the OS, C libraries, C headers, C
    compilers, and a myriad assorted routines that are all amazingly chummy.

    Windows is built on C. Its APIs are expressed in C, with function
    prototypes and data structures.

    I have worked in several Windows shops as a C++ developer. The culture
    was just as steeped in C and C++ as development on Unix. The feeling
    was almost as if Microsoft had invented C and Unix didn't exist.

    Modern Windows now even has the equivalent of a C library: the UCRT
    (universal C run time), a public library, okay to use by applications,
    which has your malloc, printf and all that.

    Some people here seem to think that POSIX is an essential part of C, yet windows.h is not considered part of C on Windows.

    Anyone who claims that POSIX is part of C in comp.lang.c will be
    corrected, and if they persist, ultimately ridiculed and ostracized. :)

    POSIX has a relationship to ISO C in that it includes the standard by reference. Everything in C is in POSIX. POSIX redundantly documents
    numerous C library functions. Or not always redundantly; here and there
    it has additional requirements. POSIX also extends the standard C
    headers with its own functions. Some features of POSIX are accessible
    via <stdio.h> or <stdlib.h>.
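
    As a small illustration (an assumed example, not from the post):
    fileno() is visible via <stdio.h> on POSIX systems, but it is POSIX,
    not ISO C:

        #include <stdio.h>

        int main(void)
        {
            FILE *f = fopen("demo.txt", "w");       /* ISO C */
            if (f == NULL)
                return 1;
            /* fileno() is the POSIX extension reached via <stdio.h> */
            printf("descriptor: %d\n", fileno(f));
            fclose(f);
            return 0;
        }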

    C is an essential part of POSIX, but the reverse isn't true.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to bart on Fri Feb 9 18:25:11 2024
    XPost: comp.unix.programmer

    bart <bc@freeuk.com> writes:
    On 09/02/2024 02:07, Lawrence D'Oliveiro wrote:
    On Fri, 9 Feb 2024 01:14:57 +0000, bart wrote:

    I thought the big deal with Linux compared with Windows was
    that it came with compilers, headers and libraries at least for C.

    Now that advantage may be just by chance?

    Even when you add these for Windows, you do seem to have trouble building
    C programs though, don’t you, as evidenced by your past complaints? So
    clearly there must be a bit more to it than that.


    Linux (by which I mean all such Unix-related OSes) is a C machine.

    The linux operating system is written in a mix of assembler, C (and now Rust) and supports a large and varied set of processor architectures.

    The linux desktop and server applications are written in a mix of languages, including C, C++, Python, Java, APL, ADA, COBOL, Fortran, Haskell, Pascal,
    C#, D, and a host of other languages for which linux development
    environments exist.

    Some people here seem to think that POSIX is an essential part of C,

    I don't recall anyone other than you thinking that.

    POSIX is an essential component in the applications I write,
    just like libgmp or libreadline. C or C++.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Kaz Kylheku on Fri Feb 9 18:24:18 2024
    XPost: comp.unix.programmer

    On 09/02/2024 17:13, Kaz Kylheku wrote:
    On 2024-02-09, bart <bc@freeuk.com> wrote:
    On 09/02/2024 02:07, Lawrence D'Oliveiro wrote:
    On Fri, 9 Feb 2024 01:14:57 +0000, bart wrote:

    I thought the big deal with Linux compared with Windows was
    that it came with compilers, headers and libraries at least for C.

    Now that advantage may be just by chance?

    Even when you add these for Windows, you do seem to have trouble building
    C programs though, don’t you, as evidenced by your past complaints? So
    clearly there must be a bit more to it than that.

    Linux (by which I mean all such Unix-related OSes) is a C machine.

    Not only is it all implemented in C, but it won't let you forget that,
    with little demarcation between the OS, C libraries, C headers, C
    compilers, and a myriad assorted routines that are all amazingly chummy.

    Windows is built on C. Its APIs are expressed in C, with function
    prototypes and data structures.

    But the demarcation is better. The C-ness doesn't leak through, except
    via the OS APIs.

    There are no C standard headers provided (nor in a centralised location;
    if a C compiler is installed, they are local to the compiler).

    It doesn't enforce C's case-sensitivity in the file system and shell
    programs.

    At a program's entry point, you won't find 'argn', 'argv' conveniently
    pushed onto the stack.
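
    For comparison, a minimal sketch (an assumed example) of how a native
    Windows program recovers its arguments through the API instead:

        #include <windows.h>
        #include <shellapi.h>   /* CommandLineToArgvW; link with shell32 */
        #include <wchar.h>

        int main(void)
        {
            int argc;
            /* parse the process command line into an argv-style array */
            LPWSTR *argv = CommandLineToArgvW(GetCommandLineW(), &argc);
            if (argv == NULL)
                return 1;
            for (int i = 0; i < argc; i++)
                wprintf(L"%d: %ls\n", i, argv[i]);
            LocalFree(argv);
            return 0;
        }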

    Even if you look at the API, it uses a set of precisely defined Windows
    data types which wrap the raw C types. (BTW that API is now marked as
    being C++.)

    And of course you don't routinely have C compilers, assemblers and
    linkers provided.

    With Linux, everything screams 'C'.


    I have worked in several Windows shops as a C++ developer. The culture
    was just as steeped in C and C++ as development on Unix.

    It would be if they used C++.

    The feeling
    was almost as if Microsoft had invented C and Unix didn't exist.

    There's Windows. There's Linux. And then there are MS development tools.

    People tend to mix up those tools with Windows. Note that MS also have languages such as VB, F# and C#, all working on top of CLI/CIL
    (whichever it is, perhaps both), and something called .NET. I'm not sure
    they even acknowledge the existence of C anymore.


    Modern Windows now even has the equivalent of a C library: the UCRT (universal C run time), a public library, okay to use by applications,
    which has your malloc, printf and all that.

    It's now called MSVCRT.DLL. I've used that since the 90s, simply because
    it was simpler than WinAPI. I was only vaguely aware then that it was
    also to do with C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to bart on Fri Feb 9 18:42:15 2024
    XPost: comp.unix.programmer

    On 2024-02-09, bart <bc@freeuk.com> wrote:
    On 09/02/2024 17:13, Kaz Kylheku wrote:
    The feeling
    was almost as if Microsoft had invented C and Unix didn't exist.

    There's Windows. There's Linux. And then there are MS development tools.

    People tend to mix up those tools with Windows.

    Yes, just like people tend to post to newsgroups saying that
    everything screams "C" in Linux.

    Note that MS also have
    languages such as VB, F# and C#, all working on top of CLI/CIL
    (whichever it is, perhaps both), and something called .NET. I'm not sure
    they even acknowledge the existence of C anymore.

    Modern Windows now even has the equivalent of a C library: the UCRT
    (universal C run time), a public library, okay to use by applications,
    which has your malloc, printf and all that.

    It's now called MSVCRT.DLL.

    Umm, no.

    I've used that since the 90s, simply because
    it was simpler than WinAPI. I was only vaguely aware then that it was
    also to do with C.

    You didn't use /that/ since the 90s. UCRT is a new thing that ships with
    Windows 10, and is available as an add-on as far back as Windows 7, and
    that's it.

    MSVCRT.DLL is not documented for public use; when you link to it,
    you're sticking a fork into the proverbial toaster. UCRT is different.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Kaz Kylheku on Fri Feb 9 20:41:10 2024
    XPost: comp.unix.programmer

    On 09/02/2024 18:42, Kaz Kylheku wrote:
    On 2024-02-09, bart <bc@freeuk.com> wrote:
    On 09/02/2024 17:13, Kaz Kylheku wrote:
    The feeling
    was almost as if Microsoft had invented C and Unix didn't exist.

    There's Windows. There's Linux. And then there are MS development tools.

    People tend to mix up those tools with Windows.

    Yes, just like people tend to post to newsgroups saying that
    everything screams "C" in Linux.

    Note that MS also have
    languages such as VB, F# and C#, all working on top of CLI/CIL
    (whichever it is, perhaps both), and something called .NET. I'm not sure
    they even acknowledge the existence of C anymore.

    Modern Windows now even has the equivalent of a C library: the UCRT
    (universal C run time), a public library, okay to use by applications,
    which has your malloc, printf and all that.

    It's now called MSVCRT.DLL.

    Umm, no.

    I've used that since the 90s, simply because
    it was simpler than WinAPI. I was only vaguely aware then that it was
    also to do with C.

    You didn't use /that/ since the 90s. UCRT is a new thing that ships with
    Windows 10, and is available as an add-on as far back as Windows 7, and
    that's it.

    MSVCRT.DLL is not documented for public use; when you link to it,
    you're sticking a fork into the proverbial toaster. UCRT is different.


    What exactly do you mean by UCRT; ucrtbase.dll?

    That's missing a few useful things, like '_getmainargs' (used to get
    argn, argv for main()), and obscure functions like 'printf'.

    Meanwhile, if I look at programs such as gcc.exe, as.exe, ld.exe, not
    only do THEY import msvcrt.dll, but the EXEs produced by gcc.exe do so too.

    So they didn't get the memo.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to bart on Fri Feb 9 21:03:00 2024
    XPost: comp.unix.programmer

    On Fri, 9 Feb 2024 11:44:13 +0000, bart wrote:

    Then [Intel] decided to withdraw it; you couldn't find binaries
    anywhere, although I had copies. Its replacement was buried inside a
    massive 75MB developer's package (at a time when modems worked at
    14.4Kbaud), and I think had to be built from source.

    I remember that it was totally impractical and highly inconvenient.

    Why not extract it into its own source project and just build that?

    And thus there is much less effort put into making projects easy to
    build on Windows.

    Ironic that your example is from Intel, of all people. If they cannot
    support Windows properly, who can?

    Windows works on binaries. There is a format called 'DLL' that will work
    on any Windows OS and for any language that has a suitable FFI.

    But DLLs have no versioning mechanism, do they? Hence, “DLL Hell”.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Scott Lurndal on Fri Feb 9 20:55:19 2024
    XPost: comp.unix.programmer

    On 09/02/2024 18:25, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 09/02/2024 02:07, Lawrence D'Oliveiro wrote:
    On Fri, 9 Feb 2024 01:14:57 +0000, bart wrote:

    I thought the big deal with Linux compared with Windows was
    that it came with compilers, headers and libraries at least for C.

    Now that advantage may be just by chance?

    Even when you add these for Windows, you do seem to have trouble building
    C programs though, don’t you, as evidenced by your past complaints? So
    clearly there must be a bit more to it than that.


    Linux (by which I mean all such Unix-related OSes) is a C machine.

    The linux operating system is written in a mix of assembler, C (and now Rust) and supports a large and varied set of processor architectures.

    The linux desktop and server applications are written in a mix of languages, including C, C++, Python, Java, APL, ADA, COBOL, Fortran, Haskell, Pascal, C#, D, and a host of other languages for which linux development
    environments exist.

    Some people here seem to think that POSIX is an essential part of C,

    I don't recall anyone other than you thinking that.

    It's mentioned a LOT. Half the open source programs I try seem to use
    calls like 'open' instead of 'fopen', suggesting that the author seemed
    to think such a function is standard C, or that it can be used as though it
    was standard.

    While here (https://en.wikipedia.org/wiki/C_POSIX_library):

    "The C POSIX library is a specification of a C standard library for
    POSIX systems. It was developed at the same time as the ANSI C standard."

    The table that follows is a list of headers with lots of standard C
    headers included.

    That looks quite chummy to me, and even nepotic.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to bart on Fri Feb 9 21:04:58 2024
    XPost: comp.unix.programmer

    On Fri, 9 Feb 2024 15:49:45 +0000, bart wrote:

    Some people here seem to think that POSIX is an essential part of C, yet windows.h is not considered part of C on Windows.

    Precisely. C was born on Unix. “Unix®” as such may be dead, but its successor, POSIX, can be seen as basically the greatest run-time library
    for C.

    If Windows NT had not been created by a Unix hater, you would be having
    much less trouble with it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to bart on Fri Feb 9 21:06:10 2024
    XPost: comp.unix.programmer

    On Fri, 9 Feb 2024 20:55:19 +0000, bart wrote:

    That looks quite chummy to me, and even nepotic.

    It’s your choice to be on the outside, looking in.

    Come ... join us ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to bart on Fri Feb 9 22:09:27 2024
    XPost: comp.unix.programmer

    On 2024-02-09, bart <bc@freeuk.com> wrote:
    It's mentioned a LOT. Half the open source programs I try seem to use
    calls like 'open' instead of 'fopen', suggesting that the author seemed
    to think such a function is standard C, or that it can be used as though it
    was standard.

    Open source programs tend to use POSIX functions because POSIX is
    associated with open source.

    The most successful open source operating systems we have are all
    POSIX-based. Open source programs target open source platforms.
    It's a no-brainer.

    If you need functionality beyond what is in ISO C, POSIX is more
    portable than the alternatives.

    For instance, suppose we would like to write a cross-platform program
    that works with serial ports. It has to control baud rate, framing, and
    whether hardware handshaking is enabled and all that.

    If we write that using the POSIX API in <termios.h>, it can easily
    be made to run on Linux, Windows, Mac, BSD, ...

    If we use the Win32 API for serial control, it will run on Windows.
    Maybe on Linux using Wine, and ReactOS, ...

    In the TXR project, I use <termios.h> to put the TTY in raw mode for the editing features in the REPL, together with ANSI escape sequences.
    This works fine on Windows: there is a termios implementation in Cygwin,
    as well as a translation layer that converts ANSI escapes to
    Win32 Console API calls. I can ship this stand-alone without Cygwin and
    it runs fine in a cmd.exe Window.

    I'd be silly not to use POSIX, because then I would have to write
    that separately for Windows.

    Among the things I can do in POSIX (including on Windows) is use poll
    for all file descriptors, including sockets.

    I can open a FILE * stream on a socket using fdopen(fd, "...mode...").
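
    A sketch of that last point (host and port are hypothetical; error
    reporting trimmed):

        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <netdb.h>
        #include <sys/socket.h>

        /* connect a TCP socket and wrap it in a stdio stream */
        FILE *open_tcp_stream(const char *host, const char *port)
        {
            struct addrinfo hints, *res;
            memset(&hints, 0, sizeof hints);
            hints.ai_socktype = SOCK_STREAM;
            if (getaddrinfo(host, port, &hints, &res) != 0)
                return NULL;

            int fd = socket(res->ai_family, res->ai_socktype,
                            res->ai_protocol);
            if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
                if (fd >= 0) close(fd);
                freeaddrinfo(res);
                return NULL;
            }
            freeaddrinfo(res);
            return fdopen(fd, "r+");  /* fprintf/fgets now work on it */
        }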

    POSIX is the most widely implemented operating system interface
    standard.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to bart on Fri Feb 9 21:56:50 2024
    XPost: comp.unix.programmer

    On 2024-02-09, bart <bc@freeuk.com> wrote:
    MSVCRT.DLL is not documented for public use; when you link to it,
    you're sticking a fork into the proverbial toaster. UCRT is different.


    What exactly do you mean by UCRT; ucrtbase.dll?

    That's missing a few useful things, like '_getmainargs' (used to get
    argn, argv for main()), and obscure functions like 'printf'.

    I believe printf is in there.

    _getmainargs isn't; that's in a VC run time library.

    Meanwhile, if I look at programs such as gcc.exe, as.exe, ld.exe, not
    only do THEY import msvcrt.dll, but the EXEs produced by gcc.exe do so too.

    Umm, no; you must be talking specifically about the MinGW ones.

    So they didn't get the memo.

    They got the memo. The issue is that even though MSVCRT.DLL is
    undocumented, it constitutes a "system library". This is important.

    The GNU General Public License prohibits programs from being linked to
    proprietary code --- but it has an exception for system libraries
    (libraries that are part of the target platform where the program runs).

    Using MSVCRT.DLL is like sticking a fork in the toaster, but all those
    programs being linked to MSVCRT.DLL means the GPL isn't violated.

    Compilers under Cygwin don't link to MSVCRT.DLL --- including the ones
    in the Cygwin MingW package. (Yes, Cygwin has a package of MinGW
    compilers. If you have Cygwin, you just install that, and then you can
    build MinGW programs. The built programs probably still link to
    MSVCRT.DLL as far as I know. Cygwin itself uses this MinGW compiler
    package for compiling some of its components, like the setup.exe program
    and I think the cygwin1.dll also.)

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Kaz Kylheku on Fri Feb 9 22:43:21 2024
    XPost: comp.unix.programmer

    On 09/02/2024 21:56, Kaz Kylheku wrote:
    On 2024-02-09, bart <bc@freeuk.com> wrote:
    MSVCRT.DLL is not documented for public use; when you link to it,
    you're sticking a fork into the proverbial toaster. UCRT is different.


    What exactly do you mean by UCRT; ucrtbase.dll?

    That's missing a few useful things, like '_getmainargs' (used to get
    argn, argv for main()), and obscure functions like 'printf'.

    I believe printf is in there.

    Not under my ucrtbase.dll if that's the right file. If it was there, it
    would go somewhere in here:

    2332 00031F90 204688 Fun pow
    2333 00033630 210480 Fun powf
    2334 00058740 362304 Fun putc
    2335 00080BE0 527328 Fun putchar
    2336 00059660 366176 Fun puts

    I suspect this is used by MS programs which may have their own wrappers
    around 'printf'.


    _getmainargs isn't; that's in a VC run time library.

    Meanwhile, if I look at programs such as gcc.exe, as.exe, ld.exe, not
    only do THEY import msvcrt.dll, but the EXEs produced by gcc.exe do so too.

    Umm, no; you must be talking specifically about the MinGW ones.

    I'm talking about lots of binaries, for example:

    raylib.dll
    opengl.dll
    sdl2.dll
    sqlite3_32.dll
    s7.exe
    tcc.exe
    nim.exe
    nasm.exe

    So they didn't get the memo.

    They got the memo. The issue is that even though MSVCRT.DLL is
    undocumented, it constitutes a "system library". This is important.

    If they got rid of it, half the programs that run under Windows would
    stop working.

    The GNU General Public License prohibits programs from being linked to
    proprietary code --- but it has an exception for system libraries
    (libraries that are part of the target platform where the program runs).

    Using MSVCRT.DLL is like sticking a fork in the toaster, but all those programs being linked to MSVCRT.DLL means the GPL isn't violated.

    Compilers under Cygwin don't link to MSVCRT.DLL --- including the ones
    in the Cygwin MingW package. (Yes, Cygwin has a package of MinGW
    compilers. If you have Cygwin, you just install that, and then you can
    build MinGW programs. The built programs probably still link to
    MSVCRT.DLL as far as I know. Cygwin itself uses this MinGW compiler
    package for compiling some of its components, like the setup.exe program
    and I think the cygwin1.dll also.)


    I've no idea what the point of CYGWIN is. Never have done either. What
    does it bring to the table? Presumably to make some programs that
    originate on Linux feel more at home, instead of making the effort to
    make them more portable.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to bart on Fri Feb 9 23:12:11 2024
    XPost: comp.unix.programmer

    On 2024-02-09, bart <bc@freeuk.com> wrote:
    On 09/02/2024 21:56, Kaz Kylheku wrote:
    On 2024-02-09, bart <bc@freeuk.com> wrote:
    MSVCRT.DLL is not documented for public use; when you link to it,
    you're sticking a fork into the proverbial toaster. UCRT is different.

    What exactly do you mean by UCRT; ucrtbase.dll?

    That's missing a few useful things, like '_getmainargs' (used to get
    argn, argv for main()), and obscure functions like 'printf'.

    I believe printf is in there.

    Not under my ucrtbase.dll if that's the right file. If it was there, it
    would go somewhere in here:

    2332 00031F90 204688 Fun pow
    2333 00033630 210480 Fun powf
    2334 00058740 362304 Fun putc
    2335 00080BE0 527328 Fun putchar
    2336 00059660 366176 Fun puts

    Hmm. Lots of hits for identifiers containing printf:

    $ strings ucrtbase.dll | grep printf
    __conio_common_vcprintf
    __conio_common_vcprintf_p
    __conio_common_vcprintf_s
    __conio_common_vcwprintf
    [ SNIP ]
    __stdio_common_vfprintf
    __stdio_common_vfprintf_p
    [ SNIP ]
    __stdio_common_vfprintf_s
    __stdio_common_vfwprintf
    __stdio_common_vfwprintf_p
    [ SNIP ]
    _get_printf_count_output
    [ SNIP ]
    _o___conio_common_vcprintf
    [ SNIP]
    _o___stdio_common_vfwprintf_p
    [ SNIP]
    _set_printf_count_output

    I suspect this is used by MS programs which may have their own wrappers around 'printf'.

    This seems to be documented:

    "In Visual Studio 2015 The printf and scanf family of functions were
    declared as inline and moved to the <stdio.h> and <conio.h> headers."

    https://learn.microsoft.com/en-us/cpp/c-runtime-library/format-specification-syntax-printf-and-wprintf-functions?view=msvc-170

    I'm guessing that these inline functions call one of those symbols
    found above, like maybe __stdio_common_vfprintf?
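
    If that guess is right, the shape would be roughly the following
    portable sketch; the real UCRT header differs, and the extra options
    and locale arguments it passes are omitted here:

        #include <stdarg.h>
        #include <stdio.h>

        /* how an inline printf can forward everything to one common
           vfprintf-style entry point (vfprintf here is a stand-in for
           the UCRT's __stdio_common_vfprintf) */
        static inline int my_printf(const char *fmt, ...)
        {
            va_list ap;
            va_start(ap, fmt);
            int n = vfprintf(stdout, fmt, ap);
            va_end(ap);
            return n;
        }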

    only do THEY import msvcrt.dll, but the EXEs produced by gcc.exe do so too.
    Umm, no; you must be talking specifically about the MinGW ones.

    I'm talking about lots of binaries, for example:

    raylib.dll
    opengl.dll
    sdl2.dll
    sqlite3_32.dll
    s7.exe
    tcc.exe
    nim.exe
    nasm.exe

    So they didn't get the memo.

    They got the memo. The issue is that even though MSVCRT.DLL is
    undocumented, it constitutes a "system library". This is important.

    If they got rid of it, half the programs that run under Windows would
    stop working.

    I suspect, mainly just these fledgling ports of unix cruft to Windows.

    Most of the Windows world wouldn't notice.

    I've no idea what the point of CYGWIN is. Never have done either. What
    does it bring to the table? Presumably to make some programs that
    originate on Linux feel more at home, instead of making the effort to
    make them more portable.

    The effort is significant. Many things have to be coded twice.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Kaz Kylheku on Fri Feb 9 23:47:06 2024
    XPost: comp.unix.programmer

    On 09/02/2024 23:12, Kaz Kylheku wrote:
    On 2024-02-09, bart <bc@freeuk.com> wrote:

    I've no idea what the point of CYGWIN is. Never have done either. What
    does it bring to the table? Presumably to make some programs that
    originate on Linux feel more at home, instead of making the effort to
    make them more portable.

    The effort is significant. Many things have to be coded twice.


    Well, you need to support both OSes.

    My stuff uses one module in the language library which is OS-specific.

    At one point I had three versions for three OS targets. It is
    effectively a mini cross-platform wrapper around some OS functions.

    So for example, to get the address of a function in a shared library
    given an instance handle to the library, the Windows version is:

    export func os_getdllprocaddr(int hinst, ichar name)ref void=
        GetProcAddress(cast(hinst), name)
    end

    The Linux version is this:

    export func os_getdllprocaddr(int hlib, ichar name)ref void=
        dlsym(cast(int(hlib)), name)
    end

    (The third one was OS-neutral. Neither GetProcAddress nor dlsym were
    available, but to be able to run some programs, the addresses of 20 or
    so common functions in the C library were hardcoded in a table.)

    It is then just a question of including the right OS-specific module.

    If rendered to C source code, there would be one .c version for Linux,
    and one for Windows, since I dislike using conditional blocks.
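
    As an illustration, the two rendered files might look something like
    this in C (a sketch; the intptr_t handle type and the file names are
    my assumptions, not the actual generated code):

    /* os_windows.c: hypothetical Windows rendering */
    #include <windows.h>
    #include <stdint.h>

    void *os_getdllprocaddr(intptr_t hinst, const char *name) {
        /* FARPROC to void *: routine practice on Windows */
        return (void *)GetProcAddress((HMODULE)hinst, name);
    }

    /* os_linux.c: hypothetical Linux rendering */
    #include <dlfcn.h>
    #include <stdint.h>

    void *os_getdllprocaddr(intptr_t hlib, const char *name) {
        return dlsym((void *)hlib, name);
    }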

    Or sometimes (using that third version) a single .c file worked for
    both, but with limited functionality. It was for demos.


    As I understand your view, you want the Linux program to just call
    dlsym(), and need all these subsystems on Windows to make it appear as
    though that was natively available.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Keith Thompson on Fri Feb 9 23:53:34 2024
    XPost: comp.unix.programmer

    On 09/02/2024 23:41, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 09/02/2024 21:56, Kaz Kylheku wrote:
    On 2024-02-09, bart <bc@freeuk.com> wrote:
    [...]
    So they didn't get the memo.
    They got the memo. The issue is that even though MSVCRT.DLL is
    undocumented, it constitutes a "system library". This is important.

    If they got rid of it, half the programs that run under Windows would
    stop working.

    Who suggested getting rid of it?

    [...]

    I've no idea what the point of CYGWIN is, and never have had.

    Are you asking?

    What
    does it bring to the table? Presumably to make some programs that
    originate on Linux feel more at home, instead of making the effort to
    make them more portable.

    It provides an environment, running under Windows, that resembles a
    typical Linux desktop environment. I use it every day myself, because
    that's a valuable thing for me. If it's not valuable for you, that's
    fine. (I also use WSL for some things.)

    I use a lot of programs that happen to rely on a POSIX interface.
    Cygwin lets those programs run under Windows.

    Run or build? If you have a binary, then having a way to run that on a different OS under some emulation layer is fair enough.

    Some people run Windows programs under Linux, or even Macs (although
    I've never had much luck with 'wine' myself).

    But it seems to be a lot more than that; I can't say more as it's so
    long since I tried to use it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to bart on Sat Feb 10 00:16:06 2024
    XPost: comp.unix.programmer

    On 2024-02-09, bart <bc@freeuk.com> wrote:
    On 09/02/2024 23:41, Keith Thompson wrote:
    I use a lot of programs that happen to rely on a POSIX interface.
    Cygwin lets those programs run under Windows.

    Run or build? If you have a binary, then having a way to run that on a different OS under some emulation layer is fair enough.

    It's both. Cygwin provides the build environment with all the tools,
    plus the run-time environment also.

    Cygwin isn't an emulation layer. It's a run-time library. The main
    one is cygwin1.dll. A Cygwin program uses that as its C run time,
    instead of ucrtbase.dll or msvcrt.dll.

    Nothing is "emulated". The library has various POSIX functions in it, and
    those are implemented in terms of the Win32 APIs.
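
    To make that concrete, here is a minimal sketch of the idea (my
    illustration, not Cygwin's actual code, which does vastly more:
    search paths, errno mapping, dlerror(), fork semantics, and so on):

    #include <windows.h>

    /* POSIX-style interfaces implemented in terms of Win32 calls. */
    void *my_dlopen(const char *path, int flags) {
        (void)flags;               /* RTLD_* flags ignored in this sketch */
        return (void *)LoadLibraryA(path);
    }

    void *my_dlsym(void *handle, const char *name) {
        return (void *)GetProcAddress((HMODULE)handle, name);
    }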

    Microsoft's libraries also have some features like this. For instance,
    Microsoft has POSIX-like functions (with leading underscores added)
    like _popen, and _dup and whatnot. Cygwin removes the underscores and
    does a much more thorough job of implementing many more functions,
    with greater fidelity.
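
    A trivial sketch of the naming difference (the #define mapping below
    is the common workaround, not anything Cygwin itself needs):

    #include <stdio.h>

    #ifdef _MSC_VER              /* Microsoft CRT: underscore names */
    #define popen  _popen
    #define pclose _pclose
    #endif
    /* Under Cygwin, or any POSIX system, popen/pclose exist as-is. */

    int show_greeting(void) {
        FILE *p = popen("echo hello", "r");
        if (!p) return -1;
        char line[128];
        if (fgets(line, sizeof line, p))
            fputs(line, stdout);
        return pclose(p);
    }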

    I maintain a very useful fork of the Cygwin run-time called Cygnal.

    Cygnal took a pretty low amount of effort to develop. From time to
    time I rebase it onto the current Cygwin.

    Cygnal claws back some of the "too POSIXY" conventions in the Cygwin
    run-time, making it suitable as a run-time library for "native" Windows
    programs. Cygnal programs won't surprise Windows users with foreign
    conventions, or certain broken behaviors due to being run outside of
    the Cygwin installation.

    All the changes are documented in a table presented on the home page:

    https://www.kylheku.com/cygnal/

    Some people run Windows programs under Linux, or even Macs (although
    I've never had much luck with 'wine' myself).

    Cygwin won't run Linux programs on Windows; it's not like Wine.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to bart on Sat Feb 10 00:28:43 2024
    XPost: comp.unix.programmer

    On 2024-02-09, bart <bc@freeuk.com> wrote:
    On 09/02/2024 23:12, Kaz Kylheku wrote:
    On 2024-02-09, bart <bc@freeuk.com> wrote:

    I've no idea what the point of CYGWIN is, and never have had. What
    does it bring to the table? Presumably to make some programs that
    originate on Linux feel more at home, instead of making the effort to
    make them more portable.

    The effort is significant. Many things have to be coded twice.


    Well, you need to support both OSes.

    My stuff uses one module in the language library which is OS-specific.

    At one point I had three versions for three OS targets. It is
    effectively a mini cross-platform wrapper around some OS functions.

    So for example, to get the address of a function in a shared library
    given an instance handle to the library, the Windows version is:

    export func os_getdllprocaddr(int hinst, ichar name)ref void=
        GetProcAddress(cast(hinst), name)
    end

    The Linux version is this:

    export func os_getdllprocaddr(int hlib, ichar name)ref void=
        dlsym(cast(int(hlib)), name)
    end

    With cygwin, the Linux and Windows version is:

    dlsym(handle, name);

    And that's just a small thing.

    How would you write serial control? Say the task is to open a serial
    port, set the baud rate to 115200 bps, 8N1 framing, 8-bit-clean raw
    mode, and send a few bytes?

    There is a POSIX way to do this, and so I have to write the code just
    once, if I have a POSIX library on all the target machines.
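
    The once-written POSIX version is roughly this (a sketch with minimal
    error handling; the function name is mine):

    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>

    /* Open a serial device, set 115200 bps, 8N1, raw 8-bit-clean mode,
       then write the given bytes. */
    int send_serial(const char *dev, const void *buf, size_t len) {
        int fd = open(dev, O_RDWR | O_NOCTTY);
        if (fd < 0) return -1;

        struct termios tio;
        tcgetattr(fd, &tio);
        cfsetispeed(&tio, B115200);
        cfsetospeed(&tio, B115200);
        tio.c_cflag &= ~(PARENB | CSTOPB | CSIZE);  /* no parity, 1 stop */
        tio.c_cflag |= CS8 | CLOCAL | CREAD;        /* 8 data bits */
        tio.c_lflag &= ~(ICANON | ECHO | ISIG);     /* raw input */
        tio.c_iflag &= ~(IXON | ICRNL | INLCR | ISTRIP); /* 8-bit clean */
        tio.c_oflag &= ~OPOST;                      /* no output mangling */
        tcsetattr(fd, TCSANOW, &tio);

        ssize_t n = write(fd, buf, len);
        close(fd);
        return n == (ssize_t)len ? 0 : -1;
    }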

    Or, say, recursively walk a directory tree. I write the POSIX way to do
    it, just once.
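
    Likewise the directory walk, written once against the standard nftw()
    interface (again a sketch):

    #define _XOPEN_SOURCE 500   /* expose nftw() */
    #include <ftw.h>
    #include <stdio.h>

    static int visit(const char *path, const struct stat *sb,
                     int type, struct FTW *ftwbuf) {
        (void)sb; (void)ftwbuf;
        printf("%s%s\n", path, type == FTW_D ? "/" : "");
        return 0;               /* nonzero would stop the walk */
    }

    int walk_tree(const char *root) {
        return nftw(root, visit, 16, FTW_PHYS);
    }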

    Create a hard link: link(this, that). That will create an NTFS hard
    link on Windows.

    As I understand your view, you want the Linux program to just call
    dlsym(), and need all these subsystems on Windows to make it appear as
    though that was natively available.

    Well, yes; just like a Visual C program can call printf, as if printf
    were natively available in Windows. Or call ShellExecuteEx() so that
    it magically looks like Windows can natively "launch" a .PDF or .HTML
    file in the same way it can run an .EXE.

    That's how abstraction works!

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Keith Thompson on Sat Feb 10 02:26:55 2024
    XPost: comp.unix.programmer

    On Fri, 09 Feb 2024 16:33:04 -0800, Keith Thompson wrote:

    Cygwin ... [is] a POSIX environment running under Windows.

    And in some ways, more capable than Microsoft’s native-based efforts along the same lines.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Lawrence D'Oliveiro on Sat Feb 10 02:47:06 2024
    XPost: comp.unix.programmer

    On 2024-02-10, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Fri, 09 Feb 2024 16:33:04 -0800, Keith Thompson wrote:

    Cygwin ... [is] a POSIX environment running under Windows.

    And in some ways, more capable than Microsoft’s native-based efforts along the same lines.

    Cygwin programs are just Win32 programs, and can be deployed without
    a Cygwin installation. They are just .exe files that depend on
    one or more DLLs from Cygwin, and not an entire OS layer.

    Cygwin programs can use Win32 functions: you can call fork() and
    CreateWindow() in one file.
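
    A small sketch of that mixing (illustrative only, but the shape is
    the point: both headers in one translation unit):

    #include <windows.h>   /* Win32 API */
    #include <unistd.h>    /* POSIX, supplied by cygwin1.dll */
    #include <stdio.h>

    int main(void) {
        /* Win32 call: */
        printf("uptime: %lu ms\n", (unsigned long)GetTickCount());

        /* POSIX call: */
        pid_t pid = fork();
        if (pid == 0) {
            printf("in the child, pid %d\n", (int)getpid());
            _exit(0);
        }
        return 0;
    }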

    There is also a project (maintained by me) which offers a patched
    cygwin1.dll that suppresses certain POSIX conventions in the way a
    program interacts with the environment. This is particularly suitable
    for deploying programs built under Cygwin as stand-alone Windows
    programs.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to bart on Sat Feb 10 15:24:02 2024
    XPost: comp.unix.programmer

    bart <bc@freeuk.com> writes:
    On 09/02/2024 18:25, Scott Lurndal wrote:
    [...] development environments exist.

    Some people here seem to think that POSIX is an essential part of C,

    I don't recall anyone other than you thinking that.

    It's mentioned a LOT. Half the open source programs I try seem to use
    calls like 'open' instead of 'fopen', suggesting the author thought
    such a function is standard C, or that it can be used as though it
    were standard.
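
    The distinction at issue (an illustrative sketch):

    #include <stdio.h>    /* fopen: ISO C, available everywhere */
    #include <fcntl.h>    /* open: POSIX, not part of ISO C */
    #include <unistd.h>   /* close: POSIX */

    void demo(void) {
        FILE *f = fopen("data.txt", "r");      /* portable standard C */
        int  fd = open("data.txt", O_RDONLY);  /* needs a POSIX layer */
        if (f) fclose(f);
        if (fd >= 0) close(fd);
    }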

    While here (https://en.wikipedia.org/wiki/C_POSIX_library):

    Never heard of the "C POSIX library". The wiki page is just a list
    of header files that are present in both C standards and POSIX standards.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Keith Thompson on Sat Feb 10 20:09:14 2024
    XPost: comp.unix.programmer

    On Fri, 09 Feb 2024 15:41:57 -0800
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    bart <bc@freeuk.com> writes:
    On 09/02/2024 21:56, Kaz Kylheku wrote:
    On 2024-02-09, bart <bc@freeuk.com> wrote:
    [...]
    So they didn't get the memo.
    They got the memo. The issue is that even though MSVCRT.DLL is
    undocumented, it constitutes a "system library". This is
    important.

    If they got rid of it, half the programs that run under Windows
    would stop working.

    Who suggested getting rid of it?

    [...]

    I've no idea what the point of CYGWIN is, and never have had.

    Are you asking?

    What
    does it bring to the table? Presumably to make some programs that
    originate on Linux feel more at home, instead of making the effort
    to make them more portable.

    It provides an environment, running under Windows, that resembles a
    typical Linux desktop environment. I use it every day myself, because
    that's a valuable thing for me. If it's not valuable for you, that's
    fine. (I also use WSL for some things.)

    I use a lot of programs that happen to rely on a POSIX interface.
    Cygwin lets those programs run under Windows. Modifying those
    programs to work more directly under Windows would be a tremendous
    amount of work that nobody is going to do.


    I haven't encountered a situation in which, given the choice, I would
    prefer Cygwin over MSYS2, for at least a decade. I still have to use
    Cygwin with the Altera/Intel Nios2 development environment (the
    non-Pro variant), because there is no alternative. I don't like it at
    all.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Kaz Kylheku on Sat Feb 10 19:51:51 2024
    XPost: comp.unix.programmer

    On Fri, 9 Feb 2024 21:56:50 -0000 (UTC)
    Kaz Kylheku <433-929-6894@kylheku.com> wrote:

    On 2024-02-09, bart <bc@freeuk.com> wrote:
    MSVCRT.DLL is not documented for public use; when you link to it,
    you're sticking a fork into the proverbial toaster. UCRT is
    different.

    What exactly do you mean by UCRT; ucrtbase.dll?

    That's missing a few useful things, like '_getmainargs' (used to
    get argc, argv for main()), and obscure functions like 'printf'.

    I believe printf is in there.

    _getmainargs isn't; that's in a VC run time library.

    Meanwhile, if I look at programs such as gcc.exe, as.exe, ld.exe,
    not only do THEY import msvcrt.dll, but the EXEs produced by
    gcc.exe do so too.

    Umm, no; you must be talking specifically about the MinGW ones.

    So they didn't get the memo.

    They got the memo. The issue is that even though MSVCRT.DLL is
    undocumented, it constitutes a "system library". This is important.

    The GNU General Public License prohibits programs from being linked
    to proprietary code --- but it has an exception for system libraries
    (libraries that are part of the target platform where the program
    runs).

    Using MSVCRT.DLL is like sticking a fork in the toaster, but all those programs being linked to MSVCRT.DLL means the GPL isn't violated.


    Today Mingw64 provides two separate gcc programming environments:
    one based on the old MSVCRT.DLL, probably from 2013, and the other
    based on UCRT. One installation package is called
    mingw-w64-x86_64-gcc and the other mingw-w64-ucrt-x86_64-gcc.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Lawrence D'Oliveiro on Sat Feb 10 20:17:26 2024
    XPost: comp.unix.programmer

    On Sat, 10 Feb 2024 02:26:55 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Fri, 09 Feb 2024 16:33:04 -0800, Keith Thompson wrote:

    Cygwin ... [is] a POSIX environment running under Windows.

    And in some ways, more capable than Microsoft’s native-based efforts
    along the same lines.

    Many basic things, like file I/O and process creation, are several
    times slower under Cygwin than under native Windows.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Michael S on Sat Feb 10 21:02:23 2024
    XPost: comp.unix.programmer

    On Sat, 10 Feb 2024 20:17:26 +0200, Michael S wrote:

    On Sat, 10 Feb 2024 02:26:55 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Fri, 09 Feb 2024 16:33:04 -0800, Keith Thompson wrote:

    Cygwin ... [is] a POSIX environment running under Windows.

    And in some ways, more capable than Microsoft’s native-based efforts
    along the same lines.

    Many basic things, like file I/O and process creation, are several
    times slower under Cygwin than under native Windows.

    Nevertheless, it manages to get some things working properly, like poll/
    select on pipes.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to bart on Mon Feb 12 02:18:55 2024
    On 29/01/2024 16:03, bart wrote:
    By 'Build System', I mean a convenient or automatic way to tell a
    compiler which source and library files comprise a project, one that
    doesn't involve extra dependencies.

    This proposal comes under 'convenient' rather than 'automatic'. (I did
    try an automatic scheme in the past, but that only worked for specially written projects.)

    This is a summary of the module scheme in my two main languages:

    https://github.com/sal55/langs/blob/master/Modules24.md

    The idea for this lite C version came from that, in listing the project
    files at the start of the now /only/ module submitted to the compiler.

    That C scheme needs revising, and could also use some of Thiago
    Adams' ideas for working with other compilers, like this:


    gcc xxx.c -oxxx && xxx # build with gcc

    mcc xxx # build with mcc

    With mcc, xxx is the lead module. With gcc etc., it's not clear what
    xxx is; it might need to be a fixed file 'build.c' configured for this
    project, as otherwise the name of the extra support module and the
    name of the project's lead module will clash.

    It needs a bit more thought...

    For the time being, I'm sticking with my '#pragma module' scheme,
    which seems to work fine for the projects I've tried it on:

    mcc lua
    mcc bbx
    mcc cjpeg

    etc. What more could you want? This is beautifully simple.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)