• Re: Duplicate identifiers in a single namespace

    From Bill Sloman@21:1/5 to Don Y on Sun Sep 29 23:15:08 2024
    On 29/09/2024 10:15 pm, Don Y wrote:
    How does (e.g., Windows) tolerate/differentiate between
    multiple *identical* identifiers in a given namespace/context?

    What makes you think that they do?

    Whenever I inadvertently try to duplicate file names, the second file
    name gets longer.

    And, what *value* to supporting this capability?

    None.

    --
    Bill Sloman, Sydney

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to All on Sun Sep 29 05:15:12 2024
    How does (e.g., Windows) tolerate/differentiate between
    multiple *identical* identifiers in a given namespace/context?

    And, what *value* to supporting this capability?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Jeroen Belleman on Sun Sep 29 11:10:53 2024
    On 9/29/2024 11:06 AM, Jeroen Belleman wrote:
    On 9/29/24 14:15, Don Y wrote:
    How does (e.g., Windows) tolerate/differentiate between
    multiple *identical* identifiers in a given namespace/context?

    And, what *value* to supporting this capability?


    Is that an electronics subject?

    Yes -- in that one uses computers (electronic devices, for the
    most part) in the design and fabrication of other electronic
    products. As such, where and how such things are implemented
    is of importance.

    Considerably more on-topic than visualization or quantum
    speculations.

    Granted, folks here are probably not as qualified as they might be
    in other forums to contribute to such answers -- beyond their
    personal experiences USING (e.g. Windows) same.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jeroen Belleman@21:1/5 to Don Y on Sun Sep 29 20:06:52 2024
    On 9/29/24 14:15, Don Y wrote:
    How does (e.g., Windows) tolerate/differentiate between
    multiple *identical* identifiers in a given namespace/context?

    And, what *value* to supporting this capability?


    Is that an electronics subject?

    Jeroen Belleman

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Cursitor Doom@21:1/5 to jeroen@nospam.please on Sun Sep 29 20:53:00 2024
    On Sun, 29 Sep 2024 20:06:52 +0200, Jeroen Belleman
    <jeroen@nospam.please> wrote:

    On 9/29/24 14:15, Don Y wrote:
    How does (e.g., Windows) tolerate/differentiate between
    multiple *identical* identifiers in a given namespace/context?

    And, what *value* to supporting this capability?


    Is that an electronics subject?

    What language was it written in?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Don Y on Sun Sep 29 13:32:50 2024
    On 9/29/2024 11:10 AM, Don Y wrote:
    On 9/29/2024 11:06 AM, Jeroen Belleman wrote:
    On 9/29/24 14:15, Don Y wrote:
    How does (e.g., Windows) tolerate/differentiate between
    multiple *identical* identifiers in a given namespace/context?

    And, what *value* to supporting this capability?


    Is that an electronics subject?

    Yes -- in that one uses computers (electronic devices, for the
    most part) in the design and fabrication of other electronic
    products.  As such, where and how such things are implemented
    is of importance.

    Considerably more on-topic than visualization or quantum
    speculations.

    Granted, folks here are probably not as qualified as they might be
    in other forums to contribute to such answers -- beyond their
    personal experiences USING (e.g. Windows) same.

    Here, for example, is a situation that I suspect many folks have
    encountered on their own workstations -- but ignored due to a curiosity deficit:

    <https://mega.nz/file/krwmlBCL#Im_HcJFa6i6IaR6m3ziL4GadXO3uZnam0iAAWk-xkPI>

    Note the two icons having the same name (desktop.ini) but different
    contents. I can create many such "duplicates" presented in the
    "Desktop" namespace.

    Inspecting the properties of each reveals that they are, in fact, different objects in the *file* system. Their coexistence in the "Desktop"
    namespace is the result of the desktop being (effectively) implemented
    as a union mount. You can note the pathnames to the individual files.

    [Similar behaviors exist with the "Start Menu", etc.]

    This has been a feature of computer systems for at least 35 years,
    though most folks aren't savvy enough to have availed themselves of
    its utility.

    In a more conventional union mount, the name collision would be resolved through some sort of predefined precedence. E.g., *layering* the mounts
    so any names/identifiers present on an upper layer hide any objects having
    the same names on *lower* layers.

    This is a win in writing applications as it lets the developer create
    immutable objects that define "APPLICATION defaults" without fear
    that the user may inadvertently alter them.

    The *user's* "settings" are stored in a file having the same name as
    the application's. If the file doesn't exist, the file containing
    the defaults is exposed (by the absence of anything having the same
    layered name ON TOP of it). If the user's file *does* exist, then
    the defaults file is hidden.

    As nothing dictates the size or number of layers of such a hierarchy,
    one can create a folder called "application_X_settings" that has
    "files" (i.e., identifiers) bearing names whose presence (or absence)
    indicates the state of binary options; and, whose *contents* indicate
    values for those named settings. So, "voltage" could contain "12.7"
    while "current_limit" could contain "3.5".

    [This is becoming increasingly commonplace as folks use the typical
    file system as a central namespace for everything that might want to
    be named: "write '9600' to .../COM1/baudrate and '8N1' into ../COM1/character_format; and, if you ever want to KNOW what these
    settings are, currently, just *read* those 'files'!"]
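
    To make the layering concrete, here is a minimal C sketch of that
    kind of lookup. (The directory names and the open_setting() helper
    are invented for illustration; they aren't Windows', or any real
    system's, API.) A name found on an upper layer simply hides the
    same name on every layer beneath it:

    #include <stdio.h>

    /* Layers are searched top-down; the first hit wins. */
    static const char *layers[] = {
        "/home/user/application_X_settings",   /* user overrides      */
        "/etc/site/application_X_settings",    /* site overrides      */
        "/opt/app_X/defaults",                 /* immutable defaults  */
    };

    /* Return a handle to the topmost file answering to 'name'. */
    static FILE *open_setting(const char *name)
    {
        char path[512];
        for (size_t i = 0; i < sizeof layers / sizeof layers[0]; i++) {
            snprintf(path, sizeof path, "%s/%s", layers[i], name);
            FILE *fp = fopen(path, "r");
            if (fp != NULL)
                return fp;                     /* upper layer hides lower */
        }
        return NULL;                           /* not set at any layer    */
    }

    int main(void)
    {
        /* "voltage" might contain "12.7"; the file's absence is itself
           information (e.g., a binary option left "off"). */
        FILE *fp = open_setting("voltage");
        if (fp != NULL) {
            double volts;
            if (fscanf(fp, "%lf", &volts) == 1)
                printf("voltage = %.1f\n", volts);
            fclose(fp);
        } else {
            puts("voltage: not configured at any layer");
        }
        return 0;
    }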

    The user can then have *his* "application_X_settings" layered atop
    the immutable "defaults" defined by the developer. Or, a set of
    site-specific (or project-specific) settings.

    Otherwise, the application embeds a set of defaults in its binary.
    Then, goes looking for a set of "global" defaults that reside in
    a file in the filesystem. Then, a set that reside in the user's
    $HOME, etc. Deliberately writing code to parse each of these
    "sources", in turn.

    For each application.

    However, MS's implementation exposes every object in the union
    without a way of (easily) identifying which is which. E.g., how
    do YOU know that the PROPERTIES windows I displayed haven't been
    reversed?

    But, hey, if your products only need 1970's vintage software
    design techniques, why learn something new, right? :> Was a
    time when BASIC, FORTRAN and COBOL ruled the world -- why change?!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jasen Betts@21:1/5 to Don Y on Sat Oct 5 23:48:25 2024
    On 2024-09-29, Don Y <blockedofcourse@foo.invalid> wrote:
    How does (e.g., Windows) tolerate/differentiate between
    multiple *identical* identifiers in a given namespace/context?

    And, what *value* to supporting this capability?

    You have a tendency to be misunderstood when you start a new thread.
    Do you have any examples?

    --
    Jasen.
    🇺🇦 Слава Україні (Glory to Ukraine)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Jasen Betts on Sat Oct 5 17:28:59 2024
    On 10/5/2024 4:48 PM, Jasen Betts wrote:
    On 2024-09-29, Don Y <blockedofcourse@foo.invalid> wrote:
    How does (e.g., Windows) tolerate/differentiate between
    multiple *identical* identifiers in a given namespace/context?

    And, what *value* to supporting this capability?

    You have a tendency to be misunderstood when you start a new thread.
    Do you have any examples?

    Read the rest of the thread.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jasen Betts@21:1/5 to Don Y on Sun Oct 6 01:40:41 2024
    On 2024-09-29, Don Y <blockedofcourse@foo.invalid> wrote:
    On 9/29/2024 11:10 AM, Don Y wrote:

    <https://mega.nz/file/krwmlBCL#Im_HcJFa6i6IaR6m3ziL4GadXO3uZnam0iAAWk-xkPI>

    Note the two icons having the same name (desktop.ini) but different
    contents. I can create many such "duplicates" presented in the
    "Desktop" namespace.

    I notice two files, one called "C:\Users\Administrator\Desktop\desktop.ini"
    and the other called "C:\Users\Public\Desktop\desktop.ini"

    What's a ""Desktop" namespace" ?

    Is there some way to retrieve those files from that namespace by name?

    However, MS's implementation exposes every object in the union
    without a way of (easily) identifying which is which. E.g., how
    do YOU know that the PROPERTIES windows I displayed haven't been
    reversed?

    Microsoft making an ambiguous user interface is neither surprising nor interesting to me.

    Two icons at the top have the same writing under them, but that
    writing is not their name, it's only a partial representation.
    their actual names on the desktop are their screen co-ordinates.

    --
    Jasen.
    🇺🇦 Слава Україні (Glory to Ukraine)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Jasen Betts on Sat Oct 5 19:59:06 2024
    On 10/5/2024 6:40 PM, Jasen Betts wrote:
    On 2024-09-29, Don Y <blockedofcourse@foo.invalid> wrote:
    On 9/29/2024 11:10 AM, Don Y wrote:

    <https://mega.nz/file/krwmlBCL#Im_HcJFa6i6IaR6m3ziL4GadXO3uZnam0iAAWk-xkPI>

    Note the two icons having the same name (desktop.ini) but different
    contents. I can create many such "duplicates" presented in the
    "Desktop" namespace.

    I notice two files, one called "C:\Users\Administrator\Desktop\desktop.ini" and the other called "C:\Users\Public\Desktop\desktop.ini"

    What's a ""Desktop" namespace" ?

    It's the set of names that are visible to a user when looking at
    their desktop.

    Is there some way to retrieve those files from that namespace by name?

    I have no idea what APIs MS supports for this. Obviously, Explorer
    (or whatever it is that I am interacting with when I click on an icon
    displayed on the "Desktop") can map the screen location of my click
    to a specific object (the file associated with the icon).

    Is this supported in an API? Or, does an application wanting to interact
    with "The Desktop" have to effectively implement the union internally?

    However, MS's implementation exposes every object in the union
    without a way of (easily) identifying which is which. E.g., how
    do YOU know that the PROPERTIES windows I displayed haven't been
    reversed?

    Microsoft making an ambiguous user interface is neither surprising nor interesting to me.

    *My* concern is to whether this would ever have value:
    "And, what *value* to supporting this capability?"
    Most union mounts apply some default sense of priority to namespace
    conflicts. So, "A_Name" always resolves the same. There *is*
    value to that, even in a union mount (as I illustrated).

    MS could have similarly applied some priority to this resolver
    but chose not to. Whether that is a conscious decision on
    their part or a negligent oversight is hard to tell. Dismissing
    everything they do as folly is an arrogant approach.

    Two icons at the top have the same writing under them, but that
    writing is not their name, it's only a partial representation.
    their actual names on the desktop are their screen co-ordinates.

    To the piece of *code*, that is the case. But, to the human user,
    the coordinates are insignificant. I could swap those two icons
    (indeed, a bug in Windows causes desktop contents to magically
    reshuffle) while you are distracted and you would have no way of
    identifying which is the one you "wanted" -- without consulting
    meta information.

    Similar ambiguity exists in the Start Menu.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jasen Betts@21:1/5 to Don Y on Sun Oct 6 04:37:41 2024
    On 2024-10-06, Don Y <blockedofcourse@foo.invalid> wrote:
    On 10/5/2024 6:40 PM, Jasen Betts wrote:
    On 2024-09-29, Don Y <blockedofcourse@foo.invalid> wrote:
    On 9/29/2024 11:10 AM, Don Y wrote:

    <https://mega.nz/file/krwmlBCL#Im_HcJFa6i6IaR6m3ziL4GadXO3uZnam0iAAWk-xkPI>
    Note the two icons having the same name (desktop.ini) but different
    contents. I can create many such "duplicates" presented in the
    "Desktop" namespace.

    I notice two files, one called "C:\Users\Administrator\Desktop\desktop.ini" and the other called "C:\Users\Public\Desktop\desktop.ini"

    What's a ""Desktop" namespace" ?

    It's the set of names that are visible to a user when looking at
    their desktop.

    Is there some way to retrieve those files from that namespace by name?

    I have no idea what APIs MS supports for this. Obviously, Explorer
    (or whatever it is that I am interacting with when I click on an icon displayed on the "Desktop") can map the screen location of my click
    to a specific object (the file associated with the icon).

    Is this supported in an API? Or, does an application wanting to interact with "The Desktop" have to effectively implement the union internally?

    However, MS's implementation exposes every object in the union
    without a way of (easily) identifying which is which. E.g., how
    do YOU know that the PROPERTIES windows I displayed haven't been
    reversed?

    Microsoft making an ambiguous user interface is neither surprising nor
    interesting to me.

    *My* concern is to whether this would ever have value:
    "And, what *value* to supporting this capability?"

    I'm not convinced that it even exists, desktop icons have coordinates
    and a keyboard navigation order, I'm not aware of any way to reference
    by "name".

    Two icons at the top have the same writing under them, but that
    writing is not their name, it's only a partial representation.
    their actual names on the desktop are their screen co-ordinates.

    To the piece of *code*, that is the case. But, to the human user,
    the coordinates are insignificant. I could swap those two icons
    (indeed, a bug in Windows causes desktop contents to magically
    reshuffle) while you are distracted and you would have no way of
    identifying which is the one you "wanted" -- without consulting
    meta information.

    Similar ambiguity exists in the Start Menu.

    Suppose that in your class there are two people called "Mohammad Wong".
    what are you going to do? Are we even still talking about name
    spaces?

    --
    Jasen.
    🇺🇦 Слава Україні (Glory to Ukraine)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Jasen Betts on Sat Oct 5 22:17:11 2024
    On 10/5/2024 9:37 PM, Jasen Betts wrote:
    Is this supported in an API? Or, does an application wanting to interact
    with "The Desktop" have to effectively implement the union internally?

    However, MS's implementation exposes every object in the union
    without a way of (easily) identifying which is which. E.g., how
    do YOU know that the PROPERTIES windows I displayed haven't been
    reversed?

    Microsoft making an ambiguous user interface is neither surprising nor
    interesting to me.

    *My* concern is to whether this would ever have value:
    "And, what *value* to supporting this capability?"

    I'm not convinced that it even exists, desktop icons have coordinates
    and a keyboard navigation order, I'm not aware of any way to reference
    by "name".

    I have a clock gadget colocated with the icon for a PDF.
    It could just as easily be a folder icon, etc. -- MS seems
    to like to slip things *under* my "gadgets".

    What is the (x,y) name of each?

    Two icons at the top have the same writing under them, but that
    writing is not their name, it's only a partial representation.
    their actual names on the desktop are their screen co-ordinates.

    To the piece of *code*, that is the case. But, to the human user,
    the coordinates are insignificant. I could swap those two icons
    (indeed, a bug in Windows causes desktop contents to magically
    reshuffle) while you are distracted and you would have no way of
    identifying which is the one you "wanted" -- without consulting
    meta information.

    Similar ambiguity exists in the Start Menu.

    Suppose that in your class there are two people called "Mohammad Wong".
    what are you going to do?

    Isn't that the question I originally asked:
    "How does (e.g., Windows) tolerate/differentiate between
    multiple *identical* identifiers in a given namespace/context?"

    I can suggest several different ways that a multiplicity of "objects"
    having identical identifiers could be disambiguated. But, is there
    **value** in supporting multiple distinct objects having identical
    identifiers in a single namespace? Is there ever a case where the
    hassle of disambiguating is outweighed by some perceived added value?

    Are we even still talking about name
    spaces?


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Don Y on Wed Oct 9 16:22:32 2024
    On 10/5/2024 10:17 PM, Don Y wrote:
    I can suggest several different ways that a multiplicity of "objects"
    having identical identifiers could be disambiguated.  But, is there **value** in supporting multiple distinct objects having identical identifiers in a single namespace?  Is there ever a case where the
    hassle of disambiguating is outweighed by some perceived added value?

    Actually, we've decided that duplication of names is only a problem due
    to design decisions in *legacy* systems. And, that supporting
    name collisions is actually a feature that can be exploited in
    more modern architectures! (in addition to being more intuitive)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From albert@spenarnc.xs4all.nl@21:1/5 to blockedofcourse@foo.invalid on Wed Oct 16 13:00:48 2024
    In article <vdbgch$1ob5k$1@dont-email.me>,
    Don Y <blockedofcourse@foo.invalid> wrote:
    How does (e.g., Windows) tolerate/differentiate between
    multiple *identical* identifiers in a given namespace/context?

    And, what *value* to supporting this capability?

    This is not a Windows question. It is a language question.
    C
    C is weird. Some identifiers can be declared multiply
    as forward. You can have the same name sometimes for
    things that are of different types. Not to speak of macros.

    Pascal
    If you declare multiple identifiers in the same namespace
    you are hit on the head. You can have nested namespaces
    and there is no conflict, the inner namespace counts.

    Forth
    You can use the same name for multiple objects.
    The name last defined counts. You get at most a warning.

    Groetjes Albert
    --
    Temu exploits Christians: (Disclaimer, only 10 apostles)
    Last Supper Acrylic Suncatcher - 15Cm Round Stained Glass- Style Wall
    Art For Home, Office And Garden Decor - Perfect For Windows, Bars,
    And Gifts For Friends Family And Colleagues.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to albert@spenarnc.xs4all.nl on Wed Oct 16 04:58:05 2024
    On 10/16/2024 4:00 AM, albert@spenarnc.xs4all.nl wrote:
    In article <vdbgch$1ob5k$1@dont-email.me>,
    Don Y <blockedofcourse@foo.invalid> wrote:
    How does (e.g., Windows) tolerate/differentiate between
    multiple *identical* identifiers in a given namespace/context?

    And, what *value* to supporting this capability?

    This is not a Windows question. It is a language question.
    C
    C is weird. Some identifiers can be declared multiply
    as forward. You can have the same name sometimes for
    things that are of different types. Not to speak of macros.

    As well as within different scopes created by the user.
    A scope effectively creates a new namespace.

    Pascal
    If you declare multiple identifiers in the same namespace
    you are hit on the head. You can have nested namespaces
    and there is no conflict, the inner namespace counts.

    The same applies in C. But, in each of these cases, the
    developer is aware that there *are* different namespaces
    even though the same name APPEARS to be used for different
    things "on the same piece of paper".
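
    A trivial (made-up) C fragment shows what that means in practice:
    the same identifier names two distinct objects, with no collision,
    because each block is effectively its own namespace:

    #include <stdio.h>

    static int count = 1;          /* file-scope 'count'                      */

    int main(void)
    {
        printf("%d\n", count);     /* prints 1: the file-scope object         */
        {
            int count = 2;         /* block-scope 'count' hides the outer one */
            printf("%d\n", count); /* prints 2: the inner object              */
        }
        printf("%d\n", count);     /* prints 1: the outer object is back in view */
        return 0;
    }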

    Forth
    You can use the same name for multiple objects.
    The name last defined counts. You get at most a warning.

    It is more abstract than that.

    What "(programming) language" do you associate with a file system
    hierarchy (the most common/legacy "namespace" in use in computers)?
    Why must "/a/file/name" refer to *exactly* one object (note objects
    are often not "files" in the strictest sense of the word)? In a
    union mount, there can be two (or more) such "name" objects sourced
    from different parts of the filesystem (for example, "/a/file/name"
    and "/another/name") with different schemes used to determine which
    object is accessed as "name" in that namespace.

    But, if you think about it, why is such disambiguation necessary?

    Ignoring computers (as there is nothing special about
    "namespaces" that confines them to use *in* computers),
    what are the individual names (identifiers) of the 12 eggs
    in this carton in my refrigerator? Or, the drinking
    glasses in the cupboard?

    I.e., we are perfectly capable of dealing with items that
    have non-unique identifiers everyday. We aren't paralyzed when
    confronted with the task of fetching "egg", so, why impose
    that constraint on objects?

    [In the computer context, the uniqueness is a consequence of
    legacy approaches to system designs where there was only one
    way to reference an object]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Joe Gwinn@21:1/5 to albert@spenarnc.xs4all.nl on Wed Oct 16 10:36:33 2024
    On Wed, 16 Oct 2024 13:00:48 +0200, albert@spenarnc.xs4all.nl wrote:

    In article <vdbgch$1ob5k$1@dont-email.me>,
    Don Y <blockedofcourse@foo.invalid> wrote:
    How does (e.g., Windows) tolerate/differentiate between
    multiple *identical* identifiers in a given namespace/context?

    And, what *value* to supporting this capability?

    This is not a Windows question. It is a language question.
    C
    C is weird. Some identifiers can be declared multiply
    as forward. You can have the same name sometimes for
    things that are of different types. Not to speak of macros.

    Look into C/C++ Namespaces. That explains things reasonably well.


    Pascal
    If you declare multiple identifiers in the same namespace
    you are hit on the head. You can have nested namespaces
    and there is no conflict, the inner namespace counts.

    Pascal is compiled in a single pass through the source code, so
    everything must be defined before it is first used. Unlike C/C++,
    which has a multipass compiler and linker. This was done because
    Pascal was intended for teaching programming, and the load in the
    university's computers was from compiling buggy student homework code
    time and time again, while C was intended to replace assembly in the
    Unix operating system.


    Forth
    You can use the same name for multiple objects.
    The name last defined counts. You get at most a warning.

    Forth is interpreted, and is a pure pushdown language, like an HP
    calculator using RPN.

    Joe Gwinn


    Groetjes Albert

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Joe Gwinn on Wed Oct 16 14:17:54 2024
    On 10/16/2024 7:36 AM, Joe Gwinn wrote:
    Pascal
    If you declare multiple identifiers in the same namespace
    you are hit on the head. You can have nested namespaces
    and there is no conflict, the inner namespace counts.

    Pascal is compiled in a single pass through the source code, so
    everything must be defined before it is first used. Unlike C/C++,
    which has a multipass compiler and linker. This was done because
    Pascal was intended for teaching programming, and the load in the university's computers was from compiling buggy student homework code
    time and time again, while C was intended to replace assembly in the
    Unix operating system.

    I think you are misassigning effect to cause. Wirth was
    obsessed (?) with simplicity. Even at the expense of
    making things *harder* for the developer! (shortsighted,
    IMHO).

    Requiring the developer to declare his *future* intention
    to reference an object *could* be seen as simplifying the
    process -- at least from the compiler's point of view.

    But, I've no fond memories of *any* language where I was
    forced to do something that the compiler could very obviously
    do; how is making MORE work for me going to make things better?

    I share his belief that things should be "simpler instead of
    more complex". But, that only applies to the decomposition
    of a problem; the problem itself defines its complexity.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Joe Gwinn on Wed Oct 16 16:54:52 2024
    On 10/16/2024 4:30 PM, Joe Gwinn wrote:
    I once had the suicidal job of choosing which language to write a
    large mass of code in. When my name was made public, the phone
    immediately blew off the hook with special pleaders for the two
    candidates, Pascal and plain K&R C. I'm doomed - no matter which is
    chosen, there will be war. And blood.

    Tee hee hee...

    We had performed something like six prior benchmark studies that
    showed that compiled C code was 1.5 times faster than compiled Pascal
    code, and Pascal had all sorts of awkward limitations when used to
    implement large systems, in this case hundreds of thousands of lines
    of code.

    That is true of many languages and systems. *But*, it is predicated on
    the competency of the developers deployed. IMO, this has been the
    bane of software for much of the past few decades... the belief that
    you can just change some aspect of the paradigm and magically make
    "better" developers out of people that really don't have the discipline
    or skillsets for the job.

    An average joe (no offense) can use a paint roller to paint a room
    quicker than using a brush. But, that same person would likely
    not notice how poorly the trim was "cut in", how much paint ended up
    on windows, etc.

    Will an average *coder* (someone who has managed to figure out how to
    get a program to run and proclaimed himself a coder thereafter)
    see the differences in his "product" (i.e., the code he has written)
    and the blemishes/shortcomings it contains?

    That didn't settle the issue, because the Ada mafia saw Pascal as the stepping stone to Ada nirvana, and C as the devil incarnate.

    C is "running with scissors". You can either prohibit it *or*
    teach people how to SAFELY run with scissors! The problem with
    programming languages -- unlike scissors that tend to inflict injury
    on the "runner" -- is that the folks who are doing the coding are
    oblivious to the defects they create.

    I was still scratching my head about why Pascal was so different than
    C, so I looked for the original intent of the founders. Which I found
    in the Introductions in the Pascal Report and K&R C: Pascal was
    intended for teaching Computer Science students their first
    programming language, while C was intended for implementing large
    systems, like the Unix kernel.

    Wirth maintained a KISS attitude in ALL of his endeavors. He
    failed to see that requiring forward declarations wasn't really
    making it any simpler /for the coders/. Compilers get written
    and revised "a few times" but *used* thousands of times. Why
    favor the compiler writer over the developer?

    Prior operating systems were all written in assembly code, and so were
    not portable between vendors, so Unix needed to be written in
    something that could be ported, and yet was sufficient to implement an
    OS kernel. Nor can one write an OS in Pascal.

    You can write an OS in Pascal -- but with lots of "helper functions"
    that defeat the purpose of the HLL's "safety mechanisms".

    This did work - only something like 4% of Unix had to be written in
    assembly, and it was simply rewritten for each new family of
    computers.

    So the Pascal crowd fell silent, and C was chosen and successfully
    used.

    The Ada Mandate was rescinded maybe ten years later. The ISO-OSI
    mandate fell a year or so later, slain by TCP/IP.

    I had to make a similar decision, early on. It's really easy to get
    on a soapbox and preach how it *should* be done. But, if you expect
    (and want) others to adopt and embellish your work, you have to choose
    an implementation that they will accept, if not "embrace".

    And, this without requiring scads of overhead (people and other
    resources) to accomplish a particular goal.

    Key in this is figuring out how to *hide* complexity so a user
    (of varying degrees of capability across a wide spectrum) can
    get something to work within the constraints you've laid out.

    E.g., as I allow end users to write code (scripts), I can't
    assume they understand things like operator precedence, cancellation,
    etc. *I* have to address those issues in a way that allows them
    to remain ignorant and still get the results they expect/desire.

    The same applies to other "more advanced" levels of software
    development; the more minutiae that the developer has to contend with,
    the less happy he will be about the experience.

    [E.g., I modified my compiler to support a syntax of the form:
    handle=>method(arguments)
    an homage to:
    pointer->member(arguments)
    where "handle" is an identifier (small integer) that uniquely references
    an object in the local context /that may reside on another processor/
    (which means the "pointer" approach is inappropriate) so the developer
    doesn't have to deal with the RMI mechanisms.]
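
    Purely for illustration, one *plausible* lowering of that syntax
    (the names object_ref, rmi_invoke and METHOD_POWER_ON are invented
    here, not the actual implementation): the handle indexes a
    per-context table recording where the object really lives, and the
    compiler emits a generic invocation against that entry.

    #include <stdint.h>
    #include <stdio.h>

    typedef int handle_t;              /* the "small integer" the developer sees */

    typedef struct {
        uint32_t node;                 /* which processor hosts the object */
        uint32_t object;               /* object id within that node       */
    } object_ref;

    /* Per-context table: handle -> remote object reference. */
    static object_ref handle_table[256];

    /* Stand-in for the transport; a real system would marshal the
       arguments and ship them to ref.node. */
    static int rmi_invoke(object_ref ref, int method_id,
                          const void *args, size_t args_len)
    {
        printf("send: node=%u object=%u method=%d (%zu bytes of args)\n",
               ref.node, ref.object, method_id, args_len);
        return 0;                      /* pretend the remote call succeeded */
    }

    enum { METHOD_POWER_ON = 1 };

    int main(void)
    {
        handle_t node127 = 5;          /* binding established elsewhere */
        handle_table[node127] = (object_ref){ .node = 127, .object = 42 };

        /* What the developer writes as:   node127=>PowerOn()
           might compile down to roughly: */
        rmi_invoke(handle_table[node127], METHOD_POWER_ON, NULL, 0);
        return 0;
    }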

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Joe Gwinn@21:1/5 to blockedofcourse@foo.invalid on Wed Oct 16 19:30:50 2024
    On Wed, 16 Oct 2024 14:17:54 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    On 10/16/2024 7:36 AM, Joe Gwinn wrote:
    Pascal
    If you declare multiple identifiers in the same namespace
    you are hit on the head. You can have nested namespaces
    and there is no conflict, the inner namespace counts.

    Pascal is compiled in a single pass through the source code, so
    everything must be defined before it is first used. Unlike C/C++,
    which has a multipass compiler and linker. This was done because
    Pascal was intended for teaching programming, and the load in the
    university's computers was from compiling buggy student homework code
    time and time again, while C was intended to replace assembly in the
    Unix operating system.

    I think you are misassigning effect to cause. Wirth was
    obsessed (?) with simplicity. Even at the expense of
    making things *harder* for the developer! (shortsighted,
    IMHO).

    Requiring the developer to declare his *future* intention
    to reference an object *could* be seen as simplifying the
    process -- at least from the compiler's point of view.

    But, I've no fond memories of *any* language where I was
    forced to do something that the compiler could very obviously
    do; how is making MORE work for me going to make things better?

    I share his belief that things should be "simpler instead of
    more complex". But, that only applies to the decomposition
    of a problem; the problem itself defines its complexity.

    I once had the suicidal job of choosing which language to write a
    large mass of code in. When my name was made public, the phone
    immediately blew off the hook with special pleaders for the two
    candidates, Pascal and plain K&R C. I'm doomed - no matter which is
    chosen, there will be war. And blood.

    We had performed something like six prior benchmark studies that
    showed that compiled C code was 1.5 times faster than compiled Pascal
    code, and Pascal had all sorts of awkward limitations when used to
    implement large systems, in this case hundreds of thousands of lines
    of code.

    That didn't settle the issue, because the Ada mafia saw Pascal as the
    stepping stone to Ada nirvana, and C as the devil incarnate.

    I was still scratching my head about why Pascal was so different than
    C, so I looked for the original intent of the founders. Which I found
    in the Introductions in the Pascal Report and K&R C: Pascal was
    intended for teaching Computer Science students their first
    programming language, while C was intended for implementing large
    systems, like the Unix kernel.

    Prior operating systems were all written in assembly code, and so were
    not portable between vendors, so Unix needed to be written in
    something that could be ported, and yet was sufficient to implement an
    OS kernel. Nor can one write an OS in Pascal.

    This did work - only something like 4% of Unix had to be written in
    assembly, and it was simply rewritten for each new family of
    computers.

    So the Pascal crowd fell silent, and C was chosen and successfully
    used.

    The Ada Mandate was rescinded maybe ten years later. The ISO-OSI
    mandate fell a year or so later, slain by TCP/IP.

    Joe Gwinn

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Joe Gwinn@21:1/5 to blockedofcourse@foo.invalid on Sat Oct 19 18:26:31 2024
    On Wed, 16 Oct 2024 16:54:52 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    On 10/16/2024 4:30 PM, Joe Gwinn wrote:
    I once had the suicidal job of choosing which language to write a
    large mass of code in. When my name was made public, the phone
    immediately blew off the hook with special pleaders for the two
    candidates, Pascal and plain K&R C. I'm doomed - no matter which is
    chosen, there will be war. And blood.

    Tee hee hee...

    We had performed something like six prior benchmark studies that
    showed that compiled C code was 1.5 times faster than compiled Pascal
    code, and Pascal had all sorts of awkward limitations when used to
    implement large systems, in this case hundreds of thousands of lines
    of code.

    That is true of many languages and systems. *But*, it is predicated on
    the competency of the developers deployed. IMO, this has been the
    bane of software for much of the past few decades... the belief that
    you can just change some aspect of the paradigm and magically make
    "better" developers out of people that really don't have the discipline
    or skillsets for the job.

    An average joe (no offense) can use a paint roller to paint a room
    quicker than using a brush. But, that same person would likely
    not notice how poorly the trim was "cut in", how much paint ended up
    on windows, etc.

    Will an average *coder* (someone who has managed to figure out how to
    get a program to run and proclaimed himself a coder thereafter)
    see the differences in his "product" (i.e., the code he has written)
    and the blemishes/shortcomings it contains?

    Well, we had the developers we had, and the team was large enough that
    they cannot all be superstars in any language.


    That didn't settle the issue, because the Ada mafia saw Pascal as the
    stepping stone to Ada nirvana, and C as the devil incarnate.

    C is "running with scissors". You can either prohibit it *or*
    teach people how to SAFELY run with scissors! The problem with
    programming languages -- unlike scissors that tend to inflict injury
    on the "runner" -- is that the folks who are doing the coding are
    oblivious to the defects they create.

    Integration soon beats that out of them.


    I was still scratching my head about why Pascal was so different than
    C, so I looked for the original intent of the founders. Which I found
    in the Introductions in the Pascal Report and K&R C: Pascal was
    intended for teaching Computer Science students their first
    programming language, while C was intended for implementing large
    systems, like the Unix kernel.

    Wirth maintained a KISS attitude in ALL of his endeavors. He
    failed to see that requiring forward declarations wasn't really
    making it any simpler /for the coders/. Compilers get written
    and revised "a few times" but *used* thousands of times. Why
    favor the compiler writer over the developer?

    Because computers were quite expensive then (circa 1982), and so
    Pascal was optimized to eliminate as much of the compiler task as
    possible, given that teaching languages are used to solve toy
    problems, the focus being learning to program, not to deliver
    efficient working code for something industrial-scale in nature.


    Prior operating systems were all written in assembly code, and so were
    not portable between vendors, so Unix needed to be written in
    something that could be ported, and yet was sufficient to implement an
    OS kernel. Nor can one write an OS in Pascal.

    You can write an OS in Pascal -- but with lots of "helper functions"
    that defeat the purpose of the HLL's "safety mechanisms".

    Yes, lots. They were generally written in assembler, and it was
    estimated that about 20% of the code would have to be in assembly if
    Pascal were used, based on a prior project that had done just that a
    few years earlier.

    The target computers were pretty spare, multiple Motorola 68000
    single-board computers in a VME crate or the like. I recall that a
    one megahertz instruction rate was considered really fast then.

    Much was made by the Pascal folk of the cost of software maintenance,
    but on the scale of a radar, maintenance was dominated by the
    hardware, and software maintenance was a roundoff error on the total
    cost of ownership. The electric bill was also larger.


    This did work - only something like 4% of Unix had to be written in
    assembly, and it was simply rewritten for each new family of
    computers. (Turned out to be 6%.)

    The conclusion was to use C: It was designed for the implementation
    of large realtime systems, while Pascal was designed as a teaching
    language, and is somewhat slow and awkward for realtime systems,
    forcing the use of various sidesteps, and much assembly code. Speed
    and the ability to drive hardware directly are the dominant issues
    controlling that part of development cost and risk that is sensitive
    to choice of implementation language.


    So the Pascal crowd fell silent, and C was chosen and successfully
    used.

    The Ada Mandate was rescinded maybe ten years later. The ISO-OSI
    mandate fell a year or so later, slain by TCP/IP.

    I had to make a similar decision, early on. It's really easy to get
    on a soapbox and preach how it *should* be done. But, if you expect
    (and want) others to adopt and embellish your work, you have to choose
    an implementation that they will accept, if not "embrace".

    And, this without requiring scads of overhead (people and other
    resources) to accomplish a particular goal.

    Key in this is figuring out how to *hide* complexity so a user
    (of varying degrees of capability across a wide spectrum) can
    get something to work within the constraints you've laid out.

    Hidden complexity is still complexity, with complex failure modes
    rendered incomprehensible and random-looking to those unaware of
    what's going on behind the pretty facade.

    I prefer to eliminate such complexity. And not to confuse the
    programmers, or treat them like children.

    War story from the days of Fortran, when I was the operating system
    expert: I had just these debates with the top application software
    guy, who claimed that all you needed was the top-level design of the
    software to debug the code.

    He had been struggling with a mysterious bug, where the code would
    crash soon after launch, every time. Code inspection and path tracing
    had all failed, for months. He challenged me to figure it out. I
    figured it out in ten minutes, by using OS-level tools, which provide
    access to a world completely unknown to the application software folk.
    The problem was how the compiler handled subroutines referenced in one
    module but not provided to the linker. Long story, but the resulting
    actual execution path was unrelated to the design of the application
    software, and one had to see things in assembly to understand what was happening.

    (This war story has been repeated in one form or another many times
    over the following years. Have kernel debugger, will travel.)


    E.g., as I allow end users to write code (scripts), I can't
    assume they understand things like operator precedence, cancellation,
    etc. *I* have to address those issues in a way that allows them
    to remain ignorant and still get the results they expect/desire.

    The same applies to other "more advanced" levels of software
    development; the more minutiae that the developer has to contend with,
    the less happy he will be about the experience.

    [E.g., I modified my compiler to support a syntax of the form:
    handle=>method(arguments)
    an homage to:
    pointer->member(arguments)
    where "handle" is an identifier (small integer) that uniquely references
    an object in the local context /that may reside on another processor/
    (which means the "pointer" approach is inappropriate) so the developer doesn't have to deal with the RMI mechanisms.]

    Pascal uses this exact approach. The absence of true pointers is
    crippling for hardware control, which is a big part of the reason that
    C prevailed.

    I assume that RMI is Remote Module or Method Invocation. These are
    inherently synchronous (like Ada rendezvous) and are crippling for
    realtime software of any complexity - the software soon ends up
    deadlocked, with everybody waiting for everybody else to do something.

    This is driven by the fact that the real world has uncorrelated
    events, capable of happening in any order, so no program that requires
    that events be ordered can survive.

    There is a benchmark for message-passing in realtime software where
    there is a ring of threads or processes passing a message around the ring
    any number of times. This is modeled on the central structure of many
    kinds of radar. Even one remote invocation will cause it to jam. As
    will sending a message to oneself. Only asynchronous message passing
    will work.
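
    For anyone who hasn't met it, a stripped-down sketch of such a ring
    (mine, not the actual benchmark) might look like this in C with
    POSIX threads: each node owns a one-slot mailbox and only ever
    forwards -- it never waits for an answer -- so the token keeps
    moving. Make any hop a blocking request/reply and, as noted above,
    the ring wedges.

    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    #define N_NODES 8
    #define N_LAPS  1000

    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t  ready;
        int             full;           /* 1 when a token is waiting */
        long            token;          /* remaining hop count       */
    } mailbox_t;

    static mailbox_t box[N_NODES];

    static void put(mailbox_t *m, long token)
    {
        pthread_mutex_lock(&m->lock);
        while (m->full)                 /* wait for the slot to drain */
            pthread_cond_wait(&m->ready, &m->lock);
        m->token = token;
        m->full  = 1;
        pthread_cond_broadcast(&m->ready);
        pthread_mutex_unlock(&m->lock);
    }

    static long get(mailbox_t *m)
    {
        long token;
        pthread_mutex_lock(&m->lock);
        while (!m->full)                /* wait for a token to arrive */
            pthread_cond_wait(&m->ready, &m->lock);
        token   = m->token;
        m->full = 0;
        pthread_cond_broadcast(&m->ready);
        pthread_mutex_unlock(&m->lock);
        return token;
    }

    static void *node(void *arg)
    {
        int id = (int)(intptr_t)arg;
        for (;;) {
            long hops = get(&box[id]);
            if (hops <= 0) {            /* shut down; tell the next node too */
                put(&box[(id + 1) % N_NODES], 0);
                return NULL;
            }
            put(&box[(id + 1) % N_NODES], hops - 1);  /* forward; never wait for a reply */
        }
    }

    int main(void)
    {
        pthread_t t[N_NODES];
        for (int i = 0; i < N_NODES; i++) {
            pthread_mutex_init(&box[i].lock, NULL);
            pthread_cond_init(&box[i].ready, NULL);
            box[i].full = 0;
            pthread_create(&t[i], NULL, node, (void *)(intptr_t)i);
        }
        put(&box[0], (long)N_NODES * N_LAPS);         /* inject the token */
        for (int i = 0; i < N_NODES; i++)
            pthread_join(t[i], NULL);
        puts("ring drained");
        return 0;
    }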

    Joe Gwinn

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Joe Gwinn on Sat Oct 19 17:15:24 2024
    On 10/19/2024 3:26 PM, Joe Gwinn wrote:
    Will an average *coder* (someone who has managed to figure out how to
    get a program to run and proclaimed himself a coder thereafter)
    see the differences in his "product" (i.e., the code he has written)
    and the blemishes/shortcomings it contains?

    Well, we had the developers we had, and the team was large enough that
    they cannot all be superstars in any language.

    And business targets "average performers" as the effort to hire
    and retain "superstars" limits what the company can accomplish.

    I was still scratching my head about why Pascal was so different than
    C, so I looked for the original intent of the founders. Which I found
    in the Introductions in the Pascal Report and K&R C: Pascal was
    intended for teaching Computer Science students their first
    programming language, while C was intended for implementing large
    systems, like the Unix kernel.

    Wirth maintained a KISS attitude in ALL of his endeavors. He
    failed to see that requiring forward declarations wasn't really
    making it any simpler /for the coders/. Compilers get written
    and revised "a few times" but *used* thousands of times. Why
    favor the compiler writer over the developer?

    Because computers were quite expensive then (circa 1982), and so
    Pascal was optimized to eliminate as much of the compiler task as
    possible, given that teaching languages are used to solve toy
    problems, the focus being learning to program, not to deliver
    efficient working code for something industrial-scale in nature.

    I went to school in the mid 70's. Each *course* had its own
    computer system (in addition to the school-wide "computing service")
    because each professor had his own slant on how he wanted to
    teach his courseware. We wrote code in Pascal, PL/1, LISP, Algol,
    Fortran, SNOBOL, and a variety of "toy" languages designed to
    illustrate specific concepts and OS approaches. I can't recall
    compile time ever being an issue (but, the largest classes had
    fewer than 400 students)

    Prior operating systems were all written in assembly code, and so were
    not portable between vendors, so Unix needed to be written in
    something that could be ported, and yet was sufficient to implement an
    OS kernel. Nor can one write an OS in Pascal.

    You can write an OS in Pascal -- but with lots of "helper functions"
    that defeat the purpose of the HLL's "safety mechanisms".

    Yes, lots. They were generally written in assembler, and it was
    estimated that about 20% of the code would have to be in assembly if
    Pascal were used, based on a prior project that had done just that a
    few years earlier.

    Yes. The same is true of eking out the last bits of performance
    from OSs written in C. There are too many hardware oddities that
    languages can't realistically address (without tying themselves
    unduly to a particular architecture).

    The target computers were pretty spare, multiple Motorola 68000
    single-board computers in a VME crate or the like. I recall that a
    one megahertz instruction rate was considered really fast then.

    Even the 645 ran at ~500KHz (!). Yet, supported hundreds of users
    doing all sorts of different tasks. (I think the 6180 ran at
    ~1MHz). But, each of these could exploit the fact that users
    don't consume all of the resources available /at any instant/
    on a processor.

    Contrast that with moving to the private sector and having
    an 8b CPU hosting your development system (with dog slow
    storage devices).

    Much was made by the Pascal folk of the cost of software maintenance,
    but on the scale of a radar, maintenance was dominated by the
    hardware, and software maintenance was a roundoff error on the total
    cost of ownership. The electric bill was also larger.

    There likely is less call for change in such an "appliance".
    Devices with richer UIs tend to see more feature creep.
    This was one of Wirth's pet peeves; the fact that "designers"
    were just throwing features together instead of THINKING about
    which were truly needed. E.g., Oberon looks like something
    out of the 1980's...

    This did work - only something like 4% of Unix had to be written in
    assembly, and it was simply rewritten for each new family of
    computers. (Turned out to be 6%.)

    The conclusion was to use C: It was designed for the implementation
    of large realtime systems, while Pascal was designed as a teaching
    language, and is somewhat slow and awkward for realtime systems,
    forcing the use of various sidesteps, and much assembly code. Speed
    and the ability to drive hardware directly are the dominant issues controlling that part of development cost and risk that is sensitive
    to choice of implementation language.

    One can write reliable code in C. But, there has to be discipline
    imposed (self or otherwise). Having an awareness of the underlying
    hardware goes a long way to making this adjustment.

    I had to write a driver for a PROM Programmer in Pascal. It was
    a dreadful experience! And, required an entirely different
    mindset. Things that you would do in C (or ASM) had incredibly
    inefficient analogs in Pascal.

    E.g., you could easily create an ASCII character for a particular
    hex-digit and concatenate these to form a "byte"; then those
    to form a word/address, etc. (imagine doing that for every byte
    you have to ship across to the programmer!) In Pascal, you spent
    all your time in call/return instead of actually doing any *work*!
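
    For contrast, the C version of that "hex-ification" is nearly free:
    a table lookup per nibble, no call/return per character. (The
    record format below is made up, just to show the shape of it.)

    #include <stdio.h>
    #include <stdint.h>

    static const char hex[] = "0123456789ABCDEF";

    /* Emit 'len' bytes starting at 'addr' as "AAAA:DD DD DD ...". */
    static void emit_record(char *out, uint16_t addr,
                            const uint8_t *data, int len)
    {
        *out++ = hex[(addr >> 12) & 0xF];   /* address, high nibble first */
        *out++ = hex[(addr >>  8) & 0xF];
        *out++ = hex[(addr >>  4) & 0xF];
        *out++ = hex[ addr        & 0xF];
        *out++ = ':';
        for (int i = 0; i < len; i++) {
            *out++ = hex[data[i] >> 4];     /* high nibble of the byte */
            *out++ = hex[data[i] & 0xF];    /* low nibble              */
            *out++ = ' ';
        }
        *out = '\0';
    }

    int main(void)
    {
        uint8_t rom[] = { 0xDE, 0xAD, 0xBE, 0xEF };
        char line[64];

        emit_record(line, 0x1C00, rom, sizeof rom);
        puts(line);                         /* prints: 1C00:DE AD BE EF */
        return 0;
    }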

    So the Pascal crowd fell silent, and C was chosen and successfully
    used.

    The Ada Mandate was rescinded maybe ten years later. The ISO-OSI
    mandate fell a year or so later, slain by TCP/IP.

    I had to make a similar decision, early on. It's really easy to get
    on a soapbox and preach how it *should* be done. But, if you expect
    (and want) others to adopt and embellish your work, you have to choose
    an implementation that they will accept, if not "embrace".

    And, this without requiring scads of overhead (people and other
    resources) to accomplish a particular goal.

    Key in this is figuring out how to *hide* complexity so a user
    (of varying degrees of capability across a wide spectrum) can
    get something to work within the constraints you've laid out.

    Hidden complexity is still complexity, with complex failure modes
    rendered incomprehensible and random-looking to those unaware of
    what's going on behind the pretty facade.

    If you can't explain the bulk of a solution "seated, having a drink",
    then it is too complex. "Complex is anything that doesn't fit in a
    single brain".

    Explain how the filesystem on <whatever> works, internally. How
    does it layer onto storage media? How are "devices" hooked into it?
    Abstract mechanisms like pipes? Where does buffering come into
    play? ACLs?

    This is "complex" because a legacy idea has been usurped to tie
    all of these things together.

    I prefer to eliminate such complexity. And not to confuse the
    programmers, or treat them like children.

    By picking good abstractions, you don't have to do either.
    But, you can't retrofit those abstractions to existing
    systems. And, too often, those systems have "precooked"
    mindsets.

    War story from the days of Fortran, when I was the operating system
    expert: I had just these debates with the top application software
    guy, who claimed that all you needed was the top-level design of the
    software to debug the code.

    He had been struggling with a mysterious bug, where the code would
    crash soon after launch, every time. Code inspection and path tracing
    had all failed, for months. He challenged me to figure it out. I
    figured it out in ten minutes, by using OS-level tools, which provide
    access to a world completely unknown to the application software folk.
    The problem was how the compiler handled subroutines referenced in one
    module but not provided to the linker. Long story, but the resulting
    actual execution path was unrelated to the design of the application software, and one had to see things in assembly to understand what was happening.

    (This war story has been repeated in one form or another many times
    over the following years. Have kernel debugger, will travel.)

    E.g., as I allow end users to write code (scripts), I can't
    assume they understand things like operator precedence, cancellation,
    etc. *I* have to address those issues in a way that allows them
    to remain ignorant and still get the results they expect/desire.

    The same applies to other "more advanced" levels of software
    development; the more minutiae that the developer has to contend with,
    the less happy he will be about the experience.

    [E.g., I modified my compiler to support a syntax of the form:
    handle=>method(arguments)
    an homage to:
    pointer->member(arguments)
    where "handle" is an identifier (small integer) that uniquely references
    an object in the local context /that may reside on another processor/
    (which means the "pointer" approach is inappropriate) so the developer
    doesn't have to deal with the RMI mechanisms.]

    Pascal uses this exact approach. The absence of true pointers is
    crippling for hardware control, which is a big part of the reason that
    C prevailed.

    I don't eschew pointers. Rather, if the object being referenced can
    be remote, then a pointer is meaningless; what value should the pointer
    have if the referenced object resides in some CPU at some address in
    some address space at the end of a network cable?

    I assume that RMI is Remote Module or Method Invocation. These are

    The latter. Like RPC (instead of IPC) but in an OOPS context.

    inherently synchronous (like Ada rendezvous) and are crippling for
    realtime software of any complexity - the software soon ends up
    deadlocked, with everybody waiting for everybody else to do something.

    There is nothing that inherently *requires* an RMI to be synchronous.
    This is only necessary if the return value is required, *there*.
    E.g., actions that likely will take a fair bit of time to execute
    are often more easily implemented as asynchronous invocations
    (e.g., node127=>PowerOn()). But, these need to be few enough that the developer can keep track of "outstanding business"; expecting every
    remote interaction to be asynchronous means you end up having to catch
    a wide variety of diverse replies and sort out how they correlate
    with your requests (that are now "gone"). Many developers have a hard
    time trying to deal with this decoupled cause-effect relationship...
    especially if the result is a failure indication (How do I
    recover now that I've already *finished* executing that bit of code?)

    But, synchronous programming is far easier to debug as you don't
    have to keep track of outstanding asynchronous requests that
    might "return" at some arbitrary point in the future. As the
    device executing the method is not constrained by the realities
    of the local client, there is no way to predict when it will
    have a result available.
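
    To show what that "outstanding business" bookkeeping looks like,
    here is a bare-bones sketch (every name in it is invented for
    illustration, not taken from any particular system): each
    asynchronous request gets a correlation id, and a reply arriving at
    some arbitrary later time has to be matched back to the request --
    and to enough saved context to act on a failure.

    #include <stdio.h>

    #define MAX_PENDING 32

    typedef void (*completion_fn)(int status, void *context);

    typedef struct {
        int           in_use;
        int           correlation_id;
        completion_fn done;        /* what to run when the reply shows up */
        void         *context;     /* state the caller will need by then  */
    } pending_t;

    static pending_t pending[MAX_PENDING];
    static int       next_id = 1;

    /* Fire off a request and remember how to finish it later. */
    static int async_invoke(int handle, int method,
                            completion_fn done, void *ctx)
    {
        for (int i = 0; i < MAX_PENDING; i++) {
            if (!pending[i].in_use) {
                pending[i] = (pending_t){ 1, next_id++, done, ctx };
                printf("request %d: handle=%d method=%d sent\n",
                       pending[i].correlation_id, handle, method);
                return pending[i].correlation_id;
            }
        }
        return -1;                 /* too much outstanding business */
    }

    /* Called when a reply arrives -- in whatever order the remote
       node(s) happen to produce them. */
    static void on_reply(int correlation_id, int status)
    {
        for (int i = 0; i < MAX_PENDING; i++) {
            if (pending[i].in_use &&
                pending[i].correlation_id == correlation_id) {
                pending[i].in_use = 0;
                pending[i].done(status, pending[i].context);
                return;
            }
        }
        printf("reply %d matches nothing we asked for\n", correlation_id);
    }

    static void power_on_done(int status, void *context)
    {
        printf("PowerOn on %s finished with status %d\n",
               (char *)context, status);
        /* The awkward part: the code that issued the request returned
           long ago, so recovery from a failure has to happen here. */
    }

    int main(void)
    {
        int id = async_invoke(127 /* node127 */, 1 /* PowerOn */,
                              power_on_done, (void *)"node127");
        /* ... the caller goes on about its business ... */
        on_reply(id, 0);           /* reply arrives at some later time */
        return 0;
    }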

    This is driven by the fact that the real world has uncorrelated
    events, capable of happening in any order, so no program that requires
    that events be ordered can survive.

    You only expect the first event you await to happen before you
    *expect* the second. That, because the second may have some (opaque) dependence on the first.

    There is a benchmark for message-passing in realtime software where
    there is a ring of threads or processes passing a message around the ring
    any number of times. This is modeled on the central structure of many
    kinds of radar.

    So, like most benchmarks, it is of limited *general* use.

    Even one remote invocation will cause it to jam. As
    will sending a message to oneself. Only asynchronous message passing
    will work.

    Joe Gwinn

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Joe Gwinn@21:1/5 to blockedofcourse@foo.invalid on Sun Oct 20 16:21:08 2024
    On Sat, 19 Oct 2024 17:15:24 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    On 10/19/2024 3:26 PM, Joe Gwinn wrote:
    Will an average *coder* (someone who has managed to figure out how to
    get a program to run and proclaimed himself a coder thereafter)
    see the differences in his "product" (i.e., the code he has written)
    and the blemishes/shortcomings it contains?

    Well, we had the developers we had, and the team was large enough that
    they cannot all be superstars in any language.

    And business targets "average performers" as the effort to hire
    and retain "superstars" limits what the company can accomplish.

    It's a little bit deeper than that. Startups can afford to have a
    large fraction of superstars (so long as they like each other) because
    the need for spear carriers is minimal in that world.

    But for industrial scale, there are lots of simpler and more boring
    jobs that must also be done, thus diluting the superstars.

    War story: I used to run an Operating System Section, and one thing
    we needed to develop was hardware memory test programs for use in the
    factory. We had a hell of a lot of trouble getting this done because
    our programmers point-blank refused to do such test programs.

    One fine day, it occurred to me that the problem was that we were
    trying to use race horses to pull plows. So I went out to get the
    human equivalent of a plow horse, one that was a tad autistic and so
    would not be bored. This worked quite well. Fit the tool to the job.


    I was still scratching my head about why Pascal was so different than
    C, so I looked for the original intent of the founders. Which I found in the Introductions in the Pascal Report and K&R C: Pascal was
    intended for teaching Computer Science students their first
    programming language, while C was intended for implementing large
    systems, like the Unix kernel.

    Wirth maintained a KISS attitude in ALL of his endeavors. He
    failed to see that requiring forward declarations wasn't really
    making it any simpler /for the coders/. Compilers get written
    and revised "a few times" but *used* thousands of times. Why
    favor the compiler writer over the developer?

    Because computers were quite expensive then (circa 1982), and so
    Pascal was optimized to eliminate as much of the compiler task as
    possible, given that teaching languages are used to solve toy
    problems, the focus being learning to program, not to deliver
    efficient working code for something industrial-scale in nature.

    I went to school in the mid 70's. Each *course* had its own
    computer system (in addition to the school-wide "computing service")
    because each professor had his own slant on how he wanted to
    teach his courseware. We wrote code in Pascal, PL/1, LISP, Algol,
    Fortran, SNOBOL, and a variety of "toy" languages designed to
    illustrate specific concepts and OS approaches. I can't recall
    compile time ever being an issue (but, the largest classes had
    fewer than 400 students)

    I graduated in 1969, and there were no computer courses on offer near
    me except Basic programming, which I took.

    Ten years later, I got a night-school masters degree in Computer
    Science.


    Prior operating systems were all written in assembly code, and so were not portable between vendors, so Unix needed to be written in
    something that could be ported, and yet was sufficient to implement a
    OS kernel. Nor can one write an OS in Pascal.

    You can write an OS in Pascal -- but with lots of "helper functions"
    that defeat the purpose of the HLL's "safety mechanisms".

    Yes, lots. They were generally written in assembler, and it was
    estimated that about 20% of the code would have to be in assembly if
    Pascal were used, based on a prior project that had done just that a
    few years earlier.

    Yes. The same is true of eking out the last bits of performance
    from OSs written in C. There are too many hardware oddities that
    languages can't realistically address (without tying themselves
    unduly to a particular architecture).

    The target computers were pretty spare, multiple Motorola 68000
    single-board computers in a VME crate or the like. I recall that a
    one megahertz instruction rate was considered really fast then.

    Even the 645 ran at ~500KHz (!). Yet, supported hundreds of users
    doing all sorts of different tasks. (I think the 6180 ran at
    ~1MHz).

    Those were the days. Our computers did integer arithmetic only,
    because floating-point was done only in software and was dog slow.

    And we needed multi-precision integer arithmetic for many things,
    using scaled binary to handle the needed precision and dynamic range.


    But, each of these could exploit the fact that users
    don't consume all of the resources available /at any instant/
    on a processor.

    Contrast that with moving to the private sector and having
    an 8b CPU hosting your development system (with dog slow
    storage devices).

    A realtime system can definitely consume a goodly fraction of the
    computers.


    Much was made by the Pascal folk of the cost of software maintenance,
    but on the scale of a radar, maintenance was dominated by the
    hardware, and software maintenance was a roundoff error on the total
    cost of ownership. The electric bill was also larger.

    There likely is less call for change in such an "appliance".
    Devices with richer UIs tend to see more feature creep.
    This was one of Wirth's pet peeves; the fact that "designers"
    were just throwing features together instead of THINKING about
    which were truly needed. E.g., Oberon looks like something
    out of the 1980's...

    In the 1970s, there was no such thing as such an appliance.

    Nor did appliances like stoves and toasters possess a computer.


    This did work - only something like 4% of Unix had to be written in
    assembly, and it was simply rewritten for each new family of
    computers. (Turned out to be 6%.)

    The conclusion was to use C: It was designed for the implementation
    of large realtime systems, while Pascal was designed as a teaching
    language, and is somewhat slow and awkward for realtime systems,
    forcing the use of various sidesteps, and much assembly code. Speed
    and the ability to drive hardware directly are the dominant issues
    controlling that part of development cost and risk that is sensitive
    to choice of implementation language.

    One can write reliable code in C. But, there has to be discipline
    imposed (self or otherwise). Having an awareness of the underlying
    hardware goes a long way to making this adjustment.

    I had to write a driver for a PROM Programmer in Pascal. It was
    a dreadful experience! And, required an entirely different
    mindset. Things that you would do in C (or ASM) had incredibly
    inefficient analogs in Pascal.

    E.g., you could easily create an ASCII character for a particular
    hex-digit and concatenate these to form a "byte"; then those
    to form a word/address, etc. (imagine doing that for every byte
    you have to ship across to the programmer!) In Pascal, you spent
    all your time in call/return instead of actually doing any *work*!
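
    (In C, the whole exercise is a couple of shifts and a table lookup;
    a minimal sketch with made-up helper names, not the actual driver:)

        #include <stdio.h>
        #include <stdint.h>

        /* convert one 4-bit nibble to its ASCII hex digit */
        static char hex_digit(uint8_t nibble)
        {
            return "0123456789ABCDEF"[nibble & 0x0F];
        }

        /* render a byte as two ASCII characters, ready to ship out */
        static void byte_to_hex(uint8_t b, char out[2])
        {
            out[0] = hex_digit(b >> 4);
            out[1] = hex_digit(b);
        }

        int main(void)
        {
            char buf[2];
            byte_to_hex(0x3F, buf);             /* produces "3F" */
            printf("%c%c\n", buf[0], buf[1]);
            return 0;
        }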

    Yes, a bullet dodged:

    So the Pascal crowd fell silent, and C was chosen and successfully
    used.

    The Ada Mandate was rescinded maybe ten years later. The ISO-OSI
    mandate fell a year or so later, slain by TCP/IP.

    I had to make a similar decision, early on. It's really easy to get
    on a soapbox and preach how it *should* be done. But, if you expect
    (and want) others to adopt and embellish your work, you have to choose
    an implementation that they will accept, if not "embrace".

    And, this without requiring scads of overhead (people and other
    resources) to accomplish a particular goal.

    Key in this is figuring out how to *hide* complexity so a user
    (of varying degrees of capability across a wide spectrum) can
    get something to work within the constraints you've laid out.

    Hidden complexity is still complexity, with complex failure modes
    rendered incomprehensible and random-looking to those unaware of
    what's going on behind the pretty facade.

    If you can't explain the bulk of a solution "seated, having a drink",
    then it is too complex. "Complex is anything that doesn't fit in a
    single brain".

    Well, current radar systems (and all manner of commercial products)
    contain many millions of lines of code. Fitting this into a few
    brains is kinda achieved using layered abstractions.

    This falls apart in the integration lab, when that which is hidden
    turns on its creators. Progress is paced by having some people who do
    know how it really works, despite the abstractions, visible and
    hidden.


    Explain how the filesystem on <whatever> works, internally. How
    does it layer onto storage media? How are "devices" hooked into it?
    Abstract mechanisms like pipes? Where does buffering come into
    play? ACLs?

    There are people who do know these things.

    This is "complex" because a legacy idea has been usurped to tie
    all of these things together.

    I prefer to eliminate such complexity. And not to confuse the
    programmers, or treat them like children.

    By picking good abstractions, you don't have to do either.
    But, you can't retrofit those abstractions to existing
    systems. And, too often, those systems have "precooked"
    mindsets.

    Yes. Actually it's always. And they don't know what they don't know,
    the unknown unknowns.


    War story from the days of Fortran, when I was the operating system
    expert: I had just these debates with the top application software
    guy, who claimed that all you needed was the top-level design of the
    software to debug the code.

    He had been struggling with a mysterious bug, where the code would
    [hang] soon after launch, every time. Code inspection and path tracing
    had all failed, for months. He challenged me to figure it out. I
    figured it out in ten minutes, by using OS-level tools, which provide
    access to a world completely unknown to the application software folk.
    The problem was how the compiler handled subroutines referenced in one
    module but not provided to the linker. Long story, but the resulting
    actual execution path was unrelated to the design of application
    software, and one had to see things in assembly to understand what was
    happening.

    (This war story has been repeated in one form or another many times
    over the following years. Have kernel debugger, will travel.)

    E.g., as I allow end users to write code (scripts), I can't
    assume they understand things like operator precedence, cancellation,
    etc. *I* have to address those issues in a way that allows them
    to remain ignorant and still get the results they expect/desire.

    The same applies to other "more advanced" levels of software
    development; the more minutiae that the developer has to contend with,
    the less happy he will be about the experience.

    [E.g., I modified my compiler to support a syntax of the form:
    handle=>method(arguments)
    an homage to:
    pointer->member(arguments)
    where "handle" is an identifier (small integer) that uniquely references >>> an object in the local context /that may reside on another processor/
    (which means the "pointer" approach is inappropriate) so the developer
    doesn't have to deal with the RMI mechanisms.]

    Pascal uses this exact approach. The absence of true pointers is
    crippling for hardware control, which is a big part of the reason that
    C prevailed.

    I don't eschew pointers. Rather, if the object being referenced can
    be remote, then a pointer is meaningless; what value should the pointer
    have if the referenced object resides in some CPU at some address in
    some address space at the end of a network cable?

    Remote meaning accessed via a comms link or LAN is not done using RMIs
    in my world - too slow and too asynchronous. Round-trip transit delay
    would kill you. Also, not all messages need guaranteed delivery, and
    it's expensive to provide that guarantee, so there need to be
    distinctions.


    I assume that RMI is Remote Module or Method Invocation. These are

    The latter. Like RPC (instead of IPC) but in an OOPS context.

    Object-Oriented stuff had its own set of problems, especially as
    originally implemented. My first encounter ended badly for the
    proposed system, as it turned out that the OO overhead was so high
    that the context switches between objects (tracks in this case) would
    over consume the computers, leaving not enough time to complete a
    horizon scan, never mind do anything useful. But that's a story for
    another day.


    inherently synchronous (like Ada rendezvous) and are crippling for
    realtime software of any complexity - the software soon ends up
    deadlocked, with everybody waiting for everybody else to do something.

    There is nothing that inherently *requires* an RMI to be synchronous.
    This is only necessary if the return value is required, *there*.
    E.g., actions that likely will take a fair bit of time to execute
    are often more easily implemented as asynchronous invocations
    (e.g., node127=>PowerOn()). But, these need to be few enough that the developer can keep track of "outstanding business"; expecting every
    remote interaction to be asynchronous means you end up having to catch
    a wide variety of diverse replies and sort out how they correlate
    with your requests (that are now "gone"). Many developers have a hard
    time trying to deal with this decoupled cause-effect relationship... especially if the result is a failure indication (How do I
    recover now that I've already *finished* executing that bit of code?)

    At the time, RMI was implemented synchronously only, and it did not
    matter if a response was required, you would always stall at that call
    until it completed. Meaning that you could not respond to the random
    arrival of an unrelated event.

    War story: Some years later, in the late 1980s, I was asked to assess
    an academic operating system called Alpha for possible use in realtime applications. It was strictly synchronous. Turned out that if you
    made a typing mistake or the like, one could not stop the stream of
    error messages without doing a full reboot. There was a Control-Z
    command, but it could not be processed because the OS was otherwise
    occupied with an endless loop. Oops. End of assessment.

    When I developed the message-passing ring test, it was to flush out
    systems that were synchronous at the core, regardless of marketing
    bafflegab.


    But, synchronous programming is far easier to debug as you don't
    have to keep track of outstanding asynchronous requests that
    might "return" at some arbitrary point in the future. As the
    device executing the method is not constrained by the realities
    of the local client, there is no way to predict when it will
    have a result available.

    Well, the alternative is to use a different paradigm entirely, where
    for every event type there is a dedicated responder, which takes the appropriate course of action. Mostly this does not involve any other
    action type, but if necessary it is handled here. Typically, the
    overall architecture of this approach is a Finite State Machine.


    This is driven by the fact that the real world has uncorrelated
    events, capable of happening in any order, so no program that requires
    that events be ordered can survive.

    You only expect the first event you await to happen before you
    *expect* the second. That, because the second may have some (opaque) dependence on the first.

    Or, more commonly, be statistically uncorrelated random. Like
    airplanes flying into coverage as weather patterns drift by as flocks
    of geese flap on by as ...


    There is a benchmark for message-passing in realtime software where
    there is ring of threads or processes passing message around the ring
    any number of times. This is modeled on the central structure of many
    kinds of radar.

    So, like most benchmarks, is of limited *general* use.

    True, but what's the point? It is general for that class of problems,
    when the intent is to eliminate operating systems that cannot work for
    a realtime system. There are also tests for how the thread scheduler
    works, to see if one can respond immediately to an event, or must wait
    until the scheduler makes a pass (waiting is forbidden). There are
    many perfectly fine operating systems that will flunk these tests, and
    yet are widely used. But not for realtime.

    Joe Gwinn



    Even one remote invocation will cause it to jam. As
    will sending a message to oneself. Only asynchronous message passing
    will work.

    Joe Gwinn


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Joe Gwinn on Sun Oct 20 19:06:40 2024
    On 10/20/2024 1:21 PM, Joe Gwinn wrote:

    Well, we had the developers we had, and the team was large enough that
    they cannot all be superstars in any language.

    And business targets "average performers" as the effort to hire
    and retain "superstars" limits what the company can accomplish.

    It's a little bit deeper than that. Startups can afford to have a
    large fraction of superstars (so long as they like each other) because
    the need for spear carriers is minimal in that world.

    But for industrial scale, there are lots of simpler and more boring
    jobs that must also be done, thus diluting the superstars.

    The problem is that those "masses" tend to remain at the same skill
    level, indefinitely. And, often resist efforts to learn/change/grow.
    Hiring a bunch of ditch diggers is fine -- if you will ALWAYS need
    to dig ditches. But, *hoping* that some will aspire to become
    *architects* is a shot in the dark...

    War story: I used to run an Operating System Section, and one thing
    we needed to develop was hardware memory test programs for use in the factory. We had a hell of a lot of trouble getting this done because
    our programmers point-blank refused to do such test programs.

    That remains true of *most* "programmers". It is not seen as
    an essential part of their job.

    One fine day, it occurred to me that the problem was that we were
    trying to use race horses to pull plows. So I went out to get the
    human equivalent of a plow horse, one that was a tad autistic and so
    would not be bored. This worked quite well. Fit the tool to the job.

    I had a boss who would have approached it differently. He would have
    had one of the technicians "compromise" their prototype/development
    hardware. E.g., crack a few *individual* cores to render them ineffective
    as storage media. Then, let the prima donnas chase mysterious bugs
    in *their* code. Until they started to question the functionality of
    the hardware. "Gee, sure would be nice if we could *prove* the
    hardware was at fault. Otherwise, it *must* just be bugs in your code!"

    I went to school in the mid 70's. Each *course* had its own
    computer system (in addition to the school-wide "computing service")
    because each professor had his own slant on how he wanted to
    teach his courseware. We wrote code in Pascal, PL/1, LISP, Algol,
    Fortran, SNOBOL, and a variety of "toy" languages designed to
    illustrate specific concepts and OS approaches. I can't recall
    compile time ever being an issue (but, the largest classes had
    fewer than 400 students)

    I graduated in 1969, and there were no computer courses on offer near
    me except Basic programming, which I took.

    I was writing FORTRAN code (on Hollerith cards) in that time frame
    (I was attending a local college, nights, while in jr high school)
    Of course, it depends on the resources available to you.

    OTOH, I never had the opportunity to use a glass TTY until AFTER
    college (prior to that, everything was hardcopy output -- DECwriters,
    Trendata 1200's, etc.)

    Ten years later, I got a night-school masters degree in Computer
    Science.

    The target computers were pretty spare, multiple Motorola 68000
    single-board computers in a VME crate or the like. I recall that a
    one megahertz instruction rate was considered really fast then.

    Even the 645 ran at ~500KHz (!). Yet, supported hundreds of users
    doing all sorts of different tasks. (I think the 6180 ran at
    ~1MHz).

    Those were the days. Our computers did integer arithmetic only,
    because floating-point was done only in software and was dog slow.

    And we needed multi-precision integer arithmetic for many things,
    using scaled binary to handle the needed precision and dynamic range.

    Yes. It is still used (Q-notation) in cases where your code may not want
    to rely on FPU support and/or has to run really fast. I make extensive
    use of it in my gesture recognizer where I am trying to fit sampled
    points to the *best* of N predefined curves, in a handful of milliseconds (interactive interface).
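
    A minimal sketch of the idea in C -- Q16.16 chosen arbitrarily for
    illustration, not the recognizer's actual format:

        #include <stdio.h>
        #include <stdint.h>

        /* Q16.16: 16 integer bits, 16 fractional bits, in a 32-bit word */
        typedef int32_t q16_16;

        #define Q_ONE (1 << 16)

        static q16_16 q_from_double(double x) { return (q16_16)(x * Q_ONE); }
        static double q_to_double(q16_16 x)   { return (double)x / Q_ONE; }

        /* multiply: widen to 64 bits, then shift back down to Q16.16 */
        static q16_16 q_mul(q16_16 a, q16_16 b)
        {
            return (q16_16)(((int64_t)a * b) >> 16);
        }

        int main(void)
        {
            q16_16 a = q_from_double(3.25);
            q16_16 b = q_from_double(0.5);
            printf("%f\n", q_to_double(q_mul(a, b)));   /* 1.625 */
            return 0;
        }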

    Much was made by the Pascal folk of the cost of software maintenance,
    but on the scale of a radar, maintenance was dominated by the
    hardware, and software maintenance was a roundoff error on the total
    cost of ownership. The electric bill was also larger.

    There likely is less call for change in such an "appliance".
    Devices with richer UIs tend to see more feature creep.
    This was one of Wirth's pet peeves; the fact that "designers"
    were just throwing features together instead of THINKING about
    which were truly needed. E.g., Oberon looks like something
    out of the 1980's...

    In the 1970s, there was no such thing as such an appliance.

    Anything that performed a fixed task. My first commercial product
    was a microprocessor-based LORAN position plotter (mid 70's).
    Now, we call them "deeply embedded" devices -- or, my preference,
    "appliances".

    Nor did appliances like stoves and toasters possess a computer.

    Key in this is figuring out how to *hide* complexity so a user
    (of varying degrees of capability across a wide spectrum) can
    get something to work within the constraints you've laid out.

    Hidden complexity is still complexity, with complex failure modes
    rendered incomprehensible and random-looking to those unaware of
    what's going on behind the pretty facade.

    If you can't explain the bulk of a solution "seated, having a drink",
    then it is too complex. "Complex is anything that doesn't fit in a
    single brain".

    Well, current radar systems (and all manner of commercial products)
    contain many millions of lines of code. Fitting this into a few
    brains is kinda achieved using layered abstractions.

    You can require a shitload of code to implement a simple abstraction.
    E.g., the whole notion of a Virtual Machine is easily captured in
    a single imagination, despite the complexity of making it happen on
    a particular set of commodity hardware.

    This falls apart in the integration lab, when that which is hidden
    turns on its creators. Progress is paced by having some people who do
    know how it really works, despite the abstractions, visible and
    hidden.

    Explain how the filesystem on <whatever> works, internally. How
    does it layer onto storage media? How are "devices" hooked into it?
    Abstract mechanisms like pipes? Where does buffering come into
    play? ACLs?

    There are people who do know these things.

    But that special knowledge is required due to a poor choice of
    abstractions. And, trying to shoehorn new notions into old
    paradigms. Just so you could leverage an existing name
    resolver for system objects that weren't part of the idea of
    "files".

    I, for example, dynamically create a "context" for each process.
    It is unique to that process so no other process can see its
    contents or access them. The whole notion of a separate
    ACL layered *atop* this is moot; if you aren't supposed to
    access something, then there won't be a *name* for that
    thing in the context provided to you!

    A context is then just a bag of (name, object) tuples and
    a set of rules for the resolver that operates on that context.

    So, a program that is responsible for printing paychecks would
    have a context *created* for it that contained:
    Clock -- something that can be queried for the current time/date
    Log -- a place to record its actions
    Printer -- a device that can materialize the paychecks
    and, some *number* of:
    Paycheck -- a description of the payee and amount
    i.e., the name "Paycheck" need not be unique in this context
    (why artificially force each paycheck to have a unique name
    just because you want to use an archaic namespace concept to
    bind an identifier/name to each? they're ALL just "paychecks")

    // resolve the objects governing the process
    theClock = MyContext=>resolve("Clock")
    theLog = MyContext=>resolve("Log")
    thePrinter = MyContext=>resolve("Printer")
    theDevice = thePrinter=>FriendlyName()

    // process each paycheck
    while( thePaycheck = MyContext=>resolve("Paycheck") ) {
        // get the parameters of interest for this paycheck
        thePayee = thePaycheck=>payee()
        theAmount = thePaycheck=>amount()
        theTime = theClock=>now()

        // print the check
        thePrinter=>write("Pay to the order of "
                          , thePayee
                          , "EXACTLY "
                          , stringify(theAmount)
                          )

        // make a record of the transaction
        theLog=>write("Drafted a disbursement to "
                      , thePayee
                      , " in the amount of "
                      , theAmount
                      , " at "
                      , theTime
                      , " printed on "
                      , FriendlyName
                      )

        // discard the processed paycheck
        MyContext=>unlink(thePaycheck)
    }

    // no more "Paycheck"s to process

    You don't need to worry about *where* each of these objects reside,
    how they are implemented, etc. I can move them at runtime -- even
    WHILE the code is executing -- if that would be a better use of
    resources available at the current time! Each of the objects
    bound to "Paycheck" names could reside in "Personnel's" computer
    as that would likely be able to support the number of employees
    on hand (thousands?). And, by unlinking the name from my namespace,
    I've just "forgotten" how to access that particular PROCESSED
    "Paycheck"; the original data remains intact (as it should
    because *I* shouldn't be able to dick with it!)

    "The Network is the Computer" -- welcome to the 1980's! (we'll
    get there, sooner or later!)

    And, if the developer happens to use a method that is not supported
    on an object of that particular type, the compiler will grouse about
    it. If the process tries to use a method for which it doesn't have
    permission (bound into each object reference as "capabilities"), the
    OS will not pass the associated message to the referenced object
    and the error handler will likely have been configured to KILL
    the process (you obviously THINK you should be able to do something
    that you can't -- so, you are buggy!)

    No need to run this process as a particular UID and configure
    the "files" in the portion of the file system hierarchy that
    you've set aside for its use -- hoping that no one ELSE will
    be able to peek into that area and harvest this information.

    No worry that the process might go rogue and try to access
    something it shouldn't -- like the "password" file -- because
    it can only access the objects for which it has been *given*
    names and only manipulate each of those through the capabilities
    that have been bound to those handle *instances* (i.e., someone
    else, obviously, has the power to create their *contents*!)
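
    One plausible shape for such a handle, sketched in C; the field names,
    capability bits, and per-process table are my own illustration of the
    idea, not the actual kernel structures:

        #include <stdint.h>
        #include <stdio.h>

        /* a handle is a small integer naming a slot in a per-process     */
        /* table; the process never sees addresses or object locations    */
        typedef uint16_t handle_t;

        typedef struct {
            uint32_t node_id;       /* processor currently hosting object */
            uint32_t object_id;     /* identifier local to that node      */
            uint32_t capabilities;  /* methods this *instance* may invoke */
        } object_ref;

        #define CAP_READ  (1u << 0)
        #define CAP_WRITE (1u << 1)

        #define MAX_HANDLES 64
        static object_ref per_process_table[MAX_HANDLES];

        /* kernel-side check before forwarding a method invocation */
        static int invoke(handle_t h, uint32_t required_cap)
        {
            object_ref *ref = &per_process_table[h];
            if ((ref->capabilities & required_cap) != required_cap) {
                /* caller thinks it can do something it can't: buggy */
                fprintf(stderr, "handle %u lacks capability %#x\n",
                        (unsigned)h, (unsigned)required_cap);
                return -1;          /* error handler may kill the process */
            }
            /* ...marshal the message, send to (node_id, object_id)... */
            return 0;
        }

        int main(void)
        {
            per_process_table[3] = (object_ref){ .node_id = 127,
                                                 .object_id = 42,
                                                 .capabilities = CAP_READ };
            invoke(3, CAP_READ);    /* allowed  */
            invoke(3, CAP_WRITE);   /* rejected */
            return 0;
        }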

    This is conceptually much cleaner. And, matches the way you
    would describe "printing paychecks" to another individual.

    Pascal uses this exact approach. The absence of true pointers is
    crippling for hardware control, which is a big part of the reason that
    C prevailed.

    I don't eschew pointers. Rather, if the object being referenced can
    be remote, then a pointer is meaningless; what value should the pointer
    have if the referenced object resides in some CPU at some address in
    some address space at the end of a network cable?

    Remote meaning accessed via a comms link or LAN is not done using RMIs
    in my world - too slow and too asynchronous. Round-trip transit delay
    would kill you. Also, not all messages need guaranteed delivery, and
    it's expensive to provide that guarantee, so there need to be
    distinctions.

    Horses for courses. "Real Time" only means that a deadline exists
    for a task to complete. It cares nothing about how "immediate" or
    "often" such deadlines occur. A deep space probe has deadlines
    regarding when it must make orbital adjustment procedures
    (flybys). They may be YEARS in the future. And, only a few
    in number. But, miss them and the "task" is botched.

    I assume that RMI is Remote Module or Method Invocation. These are

    The latter. Like RPC (instead of IPC) but in an OOPS context.

    Object-Oriented stuff had its own set of problems, especially as
    originally implemented. My first encounter ended badly for the
    proposed system, as it turned out that the OO overhead was so high
    that the context switches between objects (tracks in this case) would
    over consume the computers, leaving not enough time to complete a
    horizon scan, never mind do anything useful. But that's a story for
    another day.

    OOPS as embodied in programming languages is fraught with all
    sorts of overheads that don't often apply in an implementation.

    However, dealing with "things" that have particular "operations"
    and "properties" is a convenient way to model a solution.

    In ages past, the paycheck program would likely have been driven by a
    text file with N columns: payee, amount, date, department, etc.
    The program would extract tuples, sequentially, from that and
    build a paycheck before moving on to the next line.

    Of course, if something happened mid file, you were now faced
    with the problem of tracking WHERE you had progressed and
    restarting from THAT spot, exactly. In my implementation,
    you just restart the program and it processes any "Paycheck"s
    that haven't yet been unlinked from its namespace.

    inherently synchronous (like Ada rendezvous) and are crippling for
    realtime software of any complexity - the software soon ends up
    deadlocked, with everybody waiting for everybody else to do something.

    There is nothing that inherently *requires* an RMI to be synchronous.
    This is only necessary if the return value is required, *there*.
    E.g., actions that likely will take a fair bit of time to execute
    are often more easily implemented as asynchronous invocations
    (e.g., node127=>PowerOn()). But, these need to be few enough that the
    developer can keep track of "outstanding business"; expecting every
    remote interaction to be asynchronous means you end up having to catch
    a wide variety of diverse replies and sort out how they correlate
    with your requests (that are now "gone"). Many developers have a hard
    time trying to deal with this decoupled cause-effect relationship...
    especially if the result is a failure indication (How do I
    recover now that I've already *finished* executing that bit of code?)

    At the time, RMI was implemented synchronously only, and it did not
    matter if a response was required, you would always stall at that call
    until it completed. Meaning that you could not respond to the random
    arrival of an unrelated event.

    For asynchronous services, I would create a separate thread just
    to handle those replies as I wouldn't want my main thread having to
    be "interrupted" by late arriving messages that *it* would have to
    process. The second thread could convert those messages into
    flags (or other data) that the main thread could examine when it
    NEEDED to know about those other activities.

    E.g., the first invocation of the write() method on "thePrinter"
    could have caused the process that *implements* that printer
    to power up the printer. While waiting for it to come on-line,
    it could buffer the write() requests so that they would be ready
    when the printer actually *did* come on-line.
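
    A bare-bones sketch of that reply-catching thread using POSIX threads;
    the names (reply_pump(), printer_ready) are invented for illustration:

        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdio.h>
        #include <unistd.h>

        /* flag the main thread examines only when it NEEDS to know */
        static atomic_int printer_ready;

        /* stand-in for blocking on the transport for the next reply */
        static int wait_for_reply(void)
        {
            sleep(1);
            return 1;               /* pretend "printer on-line" arrived */
        }

        /* second thread: absorbs late-arriving replies so the main     */
        /* thread is never interrupted by them                          */
        static void *reply_pump(void *arg)
        {
            (void)arg;
            for (;;) {
                if (wait_for_reply() == 1)
                    atomic_store(&printer_ready, 1);
                /* ...other reply types would set other flags/data... */
            }
            return NULL;
        }

        int main(void)
        {
            pthread_t t;
            pthread_create(&t, NULL, reply_pump, NULL);

            /* main thread consults the flag when it cares to */
            while (!atomic_load(&printer_ready))
                usleep(10000);      /* or do useful work instead */

            puts("printer came on-line; flush the buffered write()s");
            return 0;
        }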

    War story: Some years later, in the late 1980s, I was asked to assess
    an academic operating system called Alpha for possible use in realtime applications. It was strictly synchronous. Turned out that if you
    made a typing mistake or the like, one could not stop the stream of
    error messages without doing a full reboot. There was a Control-Z
    command, but it could not be processed because the OS was otherwise
    occupied with an endless loop. Oops. End of assessment.

    *Jensen's* Alpha distributed processing across multiple domains.
    So, "signals" had to chase the thread as it executed. I have a
    similar problem and rely on killing off a resource to notify
    its consumers of its death and, thus, terminate their execution.

    Of course, it can never be instantaneous as there are finite transit
    delays to get from one node (where part of the process may be executing)
    to another, etc.

    But, my applications are intended to be run-to-completion, not
    interactive.

    When I developed the message-passing ring test, it was to flush out
    systems that were synchronous at the core, regardless of marketing
    bafflegab.

    But, synchronous programming is far easier to debug as you don't
    have to keep track of outstanding asynchronous requests that
    might "return" at some arbitrary point in the future. As the
    device executing the method is not constrained by the realities
    of the local client, there is no way to predict when it will
    have a result available.

    Well, the alternative is to use a different paradigm entirely, where
    for every event type there is a dedicated responder, which takes the appropriate course of action. Mostly this does not involve any other
    action type, but if necessary it is handled here. Typically, the
    overall architecture of this approach is a Finite State Machine.
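
    For concreteness, a toy rendering of that dedicated-responder
    arrangement in C; the event names are invented for illustration:

        #include <stdio.h>

        typedef enum { EV_TRACK_UPDATE, EV_WEATHER, EV_TIMEOUT,
                       EV_COUNT } event_t;

        /* one dedicated responder per event type; none waits on another */
        static void on_track_update(void) { puts("update track file");  }
        static void on_weather(void)      { puts("adjust clutter map"); }
        static void on_timeout(void)      { puts("drop stale track");   }

        static void (*const responder[EV_COUNT])(void) = {
            [EV_TRACK_UPDATE] = on_track_update,
            [EV_WEATHER]      = on_weather,
            [EV_TIMEOUT]      = on_timeout,
        };

        int main(void)
        {
            /* events arrive in any order; each is dispatched, not awaited */
            event_t arrivals[] = { EV_WEATHER, EV_TRACK_UPDATE, EV_TIMEOUT };
            for (unsigned i = 0; i < sizeof arrivals / sizeof arrivals[0]; i++)
                responder[arrivals[i]]();
            return 0;
        }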

    But then you are constrained to having those *dedicated* agents.
    What if a device goes down or is taken off line (maintenance)?
    I address this by simply moving the object to another node that
    has resources available to service its requests.

    So, if the "Personnel" computer (above) had to go offline, I would
    move all of the Paycheck objects to some other server that could
    serve up "paycheck" objects. The payroll program wouldn't be aware
    of this as the handle for each "paycheck" would just resolve to
    the same object but on a different server.

    The advantage, here, is that you can draw on ALL system resources to
    meet any demand instead of being constrained by the resources in
    a particular "box". E.g., my garage door opener can be tasked with
    retraining the speech recognizer. Or, controlling the HVAC!

    This is driven by the fact that the real world has uncorrelated
    events, capable of happening in any order, so no program that requires
    that events be ordered can survive.

    You only expect the first event you await to happen before you
    *expect* the second. That, because the second may have some (opaque)
    dependence on the first.

    Or, more commonly, be statistically uncorrelated random. Like
    airplanes flying into coverage as weather patterns drift by as flocks
    of geese flap on by as ...

    That depends on the process(es) being monitored/controlled.
    E.g., in a tablet press, an individual tablet can't be compressed
    until its granulation has been fed into its die (mold).
    And, can't be ejected until it has been compressed.

    So, there is an inherent order in these events, regardless of when
    they *appear* to occur.

    Sure, someone could be printing paychecks while I'm making
    tablets. But, the two processes don't interact so one cares
    nothing about the other.

    There is a benchmark for message-passing in realtime software where
    there is ring of threads or processes passing message around the ring
    any number of times. This is modeled on the central structure of many
    kinds of radar.

    So, like most benchmarks, is of limited *general* use.

    True, but what's the point? It is general for that class of problems,
    when the intent is to eliminate operating systems that cannot work for
    a realtime system.

    What *proof* do you have of that assertion? RT systems have been built
    and deployed with synchronous interfaces for decades. Even if those
    are *implied* (i.e., using a FIFO/pipe to connect two processes).
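
    (e.g., the implicit rendezvous a plain pipe gives you; a minimal
    sketch, not production code:)

        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/wait.h>

        int main(void)
        {
            int fd[2];
            if (pipe(fd) != 0) return 1;

            if (fork() == 0) {              /* consumer process          */
                close(fd[1]);
                char buf[32];
                /* read() blocks: the consumer is synchronized to the    */
                /* producer's write() with no explicit rendezvous at all */
                ssize_t n = read(fd[0], buf, sizeof buf - 1);
                if (n > 0) { buf[n] = '\0'; printf("got: %s\n", buf); }
                _exit(0);
            }

            close(fd[0]);                   /* producer process          */
            const char *msg = "sample ready";
            write(fd[1], msg, strlen(msg));
            close(fd[1]);
            wait(NULL);
            return 0;
        }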

    There are also tests for how the thread scheduler
    works, to see if one can respond immediately to an event, or must wait
    until the scheduler makes a pass (waiting is forbidden).

    Waiting is only required if the OS isn't preemptive. Whether or
    not it is "forbidden" is a function of the problem space being addressed.

    There are
    many perfectly fine operating systems that will flunk these tests, and
    yet are widely used. But not for realtime.

    Again, why not? Real time only means a deadline exists. It says
    nothing about frequency, number, nearness, etc. If the OS is
    deterministic, then its behavior can be factored into the
    solution.

    I wrote a tape drive driver. The hardware was an 8b latch that captured
    the eight data tracks off the tape. And, held them until the transport delivered another!

    So, every 6 microseconds, I had to capture a byte before it would be
    overrun by the next byte. This is realtime NOT because of the nearness
    of the deadlines ("the next byte") but, rather, because of the
    deadline itself. If I slowed the transport down to 1% of its
    normal speed, it would still be real-time -- but, the deadline would
    now be at t=600us.

    "Hard" or "soft"? If I *missed* the datum, it wasn't strictly
    "lost"; it just meant that I had to do a "read reverse" to capture
    it coming back under the head. If I had to do this too often,
    performance would likely have been deemed unacceptable.

    OTOH, if I needed the data regardless of how long it took, then
    such an approach *would* be tolerated.
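
    In outline, the shape of that driver's inner loop -- simulated here in
    plain C so it runs anywhere; the real thing read a memory-mapped latch,
    not these stand-in variables:

        #include <stdint.h>
        #include <stdbool.h>
        #include <stdio.h>

        /* simulated transport interface: latch plus ready/overrun state */
        static uint8_t latch;       /* the 8b latch holding current byte */
        static bool    ready;       /* transport has loaded the latch    */
        static bool    overrun;     /* a byte arrived before we read it  */

        static void transport_delivers(uint8_t b)
        {
            if (ready) overrun = true;  /* previous byte never taken in time */
            latch = b;
            ready = true;
        }

        /* each byte must be taken before the next lands (~6 us at full  */
        /* speed); a miss is recovered later with a read-reverse pass    */
        static bool capture_byte(uint8_t *out)
        {
            if (overrun)
                return false;       /* caller schedules the read-reverse */
            *out = latch;
            ready = false;
            return true;
        }

        int main(void)
        {
            uint8_t b;
            transport_delivers(0xA5);
            if (capture_byte(&b))
                printf("captured %02X\n", b);

            transport_delivers(0x01);   /* we dawdle...                  */
            transport_delivers(0x02);   /* ...and the latch is overrun   */
            if (!capture_byte(&b))
                puts("overrun: read-reverse to recover the lost byte");
            return 0;
        }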

    Folks always want to claim they have HRT systems -- an admission that
    they haven't sorted out how to convert the problem into a SRT one
    (which is considerably harder as it requires admitting that you likely
    WILL miss some deadlines; then what?).

    If an incoming projectile isn't intercepted (before it inflicts damage)
    because your system has been stressed beyond its operational limits,
    do you shut down the system and accept defeat?

    If you designed with that sort of "hard" deadline in mind, you likely
    are throwing runtime resources at problems that you won't be able
    to address -- at the expense of the *next* problem that you possibly
    COULD have, had you not been distracted.

    The child's game of Wack-a-Mole is a great example. The process(es)
    have a release time defined by when the little bugger pokes his head up.
    The *deadline* is when he decides to take cover, again. If you can't
    wack him in that time period, you have failed.

    But, you don't *stop*!

    And, if you have started to take action on an "appearance" that you
    know you will not be able to "wack", you have hindered your
    performance on the *next* appearance; better to take the loss and
    prepare yourself for that *next*.

    I.e., in HRT problems, once the deadline has passed, there is
    no value to continuing to work on THAT problem. And, if you
    can anticipate that you won't meet that deadline, then aborting
    all work on it ASAP leaves you with more resources to throw
    at the NEXT instance.
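
    A skeleton of that abandon-the-lost-cause policy in C, using
    CLOCK_MONOTONIC; the names and the fixed cost estimate are
    illustrative only -- a real system would fold this into the scheduler:

        #include <stdio.h>
        #include <stdbool.h>
        #include <time.h>

        /* true if the remaining work can no longer finish by 'deadline', */
        /* so this instance should be abandoned, not chased               */
        static bool will_miss(const struct timespec *deadline,
                              long long remaining_work_ns)
        {
            struct timespec now;
            clock_gettime(CLOCK_MONOTONIC, &now);
            long long slack =
                (long long)(deadline->tv_sec - now.tv_sec) * 1000000000LL
                + (deadline->tv_nsec - now.tv_nsec);
            return slack < remaining_work_ns;
        }

        static void process_appearance(const struct timespec *deadline)
        {
            for (int step = 0; step < 100; step++) {
                if (will_miss(deadline, 1000000LL /* ~1 ms still needed */)) {
                    puts("won't make it -- drop this one, ready the next");
                    return;         /* free the resources immediately    */
                }
                /* ...one increment of work toward "wacking" this one... */
            }
            puts("wacked it in time");
        }

        int main(void)
        {
            struct timespec deadline;
            clock_gettime(CLOCK_MONOTONIC, &deadline);
            deadline.tv_sec += 1;   /* this appearance lasts one second  */
            process_appearance(&deadline);
            return 0;
        }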

    But, many designers are oblivious to this and keep chasing the
    current deadline -- demanding more and more resources to be
    able to CLAIM they can meet those. Amusingly, most have no way
    of knowing that they have missed a deadline save for the
    physical consequences ("Shit! They just nuked Dallas!").
    This because most RTOSs have no *real* concept of deadlines
    so the code can't adjust its "priorities".

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Don Y on Sun Oct 20 19:31:20 2024
    On 10/20/2024 7:06 PM, Don Y wrote:
        // resolve the objects governing the process
        theClock = MyContext=>resolve("Clock")
        theLog = MyContext=>resolve("Log")
        thePrinter = MyContext=>resolve("Printer")
        theDevice = thePrinter=>FriendlyName()

        // process each paycheck
        while( thePaycheck = MyContext=>resolve("Paycheck") ) {
            // get the parameters of interest for this paycheck
            thePayee = thePaycheck=>payee()
            theAmount = thePaycheck=>amount()
            theTime = theClock=>now()

            // print the check
            thePrinter=>write("Pay to the order of "
                              , thePayee
                              , "EXACTLY "
                              , stringify(theAmount)
                              )

            // make a record of the transaction
            theLog=>write("Drafted a disbursement to "
                          , thePayee
                          , " in the amount of "
                          , theAmount
                          , " at "
                          , theTime
                          , " printed on "
                          , FriendlyName

    "FriendlyName" s.b. "theDevice"

                          )

            // discard the processed paycheck
            MyContext=>unlink(thePaycheck)
         }

         // no more "Paycheck"s to process

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Joe Gwinn@21:1/5 to blockedofcourse@foo.invalid on Wed Oct 23 11:13:59 2024
    On Sun, 20 Oct 2024 19:06:40 -0700, Don Y
    <blockedofcourse@foo.invalid> wrote:

    On 10/20/2024 1:21 PM, Joe Gwinn wrote:

    Well, we had the developers we had, and the team was large enough that they cannot all be superstars in any language.

    And business targets "average performers" as the effort to hire
    and retain "superstars" limits what the company can accomplish.

    It's a little bit deeper than that. Startups can afford to have a
    large fraction of superstars (so long as they like each other) because
    the need for spear carriers is minimal in that world.

    But for industrial scale, there are lots of simpler and more boring
    jobs that must also be done, thus diluting the superstars.

    The problem is that those "masses" tend to remain at the same skill
    level, indefinitely. And, often resist efforts to learn/change/grow.
    Hiring a bunch of ditch diggers is fine -- if you will ALWAYS need
    to dig ditches. But, *hoping* that some will aspire to become
    *architects* is a shot in the dark...

    War story: I used to run an Operating System Section, and one thing
    we needed to develop was hardware memory test programs for use in the
    factory. We had a hell of a lot of trouble getting this done because
    our programmers point-blank refused to do such test programs.

    That remains true of *most* "programmers". It is not seen as
    an essential part of their job.

    Nor could you convince them. They wanted to do things that were hard, important, and interesting. And look good on a resume.


    One fine day, it occurred to me that the problem was that we were
    trying to use race horses to pull plows. So I went out to get the
    human equivalent of a plow horse, one that was a tad autistic and so
    would not be bored. This worked quite well. Fit the tool to the job.

    I had a boss who would have approached it differently. He would have
    had one of the technicians "compromise" their prototype/development
    hardware. E.g., crack a few *individual* cores to render them ineffective
    as storage media. Then, let the prima donnas chase mysterious bugs
    in *their* code. Until they started to question the functionality of
    the hardware. "Gee, sure would be nice if we could *prove* the
    hardware was at fault. Otherwise, it *must* just be bugs in your code!"

    Actually, the customers often *required* us to inject faults to prove
    that our spiffy fault-detection logic actually worked. It's a lot
    harder than it looks.


    I went to school in the mid 70's. Each *course* had its own
    computer system (in addition to the school-wide "computing service")
    because each professor had his own slant on how he wanted to
    teach his courseware. We wrote code in Pascal, PL/1, LISP, Algol,
    Fortran, SNOBOL, and a variety of "toy" languages designed to
    illustrate specific concepts and OS approaches. I can't recall
    compile time ever being an issue (but, the largest classes had
    fewer than 400 students)

    I graduated in 1969, and there were no computer courses on offer near
    me except Basic programming, which I took.

    I was writing FORTRAN code (on Hollerith cards) in that time frame
    (I was attending a local college, nights, while in jr high school)
    Of course, it depends on the resources available to you.

    I was doing Fortran on cards as well.


    OTOH, I never had the opportunity to use a glass TTY until AFTER
    college (prior to that, everything was hardcopy output -- DECwriters, Trendata 1200's, etc.)

    Real men used teletype machines, which required two real men to lift.
    I remember them well.


    Ten years later, I got a night-school masters degree in Computer
    Science.

    The target computers were pretty spare, multiple Motorola 68000
    single-board computers in a VME crate or the like. I recall that a
    one megahertz instruction rate was considered really fast then.

    Even the 645 ran at ~500KHz (!). Yet, supported hundreds of users
    doing all sorts of different tasks. (I think the 6180 ran at
    ~1MHz).

    Those were the days. Our computers did integer arithmetic only,
    because floating-point was done only in software and was dog slow.

    And we needed multi-precision integer arithmetic for many things,
    using scaled binary to handle the needed precision and dynamic range.

    Yes. It is still used (Q-notation) in cases where your code may not want
    to rely on FPU support and/or has to run really fast. I make extensive
    use of it in my gesture recognizer where I am trying to fit sampled
    points to the *best* of N predefined curves, in a handful of milliseconds (interactive interface).

    BTDT, though we fitted other curves to approximate such as trig
    functions.


    Much was made by the Pascal folk of the cost of software maintenance,
    but on the scale of a radar, maintenance was dominated by the
    hardware, and software maintenance was a roundoff error on the total
    cost of ownership. The electric bill was also larger.

    There likely is less call for change in such an "appliance".
    Devices with richer UIs tend to see more feature creep.
    This was one of Wirth's pet peeves; the fact that "designers"
    were just throwing features together instead of THINKING about
    which were truly needed. E.g., Oberon looks like something
    out of the 1980's...

    In the 1970s, there was no such thing as such an appliance.

    Anything that performed a fixed task. My first commercial product
    was a microprocessor-based LORAN position plotter (mid 70's).
    Now, we call them "deeply embedded" devices -- or, my preference, "appliances".

    Well, in the radar world, the signal and data processors would be
    fixed-task, but they were neither small nor simple.


    Nor did appliances like stoves and toasters possess a computer.

    Key in this is figuring out how to *hide* complexity so a user
    (of varying degrees of capability across a wide spectrum) can
    get something to work within the constraints you've laid out.

    Hidden complexity is still complexity, with complex failure modes
    rendered incomprehensible and random-looking to those unaware of
    what's going on behind the pretty facade.

    If you can't explain the bulk of a solution "seated, having a drink",
    then it is too complex. "Complex is anything that doesn't fit in a
    single brain".

    Well, current radar systems (and all manner of commercial products)
    contain many millions of lines of code. Fitting this into a few
    brains is kinda achieved using layered abstractions.

    You can require a shitload of code to implement a simple abstraction.
    E.g., the whole notion of a Virtual Machine is easily captured in
    a single imagination, despite the complexity of making it happen on
    a particular set of commodity hardware.

    Yep. So don't do that. Just ride the metal.


    This falls apart in the integration lab, when that which is hidden
    turns on its creators. Progress is paced by having some people who do
    know how it really works, despite the abstractions, visible and
    hidden.

    Explain how the filesystem on <whatever> works, internally. How
    does it layer onto storage media? How are "devices" hooked into it?
    Abstract mechanisms like pipes? Where does buffering come into
    play? ACLs?

    There are people who do know these things.

    But that special knowledge is required due to a poor choice of
    abstractions. And, trying to shoehorn new notions into old
    paradigms. Just so you could leverage an existing name
    resolver for system objects that weren't part of the idea of
    "files".

    No. The problem is again that those pretty abstractions always hide
    at least one ugly truth, and someone has to know what's inside that
    facade to fix many otherwise intractable problems.


    I, for example, dynamically create a "context" for each process.
    It is unique to that process so no other process can see its
    contents or access them. The whole notion of a separate
    ACL layered *atop* this is moot; if you aren't supposed to
    access something, then there won't be a *name* for that
    thing in the context provided to you!

    Yep. We do layers a lot, basically for portability and reusability.
    But at least a few people must know the truth.


    A context is then just a bag of (name, object) tuples and
    a set of rules for the resolver that operates on that context.

    So, a program that is responsible for printing paychecks would
    have a context *created* for it that contained:
    Clock -- something that can be queried for the current time/date
    Log -- a place to record its actions
    Printer -- a device that can materialize the paychecks
    and, some *number* of:
    Paycheck -- a description of the payee and amount
    i.e., the name "Paycheck" need not be unique in this context
    (why artificially force each paycheck to have a unique name
    just because you want to use an archaic namespace concept to
    bind an identifier/name to each? they're ALL just "paychecks")

    // resolve the objects governing the process
    theClock = MyContext=>resolve("Clock")
    theLog = MyContext=>resolve("Log")
    thePrinter = MyContext=>resolve("Printer")
    theDevice = thePrinter=>FriendlyName()

    // process each paycheck
    while( thePaycheck = MyContext=>resolve("Paycheck") ) {
        // get the parameters of interest for this paycheck
        thePayee = thePaycheck=>payee()
        theAmount = thePaycheck=>amount()
        theTime = theClock=>now()

        // print the check
        thePrinter=>write("Pay to the order of "
                          , thePayee
                          , "EXACTLY "
                          , stringify(theAmount)
                          )

        // make a record of the transaction
        theLog=>write("Drafted a disbursement to "
                      , thePayee
                      , " in the amount of "
                      , theAmount
                      , " at "
                      , theTime
                      , " printed on "
                      , FriendlyName
                      )

        // discard the processed paycheck
        MyContext=>unlink(thePaycheck)
    }

    // no more "Paycheck"s to process

    Shouldn't this be written in COBOL running on an IBM mainframe running
    CICS (Customer Information Control System, a general-purpose
    transaction processing subsystem for the z/OS operating system)? This
    is where the heavy lifting is done in such as payroll generation
    systems.

    <https://www.ibm.com/docs/en/zos-basic-skills?topic=zos-introduction-cics>


    You don't need to worry about *where* each of these objects reside,
    how they are implemented, etc. I can move them at runtime -- even
    WHILE the code is executing -- if that would be a better use of
    resources available at the current time! Each of the objects
    bound to "Paycheck" names could reside in "Personnel's" computer
    as that would likely be able to support the number of employees
    on hand (thousands?). And, by unlinking the name from my namespace,
    I've just "forgotten" how to access that particular PROCESSED
    "Paycheck"; the original data remains intact (as it should
    because *I* shouldn't be able to dick with it!)

    "The Network is the Computer" -- welcome to the 1980's! (we'll
    get there, sooner or later!)

    And, if the developer happens to use a method that is not supported
    on an object of that particular type, the compiler will grouse about
    it. If the process tries to use a method for which it doesn't have permission (bound into each object reference as "capabilities"), the
    OS will not pass the associated message to the referenced object
    and the error handler will likely have been configured to KILL
    the process (you obviously THINK you should be able to do something
    that you can't -- so, you are buggy!)

    No need to run this process as a particular UID and configure
    the "files" in the portion of the file system hierarchy that
    you've set aside for its use -- hoping that no one ELSE will
    be able to peek into that area and harvest this information.

    No worry that the process might go rogue and try to access
    something it shouldn't -- like the "password" file -- because
    it can only access the objects for which it has been *given*
    names and only manipulate each of those through the capabilities
    that have been bound to those handle *instances* (i.e., someone
    else, obviously, has the power to create their *contents*!)

    This is conceptually much cleaner. And, matches the way you
    would describe "printing paychecks" to another individual.

    Maybe so, but conceptual clarity does not pay the rent or meet the
    payroll. Gotta get to the church on time. Every time.


    Pascal uses this exact approach. The absence of true pointers is
    crippling for hardware control, which is a big part of the reason that C prevailed.

    I don't eschew pointers. Rather, if the object being referenced can
    be remote, then a pointer is meaningless; what value should the pointer
    have if the referenced object resides in some CPU at some address in
    some address space at the end of a network cable?

    Remote meaning accessed via a comms link or LAN is not done using RMIs
    in my world - too slow and too asynchronous. Round-trip transit delay
    would kill you. Also, not all messages need guaranteed delivery, and
    it's expensive to provide that guarantee, so there need to be
    distinctions.

    Horses for courses. "Real Time" only means that a deadline exists
    for a task to complete. It cares nothing about how "immediate" or
    "often" such deadlines occur. A deep space probe has deadlines
    regarding when it must make orbital adjustment procedures
    (flybys). They may be YEARS in the future. And, only a few
    in number. But, miss them and the "task" is botched.

    Actually, that is not what "realtime" means in real-world practice.

    The whole fetish about deadlines and deadline scheduling is an
    academic fantasy. The problem is that such systems are quite fragile
    - if a deadline is missed, even slightly, the system collapses. Which
    is intolerable in practice, so there was always a path to handle the
    occasional overrun gracefully.



    I assume that RMI is Remote Module or Method Invocation. These are

    The latter. Like RPC (instead of IPC) but in an OOPS context.

    Object-Oriented stuff had its own set of problems, especially as
    originally implemented. My first encounter ended badly for the
    proposed system, as it turned out that the OO overhead was so high
    that the context switches between objects (tracks in this case) would
    over consume the computers, leaving not enough time to complete a
    horizon scan, never mind do anything useful. But that's a story for
    another day.

    OOPS as embodied in programming languages is fraught with all
    sorts of overheads that don't often apply in an implementation.

    However, dealing with "things" that have particular "operations"
    and "properties" is a convenient way to model a solution.

    In ages past, the paycheck program would likely have been driven by a
    text file with N columns: payee, amount, date, department, etc.
    The program would extract tuples, sequentially, from that and
    build a paycheck before moving on to the next line.

    Of course, if something happened mid file, you were now faced
    with the problem of tracking WHERE you had progressed and
    restarting from THAT spot, exactly. In my implementation,
    you just restart the program and it processes any "Paycheck"s
    that haven't yet been unlinked from its namespace.

    Restart? If you are implementing a defense against incoming
    supersonic missiles, you just died. RIP.


    inherently synchronous (like Ada rendezvous) and are crippling for
    realtime software of any complexity - the software soon ends up
    deadlocked, with everybody waiting for everybody else to do something.

    There is nothing that inherently *requires* an RMI to be synchronous.
    This is only necessary if the return value is required, *there*.
    E.g., actions that likely will take a fair bit of time to execute
    are often more easily implemented as asynchronous invocations
    (e.g., node127=>PowerOn()). But, these need to be few enough that the
    developer can keep track of "outstanding business"; expecting every
    remote interaction to be asynchronous means you end up having to catch
    a wide variety of diverse replies and sort out how they correlate
    with your requests (that are now "gone"). Many developers have a hard
    time trying to deal with this decoupled cause-effect relationship...
    especially if the result is a failure indication (How do I
    recover now that I've already *finished* executing that bit of code?)

    At the time, RMI was implemented synchronously only, and it did not
    matter if a response was required, you would always stall at that call
    until it completed. Meaning that you could not respond to the random
    arrival of an unrelated event.

    For asynchronous services, I would create a separate thread just
    to handle those replies as I wouldn't want my main thread having to
    be "interrupted" by late arriving messages that *it* would have to
    process. The second thread could convert those messages into
    flags (or other data) that the main thread could examine when it
    NEEDED to know about those other activities.
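
    A minimal sketch of that arrangement, using POSIX threads and a C11
    atomic as the "flag" (the printer scenario and the timings are
    illustrative assumptions, not any actual implementation):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Flag the main thread examines only when it NEEDS the answer */
    static atomic_int printer_online = 0;

    /* Stand-in for the late-arriving asynchronous reply */
    static void *reply_handler(void *arg)
    {
        (void)arg;
        sleep(2);                          /* pretend the printer took 2 s to power up */
        atomic_store(&printer_online, 1);  /* convert the reply into a flag            */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, reply_handler, NULL);

        /* main thread goes about its business... */
        printf("buffering write() requests while we wait\n");
        sleep(3);

        /* ...and checks the flag only when it matters */
        if (atomic_load(&printer_online))
            printf("printer is up; flush buffered output\n");

        pthread_join(t, NULL);
        return 0;
    }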

    E.g., the first invocation of the write() method on "thePrinter"
    could have caused the process that *implements* that printer
    to power up the printer. While waiting for it to come on-line,
    it could buffer the write() requests so that they would be ready
    when the printer actually *did* come on-line.

    This is how some early OO systems worked, and the time to switch
    contexts between processes was quite long, so long that it was unable
    to scan the horizon for incoming missiles fast enough to matter.


    War story: Some years later, in the late 1980s, I was asked to assess
    an academic operating system called Alpha for possible use in realtime
    applications. It was strictly synchronous. Turned out that if you
    made a typing mistake or the like, one could not stop the stream of
    error message without doing a full reboot. There was a Control-Z
    command, but it could not be processed because the OS was otherwise
    occupied with an endless loop. Oops. End of assessment.

    *Jensen's* Alpha distributed processing across multiple domains.
    So, "signals" had to chase the thread as it executed. I have a
    similar problem and rely on killing off a resource to notify
    its consumers of its death and, thus, terminate their execution.

    Hmm. I think that Jensen's Alpha is the one in the war story. We
    were tipped off about Alpha's problem with runaway blather by one of
    Jensen's competitors.


    Of course, it can never be instantaneous as there are finite transit
    delays to get from one node (where part of the process may be executing)
    to another, etc.

    But, my applications are intended to be run-to-completion, not
    interactive.

    And thus not suitable for essentially all realtime application.


    When I developed the message-passing ring test, it was to flush out
    systems that were synchronous at the core, regardless of marketing
    bafflegab.

    But, synchronous programming is far easier to debug as you don't
    have to keep track of outstanding asynchronous requests that
    might "return" at some arbitrary point in the future. As the
    device executing the method is not constrained by the realities
    of the local client, there is no way to predict when it will
    have a result available.

    Well, the alternative is to use a different paradigm entirely, where
    for every event type there is a dedicated responder, which takes the
    appropriate course of action. Mostly this does not involve any other
    action type, but if necessary it is handled here. Typically, the
    overall architecture of this approach is a Finite State Machine.

    But then you are constrained to having those *dedicated* agents.
    What if a device goes down or is taken off line (maintenance)?
    I address this by simply moving the object to another node that
    has resources available to service its requests.

    One can design for such things, if needed. It's called fault
    tolerance (random breakage) or damage tolerance (also known as battle
    damage). But it's done in bespoke application code.


    So, if the "Personnel" computer (above) had to go offline, I would
    move all of the Paycheck objects to some other server that could
    serve up "paycheck" objects. The payroll program wouldn't be aware
    of this as the handle for each "paycheck" would just resolve to
    the same object but on a different server.

    The advantage, here, is that you can draw on ALL system resources to
    meet any demand instead of being constrained by the resources in
    a particular "box". E.g., my garage door opener can be tasked with >retraining the speech recognizer. Or, controlling the HVAC!

    True, but not suited for many realtime applications.


    This is driven by the fact that the real world has uncorrelated
    events, capable of happening in any order, so no program that requires
    that events be ordered can survive.

    You only expect the first event you await to happen before you
    *expect* the second. That, because the second may have some (opaque)
    dependence on the first.

    Or, more commonly, be statistically uncorrelated random. Like
    airplanes flying into coverage as weather patterns drift by as flocks
    of geese flap on by as ...

    That depends on the process(es) being monitored/controlled.
    E.g., in a tablet press, an individual tablet can't be compressed
    until its granulation has been fed into its die (mold).
    And, can't be ejected until it has been compressed.

    So, there is an inherent order in these events, regardless of when
    they *appear* to occur.

    Sure, someone could be printing paychecks while I'm making
    tablets. But, the two processes don't interact so one cares
    nothing about the other.

    Yes for molding plastic, but what about the above described use cases,
    where one cannot make any such assumption?


    There is a benchmark for message-passing in realtime software where
    there is a ring of threads or processes passing a message around the ring
    any number of times. This is modeled on the central structure of many
    kinds of radar.
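
    For concreteness, a toy version of such a ring test might look like the
    following, in C with POSIX threads and pipes. The node count, lap count
    and structure are assumptions, not the actual benchmark; a real test
    would time the loop and divide by the hop count to get per-message cost.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define N     8                    /* nodes in the ring        */
    #define HOPS  (1000L * N)          /* total message deliveries */

    static int chan[N][2];             /* chan[i] feeds node i     */

    static void *node(void *arg)
    {
        long i = (long)arg;
        long token;

        for (;;) {
            read(chan[i][0], &token, sizeof token);
            if (token >= HOPS) {                       /* pass the "stop" on, then quit */
                write(chan[(i + 1) % N][1], &token, sizeof token);
                break;
            }
            token++;                                   /* one hop */
            write(chan[(i + 1) % N][1], &token, sizeof token);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[N];
        long token = 0;

        for (long i = 0; i < N; i++)
            pipe(chan[i]);
        for (long i = 0; i < N; i++)
            pthread_create(&tid[i], NULL, node, (void *)i);

        write(chan[0][1], &token, sizeof token);       /* inject the token */

        for (long i = 0; i < N; i++)
            pthread_join(tid[i], NULL);
        puts("ring complete");
        return 0;
    }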

    So, like most benchmarks, is of limited *general* use.

    True, but what's the point? It is general for that class of problems,
    when the intent is to eliminate operating systems that cannot work for
    a realtime system.

    What *proof* do you have of that assertion? RT systems have been built
    and deployed with synchronous interfaces for decades. Even if those
    are *implied* (i.e., using a FIFO/pipe to connect two processes).

    The word "realtime" is wonderfully elastic, especially as used by
    marketers.

    A better approach is by use cases.

    Classic test case. Ownship is being approached by some number of
    cruise missiles approaching at two or three times the speed of sound.
    The ship is unaware of those missiles until they emerge from the
    horizon. By the way, it will take a Mach 3 missile about thirty
    seconds from detection to impact. Now what?


    There are also tests for how the thread scheduler
    works, to see if one can respond immediately to an event, or must wait
    until the scheduler makes a pass (waiting is forbidden).

    Waiting is only required if the OS isn't preemptive. Whether or
    not it is "forbidden" is a function of the problem space being addressed.

    Again, it's not quite that simple, as many RTOSs are not preemptive,
    but they are dedicated and quite fast. But preemptive is common these
    days.


    There are
    many perfectly fine operating systems that will flunk these tests, and
    yet are widely used. But not for realtime.

    Again, why not? Real time only means a deadline exists. It says
    nothing about frequency, number, nearness, etc. If the OS is
    deterministic, then its behavior can be factored into the
    solution.

    An OS can be deterministic, and still be unsuitable. Many big compute
    engine boxes have a scheduler that makes a sweep once a second, and
    their definition of RT is to sweep ten times a second. Which is
    lethal in many RT applications. So use a more suitable OS.


    I wrote a tape drive driver. The hardware was an 8b latch that captured
    the eight data tracks off the tape. And, held them until the transport
    delivered another!

    So, every 6 microseconds, I had to capture a byte before it would be
    overrun by the next byte. This is realtime NOT because of the nearness
    of the deadlines ("the next byte") but, rather, because of the
    deadline itself. If I slowed the transport down to 1% of its
    normal speed, it would still be real-time -- but, the deadline would
    now be at t=600us.
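
    The shape of that constraint, sketched in C. The register addresses and
    status bits here are hypothetical, purely to show where the 6 us
    deadline bites; it is not the actual driver.

    #include <stdint.h>

    #define TAPE_DATA    (*(volatile uint8_t *)0x40000000u) /* 8-bit track latch     */
    #define TAPE_STATUS  (*(volatile uint8_t *)0x40000004u)
    #define TAPE_READY   0x01u   /* a new byte has been latched                      */
    #define TAPE_OVERRUN 0x02u   /* ...and we failed to take the previous one        */

    static uint8_t buffer[4096];
    static unsigned head;
    static unsigned misses;      /* each one costs a read-reverse pass later         */

    /* Must complete well inside the 6 us window (600 us at 1% transport speed) */
    void tape_service(void)
    {
        uint8_t status = TAPE_STATUS;

        if (!(status & TAPE_READY))
            return;

        buffer[head++ & 0x0FFFu] = TAPE_DATA;  /* grab it before it is overrun       */

        if (status & TAPE_OVERRUN)
            misses++;            /* deadline blown: datum recoverable, but slowly    */
    }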

    I've done that too.


    "Hard" or "soft"? If I *missed* the datum, it wasn't strictly
    "lost"; it just meant that I had to do a "read reverse" to capture
    it coming back under the head. If I had to do this too often,
    performance would likely have been deemed unacceptable.

    OTOH, if I needed the data regardless of how long it took, then
    such an approach *would* be tolerated.

    How would this handle a cruise missile? One can ask it to back up and
    try again, but it's unclear that the missile is listening.


    Folks always want to claim they have HRT systems -- an admission that
    they haven't sorted out how to convert the problem into a SRT one
    (which is considerably harder as it requires admitting that you likely
    WILL miss some deadlines; then what?).

    If an incoming projectile isn't intercepted (before it inflicts damage)
    because your system has been stressed beyond its operational limits,
    do you shut down the system and accept defeat?

    If you designed with that sort of "hard" deadline in mind, you likely
    are throwing runtime resources at problems that you won't be able
    to address -- at the expense of the *next* problem that you possibly
    COULD have, had you not been distracted.

    The child's game of Whack-a-Mole is a great example. The process(es)
    have a release time defined by when the little bugger pokes his head up.
    The *deadline* is when he decides to take cover, again. If you can't
    whack him in that time period, you have failed.

    But, you don't *stop*!

    And, if you have started to take action on an "appearance" that you
    know you will not be able to "whack", you have hindered your
    performance on the *next* appearance; better to take the loss and
    prepare yourself for that *next*.

    I.e., in HRT problems, once the deadline has passed, there is
    no value to continuing to work on THAT problem. And, if you
    can anticipate that you won't meet that deadline, then aborting
    all work on it ASAP leaves you with more resources to throw
    at the NEXT instance.
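
    A minimal sketch of that policy -- check the deadline and abandon the
    job rather than chase it (the clock source and job structure are
    assumptions, for illustration only):

    #include <stdbool.h>
    #include <time.h>

    typedef struct {
        struct timespec deadline;   /* when the mole goes back down        */
        /* ... work description ... */
    } job_t;

    static bool past(const struct timespec *t)
    {
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        return now.tv_sec > t->tv_sec ||
              (now.tv_sec == t->tv_sec && now.tv_nsec >= t->tv_nsec);
    }

    static void service(job_t *job)
    {
        /* If the deadline has passed -- or clearly will -- drop the job NOW
           and keep the resources for the next appearance. */
        if (past(&job->deadline))
            return;                 /* no value in a late whack */

        /* ...do the work, re-checking at convenient abort points... */
    }

    int main(void)
    {
        job_t mole;
        clock_gettime(CLOCK_MONOTONIC, &mole.deadline);  /* already too late */
        service(&mole);
        return 0;
    }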

    But, many designers are oblivious to this and keep chasing the
    current deadline -- demanding more and more resources to be
    able to CLAIM they can meet those. Amusingly, most have no way
    of knowing that they have missed a deadline save for the
    physical consequences ("Shit! They just nuked Dallas!").
    This because most RTOSs have no *real* concept of deadlines
    so the code can't adjust its "priorities".

    Yeah. I think we are solving very different problems, so it's time to
    stop.

    Joe Gwinn

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Joe Gwinn on Wed Oct 23 11:48:30 2024
    On 10/23/2024 8:13 AM, Joe Gwinn wrote:
    War story: I used to run an Operating System Section, and one thing
    we needed to develop was hardware memory test programs for use in the
    factory. We had a hell of a lot of trouble getting this done because
    our programmers point-blank refused to do such test programs.

    That remains true of *most* "programmers". It is not seen as
    an essential part of their job.

    Nor could you convince them. They wanted to do things that were hard,
    important, and interesting. And look good on a resume.

    One has to remember who the *boss* is in any relationship.

    One fine day, it occurred to me that the problem was that we were
    trying to use race horses to pull plows. So I went out to get the
    human equivalent of a plow horse, one that was a tad autistic and so
    would not be bored. This worked quite well. Fit the tool to the job.

    I had a boss who would have approached it differently. He would have
    had one of the technicians "compromise" their prototype/development
    hardware. E.g., crack a few *individual* cores to render them
    ineffective as storage media. Then, let the prima donnas chase mysterious bugs
    in *their* code. Until they started to question the functionality of
    the hardware. "Gee, sure would be nice if we could *prove* the
    hardware was at fault. Otherwise, it *must* just be bugs in your code!"

    Actually, the customers often *required* us to inject faults to prove
    that our spiffy fault-detection logic actually worked. It's a lot
    harder than it looks.

    Yup. I have to validate my DRAM test routines. How do you cause a DRAM
    to experience a SEU? Hard fault? Transient fault? Multiple bit
    faults (at same address or different addresses in a shared row
    or column)?

    And: you replace the DRAM with something that mimics a DRAM that you
    can *command* to fail in specific ways at specific times and verify that
    your code actually sees those failures.

    What's harder is injecting faults *inside* MCUs where, increasingly,
    more and more critical resources reside.

    OTOH, I never had the opportunity to use a glass TTY until AFTER
    college (prior to that, everything was hardcopy output -- DECwriters,
    Trendata 1200's, etc.)

    Real men used teletype machines, which required two real men to lift.
    I remember them well.

    I have one in the garage. Hard to bring myself to part with it...
    "nostalgia". (OTOH, I've rid myself of the mag tape transports...)

    And we needed multi-precision integer arithmetic for many things,
    using scaled binary to handle the needed precision and dynamic range.

    Yes. It is still used (Q-notation) in cases where your code may not want
    to rely on FPU support and/or has to run really fast. I make extensive
    use of it in my gesture recognizer where I am trying to fit sampled
    points to the *best* of N predefined curves, in a handful of milliseconds
    (interactive interface).
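
    For readers unfamiliar with Q-notation, a minimal Q15 example in C --
    a generic illustration of the idea, not the recognizer code itself:

    #include <stdint.h>
    #include <stdio.h>

    #define Q        15
    #define TO_Q(x)  ((int16_t)((x) * (1 << Q)))       /* float constant -> Q15    */

    /* Widen, multiply, rescale: fractional math with no FPU in sight */
    static int16_t qmul(int16_t a, int16_t b)
    {
        return (int16_t)(((int32_t)a * b) >> Q);
    }

    int main(void)
    {
        int16_t half    = TO_Q(0.5);
        int16_t quarter = qmul(half, half);            /* 0.5 * 0.5 = 0.25 */

        printf("0.25 in Q15 = %d (expect %d)\n", quarter, TO_Q(0.25));
        return 0;
    }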

    BTDT, though we fitted other curves to approximate such as trig
    functions.

    Likely the data points didn't change every second as they do with
    a user trying to signal a particular gesture in an interactive environment.

    In the 1970s, there was no such thing as such an appliance.

    Anything that performed a fixed task. My first commercial product
    was a microprocessor-based LORAN position plotter (mid 70's).
    Now, we call them "deeply embedded" devices -- or, my preference,
    "appliances".

    Well, in the radar world, the signal and data processors would be
    fixed-task, but they were neither small nor simple.

    Neither particular size nor complexity are required. Rather, that
    the device isn't "general purpose".

    A context is then just a bag of (name, object) tuples and
    a set of rules for the resolver that operates on that context.

    So, a program that is responsible for printing paychecks would
    have a context *created* for it that contained:
    Clock -- something that can be queried for the current time/date
    Log -- a place to record its actions
    Printer -- a device that can materialize the paychecks
    and, some *number* of:
    Paycheck -- a description of the payee and amount
    i.e., the name "Paycheck" need not be unique in this context
    (why artificially force each paycheck to have a unique name
    just because you want to use an archaic namespace concept to
    bind an identifier/name to each? they're ALL just "paychecks")

    // resolve the objects governing the process
    theClock = MyContext=>resolve("Clock")
    theLog = MyContext=>resolve("Log")
    thePrinter = MyContext=>resolve("Printer")
    theDevice = thePrinter=>FriendlyName()

    // process each paycheck
    while( thePaycheck = MyContext=>resolve("Paycheck") ) {
        // get the parameters of interest for this paycheck
        thePayee = thePaycheck=>payee()
        theAmount = thePaycheck=>amount()
        theTime = theClock=>now()

        // print the check
        thePrinter=>write("Pay to the order of "
                         , thePayee
                         , " EXACTLY "
                         , stringify(theAmount)
                         )

        // make a record of the transaction
        theLog=>write("Drafted a disbursement to "
                     , thePayee
                     , " in the amount of "
                     , theAmount
                     , " at "
                     , theTime
                     , " printed on "
                     , theDevice
                     )

        // discard the processed paycheck
        MyContext=>unlink(thePaycheck)
    }

    // no more "Paycheck"s to process

    Shouldn't this be written in COBOL running on an IBM mainframe running
    CICS (Customer Information Control System, a general-purpose
    transaction processing subsystem for the z/OS operating system)? This
    is where the heavy lifting is done in such as payroll generation
    systems.

    It's an example constructed, on-the-fly, to illustrate how problems
    are approached in my environment. It makes clear that the developer
    need not understand how any of these objects are implemented nor
    have access to other methods that they likely support (i.e., someone
    obviously had to *set* the payee and amount in each paycheck, had to
    determine *which* physical printer would be used, what typeface would
    be used to "write" on the device, etc.).

    It also hides -- and provides flexibility to the implementor -- how
    these things are done. E.g., does the "payee()" method just return
    a string embodied in the "paycheck" object? Or, does it run a
    query on a database to fetch that information? Does theClock
    reflect the time in the physical location where the printer resides?
    Where the paycheck resides? Or, some other place? Is it indicated
    in 24 hour time? Which timezone? Is DST observed, there? etc.

    (why should a developer have to know these things just to print paychecks?)

    No need to run this process as a particular UID and configure
    the "files" in the portion of the file system hierarchy that
    you've set aside for its use -- hoping that no one ELSE will
    be able to peek into that area and harvest this information.

    No worry that the process might go rogue and try to access
    something it shouldn't -- like the "password" file -- because
    it can only access the objects for which it has been *given*
    names and only manipulate each of those through the capabilities
    that have been bound to those handle *instances* (i.e., someone
    else, obviously, has the power to create their *contents*!)

    This is conceptually much cleaner. And, matches the way you
    would describe "printing paychecks" to another individual.

    Maybe so, but conceptual clarity does not pay the rent or meet the
    payroll. Gotta get to the church on time. Every time.

    Conceptual clarity increases the likelihood that the *right* problem
    is solved. And, one only has to get to the church as promptly as
    is tolerated; if payroll doesn't get done on Friday (as expected),
    it still has value if the checks are printed on *Monday* (though you
    may have to compensate the recipients in some way and surely not
    make a point of doing this often).

    Pascal uses this exact approach. The absence of true pointers is
    crippling for hardware control, which is a big part of the reason that
    C prevailed.

    I don't eschew pointers. Rather, if the object being referenced can
    be remote, then a pointer is meaningless; what value should the pointer
    have if the referenced object resides in some CPU at some address in
    some address space at the end of a network cable?

    Remote meaning accessed via a comms link or LAN is not done using RMIs
    in my world - too slow and too asynchronous. Round-trip transit delay
    would kill you. Also, not all messages need guaranteed delivery, and
    it's expensive to provide that guarantee, so there need to be
    distinctions.

    Horses for courses. "Real Time" only means that a deadline exists
    for a task to complete. It cares nothing about how "immediate" or
    "often" such deadlines occur. A deep space probe has deadlines
    regarding when it must make orbital adjustment procedures
    (flybys). They may be YEARS in the future. And, only a few
    in number. But, miss them and the "task" is botched.

    Actually, that is not what "realtime" means in real-world practice.

    The whole fetish about deadlines and deadline scheduling is an
    academic fantasy. The problem is that such systems are quite fragile
    - if a deadline is missed, even slightly, the system collapses.

    No. That's a brittle system. If you miss a single incoming missile,
    the system doesn't collapse -- unless your defensive battery is targeted.
    X, Y or Z may sustain losses. But, you remain on-task to defend against
    A, B or C suffering similar fate.

    The "all or nothing" mindset is a naive approach to "real-time".

    A tablet press produces ~200 tablets per minute. Each minute.
    For an 8 hour shift.

    If I *miss* capturing the ACTUAL compression force that a specific
    tablet experiences as it undergoes compression to a fixed geometry,
    then I have no idea as to the actual weight of that particular tablet
    (which corresponds with the amount of "actives" in the tablet).

    I don't shutdown the tablet press because of such a missed deadline.
    Rather, I mark that tablet as "unknown" and, depending on the
    product being formulated, arrange for it to be discarded some number
    of milliseconds later when it leaves the tablet press -- instead of
    being "accepted" like those whose compression force was observed and
    found to be within the nominal range to indicate a correct tablet
    weight (you can't *weigh* individual tablets at those speeds).
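
    The accept/reject decision described above, sketched in C (the names
    and structure are illustrative, not the actual press controller):

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        bool force_seen;     /* did we capture its compression-force sample?  */
        bool force_in_spec;  /* and was it within the nominal window?         */
    } tablet_t;

    /* Decided as each tablet reaches the ejection point, every ~5 ms */
    static bool accept_tablet(const tablet_t *t)
    {
        /* A missed observation doesn't stop the press; the tablet is just
           "unknown" and goes down the reject chute with the out-of-spec ones. */
        return t->force_seen && t->force_in_spec;
    }

    int main(void)
    {
        tablet_t good = { true, true }, unknown = { false, false };

        printf("good:    %s\n", accept_tablet(&good)    ? "accept" : "reject");
        printf("unknown: %s\n", accept_tablet(&unknown) ? "accept" : "reject");
        return 0;
    }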

    You design the system -- mechanism, hardware and software -- so that
    you can meet the deadlines that your process imposes. And, hope to
    maximize such compliance, relying on other mechanisms to correctly
    handle the "exceptions" that might otherwise get through.

    *IF* the mechanism that is intended to dispatch the rejected tablet
    fails (indicated by your failing to "see" the tablet exit via the
    "reject" chute), then you have a "dubious" tablet in a *barrel* of
    otherwise acceptable tablets. Isolating it would be time consuming
    and costly.

    So, use smaller barrels so the amount of product that you have to
    discard in such an event is reduced.

    Or, run the process slower to minimize the chance of such "double
    faults" (the first being the failure to observe some critical
    aspect of the tablet's production; the second the failure to
    definitively discard such tablets).

    Or, bitch to your provider that their product is failing to meet
    its advertised specifications.

    Which
    is intolerable in practice, so there was always a path to handle the
    occasional overrun gracefully.

    One has to *know* that there was an overrun in order to know that
    it has to be handled. Most RTOSs have no mechanisms to detect such
    overruns -- because they have no notion of the associated deadlines!

    I chuckle at how so few systems with serial (EIA232) ports actually
    *did* anything with overrun, framing, parity errors... you (your code)
    KNOW the character extracted from the receiver is NOT, likely, the
    character that you THINK it is, so why are you passing it along
    up the stack?
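
    A sketch of what honoring those flags looks like, assuming a 16550-style
    line status register (the addresses and memory-mapped layout here are
    hypothetical, named only for illustration):

    #include <stdint.h>

    #define LSR           (*(volatile uint8_t *)0x3FDu)  /* line status register */
    #define LSR_DATA_RDY  0x01u
    #define LSR_OVERRUN   0x02u
    #define LSR_PARITY    0x04u
    #define LSR_FRAMING   0x08u
    #define RBR           (*(volatile uint8_t *)0x3F8u)  /* receive buffer       */

    /* Returns -1 instead of passing a known-suspect character up the stack */
    int uart_getc(void)
    {
        uint8_t status = LSR;

        if (!(status & LSR_DATA_RDY))
            return -1;

        uint8_t ch = RBR;                /* read even on error, to clear the latch */

        if (status & (LSR_OVERRUN | LSR_PARITY | LSR_FRAMING))
            return -1;                   /* don't pretend this byte is trustworthy */

        return ch;
    }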

    Nowadays, similar mechanisms occur in network stacks. Does
    *anything* (higher level) know that the hardware is struggling?
    That the interface has slipped into HDX mode? That lots of
    packets are being rejected? What does the device *do* about
    these things? Or, do you just wait until the user gets
    annoyed with the performance and tries to diagnose the problem,
    himself?

    Of course, if something happened mid file, you were now faced
    with the problem of tracking WHERE you had progressed and
    restarting from THAT spot, exactly. In my implementation,
    you just restart the program and it processes any "Paycheck"s
    that haven't yet been unlinked from its namespace.

    Restart? If you are implementing a defense against incoming
    supersonic missiles, you just died. RIP.

    Only if *you* were targeted by said missile. If some other asset
    was struck, would you stop defending?

    If the personnel responsible for bringing new supplies to the
    defensive battery faltered, would you tell them not to bother
    trying, again, to replenish those stores?

    You seem to think all RT systems are missile defense systems.

    For asynchronous services, I would create a separate thread just
    to handle those replies as I wouldn't want my main thread having to
    be "interrupted" by late arriving messages that *it* would have to
    process. The second thread could convert those messages into
    flags (or other data) that the main thread could examine when it
    NEEDED to know about those other activities.

    E.g., the first invocation of the write() method on "thePrinter"
    could have caused the process that *implements* that printer
    to power up the printer. While waiting for it to come on-line,
    it could buffer the write() requests so that they would be ready
    when the printer actually *did* come on-line.

    This is how some early OO systems worked, and the time to switch
    contexts between processes was quite long, so long that it was unable
    to scan the horizon for incoming missiles fast enough to matter.

    That's a consequence of nearness of deadline vs. system resources
    available. Hardware is constantly getting faster. Designing as
    if the hardware will always deliver a specific level of performance
    means you are constantly redesigning. And, likely using obsolete
    hardware because it was *specified* long before the system's deployment
    date.

    [When making tablets, it's more efficient to have a single process
    "track" each tablet instance from granulation feed, through precompression,
    compression, ejection and dispatch than it is to have separate
    processes for each of these "processes". And, with tablet rates of
    5ms, the time required for context switches would be just noise
    but, still an issue worth addressing. That's where system engineering
    comes into play.]

    When I toured NORAD (Cheyenne Mountain Complex) in the 80's, they
    were in the process of installing hardware ordered in the 60's
    (or so the tour guide indicated; believable given the overhead
    in specifying, bidding, designing and implementing such systems).

    Most RT systems have much shorter cycle times. A few years from
    conception to deployment is far more common; even less in the
    consumer market.

    War story: Some years later, in the late 1980s, I was asked to assess
    an academic operating system called Alpha for possible use in realtime
    applications. It was strictly synchronous. Turned out that if you
    made a typing mistake or the like, one could not stop the stream of
    error message without doing a full reboot. There was a Control-Z
    command, but it could not be processed because the OS was otherwise
    occupied with an endless loop. Oops. End of assessment.

    *Jensen's* Alpha distributed processing across multiple domains.
    So, "signals" had to chase the thread as it executed. I have a
    similar problem and rely on killing off a resource to notify
    its consumers of its death and, thus, terminate their execution.

    Hmm. I think that Jensen's Alpha is the one in the war story. We
    were tipped off about Alpha's problem with runaway blather by one of
    Jensen's competitors.

    Jensen is essentially an academic. Wonderful ideas but largely impractical
    (on current hardware). "Those that CAN, *do*; those that CAN'T, *teach*?"

    Of course, it can never be instantaneous as there are finite transit
    delays to get from one node (where part of the process may be executing)
    to another, etc.

    But, my applications are intended to be run-to-completion, not
    interactive.

    And thus not suitable for essentially all realtime application.

    Again, what proof of that? Transit delays are much shorter
    today than 5 years ago, 10 years ago, 40 years ago. They'll
    be even shorter in the future.

    Just because they are unsuitable for *a* SPECIFIC RT problem
    doesn't rule them out for "essentially all". Clearly not
    applicable to pulling bytes off a transport at 6 microsecond
    intervals. But, controlling a tablet press? Or, a production
    line? CNC mill?

    This is how folks get misled in their ideas wrt RT -- they focus
    on "fast"/"frequent" tasks with short release-to-deadline
    times and generalize this as being characteristic of ALL systems.

    There is nothing to prevent me from aborting a process after it has
    been started (I have a colleague who codes like this, intentionally...
    get the process started, THEN decide if it should continue). I
    have to continuously adjust how resources are deployed in my
    system so it can expand to handle additional tasks without
    resorting to overprovisioning.

    If, for example, I start to retrain the speech recognizer and
    some other "responsibility" comes up (motion detected on the
    camera monitoring the approach to the front door), it would be
    silly to waste resources continuing that speech training (which
    could be deferred as its deadline is distant, in time, and
    there is little decrease in value due to "lateness") at
    the expense of hindering the recognition of the individual
    approaching the door -- *he* likely isn't going to wait around
    until I have time to sort out his identity!

    So, kill the retraining task (it can be restarted from where
    it left off because it was designed to be so) to immediately
    free those resources for other use. When they again become
    available (here or on some other node that I may opt to
    bring on-line *because* I see a need for more resources),
    resume the retraining task. When it completes, those resources
    can be retired (made available to other tasks or nodes taken
    offline to conserve power).
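
    A minimal sketch of such a kill-and-resume design: checkpoint progress
    cheaply so that being killed loses almost nothing (the file name and
    step granularity are assumptions, purely illustrative):

    #include <stdio.h>

    #define STEPS 100000L

    static long load_checkpoint(void)
    {
        long step = 0;
        FILE *f = fopen("retrain.ckpt", "r");
        if (f) {
            if (fscanf(f, "%ld", &step) != 1)
                step = 0;
            fclose(f);
        }
        return step;                           /* start over if no checkpoint       */
    }

    static void save_checkpoint(long step)
    {
        FILE *f = fopen("retrain.ckpt", "w");
        if (f) {
            fprintf(f, "%ld\n", step);
            fclose(f);                         /* killing us now loses at most one step */
        }
    }

    int main(void)
    {
        for (long step = load_checkpoint(); step < STEPS; step++) {
            /* ...one unit of retraining work goes here... */
            save_checkpoint(step + 1);
        }
        return 0;
    }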

    But, synchronous programming is far easier to debug as you don't
    have to keep track of outstanding asynchronous requests that
    might "return" at some arbitrary point in the future. As the
    device executing the method is not constrained by the realities
    of the local client, there is no way to predict when it will
    have a result available.

    Well, the alternative is to use a different paradigm entirely, where
    for every event type there is a dedicated responder, which takes the
    appropriate course of action. Mostly this does not involve any other
    action type, but if necessary it is handled here. Typically, the
    overall architecture of this approach is a Finite State Machine.

    But then you are constrained to having those *dedicated* agents.
    What if a device goes down or is taken off line (maintenance)?
    I address this by simply moving the object to another node that
    has resources available to service its requests.

    One can design for such things, if needed. It's called fault
    tolerance (random breakage) or damage tolerance (also known as battle
    damage). But it's done in bespoke application code.

    I have a more dynamic environment. E.g., if power fails, which
    physical nodes should I power down (as I have limited battery)?
    For the nodes left operational, which tasks should be (re)deployed
    to continue operation on them? If someone (hostile actor?)
    damages or interferes with a running node, how do I restore
    the operations that were in progress on that node?

    So, if the "Personnel" computer (above) had to go offline, I would
    move all of the Paycheck objects to some other server that could
    serve up "paycheck" objects. The payroll program wouldn't be aware
    of this as the handle for each "paycheck" would just resolve to
    the same object but on a different server.

    The advantage, here, is that you can draw on ALL system resources to
    meet any demand instead of being constrained by the resources in
    a particular "box". E.g., my garage door opener can be tasked with
    retraining the speech recognizer. Or, controlling the HVAC!

    True, but not suited for many realtime applications.

    In the IoT world, it is increasingly a requirement. The current approach
    of having little "islands" means that some more capable system must
    be ALWAYS available to coordinate their activities and do any "heavy
    lifting" for them.

    The NEST thermostat has 64MB of memory and two processors. And still
    relies on a remote service to implement its intended functionality.

    And, it can't "help" any other devices achieve THEIR goals even though
    99% of its time is spent "doing nothing". This leads to a binary
    decision process: do nodes share resources and cooperate to achieve
    goals that
    exceed their individual capabilities? Or, do they rely on some
    external service for that "added value"? (what happens when that
    service is unavailable??)

    This is driven by the fact that the real world has uncorrelated
    events, capable of happening in any order, so no program that requires
    that events be ordered can survive.

    You only expect the first event you await to happen before you
    *expect* the second. That, because the second may have some (opaque)
    dependence on the first.

    Or, more commonly, be statistically uncorrelated random. Like
    airplanes flying into coverage as weather patterns drift by as flocks
    of geese flap on by as ...

    That depends on the process(es) being monitored/controlled.
    E.g., in a tablet press, an individual tablet can't be compressed
    until its granulation has been fed into its die (mold).
    And, can't be ejected until it has been compressed.

    So, there is an inherent order in these events, regardless of when
    they *appear* to occur.

    Sure, someone could be printing paychecks while I'm making
    tablets. But, the two processes don't interact so one cares
    nothing about the other.

    Yes for molding plastic, but what about the above described use cases,
    where one cannot make any such assumption?

    There are no universal solutions. Apply your missile defense system to
    controlling the temperature in a home -- clearly a RT task (as WHEN you
    turn the HVAC on and off has direct consequences to the comfort of the
    occupants -- that being the reason they *have* a thermostat!). How
    large/expensive will it be and how many man-years to deploy? Its
    specific notion of timeliness and consequences are silly in the
    context of HVAC control.

    OTOH, controlling an air handler that must maintain constant temperature
    and humidity to a tablet coating process can't tolerate lateness as
    it alters the "complexion" of the air entering the process chamber.
    So, a batch of tablets could be rendered useless because the AHU
    wasn't up to the requirements stated by the process chemist.

    There is a benchmark for message-passing in realtime software where
    there is a ring of threads or processes passing a message around the ring
    any number of times. This is modeled on the central structure of many
    kinds of radar.

    So, like most benchmarks, is of limited *general* use.

    True, but what's the point? It is general for that class of problems,
    when the intent is to eliminate operating systems that cannot work for
    a realtime system.

    What *proof* do you have of that assertion? RT systems have been built
    and deployed with synchronous interfaces for decades. Even if those
    are *implied* (i.e., using a FIFO/pipe to connect two processes).

    The word "realtime" is wonderfully elastic, especially as used by
    marketers.

    A better approach is by use cases.

    Classic test case. Ownship is being approached by some number of
    cruise missiles approaching at two or three times the speed of sound.
    The ship is unaware of those missiles until they emerge from the
    horizon. By the way, it will take a Mach 3 missile about thirty
    seconds from detection to impact. Now what?

    Again with the missiles. Have you done any RT systems OTHER than
    missile defense?

    Space probe is approaching Jupiter and scheduling the burn of its
    main engine to adjust for orbital insertion. It *knows* what it
    has to do long before it gets to the "appointed time" (space).
    Granted, it may have to fine tune the exact timing -- and the
    length of the burn based on observations closer to the *deadline*.
    And, it doesn't get a second chance -- a miss IS a mile!

    Should it also worry about the possible presence of Klingons
    nearby?

    How you *define* RT shouldn't rely on use cases. That should
    be a refinement of the particular requirements of THAT use case
    and not RT in general.

    Witness tape drives, serial ports, tablet presses, deep space
    probes. ALL are RT problems. Why is "missile defense" more
    RT than any of these? NORAD has a shitload of resources
    dedicated to that task -- mainly because of the *cost* of
    a missed deadline. But, an ABS system has a similar cost to
    the driver (or nearby people) if it fails to meet its
    performance deadline.

    Cost/value is just one of many ORTHOGONAL issues in a RT system.
    Deadline frequency, nearness, regularity, etc. are others.
    How you approach a deep space probe problem is different than
    how you approach capturing bottles on a conveyor belt. But,
    the same issues are present in each case.

    There are also tests for how the thread scheduler
    works, to see if one can respond immediately to an event, or must wait
    until the scheduler makes a pass (waiting is forbidden).

    Waiting is only required if the OS isn't preemptive. Whether or
    not it is "forbidden" is a function of the problem space being addressed.

    Again, it's not quite that simple, as many RTOSs are not preemptive,
    but they are dedicated and quite fast. But preemptive is common these
    days.

    They try to compensate by hoping the hardware is fast enough
    to give the illusion of timeliness in their response. Just
    like folks use nonRT OS's to tackle RT tasks -- HOPING they
    run fast enough to *not* miss deadlines or tweaking other
    aspects of the design to eliminate/minimize the probability
    of those issues mucking with the intended response. E.g.,
    multimedia disk drives that defer thermal recalibration
    cycles to provide for higher sustained throughput.

    There are
    many perfectly fine operating systems that will flunk these tests, and
    yet are widely used. But not for realtime.

    Again, why not? Real time only means a deadline exists. It says
    nothing about frequency, number, nearness, etc. If the OS is
    deterministic, then its behavior can be factored into the
    solution.

    An OS can be deterministic, and still be unsuitable. Many big compute
    engine boxes have a scheduler that makes a sweep once a second, and
    their definition of RT is to sweep ten times a second. Which is
    lethal in many RT applications. So use a more suitable OS.

    That's the "use a fast computer" approach to sidestep the issue of being suitable for RT use. For something << slower, it could provide acceptable performance. But, would likely be very brittle; if the needs of the
    problem increased or the load on the solution increased, it would
    magically and mysteriously break.

    "Hard" or "soft"? If I *missed* the datum, it wasn't strictly
    "lost"; it just meant that I had to do a "read reverse" to capture
    it coming back under the head. If I had to do this too often,
    performance would likely have been deemed unacceptable.

    OTOH, if I needed the data regardless of how long it took, then
    such an approach *would* be tolerated.

    How would this handle a cruise missile? One can ask it to back up and
    try again, but it's unclear that the missile is listening.

    A missile is an HRT case -- not because it has explosive capabilities
    but because there is no "value" (to use a Jensen-ism) to addressing
    the problem AFTER its deadline. It is just as ineffective at
    catching playing cards falling out of your hands (which has far less
    consequence).

    Too often, people treat THEIR problem as HRT -- we *must* handle ALL
    of these events -- when there are alternatives that acknowledge the
    real possibility that some deadlines will be missed. It's just *easier*
    to make that claim as it gives you cover to request (require!)
    more resources to meet all of those needs.

    Apparently, Israel is learning that they can't handle all the
    incoming projectiles/vessels that they may encounter, now.
    Poor system design? Or, the adversary has realized that
    there are limits to the defensive systems.... limits that they
    can overwhelm.

    Acknowledging (to yourself) that there are limits is empowering.
    It lets you use the resources that you have to protect more
    *value*. Do you stop the incoming missile targeting your
    ammunition dump, civilian lodging, military lodging or
    financial sector? Pick one because you (demonstrably) can't
    defend *all*.

    Resilient RT systems make these decisions dynamically (do I
    finish retraining the synthesizer or recognize the visitor?).
    They do so by understanding the instantaneous limits to their
    abilities and adjusting, accordingly. (but, you have to be able
    to quantify those limits, as they change!)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Don Y on Wed Oct 23 14:59:08 2024
    On 10/23/2024 11:48 AM, Don Y wrote:
    A tablet press produces ~200 tablets per minute.  Each minute.

    Ugh! S.b., "second". I.e., a 5ms tablet rate. (a large fraction of
    a million each hour)

    For an 8 hour shift.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)