• (Semi-) formal methods

    From Don Y@21:1/5 to All on Tue May 11 18:25:57 2021
    How prevalent are (semi-) formal design methods employed?
    Which?

    [I don't have first-hand knowledge of *anyone* using them]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Theo@21:1/5 to Don Y on Thu May 13 15:25:48 2021
    Don Y <blockedofcourse@foo.invalid> wrote:
    How prevalent are (semi-) formal design methods employed?
    Which?

    We use theorem provers to find bugs in ISA specification: https://www.cl.cam.ac.uk/~pes20/sail/

    They're quite handy for finding bugs before they hit silicon...

    Theo

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Theo on Thu May 13 11:57:36 2021
    On 5/13/2021 7:25 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    How prevalent are (semi-) formal design methods employed?
    Which?

    We use theorem provers to find bugs in ISA specification: https://www.cl.cam.ac.uk/~pes20/sail/

    They're quite handy for finding bugs before they hit silicon...

    But, presumably, only of value if you're a SoC integrator?

    I.e., given COTS devices, what might they reveal to users of
    said devices?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dave Nadler@21:1/5 to Don Y on Thu May 13 19:32:33 2021
    On 5/11/2021 9:25 PM, Don Y wrote:
    How prevalent are (semi-) formal design methods employed?

    https://tinyurl.com/thbjw5j4

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Dave Nadler on Thu May 13 16:57:55 2021
    On 5/13/2021 4:32 PM, Dave Nadler wrote:
    On 5/11/2021 9:25 PM, Don Y wrote:
    How prevalent are (semi-) formal design methods employed?

    https://tinyurl.com/thbjw5j4

    Hardly formal -- no cufflinks, watchfob, cumberbund nor tails! :>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Theo@21:1/5 to Don Y on Fri May 14 10:43:03 2021
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 5/13/2021 7:25 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    How prevalent are (semi-) formal design methods employed?
    Which?

    We use theorem provers to find bugs in ISA specification: https://www.cl.cam.ac.uk/~pes20/sail/

    They're quite handy for finding bugs before they hit silicon...

    But, presumably, only of value if you're a SoC integrator?

    I.e., given COTS devices, what might they reveal to users of
    said devices?

    They can reveal bugs in existing implementations - where they don't meet the spec and bad behaviour can result.

    However CPU and FPGA design is what we do so that's where we focus our
    efforts. Depends whether FPGA counts as COTS or not...

    Theo

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Theo on Fri May 14 12:37:09 2021
    On 5/14/2021 2:43 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 5/13/2021 7:25 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    How prevalent are (semi-) formal design methods employed?
    Which?

    We use theorem provers to find bugs in ISA specification:
    https://www.cl.cam.ac.uk/~pes20/sail/

    They're quite handy for finding bugs before they hit silicon...

    But, presumably, only of value if you're a SoC integrator?

    I.e., given COTS devices, what might they reveal to users of
    said devices?

    They can reveal bugs in existing implementations - where they don't meet the spec and bad behaviour can result.

    However CPU and FPGA design is what we do so that's where we focus our efforts. Depends whether FPGA counts as COTS or not...

    Understood. Tools fit the application domains for which they were designed.

    How did *adoption* of the tool come to pass? Was it "mandated" by corporate policy? Something <someone> stumbled on, played with and then "pitched"
    to management/peers? Mandated by your industry? etc.

    [Just because a tool "makes sense" -- logically or economically -- doesn't
    mean it will be adopted, much less *embraced*!]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Paul Rubin@21:1/5 to Dave Nadler on Fri May 14 12:22:31 2021
    Dave Nadler <drn@nadler.com> writes:
    How prevalent are (semi-) formal design methods employed?
    https://tinyurl.com/thbjw5j4

    Wrong picture, try the one on this page:

    https://web.stanford.edu/~engler/

    Look at the articles linked from there too ;-).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Clifford Heath@21:1/5 to Don Y on Sat May 15 14:15:15 2021
    On 12/5/21 11:25 am, Don Y wrote:
    How prevalent are (semi-) formal design methods employed?
    Which?

    [I don't have first-hand knowledge of *anyone* using them]

    Don, IDK if you know about TLA+, but there is a growing community using
    it. It is specifically good at finding errors in protocol (== API)
    designs (because "TL" means "Temporal Logic"). I haven't used it so
    can't really answer many questions, but I have been following the
    mailing list for some time and greatly admire some of the excellent folk
    are are using it.

    <https://learntla.com/introduction/>

    Clifford Heath.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Clifford Heath on Fri May 14 21:52:47 2021
    On 5/14/2021 9:15 PM, Clifford Heath wrote:
    On 12/5/21 11:25 am, Don Y wrote:
    How prevalent are (semi-) formal design methods employed?
    Which?

    [I don't have first-hand knowledge of *anyone* using them]

    Don, IDK if you know about TLA+, but there is a growing community using it. It
    is specifically good at finding errors in protocol (== API) designs (because "TL" means "Temporal Logic"). I haven't used it so can't really answer many questions, but I have been following the mailing list for some time and greatly
    admire some of the excellent folk are are using it.

    My query was more intended to see how *commonplace* such approaches are.
    There are (and have been) many "great ideas" but, from my vantage point,
    I don't see much by way of *adoption*. (Note your own experience with TLA!)

    [The "(Semi-)" was an accommodation for *individuals* who may be
    using such things even though their work environment doesn't]

    So, you either conclude that the methods are all "hype" (not likely),
    *or*, there is some inherent resistance to their adoption. Price?
    (Process) overhead? NIH? Scale? Education? <shrug>

    [Note my followup question to Theo as to how *he/they* ended up with
    their tool/process]

    There seem to be many "lost opportunities" (?) for tools, techniques, processes, etc. I'm just curious as to *why* (or, why *not*).
    Or, said another way, what does a tool/process have to *do* in
    order to overcome this "resistance"?

    It's as if a (professional) writer wouldn't avail himself of a
    spell-checker... Or, a layout guy not running DRCs... (yes,
    I realize this to be an oversimplification; the examples I've
    given are just mouse clicks!)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Don Y on Fri May 14 21:55:15 2021
    On 5/14/2021 9:52 PM, Don Y wrote:
    Or, said another way, what does a tool/process have to *do* in
    order to overcome this "resistance"?

    By "do", I mean in the colloquial sense, not a specific feature
    set, etc.

    I.e., "It has to make my dinner and wash the dishes in order
    for me to consider it worth embracing" (or, "It has to cut
    25% of the development cost from a project")

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Clifford Heath@21:1/5 to Don Y on Sat May 15 18:14:14 2021
    On 15/5/21 2:52 pm, Don Y wrote:
    On 5/14/2021 9:15 PM, Clifford Heath wrote:
    On 12/5/21 11:25 am, Don Y wrote:
    How prevalent are (semi-) formal design methods employed?
    Which?

    [I don't have first-hand knowledge of *anyone* using them]

    Don, IDK if you know about TLA+, but there is a growing community
    using it. It is specifically good at finding errors in protocol (==
    API) designs (because "TL" means "Temporal Logic"). I haven't used it
    so can't really answer many questions, but I have been following the
    mailing list for some time and greatly admire some of the excellent
    folk are are using it.

    My query was more intended to see how *commonplace* such approaches are. There are (and have been) many "great ideas" but, from my vantage point,
    I don't see much by way of *adoption*.  (Note your own experience with
    TLA!)

    My own experience is irrelevant, as I was semi-retired when I first came
    across it. On the other hand, the reason I came across it was I received
    a message from Chris Newcombe (admiring my related work), whose success
    in using it to find a potential failure in DynamoDB that could have
    knocked *Amazon* off the air was a stimulus to many *many* folk learning
    TLA+.


    So, you either conclude that the methods are all "hype" (not likely),
    *or*, there is some inherent resistance to their adoption.  Price?
    (Process) overhead?  NIH?  Scale?  Education?  <shrug>

    For software folk at least, it requires a very different way of
    thinking. The same problem I had promulgating fact-based modeling: both
    address a massive *blind spot* in developer's consciousness.

    Specifically we are unable to consciously detect when there is a failure
    in our logic; because to be conscious of the failure it would have to be
    *not present*. That is, we can only know such things in hindsight, or
    when we deliberately apply specific methods to check our logic. But why
    would we do that when it is "apparent" that our logic is correct?

    It's as if a (professional) writer wouldn't avail himself of a spell-checker...  Or, a layout guy not running DRCs...  (yes,
    I realize this to be an oversimplification; the examples I've
    given are just mouse clicks!)

    These are not the same at all, because those things have rules. There is
    no rule for correct logic.

    Clifford Heath

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From HT-Lab@21:1/5 to Don Y on Sat May 15 08:44:01 2021
    On 14/05/2021 20:37, Don Y wrote:
    On 5/14/2021 2:43 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 5/13/2021 7:25 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    How prevalent are (semi-) formal design methods employed?
    Which?

    We use theorem provers to find bugs in ISA specification:
    https://www.cl.cam.ac.uk/~pes20/sail/

    They're quite handy for finding bugs before they hit silicon...

    But, presumably, only of value if you're a SoC integrator?

    I.e., given COTS devices, what might they reveal to users of
    said devices?

    They can reveal bugs in existing implementations - where they don't
    meet the
    spec and bad behaviour can result.

    However CPU and FPGA design is what we do so that's where we focus our
    efforts.  Depends whether FPGA counts as COTS or not...

    Understood.  Tools fit the application domains for which they were
    designed.

    How did *adoption* of the tool come to pass?  Was it "mandated" by
    corporate
    policy?  Something <someone> stumbled on, played with and then "pitched"
    to management/peers?  Mandated by your industry?  etc.


    It became a must have tool about 2-3 decades ago for the safety
    critical/ avionics/medical industries. Design where becoming so complex
    that simulation could no longer answers questions like, is dead-lock or life-lock possible on our statemachine, can you buffers overflow, do you
    have arithmetic overflow, deadcode, race condition etc. The tools are
    now well established and most of the above questions can be answered
    (with some user constraints) by a simple push button tool. They are
    still expensive (you won't get much change from 20K UK pounds) but most high-end FPGA/ASIC companies use them. They are not a replacement for simulation but one of the tools you need to complete your verification.

    Regards,
    Hans
    www.ht-lab.com


    [Just because a tool "makes sense" -- logically or economically -- doesn't mean it will be adopted, much less *embraced*!]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Clifford Heath on Sat May 15 02:10:01 2021
    On 5/15/2021 1:14 AM, Clifford Heath wrote:

    It's as if a (professional) writer wouldn't avail himself of a
    spell-checker... Or, a layout guy not running DRCs... (yes,
    I realize this to be an oversimplification; the examples I've
    given are just mouse clicks!)

    These are not the same at all, because those things have rules. There is no rule for correct logic.

    Logic is only "correct" if you are applying a prover.

    You can still use formal methods in things like specifications -- where
    there is no "proof" implied. The advantage being that everyone can unambiguously understand the intent of the specification without lots
    of (verbose) "legalese".

    E.g., I did my initial design with OCL as the means by which I conveyed
    my intent to my colleagues. It wasn't that much of a "lift" for them
    to learn the representation well enough to ask pertinent questions and "challenge" the implementation. And, didn't resort to lots of mathematical abstraction to make those points.

    Unfortunately, use in such a document is not suited for "general audiences" because it lacks rationale for each item in the specification. (and,
    relies on some semi-ubiquitous usage to ensure readers CAN read it)

    OTOH, if you're writing the code (or reading it), such documents add
    further texture to what you're seeing (in the programming language).
    Another set of comments, so to speak. Or, a roadmap.

    The alternative is: an /ad hoc/ specification (with some likely incompletely specified set of loose rules) *or* an *absent* specification. Each of these leave gaping holes in the design that (supposedly) follows.

    Again, why the resistance to adopting such a "codified" approach?
    There's no capital outlay required to adopt a *methodology* (unless
    you want/need tools). It's as if the effort is seen as an *additional*
    effort -- but with no perception of a "return". Is this because the
    "return" doesn't stand out and have flashing lights surrounding it?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to HT-Lab on Sat May 15 02:48:24 2021
    On 5/15/2021 12:44 AM, HT-Lab wrote:
    On 14/05/2021 20:37, Don Y wrote:
    On 5/14/2021 2:43 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    On 5/13/2021 7:25 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    How prevalent are (semi-) formal design methods employed?
    Which?

    We use theorem provers to find bugs in ISA specification:
    https://www.cl.cam.ac.uk/~pes20/sail/

    They're quite handy for finding bugs before they hit silicon...

    But, presumably, only of value if you're a SoC integrator?

    I.e., given COTS devices, what might they reveal to users of
    said devices?

    They can reveal bugs in existing implementations - where they don't meet the
    spec and bad behaviour can result.

    However CPU and FPGA design is what we do so that's where we focus our
    efforts. Depends whether FPGA counts as COTS or not...

    Understood. Tools fit the application domains for which they were designed. >>
    How did *adoption* of the tool come to pass? Was it "mandated" by corporate >> policy? Something <someone> stumbled on, played with and then "pitched"
    to management/peers? Mandated by your industry? etc.


    It became a must have tool about 2-3 decades ago for the safety critical/ avionics/medical industries.

    But these are industries with inherently high levels of overhead -- possibly suggesting (market) "efficiency" (in other industries) as a reason that discourages adoption.

    So, if it/they have value *there*, why aren't it/they embraced EVERYWHERE? Obviously, other product offerings in other industries face similar design problems...

    Design where becoming so complex that simulation
    could no longer answers questions like, is dead-lock or life-lock possible on our statemachine, can you buffers overflow, do you have arithmetic overflow, deadcode, race condition etc. The tools are now well established and most of the above questions can be answered (with some user constraints) by a simple push button tool. They are still expensive (you won't get much change from 20K
    UK pounds) but most high-end FPGA/ASIC companies use them. They are not a replacement for simulation but one of the tools you need to complete your verification.

    Regards,
    Hans
    www.ht-lab.com


    [Just because a tool "makes sense" -- logically or economically -- doesn't >> mean it will be adopted, much less *embraced*!]


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Theo@21:1/5 to Don Y on Sat May 15 12:02:04 2021
    Don Y <blockedofcourse@foo.invalid> wrote:
    Understood. Tools fit the application domains for which they were designed.

    How did *adoption* of the tool come to pass? Was it "mandated" by corporate policy? Something <someone> stumbled on, played with and then "pitched"
    to management/peers? Mandated by your industry? etc.

    [Just because a tool "makes sense" -- logically or economically -- doesn't mean it will be adopted, much less *embraced*!]

    We're a university research lab. We developed the tool in response to two trends:

    - growing complexity of systems and the increasing prevalence of bugs in implementation (for example in memory coherency subsystems).

    - proposing security extensions to architectures and wanting to be able to
    show that what we've proposed doesn't have any loopholes in it. It provides confidence to us and to people adopting the technology that the technology
    is robust, and there won't be problems once silicon is deployed and
    impossible to fix later.

    (that's not to say there won't be entirely new classes of attacks coming out
    of left-field in the way that Spectre surprised a lot of people, but are at least trying to reason about the attacks we know about)

    Because the costs of respinning are so high, chip design is all about verification. Doing it in a formal sense is a step up from hand-writing or randomly-generating tests. I don't think the industry needs convincing that it's a good idea in abstract - it's mainly making the benefits outweigh the costs. A lot of the work (eg in the RISC-V community) is about bringing
    down those costs.

    Theo

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Clifford Heath@21:1/5 to Don Y on Sat May 15 20:54:10 2021
    On 15/5/21 7:10 pm, Don Y wrote:
    On 5/15/2021 1:14 AM, Clifford Heath wrote:

    It's as if a (professional) writer wouldn't avail himself of a
    spell-checker...  Or, a layout guy not running DRCs...  (yes,
    I realize this to be an oversimplification; the examples I've
    given are just mouse clicks!)

    These are not the same at all, because those things have rules. There
    is no rule for correct logic.

    Logic is only "correct" if you are applying a prover.


    I was loose with terminology. People tend to think that their
    "reasoning" is correct and doesn't need to be logically analysed or
    proved. They're wrong, but the blind spot is unavoidable.

    Of course, there's also the problem that a thing may be "proved" yet be (undetectably, until it becomes detectable) not what would have been
    wanted - if it had been possible to foresee the failure mode.


    You can still use formal methods in things like specifications

    That's how I used it, for static modelling (modelling all possible
    "states of the world" as they may exist at a point in time). Although
    dynamic modelling is much more exciting, it is rarely difficult once the correct static model has been agreed.

    <http://dataconstellation.com/ActiveFacts/CQLIntroduction.html>

    Unfortunately, use in such a document is not suited for "general audiences"

    The goal of CQL is to make the formal model suitable for (and expressed
    in the language of) anyone generally familiar with the domain being
    modelled.

    because it lacks rationale for each item in the specification.

    Actually rationale is seldom needed. What is needed is an example of the scenario that is allowed or disallowed by each definition. The example
    is almost always an adequate rationalisation.
    The alternative is: an /ad hoc/ specification (with some likely
    incompletely
    specified set of loose rules) *or* an *absent* specification.  Each of
    these leave gaping holes in the design that (supposedly) follows.

    That's precisely true. That's why analytic (formal) models are needed.

    Again, why the resistance to adopting such a "codified" approach?

    Hubris, mostly. People genuinely don't see the need until enough
    experience has humbled them, and by then, their accumulated caution and tentativeness mean their industry sees them as dinosaurs.

    Clifford Heath.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Clifford Heath on Sat May 15 12:26:30 2021
    On 5/15/2021 3:54 AM, Clifford Heath wrote:
    On 15/5/21 7:10 pm, Don Y wrote:
    Unfortunately, use in such a document is not suited for "general audiences"

    The goal of CQL is to make the formal model suitable for (and expressed in the
    language of) anyone generally familiar with the domain being modelled.

    I opted for OCL as my system is object-based, UML is relatively easy to understand and the way things would be expressed closely mimics the
    way those constraints would be imposed in the code. Any "translation"
    effort would introduce other opportunities for errors to creep in.

    because it lacks rationale for each item in the specification.

    Actually rationale is seldom needed. What is needed is an example of the scenario that is allowed or disallowed by each definition. The example is almost always an adequate rationalisation.

    I disagree. I find many cases where I need to resort to prose to
    explain why THIS implementation choice is better than THAT. Just
    *stating* (or implying) that it is leaves too much to the Reader,
    as an "exercise" ("Now WHY, exactly, would this be a better approach?")

    For example, I had to make a decision, early on, as to whether or
    not "communications" would be synchronous or asynchronous. There
    are advantages and drawbacks to each approach. From a performance
    standpoint, one can argue that async wins. But, from a cognitive
    standpoint, sync is considerably easier for "users" (developers)
    to "get right"; message and reply are temporally bound together
    so the user doesn't have to, *later*, sort out which message a
    subsequent reply correlates with (icky sentence structure but it's
    early in the morning :< )

    The fact that comms then become blocking operations means any
    desire for concurrency has to be addressed with other mechanisms
    (e.g., set up a thread to do the comms so the "main" thread can
    keep working and make the coordination between these two -- which
    now appear asynchronous -- to be more visible.)

    Or, trying to explain the many ways a particular comm can fail
    (e.g., what if the party on the other end never listens?
    Or, *can't* listen? Or, the processor hosting it powers down?
    Or...) and how the response/detection of that failure can
    vary based on where in the "failure" it occurs.

    The alternative is: an /ad hoc/ specification (with some likely incompletely >> specified set of loose rules) *or* an *absent* specification. Each of these >> leave gaping holes in the design that (supposedly) follows.

    That's precisely true. That's why analytic (formal) models are needed.

    I'm not sure the models need to be mechanically verifiable. What *needs*
    to happen is they need to be unambiguous and comprehensive. You should
    be able to look at one (before or after the code is written) and convince yourself that it addresses every contingency -- as well as HOW it does so
    (to the appearance of other actors)

    Again, why the resistance to adopting such a "codified" approach?

    Hubris, mostly. People genuinely don't see the need until enough experience has
    humbled them, and by then, their accumulated caution and tentativeness mean their industry sees them as dinosaurs.

    "Hubris" suggests overconfidence (in their abilities). I'm not sure
    that's the case.

    I started looking for a "common ground" in which I could express my current design when I "discovered" that, not only is there no such thing, but that there isn't even a common approach (methodology) to design. It's not
    a question of whether method A or method B is the more common (and, thus,
    more readily recognizable to adopt in my presentation) but, that NONE OF
    THE ABOVE is the clear winner!

    [I couldn't even get a concensus on how to *diagram* relationships
    between actors/objects just for illustrative purposes!]

    This prompted the question, here.

    I've been looking back over the past experiences I've had with getting
    folks to "move forward" to try to see if I can identify the issue that
    leads to this resistance.

    Early on, I can argue that a lack of understanding for the (software)
    design process led some employers to skimp in ways that they might
    not have realized. E.g., when a development seat cost many thousands of
    1975 dollars, you could see how an employer, new to using MPUs *in*
    their products, could decide to "share" a single seat among several
    developers.

    Similarly, when ICEs came on the scene.

    And, HLLs.

    But, those issues are largely behind us as there are many "free"/cheap
    tools that can make these sorts of decisions "no brainers".

    I can also see how scheduling pressures could lead to a resistance
    to adopt new methods that *claim* to improve productivity. I've
    never been in an environment where time was allotted to explore
    and learn new tools and techniques; there's another product that's
    anxious to make its way to manufacturing (and YOU don't want to
    be the one responsible for stalling the production line!)

    I can see how fear/uncertainty can lead individuals (and organizations
    as organizations are, at their heart, just individuals) to resist
    change; the process you know (regardless of how bad it might be) is
    safer than the process yet to BE known!

    For things like specification and modeling, I can see a preception of
    it as being a duplication of effort -- esp the more formally those
    are expressed.

    And, for some, I think an amazing lack of curiosity can explain their
    clinging to The Way We Always Did It. Or, the lack of "compensation"
    for the "risk" they may be taking ("Heads, you win; tails, I lose!")

    *Most* of these arguments can be rationalized by "employees" -- the
    excuse that THEY (personally) don't have any control over THEIR
    work environment. OK, I'll avoid getting into *that* argument...

    But, for folks working in more permissive environments (e.g.,
    independent contractors), most of the arguments crumble. YOU can
    make the time to learn a technique/tool; YOU can buy the tool;
    YOU can gauge its impact on YOUR productivity; etc.

    I have a lot of respect for my colleagues -- they've all got
    proven track records with significant projects. Yet, they still
    fall into this pattern of behavior -- clinging to past processes
    instead of exploring new. To their credit, they will make an
    effort to understand the approaches and technologies that *I*
    pursue -- but, that's more the result of a personal relationship
    (than a BUSINESS one). Yet, I never hear any epiphanies where
    they exclaim "this is SO COOL"... is the pull of the familiar SO
    overwhelming?

    [I left the 9-to-5 world primarily so I could dabble in other
    fields, techniques, etc. I have no desire to make "model 2",
    especially if it will follow "process 1"!]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Theo on Sat May 15 15:07:15 2021
    On 5/15/2021 4:02 AM, Theo wrote:
    Don Y <blockedofcourse@foo.invalid> wrote:
    Understood. Tools fit the application domains for which they were designed. >>
    How did *adoption* of the tool come to pass? Was it "mandated" by corporate >> policy? Something <someone> stumbled on, played with and then "pitched"
    to management/peers? Mandated by your industry? etc.

    [Just because a tool "makes sense" -- logically or economically -- doesn't >> mean it will be adopted, much less *embraced*!]

    We're a university research lab. We developed the tool in response to two trends:

    - growing complexity of systems and the increasing prevalence of bugs in implementation (for example in memory coherency subsystems).

    - proposing security extensions to architectures and wanting to be able to show that what we've proposed doesn't have any loopholes in it. It provides confidence to us and to people adopting the technology that the technology
    is robust, and there won't be problems once silicon is deployed and impossible to fix later.

    OK. Both make sense. And, both are hard (if not impossible) to "fix"
    at layers *above* the hardware.

    The next, most practical, question is: how do you encourage its adoption? Publishing papers is what professors/grad students "do". That's
    different from actually getting folks to *use* something that you've developed/written about.

    (a casual examination of the amount of "stuff" that has come out of
    academia and "gone nowhere" -- despite some value! -- should make
    this evident)

    (that's not to say there won't be entirely new classes of attacks coming out of left-field in the way that Spectre surprised a lot of people, but are at least trying to reason about the attacks we know about)

    Because the costs of respinning are so high, chip design is all about verification. Doing it in a formal sense is a step up from hand-writing or randomly-generating tests. I don't think the industry needs convincing that it's a good idea in abstract - it's mainly making the benefits outweigh the costs. A lot of the work (eg in the RISC-V community) is about bringing
    down those costs.

    Understood. In the past, I've had to rely on "good eyes" to spot problems
    in designs. Folks tend to only test how they *think* the design SHOULD perform. So, they often omit checking for things that they don't imagine
    it (mis?)doing.

    <do something; witness crash>
    "What did you just do?"
    "This..."
    "You're not supposed to do that!"
    "Then why did you/it LET ME? Did I break some *law*???"

    [Thankfully, I have a nack for identifying MY "assumptions" and the
    liabilities that they bring to a design]

    A suitably aggressive tool can avoid this bias and just hammer at every
    nit it can algorithmically deduce/exploit.

    A colleague designed a video scaler for a printer. When I started poking at
    it with real numbers, I discovered cases where the PHYSICAL image size
    produced on a B-size page was actually SMALLER than that produced on an
    A-size page (presumably, you'd print on larger paper to get a larger image!).

    Ooops!

    Given the number of variations in how the interface could be configured,
    he'd never have been able to exhaustively test all cases. So, my observation was just serendipitous.

    OTOH, *seeing* what I'd uncovered gave him cause to look more carefully
    at his implementation *before* sending it off to fab (which would have cost money AND calendar time -- leading to a delayed market introduction).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Clifford Heath@21:1/5 to Don Y on Sun May 16 11:41:32 2021
    On 16/5/21 5:26 am, Don Y wrote:
    On 5/15/2021 3:54 AM, Clifford Heath wrote:
    On 15/5/21 7:10 pm, Don Y wrote:
    Unfortunately, use in such a document is not suited for "general
    audiences"

    The goal of CQL is to make the formal model suitable for (and
    expressed in the language of) anyone generally familiar with the
    domain being modelled.

    I opted for OCL as my system is object-based, UML is relatively easy to understand and the way things would be expressed closely mimics the
    way those constraints would be imposed in the code.  Any "translation" effort would introduce other opportunities for errors to creep in.

    It's very difficult to teach non-programmers to read OCL. But I agree on
    the need for code generation directly from the model, without
    translation. That's what CQL does too.

    Actually rationale is seldom needed. What is needed is an example of
    the scenario that is allowed or disallowed by each definition. The
    example is almost always an adequate rationalisation.

    I disagree.  I find many cases where I need to resort to prose to
    explain why THIS implementation choice is better than THAT.

    Yes, where that is needed, CQL provides context-notes too. In fact I
    select four specific categories of rationale ("so that", "because", "as
    opposed to" and "to avoid"), with optional annotations saying who agreed
    and when.

    For example, I had to make a decision, early on, as to whether or
    not "communications" would be synchronous or asynchronous.  There
    are advantages and drawbacks to each approach.  From a performance standpoint, one can argue that async wins.  But, from a cognitive standpoint, sync is considerably easier for "users" (developers)
    to "get right"; message and reply are temporally bound together
    so the user doesn't have to, *later*, sort out which message a
    subsequent reply correlates with (icky sentence structure but it's
    early in the morning  :< )

    Yes, I'm well familiar with that problem. Best decade of my working life
    was building a development tool that exclusively used message passing.
    For the most part it's amazingly liberating, but sometimes frustrating.

    That's precisely true. That's why analytic (formal) models are needed.

    I'm not sure the models need to be mechanically verifiable.

    Even without verification, the model must only make rules that are *in principle* machine-verifiable. If a rule is not verifiable, it's just
    waffle with no single logical meaning. If it has a single logical
    meaning, it is in principle machine-verifiable.

      What *needs*
    to happen is they need to be unambiguous and comprehensive.

    Comprehensive is impossible. There's always a possibility for more
    detail in any real-world system. But unambiguous? Yes, that requires
    that it is a formal part of a formal system which allows its meaning to
    be definitively stated. That is, for any scenario, there exists a
    decision procedure which can determine whether that scenario is allowed
    or is disallowed (this is what is meant when a logical system is termed "decidable").

      You should
    be able to look at one (before or after the code is written) and convince yourself that it addresses every contingency

    It's a great goal, but it is *in principle* impossible. Correctness of
    every contingency requires that the model matches the "real world", and
    Godel's theorem shows that it's impossible to prove that.


    Again, why the resistance to adopting such a "codified" approach?

    Hubris, mostly. People genuinely don't see the need until enough
    experience has humbled them, and by then, their accumulated caution
    and tentativeness mean their industry sees them as dinosaurs.

    "Hubris" suggests overconfidence (in their abilities).  I'm not sure
    that's the case.

    I started looking for a "common ground" in which I could express my current design

    Fact-based modelling uses language as an access point to people's mental models, by analysing "plausible utterances" or "speech acts" for their
    logical intent, and building a formal model that captures the domain
    *using their own terms and phrases*.

    Poor though it is, there exists no better tool than natural language to
    explore and express common ground. CQL provides a two-way bridge between
    that and formal logic, so that any mathematically formal statement can
    be unambiguously expressed using natural sentences, and every fact in
    the domain can be expressed using at least one agreed sentence using a restricted natural grammar that is also mathematically formal (meaning, unambiguously parseable to a logical expression).

    when I "discovered" that, not only is there no such thing,

    Each model needs to be formulated by agreement in each case. The only
    way to reach agreement is to express scenarios and formalise ways of
    expressing them, so that any acceptable statement can be analysed for
    its logical intent.

    This works because every functioning business already has ways to talk
    about everything that matters to it. Fact-based modeling captures those expressions and formalises them, using their own words to express the
    result, so agreement can be reached. It's little use to formalise a rule
    in a way that cannot be verified by the people who proposed it - one
    cannot reach agreement that way. Many MANY development failures fall
    into the trap of "but you said... no but I meant....!", or "I didn't say
    that because it's just common sense! What kind of fool are you?"

    I've been looking back over the past experiences I've had with getting
    folks to "move forward" to try to see if I can identify the issue that
    leads to this resistance.

    "Every man's way is right in his own eyes" - Proverbs 21:2

    People can't see the flaws in their own logic, because their logic is
    flawed. They resist methodical attempts to correct them, because they're already "right".

    I've
    never been in an environment where time was allotted to explore
    and learn new tools and techniques;

    I know a number of companies that implement "10% time", for employees to explore any technology or personal projects they feel might be relevant
    to the business (or to their ability to contribute to it). I think
    Google is one of these, in fact, though my examples are closer to home.

    For things like specification and modeling, I can see a preception of
    it as being a duplication of effort -- esp the more formally those
    are expressed.

    Unfortunately a lot of companies view testing in the same way. As if it
    wasn't possible for them to make a mistake.

    And, for some, I think an amazing lack of curiosity can explain their clinging to The Way We Always Did It.  Or, the lack of "compensation"
    for the "risk" they may be taking ("Heads, you win; tails, I lose!")

    Folk who have been "lucky" a few times tend to become the "golden child"
    and get promoted. Once in senior positions they're much more likely to
    reject techniques which could discover that "the emperor has no clothes"

    I have a lot of respect for my colleagues -- they've all got
    proven track records with significant projects.  Yet, they still
    fall into this pattern of behavior -- clinging to past processes
    instead of exploring new.

    I look forward to hearing of your experiences with TLA+, Alloy, or CQL.
    I promise that it will be worth your effort.

    Clifford Heath.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Clifford Heath on Sun May 16 00:35:27 2021
    On 5/15/2021 6:41 PM, Clifford Heath wrote:
    On 16/5/21 5:26 am, Don Y wrote:
    On 5/15/2021 3:54 AM, Clifford Heath wrote:
    On 15/5/21 7:10 pm, Don Y wrote:
    Unfortunately, use in such a document is not suited for "general audiences"

    The goal of CQL is to make the formal model suitable for (and expressed in >>> the language of) anyone generally familiar with the domain being modelled. >>
    I opted for OCL as my system is object-based, UML is relatively easy to
    understand and the way things would be expressed closely mimics the
    way those constraints would be imposed in the code. Any "translation"
    effort would introduce other opportunities for errors to creep in.

    It's very difficult to teach non-programmers to read OCL. But I agree on the need for code generation directly from the model, without translation. That's what CQL does too.

    My target audience is programmers.

    Actually rationale is seldom needed. What is needed is an example of the >>> scenario that is allowed or disallowed by each definition. The example is >>> almost always an adequate rationalisation.

    I disagree. I find many cases where I need to resort to prose to
    explain why THIS implementation choice is better than THAT.

    Yes, where that is needed, CQL provides context-notes too. In fact I select four specific categories of rationale ("so that", "because", "as opposed to" and "to avoid"), with optional annotations saying who agreed and when.

    I present my descriptions in "multimedia prose" so I can illustrate
    (and animate) to better convey the intent.

    In some domains, "text" is just a terrible choice for explaining what
    you're doing (e.g., speech synthesis, gesture recognition, etc.).
    It's considerably easier to emit sounds or animate graphics (in these
    mentioned examples) than to try to describe what you're addressing.

    For example, I had to make a decision, early on, as to whether or
    not "communications" would be synchronous or asynchronous. There
    are advantages and drawbacks to each approach. From a performance
    standpoint, one can argue that async wins. But, from a cognitive
    standpoint, sync is considerably easier for "users" (developers)
    to "get right"; message and reply are temporally bound together
    so the user doesn't have to, *later*, sort out which message a
    subsequent reply correlates with (icky sentence structure but it's
    early in the morning :< )

    Yes, I'm well familiar with that problem. Best decade of my working life was building a development tool that exclusively used message passing.
    For the most part it's amazingly liberating, but sometimes frustrating.

    From the comments I've received from colleagues using my codebase,
    the biggest problems seem to be related to (true) concurrency and
    the possibility of a remote host failing while the message is in transit.
    Folks seem to be used to "local" results in short order (transport delays
    not being significant even in an IPC)

    That's precisely true. That's why analytic (formal) models are needed.

    I'm not sure the models need to be mechanically verifiable.

    Even without verification, the model must only make rules that are *in principle* machine-verifiable. If a rule is not verifiable, it's just waffle with no single logical meaning. If it has a single logical meaning, it is in principle machine-verifiable.

    Yes. But *getting* such a tool (or requiring its existence before adopting
    a model strategy) can be prohibitive.

    What *needs*
    to happen is they need to be unambiguous and comprehensive.

    Comprehensive is impossible.

    "Comprehensive" only has to apply to the world-view seen through the
    interface. If you're dealing with a particular actor/object/server,
    all you have to do is be able to define EVERY possible outcome that
    a client could experience using that interface.

    If the sky turns yellow and every cat on the planet dies... <shrug>
    (likely this isn't pertinent to any interface)

    OTOH, if a process can die while servicing a request, or run out
    of resources, or encounter invalid operands, or... then each of
    those possibilities have to be enumerated.

    Much of the additional cruft (in my world) comes from the fact
    that the request is executing "elsewhere" (even if on the same
    host) while YOUR code appears to be running correctly; there's
    no guarantee that the request is executing AT ALL.

    There's always a possibility for more detail in
    any real-world system. But unambiguous? Yes, that requires that it is a formal
    part of a formal system which allows its meaning to be definitively stated. That is, for any scenario, there exists a decision procedure which can determine whether that scenario is allowed or is disallowed (this is what is meant when a logical system is termed "decidable").

    You should
    be able to look at one (before or after the code is written) and convince
    yourself that it addresses every contingency

    It's a great goal, but it is *in principle* impossible. Correctness of every contingency requires that the model matches the "real world", and Godel's theorem shows that it's impossible to prove that.

    Again, it only has to match the portion of the real world visible to
    the interface in question.

    A filesystem interface doesn't have to address the possibility of
    the polar ice caps melting...

    Again, why the resistance to adopting such a "codified" approach?

    Hubris, mostly. People genuinely don't see the need until enough experience >>> has humbled them, and by then, their accumulated caution and tentativeness >>> mean their industry sees them as dinosaurs.

    "Hubris" suggests overconfidence (in their abilities). I'm not sure
    that's the case.

    I started looking for a "common ground" in which I could express my current >> design
    when I "discovered" that, not only is there no such thing,

    You've missed the point. I was addressing the point that modeling, itself,
    is relatively uncommon (at least in "these circles"). So, trying to find
    some common subset that everyone (most practitioners) were using was
    pointless.

    If no one speaks Esperanto, then it is foolish to present your documentation *in* Esperanto!

    I've been looking back over the past experiences I've had with getting
    folks to "move forward" to try to see if I can identify the issue that
    leads to this resistance.

    "Every man's way is right in his own eyes" - Proverbs 21:2

    People can't see the flaws in their own logic, because their logic is flawed. They resist methodical attempts to correct them, because they're already "right".

    That's more pessimistic than I -- a cynic -- would claim.

    When I approached my colleagues re: this topic, no one tried to
    defend their current practices. I think they all realize they
    *could* be doing things "better". This led to my contemplation of
    why they *aren't* moving in those directions.

    It's like the guy who KNOWS he should be backing up his computer...
    yet doesn't. Why????

    I've
    never been in an environment where time was allotted to explore
    and learn new tools and techniques;

    I know a number of companies that implement "10% time", for employees to explore any technology or personal projects they feel might be relevant to the
    business (or to their ability to contribute to it). I think Google is one of these, in fact, though my examples are closer to home.

    Yes, but 10% of 40 hours isn't a helluvalot of time. I would imagine
    other things chip away at that time.

    I've seen companies budget for "reading technical journals/rags", etc.
    But, unless you're really aggressive with your time management, it's
    unlikely that a few hours a week is going to amount to much -- other
    than priming you for your after-work pursuit of the problem (on YOUR time)

    For things like specification and modeling, I can see a preception of
    it as being a duplication of effort -- esp the more formally those
    are expressed.

    Unfortunately a lot of companies view testing in the same way. As if it wasn't
    possible for them to make a mistake.

    I think everyone (management) would LOVE to test. If it didn't take any
    TIME! I don't think the same is true of developers! Most seem to
    consider testing "tedious" -- not very fulfilling. Hence they only test
    for what they KNOW the code/product should do!

    [I can't count the number of times I've been present when a developer
    witnessed a "can't happen" event. HE SAW IT WITH HIS OWN EYES -- he's
    not relying on a possibly distorted report from the field. Yet,
    unless it jumps up and yells at him -- "The Problem is Right Here!" -- he
    will eventually just shrug and move on... as if it had never happened!]

    And, tell management that you've baked in X months of testing
    into your schedule and they'll cut X months from it -- to get
    product out the door sooner.

    They *may* pay you lip service: "You can test while we're
    building our first production lots" -- but, "something" will
    come up to divert you from even that.

    And, for some, I think an amazing lack of curiosity can explain their
    clinging to The Way We Always Did It. Or, the lack of "compensation"
    for the "risk" they may be taking ("Heads, you win; tails, I lose!")

    Folk who have been "lucky" a few times tend to become the "golden child" and get promoted. Once in senior positions they're much more likely to reject techniques which could discover that "the emperor has no clothes"

    I don't know.

    How many ways are there for you to get from your home to your place of business? How many have you EXPLORED?? What would it cost for you to try a different one, each day? Depart 5 minutes earlier -- just in case there
    is some unexpected BIG delay inherent in a particular route. Chances are,
    each route will be roughly comparable in duration...

    How many different development styles have you tried, over the years
    ("tried" meaning "used for an entire project")?

    I like to explore different ways of doing things; I already KNOW how
    THAT way worked out! But, my experience suggests this is not a
    widespread idea/practice! Screw up (take too long, etc.) and
    it's YOUR fault; discover a better way of doing things -- <yawn>

    I have a lot of respect for my colleagues -- they've all got
    proven track records with significant projects. Yet, they still
    fall into this pattern of behavior -- clinging to past processes
    instead of exploring new.

    I look forward to hearing of your experiences with TLA+, Alloy, or CQL.
    I promise that it will be worth your effort.

    It's unlikely that I will try any of them! This is my last "electronics" project (50 years in a field seems like "enough"; time to pursue OTHER interests!) As I don't see any immediate/direct benefit to the folks who
    are already using my codebase (nor any overwhelming use of the tools by others), it would just be an intellectual exercise benefiting only myself
    and would likely complicate *their* understanding of what I'm doing!

    I'll, instead, put the effort to something that will have more tangible results. Maybe some better tools to track the migration of a procedure
    through the system in real-time (instead of /ex post factum/)...?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Clifford Heath@21:1/5 to Don Y on Wed Jun 2 12:03:28 2021
    On 16/5/21 5:35 pm, Don Y wrote:
    On 5/15/2021 6:41 PM, Clifford Heath wrote:
    On 16/5/21 5:26 am, Don Y wrote:
    On 5/15/2021 3:54 AM, Clifford Heath wrote:
    On 15/5/21 7:10 pm, Don Y wrote:
    Unfortunately, use in such a document is not suited for "general
    audiences"

    The goal of CQL is to make the formal model suitable for (and
    expressed in the language of) anyone generally familiar with the
    domain being modelled.
    Actually rationale is seldom needed. What is needed is an example of
    the scenario that is allowed or disallowed by each definition. The
    example is almost always an adequate rationalisation.

    I disagree.  I find many cases where I need to resort to prose to
    explain why THIS implementation choice is better than THAT.

    Yes, where that is needed, CQL provides context-notes too. In fact I
    select four specific categories of rationale ("so that", "because",
    "as opposed to" and "to avoid"), with optional annotations saying who
    agreed and when.

    I present my descriptions in "multimedia prose" so I can illustrate
    (and animate) to better convey the intent.

    All that is needed is to allow the viewer to get to the "Aha!" moment
    where they see why the alternative will fail. Do whatever you have to do
    to achieve that, and you have your rationale. It will vary with the
    situation, and with the audience.

    Even without verification, the model must only make rules that are *in
    principle* machine-verifiable. If a rule is not verifiable, it's just
    waffle with no single logical meaning. If it has a single logical
    meaning, it is in principle machine-verifiable.

    Yes.  But *getting* such a tool (or requiring its existence before adopting a model strategy) can be prohibitive.
    There's always a possibility for more detail in any real-world system.


    Again, why the resistance to adopting such a "codified" approach?
    Hubris, mostly. People genuinely don't see the need until enough
    experience has humbled them, and by then, their accumulated caution
    and tentativeness mean their industry sees them as dinosaurs.

    "Hubris" suggests overconfidence (in their abilities).  I'm not sure
    that's the case.

    I started looking for a "common ground" in which I could express my
    current design when I "discovered" that, not only is there no such thing,

    You've missed the point.  I was addressing the point that modeling, itself, is relatively uncommon (at least in "these circles").  So, trying to find some common subset that everyone (most practitioners) were using was pointless.

    If no one speaks Esperanto, then it is foolish to present your
    documentation *in* Esperanto!

    In relation to software, at least, every modeling language has been a
    private language shared only (or mainly) by systems architects. They're
    all Esperanto, of one kind or another. (ER diagramming has sometimes
    been useful, and occasionally even UML, but usually not)

    As such, it serves only for documenting the system *as designed*, and
    can provide no help to a non-expert in identifying flaws where the
    result would not match the need (as opposed to not working correctly
    within its own frame of reference).

    Because "modelling" has always been subject to this failure, it is seen
    as a pointless exercise. Delivered software is likely to work "as
    designed" yet still mis-match the problem because the design was
    mis-targeted, whether it was modelled or was not.

    The solution to this is good modelling tools that can communicate to *everyone*, not just to programmers. And that's what I spent a decade
    trying to build.

    When I approached my colleagues re: this topic, no one tried to
    defend their current practices.  I think they all realize they
    *could* be doing things "better".  This led to my contemplation of
    why they *aren't* moving in those directions.

    It's easier to "stay in town" where it's comfortable, than to go
    exploring in the wilderness. It's wilderness because there is no
    adoption, and there's no adoption not because no-one has been there, but because they didn't build roads and towns there yet.

    I know a number of companies that implement "10% time",
    Yes, but 10% of 40 hours isn't a helluvalot of time.

    Half a day a week seems like quite a lot to me. If an employee doing
    that can't prove to me that they've discovered something worthy of a
    bigger investment, then I don't believe they have.

    At one company, I spent 18 months trying to evangelise the business need
    for a new technology I'd envisaged. Eventually someone told my manager
    "just give him 2 weeks to prototype it!", and I did. The effect was
    astounding - the entire company (110 people) dropped everything else
    they were doing to work on producing a new product based on my idea.
    That product brought in well over a hundred million dollars over the
    next decade.

    I look forward to hearing of your experiences with TLA+, Alloy, or CQL.
    I promise that it will be worth your effort.

    It's unlikely that I will try any of them!  This is my last "electronics" project (50 years in a field seems like "enough"; time to pursue OTHER interests!)

    And that is why I'm sometimes reluctant to engage in these conversations
    with you, Don. You're the one asking "why does nobody try this", but
    even now you have time to explore (without the demands of delivering a
    result), you're unwilling to do more than talk about that.

    Clifford Heath.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Clifford Heath on Tue Jun 1 22:23:44 2021
    On 6/1/2021 7:03 PM, Clifford Heath wrote:
    On 16/5/21 5:35 pm, Don Y wrote:
    On 5/15/2021 6:41 PM, Clifford Heath wrote:
    On 16/5/21 5:26 am, Don Y wrote:
    On 5/15/2021 3:54 AM, Clifford Heath wrote:
    On 15/5/21 7:10 pm, Don Y wrote:
    Unfortunately, use in such a document is not suited for "general audiences"

    The goal of CQL is to make the formal model suitable for (and expressed in
    the language of) anyone generally familiar with the domain being modelled.
    Actually rationale is seldom needed. What is needed is an example of the >>>>> scenario that is allowed or disallowed by each definition. The example is >>>>> almost always an adequate rationalisation.

    I disagree. I find many cases where I need to resort to prose to
    explain why THIS implementation choice is better than THAT.

    Yes, where that is needed, CQL provides context-notes too. In fact I select >>> four specific categories of rationale ("so that", "because", "as opposed to"
    and "to avoid"), with optional annotations saying who agreed and when.

    I present my descriptions in "multimedia prose" so I can illustrate
    (and animate) to better convey the intent.

    All that is needed is to allow the viewer to get to the "Aha!" moment where they see why the alternative will fail. Do whatever you have to do to achieve that, and you have your rationale. It will vary with the situation, and with the audience.

    I've found that people tend to "lack imagination" when it comes to
    things in which they aren't DEEPLY invested. They often fail to
    see ("go looking for") aspects of a design that aren't superficially
    obvious. This evidenced by how many folks only test their designs
    with inputs they EXPECT to encounter (instead of imagining the larger
    set of inputs *possible*).

    So, I try to develop "interactions" that let the "reader" see what
    I'm trying to illustrate -- and, then, coach them into pondering
    "what ifs" that they've NOT likely imagined and show how those
    win/fail in different alternatives.

    With interactive presentations, I can literally tell them to "do X"
    and know that (if they do), the presentation will clearly show
    the issue that I would, otherwise, find tedious to explain in prose.

    Again, why the resistance to adopting such a "codified" approach?
    Hubris, mostly. People genuinely don't see the need until enough
    experience has humbled them, and by then, their accumulated caution and >>>>> tentativeness mean their industry sees them as dinosaurs.

    "Hubris" suggests overconfidence (in their abilities). I'm not sure
    that's the case.

    I started looking for a "common ground" in which I could express my current
    design when I "discovered" that, not only is there no such thing,

    You've missed the point. I was addressing the point that modeling, itself, >> is relatively uncommon (at least in "these circles"). So, trying to find
    some common subset that everyone (most practitioners) were using was
    pointless.

    If no one speaks Esperanto, then it is foolish to present your documentation >> *in* Esperanto!

    In relation to software, at least, every modeling language has been a private language shared only (or mainly) by systems architects. They're all Esperanto,
    of one kind or another. (ER diagramming has sometimes been useful, and occasionally even UML, but usually not)

    As such, it serves only for documenting the system *as designed*, and can

    That's exactly my goal. I'm looking to "translate" my design into
    form(s) that may be more easily recognizable -- to some "significant"
    group of readers.

    For example, one can design a grammar with totally /ad hoc/ methods.
    But, presenting it a BNF goes a long way to clarifying it to "new
    readers" -- regardless of how it came about (or was implemented).

    provide no help to a non-expert in identifying flaws where the result would not
    match the need (as opposed to not working correctly within its own frame of reference).

    Different goal.

    A "non-expert" reading the BNF of a grammar wouldn't be able to glean
    much about how it *could* be implemented or inconsistencies in any
    potential implementation. But, he *could* construct a valid sentence
    with just that BNF (and not worry about how it gets parsed).

    Because "modelling" has always been subject to this failure, it is seen as a pointless exercise. Delivered software is likely to work "as designed" yet still mis-match the problem because the design was mis-targeted, whether it was
    modelled or was not.

    The solution to this is good modelling tools that can communicate to *everyone*, not just to programmers. And that's what I spent a decade trying to
    build.

    You're trying to solve a different problem than I.

    I agree with your points -- in *that* application domain.

    When I approached my colleagues re: this topic, no one tried to
    defend their current practices. I think they all realize they
    *could* be doing things "better". This led to my contemplation of
    why they *aren't* moving in those directions.

    It's easier to "stay in town" where it's comfortable, than to go exploring in the wilderness. It's wilderness because there is no adoption, and there's no adoption not because no-one has been there, but because they didn't build roads
    and towns there yet.

    I don't think "ease" or "comfort" is the issue. Many of my colleagues
    are involved with very novel designs and application domains. They
    are almost always "treking in the wilderness".

    But, they aren't likely to add additional uncertainty to their efforts
    by tackling novel design techniques on top of a novel application
    (or application domain). There's a limit as to how much "risk"
    one can take on -- especially if you are answerable to "others"
    (the folks who pay the bills).

    I have historically taken on lots of "new experience" because it
    was something I could fold into *my* time... I could decide to
    "eat" the cost of learning a new technology "on my dime" as long
    as I'd budgeted (timewise) to meat my completion estimates for
    clients. *They* don't care if I move all of my development
    tools to UN*X -- as long as they can support my completed
    work with their *Windows* toolchains.

    [I.e., I bear the cost of assuming both worlds work -- the UN*X
    domain for me and the Windows domain for them]

    But, others may see no point in moving to a different hosting
    platform: "What's wrong with Windows?" Should I fault them for
    "staying in town"?

    I know a number of companies that implement "10% time",
    Yes, but 10% of 40 hours isn't a helluvalot of time.

    Half a day a week seems like quite a lot to me. If an employee doing that can't
    prove to me that they've discovered something worthy of a bigger investment, then I don't believe they have.

    Four hours a week is nothing.

    A colleague, many years ago, was assessing the effort for a particular
    task. He made the off-hand comment: "You can't do ANYTHING in 8 hours!"
    Which warranted a chuckle.

    But, in hindsight, there's a sort of truism, there.

    There are costs to starting and stopping activities. Folks who say
    "10% of your time" are obviously TRACKING time. How much time am
    I allotted to handle my daily correspondence (which may be distributed
    across the day and not neatly bundled in a "lump")? How much time
    for project meetings? How much time to browse trade magazines
    to "keep current" with product offerings? Ditto published works?
    Do I have a separate account to charge time to handle questions
    from the manufacturing engineer re: my design? What if I need
    to visit the toilet? Do I have an account against which to charge
    my TIMEKEEPING time?

    Four hours gets frittered away -- unless you start nickel and diming
    your "real" projects with the time "lost" to these other incidentals.

    Even with no one driving my time on a daily basis (self-employed),
    it is still amazing how quickly a day "disappears". A piece of
    test equipment needs repair/calibration/updating, ditto for a
    piece of software, a battery in a UPS dies and a replacement needs
    to be ordered -- and eventually installed. Etc.

    Ever have your boss come to you and say, "The technician who has
    been supporting the XYZ product -- that you designed -- has quit.
    Will you take over the Final Test activities so manufacturing can
    continue to ship product while we look to hire a replacement?
    And, when we hire that replacement, will you train him on the
    design and the process that you put in place? Of course, you
    *still* have that upcoming deadline for the ABC product that
    you are currently working on..."

    At one company, I spent 18 months trying to evangelise the business need for a
    new technology I'd envisaged. Eventually someone told my manager "just give him
    2 weeks to prototype it!", and I did. The effect was astounding - the entire

    How much progress would you have made if that 80 hours had been
    doled out in 4 hour chunks -- over the course of 20 weeks? FIVE MONTHS?

    Few places will let you "drop everything" for a day -- let alone a
    week or two. And, if what you are pursuing will take even longer,
    how likely for you to make REAL progress when that effort is
    diluted over months (years?)

    I donate ~10 hours weekly to local non-profits. That's "a day"
    in my eyes -- as it takes time to get TO anyplace and "charging"
    travel time to the "donation" seems disingenuous. Still, with
    8+ hours available IN A SOLID BLOCK, it is amazing how little
    progress I can make on whatever "goals" I've laid out for that
    day.

    E.g., I refurbished electric wheelchairs at one place. I can
    do this in *about* 10 man-hours. So, surely, one or two "visits"?

    Nope. More like four, on average. Even neglecting time spent
    eating, toilet, "resting", etc. there are just too many other
    activities clamoring for my time:

    "An 18-wheeler just pulled in. Can you help unload it?"
    (do you know how much effort there is in unloading such
    a beast? imagine with *a* helper; imagine with NO helper!)

    "The guy who drives the forklift is out on a delivery. Could
    you move these four Gaylords into the warehouse for us?"
    (of course, they are waiting for it to happen NOW, not once you
    come to a convenient breaking point in your activities)

    "The internet is down. Can you figure out if the problem is
    on site or if we have to call our provider?"
    (likewise waiting -- who's job IS this, anyway?)

    "A gentleman from XYZ Industries is here. They are thinking
    of becoming a sponsor. Could you give him a tour?"
    (Really? With all this grease on my hands and face???)

    Now, remember, this is *MY* time. My *DONATED* time. (like those
    "4 hours" RESERVED for *you*?). I can elect to be a ball-buster
    and simply decline all of these requests. (can you decline the
    request from Manufacturing?) Or, I can postpone MY planned activities
    and try to address these more immediate needs. With *ideal*
    timekeeping, this would still increase the cost of doing what "I want".

    Or, I can take work home and hope to make progress on it
    when THOSE distractions aren't around. (i.e., SOMEONE is
    waiting for that wheelchair and, as justifiable as my
    "excuses" might seem, they are inconvenienced by the delay)

    company (110 people) dropped everything else they were doing to work on producing a new product based on my idea. That product brought in well over a hundred million dollars over the next decade.

    I look forward to hearing of your experiences with TLA+, Alloy, or CQL.
    I promise that it will be worth your effort.

    It's unlikely that I will try any of them! This is my last "electronics"
    project (50 years in a field seems like "enough"; time to pursue OTHER
    interests!)

    And that is why I'm sometimes reluctant to engage in these conversations with you, Don. You're the one asking "why does nobody try this", but even now you

    You've misread my intent. I am not ACCUSING anyone. Rather, I am trying to see how people have AVOIDED "doing this". How are folks designing products if they aren't writing specifications, modeling behaviors, quantifying market requirements, etc.? (if they ARE doing those things, then WHICH are they doing??)

    My *hope* was that there would be Concensus on how to explain (even if /ex
    post factum/) designs so that I could massage *my* "explanation" into a comparable form. I have no interest in revisiting the design process,
    just looking for a way to explain them in a context that others MIGHT
    easily understand.

    have time to explore (without the demands of delivering a result), you're

    Who says I don't have to "deliver a result"?

    unwilling to do more than talk about that.

    Who says I've not "ventured into the uncharted wilderness"?

    I've explored alternative technologies to support universal access to my application -- blind, deaf, ambulatory compromised, cognitive impairment,
    etc. I've had to research the prevalence of these to understand the
    sizes of the affected populations. And, the other demographics
    related to them (e.g., blind-from-birth has different skillsets than blind-from-diabetic-retinopathy; a naive assumption of one or the
    other will likely lead to a suboptimal solution)

    I've created an entirely different way of building applications that
    aren't "pre-biased" to a particular set of user abilities, abstracting
    the concept of "display" away from visual mechanisms without burdening
    the actual application developers with having to understand the different
    UI modalities. Can you use that new-fangled thermostat on your wall
    with your eyes closed? While seated in a wheelchair? Without the use
    of your ARMS/hands?

    I've explored four different speech synthesizer technologies -- implementing one of each -- and tried to quantifiably "evaluate" each of them (despite a complete lack of objective criteria "in the industry"). I've examined other "foreign" languages with an eye towards seeing how they could (by someone else) eventually be supported in the framework that I've designed.

    I've explored haptic technologies, gesture recognition, braille transcription, etc. (How does a blind person tell if a particular port on a network switch
    is seeing any activity? How does a nonverbal user indicate a desire to
    perform a particular action? How does a user confined to a wheelchair
    access a particular appliance? How do you alert a deaf user to an
    "event" -- like a fire, doorbell, timer on the microwave oven expiring,
    laundry being done?)

    I've spent days in an electric wheelchair to experience what it's
    like to have my mobility *constrained* (!) by the device that is
    intended to enhance it. And, how tedious it is to have to remember
    everything that I need to do *in* the kitchen WHILE I am *in*
    the kitchen -- as having to *return* to the kitchen is a chore!
    "Did I turn *off* the oven? Can I afford to gamble on that or
    should I drag myself out of bed, crawl back into the wheelchair
    and check it for myself?"

    I've developed a front-end preprocessor to add additional syntax to
    "standard C" to facilitate seamless support for distributed applications.
    I've explored and developed my own IDL and compiler to back that. I've included support for interface versioning so a running app can persist
    with connections to a version X server while a newly launched app can
    take advantage of version Y on that same server)

    I've evaluated the consequences of various "scripting languages" on
    the abilities of these varied users (try "reading" your code to a
    blind man; see how easily that blind man can write code of his own!
    Amazing just how many punctuation marks there are! :> )

    I've adopted a policy of replacing "const" members in applications
    with dynamically loaded data from an RDBMS. And, created mechanisms
    to allow for the associated contents of that RDBMS to be updated and
    merged, live, with the 24/7/365 applications that are using them.

    I've replaced the notion of a persistent store with that of a
    persistent DBMS. I've had to understand how that DBMS could
    potentially fail and how it's maintenance could be automated
    (cuz you don't have a DBA on hand in every household!)

    I've developed a decomposed RTOS that supports seamless distribution, redundancy, timeliness constraints, capabilities, disjoint namespaces,
    process migration, etc.

    I've implemented a distributed, synchronized clock (so code executing
    on different nodes can know the relative order of their activities).

    I've 22 hardware designs to support the various portions of the system.
    I've had to select components with an eye towards manufacturability
    and cost containment -- not just MY money but the monies that users
    would eventually have to pay.

    Each is PoE powered (so I need to know how to design a PD and a PSE).
    I've run the 4500 ft of CAT5e to support these devices -- in a
    preexisting home without basement or attic). I've figured out how to
    "hide" the devices so they don't make the place look like a tech eye-sore.

    I've had to consider how a malicious adversary could compromise portions
    of that system and still keep it operating (within the reduced capacity available). (What happens if I put a Tesla coil to a port in YOUR
    network switch?? What happens if I disassemble a piece of kit
    that you have "outside"/accessible and use it as a beachhead to
    compromise your system? What if you invite me to spend a weekend
    in your spare bedroom with that "unprotected" network drop by the bed?)

    I've built an AI that screens telephone calls based on observations
    of the "user's" responses to those calls. And, "learns" to
    recognize the voices of callers so it need not rely on easily
    fudgeable identifiers like CID.

    I've built a doorbell that recognizes visitors and gates their access
    to my property.

    I've built a "stealth" server that allows access from The Internet
    without disclosing its presence to malicious probes.

    All of these are "wilderness" areas. All of them took time to research, implement and evaluate.

    So, I guess I have a different idea of "having time to explore" than
    you do! And, a greater emphasis in having a real solution to a real
    problem than an esoteric "technique" that *might* apply to future
    problems. Your time is your own -- as is mine. Do what you think
    "adds value" to your expenditure of same (you only have to answer to
    yourself!)

    This is why I don't post, here -- too few people have any real
    pertinent experience to comment, knowledgeably, on the issues or
    potential solutions. It's an "imagined" problem (or, one that
    they want to think won't affect them, personally). Maybe they'll
    be "lucky" and develop some of these "constraints" on their
    abilities and gain first-hand USER experience, that way! *Hoping*
    that someone else will solve their problems (and chagrined to
    discover the solution is not forthcoming) :-/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Clifford Heath@21:1/5 to Don Y on Wed Jun 2 22:39:34 2021
    On 2/6/21 3:23 pm, Don Y wrote:
    On 6/1/2021 7:03 PM, Clifford Heath wrote:
    On 16/5/21 5:35 pm, Don Y wrote:
    On 5/15/2021 6:41 PM, Clifford Heath wrote:
    Yes, where that is needed, CQL provides context-notes too >>> I present my descriptions in "multimedia prose" so I can illustrate
    (and animate) to better convey the intent.

    All that is needed is to allow the viewer to get to the "Aha!" moment
    where they see why the alternative will fail. Do whatever you have to
    do to achieve that, and you have your rationale. It will vary with the
    situation, and with the audience.

    I've found that people tend to "lack imagination" when it comes to
    things in which they aren't DEEPLY invested.

    It's almost always the case that a simple "if we have *that*, and *this* happens, and we respond like *this*, this would be bad, yeah?" is enough
    to cover 99% of such situations. But as I said, every case is different,
    and I make no criticism of the way you do things. Anyhow, a minor point.

    As such, it serves only for documenting the system *as designed*, and can
    That's exactly my goal.  I'm looking to "translate" my design into
    form(s) that may be more easily recognizable -- to some "significant"
    group of readers.

    No worries. That's what systems architecture languages are for. They
    only seem to get adopted where they're mandated though.

    provide no help to a non-expert in identifying flaws where the result
    would not match the need (as opposed to not working correctly within
    its own frame of reference).
    Different goal.

    Agreed. But the model I produce is *also* compilable to working code,
    and can also be generated into a machine reasoning framework (program
    prover or SAT solver) for formal analysis. My novel contribution is
    doing that in a form of language which is comprehensible to "lay people" without training.

    The solution to this is good modelling tools that can communicate to
    *everyone*, not just to programmers. And that's what I spent a decade
    trying to build.

    You're trying to solve a different problem than I.

    I agree with your points -- in *that* application domain.

    Every problem in every domain must be defined before it can be solved.
    If the definition is not understood by those who have to live with the solution, then it *will* be mis-defined. That is the sad fact behind the software industry's dismal record.

    I don't think "ease" or "comfort" is the issue.  Many of my colleagues
    are involved with very novel designs and application domains.  They
    are almost always "treking in the wilderness".

    But, they aren't likely to add additional uncertainty to their efforts

    And there's the nub. They see these "unproven" technologies as adding uncertainty, where the reality is that they exist to *reduce
    uncertainty. The truth? They're afraid the formal analysis will show
    everyone how wrong they are, and have always been.

    I look forward to hearing of your experiences with TLA+, Alloy, or CQL. >>>> I promise that it will be worth your effort.

    It's unlikely that I will try any of them!  This is my last
    "electronics"
    project (50 years in a field seems like "enough"; time to pursue OTHER
    interests!)

    And that is why I'm sometimes reluctant to engage in these
    conversations with you, Don. You're the one asking "why does nobody
    try this", but even now you

    You've misread my intent.  I am not ACCUSING anyone.  Rather, I am
    trying to
    see how people have AVOIDED "doing this".  How are folks designing
    products if
    they aren't writing specifications, modeling behaviors, quantifying market requirements, etc.?  (if they ARE doing those things, then WHICH are they doing??)

    Good question. I agree with your thoughts. But that has nothing to do
    with what I was getting at.

    have time to explore (without the demands of delivering a result), you're
    Who says I don't have to "deliver a result"?
    unwilling to do more than talk about that.
    Who says I've not "ventured into the uncharted wilderness"?

    No-one. Certainly not me. But you've said you're effectively retiring.
    To me that means you don't have to "deliver a result" any more.
    But it also means that you can take even more time to explore, if you
    like to go even further from the well-trodden paths, so I suggested a
    few interesting paths that you might like to explore in your dotage.
    Perhaps learn something that you could communicate to help less
    experienced folk. So I was a bit surprised when you said you were
    unlikely to do that.

    Instead you blather on about the many things in your past. Guess what?
    Lots of us have many varied experiences in our past. You'd be surprised
    how widely my interests have strayed off the beaten tracks - I'm also a life-long explorer. But this is not about me, or about you, it's about
    how the industries we work in could do even better than we could, and
    what we could yet learn to help them do that. I'd still like to add to
    my legacy, and I hope you would too.

    Not just raise a long discussion leading to interesting areas to
    explore, then announce that it was all useless because you don't intend
    to explore any more.

    All of these are "wilderness" areas.  All of them took time to research, implement and evaluate.

    So, I guess I have a different idea of "having time to explore" than
    you do!

    But that was all in the past. Apparently you no longer have the desire
    to explore. That seems to be what you said anyhow. Correct me if I've
    misread that:

    "It's unlikely that I will try any of them! This is my last
    "electronics" project (50 years in a field seems like "enough""

    Clifford Heath

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Clifford Heath on Wed Jun 2 09:01:26 2021
    On 6/2/2021 5:39 AM, Clifford Heath wrote:

    I don't think "ease" or "comfort" is the issue. Many of my colleagues
    are involved with very novel designs and application domains. They
    are almost always "treking in the wilderness".

    But, they aren't likely to add additional uncertainty to their efforts

    And there's the nub. They see these "unproven" technologies as adding uncertainty, where the reality is that they exist to *reduce uncertainty. The

    They can only *potentially* "reduce uncertainty" after they are understood
    and "mastered" (to whatever degree). A new language, for example, may
    increase their coding efficiency or accuracy -- but, the effort to
    become proficient with it is an added (and unknown) "cost".... that the
    current project (and deadlines/commitments) will have to bear.

    If you have "down time" (and motivation!) between projects, you can
    explore new technologies. If, instead, you are moving from one
    crunch to the next, you have to decide how much "risk" you want to
    add to your current/next undertaking; your boss/client isn't
    likely going to "give you credit" for taking on something that
    *he* doesn't see as necessary.

    truth? They're afraid the formal analysis will show everyone how wrong they are, and have always been.

    I look forward to hearing of your experiences with TLA+, Alloy, or CQL. >>>>> I promise that it will be worth your effort.

    It's unlikely that I will try any of them! This is my last "electronics" >>>> project (50 years in a field seems like "enough"; time to pursue OTHER >>>> interests!)

    And that is why I'm sometimes reluctant to engage in these conversations >>> with you, Don. You're the one asking "why does nobody try this", but even >>> now you

    You've misread my intent. I am not ACCUSING anyone. Rather, I am trying to >> see how people have AVOIDED "doing this". How are folks designing products if
    they aren't writing specifications, modeling behaviors, quantifying market >> requirements, etc.? (if they ARE doing those things, then WHICH are they
    doing??)

    Good question. I agree with your thoughts. But that has nothing to do with what
    I was getting at.

    have time to explore (without the demands of delivering a result), you're >> Who says I don't have to "deliver a result"?
    unwilling to do more than talk about that.
    Who says I've not "ventured into the uncharted wilderness"?

    No-one. Certainly not me. But you've said you're effectively retiring.
    To me that means you don't have to "deliver a result" any more.

    I don't have to deliver a result *after* this project -- which I
    consider to be my last in this area. But, I'd not have invested
    a few hundred kilobucks in something just to piss away time and
    money! If I just wanted to "entertain myself", I could watch
    a helluvalot of movies for that kind of money!

    But it also means that you can take even more time to explore, if you like to go even further from the well-trodden paths, so I suggested a few interesting paths that you might like to explore in your dotage. Perhaps learn something that you could communicate to help less experienced folk. So I was a bit surprised when you said you were unlikely to do that.

    For the "risk" reasons outlined above, I see no value (to me) in
    taking on yet-another "wilderness" issue. I've added enough
    uncertainty to my current effort (have I anticipated every way that
    the hardware can be subverted? have I anticipated every way that a
    malicious "program" can interfere with benign applications? Have
    I chosen components that will STILL be available at the time it all
    comes together? Should I have run CAT6 instead of CAT5e? Will
    vision technology advance enough to make it more practical in
    tracking subjects? ...) So, any additional risk has to yield
    tangible short-term reward. E.g., developing the process migration
    mechanism gives me an "out" if I find myself in need of more resources (MIPS/memory) for some particular activity -- just power up another
    node and move some of the load onto it!

    And, design methodologies aren't likely to be of help to me AFTER
    this project (as I want to pursue other interests) so it's an investment
    with no LONG-term payoff.

    OTOH, if "everyone" was expressing designs in some particular
    structured form, then it would have value to THEM (and thus, to me)
    if I invested the time to learn enough to be able to translate
    my existing designs into such form.

    Instead you blather on about the many things in your past.

    I use examples from my past to explain/justify/rationalize
    approaches I've used/avoided as well as constraints with
    which I've had to live.

    I suspect most of my colleagues use THEIR pasts similarly.

    None of our FUTURES are known so pointless to "blather on"
    about those!

    Guess what? Lots of
    us have many varied experiences in our past. You'd be surprised how widely my interests have strayed off the beaten tracks - I'm also a life-long explorer. But this is not about me, or about you, it's about how the industries we work in could do even better than we could, and what we could yet learn to help them
    do that. I'd still like to add to my legacy, and I hope you would too.

    My "legacy" is the products that I've defined and the people I've
    influenced. I have no desire to immortalize myself in any of these
    things; the *things* are the legacy, not my role in their creation.

    E.g., I am hoping that demonstrating how you can approach a (UI) design differently can eliminate "ability-bias" in the way those interfaces
    are designed. I have no desire to go on a "speaking circuit" to
    pitch the approach. Few people will actually see/interact with
    my end result. But, if it causes them to think A LITTLE about
    their UIs, then it will have made a mark (contribution).

    Not just raise a long discussion leading to interesting areas to explore, then
    announce that it was all useless because you don't intend to explore any more.

    All of these are "wilderness" areas. All of them took time to research,
    implement and evaluate.

    So, I guess I have a different idea of "having time to explore" than
    you do!

    But that was all in the past. Apparently you no longer have the desire to explore. That seems to be what you said anyhow. Correct me if I've misread that:

    No, that's all part of *THIS* project. Note the sheer number of "new" techniques/technologies that I listed. Then, tell me I have no desire to explore!

    The verdict is still out as to how well my redundancy mechanisms will work
    IN REAL LIFE. The verdict is still out as to how well my speech synthesizers will perform when confronted with some yet-to-be-conceived string of characters. Or, how well my hardware will survive a *real* "attack" -- ditto for the software. How will my speaker identification software fare when it encounters someone with a severe case of laryngitis? Or, as those voices naturally "age"? Will others be able to get the information they need from
    the documentation I've prepared?

    So, while I have attempted to tackle each of these "wilderness issues",
    I still have no idea as to which, if any, will work IN PRACTICE as
    well as what the shortcomings of those that miss their mark will likely be. I'll BEGIN to know that when all of the technologies have been implemented
    and deployed. As with any project, those things only are truly understandable AFTER the project is complete.

    In *my* case, that will just be an intellectual exercise as my NEW
    activities won't (likely) directly benefit from those past.

    "It's unlikely that I will try any of them! This is my last
    "electronics" project (50 years in a field seems like "enough""

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Clifford Heath@21:1/5 to Don Y on Thu Jun 3 11:54:39 2021
    On 3/6/21 2:01 am, Don Y wrote:
    On 6/2/2021 5:39 AM, Clifford Heath wrote:
    No-one. Certainly not me. But you've said you're effectively retiring.
    To me that means you don't have to "deliver a result" any more.

    OTOH, if "everyone" was expressing designs in some particular
    structured form, then it would have value to THEM (and thus, to me)
    if I invested the time to learn enough to be able to translate
    my existing designs into such form.

    See my comment on SysML at the bottom.

    My "legacy" is the products that I've defined and the people I've influenced.  I have no desire to immortalize myself in any of these
    things; the *things* are the legacy, not my role in their creation.

    Good for you. You have my respect for your work.

    For myself, I feel that demonstrating *how* I worked is potentially more beneficial, even though many fewer people are directly impacted. I hold
    the principle that it is more beneficial to invent a better shovel than
    to dig another hole. So to talk with those few about *how* I did what I
    did (and to ruminate on how I might have done better) is a worthwhile
    activity. Not valuable to me of course, but to them and those who will
    benefit from the things they create.

    E.g., I am hoping that demonstrating how you can approach a (UI) design differently can eliminate "ability-bias" in the way those interfaces
    are designed.

    Without knowing anything about how you've done that, I suspect a lot of
    it was already studied in UIMS work and implemented in products like
    Apollo's "Dialog Manager" during the 1970s and 80s. We implemented many
    of these ideas (along with many of my own) in our startup product OpenUI
    in the 1990's... ideas which still haven't been widely adopted. Yes,
    novel user interface technology was the most productive decade of my
    life so far. There's plenty of room for improvement still!

    It's good to explore "new" territory, but it's also good to firstly plan
    the journey by studying the prior art - just in case it's not new
    territory after all.

      I have no desire to go on a "speaking circuit" to
    pitch the approach.  Few people will actually see/interact with
    my end result.

    I did the speaking circuit for a few years, and I think that might be
    enough for me. I don't have the motivation to become an industry
    fashionista (which is all that most such people are). However, the
    result is that hundreds of people are implementing some of my ideas and spreading them to other people, especially in Europe.

    No, that's all part of *THIS* project.  Note the sheer number of "new" techniques/technologies that I listed.  Then, tell me I have no desire to explore!

    Ok, fair enough! Formal methods are probably a step too far. You might
    get some benefit from a brief look at SysML, which I believe is very
    closely aligned to your needs. I believe SysML v2.0 was released just today.

    Clifford Heath.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Y@21:1/5 to Clifford Heath on Thu Jun 3 01:56:53 2021
    On 6/2/2021 6:54 PM, Clifford Heath wrote:
    On 3/6/21 2:01 am, Don Y wrote:
    On 6/2/2021 5:39 AM, Clifford Heath wrote:
    No-one. Certainly not me. But you've said you're effectively retiring.
    To me that means you don't have to "deliver a result" any more.

    OTOH, if "everyone" was expressing designs in some particular
    structured form, then it would have value to THEM (and thus, to me)
    if I invested the time to learn enough to be able to translate
    my existing designs into such form.

    See my comment on SysML at the bottom.

    My "legacy" is the products that I've defined and the people I've
    influenced. I have no desire to immortalize myself in any of these
    things; the *things* are the legacy, not my role in their creation.

    Good for you. You have my respect for your work.

    For myself, I feel that demonstrating *how* I worked is potentially more beneficial, even though many fewer people are directly impacted. I hold the principle that it is more beneficial to invent a better shovel than to dig another hole. So to talk with those few about *how* I did what I did (and to ruminate on how I might have done better) is a worthwhile activity. Not valuable to me of course, but to them and those who will benefit from the things they create.

    I share my approaches with my colleagues -- but don't "evangelize".
    They know that my "style" tends to evolve with each new project so,
    possibly, they see every "discovery" as just a "latest fad"?

    I let my work do the preaching. People reviewing it will likely be paying far more attention to it than they would to an "opinion piece". And, it's
    a /fait acomplis/, of sorts; there's no question as to whether it works
    or how well it works -- it is concrete evidence of both.

    I bumped into a colleague from a previous employer some years after I'd
    left. He mentioned that he'd taken on the manufacturing support for
    one of my designs. (<shrug> "OK, someone had to do so!") He went on to
    offer that he was initially overwhelmed at the complexity of the circuit.
    Then, quickly added, "But once I realized HOW you did things, everything
    was crystal clear!" No doubt because I tend to be very consistent in
    how I approach a solution (understand what I've done in one place and it directly translates into how I did something else, elsewhere).

    I try to "coerce" people following up on my efforts to adopt the same
    style -- by building mechanisms that make it easier to do things LIKE
    I did than to have to build something from scratch. Sometimes there's
    so much "structure" in my design that it makes it impractical to "lift"
    it, intact, and use it in another application without a fair bit
    of rework.

    [My speech synthesizers are a perfect example of this. As they have
    LOTS of "const" data driving their algorithms, they load that data
    from the RDBMS at runtime. And, as the RDBMS may be updated some
    time (hours, days, months?) later -- WHILE THEY ARE STILL RUNNING -- they install a trigger in the RDBMS that effectively initiates an upcall
    to the synthesizer whenever those table entries are altered.
    ("The data on which you rely has been altered -- by SOMETHING. When
    it is convenient for you to do so, you may want to refresh your
    local sense of that data...") As other folks likely don't have a
    similar mechanism in their applications, all of the related code has
    to be elided and the appropriate const data installed in the binary
    (via the source).]

    Whether any of this wears off, long term, is unknown. But, I do notice an increased use of finite automata among my colleagues, more structure (inefficiency?) in their code, more synchronous hardware designs, etc.
    So, maybe they've "matured" on their own -- or, silently taken up
    some of the design approaches that I've championed, over the years.

    E.g., I am hoping that demonstrating how you can approach a (UI) design
    differently can eliminate "ability-bias" in the way those interfaces
    are designed.

    Without knowing anything about how you've done that, I suspect a lot of it was
    already studied in UIMS work and implemented in products like Apollo's "Dialog
    Manager" during the 1970s and 80s. We implemented many of these ideas (along with many of my own) in our startup product OpenUI in the 1990's... ideas which
    still haven't been widely adopted. Yes, novel user interface technology was the
    most productive decade of my life so far. There's plenty of room for improvement still!

    Most "electronic" UI's tend to rely heavily on vision. And, some sort
    of dexterity (to manipulate controls). There have been numerous (bad)
    attempts to retrofit assistive technology interfaces to interfaces
    originally designed for "able-bodied" users. But, this after-the-fact
    approach tends to be readily apparent to anyone using same.

    Imagine how you'd enter text with a sip-n-puff/mouth stick. Then,
    step back and ask, "Why are we asking for TEXT, here? Is there some
    other way to get the information we want without resorting to that
    'obvious' implementation choice?"

    For example, prior to bedtime (here, my current project), the
    user needs to be apprised of anything that my system can't
    AUTOMATICALLY "fix" on his behalf. (e.g., I can adjust the
    thermostat without his intervention -- once I realize that he
    is retiring; but, I can't turn off the stovetop if he forgot to
    do so, earlier... or, close any open windows, etc.)

    The "obvious" way to convey this information to the user (given
    that there are MANY "things" that *could* potentially need
    attention, even though few actually *will*) is to display a
    graphic rendering of the floorplan with "red" highlights
    in those areas that need attention. A user can then just glance
    at such a display and assure himself that all is well (or,
    discover WHAT needs attention).

    This won't work for a blind user. And, an alternative "display"
    might need to be available for someone confined to a wheelchair
    (e.g., I've located three such "display panels" around the house...
    intended for STANDING sighted users -- out of sight/reach of
    anyone wheelchair bound).

    Reading off a list of "issues" could work for a blind (but *hearing*)
    user. But, what order to present that list -- given that the
    user is likely NOT going to be taking "written" notes? Surely,
    any FIXED order has significant downsides as it would suggest the
    user visit those issues in the PHYSICAL order enumerated instead
    of taking into account his present location.

    And, as he's not consulting a VISUAL display panel, it's silly
    to assume he is located near one!

    A friendlier presentation would inform him of the items needing
    addressing in relationship to his current location (so, your
    implementation should determine that before creating a presentation).
    And, as he likely can access that information without being
    present in a fixed location (which you can determine by HOW he
    is accessing it -- via a fixed "display AUDIO station" or via
    a wireless PORTABLE earpiece), you should "bundle" the items
    needing attention into groups and summarize the AREAS needing
    attention and rely on the user to move to an area that he finds
    convenient, before expanding on the report that you have
    for THAT area.

    And, you wouldn't want to bother *either* user with "conditions"
    that you've grown to know are "normal" for THEIR usage (e.g.,
    we sleep with windows open most of the year so don't alarm on
    those issues).

    You can't effectively "bolt on" this sort of interface to
    one that was originally designed for sighted users; it's a
    significant chore (and would have to be done for EVERY
    "visual presentation) which means it simply doesn't get done.

    Or, is done in some kludgey manner ("Let's let the user
    scan the visual display and INSPECT each area...")

    The developer who thinks in visual terms won't even consider
    these other options in the design of his implementation.

    And, an implementation that expects to tackle it with
    *explicit*, developer defined mechanisms like:

    case (interface_type) {
    visual =>
    video->display(annotated_floor_plan());
    audible =>
    audio->display(list_problem_areas());
    haptic =>
    haptic->display(problems());
    * =>
    // panic
    }

    just invites folks to skimp on some implementations ("interface_types")
    as well as omit those that they can't sort out (easily) at design time.

    It's good to explore "new" territory, but it's also good to firstly plan the journey by studying the prior art - just in case it's not new territory after all.

    I have no desire to go on a "speaking circuit" to
    pitch the approach. Few people will actually see/interact with
    my end result.

    I did the speaking circuit for a few years, and I think that might be enough for me. I don't have the motivation to become an industry fashionista (which is
    all that most such people are). However, the result is that hundreds of people
    are implementing some of my ideas and spreading them to other people, especially in Europe.

    A different set of motivation. I like building "things" that solve problems for "real people" (everyone KNOWS developers are UNREAL! :> ). I don't
    have the patience to "school" anyone -- even if done with a light touch.
    My attitude is, instead, "This is what I did. Have a look for yourself.
    I've got more interesting things to pursue..."

    ["Education" is neverending; what do you do when the NEXT guy comes
    along and expresses an interest? Do you spent the time required to
    educate (and persuade!) him? And, the one after that? Schematics,
    specs, manuals, source code are my proxies for those activities.
    "You're going to HAVE to understand at least some of those media
    if you're going to do this sort of work, so have at it!"]

    I made the mistake of letting a local builder see what I was doing
    with the house... explaining how it is intended to address disabilities
    (both present and evolved) to allow occupants to exist independantly
    for "longer". He just saw it as a way to add $X (X being a relatively
    large number) to the sales price of his so-equipped homes -- instead
    of truly appreciating what I was trying to achieve. After hounding
    me for months ("Is it done, yet?"), I finally stopped taking his calls.

    "Here's an IDEA. You are free to find someone to exploit that idea
    ON YOUR SCHEDULE AND AT YOUR PROFIT MARGIN. But, *I* am not interested."

    No, that's all part of *THIS* project. Note the sheer number of "new"
    techniques/technologies that I listed. Then, tell me I have no desire to
    explore!

    Ok, fair enough! Formal methods are probably a step too far. You might get some
    benefit from a brief look at SysML, which I believe is very closely aligned to
    your needs. I believe SysML v2.0 was released just today.

    I'll look at it but still suspect the fact that there was no outcry
    of "Yeah, we ALL use SysML!" is a testament to its value, to *me*
    (or the folks I expect will be reading my documentation).

    If, for example, I was looking for a way to describe a piece of
    "hardware", I'd seriously consider Verilog or VHDL -- instead of
    (or in addition to) a "plain old schematic". But, that because
    I know there is a sizeable population of "hardware" designers
    who'd be familiar with such an expression. I'd certainly not
    present a series of photomicrographs -- despite being a more
    *exact* representation of the actual implementation! It simply
    would fly over the heads of the intended audience.

    Thanks for the input!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)