How prevalent are (semi-) formal design methods employed?
Which?
[I don't have first-hand knowledge of *anyone* using them]
Don Y <blockedofcourse@foo.invalid> wrote:
How prevalent are (semi-) formal design methods employed?
Which?
We use theorem provers to find bugs in ISA specification: https://www.cl.cam.ac.uk/~pes20/sail/
They're quite handy for finding bugs before they hit silicon...
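To give a flavour of what "checking an implementation against an ISA spec" means, here is a toy Python sketch (an illustration only -- Sail specs are written in their own language, and the real tools reason symbolically with provers rather than by enumeration): an architectural spec for a 4-bit ADD with carry-out, a ripple-carry "implementation", and an exhaustive equivalence check.

    # Toy illustration (not Sail): spec vs. "hardware" for 4-bit ADD.
    # Comparing them exhaustively is the brute-force cousin of what a
    # theorem prover does symbolically for a full ISA.

    def spec_add4(a, b):
        """Architectural spec: result modulo 16, plus the carry-out flag."""
        total = a + b
        return total & 0xF, (total >> 4) & 1

    def impl_add4(a, b):
        """'Hardware': four chained full adders, as RTL would compute it."""
        result, carry = 0, 0
        for i in range(4):
            abit, bbit = (a >> i) & 1, (b >> i) & 1
            result |= (abit ^ bbit ^ carry) << i
            carry = (abit & bbit) | (carry & (abit ^ bbit))
        return result, carry

    # Any mismatch here is a bug found before it hits silicon.
    for a in range(16):
        for b in range(16):
            assert impl_add4(a, b) == spec_add4(a, b), (a, b)
    print("implementation matches spec on all 256 operand pairs")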
On 5/11/2021 9:25 PM, Don Y wrote:
How prevalent are (semi-) formal design methods employed?
https://tinyurl.com/thbjw5j4
On 5/13/2021 7:25 AM, Theo wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
How prevalent are (semi-) formal design methods employed?
Which?
We use theorem provers to find bugs in ISA specification: https://www.cl.cam.ac.uk/~pes20/sail/
They're quite handy for finding bugs before they hit silicon...
But, presumably, only of value if you're a SoC integrator?
I.e., given COTS devices, what might they reveal to users of
said devices?
Don Y <blockedofcourse@foo.invalid> wrote:
On 5/13/2021 7:25 AM, Theo wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
How prevalent are (semi-) formal design methods employed?
Which?
We use theorem provers to find bugs in ISA specification:
https://www.cl.cam.ac.uk/~pes20/sail/
They're quite handy for finding bugs before they hit silicon...
But, presumably, only of value if you're a SoC integrator?
I.e., given COTS devices, what might they reveal to users of
said devices?
They can reveal bugs in existing implementations - where they don't meet the spec and bad behaviour can result.
However CPU and FPGA design is what we do so that's where we focus our efforts. Depends whether FPGA counts as COTS or not...
On 12/5/21 11:25 am, Don Y wrote:
How prevalent are (semi-) formal design methods employed?
Which?
[I don't have first-hand knowledge of *anyone* using them]
Don, IDK if you know about TLA+, but there is a growing community using it. It
is specifically good at finding errors in protocol (== API) designs (because
"TL" means "Temporal Logic"). I haven't used it so can't really answer many
questions, but I have been following the mailing list for some time and greatly
admire some of the excellent folk who are using it.
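For readers who haven't seen it: what TLA+'s checker (TLC) does is, in essence, exhaustively explore every reachable state of a design and test a property in each one. Here is a minimal Python sketch of that idea (purely illustrative -- real TLA+ is its own notation), using a deliberately broken lock protocol whose processes read a shared flag and then set it non-atomically:

    from collections import deque

    IDLE, SAW_FREE, CRITICAL = "idle", "saw_free", "critical"

    def step(state, i, new_pc, new_flag):
        pcs = list(state[0])
        pcs[i] = new_pc
        return (tuple(pcs), new_flag)

    def successors(state):
        pcs, flag = state
        for i in (0, 1):                        # either process may move next
            if pcs[i] == IDLE and flag == 0:
                yield step(state, i, SAW_FREE, flag)   # 1: observe flag clear
            elif pcs[i] == SAW_FREE:
                yield step(state, i, CRITICAL, 1)      # 2: set flag, enter CS
            elif pcs[i] == CRITICAL:
                yield step(state, i, IDLE, 0)          # 3: leave CS, clear flag

    init = ((IDLE, IDLE), 0)
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        if s[0] == (CRITICAL, CRITICAL):    # the invariant a TLA+ spec would assert
            print("mutual exclusion violated:", s)
        for nxt in successors(s):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)

The checker finds the interleaving where both processes observe the flag clear before either sets it -- exactly the class of protocol error the temporal-logic tools are good at surfacing.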
On 5/14/2021 9:15 PM, Clifford Heath wrote:
On 12/5/21 11:25 am, Don Y wrote:
How prevalent are (semi-) formal design methods employed?
Which?
[I don't have first-hand knowledge of *anyone* using them]
Don, IDK if you know about TLA+, but there is a growing community
using it. It is specifically good at finding errors in protocol (==
API) designs (because "TL" means "Temporal Logic"). I haven't used it
so can't really answer many questions, but I have been following the
mailing list for some time and greatly admire some of the excellent
folk are are using it.
My query was more intended to see how *commonplace* such approaches are. There are (and have been) many "great ideas" but, from my vantage point,
I don't see much by way of *adoption*. (Note your own experience with
TLA+!)
So, you either conclude that the methods are all "hype" (not likely),
*or*, there is some inherent resistance to their adoption. Price?
(Process) overhead? NIH? Scale? Education? <shrug>
Or, said another way, what does a tool/process have to *do* in
order to overcome this "resistance"?
It's as if a (professional) writer wouldn't avail himself of a
spell-checker... Or, a layout guy not running DRCs... (yes,
I realize this to be an oversimplification; the examples I've
given are just mouse clicks!)
On 5/14/2021 2:43 AM, Theo wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
On 5/13/2021 7:25 AM, Theo wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
How prevalent are (semi-) formal design methods employed?
Which?
We use theorem provers to find bugs in ISA specification:
https://www.cl.cam.ac.uk/~pes20/sail/
They're quite handy for finding bugs before they hit silicon...
But, presumably, only of value if you're a SoC integrator?
I.e., given COTS devices, what might they reveal to users of
said devices?
They can reveal bugs in existing implementations - where they don't
meet the spec and bad behaviour can result.
However CPU and FPGA design is what we do so that's where we focus our
efforts. Depends whether FPGA counts as COTS or not...
Understood. Tools fit the application domains for which they were
designed.
How did *adoption* of the tool come to pass? Was it "mandated" by corporate
policy? Something <someone> stumbled on, played with and then "pitched"
to management/peers? Mandated by your industry? etc.
[Just because a tool "makes sense" -- logically or economically -- doesn't mean it will be adopted, much less *embraced*!]
It's as if a (professional) writer wouldn't avail himself of a
spell-checker... Or, a layout guy not running DRCs... (yes,
I realize this to be an oversimplification; the examples I've
given are just mouse clicks!)
On 14/05/2021 20:37, Don Y wrote:
On 5/14/2021 2:43 AM, Theo wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
On 5/13/2021 7:25 AM, Theo wrote:
Don Y <blockedofcourse@foo.invalid> wrote:
How prevalent are (semi-) formal design methods employed?
Which?
We use theorem provers to find bugs in ISA specification:
https://www.cl.cam.ac.uk/~pes20/sail/
They're quite handy for finding bugs before they hit silicon...
But, presumably, only of value if you're a SoC integrator?
I.e., given COTS devices, what might they reveal to users of
said devices?
They can reveal bugs in existing implementations - where they don't meet the
spec and bad behaviour can result.
However CPU and FPGA design is what we do so that's where we focus our
efforts. Depends whether FPGA counts as COTS or not...
Understood. Tools fit the application domains for which they were designed.
How did *adoption* of the tool come to pass? Was it "mandated" by corporate
policy? Something <someone> stumbled on, played with and then "pitched"
to management/peers? Mandated by your industry? etc.
It became a must-have tool about 2-3 decades ago for the safety-critical/avionics/medical industries. Designs were becoming so complex that simulation
could no longer answer questions like: is deadlock or livelock possible in our state machine, can your buffers overflow, do you have arithmetic overflow, dead code, race conditions, etc. The tools are now well established and most of the above questions can be answered (with some user constraints) by a simple push-button tool. They are still expensive (you won't get much change from 20K
UK pounds) but most high-end FPGA/ASIC companies use them. They are not a replacement for simulation but one of the tools you need to complete your verification.
Regards,
Hans
www.ht-lab.com
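To make the deadlock question above concrete: the push-button check amounts to exhaustive reachability over the design's state space, flagging any state with no enabled transition. A toy Python sketch of the idea (illustrative only -- the commercial tools work on RTL, and this two-lock example is hypothetical): two tasks take locks A and B in opposite orders, and the checker finds the stuck state.

    from collections import deque

    # Task 0 takes lock A then B; task 1 takes B then A.  State is just
    # (h0, h1): how many locks each task currently holds.
    def successors(state):
        h0, h1 = state
        if h0 == 0 and h1 < 2:
            yield (1, h1)      # task0 takes A (task1 holds A only at h1==2)
        if h0 == 1 and h1 == 0:
            yield (2, h1)      # task0 takes B (free only while h1==0)
        if h0 == 2:
            yield (0, h1)      # task0 releases both
        if h1 == 0 and h0 < 2:
            yield (h0, 1)      # task1 takes B
        if h1 == 1 and h0 == 0:
            yield (h0, 2)      # task1 takes A
        if h1 == 2:
            yield (h0, 0)      # task1 releases both

    seen, frontier = {(0, 0)}, deque([(0, 0)])
    while frontier:
        s = frontier.popleft()
        moves = list(successors(s))
        if not moves:
            print("deadlock: no transition enabled in state", s)  # (1, 1)
        for n in moves:
            if n not in seen:
                seen.add(n)
                frontier.append(n)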
On 5/15/2021 1:14 AM, Clifford Heath wrote:
It's as if a (professional) writer wouldn't avail himself of a
spell-checker... Or, a layout guy not running DRCs... (yes,
I realize this to be an oversimplification; the examples I've
given are just mouse clicks!)
These are not the same at all, because those things have rules. There
is no rule for correct logic.
Logic is only "correct" if you are applying a prover.
You can still use formal methods in things like specifications.
Unfortunately, use in such a document is not suited for "general audiences"
because it lacks rationale for each item in the specification.
The alternative is: an /ad hoc/ specification (with some likely incompletely
specified set of loose rules) *or* an *absent* specification. Each of
these leaves gaping holes in the design that (supposedly) follows.
Again, why the resistance to adopting such a "codified" approach?
On 15/5/21 7:10 pm, Don Y wrote:
Unfortunately, use in such a document is not suited for "general audiences"
The goal of CQL is to make the formal model suitable for (and expressed in the
language of) anyone generally familiar with the domain being modelled.
because it lacks rationale for each item in the specification.
Actually rationale is seldom needed. What is needed is an example of the scenario that is allowed or disallowed by each definition. The example is almost always an adequate rationalisation.
The alternative is: an /ad hoc/ specification (with some likely incompletely specified set of loose rules) *or* an *absent* specification. Each of these leaves gaping holes in the design that (supposedly) follows.
That's precisely true. That's why analytic (formal) models are needed.
Again, why the resistance to adopting such a "codified" approach?
Hubris, mostly. People genuinely don't see the need until enough experience has
humbled them, and by then, their accumulated caution and tentativeness mean their industry sees them as dinosaurs.
Don Y <blockedofcourse@foo.invalid> wrote:
Understood. Tools fit the application domains for which they were designed.
How did *adoption* of the tool come to pass? Was it "mandated" by corporate
policy? Something <someone> stumbled on, played with and then "pitched"
to management/peers? Mandated by your industry? etc.
[Just because a tool "makes sense" -- logically or economically -- doesn't mean it will be adopted, much less *embraced*!]
We're a university research lab. We developed the tool in response to two trends:
- growing complexity of systems and the increasing prevalence of bugs in implementations (for example in memory coherency subsystems).
- proposing security extensions to architectures and wanting to be able to show that what we've proposed doesn't have any loopholes in it. It provides confidence to us and to people adopting the technology that the technology
is robust, and there won't be problems once silicon is deployed and impossible to fix later.
(That's not to say there won't be entirely new classes of attacks coming out of left field in the way that Spectre surprised a lot of people, but we are at least trying to reason about the attacks we know about.)
Because the costs of respinning are so high, chip design is all about verification. Doing it in a formal sense is a step up from hand-writing or randomly generating tests. I don't think the industry needs convincing that it's a good idea in the abstract - it's mainly making the benefits outweigh the costs. A lot of the work (eg in the RISC-V community) is about bringing
down those costs.
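A toy demonstration of that "step up" (a sketch only; real flows use constrained-random and formal tools on RTL, not Python): an 8-bit adder with one injected corner-case bug that random tests almost never hit, but that an exhaustive check cannot miss.

    import random

    def spec_add8(a, b):
        return (a + b) & 0xFF

    def buggy_add8(a, b):
        # Injected bug: the carry into bit 7 is dropped when bits 0..6 of
        # both operands are all ones -- a 4-in-65536 corner case.
        if (a & 0x7F) == 0x7F and (b & 0x7F) == 0x7F:
            return (a + b) & 0x7F
        return (a + b) & 0xFF

    # Random testing: 1000 samples out of 65536 pairs usually miss the corner.
    trials = [(random.getrandbits(8), random.getrandbits(8)) for _ in range(1000)]
    rand_hits = [(a, b) for a, b in trials if buggy_add8(a, b) != spec_add8(a, b)]
    print("random testing found", len(rand_hits), "mismatches")   # usually 0

    # Exhaustive checking (which a prover approximates symbolically) cannot:
    all_hits = [(a, b) for a in range(256) for b in range(256)
                if buggy_add8(a, b) != spec_add8(a, b)]
    print("exhaustive check found", len(all_hits), "mismatches:", all_hits)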
On 5/15/2021 3:54 AM, Clifford Heath wrote:
On 15/5/21 7:10 pm, Don Y wrote:
Unfortunately, use in such a document is not suited for "general
audiences"
The goal of CQL is to make the formal model suitable for (and
expressed in the language of) anyone generally familiar with the
domain being modelled.
I opted for OCL as my system is object-based, UML is relatively easy to
understand, and the way things would be expressed closely mimics the
way those constraints would be imposed in the code. Any "translation" effort would introduce other opportunities for errors to creep in.
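A concrete (toy) example of that "mimics the code" property -- the class and invariant here are hypothetical, not from Don's system: the OCL invariant "context Queue inv: self.count <= self.capacity" maps almost one-for-one onto a check in the implementation.

    # The OCL invariant
    #     context Queue inv: self.count <= self.capacity
    # imposed directly in the (Python) implementation it constrains.
    # Hypothetical example, not Don's actual system.

    class Queue:
        def __init__(self, capacity):
            self.capacity = capacity
            self.items = []

        @property
        def count(self):
            return len(self.items)

        def _check_invariant(self):
            # Same expression as the OCL inv: self.count <= self.capacity
            assert self.count <= self.capacity, "OCL invariant violated"

        def push(self, item):
            if self.count == self.capacity:
                raise OverflowError("queue full")
            self.items.append(item)
            self._check_invariant()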
Actually rationale is seldom needed. What is needed is an example of
the scenario that is allowed or disallowed by each definition. The
example is almost always an adequate rationalisation.
I disagree. I find many cases where I need to resort to prose to
explain why THIS implementation choice is better than THAT.
For example, I had to make a decision, early on, as to whether or
not "communications" would be synchronous or asynchronous. There
are advantages and drawbacks to each approach. From a performance standpoint, one can argue that async wins. But, from a cognitive standpoint, sync is considerably easier for "users" (developers)
to "get right"; message and reply are temporally bound together
so the user doesn't have to, *later*, sort out which message a
subsequent reply correlates with (icky sentence structure but it's
early in the morning :< )
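A sketch of the bookkeeping being described (the API below is hypothetical, not from any project mentioned in the thread): with async messaging the caller must tag every request and later match each reply by correlation id; a synchronous call binds message and reply together so there is nothing to sort out.

    import itertools

    class AsyncPort:
        """Async: the caller owns the correlation problem."""
        def __init__(self, transport):       # transport assumed to offer send()
            self.transport = transport
            self._ids = itertools.count(1)
            self._pending = {}               # correlation id -> reply handler

        def send(self, msg, on_reply):
            cid = next(self._ids)
            self._pending[cid] = on_reply    # remember who asked, for *later*
            self.transport.send({"id": cid, "body": msg})

        def deliver(self, reply):
            # Replies may arrive in any order; match each to its request.
            self._pending.pop(reply["id"])(reply["body"])

    class SyncPort:
        """Sync: message and reply are temporally bound together."""
        def __init__(self, transport):       # transport assumed: send()/recv()
            self.transport = transport

        def call(self, msg):
            self.transport.send({"body": msg})
            return self.transport.recv()["body"]   # blocks; this reply is ours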
That's precisely true. That's why analytic (formal) models are needed.
I'm not sure the models need to be mechanically verifiable. What *needs*
to happen is they need to be unambiguous and comprehensive. You should
be able to look at one (before or after the code is written) and convince
yourself that it addresses every contingency.
Again, why the resistance to adopting such a "codified" approach?
Hubris, mostly. People genuinely don't see the need until enough
experience has humbled them, and by then, their accumulated caution
and tentativeness mean their industry sees them as dinosaurs.
"Hubris" suggests overconfidence (in their abilities). I'm not sure
that's the case.
I started looking for a "common ground" in which I could express my current design
when I "discovered" that, not only is there no such thing,
I've been looking back over the past experiences I've had with getting
folks to "move forward" to try to see if I can identify the issue that
leads to this resistance.
I've never been in an environment where time was allotted to explore
and learn new tools and techniques;
For things like specification and modeling, I can see a perception of
it as being a duplication of effort -- esp the more formally those
are expressed.
And, for some, I think an amazing lack of curiosity can explain their clinging to The Way We Always Did It. Or, the lack of "compensation"
for the "risk" they may be taking ("Heads, you win; tails, I lose!")
I have a lot of respect for my colleagues -- they've all got
proven track records with significant projects. Yet, they still
fall into this pattern of behavior -- clinging to past processes
instead of exploring new.
On 16/5/21 5:26 am, Don Y wrote:
On 5/15/2021 3:54 AM, Clifford Heath wrote:
On 15/5/21 7:10 pm, Don Y wrote:
Unfortunately, use in such a document is not suited for "general audiences"
The goal of CQL is to make the formal model suitable for (and expressed in
the language of) anyone generally familiar with the domain being modelled.
I opted for OCL as my system is object-based, UML is relatively easy to
understand, and the way things would be expressed closely mimics the
way those constraints would be imposed in the code. Any "translation"
effort would introduce other opportunities for errors to creep in.
It's very difficult to teach non-programmers to read OCL. But I agree on the need for code generation directly from the model, without translation. That's what CQL does too.
Actually rationale is seldom needed. What is needed is an example of the scenario that is allowed or disallowed by each definition. The example is almost always an adequate rationalisation.
I disagree. I find many cases where I need to resort to prose to
explain why THIS implementation choice is better than THAT.
Yes, where that is needed, CQL provides context-notes too. In fact I select four specific categories of rationale ("so that", "because", "as opposed to" and "to avoid"), with optional annotations saying who agreed and when.
For example, I had to make a decision, early on, as to whether or
not "communications" would be synchronous or asynchronous. There
are advantages and drawbacks to each approach. From a performance
standpoint, one can argue that async wins. But, from a cognitive
standpoint, sync is considerably easier for "users" (developers)
to "get right"; message and reply are temporally bound together
so the user doesn't have to, *later*, sort out which message a
subsequent reply correlates with (icky sentence structure but it's
early in the morning :< )
Yes, I'm well familiar with that problem. Best decade of my working life was building a development tool that exclusively used message passing.
For the most part it's amazingly liberating, but sometimes frustrating.
That's precisely true. That's why analytic (formal) models are needed.
I'm not sure the models need to be mechanically verifiable.
Even without verification, the model must only make rules that are *in principle* machine-verifiable. If a rule is not verifiable, it's just waffle with no single logical meaning. If it has a single logical meaning, it is in principle machine-verifiable.
What *needs*
to happen is they need to be unambiguous and comprehensive.
Comprehensive is impossible.
There's always a possibility for more detail in
any real-world system. But unambiguous? Yes, that requires that it is a formal
part of a formal system which allows its meaning to be definitively stated. That is, for any scenario, there exists a decision procedure which can determine whether that scenario is allowed or is disallowed (this is what is meant when a logical system is termed "decidable").
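To illustrate that in miniature (a sketch of the idea only, not CQL): if each rule is an executable predicate over a concrete scenario, the decision procedure is simply evaluation -- every scenario is definitively allowed or disallowed.

    from dataclasses import dataclass

    @dataclass
    class Scenario:                      # a concrete, fully specified situation
        items: int
        order_total: float
        customer_credit: float

    RULES = {                            # each rule is a decidable predicate
        "an order must contain at least one item":
            lambda s: s.items >= 1,
        "an order total may not exceed the customer's credit":
            lambda s: s.order_total <= s.customer_credit,
    }

    def decide(scenario):
        """Return the rules this scenario violates (empty list = allowed)."""
        return [name for name, ok in RULES.items() if not ok(scenario)]

    print(decide(Scenario(items=2, order_total=120.0, customer_credit=100.0)))
    # -> ["an order total may not exceed the customer's credit"]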
You should
be able to look at one (before or after the code is written) and convince
yourself that it addresses every contingency
It's a great goal, but it is *in principle* impossible. Correctness of every contingency requires that the model matches the "real world", and Gödel's theorem shows that it's impossible to prove that.
Again, why the resistance to adopting such a "codified" approach?
Hubris, mostly. People genuinely don't see the need until enough experience has humbled them, and by then, their accumulated caution and tentativeness mean their industry sees them as dinosaurs.
"Hubris" suggests overconfidence (in their abilities). I'm not sure
that's the case.
I started looking for a "common ground" in which I could express my current design
when I "discovered" that, not only is there no such thing,
I've been looking back over the past experiences I've had with getting
folks to "move forward" to try to see if I can identify the issue that
leads to this resistance.
"Every man's way is right in his own eyes" - Proverbs 21:2
People can't see the flaws in their own logic, because their logic is flawed. They resist methodical attempts to correct them, because they're already "right".
I've
never been in an environment where time was allotted to explore
and learn new tools and techniques;
I know a number of companies that implement "10% time", for employees to explore any technology or personal projects they feel might be relevant to the
business (or to their ability to contribute to it). I think Google is one of these, in fact, though my examples are closer to home.
For things like specification and modeling, I can see a perception of
it as being a duplication of effort -- esp the more formally those
are expressed.
Unfortunately a lot of companies view testing in the same way. As if it wasn't
possible for them to make a mistake.
And, for some, I think an amazing lack of curiosity can explain their
clinging to The Way We Always Did It. Or, the lack of "compensation"
for the "risk" they may be taking ("Heads, you win; tails, I lose!")
Folk who have been "lucky" a few times tend to become the "golden child" and get promoted. Once in senior positions they're much more likely to reject techniques which could discover that "the emperor has no clothes".
I have a lot of respect for my colleagues -- they've all got
proven track records with significant projects. Yet, they still
fall into this pattern of behavior -- clinging to past processes
instead of exploring new.
I look forward to hearing of your experiences with TLA+, Alloy, or CQL.
I promise that it will be worth your effort.
On 5/15/2021 6:41 PM, Clifford Heath wrote:
On 16/5/21 5:26 am, Don Y wrote:
On 5/15/2021 3:54 AM, Clifford Heath wrote:
On 15/5/21 7:10 pm, Don Y wrote:
Unfortunately, use in such a document is not suited for "general
audiences"
The goal of CQL is to make the formal model suitable for (and
expressed in the language of) anyone generally familiar with the
domain being modelled.
Actually rationale is seldom needed. What is needed is an example of
the scenario that is allowed or disallowed by each definition. The
example is almost always an adequate rationalisation.
I disagree. I find many cases where I need to resort to prose to
explain why THIS implementation choice is better than THAT.
Yes, where that is needed, CQL provides context-notes too. In fact I
select four specific categories of rationale ("so that", "because",
"as opposed to" and "to avoid"), with optional annotations saying who
agreed and when.
I present my descriptions in "multimedia prose" so I can illustrate
(and animate) to better convey the intent.
Even without verification, the model must only make rules that are *in
principle* machine-verifiable. If a rule is not verifiable, it's just
waffle with no single logical meaning. If it has a single logical
meaning, it is in principle machine-verifiable.
Yes. But *getting* such a tool (or requiring its existence before adopting a model strategy) can be prohibitive.
There's always a possibility for more detail in any real-world system.
Again, why the resistance to adopting such a "codified" approach?
Hubris, mostly. People genuinely don't see the need until enough
experience has humbled them, and by then, their accumulated caution
and tentativeness mean their industry sees them as dinosaurs.
"Hubris" suggests overconfidence (in their abilities). I'm not sure
that's the case.
I started looking for a "common ground" in which I could express my
current design when I "discovered" that, not only is there no such thing,
You've missed the point. I was addressing the point that modeling, itself, is relatively uncommon (at least in "these circles"). So, trying to find some common subset that everyone (most practitioners) was using was pointless.
If no one speaks Esperanto, then it is foolish to present your
documentation *in* Esperanto!
When I approached my colleagues re: this topic, no one tried to
defend their current practices. I think they all realize they
*could* be doing things "better". This led to my contemplation of
why they *aren't* moving in those directions.
I know a number of companies that implement "10% time",
Yes, but 10% of 40 hours isn't a helluvalot of time.
I look forward to hearing of your experiences with TLA+, Alloy, or CQL.
I promise that it will be worth your effort.
It's unlikely that I will try any of them! This is my last "electronics" project (50 years in a field seems like "enough"; time to pursue OTHER interests!)
On 16/5/21 5:35 pm, Don Y wrote:
On 5/15/2021 6:41 PM, Clifford Heath wrote:
On 16/5/21 5:26 am, Don Y wrote:
On 5/15/2021 3:54 AM, Clifford Heath wrote:
On 15/5/21 7:10 pm, Don Y wrote:
Unfortunately, use in such a document is not suited for "general audiences"
The goal of CQL is to make the formal model suitable for (and expressed in
the language of) anyone generally familiar with the domain being modelled.
Actually rationale is seldom needed. What is needed is an example of the scenario that is allowed or disallowed by each definition. The example is almost always an adequate rationalisation.
I disagree. I find many cases where I need to resort to prose to
explain why THIS implementation choice is better than THAT.
Yes, where that is needed, CQL provides context-notes too. In fact I select
four specific categories of rationale ("so that", "because", "as opposed to"
and "to avoid"), with optional annotations saying who agreed and when.
I present my descriptions in "multimedia prose" so I can illustrate
(and animate) to better convey the intent.
All that is needed is to allow the viewer to get to the "Aha!" moment where they see why the alternative will fail. Do whatever you have to do to achieve that, and you have your rationale. It will vary with the situation, and with the audience.
Again, why the resistance to adopting such a "codified" approach?
Hubris, mostly. People genuinely don't see the need until enough
experience has humbled them, and by then, their accumulated caution and
tentativeness mean their industry sees them as dinosaurs.
"Hubris" suggests overconfidence (in their abilities). I'm not sure
that's the case.
I started looking for a "common ground" in which I could express my current
design when I "discovered" that, not only is there no such thing,
You've missed the point. I was addressing the point that modeling, itself,
is relatively uncommon (at least in "these circles"). So, trying to find
some common subset that everyone (most practitioners) was using was
pointless.
If no one speaks Esperanto, then it is foolish to present your documentation
*in* Esperanto!
In relation to software, at least, every modeling language has been a private language shared only (or mainly) by systems architects. They're all Esperanto,
of one kind or another. (ER diagramming has sometimes been useful, and occasionally even UML, but usually not)
As such, it serves only for documenting the system *as designed*, and can
provide no help to a non-expert in identifying flaws where the result would not
match the need (as opposed to not working correctly within its own frame of reference).
Because "modelling" has always been subject to this failure, it is seen as a pointless exercise. Delivered software is likely to work "as designed" yet still mis-match the problem because the design was mis-targeted, whether it was
modelled or was not.
The solution to this is good modelling tools that can communicate to *everyone*, not just to programmers. And that's what I spent a decade trying to
build.
When I approached my colleagues re: this topic, no one tried to
defend their current practices. I think they all realize they
*could* be doing things "better". This led to my contemplation of
why they *aren't* moving in those directions.
It's easier to "stay in town" where it's comfortable, than to go exploring in the wilderness. It's wilderness because there is no adoption, and there's no adoption not because no-one has been there, but because they didn't build roads
and towns there yet.
I know a number of companies that implement "10% time",
Yes, but 10% of 40 hours isn't a helluvalot of time.
Half a day a week seems like quite a lot to me. If an employee doing that can't
prove to me that they've discovered something worthy of a bigger investment, then I don't believe they have.
At one company, I spent 18 months trying to evangelise the business need for a
new technology I'd envisaged. Eventually someone told my manager "just give him
2 weeks to prototype it!", and I did. The effect was astounding - the entire
company (110 people) dropped everything else they were doing to work on producing a new product based on my idea. That product brought in well over a hundred million dollars over the next decade.
I look forward to hearing of your experiences with TLA+, Alloy, or CQL.
I promise that it will be worth your effort.
It's unlikely that I will try any of them! This is my last "electronics"
project (50 years in a field seems like "enough"; time to pursue OTHER
interests!)
And that is why I'm sometimes reluctant to engage in these conversations with you, Don. You're the one asking "why does nobody try this", but even now you
have time to explore (without the demands of delivering a result), you're
unwilling to do more than talk about that.
On 6/1/2021 7:03 PM, Clifford Heath wrote:
On 16/5/21 5:35 pm, Don Y wrote:
On 5/15/2021 6:41 PM, Clifford Heath wrote:
Yes, where that is needed, CQL provides context-notes too
I present my descriptions in "multimedia prose" so I can illustrate
(and animate) to better convey the intent.
All that is needed is to allow the viewer to get to the "Aha!" moment
where they see why the alternative will fail. Do whatever you have to
do to achieve that, and you have your rationale. It will vary with the
situation, and with the audience.
I've found that people tend to "lack imagination" when it comes to
things in which they aren't DEEPLY invested.
As such, it serves only for documenting the system *as designed*, and can
That's exactly my goal. I'm looking to "translate" my design into
form(s) that may be more easily recognizable -- to some "significant"
group of readers.
provide no help to a non-expert in identifying flaws where the result
would not match the need (as opposed to not working correctly within
its own frame of reference).
Different goal.
The solution to this is good modelling tools that can communicate to
*everyone*, not just to programmers. And that's what I spent a decade
trying to build.
You're trying to solve a different problem than I.
I agree with your points -- in *that* application domain.
I don't think "ease" or "comfort" is the issue. Many of my colleagues
are involved with very novel designs and application domains. They
are almost always "trekking in the wilderness".
But, they aren't likely to add additional uncertainty to their efforts
I look forward to hearing of your experiences with TLA+, Alloy, or CQL.
I promise that it will be worth your effort.
It's unlikely that I will try any of them! This is my last "electronics"
project (50 years in a field seems like "enough"; time to pursue OTHER
interests!)
And that is why I'm sometimes reluctant to engage in these
conversations with you, Don. You're the one asking "why does nobody
try this", but even now you
You've misread my intent. I am not ACCUSING anyone. Rather, I am
trying to see how people have AVOIDED "doing this". How are folks
designing products if they aren't writing specifications, modeling
behaviors, quantifying market requirements, etc.? (if they ARE doing
those things, then WHICH are they doing??)
have time to explore (without the demands of delivering a result), you're
unwilling to do more than talk about that.
Who says I don't have to "deliver a result"?
Who says I've not "ventured into the uncharted wilderness"?
All of these are "wilderness" areas. All of them took time to research, implement and evaluate.
So, I guess I have a different idea of "having time to explore" than
you do!
"It's unlikely that I will try any of them! This is my last
"electronics" project (50 years in a field seems like "enough""
I don't think "ease" or "comfort" is the issue. Many of my colleagues
are involved with very novel designs and application domains. They
are almost always "trekking in the wilderness".
But, they aren't likely to add additional uncertainty to their efforts
And there's the nub. They see these "unproven" technologies as adding uncertainty, where the reality is that they exist to *reduce* uncertainty. The
truth? They're afraid the formal analysis will show everyone how wrong they are, and have always been.
I look forward to hearing of your experiences with TLA+, Alloy, or CQL.
I promise that it will be worth your effort.
It's unlikely that I will try any of them! This is my last "electronics"
project (50 years in a field seems like "enough"; time to pursue OTHER
interests!)
And that is why I'm sometimes reluctant to engage in these conversations
with you, Don. You're the one asking "why does nobody try this", but even
now you
You've misread my intent. I am not ACCUSING anyone. Rather, I am trying to
see how people have AVOIDED "doing this". How are folks designing products if
they aren't writing specifications, modeling behaviors, quantifying market
requirements, etc.? (if they ARE doing those things, then WHICH are they
doing??)
Good question. I agree with your thoughts. But that has nothing to do with what
I was getting at.
have time to explore (without the demands of delivering a result), you're
unwilling to do more than talk about that.
Who says I don't have to "deliver a result"?
Who says I've not "ventured into the uncharted wilderness"?
No-one. Certainly not me. But you've said you're effectively retiring.
To me that means you don't have to "deliver a result" any more.
But it also means that you can take even more time to explore, if you like to go even further from the well-trodden paths, so I suggested a few interesting paths that you might like to explore in your dotage. Perhaps learn something that you could communicate to help less experienced folk. So I was a bit surprised when you said you were unlikely to do that.
Instead you blather on about the many things in your past.
Guess what? Lots of
us have many varied experiences in our past. You'd be surprised how widely my interests have strayed off the beaten tracks - I'm also a life-long explorer. But this is not about me, or about you, it's about how the industries we work in could do even better than we could, and what we could yet learn to help them
do that. I'd still like to add to my legacy, and I hope you would too.
Not just raise a long discussion leading to interesting areas to explore, then
announce that it was all useless because you don't intend to explore any more.
All of these are "wilderness" areas. All of them took time to research,
implement and evaluate.
So, I guess I have a different idea of "having time to explore" than
you do!
But that was all in the past. Apparently you no longer have the desire to explore. That seems to be what you said anyhow. Correct me if I've misread that:
"It's unlikely that I will try any of them! This is my last
"electronics" project (50 years in a field seems like "enough""
On 6/2/2021 5:39 AM, Clifford Heath wrote:
No-one. Certainly not me. But you've said you're effectively retiring.
To me that means you don't have to "deliver a result" any more.
OTOH, if "everyone" was expressing designs in some particular
structured form, then it would have value to THEM (and thus, to me)
if I invested the time to learn enough to be able to translate
my existing designs into such form.
My "legacy" is the products that I've defined and the people I've influenced. I have no desire to immortalize myself in any of these
things; the *things* are the legacy, not my role in their creation.
E.g., I am hoping that demonstrating how you can approach a (UI) design differently can eliminate "ability-bias" in the way those interfaces
are designed.
I have no desire to go on a "speaking circuit" to
pitch the approach. Few people will actually see/interact with
my end result.
No, that's all part of *THIS* project. Note the sheer number of "new" techniques/technologies that I listed. Then, tell me I have no desire to explore!
On 3/6/21 2:01 am, Don Y wrote:
On 6/2/2021 5:39 AM, Clifford Heath wrote:
No-one. Certainly not me. But you've said you're effectively retiring.
To me that means you don't have to "deliver a result" any more.
OTOH, if "everyone" was expressing designs in some particular
structured form, then it would have value to THEM (and thus, to me)
if I invested the time to learn enough to be able to translate
my existing designs into such form.
See my comment on SysML at the bottom.
My "legacy" is the products that I've defined and the people I've
influenced. I have no desire to immortalize myself in any of these
things; the *things* are the legacy, not my role in their creation.
Good for you. You have my respect for your work.
For myself, I feel that demonstrating *how* I worked is potentially more beneficial, even though many fewer people are directly impacted. I hold the principle that it is more beneficial to invent a better shovel than to dig another hole. So to talk with those few about *how* I did what I did (and to ruminate on how I might have done better) is a worthwhile activity. Not valuable to me of course, but to them and those who will benefit from the things they create.
E.g., I am hoping that demonstrating how you can approach a (UI) design
differently can eliminate "ability-bias" in the way those interfaces
are designed.
Without knowing anything about how you've done that, I suspect a lot of it was
already studied in UIMS work and implemented in products like Apollo's "Dialog
Manager" during the 1970s and 80s. We implemented many of these ideas (along with many of my own) in our startup product OpenUI in the 1990s... ideas which
still haven't been widely adopted. Yes, novel user interface technology made for the
most productive decade of my life so far. There's plenty of room for improvement still!
It's good to explore "new" territory, but it's also good to firstly plan the journey by studying the prior art - just in case it's not new territory after all.
I have no desire to go on a "speaking circuit" to
pitch the approach. Few people will actually see/interact with
my end result.
I did the speaking circuit for a few years, and I think that might be enough for me. I don't have the motivation to become an industry fashionista (which is
all that most such people are). However, the result is that hundreds of people
are implementing some of my ideas and spreading them to other people, especially in Europe.
No, that's all part of *THIS* project. Note the sheer number of "new"
techniques/technologies that I listed. Then, tell me I have no desire to
explore!
Ok, fair enough! Formal methods are probably a step too far. You might get some
benefit from a brief look at SysML, which I believe is very closely aligned to
your needs. I believe SysML v2.0 was released just today.