• Re: ZFC solution to incorrect questions: reject them

    From Richard Damon@21:1/5 to olcott on Wed Mar 13 22:30:15 2024
    XPost: comp.theory, sci.logic

    On 3/13/24 10:20 PM, olcott wrote:
    On 3/13/2024 1:16 PM, Ross Finlayson wrote:
    On 03/12/2024 09:00 PM, olcott wrote:
    On 3/12/2024 10:49 PM, Ross Finlayson wrote:
    On 03/12/2024 08:23 PM, Ross Finlayson wrote:
    On 03/12/2024 07:52 PM, olcott wrote:
    On 3/12/2024 9:28 PM, Richard Damon wrote:
    On 3/12/24 4:31 PM, olcott wrote:
    On 3/12/2024 6:11 PM, Richard Damon wrote:
    On 3/12/24 3:53 PM, olcott wrote:
    On 3/12/2024 5:30 PM, Richard Damon wrote:
    On 3/12/24 2:34 PM, olcott wrote:
    On 3/12/2024 4:23 PM, Richard Damon wrote:
    On 3/12/24 1:11 PM, olcott wrote:
    On 3/12/2024 2:40 PM, Richard Damon wrote:
    On 3/12/24 12:02 PM, olcott wrote:
    On 3/12/2024 1:31 PM, immibis wrote:
    On 12/03/24 19:12, olcott wrote:
    ∀ H ∈ Turing_Machine_Deciders
∃ TMD ∈ Turing_Machine_Descriptions |
Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

There is some input TMD to every H such that
Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)
    And it can be a different TMD to each H.

When we disallow decider/input pairs that are incorrect
questions where both YES and NO are the wrong answer

Once we understand that either YES or NO is the right
answer, the whole rebuttal is tossed out as invalid and
incorrect.


Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does not halt
BOTH YES AND NO ARE THE WRONG ANSWER FOR EVERY Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩

No, because a given H will only go to one of the answers. THAT
will be wrong, and the other one right.
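The Ĥ template above can be sketched concretely in Python. The names here (make_pathological, candidate_h) are illustrative, not from the thread; the point is only that for any one fixed decider, the constructed input refutes that decider:

```python
# Sketch of the diagonal construction behind the H-hat template.
# A candidate halt decider is modeled as a Python callable taking a
# program; all names below are illustrative assumptions.

def make_pathological(h):
    """Return a program d that does the opposite of whatever h
    predicts about d itself (the H-hat construction)."""
    def d():
        if h(d):          # h predicts "d halts"...
            while True:   # ...so d loops forever
                pass
        return None       # h predicts "d loops", so d halts
    return d

def candidate_h(program):
    """A (necessarily fallible) toy candidate: guesses every
    program halts."""
    return True

d = make_pathological(candidate_h)
prediction = candidate_h(d)  # True, yet actually running d() would never halt
```

Flipping the candidate's return value does not help: a candidate that always answers "loops" is refuted by its own pathological input, which then simply halts.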


    ∀ H ∈ Turing_Machine_Deciders
    ∃ TMD ∈ Turing_Machine_Descriptions  |
Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

Not exactly. A pair of otherwise identical machines
(that are contained within the above specified set)
that only differ by return value will both be wrong on the
same pathological input.

You mean a pair of DIFFERENT machines. Any difference is
different.

Every decider/input pair (referenced in the above set) has a
corresponding decider/input pair that only differs by the return
    value of its decider.

    Nope.

    ∀ H ∈ Turing_Machines_Returning_Boolean
    ∃ TMD ∈ Turing_Machine_Descriptions  |
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

    Every H/TMD pair (referenced in the above set) has a
    corresponding H/TMD pair that only differs by the return
    value of its Boolean_TM.

    That isn't in the set above.


That both of these H/TMD pairs get the wrong answer proves that
their question was incorrect because the opposite answer to the
same question is also proven to be incorrect.


    Nope, since both aren't in the set selected.


When they are deciders that must get the correct answer, both
of them are not in the set.

*IF* they are correct deciders.

When we select from all Turing Machine Deciders, there is no
    requirement that any of them get any particular answer right.

So, ALL deciders are in the set that we cycle through and apply the
following logic to ALL of them.

Each is then paired with an input that it will get wrong, and the
existence of that input is what was just proven by the Ĥ template.


When they are Turing_Machines_Returning_Boolean then this
    set inherently includes identical pairs that only differ
    by return value.

But in the step of selecting an input that they will get wrong, they
will be given DIFFERENT inputs.


    You just don't understand what that statement is saying.

I've explained it, but it seems over your head.

    No the problem is that you are not paying attention.

No, you keep on making STUPID mistakes, like thinking that selecting
an input that the machine will get wrong needs to be the same for two
different machines.




For every H, we show we can find at least one input (chosen
    just for
    that machine) that it will get wrong.

When we use machine templates then we can see instances of
the same machine that only differ by return value, where both
get the wrong answer on the same input. By same input I mean
the same finite string of numerical values.


But if they returned different values, they will have different
descriptions.

Otherwise, how could a UTM get the right answer, since it only gets
the description?

We can get around all of this stuff by simply using this criteria:
Date 10/13/2022 11:29:23 AM
*MIT Professor Michael Sipser agreed this verbatim paragraph is correct*
(He has neither reviewed nor agreed to anything else in this paper)
(a) If simulating halt decider H correctly simulates its input D until H
correctly determines that its simulated D would never stop running
unless aborted then
(b) H can abort its simulation of D and correctly report that D
specifies a non-halting sequence of configurations.

    *When we apply this criteria* (elaborated above)
    Will you halt if you never abort your simulation?
    *Then the halting problem is conquered*

    When two different machines implementing this criteria
    get different results from identical inputs then we
    know that Pathological Self-Reference has been detected.
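The simulate-then-abort shape of criteria (a)/(b) can be sketched minimally. A fixed step budget stands in for H's (unspecified) non-halting detection; a budget is not a correct detector in general, so this only illustrates the shape, not a working halt decider:

```python
# Sketch of a simulating checker that aborts, in the spirit of the
# quoted criteria. "Would never stop running" is approximated here by
# a step budget, which can misclassify slow programs; illustrative only.

def simulating_check(program, max_steps=10_000):
    """Simulate `program` (a generator yielding once per step).
    True = halted within budget; False = aborted, reported non-halting."""
    steps = 0
    for _ in program():
        steps += 1
        if steps >= max_steps:
            return False   # abort the simulation, report non-halting
    return True            # the simulated program halted on its own

def halts_quickly():       # toy input that halts after 3 steps
    for _ in range(3):
        yield

def loops_forever():       # toy input that never halts
    while True:
        yield

print(simulating_check(halts_quickly))   # True
print(simulating_check(loops_forever))   # False (aborted)
```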

    We don't even need to know that for:
    *denial-of-service-attack detection*
    *NO always means reject as unsafe*


    But, Halting theorem never said "there's an input that halts
    all machines", it just says "for any machine, there's an input
    that halts it".

    Where "halt the machine" means "put it in an infinite loop".

    So, rather, Halting theorem never said "there's an input that
    exhausts all machines", it just says, "for any machine, there's
    an input that exhausts it".

I still don't see how that would be with infinite tapes though,
without a means of checking all the way to the right of the tape in one
step, i.e. that it's 1's or 0's or any pattern at all; any
input that is unbounded with respect to the machine basically
exhausts it, or the machine would run in the unbounded.


    Of course any finite tape, has a static analysis that is
    not infinite, that decides whether or not it halts
    (or, loops, or grows, the state space of the decider).

    Static analysis has to either enumerate or _infer_ the
    state-space, where equal values in what's determined
    the idempotent can detect loops, while inequalities
    or proven divergence, ..., can detect unbounded growth.
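The loop-detection idea (a deterministic system that revisits a configuration can never halt) can be sketched for a toy state-transition system; the names and the three-way classification are illustrative assumptions:

```python
# Static loop detection by recording visited states: a deterministic
# system that repeats a configuration is caught in an infinite loop.
# The toy "machine" is a function from state to next state.

def detect(step, state, halted, limit=100_000):
    """Classify a deterministic run as 'halts', 'loops', or 'grows'
    (the last meaning: no repeat within the budget)."""
    seen = set()
    for _ in range(limit):
        if halted(state):
            return "halts"
        if state in seen:
            return "loops"    # revisited configuration => never halts
        seen.add(state)
        state = step(state)
    return "grows"            # unbounded growth, or merely slow

# Count down to zero: halts.
assert detect(lambda s: s - 1, 5, lambda s: s == 0) == "halts"
# Flip between 0 and 1 forever: a detected loop.
assert detect(lambda s: 1 - s, 0, lambda s: False) == "loops"
# Strictly increasing: never repeats, classified as growth.
assert detect(lambda s: s + 1, 0, lambda s: False, limit=50) == "grows"
```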

    Now, proving convergence or divergence is its own kind
    of thing. For example, there are series that converge
    very slowly, and series that diverge very slowly. These
    are subtly intractable to analysis.

    Then the usual idea of progressions that rapidly grow
    yet return to what's detectable, are pathological to
    analysis, only in resources not correctness or completion,
    vis-a-vis the subtle intractability of the convergence or
    divergence what either halts, or loops, or grows unboundedly.

    Separately "not-halts" into "loops or grows unboundedly",
    has that really for most matters of interest of understanding
    the state-space of programs, is "halts or enters loops"
    and not "grows unboundedly".
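A familiar instance of a progression that rapidly grows yet returns is the Collatz map: simulation settles each individual input, so it is pathological only in resources, not in correctness; the step count is hard to bound in advance:

```python
# The Collatz map: trajectories can climb steeply before collapsing
# back to 1. Each instance is decided by simulation, but the number
# of steps needed is hard to bound ahead of time.

def collatz_steps(n):
    """Number of Collatz steps from n down to 1."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 111 steps, peaking at 9232 along the way
```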

I.e. if the Halting problem is basically subject to the
    subtle intractability of slow convergence, that otherwise
    it can just infer divergence and decide, practically
    it's sort of more relevant what the machine would be
    doing on the input on the tape, then with respect to
    beyond the Turing theory, of the state of the read-head,
    what happens when somebody modifies the tape, or events,
    the write-head.

    Anyways though for bounded inputs, besides slow divergence,
    it's to be made clear that _most all_ and _almost all_
    programs _are_ decided their behavior by static analysis.

    Though, "most all" and "almost all" might be a bit strong,
    but pretty much all that don't involve "the subtle intractability
    of slow divergence".



    Giving the idea that an existence result
    is in any way the expected result here
seems sort of the root of this dilemma.






    (Though that the real numbers in ZFC have a well-ordering
    and if they had a normal ordering that was a well-ordering,
    that would be a thing, because ZFC has a well-ordering of
    [0,1], but can't give one.)





    What I'm saying is that you'd need to solve all
    cases of slow convergence and slow divergence
    to have a correct and complete halt decider,
    for the unbounded, and finite, if not the infinite.

    Most though setups of convergence though
    would effectively exhaust according to the
    existence of criteria of convergence as modeling
    the computation, though, so, if you exclude
    slow convergence and slow divergence,
    then a subset of very usual machines is of
    course quite all decidable.  (Or decide-able,
    or as I put it once, "not deciddable".)



Well-ordering the reals in ZFC, that's its own
sort of issue - that a well-ordering implies the
    existence of a bijection from ordinals to a
    set, then as that as tuples, subsets of those
    are well-ordered by their ordinal part, then
    that if ever uncountably many of those are
    in normal order their real part and irrationals,
    then between those are rationals.  I.e.,
    ZFC doesn't give an example of well-ordering
    the reals, but there is one.


So, Peter, I think you're kind of misreading that
quote you cited as authoritative.  It just says "if static
    analysis detects a loop or unboundedly growing
    then it's not-halts".


    Of course, for any bounded resource, there
    are arbitrarily larger bounded resources
    required by what's called pathological,
    that an arbitrarily larger resource can though solve.



    So anyways "slow convergence" and "slow divergence",
    have that in mathematics, it's unknown for some series
    whether they converge or diverge, though it's usually
    accepted that the inverse powers of 2 converge and
    that the sum of integers diverges, and that periodic
    functions do neither.

    I.e. the criteria of convergence and existence of a limit,
    and the special case "diverges as going to infinity",
    are not yet closed in today's mathematics.
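The contrast between fast convergence and slow divergence is easy to see numerically: partial sums of the inverse powers of 2 reach their limit almost immediately, while the harmonic series diverges so slowly that no finite prefix betrays it. A small sketch, with partial sums standing in for the analytic criteria:

```python
# Partial sums: sum of 1/2^k converges quickly to 1, while the
# harmonic sum of 1/k diverges -- but so slowly (about ln n) that
# finite prefixes alone cannot reveal the divergence.

geometric = sum(1 / 2**k for k in range(1, 60))

def harmonic(n):
    """Partial sum 1 + 1/2 + ... + 1/n of the harmonic series."""
    return sum(1 / k for k in range(1, n + 1))

print(round(geometric, 12))           # 1.0 to 12 places after 59 terms
print(round(harmonic(1_000), 2))      # ~7.49
print(round(harmonic(1_000_000), 2))  # ~14.39: a million terms, still small
```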

    It's sort of a conjecture or independent whether
    they ever could be, closed, and complete, the criteria.



I have spent 20 years on the conventional halting problem proofs,
    Gödel 1931 Incompleteness and the Liar Paradox. I have only spent
    about seven years on Tarski Undefinability.

The Halting problem is the only one of these that has a formal system
that cannot have hidden gaps in its reasoning. If a machine can do it
    then it can be done.


    "Algorithmics" is a study of algorithms. The most usual
    consideration is "asymptotics", "asymptotics of complexity",
    with the idea that computing does work, that there are
    closures and completions, and there are approximations and
    attenuations, as it were, in the full and the final and
    the initial and the partial.

    The most usual resources are time, and space, that actions
    occur in time, and according to a structure, in space.

    Of course any CS grad pretty much knows that.


    The objects of mathematics, have it so, that the finite
    and bounded resources of any collection of digital and
    binary machines, are finite and bounded, while, the unbounded
    models of mathematics that represent them, are unbounded,
    it's bad that anybody ever said a Turing tape was "infinite",
    except that it's unbounded as an object of mathematics, and
    the model of any finite, digital, binary machine.

    One might aver "digital" has a complement in computing,
    "analog", which represents the continuous instead of
    the discrete, and one might aver "binary" has a complement,
    either "real-valued" or "fuzzy" or variously about the
    "multi-valent", in logic, the models, according to the
    mechanisms, the models, their mechanics, machines.

    The Turing machine though is a model of unbounded machine
    as what any thusly simply mechanical model can implement,
    up to its bounds.


    So, the objects of mathematics, then involve analytical
    results. I can't draw a circle, though a compass may
    portray a diagram arbitrarily close, and no different.
    It's the analytical results, that make so for why
    mathematics is: "sensible, fungible, and tractable",
    and for "the ubiquitous success of mathematics" in
    all such matters of physics, which is a continuum mechanics,
    and as well discrete mechanics.

Of course, results exist in mathematical objects,
after the unbounded and complete,
    which is why closures and completions are so
    relevant to all such matters, vis-a-vis approximations
    and attenuations, which in bounded terms result
    indistinguishably non-different results,
    up to precision, say, or tolerances.

    So, "machine" might be discrete, binary, and digital,
    or it might be continuous, multivalent, and analog.

    The Turing machine in a concrete form, is bounded.

    Similar models, here mostly for the stack models,
    stack machines, among finite-state-machines then
    for example concepts like "unbounded nondeterminism",
    here is pretty much about determinism, about the
    defined behavior of a machine according to a
    mathematical model that given defined behavior
    of the mechanisms results a mechanical model of
    a mathematical model, such a theory as this.

    So, the theory of computing, with mechanical or
    computational models, models of computing, is
    pretty much exactly like the theory of physics,
    with the attachment of physical models to the
    mathematical models, and rather, "interpreted as",
    the models, the models, with respect to each other.

    I.e. the extensionality that so follows "they're
    equal, or same, or not different, they represent
    each other, they're equi-interpretable, they model
    each other", the models, of logical and mathematical
    and computational and mechanical and physical models,
    help represent that over all the entire thing there
    is a usual theory for the entire thing, it's a theory
    where model theory models extensionality, and in identity
    intensionality, about equality, tautology, identity,
    and sameness/difference, and nearness/farness, and all
    these usual aspects of the mathematical models'
    arithmetic, algebra, and geometry. (And function
    theory and topology, with respect to categories and
    types, sets and parts, relations and predicates,
    then all the model-building among that which is
    all the propositional or the stipulated or axiomatic.)

    The idea is that this sort of platonistic universe of
    mathematical objects has always existed, and it's just
    discovered, then that axiomatics for example, just
    sort of results a model theory of it, with regards
    to models, the modular, modulation, modality, and the mode.


    So, a machine, with respect to computation, establishing
    the validity of interpretations of models, is subject to
    filling out all the above, vis-a-vis what it can do,
    and it can-do.


    Then, "slowly converging" and "slowly diverging"
    are examples that get get into that there are more
    law(s) of large numbers, than the usual classical
    law of large numbers.

    Some people variously do or don't have a mathematical
    model of larger numbers, or the infinite, at all.
    It's a totally ancient and dogmatic tradition that
    no, we finite peoples, persons, or agents of limited
    and finite means, no we do not have discrete, digital,
    binary mechanical models of the infinite.

    Yet, it's also about the first thing that deduction
    arrives at just from the plain contemplation of the
    beyond the unbounded, like the theories of Democritus
    and Atomism or the theories of Zeno and Continuity,
    that make it so we at least have something to arrive
    at for models of the unbounded, as "methods of exhaustion",
    the approximation and attenuative what results
    the asymptotics of algorithmics.

    So, we do have mental models of the continuous and infinite.
    And, where physics is a continuum mechanics,
    there are physical models of it as well, then
    with regards to the philosophy of science and
    the scientific method and the Big Science and
    from the Primary Science, lots to establish
    that the continuous and analog has mathematical
results, as whether "sensible, fungible, and tractable",
    to machines at all.

    Numerical methods then, and, approximation and
    attenuation, result from analysis, why they exist,
    here for the criteria of convergence and existence
    of limits, in the closed, in the complete.


    So, a metonymy, unless it's "the Strong Metonymy",
    which is utter intensionality of "the ground model",
    is yet a metaphor and a metaphor yet a weaker metonymy
    joint among weaker metonymys, that, ontology, has
    that there are or may be a completion, of ontology,
    as a "strong ontology" and "strong metonymy",
    but otherwise it's just a bag of facts.



    Here then there's certainly a perceived analytical
    development Cantor space, and, Square Cantor space,
    and, Signal Cantor space as it were, about that among
    the law(s) of large numbers, there are definitions of
    continuity, at least including field-reals, line-reals,
    and signal-reals, three sorts continuous domains.


    Mathematics owes physics this to better begin to
    explain, to disclose, to find truths of, continuum mechanics.


    Then that analog, fuzzy, multi-value, "quantum" computing
    models (parts) of that, that discrete, binary, digital
    computing does not, is a thing.



    The convergence of terms in series is an example
    of a model of things, that, sometimes has clear
    and direct and perfectly accurate representations,
    models, in stack machine, and sometimes doesn't.


    Of course, for any bounded input, there is a sufficiently
    larger bounded analyzer, that does solve it. So, why not
    just put those all together? It would be infinite,
    yet most things are not. (Or, you know, are.)


    It's called "foundations" the study of all these things
    as a fundamental coherency, a thing together, while of
    its parts.

    So, students should well have the concepts of "abacus"
    and "slide rule" and other "tabulators and computers"
    of the usual manual mechanical variety down, according
    to the principles of the mathematical models behind them,
    then introducing the law(s) of large numbers, then
    introducing the idea of a standard model of an infinite,
    about rates, related rates, asymptotics, and bounds.


    "Foundations" then the idea is that it's been around
    forever, it's our dogma and from our canon and it develops
    over time, and the current edition is called "modern".

    Did I make any mistakes?

    Tarski anchors his proof in the Liar Paradox https://liarparadox.org/Tarski_247_248.pdf

    How Tarski encodes the Liar Paradox
    x ∉ True if and only if p
    where the symbol 'p' represents the whole sentence x

    This is transformed into line 01 of his proof
    by replacing True with Provable

    *Here is the Tarski Undefinability Theorem proof*
    (1) x ∉ Provable if and only if p   // assumption
    (2) x ∈ True if and only if p       // assumption
    (3) x ∉ Provable if and only if x ∈ True.
(4) either x ∉ True or x̄ ∉ True;    // axiom: ~True(x) ∨ ~True(~x)
(5) if x ∈ Provable, then x ∈ True; // axiom: Provable(x) → True(x)
    (6) if x̄ ∈ Provable, then x̄ ∈ True;  // axiom: Provable(~x) → True(~x)
    (7) x ∈ True
    (8) x ∉ Provable
    (9) x̄ ∉ Provable
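The compressed steps (7)-(9) follow from (1)-(6) as below (a reconstruction from the listed lines, not Tarski's own wording):

```latex
% (8): suppose x \in \text{Provable}. Then x \in \text{True} by (5);
% but (3) gives x \in \text{True} \leftrightarrow x \notin \text{Provable},
% a contradiction. Hence:
x \notin \text{Provable} \tag{8}
% (7): from (8) and (3):
x \in \text{True} \tag{7}
% (9): from (7) and (4), \bar{x} \notin \text{True};
% the contrapositive of (6) then yields:
\bar{x} \notin \text{Provable} \tag{9}
```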

    Tarski's Full proof
    https://liarparadox.org/Tarski_275_276.pdf




Except that (1) and (2) are not "assumptions", but are sentences that he
shows can be constructed in the meta-theory, not "assumed".

    You don't seem to understand the difference.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Thu Mar 14 15:01:28 2024
    XPost: comp.theory, sci.logic

    On 3/14/24 12:01 PM, olcott wrote:
    On 3/14/2024 11:58 AM, Ross Finlayson wrote:
    On 03/13/2024 10:20 PM, olcott wrote:
    On 3/13/2024 1:16 PM, Ross Finlayson wrote:
    On 03/12/2024 09:00 PM, olcott wrote:
    On 3/12/2024 10:49 PM, Ross Finlayson wrote:
    On 03/12/2024 08:23 PM, Ross Finlayson wrote:
    On 03/12/2024 07:52 PM, olcott wrote:
    On 3/12/2024 9:28 PM, Richard Damon wrote:
    On 3/12/24 4:31 PM, olcott wrote:
    On 3/12/2024 6:11 PM, Richard Damon wrote:
    On 3/12/24 3:53 PM, olcott wrote:
    On 3/12/2024 5:30 PM, Richard Damon wrote:
    On 3/12/24 2:34 PM, olcott wrote:
    On 3/12/2024 4:23 PM, Richard Damon wrote:
    On 3/12/24 1:11 PM, olcott wrote:
    On 3/12/2024 2:40 PM, Richard Damon wrote:
    On 3/12/24 12:02 PM, olcott wrote:
    On 3/12/2024 1:31 PM, immibis wrote:
    On 12/03/24 19:12, olcott wrote:
    ∀ H ∈ Turing_Machine_Deciders
    ∃ TMD ∈ Turing_Machine_Descriptions  | >>>>>>>>>>>>>>>>>>>> Predicted_Behavior(H, TMD) != Actual_Behavior(TMD) >>>>>>>>>>>>>>>>>>>>
    There is some input TMD to every H such that >>>>>>>>>>>>>>>>>>>> Predicted_Behavior(H, TMD) != Actual_Behavior(TMD) >>>>>>>>>>>>>>>>>>>
    And it can be a different TMD to each H. >>>>>>>>>>>>>>>>>>>
    When we disallow decider/input pairs that are incorrect >>>>>>>>>>>>>>>>>>>> questions where both YES and NO are the wrong answer >>>>>>>>>>>>>>>>>>>
    Once we understand that either YES or NO is the right >>>>>>>>>>>>>>>>>>> answer, the whole rebuttal is tossed out as invalid and >>>>>>>>>>>>>>>>>>> incorrect.


    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩
    halts
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩
    does
    not halt
    BOTH YES AND NO ARE THE WRONG ANSWER FOR EVERY Ĥ.H ⟨Ĥ⟩ >>>>>>>>>>>>>>>>>> ⟨Ĥ⟩

    No, because a given H will only go to one of the answers. >>>>>>>>>>>>>>>>> THAT
    will be wrong, and the other one right.


    ∀ H ∈ Turing_Machine_Deciders
    ∃ TMD ∈ Turing_Machine_Descriptions  |
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD) >>>>>>>>>>>>>>>>
    Not exactly. A pair of otherwise identical machines that >>>>>>>>>>>>>>>> (that are contained within the above specified set) >>>>>>>>>>>>>>>> only differ by return value will both be wrong on the >>>>>>>>>>>>>>>> same pathological input.

    You mean a pair of DIFFERENT machines. Any difference is >>>>>>>>>>>>>>> different.

    Every decider/input pair (referenced in the above set) has a >>>>>>>>>>>>>> corresponding decider/input pair that only differs by the >>>>>>>>>>>>>> return
    value of its decider.

    Nope.

    ∀ H ∈ Turing_Machines_Returning_Boolean
    ∃ TMD ∈ Turing_Machine_Descriptions  |
    Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

    Every H/TMD pair (referenced in the above set) has a
    corresponding H/TMD pair that only differs by the return >>>>>>>>>>>> value of its Boolean_TM.

    That isn't in the set above.


    That both of these H/TMD pairs get the wrong answer proves that >>>>>>>>>>>> their question was incorrect because the opposite answer to the >>>>>>>>>>>> same question is also proven to be incorrect.


    Nope, since both aren't in the set selected.


    When they are deciders that must get the correct answer both >>>>>>>>>> of them are not in the set.

    *IF* they are correct decider.

    WHen we select from all Turing Machine Deciders, there is no >>>>>>>>> requirement that any of them get any particular answer right. >>>>>>>>>
    So, ALL deciders are in the set that we cycle through and apply >>>>>>>>> the
    following logic to ALL of them.

    Each is them paired with an input that it will get wrong, and the >>>>>>>>> existance of the input was what as just proven, the ^ template >>>>>>>>>

    When they are Turing_Machines_Returning_Boolean the this
    set inherently includes identical pairs that only differ
    by return value.

    But in the step of select and input that they will get wrong, they >>>>>>>>> will be givne DIFFERENT inputs.


    You just don't understand what that statement is saying. >>>>>>>>>>>
    I've expalined it, but it seems over you head.

    No the problem is that you are not paying attention.

    No, you keep on making STUPID mistakes, like thinking that
    select a
    input that the machine will get wrong needs to be the same for two >>>>>>>>> differnt machines.




    For Every H, we show we can find at least one input (chosen >>>>>>>>>>> just for
    that machine) that it will get wrong.

    When we use machine templates then we can see instances of >>>>>>>>>> the same machine that only differs by return value where both >>>>>>>>>> get the wrong answer on the same input. By same input I mean >>>>>>>>>> the same finite string of numerical values.


    But if they returned differnt values, they will have different >>>>>>>>> descriptions.

    Otherwise, how could a UTM get the right answer, since it only >>>>>>>>> gets
    the description.

    We can get around all of this stuff by simply using this criteria: >>>>>>>> Date 10/13/2022 11:29:23 AM
    *MIT Professor Michael Sipser agreed this verbatim paragraph is >>>>>>>> correct*
    (He has neither reviewed nor agreed to anything else in this paper) >>>>>>>> (a) If simulating halt decider H correctly simulates its input D >>>>>>>> until H
    correctly determines that its simulated D would never stop running >>>>>>>> unless aborted then
    (b) H can abort its simulation of D and correctly report that D >>>>>>>> specifies a non-halting sequence of configurations.

    *When we apply this criteria* (elaborated above)
    Will you halt if you never abort your simulation?
    *Then the halting problem is conquered*

    When two different machines implementing this criteria
    get different results from identical inputs then we
    know that Pathological Self-Reference has been detected.

    We don't even need to know that for:
    *denial-of-service-attack detection*
    *NO always means reject as unsafe*


    But, Halting theorem never said "there's an input that halts
    all machines", it just says "for any machine, there's an input
    that halts it".

    Where "halt the machine" means "put it in an infinite loop".

    So, rather, Halting theorem never said "there's an input that
    exhausts all machines", it just says, "for any machine, there's
    an input that exhausts it".

    I still don't see how that would be with infinite tapes though,
    without a means of checking all the way right the tape in one
    step, i.e. that it's 1's or 0's or any pattern at all, any
    input that unbounded with respect to the machine basically
    exhausts it or where the machine would run in the unbounded.


    Of course any finite tape, has a static analysis that is
    not infinite, that decides whether or not it halts
    (or, loops, or grows, the state space of the decider).

    Static analysis has to either enumerate or _infer_ the
    state-space, where equal values in what's determined
    the idempotent can detect loops, while inequalities
    or proven divergence, ..., can detect unbounded growth.

    Now, proving convergence or divergence is its own kind
    of thing. For example, there are series that converge
    very slowly, and series that diverge very slowly. These
    are subtly intractable to analysis.

    Then the usual idea of progressions that rapidly grow
    yet return to what's detectable, are pathological to
    analysis, only in resources not correctness or completion,
    vis-a-vis the subtle intractability of the convergence or
    divergence what either halts, or loops, or grows unboundedly.

    Separately "not-halts" into "loops or grows unboundedly",
    has that really for most matters of interest of understanding
    the state-space of programs, is "halts or enters loops"
    and not "grows unboundedly".

    I.e. if the Halting problem is basically subject the
    subtle intractability of slow convergence, that otherwise
    it can just infer divergence and decide, practically
    it's sort of more relevant what the machine would be
    doing on the input on the tape, then with respect to
    beyond the Turing theory, of the state of the read-head,
    what happens when somebody modifies the tape, or events,
    the write-head.

    Anyways though for bounded inputs, besides slow divergence,
    it's to be made clear that _most all_ and _almost all_
    programs _are_ decided their behavior by static analysis.

    Though, "most all" and "almost all" might be a bit strong,
    but pretty much all that don't involve "the subtle intractability >>>>>>> of slow divergence".



    Giving the idea that an existence result
    is in any way the expected result here
    seems sort of the root of this dilem-na.






    (Though that the real numbers in ZFC have a well-ordering
    and if they had a normal ordering that was a well-ordering,
    that would be a thing, because ZFC has a well-ordering of
    [0,1], but can't give one.)





    What I'm saying is that you'd need to solve all
    cases of slow convergence and slow divergence
    to have a correct and complete halt decider,
    for the unbounded, and finite, if not the infinite.

    Most though setups of convergence though
    would effectively exhaust according to the
    existence of criteria of convergence as modeling
    the computation, though, so, if you exclude
    slow convergence and slow divergence,
    then a subset of very usual machines is of
    course quite all decidable.  (Or decide-able,
    or as I put it once, "not deciddable".)



    Well-ordering the reals in ZFC, that's it own
    sort issue - that a well-ordering implies the
    existence of a bijection from ordinals to a
    set, then as that as tuples, subsets of those
    are well-ordered by their ordinal part, then
    that if ever uncountably many of those are
    in normal order their real part and irrationals,
    then between those are rationals.  I.e.,
    ZFC doesn't give an example of well-ordering
    the reals, but there is one.


    So, Peter, I think you're kind of misreading that
    quote you authoritated.  It just says "if static
    analysis detects a loop or unboundedly growing
    then it's not-halts".


    Of course, for any bounded resource, there
    are arbitrarily larger bounded resources
    required by what's called pathological,
    that an arbitrarily larger resource can though solve.



    So anyways "slow convergence" and "slow divergence",
    have that in mathematics, it's unknown for some series
    whether they converge or diverge, though it's usually
    accepted that the inverse powers of 2 converge and
    that the sum of integers diverges, and that periodic
    functions do neither.

    I.e. the criteria of convergence and existence of a limit,
    and the special case "diverges as going to infinity",
    are not yet closed in today's mathematics.

    It's sort of a conjecture or independent whether
    they ever could be, closed, and complete, the criteria.



I have spent 20 years on the conventional halting problem proofs,
Gödel 1931 Incompleteness and the Liar Paradox. I have only spent
about seven years on Tarski Undefinability.

The Halting problem is the only one of these that has a formal system
that cannot have hidden gaps in its reasoning. If a machine can do it
then it can be done.


    "Algorithmics" is a study of algorithms. The most usual
    consideration is "asymptotics", "asymptotics of complexity",
    with the idea that computing does work, that there are
    closures and completions, and there are approximations and
    attenuations, as it were, in the full and the final and
    the initial and the partial.

    The most usual resources are time, and space, that actions
    occur in time, and according to a structure, in space.

    Of course any CS grad pretty much knows that.


    The objects of mathematics, have it so, that the finite
    and bounded resources of any collection of digital and
    binary machines, are finite and bounded, while, the unbounded
    models of mathematics that represent them, are unbounded,
    it's bad that anybody ever said a Turing tape was "infinite",
    except that it's unbounded as an object of mathematics, and
    the model of any finite, digital, binary machine.

    One might aver "digital" has a complement in computing,
    "analog", which represents the continuous instead of
    the discrete, and one might aver "binary" has a complement,
    either "real-valued" or "fuzzy" or variously about the
    "multi-valent", in logic, the models, according to the
    mechanisms, the models, their mechanics, machines.

    The Turing machine though is a model of unbounded machine
    as what any thusly simply mechanical model can implement,
    up to its bounds.


    So, the objects of mathematics, then involve analytical
    results. I can't draw a circle, though a compass may
    portray a diagram arbitrarily close, and no different.
    It's the analytical results, that make so for why
    mathematics is: "sensible, fungible, and tractable",
    and for "the ubiquitous success of mathematics" in
    all such matters of physics, which is a continuum mechanics,
    and as well discrete mechanics.

    Of course, results exist in mathematical objects,
after the unbounded and complete,
    which is why closures and completions are so
    relevant to all such matters, vis-a-vis approximations
    and attenuations, which in bounded terms result
    indistinguishably non-different results,
    up to precision, say, or tolerances.

    So, "machine" might be discrete, binary, and digital,
    or it might be continuous, multivalent, and analog.

    The Turing machine in a concrete form, is bounded.

    Similar models, here mostly for the stack models,
    stack machines, among finite-state-machines then
    for example concepts like "unbounded nondeterminism",
    here is pretty much about determinism, about the
    defined behavior of a machine according to a
    mathematical model that given defined behavior
    of the mechanisms results a mechanical model of
    a mathematical model, such a theory as this.

    So, the theory of computing, with mechanical or
    computational models, models of computing, is
    pretty much exactly like the theory of physics,
    with the attachment of physical models to the
    mathematical models, and rather, "interpreted as",
    the models, the models, with respect to each other.

    I.e. the extensionality that so follows "they're
    equal, or same, or not different, they represent
    each other, they're equi-interpretable, they model
    each other", the models, of logical and mathematical
    and computational and mechanical and physical models,
    help represent that over all the entire thing there
    is a usual theory for the entire thing, it's a theory
    where model theory models extensionality, and in identity
    intensionality, about equality, tautology, identity,
    and sameness/difference, and nearness/farness, and all
    these usual aspects of the mathematical models'
    arithmetic, algebra, and geometry. (And function
    theory and topology, with respect to categories and
    types, sets and parts, relations and predicates,
    then all the model-building among that which is
    all the propositional or the stipulated or axiomatic.)

    The idea is that this sort of platonistic universe of
    mathematical objects has always existed, and it's just
    discovered, then that axiomatics for example, just
    sort of results a model theory of it, with regards
    to models, the modular, modulation, modality, and the mode.


    So, a machine, with respect to computation, establishing
    the validity of interpretations of models, is subject to
    filling out all the above, vis-a-vis what it can do,
    and it can-do.


    Then, "slowly converging" and "slowly diverging"
are examples that get into that there are more
    law(s) of large numbers, than the usual classical
    law of large numbers.
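The classical law of large numbers referred to here can be sketched numerically; a seeded Python illustration (the seed and sample size are arbitrary choices, not anything from the thread):

```python
import random

# Classical law of large numbers: the running mean of fair coin
# flips settles toward 1/2 as the sample grows. Seeded so the
# run is repeatable.
rng = random.Random(0)
flips = [rng.random() < 0.5 for _ in range(100_000)]
mean = sum(flips) / len(flips)
```

The "more law(s) of large numbers" claim is about other notions of limit and density beyond this classical sample-mean convergence.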

    Some people variously do or don't have a mathematical
    model of larger numbers, or the infinite, at all.
    It's a totally ancient and dogmatic tradition that
    no, we finite peoples, persons, or agents of limited
    and finite means, no we do not have discrete, digital,
    binary mechanical models of the infinite.

    Yet, it's also about the first thing that deduction
    arrives at just from the plain contemplation of the
    beyond the unbounded, like the theories of Democritus
    and Atomism or the theories of Zeno and Continuity,
    that make it so we at least have something to arrive
    at for models of the unbounded, as "methods of exhaustion",
    the approximation and attenuative what results
    the asymptotics of algorithmics.

    So, we do have mental models of the continuous and infinite.
    And, where physics is a continuum mechanics,
    there are physical models of it as well, then
    with regards to the philosophy of science and
    the scientific method and the Big Science and
    from the Primary Science, lots to establish
    that the continuous and analog has mathematical
results, as whether, "sensible, fungible, and tractable",
    to machines at all.

    Numerical methods then, and, approximation and
    attenuation, result from analysis, why they exist,
    here for the criteria of convergence and existence
    of limits, in the closed, in the complete.


    So, a metonymy, unless it's "the Strong Metonymy",
    which is utter intensionality of "the ground model",
    is yet a metaphor and a metaphor yet a weaker metonymy
    joint among weaker metonymys, that, ontology, has
    that there are or may be a completion, of ontology,
    as a "strong ontology" and "strong metonymy",
    but otherwise it's just a bag of facts.



    Here then there's certainly a perceived analytical
    development Cantor space, and, Square Cantor space,
    and, Signal Cantor space as it were, about that among
    the law(s) of large numbers, there are definitions of
    continuity, at least including field-reals, line-reals,
and signal-reals, three sorts of continuous domains.


    Mathematics owes physics this to better begin to
    explain, to disclose, to find truths of, continuum mechanics.


    Then that analog, fuzzy, multi-value, "quantum" computing
    models (parts) of that, that discrete, binary, digital
    computing does not, is a thing.



    The convergence of terms in series is an example
    of a model of things, that, sometimes has clear
    and direct and perfectly accurate representations,
    models, in stack machine, and sometimes doesn't.


    Of course, for any bounded input, there is a sufficiently
    larger bounded analyzer, that does solve it. So, why not
    just put those all together? It would be infinite,
    yet most things are not. (Or, you know, are.)


    It's called "foundations" the study of all these things
    as a fundamental coherency, a thing together, while of
    its parts.

    So, students should well have the concepts of "abacus"
    and "slide rule" and other "tabulators and computers"
    of the usual manual mechanical variety down, according
    to the principles of the mathematical models behind them,
    then introducing the law(s) of large numbers, then
    introducing the idea of a standard model of an infinite,
    about rates, related rates, asymptotics, and bounds.


    "Foundations" then the idea is that it's been around
    forever, it's our dogma and from our canon and it develops
    over time, and the current edition is called "modern".

    Did I make any mistakes?

    Tarski anchors his proof in the Liar Paradox
    https://liarparadox.org/Tarski_247_248.pdf

    How Tarski encodes the Liar Paradox
    x ∉ True if and only if p
    where the symbol 'p' represents the whole sentence x

    This is transformed into line 01 of his proof
    by replacing True with Provable

    *Here is the Tarski Undefinability Theorem proof*
(1) x ∉ Provable if and only if p   // assumption
(2) x ∈ True if and only if p       // assumption
(3) x ∉ Provable if and only if x ∈ True.
(4) either x ∉ True or x̄ ∉ True;    // axiom: ~True(x) ∨ ~True(~x)
(5) if x ∈ Provable, then x ∈ True; // axiom: Provable(x) → True(x)
(6) if x̄ ∈ Provable, then x̄ ∈ True; // axiom: Provable(~x) → True(~x)
(7) x ∈ True
(8) x ∉ Provable
(9) x̄ ∉ Provable

    Tarski's Full proof
    https://liarparadox.org/Tarski_275_276.pdf
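The propositional skeleton of steps (1)-(9) can be checked by brute force, treating Provable(x), True(x), Provable(~x), True(~x), and p as boolean atoms; a Python sketch (an illustration of the propositional shape only, not Tarski's actual metatheoretic construction):

```python
from itertools import product

# Enumerate all valuations of the five atoms and check that every
# model of assumptions (1), (2), (4), (5), (6) forces the
# conclusions (7) x is true, (8) x unprovable, (9) ~x unprovable.
conclusions = []
for Px, Tx, Pnx, Tnx, p in product([False, True], repeat=5):
    assumptions = (
        (not Px) == p          # (1) x not provable iff p
        and Tx == p            # (2) x true iff p
        and not (Tx and Tnx)   # (4) x and ~x not both true
        and (not Px or Tx)     # (5) provable implies true
        and (not Pnx or Tnx)   # (6) ~x provable implies ~x true
    )
    if assumptions:
        conclusions.append(Tx and not Px and not Pnx)

every_model_agrees = len(conclusions) > 0 and all(conclusions)
```

At this propositional level the assumptions pin down a single model, in which x is true but neither x nor its negation is provable.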





    Well I have some collected works of Tarski so
    I'll be looking to them, the excerpt there you
    reference basically talks about the model under
    consideration, as a fragment, of "the meta-theory",
    i.e., "Tarski's meta-theory" is "rather underdefined",
    that what he does is that he says that in the meta-theory,
    that there's something about the model under consideration,
    that isn't true in the model but is true, and he invokes
Goedel to basically say that Goedel's incompleteness
applies to this model, which of course must thus be
"strong enough to support arithmetic", where otherwise
    of course, Goedel's incompleteness does _not_ apply
    to theories that may be entirely closed categories.

    It seems the issue is that you think that you have
    entirely closed categories, in your meta-theory,
    then are applying this meta-theory to another model
    under consideration, which isn't, so it seems like
    you have implicits on your model, which are underdefined,
    rather as the "properly logical" or "non-logical" are
    with respect to being modeled by the logical, that
you have some sort of "un-logical" that are the
    result of that the axioms are un-logical, with regards
    to that your "meta-theory" is logical and your model
    under consideration, its _logical_ objects, are actually
    "un-logical" objects with respect to your, "meta-theory".


    That's a problem overall with "meta-theory", here the
    goal is to arrive at a theory that is its own meta-theory,
    that's also all one language, with regards to otherwise

    Yes that is exactly correct.

    *True(L,x) returns TRUE when x is True otherwise returns FALSE*

So what does True(L, S) return when S is defined as S := ~True(L,S)?


    *Easier to understand in English*
    LP = "This sentence is not true."
    Boolean True(English, LP)  returns FALSE
    Boolean True(English, ~LP) returns FALSE

    I formalize this in my own minimal type theory that has actual
    self-reference using the := {is defined as} operator. https://en.wikipedia.org/wiki/List_of_logic_symbols

    LP := ~True(LP)

    *Minimal Type Theory* (YACC BNF) https://www.researchgate.net/publication/331859461_Minimal_Type_Theory_YACC_BNF

    Prolog rejects this
    ?- LP = not(true(LP)).
    LP = not(true(LP)).

    ?- unify_with_occurs_check(LP, not(true(LP))).
    false.
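The occurs-check failure above can be reproduced outside Prolog; a minimal Python sketch of the variable-against-term case (the term encoding and function names are ad hoc, not a full unification algorithm):

```python
# Minimal first-order terms: variables are strings, compound terms
# are (functor, arg, ...) tuples. occurs() mirrors Prolog's occurs
# check: does variable v appear anywhere inside term t?
def occurs(v, t):
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, arg) for arg in t[1:])
    return False

def unify_with_occurs_check(v, t):
    """Binding a variable to a term that contains it must fail."""
    return None if occurs(v, t) else {v: t}

# LP = not(true(LP)): the right-hand side contains LP itself, so
# unification with occurs check fails, as in the SWI-Prolog query.
result = unify_with_occurs_check("LP", ("not", ("true", "LP")))
```

Plain `=` in Prolog skips this check and builds a cyclic term, which is why the first query above "succeeds" while unify_with_occurs_check answers false.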

It seems that Tarski's theory gets stuck on the Liar Paradox
whereas mine does not because it is anchored in the Prolog
model where True means proven by axioms and false means
(negation as failure) unprovable from axioms.

We can still get back to the conventional notion of false
when the negation of the sentence is unprovable from axioms.

    So, what does True(L, S) return?

    If S is untrue, doesn't that make True(L,S) false, and thus S true?


    these sorts "fragments", of theories, interpreting each
    other, which is only ordinary, and when the objects
    are structurally self-defining or under-defined, as it
    were, when you're talking about quantifier ambiguity,
    in a fragment with decision or the non-logical about
quantifier ambiguity in the meta-theory, then you've
basically got stipulations which aren't so much "true axioms"
    as "preconceived notions". They're stipulations.

    It seems you're sort of invoking a wash among
    meta-theory, theory, fragment, theory, meta-theory,
    theory, fragment, theory - and not modeling it,
    instead being sort of memoryless and unconscious
    about it, it's sort of considered unconscientious
    about it.


    I've been talking about these kinds of things in
    my podcasts recently, or I read a biography of Tarski
    and sort of address modern metaphysics and with regards
to formal foundations.

    https://www.youtube.com/watch?v=lNYTU97XCMw&list=PLb7rLSBiE7F4eHy5vT61UYFR7_BIhwcOY&index=28

    For example the ontology and teleology have a
    sort of dualism, here like "fragment" and "meta-theory",
    the goal is a sort of "monist dualism" that's a
    sort of "dualist monism", here about the "logical".


    Good luck though it's a usual thing, about what
    there's basically _expansion_ of comprehension first,
    then that _restriction_ of comprehension is always
    fragments, it really does get into idea like for
    computability theory "the model of the integers,
    is only either a fragment or extension, more than
    standard".

    So, book-keeping your meta-theories or abstractions
    and fragments or grasps, it's like, "get a grip",
    right, but, "left-hand/right-hand".

    It's sort of like "left-hand/right-hand:
    stop hitting yourself, I'm not doing it".



    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Thu Mar 14 18:27:13 2024
    XPost: comp.theory, sci.logic

    On 3/14/24 3:54 PM, olcott wrote:
    On 3/14/2024 5:01 PM, Richard Damon wrote:
    On 3/14/24 12:01 PM, olcott wrote:
    On 3/14/2024 11:58 AM, Ross Finlayson wrote:
    On 03/13/2024 10:20 PM, olcott wrote:
    On 3/13/2024 1:16 PM, Ross Finlayson wrote:
    On 03/12/2024 09:00 PM, olcott wrote:
    On 3/12/2024 10:49 PM, Ross Finlayson wrote:
    On 03/12/2024 08:23 PM, Ross Finlayson wrote:
    On 03/12/2024 07:52 PM, olcott wrote:
    On 3/12/2024 9:28 PM, Richard Damon wrote:
    On 3/12/24 4:31 PM, olcott wrote:
    On 3/12/2024 6:11 PM, Richard Damon wrote:
    On 3/12/24 3:53 PM, olcott wrote:
    On 3/12/2024 5:30 PM, Richard Damon wrote:
    On 3/12/24 2:34 PM, olcott wrote:
    On 3/12/2024 4:23 PM, Richard Damon wrote:
    On 3/12/24 1:11 PM, olcott wrote:
On 3/12/2024 2:40 PM, Richard Damon wrote:
On 3/12/24 12:02 PM, olcott wrote:
    On 3/12/2024 1:31 PM, immibis wrote:
    On 12/03/24 19:12, olcott wrote:
∀ H ∈ Turing_Machine_Deciders
∃ TMD ∈ Turing_Machine_Descriptions |
Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

There is some input TMD to every H such that
Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

And it can be a different TMD to each H.

When we disallow decider/input pairs that are incorrect
questions where both YES and NO are the wrong answer

Once we understand that either YES or NO is the right
answer, the whole rebuttal is tossed out as invalid and
incorrect.


Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does not halt
BOTH YES AND NO ARE THE WRONG ANSWER FOR EVERY Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩

No, because a given H will only go to one of the answers. THAT
will be wrong, and the other one right.


∀ H ∈ Turing_Machine_Deciders
∃ TMD ∈ Turing_Machine_Descriptions |
Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

Not exactly. A pair of otherwise identical machines
(that are contained within the above specified set)
that only differ by return value will both be wrong on the
same pathological input.

You mean a pair of DIFFERENT machines. Any difference is
different.

Every decider/input pair (referenced in the above set) has a
corresponding decider/input pair that only differs by the return
value of its decider.

    Nope.

    ∀ H ∈ Turing_Machines_Returning_Boolean
    ∃ TMD ∈ Turing_Machine_Descriptions  |
Predicted_Behavior(H, TMD) != Actual_Behavior(TMD)

Every H/TMD pair (referenced in the above set) has a
corresponding H/TMD pair that only differs by the return
value of its Boolean_TM.

That isn't in the set above.


That both of these H/TMD pairs get the wrong answer proves
that their question was incorrect because the opposite answer
to the same question is also proven to be incorrect.


    Nope, since both aren't in the set selected.


When they are deciders that must get the correct answer both
of them are not in the set.

*IF* they are correct deciders.

When we select from all Turing Machine Deciders, there is no
requirement that any of them get any particular answer right.

So, ALL deciders are in the set that we cycle through and
apply the following logic to ALL of them.

Each is then paired with an input that it will get wrong, and
the existence of the input was what was just proven, the ^ template

When they are Turing_Machines_Returning_Boolean then this
set inherently includes identical pairs that only differ
by return value.

But in the step of selecting an input that they will get wrong,
they will be given DIFFERENT inputs.


You just don't understand what that statement is saying.
I've explained it, but it seems over your head.

No, the problem is that you are not paying attention.

No, you keep on making STUPID mistakes, like thinking that
selecting an input that the machine will get wrong needs to be
the same for two different machines.




For every H, we show we can find at least one input (chosen
just for that machine) that it will get wrong.

When we use machine templates then we can see instances of
the same machine that only differs by return value where both
get the wrong answer on the same input. By same input I mean
the same finite string of numerical values.


But if they returned different values, they will have different
descriptions.

Otherwise, how could a UTM get the right answer, since it
only gets the description.

    We can get around all of this stuff by simply using this
    criteria:
    Date 10/13/2022 11:29:23 AM
*MIT Professor Michael Sipser agreed this verbatim paragraph is correct*
(He has neither reviewed nor agreed to anything else in this paper)
(a) If simulating halt decider H correctly simulates its input D
until H correctly determines that its simulated D would never stop
running unless aborted then
(b) H can abort its simulation of D and correctly report that D
specifies a non-halting sequence of configurations.

    *When we apply this criteria* (elaborated above)
    Will you halt if you never abort your simulation?
    *Then the halting problem is conquered*

    When two different machines implementing this criteria
    get different results from identical inputs then we
    know that Pathological Self-Reference has been detected.

    We don't even need to know that for:
    *denial-of-service-attack detection*
    *NO always means reject as unsafe*


But, Halting theorem never said "there's an input that halts
all machines", it just says "for any machine, there's an input
that halts it".

Where "halt the machine" means "put it in an infinite loop".

So, rather, Halting theorem never said "there's an input that
exhausts all machines", it just says, "for any machine, there's
an input that exhausts it".

I still don't see how that would be with infinite tapes though,
without a means of checking all the way right the tape in one
step, i.e. that it's 1's or 0's or any pattern at all, any
input that is unbounded with respect to the machine basically
exhausts it or where the machine would run in the unbounded.

Of course any finite tape, has a static analysis that is
not infinite, that decides whether or not it halts
(or, loops, or grows, the state space of the decider).

Static analysis has to either enumerate or _infer_ the
state-space, where equal values in what's determined
the idempotent can detect loops, while inequalities
or proven divergence, ..., can detect unbounded growth.

Now, proving convergence or divergence is its own kind
of thing. For example, there are series that converge
very slowly, and series that diverge very slowly. These
are subtly intractable to analysis.

Then the usual idea of progressions that rapidly grow
yet return to what's detectable, are pathological to
analysis, only in resources not correctness or completion,
vis-a-vis the subtle intractability of the convergence or
divergence what either halts, or loops, or grows unboundedly.

Separately "not-halts" into "loops or grows unboundedly",
has that really for most matters of interest of understanding
the state-space of programs, is "halts or enters loops"
and not "grows unboundedly".
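The "enumerate the state-space, where equal values detect loops" idea can be sketched directly; a minimal Python illustration (the classify function, the three toy transition rules, and the step budget are all illustrative, and "unknown" stands for the slow-divergence cases that remain undecided):

```python
# Classify a deterministic transition system by running it and
# watching the states: None means it halted, a revisited state
# means a loop, and exhausting the step budget is reported as
# unknown rather than decided.
def classify(step, state, budget=10_000):
    seen = set()
    for _ in range(budget):
        if state is None:
            return "halts"
        if state in seen:       # equal values detect the loop
            return "loops"
        seen.add(state)
        state = step(state)
    return "unknown"

halting = classify(lambda n: n - 1 if n > 0 else None, 5)  # countdown
looping = classify(lambda n: (n + 1) % 3, 0)               # mod-3 cycle
growing = classify(lambda n: n + 1, 0)                     # grows unboundedly
```

The growing case is the one a bounded analyzer can never settle by enumeration alone, which is where the inference of divergence discussed above would have to come in.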

I.e. if the Halting problem is basically subject to the
subtle intractability of slow convergence, that otherwise
it can just infer divergence and decide, practically
it's sort of more relevant what the machine would be
doing on the input on the tape, then with respect to
beyond the Turing theory, of the state of the read-head,
what happens when somebody modifies the tape, or events,
the write-head.

Anyways though for bounded inputs, besides slow divergence,
it's to be made clear that _most all_ and _almost all_
programs _are_ decided their behavior by static analysis.

Though, "most all" and "almost all" might be a bit strong,
but pretty much all that don't involve "the subtle intractability
of slow divergence".



    Giving the idea that an existence result
    is in any way the expected result here
seems sort of the root of this dilemma.






    (Though that the real numbers in ZFC have a well-ordering
    and if they had a normal ordering that was a well-ordering,
    that would be a thing, because ZFC has a well-ordering of
    [0,1], but can't give one.)





    What I'm saying is that you'd need to solve all
    cases of slow convergence and slow divergence
    to have a correct and complete halt decider,
    for the unbounded, and finite, if not the infinite.

    Most though setups of convergence though
    would effectively exhaust according to the
    existence of criteria of convergence as modeling
    the computation, though, so, if you exclude
    slow convergence and slow divergence,
    then a subset of very usual machines is of
    course quite all decidable.  (Or decide-able,
    or as I put it once, "not deciddable".)



    Well-ordering the reals in ZFC, that's it own
    sort issue - that a well-ordering implies the
    existence of a bijection from ordinals to a
    set, then as that as tuples, subsets of those
    are well-ordered by their ordinal part, then
    that if ever uncountably many of those are
    in normal order their real part and irrationals,
    then between those are rationals.  I.e.,
    ZFC doesn't give an example of well-ordering
    the reals, but there is one.


    So, Peter, I think you're kind of misreading that
    quote you authoritated.  It just says "if static
    analysis detects a loop or unboundedly growing
    then it's not-halts".


    Of course, for any bounded resource, there
    are arbitrarily larger bounded resources
    required by what's called pathological,
    that an arbitrarily larger resource can though solve.



    So anyways "slow convergence" and "slow divergence",
    have that in mathematics, it's unknown for some series
    whether they converge or diverge, though it's usually
    accepted that the inverse powers of 2 converge and
    that the sum of integers diverges, and that periodic
    functions do neither.

    I.e. the criteria of convergence and existence of a limit,
    and the special case "diverges as going to infinity",
    are not yet closed in today's mathematics.

    It's sort of a conjecture or independent whether
    they ever could be, closed, and complete, the criteria.



    I have spent 20 years on The conventional halting problem proofs, >>>>>>> Gödel 1931 Incompleteness and the Liar Paradox. I have only spent >>>>>>> about seven years on Tarski Undefinability.

    The Halting problem is the only one of these that has a formal
    system
    that cannot have hidden gaps its its reasoning. If a machine can >>>>>>> do it
    then it can be done.


    "Algorithmics" is a study of algorithms. The most usual
    consideration is "asymptotics", "asymptotics of complexity",
    with the idea that computing does work, that there are
    closures and completions, and there are approximations and
    attenuations, as it were, in the full and the final and
    the initial and the partial.

    The most usual resources are time, and space, that actions
    occur in time, and according to a structure, in space.

    Of course any CS grad pretty much knows that.


    The objects of mathematics, have it so, that the finite
    and bounded resources of any collection of digital and
    binary machines, are finite and bounded, while, the unbounded
    models of mathematics that represent them, are unbounded,
    it's bad that anybody ever said a Turing tape was "infinite",
    except that it's unbounded as an object of mathematics, and
    the model of any finite, digital, binary machine.

    One might aver "digital" has a complement in computing,
    "analog", which represents the continuous instead of
    the discrete, and one might aver "binary" has a complement,
    either "real-valued" or "fuzzy" or variously about the
    "multi-valent", in logic, the models, according to the
    mechanisms, the models, their mechanics, machines.

    The Turing machine though is a model of unbounded machine
    as what any thusly simply mechanical model can implement,
    up to its bounds.


    So, the objects of mathematics, then involve analytical
    results. I can't draw a circle, though a compass may
    portray a diagram arbitrarily close, and no different.
    It's the analytical results, that make so for why
    mathematics is: "sensible, fungible, and tractable",
    and for "the ubiquitous success of mathematics" in
    all such matters of physics, which is a continuum mechanics,
    and as well discrete mechanics.

    Of course, results exist in mathematical objects,
    after the unbounded the unbounded and complete,
    which is why closures and completions are so
    relevant to all such matters, vis-a-vis approximations
    and attenuations, which in bounded terms result
    indistinguishably non-different results,
    up to precision, say, or tolerances.

    So, "machine" might be discrete, binary, and digital,
    or it might be continuous, multivalent, and analog.

    The Turing machine in a concrete form, is bounded.

    Similar models, here mostly for the stack models,
    stack machines, among finite-state-machines then
    for example concepts like "unbounded nondeterminism",
    here is pretty much about determinism, about the
    defined behavior of a machine according to a
    mathematical model that given defined behavior
    of the mechanisms results a mechanical model of
    a mathematical model, such a theory as this.

    So, the theory of computing, with mechanical or
    computational models, models of computing, is
    pretty much exactly like the theory of physics,
    with the attachment of physical models to the
    mathematical models, and rather, "interpreted as",
    the models, the models, with respect to each other.

    I.e. the extensionality that so follows "they're
    equal, or same, or not different, they represent
    each other, they're equi-interpretable, they model
    each other", the models, of logical and mathematical
    and computational and mechanical and physical models,
    help represent that over all the entire thing there
    is a usual theory for the entire thing, it's a theory
    where model theory models extensionality, and in identity
    intensionality, about equality, tautology, identity,
    and sameness/difference, and nearness/farness, and all
    these usual aspects of the mathematical models'
    arithmetic, algebra, and geometry. (And function
    theory and topology, with respect to categories and
    types, sets and parts, relations and predicates,
    then all the model-building among that which is
    all the propositional or the stipulated or axiomatic.)

    The idea is that this sort of platonistic universe of
    mathematical objects has always existed, and it's just
    discovered, then that axiomatics for example, just
    sort of results a model theory of it, with regards
    to models, the modular, modulation, modality, and the mode.


    So, a machine, with respect to computation, establishing
    the validity of interpretations of models, is subject to
    filling out all the above, vis-a-vis what it can do,
    and it can-do.


    Then, "slowly converging" and "slowly diverging"
    are examples that get get into that there are more
    law(s) of large numbers, than the usual classical
    law of large numbers.

    Some people variously do or don't have a mathematical
    model of larger numbers, or the infinite, at all.
    It's a totally ancient and dogmatic tradition that
    no, we finite peoples, persons, or agents of limited
    and finite means, no we do not have discrete, digital,
    binary mechanical models of the infinite.

    Yet, it's also about the first thing that deduction
    arrives at just from the plain contemplation of the
    beyond the unbounded, like the theories of Democritus
    and Atomism or the theories of Zeno and Continuity,
    that make it so we at least have something to arrive
    at for models of the unbounded, as "methods of exhaustion",
    the approximation and attenuative what results
    the asymptotics of algorithmics.

    So, we do have mental models of the continuous and infinite.
    And, where physics is a continuum mechanics,
    there are physical models of it as well, then
    with regards to the philosophy of science and
    the scientific method and the Big Science and
    from the Primary Science, lots to establish
    that the continuous and analog has mathematical
    results, as whether "sensible, fungible, and tractable",
    to machines at all.

    Numerical methods then, and, approximation and
    attenuation, result from analysis, why they exist,
    here for the criteria of convergence and existence
    of limits, in the closed, in the complete.


    So, a metonymy, unless it's "the Strong Metonymy",
    which is utter intensionality of "the ground model",
    is yet a metaphor and a metaphor yet a weaker metonymy
    joint among weaker metonymies, that, ontology, has
    that there are or may be a completion, of ontology,
    as a "strong ontology" and "strong metonymy",
    but otherwise it's just a bag of facts.



    Here then there's certainly a perceived analytical
    development of Cantor space, and, Square Cantor space,
    and, Signal Cantor space as it were, about that among
    the law(s) of large numbers, there are definitions of
    continuity, at least including field-reals, line-reals,
    and signal-reals, three sorts of continuous domains.


    Mathematics owes physics this to better begin to
    explain, to disclose, to find truths of, continuum mechanics.


    Then that analog, fuzzy, multi-value, "quantum" computing
    models (parts) of that, that discrete, binary, digital
    computing does not, is a thing.



    The convergence of terms in series is an example
    of a model of things, that, sometimes has clear
    and direct and perfectly accurate representations,
    models, in stack machine, and sometimes doesn't.
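A small hedged sketch of that contrast, in Python rather than an actual stack machine: partial sums of the geometric series in powers of 1/2 are represented perfectly in binary floating point, while the harmonic series diverges slowly and has no such exact finite model (the numbers 52 and 10_000 here are arbitrary choices for illustration):

```python
import math

# Geometric series sum_{k=1}^{52} 1/2**k: every term and every partial
# sum is exactly representable in binary floating point, so the machine
# model of the series is perfectly accurate.
geo = sum(1.0 / 2**k for k in range(1, 53))

# Harmonic partial sums diverge, and slowly: roughly ln(n) + gamma,
# with no exact finite representation of the limit (there is none).
harm = sum(1.0 / k for k in range(1, 10_001))

print(geo, harm, math.log(10_000))
```

Here `geo` equals 1 - 2**-52 exactly, while `harm` only tracks ln(n) approximately, which is the sense in which some series have "perfectly accurate" machine models and some don't.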


    Of course, for any bounded input, there is a sufficiently
    large bounded analyzer that does solve it. So, why not
    just put those all together? It would be infinite,
    yet most things are not. (Or, you know, are.)
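The "sufficiently large bounded analyzer" idea can be sketched directly: a decider that simulates for at most N steps correctly decides halting for every run that fits inside N, and a larger bound is a strictly stronger analyzer. This is a toy sketch under assumed names (`bounded_halts`, `countdown` are inventions for illustration), not anyone's actual proposal in the thread:

```python
def bounded_halts(step, state, bound):
    """Run a machine given by a step function until it returns None
    (meaning: halted) or the step bound is exhausted. This decides
    halting only for runs that fit inside the bound; beyond the bound
    it merely reports 'did not halt yet'."""
    for _ in range(bound):
        state = step(state)
        if state is None:
            return True      # halted within the bound
    return False             # not halted within the bound (undecided beyond it)

# A toy machine: counts down from n, halting when it reaches zero.
countdown = lambda n: None if n == 0 else n - 1

print(bounded_halts(countdown, 10, 5))    # bound too small: False
print(bounded_halts(countdown, 10, 100))  # bound large enough: True
```

Putting "all of those together" would indeed require an unbounded analyzer, which is exactly where the construction stops being a finite machine.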


    It's called "foundations" the study of all these things
    as a fundamental coherency, a thing together, while of
    its parts.

    So, students should well have the concepts of "abacus"
    and "slide rule" and other "tabulators and computers"
    of the usual manual mechanical variety down, according
    to the principles of the mathematical models behind them,
    then introducing the law(s) of large numbers, then
    introducing the idea of a standard model of an infinite,
    about rates, related rates, asymptotics, and bounds.


    "Foundations" then the idea is that it's been around
    forever, it's our dogma and from our canon and it develops
    over time, and the current edition is called "modern".

    Did I make any mistakes?

    Tarski anchors his proof in the Liar Paradox
    https://liarparadox.org/Tarski_247_248.pdf

    How Tarski encodes the Liar Paradox
    x ∉ True if and only if p
    where the symbol 'p' represents the whole sentence x

    This is transformed into line 01 of his proof
    by replacing True with Provable

    *Here is the Tarski Undefinability Theorem proof*
    (1) x ∉ Provable if and only if p   // assumption
    (2) x ∈ True if and only if p       // assumption
    (3) x ∉ Provable if and only if x ∈ True.
    (4) either x ∉ True or x̄ ∉ True;    // axiom: ~True(x) ∨ ~True(~x)
    (5) if x ∈ Provable, then x ∈ True; // axiom: Provable(x) → True(x)
    (6) if x̄ ∈ Provable, then x̄ ∈ True; // axiom: Provable(~x) → True(~x)
    (7) x ∈ True
    (8) x ∉ Provable
    (9) x̄ ∉ Provable

    Tarski's Full proof
    https://liarparadox.org/Tarski_275_276.pdf
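As a purely propositional sanity check (my own toy model, not anything in Tarski's text): treat Provable(x), Provable(~x), True(x), True(~x), and p as five booleans, impose assumptions (1)-(2) and axioms (4)-(6), and enumerate all valuations. Every valuation that survives satisfies exactly the conclusions (7), (8), (9):

```python
from itertools import product

def models():
    """All classical valuations satisfying assumptions (1),(2) and
    axioms (4)-(6). Px = 'x is Provable', Tx = 'x is True',
    Pnx/Tnx the same for ~x, p the sentence itself."""
    for Px, Pnx, Tx, Tnx, p in product((True, False), repeat=5):
        a1 = (not Px) == p           # (1) x not provable iff p
        a2 = Tx == p                 # (2) x true iff p
        a4 = (not Tx) or (not Tnx)   # (4) not both x and ~x true
        a5 = (not Px) or Tx          # (5) provable implies true
        a6 = (not Pnx) or Tnx        # (6) same for ~x
        if a1 and a2 and a4 and a5 and a6:
            yield Px, Pnx, Tx, Tnx

ms = list(models())
print(len(ms), "surviving valuation(s)")
for Px, Pnx, Tx, Tnx in ms:
    # (7) x is true, (8) x not provable, (9) ~x not provable
    assert Tx and not Px and not Pnx
```

This says nothing about the meta-theory machinery, only that at the propositional level the listed axioms really do force (7)-(9) from (1)-(2).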





    Well I have some collected works of Tarski so
    I'll be looking to them, the excerpt there you
    reference basically talks about the model under
    consideration, as a fragment, of "the meta-theory",
    i.e., "Tarski's meta-theory" is "rather underdefined",
    that what he does is that he says that in the meta-theory,
    that there's something about the model under consideration,
    that isn't true in the model but is true, and he invokes
    Goedel to basically say that Goedel's incompleteness
    applies to this model, which of course must thus be
    "strong enough to support arithmetic", where otherwise
    of course, Goedel's incompleteness does _not_ apply
    to theories that may be entirely closed categories.

    It seems the issue is that you think that you have
    entirely closed categories, in your meta-theory,
    then are applying this meta-theory to another model
    under consideration, which isn't, so it seems like
    you have implicits on your model, which are underdefined,
    rather as the "properly logical" or "non-logical" are
    with respect to being modeled by the logical, that
    you have some sort of "un-logical" that are the
    result of that the axioms are un-logical, with regards
    to that your "meta-theory" is logical and your model
    under consideration, its _logical_ objects, are actually
    "un-logical" objects with respect to your, "meta-theory".


    That's a problem overall with "meta-theory", here the
    goal is to arrive at a theory that is its own meta-theory,
    that's also all one language, with regards to otherwise

    Yes that is exactly correct.

    *True(L,x) returns TRUE when x is True otherwise returns FALSE*

    So what does True(L, S) return when S is defined as S := ~True(L,S)


    True(L, S) returns Prolog's negation as failure false
    that only means not true.

    So True(L, S) Isn't a Predicate, BY DEFINITION, as predicates only return
    the values True and False.

    So, you are AGREEING with Tarski, and Lying in your claim he is wrong.


    True(L, ~S) also returns Prolog's negation as failure false
    that only means not true.
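The "negation as failure" behavior being claimed here can be sketched as a toy three-valued evaluator (my own illustration, not olcott's or Prolog's actual machinery; the names `eval_sentence` and `is_true` are inventions). The Liar sentence S := ~True(L,S) never bottoms out, so it evaluates to "unknown", and querying either S or ~S reports only "not proven true":

```python
def eval_sentence(expr, depth=10):
    """Tiny three-valued evaluator. expr is 'S', ('true', e), or
    ('not', e), where S abbreviates ('not', ('true', 'S')) -- the Liar.
    Self-reference that never bottoms out yields None ('unknown'),
    the analogue of negation-as-failure finding no proof either way."""
    if depth == 0:
        return None                          # no proof either way
    if expr == 'S':
        return eval_sentence(('not', ('true', 'S')), depth - 1)
    if expr[0] == 'true':
        return eval_sentence(expr[1], depth - 1)
    if expr[0] == 'not':
        v = eval_sentence(expr[1], depth - 1)
        return None if v is None else (not v)

def is_true(expr):
    """True(L, x): holds only when x provably evaluates to True."""
    return eval_sentence(expr) is True

# Both the Liar sentence and its negation come back merely 'not true':
print(is_true('S'), is_true(('not', 'S')))
```

Note what the sketch actually shows: `is_true` returns False for both S and ~S only because the underlying evaluation is three-valued (True, False, unknown), which is precisely the objection that such a True(L, x) is not a two-valued predicate.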


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)