• Re: Simulating (partial) Halt Deciders Defeat the Halting Problem Proof

    From Richard Damon@21:1/5 to olcott on Tue Apr 18 07:32:11 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/18/23 1:00 AM, olcott wrote:
    A simulating halt decider correctly predicts whether or not its
    correctly simulated input can possibly reach its own final state and
    halt. It does this by correctly recognizing several non-halting behavior patterns in a finite number of steps of correct simulation. Inputs that
    do terminate are simply simulated until they complete.



    Except it doesn't do this for the "pathological" program.

    The "Pathological Program" when built on such a Decider that does give
    an answer, which you say will be non-halting, and then "Correctly
    Simulated" by giving it representation to a UTM, we see that the
    simulation reaches a final state.

    Thus, your H was WRONG to give that answer. And the problem is you have
    added a pattern that isn't always non-halting.
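
    For readers following along, here is a minimal C sketch of the
    construction being argued about, in the same function-pointer idiom the
    thread later uses for Px. This H is only a stand-in stub that returns 0
    ("non-halting"); it is not olcott's actual decider.

    /* Hedged sketch of the "pathological" program D built on a claimed
       halt decider H. This H is a stand-in stub, NOT olcott's decider:
       it simply returns 0, the "non-halting" verdict under discussion. */
    #include <stdio.h>

    int H(void (*x)(), void (*y)())
    {
        (void)x; (void)y;
        return 0;              /* stand-in verdict: "input does not halt" */
    }

    void D(void (*x)())
    {
        if (H(x, x))           /* if H reports that x(x) halts ...        */
            for (;;) ;         /* ... then loop forever                   */
                               /* otherwise fall through and halt         */
    }

    int main(void)
    {
        D(D);                  /* control returns here, i.e. D(D) halts   */
        puts("D(D) halted even though H(D,D) reported non-halting");
        return 0;
    }

    With this stub the program prints its message: when H(D,D) answers
    non-halting, the directly executed D(D) halts, which is the behavior
    described above.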

    When a simulating halt decider correctly simulates N steps of its input
    it derives the exact same N steps that a pure UTM would derive because
    it is itself a UTM with extra features.

    But if ISN'T a "UTM" any more, because some of the features you added
    have removed essential features needed for it to be an actual UTM. That
    you make this claim shows you don't actually know what a UTM is.

    This is like saying a NASCAR Racing Car is a Street Legal vehicle, since
    it started as one and just had some extra features added.


    My reviewers cannot show that any of the extra features added to the UTM change the behavior of the simulated input for the first N steps of simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it
    (c) Even aborting the simulation after N steps doesn't change the first
    N steps.

    No one claims that it doesn't correctly reproduce the first N steps of
    the behavior, that is a Strawman argument.


    Because of all this we can know that the first N steps of input D
    simulated by simulating halt decider H are the actual behavior that D presents to H for these same N steps.

    *computation that halts*… “the Turing machine will halt whenever it enters a final state” (Linz:1990:234)

    Right, so we are concerned about the behavior of the ACTUAL machine, not
    a partial simulation of it.
    H(D,D) returns non-halting, but D(D) Halts, so the answer is wrong.


    When we see (after N steps) that D correctly simulated by H cannot
    possibly reach its simulated final state in any finite number of steps
    of correct simulation then we have conclusive proof that D presents non-halting behavior to H.

    But it isn't "Correctly Simulated by H" since this H never does a
    correct simulation of the sort that determines halting or not (that of
    an ACTUAL UTM, which never aborts until it reaches the end).

    Since H DOES abort its simulation, changing "H" into a UTM instead is
    just saying that H doesn't actually process the input that was given to
    it, and thus it gets the wrong answer.


    *Simulating (partial) Halt Deciders Defeat the Halting Problem Proofs* https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs


    Which is full of UNSOUND logic and Strawman arguments.

    You aren't allowed to change the input, so you can't change the H that D
    uses.

    You have been repeatedly told this, and yet you still repeat it. This
    shows you have no capability of learning, and that you are totally
    ignorant of the things you are talking about.

    You have buried your reputation by all your lies and fabrications.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Tue Apr 18 10:58:58 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/18/2023 6:32 AM, Richard Damon wrote:
    On 4/18/23 1:00 AM, olcott wrote:
    A simulating halt decider correctly predicts whether or not its
    correctly simulated input can possibly reach its own final state and
    halt. It does this by correctly recognizing several non-halting behavior
    patterns in a finite number of steps of correct simulation. Inputs that
    do terminate are simply simulated until they complete.



    Except it doesn't do this for the "pathological" program.

    The "Pathological Program" when built on such a Decider that does give
    an answer, which you say will be non-halting, and then "Correctly
    Simulated" by giving it representation to a UTM, we see that the
    simulation reaches a final state.

    Thus, your H was WRONG to give that answer. And the problem is you have
    added a pattern that isn't always non-halting.

    When a simulating halt decider correctly simulates N steps of its input
    it derives the exact same N steps that a pure UTM would derive because
    it is itself a UTM with extra features.

    But if ISN'T a "UTM" any more, because some of the features you added
    have removed essential features needed for it to be an actual UTM. That
    you make this claim shows you don't actually know what a UTM is.

    This is like saying a NASCAR Racing Car is a Street Legal vehicle, since
    it started as one and just had some extra features added.


    My reviewers cannot show that any of the extra features added to the UTM
    change the behavior of the simulated input for the first N steps of
    simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it
    (c) Even aborting the simulation after N steps doesn't change the
    first N steps.

    No one claims that it doesn't correctly reproduce the first N steps of
    the behavior, that is a Strawman argument.


    Because of all this we can know that the first N steps of input D
    simulated by simulating halt decider H are the actual behavior that D
    presents to H for these same N steps.

    *computation that halts*… “the Turing machine will halt whenever it
    enters a final state” (Linz:1990:234)

    Right, so we are concerned about the behavior of the ACTUAL machine, not
    a partial simulation of it.
    H(D,D) returns non-halting, but D(D) Halts, so the answer is wrong.


    When we see (after N steps) that D correctly simulated by H cannot
    possibly reach its simulated final state in any finite number of steps
    of correct simulation then we have conclusive proof that D presents non-
    halting behavior to H.

    But it isn't "Correctly Simulated by H"
    You agreed that the first N steps are correctly simulated.

    It turns out that the non-halting behavior pattern is correctly
    recognized in the first N steps.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mr Flibble@21:1/5 to olcott on Tue Apr 18 22:55:06 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 18/04/2023 4:58 pm, olcott wrote:
    On 4/18/2023 6:32 AM, Richard Damon wrote:
    On 4/18/23 1:00 AM, olcott wrote:
    A simulating halt decider correctly predicts whether or not its
    correctly simulated input can possibly reach its own final state and
    halt. It does this by correctly recognizing several non-halting behavior patterns in a finite number of steps of correct simulation. Inputs that
    do terminate are simply simulated until they complete.



    Except it doesn't do this for the "pathological" program.

    The "Pathological Program" when built on such a Decider that does give
    an answer, which you say will be non-halting, and then "Correctly
    Simulated" by giving it representation to a UTM, we see that the
    simulation reaches a final state.

    Thus, your H was WRONG to give that answer. And the problem is you have
    added a pattern that isn't always non-halting.

    When a simulating halt decider correctly simulates N steps of its input
    it derives the exact same N steps that a pure UTM would derive because
    it is itself a UTM with extra features.

    But if ISN'T a "UTM" any more, because some of the features you added
    have removed essential features needed for it to be an actual UTM.
    That you make this claim shows you don't actually know what a UTM is.

    This is like saying a NASCAR Racing Car is a Street Legal vehicle,
    since it started as one and just had some extra features added.


    My reviewers cannot show that any of the extra features added to the UTM change the behavior of the simulated input for the first N steps of
    simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it
    (c) Even aborting the simulation after N steps doesn't change the
    first N steps.

    No one claims that it doesn't correctly reproduce the first N steps of
    the behavior, that is a Strawman argument.


    Because of all this we can know that the first N steps of input D
    simulated by simulating halt decider H are the actual behavior that D
    presents to H for these same N steps.

    *computation that halts*… “the Turing machine will halt whenever it
    enters a final state” (Linz:1990:234)

    Right, so we are concerned about the behavior of the ACTUAL machine,
    not a partial simulation of it.
    H(D,D) returns non-halting, but D(D) Halts, so the answer is wrong.


    When we see (after N steps) that D correctly simulated by H cannot
    possibly reach its simulated final state in any finite number of steps
    of correct simulation then we have conclusive proof that D presents non-halting behavior to H.

    But it isn't "Correctly Simulated by H"
    You agreed that the first N steps are correctly simulated.

    It turns out that the non-halting behavior pattern is correctly
    recognized in the first N steps.

    Your assumption that a program that calls H is non-halting is erroneous:

    void Px(void (*x)())
    {
        (void) H(x, x);
        return;
    }

    Px halts (it discards the result that H returns); your decider thinks
    that Px is non-halting, which is an obvious error due to a design flaw in
    the architecture of your decider. Only the Flibble Signaling Simulating
    Halt Decider (SSHD) correctly handles this case.
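
    As a concrete illustration of this point, a small harness (again with a
    stand-in stub for H, not the decider under discussion) shows that Px
    returns no matter what value H reports, provided H returns at all:

    /* Hedged illustration: this H is a stand-in stub, not olcott's H.
       Whatever verdict it returns, Px discards it and returns, so Px halts. */
    #include <stdio.h>

    int H(void (*x)(), void (*y)())
    {
        (void)x; (void)y;
        return 0;              /* the value is irrelevant to Px's behavior */
    }

    void Px(void (*x)())
    {
        (void) H(x, x);        /* result discarded, as in the post above   */
        return;
    }

    int main(void)
    {
        Px(Px);
        puts("Px(Px) returned, i.e. Px halts regardless of H's verdict");
        return 0;
    }

    The only way Px(Px) could fail to halt is if H itself never returned
    for that input.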

    /Flibble

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Tue Apr 18 18:50:33 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/18/23 6:39 PM, olcott wrote:
    On 4/18/2023 4:55 PM, Mr Flibble wrote:
    On 18/04/2023 4:58 pm, olcott wrote:
    On 4/18/2023 6:32 AM, Richard Damon wrote:
    On 4/18/23 1:00 AM, olcott wrote:
    A simulating halt decider correctly predicts whether or not its
    correctly simulated input can possibly reach its own final state and >>>>> halt. It does this by correctly recognizing several non-halting
    behavior
    patterns in a finite number of steps of correct simulation. Inputs
    that
    do terminate are simply simulated until they complete.



    Except it doesn't do this for the "pathological" program.

    The "Pathological Program" when built on such a Decider that does
    give an answer, which you say will be non-halting, and then
    "Correctly Simulated" by giving it representation to a UTM, we see
    that the simulation reaches a final state.

    Thus, your H was WRONG to give that answer. And the problem is you
    have added a pattern that isn't always non-halting.

    When a simulating halt decider correctly simulates N steps of its
    input
    it derives the exact same N steps that a pure UTM would derive because >>>>> it is itself a UTM with extra features.

    But if ISN'T a "UTM" any more, because some of the features you
    added have removed essential features needed for it to be an actual
    UTM. That you make this claim shows you don't actually know what a
    UTM is.

    This is like saying a NASCAR Racing Car is a Street Legal vehicle,
    since it started as one and just had some extra features added.


    My reviewers cannot show that any of the extra features added to
    the UTM
    change the behavior of the simulated input for the first N steps of
    simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it
    (c) Even aborting the simulation after N steps doesn't change the
    first N steps.

    No one claims that it doesn't correctly reproduce the first N steps
    of the behavior, that is a Strawman argument.


    Because of all this we can know that the first N steps of input D
    simulated by simulating halt decider H are the actual behavior that D >>>>> presents to H for these same N steps.

    *computation that halts*… “the Turing machine will halt whenever it enters a final state” (Linz:1990:234)

    Right, so we are concerned about the behavior of the ACTUAL machine,
    not a partial simulation of it.
    H(D,D) returns non-halting, but D(D) Halts, so the answer is wrong.


    When we see (after N steps) that D correctly simulated by H cannot
    possibly reach its simulated final state in any finite number of steps >>>>> of correct simulation then we have conclusive proof that D presents
    non-
    halting behavior to H.

    But it isn't "Correctly Simulated by H"
    You agreed that the first N steps are correctly simulated.

    It turns out that the non-halting behavior pattern is correctly
    recognized in the first N steps.

    Your assumption that a program that calls H is non-halting is erroneous:


    My new paper anchors its ideas in actual Turing machines so it is
    unequivocal. The first two pages are only about the Linz Turing
    machine based proof.

    The H/D material is now on a single page and all reference
    to the x86 language has been stripped and replaced with
    analysis entirely in C.

    With this new paper even Richard admits that the first N steps of
    UTM-based simulation by a simulating halt decider are necessarily the
    actual behavior of these N steps.

    Right, but not halting in N steps is not the same as not halting ever. Remember, it is the actual machine described by the input that matters,
    not the (partial) simulation done by H.
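
    To see why a bounded simulation cannot by itself establish non-halting,
    consider this small example (N is a hypothetical step budget chosen only
    for illustration): the function halts, but only after exceeding any fixed
    budget a partial simulator might use.

    /* Illustration of "did not halt within N steps" vs. "never halts":
       N is a hypothetical budget for a partial simulator; Late() runs
       past that budget and then halts anyway. */
    #include <stdio.h>

    #define N 1000L                    /* hypothetical simulation budget */

    void Late(void)
    {
        for (volatile long i = 0; i <= N; i++)
            ;                          /* busy for N + 1 iterations      */
        /* falls through here: Late halts after exceeding the budget     */
    }

    int main(void)
    {
        Late();
        printf("Late halted, but only after more than %ld steps\n", N);
        return 0;
    }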


    *Simulating (partial) Halt Deciders Defeat the Halting Problem Proofs* https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs

    Full of ERRORS.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Tue Apr 18 18:30:19 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/18/23 11:58 AM, olcott wrote:
    On 4/18/2023 6:32 AM, Richard Damon wrote:
    On 4/18/23 1:00 AM, olcott wrote:
    A simulating halt decider correctly predicts whether or not its
    correctly simulated input can possibly reach its own final state and
    halt. It does this by correctly recognizing several non-halting behavior patterns in a finite number of steps of correct simulation. Inputs that
    do terminate are simply simulated until they complete.



    Except it doesn't do this for the "pathological" program.

    The "Pathological Program" when built on such a Decider that does give
    an answer, which you say will be non-halting, and then "Correctly
    Simulated" by giving it representation to a UTM, we see that the
    simulation reaches a final state.

    Thus, your H was WRONG to give that answer. And the problem is you have
    added a pattern that isn't always non-halting.

    When a simulating halt decider correctly simulates N steps of its input
    it derives the exact same N steps that a pure UTM would derive because
    it is itself a UTM with extra features.

    But if ISN'T a "UTM" any more, because some of the features you added
    have removed essential features needed for it to be an actual UTM.
    That you make this claim shows you don't actually know what a UTM is.

    This is like saying a NASCAR Racing Car is a Street Legal vehicle,
    since it started as one and just had some extra features added.


    My reviewers cannot show that any of the extra features added to the UTM change the behavior of the simulated input for the first N steps of
    simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it
    (c) Even aborting the simulation after N steps doesn't change the
    first N steps.

    No one claims that it doesn't correctly reproduce the first N steps of
    the behavior, that is a Strawman argument.


    Because of all this we can know that the first N steps of input D
    simulated by simulating halt decider H are the actual behavior that D
    presents to H for these same N steps.

    *computation that halts*… “the Turing machine will halt whenever it
    enters a final state” (Linz:1990:234)

    Right, so we are concerned about the behavior of the ACTUAL machine,
    not a partial simulation of it.
    H(D,D) returns non-halting, but D(D) Halts, so the answer is wrong.


    When we see (after N steps) that D correctly simulated by H cannot
    possibly reach its simulated final state in any finite number of steps
    of correct simulation then we have conclusive proof that D presents non-halting behavior to H.

    But it isn't "Correctly Simulated by H"
    You agreed that the first N steps are correctly simulated.

    It turns out that the non-halting behavior pattern is correctly
    recognized in the first N steps.


    Nope, the pattern you detect isn't a "Non-Halting" pattern, as is shown
    by the fact that D(D) does halt.

    It might show that no possible H could simulate the input to a final
    state, but that isn't the definition of Halting. Halting is strictly
    about the behavior of the machine itself.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Mr Flibble on Tue Apr 18 17:39:33 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/18/2023 4:55 PM, Mr Flibble wrote:
    On 18/04/2023 4:58 pm, olcott wrote:
    On 4/18/2023 6:32 AM, Richard Damon wrote:
    On 4/18/23 1:00 AM, olcott wrote:
    A simulating halt decider correctly predicts whether or not its
    correctly simulated input can possibly reach its own final state and
    halt. It does this by correctly recognizing several non-halting
    behavior
    patterns in a finite number of steps of correct simulation. Inputs that >>>> do terminate are simply simulated until they complete.



    Except t doesn't o this for the "pathological" program.

    The "Pathological Program" when built on such a Decider that does
    give an answer, which you say will be non-halting, and then
    "Correctly Simulated" by giving it representation to a UTM, we see
    that the simulation reaches a final state.

    Thus, your H was WRONG t make the answer. And the problem is you have
    added a pattern that isn't always non-halting.

    When a simulating halt decider correctly simulates N steps of its input >>>> it derives the exact same N steps that a pure UTM would derive because >>>> it is itself a UTM with extra features.

    But if ISN'T a "UTM" any more, because some of the features you added
    have removed essential features needed for it to be an actual UTM.
    That you make this claim shows you don't actually know what a UTM is.

    This is like saying a NASCAR Racing Car is a Street Legal vehicle,
    since it started as one and just had some extra features axded.


    My reviewers cannot show that any of the extra features added to the
    UTM
    change the behavior of the simulated input for the first N steps of
    simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it
    (c) Even aborting the simulation after N steps doesn't change the
    first N steps.

    No one claims that it doesn't correctly reproduce the first N steps
    of the behavior, that is a Strawman argumen.


    Because of all this we can know that the first N steps of input D
    simulated by simulating halt decider H are the actual behavior that D
    presents to H for these same N steps.

    *computation that halts*… “the Turing machine will halt whenever it >>>> enters a final state” (Linz:1990:234)rrr

    Right, so we are concerned about the behavior of the ACTUAL machine,
    not a partial simulation of it.
    H(D,D) returns non-halting, but D(D) Halts, so the answer is wrong.


    When we see (after N steps) that D correctly simulated by H cannot
    possibly reach its simulated final state in any finite number of steps >>>> of correct simulation then we have conclusive proof that D presents
    non-
    halting behavior to H.

    But it isn't "Correctly Simulated by H"
    You agreed that the first N steps are correctly simulated.

    It turns out that the non-halting behavior pattern is correctly
    recognized in the first N steps.

    Your assumption that a program that calls H is non-halting is erroneous:


    My new paper anchors its ideas in actual Turing machines so it is
    unequivocal. The first two pages are only about the Linz Turing
    machine based proof.

    The H/D material is now on a single page and all reference
    to the x86 language has been stripped and replaced with
    analysis entirely in C.

    With this new paper even Richard admits that the first N steps of
    UTM-based simulation by a simulating halt decider are necessarily the
    actual behavior of these N steps.

    *Simulating (partial) Halt Deciders Defeat the Halting Problem Proofs* https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs

    void Px(void (*x)())
    {
        (void) H(x, x);
        return;
    }

    Px halts (it discards the result that H returns); your decider thinks
    that Px is non-halting which is an obvious error due to a design flaw in
    the architecture of your decider.  Only the Flibble Signaling Simulating Halt Decider (SSHD) correctly handles this case.

    /Flibble


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Tue Apr 18 18:13:05 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/18/2023 5:30 PM, Richard Damon wrote:
    On 4/18/23 11:58 AM, olcott wrote:
    On 4/18/2023 6:32 AM, Richard Damon wrote:
    On 4/18/23 1:00 AM, olcott wrote:
    A simulating halt decider correctly predicts whether or not its
    correctly simulated input can possibly reach its own final state and
    halt. It does this by correctly recognizing several non-halting
    behavior
    patterns in a finite number of steps of correct simulation. Inputs that >>>> do terminate are simply simulated until they complete.



    Except t doesn't o this for the "pathological" program.

    The "Pathological Program" when built on such a Decider that does
    give an answer, which you say will be non-halting, and then
    "Correctly Simulated" by giving it representation to a UTM, we see
    that the simulation reaches a final state.

    Thus, your H was WRONG t make the answer. And the problem is you have
    added a pattern that isn't always non-halting.

    When a simulating halt decider correctly simulates N steps of its input >>>> it derives the exact same N steps that a pure UTM would derive because >>>> it is itself a UTM with extra features.

    But if ISN'T a "UTM" any more, because some of the features you added
    have removed essential features needed for it to be an actual UTM.
    That you make this claim shows you don't actually know what a UTM is.

    This is like saying a NASCAR Racing Car is a Street Legal vehicle,
    since it started as one and just had some extra features axded.


    My reviewers cannot show that any of the extra features added to the
    UTM
    change the behavior of the simulated input for the first N steps of
    simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it
    (c) Even aborting the simulation after N steps doesn't change the
    first N steps.

    No one claims that it doesn't correctly reproduce the first N steps
    of the behavior, that is a Strawman argumen.


    Because of all this we can know that the first N steps of input D
    simulated by simulating halt decider H are the actual behavior that D
    presents to H for these same N steps.

    *computation that halts*… “the Turing machine will halt whenever it >>>> enters a final state” (Linz:1990:234)rrr

    Right, so we are concerned about the behavior of the ACTUAL machine,
    not a partial simulation of it.
    H(D,D) returns non-halting, but D(D) Halts, so the answer is wrong.


    When we see (after N steps) that D correctly simulated by H cannot
    possibly reach its simulated final state in any finite number of steps >>>> of correct simulation then we have conclusive proof that D presents
    non-
    halting behavior to H.

    But it isn't "Correctly Simulated by H"
    You agreed that the first N steps are correctly simulated.

    It turns out that the non-halting behavior pattern is correctly
    recognized in the first N steps.


    Nope, the pattern you detect isn't a "Nobn-Halting" pattern, as is shown
    by the fact that D(D) does halt.

    It might show that no possible H could simulate the input to a final
    state, but that isn't the definition of Halting. Halting is strictly
    about the behavior of the machine itself.

    When Ĥ is applied to ⟨Ĥ⟩
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    computation that halts… “the Turing machine will halt whenever it enters
    a final state” (Linz:1990:234)

    Non-halting behavior patterns can be matched in N steps.
    ⟨Ĥ⟩ halting means reaching its simulated final state of ⟨Ĥ.qn⟩ in a finite
    number of steps.

    N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are the actual behavior of this input:
    (a) Ĥ.q0 The input ⟨Ĥ⟩ is copied then transitions to embedded_H
    (b) embedded_H is applied to ⟨Ĥ⟩ ⟨Ĥ⟩ (input and copy) which simulates ⟨Ĥ⟩ applied to ⟨Ĥ⟩
    (c) *which begins at its own simulated ⟨Ĥ.q0⟩ to repeat the process*

    The above N steps prove that ⟨Ĥ⟩ correctly simulated by embedded_H
    could not possibly reach its final state of ⟨Ĥ.qn⟩ in any finite number
    of steps of correct simulation *because ⟨Ĥ⟩ is defined to have*
    *a pathological relationship to embedded_H*

    That a UTM applied to ⟨Ĥ⟩ ⟨Ĥ⟩ halts shows an entirely different sequence
    *because UTM and ⟨Ĥ⟩ ⟨Ĥ⟩ do not have a pathological relationship*


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Tue Apr 18 21:57:56 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/18/2023 9:31 PM, Richard Damon wrote:
    On 4/18/23 7:13 PM, olcott wrote:
    On 4/18/2023 5:30 PM, Richard Damon wrote:
    On 4/18/23 11:58 AM, olcott wrote:
    On 4/18/2023 6:32 AM, Richard Damon wrote:
    On 4/18/23 1:00 AM, olcott wrote:
    A simulating halt decider correctly predicts whether or not its
    correctly simulated input can possibly reach its own final state and >>>>>> halt. It does this by correctly recognizing several non-halting
    behavior
    patterns in a finite number of steps of correct simulation. Inputs >>>>>> that
    do terminate are simply simulated until they complete.



    Except t doesn't o this for the "pathological" program.

    The "Pathological Program" when built on such a Decider that does
    give an answer, which you say will be non-halting, and then
    "Correctly Simulated" by giving it representation to a UTM, we see
    that the simulation reaches a final state.

    Thus, your H was WRONG t make the answer. And the problem is you
    have added a pattern that isn't always non-halting.

    When a simulating halt decider correctly simulates N steps of its
    input
    it derives the exact same N steps that a pure UTM would derive
    because
    it is itself a UTM with extra features.

    But if ISN'T a "UTM" any more, because some of the features you
    added have removed essential features needed for it to be an actual
    UTM. That you make this claim shows you don't actually know what a
    UTM is.

    This is like saying a NASCAR Racing Car is a Street Legal vehicle,
    since it started as one and just had some extra features axded.


    My reviewers cannot show that any of the extra features added to
    the UTM
    change the behavior of the simulated input for the first N steps
    of simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it
    (c) Even aborting the simulation after N steps doesn't change the
    first N steps.

    No one claims that it doesn't correctly reproduce the first N steps
    of the behavior, that is a Strawman argumen.


    Because of all this we can know that the first N steps of input D
    simulated by simulating halt decider H are the actual behavior that D >>>>>> presents to H for these same N steps.

    *computation that halts*… “the Turing machine will halt whenever >>>>>> it enters a final state” (Linz:1990:234)rrr

    Right, so we are concerned about the behavior of the ACTUAL
    machine, not a partial simulation of it.
    H(D,D) returns non-halting, but D(D) Halts, so the answer is wrong.


    When we see (after N steps) that D correctly simulated by H cannot >>>>>> possibly reach its simulated final state in any finite number of
    steps
    of correct simulation then we have conclusive proof that D
    presents non-
    halting behavior to H.

    But it isn't "Correctly Simulated by H"
    You agreed that the first N steps are correctly simulated.

    It turns out that the non-halting behavior pattern is correctly
    recognized in the first N steps.


    Nope, the pattern you detect isn't a "Non-Halting" pattern, as is
    shown by the fact that D(D) does halt.

    It might show that no possible H could simulate the input to a final
    state, but that isn't the definition of Halting. Halting is strictly
    about the behavior of the machine itself.

    When Ĥ is applied to ⟨Ĥ⟩
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    computation that halts… “the Turing machine will halt whenever it enters >> a final state” (Linz:1990:234)

    Right and Ĥ (Ĥ) will reach Ĥ.qn and halt if H (Ĥ) (Ĥ) goes to qn, as it must to be saying that its input is non-halting.

    This is because embedded_H and H must be identical machines, and thus do exactly the same thing when given the same input.


    Non-halting behavior patterns can be matched in N steps
    ⟨Ĥ⟩ Halting is reaching its simulated final state of ⟨Ĥ.qn⟩ in a finite
    number of steps

    Nope, Halting is the MACHINE Ĥ (Ĥ) reaching its final state Ĥ.qn in a finite number of steps.

    You can also use UTM (Ĥ) (Ĥ), which also reaches that final state,
    because it doesn't stop simulating until it reaches a final state, or it
    just keeps simulating.

    H / embedded_H are NOT a UTM, as they don't have that necessary property.


    N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are the actual
    behavior of this input:
    (a) Ĥ.q0 The input ⟨Ĥ⟩ is copied then transitions to embedded_H
    (b) embedded_H is applied to ⟨Ĥ⟩ ⟨Ĥ⟩ (input and copy) which simulates
    ⟨Ĥ⟩ applied to ⟨Ĥ⟩
    (c) *which begins at its own simulated ⟨Ĥ.q0⟩ to repeat the process*

    Except you have defined that H, and thus embedded_H, doesn't do (c), but
    when it sees the attempt to go into embedded_H with the same input
    actually aborts its simulation and goes to Ĥ.qn which causes the machine
    Ĥ to halt.


    embedded_H could do (c) 10,000 times before aborting, which would have to
    be the actual behavior of the actual input, because embedded_H remains in
    pure UTM mode until it aborts.
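
    A minimal sketch of the nesting structure being described here (this is
    only a depth counter with an assumed limit, not a UTM and not olcott's
    embedded_H): each simulated level immediately starts another, and an
    abort verdict is produced only when the assumed depth limit is reached.

    /* Hedged sketch of "abort after a fixed number of nested simulations".
       Sim() is only a depth counter standing in for embedded_H recursively
       simulating ⟨Ĥ⟩ ⟨Ĥ⟩; it is not a UTM and not olcott's embedded_H. */
    #include <stdio.h>

    #define MAX_DEPTH 10000    /* the 10,000 recursions mentioned above */

    static long depth = 0;

    int Sim(void)              /* returns 0 for a "non-halting" verdict */
    {
        if (++depth > MAX_DEPTH)
            return 0;          /* abort: give up and report non-halting   */
        return Sim();          /* each level immediately re-enters itself */
    }

    int main(void)
    {
        printf("verdict after %d nested levels: %d\n", MAX_DEPTH, Sim());
        return 0;
    }

    Whether such an abort justifies a non-halting verdict is exactly what the
    two posters disagree about; the sketch only shows the nesting shape, not
    who is right.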

    How many times does it take for you to understand that ⟨Ĥ⟩ can't
    possibly reach ⟨Ĥ.qn⟩ because of its pathological relationship to embedded_H?


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Tue Apr 18 22:31:55 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/18/23 7:13 PM, olcott wrote:
    On 4/18/2023 5:30 PM, Richard Damon wrote:
    On 4/18/23 11:58 AM, olcott wrote:
    On 4/18/2023 6:32 AM, Richard Damon wrote:
    On 4/18/23 1:00 AM, olcott wrote:
    A simulating halt decider correctly predicts whether or not its
    correctly simulated input can possibly reach its own final state and >>>>> halt. It does this by correctly recognizing several non-halting
    behavior
    patterns in a finite number of steps of correct simulation. Inputs
    that
    do terminate are simply simulated until they complete.



    Except t doesn't o this for the "pathological" program.

    The "Pathological Program" when built on such a Decider that does
    give an answer, which you say will be non-halting, and then
    "Correctly Simulated" by giving it representation to a UTM, we see
    that the simulation reaches a final state.

    Thus, your H was WRONG t make the answer. And the problem is you
    have added a pattern that isn't always non-halting.

    When a simulating halt decider correctly simulates N steps of its
    input
    it derives the exact same N steps that a pure UTM would derive because >>>>> it is itself a UTM with extra features.

    But if ISN'T a "UTM" any more, because some of the features you
    added have removed essential features needed for it to be an actual
    UTM. That you make this claim shows you don't actually know what a
    UTM is.

    This is like saying a NASCAR Racing Car is a Street Legal vehicle,
    since it started as one and just had some extra features axded.


    My reviewers cannot show that any of the extra features added to
    the UTM
    change the behavior of the simulated input for the first N steps of
    simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it
    (c) Even aborting the simulation after N steps doesn't change the
    first N steps.

    No one claims that it doesn't correctly reproduce the first N steps
    of the behavior, that is a Strawman argumen.


    Because of all this we can know that the first N steps of input D
    simulated by simulating halt decider H are the actual behavior that D >>>>> presents to H for these same N steps.

    *computation that halts*… “the Turing machine will halt whenever it >>>>> enters a final state” (Linz:1990:234)rrr

    Right, so we are concerned about the behavior of the ACTUAL machine,
    not a partial simulation of it.
    H(D,D) returns non-halting, but D(D) Halts, so the answer is wrong.


    When we see (after N steps) that D correctly simulated by H cannot
    possibly reach its simulated final state in any finite number of steps >>>>> of correct simulation then we have conclusive proof that D presents
    non-
    halting behavior to H.

    But it isn't "Correctly Simulated by H"
    You agreed that the first N steps are correctly simulated.

    It turns out that the non-halting behavior pattern is correctly
    recognized in the first N steps.


    Nope, the pattern you detect isn't a "Nobn-Halting" pattern, as is
    shown by the fact that D(D) does halt.

    It might show that no possible H could simulate the input to a final
    state, but that isn't the definition of Halting. Halting is strictly
    about the behavior of the machine itself.

    When Ĥ is applied to ⟨Ĥ⟩
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    computation that halts… “the Turing machine will halt whenever it enters a final state” (Linz:1990:234)

    Right and Ĥ (Ĥ) will reach Ĥ.qn and halt if H (Ĥ) (Ĥ) goes to qn, as it must to be saying that its input is non-halting.

    This is because embedded_H and H must be identical machines, and thus do exactly the same thing when given the same input.


    Non-halting behavior patterns can be matched in N steps
    ⟨Ĥ⟩ Halting is reaching its simulated final state of ⟨Ĥ.qn⟩ in a finite
    number of steps

    Nope, Halting is the MACHINE Ĥ (Ĥ) reaching its final state Ĥ.qn in a
    finite number of steps.

    You can also use UTM (Ĥ) (Ĥ), which also reaches that final state,
    because it doesn't stop simulating until it reaches a final state, or it
    just keeps simulating.

    H / embedded_H are NOT a UTM, as they don't have that necessary property.


    N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are the actual behavior of this input:
    (a) Ĥ.q0 The input ⟨Ĥ⟩ is copied then transitions to embedded_H
    (b) embedded_H is applied to ⟨Ĥ⟩ ⟨Ĥ⟩ (input and copy) which simulates
    ⟨Ĥ⟩ applied to ⟨Ĥ⟩
    (c) *which begins at its own simulated ⟨Ĥ.q0⟩ to repeat the process*

    Except you have defined that H, and thus embedded_H, doesn't do (c), but
    when it sees the attempt to go into embedded_H with the same input
    actually aborts its simulation and goes to Ĥ.qn which causes the machine
    Ĥ to halt.




    The above N steps proves that ⟨Ĥ⟩ correctly simulated by embedded_H could not possibly reach the final state of ⟨Ĥ.q0⟩ in any finite number of steps of correct simulation *because ⟨Ĥ⟩ is defined to have*
    *a pathological relationship to embedded_H*

    Nope, H is presuming INCORRECTLY that embedded_H is a UTM, and not a
    copy of itself.


    That a UTM applied to ⟨Ĥ⟩ ⟨Ĥ⟩ halts shows an entirely different sequence
    *because UTM and ⟨Ĥ⟩ ⟨Ĥ⟩ do not have a pathological relationship*


    Nope, the correct simulation of the input is the correct simulation of
    the input and matches the actual behavior of the machine the input
    represents.

    H does NOT do an actual "Correct Simulation" but only a PARTIAL
    simulation of only N steps, which doesn't prove non-halting behavior.

    Your mind is stuck in a pathological loop, because you don't seem to
    understand the actual basics of Turing Machines.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Tue Apr 18 23:10:42 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/18/23 10:57 PM, olcott wrote:
    On 4/18/2023 9:31 PM, Richard Damon wrote:
    On 4/18/23 7:13 PM, olcott wrote:
    On 4/18/2023 5:30 PM, Richard Damon wrote:
    On 4/18/23 11:58 AM, olcott wrote:
    On 4/18/2023 6:32 AM, Richard Damon wrote:
    On 4/18/23 1:00 AM, olcott wrote:
    A simulating halt decider correctly predicts whether or not its
    correctly simulated input can possibly reach its own final state and >>>>>>> halt. It does this by correctly recognizing several non-halting
    behavior
    patterns in a finite number of steps of correct simulation.
    Inputs that
    do terminate are simply simulated until they complete.



    Except t doesn't o this for the "pathological" program.

    The "Pathological Program" when built on such a Decider that does
    give an answer, which you say will be non-halting, and then
    "Correctly Simulated" by giving it representation to a UTM, we see >>>>>> that the simulation reaches a final state.

    Thus, your H was WRONG t make the answer. And the problem is you
    have added a pattern that isn't always non-halting.

    When a simulating halt decider correctly simulates N steps of its >>>>>>> input
    it derives the exact same N steps that a pure UTM would derive
    because
    it is itself a UTM with extra features.

    But if ISN'T a "UTM" any more, because some of the features you
    added have removed essential features needed for it to be an
    actual UTM. That you make this claim shows you don't actually know >>>>>> what a UTM is.

    This is like saying a NASCAR Racing Car is a Street Legal vehicle, >>>>>> since it started as one and just had some extra features axded.


    My reviewers cannot show that any of the extra features added to >>>>>>> the UTM
    change the behavior of the simulated input for the first N steps >>>>>>> of simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it
    (c) Even aborting the simulation after N steps doesn't change the >>>>>>> first N steps.

    No one claims that it doesn't correctly reproduce the first N
    steps of the behavior, that is a Strawman argumen.


    Because of all this we can know that the first N steps of input D >>>>>>> simulated by simulating halt decider H are the actual behavior
    that D
    presents to H for these same N steps.

    *computation that halts*… “the Turing machine will halt whenever >>>>>>> it enters a final state” (Linz:1990:234)rrr

    Right, so we are concerned about the behavior of the ACTUAL
    machine, not a partial simulation of it.
    H(D,D) returns non-halting, but D(D) Halts, so the answer is wrong. >>>>>>

    When we see (after N steps) that D correctly simulated by H cannot >>>>>>> possibly reach its simulated final state in any finite number of >>>>>>> steps
    of correct simulation then we have conclusive proof that D
    presents non-
    halting behavior to H.

    But it isn't "Correctly Simulated by H"
    You agreed that the first N steps are correctly simulated.

    It turns out that the non-halting behavior pattern is correctly
    recognized in the first N steps.


    Nope, the pattern you detect isn't a "Nobn-Halting" pattern, as is
    shown by the fact that D(D) does halt.

    It might show that no possible H could simulate the input to a final
    state, but that isn't the definition of Halting. Halting is strictly
    about the behavior of the machine itself.

    When Ĥ is applied to ⟨Ĥ⟩
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    computation that halts… “the Turing machine will halt whenever it enters
    a final state” (Linz:1990:234)

    Right and Ĥ (Ĥ) will reach Ĥ.qn and halt if H (Ĥ) (Ĥ) goes to qn, as
    it must to be saying that its input is non-halting.

    This is because embedded_H and H must be identical machines, and thus
    do exactly the same thing when given the same input.


    Non-halting behavior patterns can be matched in N steps
    ⟨Ĥ⟩ Halting is reaching its simulated final state of ⟨Ĥ.qn⟩ in a finite
    number of steps

    Nope, Halting is the MACHINE Ĥ (Ĥ) reaching its final state Ĥ.qn in a
    finite number of steps.

    You can also use UTM (Ĥ) (Ĥ), which also reaches that final stste,
    because it doesn't stop simulating until it reaches a final state, or
    it just keeps simulating.

    H / embedded_H are NOT a UTM, as they don't have that necessary property.


    N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are the actual
    behavior of this input:
    (a) Ĥ.q0 The input ⟨Ĥ⟩ is copied then transitions to embedded_H
    (b) embedded_H is applied to ⟨Ĥ⟩ ⟨Ĥ⟩ (input and copy) which simulates
    ⟨Ĥ⟩ applied to ⟨Ĥ⟩
    (c) *which begins at its own simulated ⟨Ĥ.q0⟩ to repeat the process* >>
    Except you have defined that H, and thus embeded_H doesn't do (c), but
    when it sees the attempted to go into embedded_H with the same input
    actually aborts its simulation and goes to Ĥ.qn which causes the
    machine Ĥ to halt.


    embedded_H could do (c) 10,000 times before aborting which would have to
    be the actual behavior of the actual input because embedded_H remains in
    pure UTM mode until it aborts.

    No such thing. UTM isn't a "Mode" but an identity.

    if embedded_H aborts its simulation, it NEVER was a UTM. PERIOD.

    It might be in "Simulation" mode, but it is incorrect think of it as
    actually being a UTM, since it isn't.

    That would be like saying you are in "Immortal" mode, until you die.

    An Immortal can't die, just like a UTM won't stop simulating until it
    reaches a final state.


    How many times does it take for you to understand that ⟨Ĥ⟩ can't possibly reach ⟨Ĥ.qn⟩ because of its pathological relationship to embedded_H ?


    Then why does it? Either you are lying that embedded_H is an exact copy
    of H, or you are lying that H (Ĥ) (Ĥ) goes to Qn, or Ĥ (Ĥ) goes to Ĥ.qn.

    There is no other option.

    You don't seem to understand that programs do exactly what they are
    programmed to do, but not necessarily what you intend them to do.

    embedded_H doesn't simulate until it gets the right answer, but does
    exactly what H does.

    So, if H aborts the first time it sees Ĥ get into embedded_H, then so
    does embedded_H, which IS the behavior you claim for the machine H that
    gives the "right" answer,


    You are just showing how little you understand about what you are talking about.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Tue Apr 18 22:48:17 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/18/2023 10:35 PM, Richard Damon wrote:
    On 4/18/23 11:21 PM, olcott wrote:
    On 4/18/2023 10:10 PM, Richard Damon wrote:
    On 4/18/23 10:57 PM, olcott wrote:
    On 4/18/2023 9:31 PM, Richard Damon wrote:
    On 4/18/23 7:13 PM, olcott wrote:
    On 4/18/2023 5:30 PM, Richard Damon wrote:
    On 4/18/23 11:58 AM, olcott wrote:
    On 4/18/2023 6:32 AM, Richard Damon wrote:
    On 4/18/23 1:00 AM, olcott wrote:
    A simulating halt decider correctly predicts whether or not its >>>>>>>>>> correctly simulated input can possibly reach its own final >>>>>>>>>> state and
    halt. It does this by correctly recognizing several
    non-halting behavior
    patterns in a finite number of steps of correct simulation. >>>>>>>>>> Inputs that
    do terminate are simply simulated until they complete.



    Except t doesn't o this for the "pathological" program.

    The "Pathological Program" when built on such a Decider that >>>>>>>>> does give an answer, which you say will be non-halting, and
    then "Correctly Simulated" by giving it representation to a
    UTM, we see that the simulation reaches a final state.

    Thus, your H was WRONG t make the answer. And the problem is >>>>>>>>> you have added a pattern that isn't always non-halting.

    When a simulating halt decider correctly simulates N steps of >>>>>>>>>> its input
    it derives the exact same N steps that a pure UTM would derive >>>>>>>>>> because
    it is itself a UTM with extra features.

    But if ISN'T a "UTM" any more, because some of the features you >>>>>>>>> added have removed essential features needed for it to be an >>>>>>>>> actual UTM. That you make this claim shows you don't actually >>>>>>>>> know what a UTM is.

    This is like saying a NASCAR Racing Car is a Street Legal
    vehicle, since it started as one and just had some extra
    features axded.


    My reviewers cannot show that any of the extra features added >>>>>>>>>> to the UTM
    change the behavior of the simulated input for the first N >>>>>>>>>> steps of simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it >>>>>>>>>> (c) Even aborting the simulation after N steps doesn't change >>>>>>>>>> the first N steps.

    No one claims that it doesn't correctly reproduce the first N >>>>>>>>> steps of the behavior, that is a Strawman argumen.


    Because of all this we can know that the first N steps of input D >>>>>>>>>> simulated by simulating halt decider H are the actual behavior >>>>>>>>>> that D
    presents to H for these same N steps.

    *computation that halts*… “the Turing machine will halt >>>>>>>>>> whenever it enters a final state” (Linz:1990:234)rrr

    Right, so we are concerned about the behavior of the ACTUAL
    machine, not a partial simulation of it.
    H(D,D) returns non-halting, but D(D) Halts, so the answer is >>>>>>>>> wrong.


    When we see (after N steps) that D correctly simulated by H >>>>>>>>>> cannot
    possibly reach its simulated final state in any finite number >>>>>>>>>> of steps
    of correct simulation then we have conclusive proof that D >>>>>>>>>> presents non-
    halting behavior to H.

    But it isn't "Correctly Simulated by H"
    You agreed that the first N steps are correctly simulated.

    It turns out that the non-halting behavior pattern is correctly >>>>>>>> recognized in the first N steps.


    Nope, the pattern you detect isn't a "Nobn-Halting" pattern, as
    is shown by the fact that D(D) does halt.

    It might show that no possible H could simulate the input to a
    final state, but that isn't the definition of Halting. Halting is >>>>>>> strictly about the behavior of the machine itself.

    When Ĥ is applied to ⟨Ĥ⟩
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    computation that halts… “the Turing machine will halt whenever it >>>>>> enters
    a final state” (Linz:1990:234)

    Right and Ĥ (Ĥ) will reach Ĥ.qn and halt if H (Ĥ) (Ĥ) goes to qn, >>>>> as it must to be saying that its input is non-halting.

    This is because embedded_H and H must be identical machines, and
    thus do exactly the same thing when given the same input.


    Non-halting behavior patterns can be matched in N steps
    ⟨Ĥ⟩ Halting is reaching its simulated final state of ⟨Ĥ.qn⟩ in a
    finite
    number of steps

    Nope, Halting is the MACHINE Ĥ (Ĥ) reaching its final state Ĥ.qn in >>>>> a finite number of steps.

    You can also use UTM (Ĥ) (Ĥ), which also reaches that final stste, >>>>> because it doesn't stop simulating until it reaches a final state,
    or it just keeps simulating.

    H / embedded_H are NOT a UTM, as they don't have that necessary
    property.


    N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are the actual >>>>>> behavior of this input:
    (a) Ĥ.q0 The input ⟨Ĥ⟩ is copied then transitions to embedded_H >>>>>> (b) embedded_H is applied to ⟨Ĥ⟩ ⟨Ĥ⟩ (input and copy) which >>>>>> simulates ⟨Ĥ⟩ applied to ⟨Ĥ⟩
    (c) *which begins at its own simulated ⟨Ĥ.q0⟩ to repeat the process*

    Except you have defined that H, and thus embeded_H doesn't do (c),
    but when it sees the attempted to go into embedded_H with the same
    input actually aborts its simulation and goes to Ĥ.qn which causes
    the machine Ĥ to halt.


    embedded_H could do (c) 10,000 times before aborting which would
    have to
    be the actual behavior of the actual input because embedded_H
    remains in
    pure UTM mode until it aborts.

    No such thing. UTM isn't a "Mode" but an identity.

    if embedded_H aborts its simulation, it NEVER was a UTM. PERIOD.
    But that is flat out not the truth. The input simulated by embedded_H
    necessarily must have exact same behavior as simulated by a pure UTM
    until the simulation of this input is aborted because aborting the
    simulation of its input is the only one of three features added to a UTM
    that changes the behavior of its input relative to a pure UTM.

    Which makes it NOT a UTM, so embedded_H doesn't actually act like a UTM.

    It MUST act like H, or you have LIED about following the requirement for building Ĥ.


    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it.
    (b) Even aborting the simulation after N steps doesn't change the
    first N steps.

    N steps could be 10,000 recursive simulations.


    Right, and then one more recursive simulation by a REAL UTM past that
    point will see the outer embedded_H abort its simulation, go to Qn and Ĥ will then halt, showing embedded_H was wrong to say it couldn't.


    *You keep slip-sliding with the fallacy of equivocation error*
    The actual simulated input ⟨Ĥ⟩, from which embedded_H must compute its
    mapping, never reaches its simulated final state of ⟨Ĥ.qn⟩ even after
    10,000 necessarily correct recursive simulations, because ⟨Ĥ⟩ is defined
    to have a pathological relationship to embedded_H.


    Aborted simulations don't, by themselves, show non-halting behavior.

    The only case that this doesn't work is if embedded_H actually never
    does abort, but then H can't either, so H doesn't answer, and fails to
    be a decider.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Tue Apr 18 23:35:55 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/18/23 11:21 PM, olcott wrote:
    On 4/18/2023 10:10 PM, Richard Damon wrote:
    On 4/18/23 10:57 PM, olcott wrote:
    On 4/18/2023 9:31 PM, Richard Damon wrote:
    On 4/18/23 7:13 PM, olcott wrote:
    On 4/18/2023 5:30 PM, Richard Damon wrote:
    On 4/18/23 11:58 AM, olcott wrote:
    On 4/18/2023 6:32 AM, Richard Damon wrote:
    On 4/18/23 1:00 AM, olcott wrote:
[...]

    But it isn't "Correctly Simulated by H"
You agreed that the first N steps are correctly simulated.

It turns out that the non-halting behavior pattern is correctly
recognized in the first N steps.


Nope, the pattern you detect isn't a "Non-Halting" pattern, as is
shown by the fact that D(D) does halt.

It might show that no possible H could simulate the input to a
final state, but that isn't the definition of Halting. Halting is
strictly about the behavior of the machine itself.

When Ĥ is applied to ⟨Ĥ⟩
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

computation that halts… “the Turing machine will halt whenever it
enters a final state” (Linz:1990:234)

Right, and Ĥ (Ĥ) will reach Ĥ.qn and halt if H (Ĥ) (Ĥ) goes to qn,
as it must to be saying that its input is non-halting.

This is because embedded_H and H must be identical machines, and
thus do exactly the same thing when given the same input.


Non-halting behavior patterns can be matched in N steps.
⟨Ĥ⟩ halting is reaching its simulated final state of ⟨Ĥ.qn⟩ in a
finite number of steps.

Nope, Halting is the MACHINE Ĥ (Ĥ) reaching its final state Ĥ.qn in
a finite number of steps.

You can also use UTM (Ĥ) (Ĥ), which also reaches that final state,
because it doesn't stop simulating until it reaches a final state,
or it just keeps simulating.

H / embedded_H are NOT a UTM, as they don't have that necessary
property.


N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are the actual
behavior of this input:
(a) Ĥ.q0 The input ⟨Ĥ⟩ is copied then transitions to embedded_H
(b) embedded_H is applied to ⟨Ĥ⟩ ⟨Ĥ⟩ (input and copy) which
simulates ⟨Ĥ⟩ applied to ⟨Ĥ⟩
(c) *which begins at its own simulated ⟨Ĥ.q0⟩ to repeat the process*

Except you have defined that H, and thus embedded_H, doesn't do (c);
when it sees the attempt to go into embedded_H with the same input it
actually aborts its simulation and goes to Ĥ.qn, which causes the
machine Ĥ to halt.


embedded_H could do (c) 10,000 times before aborting, which would have
to be the actual behavior of the actual input because embedded_H
remains in pure UTM mode until it aborts.

    No such thing. UTM isn't a "Mode" but an identity.

    if embedded_H aborts its simulation, it NEVER was a UTM. PERIOD.
But that is flat out not the truth. The input simulated by embedded_H
necessarily must have exactly the same behavior as when simulated by a
pure UTM until the simulation of this input is aborted, because aborting
the simulation of its input is the only one of the three features added
to a UTM that changes the behavior of its input relative to a pure UTM.

    Which makes it NOT a UTM, so embedded_H doesn't actually act like a UTM.

    It MUST act like H, or you have LIED about following the requirement for building Ĥ.


    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it.
(c) Even aborting the simulation after N steps doesn't change the first
    N steps.

    N steps could be 10,000 recursive simulations.


Right, and then one more recursive simulation by a REAL UTM past that
point will see the outer embedded_H abort its simulation, go to Ĥ.qn, and
Ĥ will then halt, showing that embedded_H was wrong to say it couldn't.

    Aborted simulations don't, by themselves, show non-halting behavior.

The only case in which this doesn't work is if embedded_H actually never
aborts; but then H can't abort either, so H doesn't answer, and fails to
be a decider.
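A minimal C sketch of the situation being described above (purely
illustrative; H, D, and the inside_H flag are toy stand-ins, not anyone's
actual decider, and the "simulation" is modeled by direct execution). It
assumes an H that answers "non-halting" for its pathological input:

    #include <stdio.h>
    #include <stdbool.h>

    typedef void (*prog)(void);

    static bool inside_H = false;   /* set while H is "simulating" its input */

    /* Toy stand-in for a simulating halt decider: a nested invocation from
       within its own "simulation" is treated as the non-halting pattern,
       so H reports false (non-halting) for this input in every case.       */
    static bool H(prog p, prog x)
    {
        (void)x;                    /* the "input description" is unused     */
        if (inside_H)
            return false;           /* inner H: pattern matched, report false */
        inside_H = true;
        p();                        /* toy "simulation": just run the input  */
        inside_H = false;
        return false;               /* outer H also reports non-halting      */
    }

    static void D(void)             /* the pathological input built on H     */
    {
        if (H(D, D))
            for (;;) { }            /* do the opposite of whatever H reports */
    }

    int main(void)
    {
        D();                        /* the directly executed D(D) returns    */
        puts("D(D) halted although H(D,D) reports non-halting");
        return 0;
    }

The directly executed D(D) halts even though H reports non-halting, which
is the disagreement both sides keep returning to.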

  • From olcott@21:1/5 to Richard Damon on Tue Apr 18 22:21:33 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/18/2023 10:10 PM, Richard Damon wrote:
    On 4/18/23 10:57 PM, olcott wrote:
    On 4/18/2023 9:31 PM, Richard Damon wrote:
    On 4/18/23 7:13 PM, olcott wrote:
    On 4/18/2023 5:30 PM, Richard Damon wrote:
    On 4/18/23 11:58 AM, olcott wrote:
    On 4/18/2023 6:32 AM, Richard Damon wrote:
    On 4/18/23 1:00 AM, olcott wrote:
[...]

    No such thing. UTM isn't a "Mode" but an identity.

    if embedded_H aborts its simulation, it NEVER was a UTM. PERIOD.
But that is flat out not the truth. The input simulated by embedded_H
necessarily must have exactly the same behavior as when simulated by a
pure UTM until the simulation of this input is aborted, because aborting
the simulation of its input is the only one of the three features added
to a UTM that changes the behavior of its input relative to a pure UTM.

    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it.
(c) Even aborting the simulation after N steps doesn't change the first
    N steps.

    N steps could be 10,000 recursive simulations.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

  • From Richard Damon@21:1/5 to olcott on Wed Apr 19 07:14:07 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/18/23 11:48 PM, olcott wrote:

    *You keep slip sliding with the fallacy of equivocation error*
    The actual simulated input: ⟨Ĥ⟩ that embedded_H must compute its mapping from never reaches its simulated final state of ⟨Ĥ.qn⟩ even after 10,000 necessarily correct recursive simulations because ⟨Ĥ⟩ is defined to have a pathological relationship to embedded_H.


And YOU keep on falling into your Strawman error. The question is NOT
    what does the "simulation by H" show, but what is the actual behavior of
    the actual machine the input represents.


    H (Ĥ) (Ĥ) is asking about the behavior of Ĥ (Ĥ)

    PERIOD
    DEFINITION.

    When you are looking at the wrong question, you tend to get the wrong
    answer.

    Looking at the definition of H:

    WM is the description of machine M

H WM w -> qy if M applied to w will halt, and to qn if M applied to w will never halt.


    Nothing about "H's simulation of the input", just the actual behavior of
    the machine described.

You are stuck in your ignorance and think that because H is defined as a simulator, that somehow changes the requirements; it doesn't.

  • From olcott@21:1/5 to Richard Damon on Wed Apr 19 10:05:50 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/19/2023 6:14 AM, Richard Damon wrote:
    On 4/18/23 11:48 PM, olcott wrote:

    *You keep slip sliding with the fallacy of equivocation error*
    The actual simulated input: ⟨Ĥ⟩ that embedded_H must compute its mapping
    from never reaches its simulated final state of ⟨Ĥ.qn⟩ even after 10,000
    necessarily correct recursive simulations because ⟨Ĥ⟩ is defined to have
    a pathological relationship to embedded_H.


And YOU keep on falling into your Strawman error. The question is NOT
    what does the "simulation by H" show, but what is the actual behavior of
    the actual machine the input represents.



    When a simulating halt decider correctly simulates N steps of its input
    it derives the exact same N steps that a pure UTM would derive because
    it is itself a UTM with extra features.

    My reviewers cannot show that any of the extra features added to the UTM
    change the behavior of the simulated input for the first N steps of
    simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it
    (c) Even aborting the simulation after N steps doesn't change the first
    N steps.

    The actual behavior that the actual input: ⟨Ĥ⟩ represents is the
    behavior of the simulation of N steps by embedded_H because embedded_H
    has the exact same behavior as a UTM for these first N steps, and you
    already agreed with this.

    Did you quit believing in UTMs?
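A small C sketch of the uncontested part of this claim (toy code, not
anyone's actual decider: the "machine" here is just an integer
configuration and an arbitrary step function). It shows that a simulator
that watches each step and stops after N steps produces exactly the same
first N steps as one that keeps going:

    #include <assert.h>
    #include <stdio.h>

    /* Toy "machine": its configuration is one int; a step maps c -> 3*c + 1.
       The particular rule is irrelevant; any pure step function would do.  */
    static int step(int c) { return 3 * c + 1; }

    /* Unrestricted simulator: records 'total' consecutive configurations.  */
    static void simulate(int c, int total, int trace[])
    {
        for (int i = 0; i < total; i++) {
            trace[i] = c;
            c = step(c);
        }
    }

    /* Watching / aborting simulator: identical stepping, but it inspects
       each configuration and gives up after 'limit' steps.                 */
    static int simulate_watched(int c, int limit, int trace[])
    {
        for (int i = 0; i < limit; i++) {
            trace[i] = c;       /* watching the configuration ...           */
            c = step(c);        /* ... does not alter what step() produces  */
        }
        return limit;           /* aborting here leaves the trace so far    */
    }

    int main(void)
    {
        enum { N = 10, TOTAL = 100 };
        int full[TOTAL], partial[N];

        simulate(5, TOTAL, full);
        int n = simulate_watched(5, N, partial);

        for (int i = 0; i < n; i++)
            assert(full[i] == partial[i]);  /* first N steps agree exactly  */

        puts("first N steps of the watched run match the unrestricted run");
        return 0;
    }

This only illustrates the point that nobody in the thread disputes: the
first N steps agree. It says nothing about the steps after N, which is
where the disagreement lies.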




    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

  • From Mr Flibble@21:1/5 to olcott on Wed Apr 19 19:47:27 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 18/04/2023 11:39 pm, olcott wrote:
    On 4/18/2023 4:55 PM, Mr Flibble wrote:
    On 18/04/2023 4:58 pm, olcott wrote:
    On 4/18/2023 6:32 AM, Richard Damon wrote:
    On 4/18/23 1:00 AM, olcott wrote:
[...]

    Your assumption that a program that calls H is non-halting is erroneous:


My new paper anchors its ideas in actual Turing machines so it is
unequivocal. The first two pages are only about the Linz Turing
machine based proof.

The H/D material is now on a single page and all reference
to the x86 language has been stripped and replaced with
analysis entirely in C.

With this new paper even Richard admits that the first N steps of
UTM-based simulation by a simulating halt decider are necessarily the
actual behavior of these N steps.

*Simulating (partial) Halt Deciders Defeat the Halting Problem Proofs*
https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs

    void Px(void (*x)())
    {
         (void) H(x, x);
         return;
    }

    Px halts (it discards the result that H returns); your decider thinks
    that Px is non-halting which is an obvious error due to a design flaw
    in the architecture of your decider.  Only the Flibble Signaling
    Simulating Halt Decider (SSHD) correctly handles this case.

Nope. For H to be a halt decider it must return a halt decision to its
caller in finite time, and as Px discards this result and exits, Px
ALWAYS halts. Given that your H doesn't do this and instead returns a
result of non-halting for Px, your halt decider is invalid. Only the
Flibble Signaling Simulating Halt Decider (SSHD) is a solution to the
halting problem.

    /Flibble
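A short C illustration of the property being relied on here; the body of
H below is a deliberately empty placeholder declared in the same style as
the Px above (it is not the decider under discussion, and the int return
type is assumed for illustration), because the only thing Px's halting
depends on is that H returns at all:

    #include <stdio.h>

    /* Placeholder H: what it computes is irrelevant to Px; the only
       property Px relies on is that H returns some value in finite time. */
    int H(void (*p)(), void (*x)())
    {
        (void)p; (void)x;
        return 0;               /* could just as well return 1             */
    }

    void Px(void (*x)())
    {
        (void) H(x, x);         /* result discarded                        */
        return;                 /* so Px halts whenever H returns at all   */
    }

    int main(void)
    {
        Px(Px);
        puts("Px(Px) halted");
        return 0;
    }

Whether a simulating H may instead refuse to return to Px is the point
disputed in the following posts.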

  • From olcott@21:1/5 to Mr Flibble on Wed Apr 19 14:39:59 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/19/2023 1:47 PM, Mr Flibble wrote:
    On 18/04/2023 11:39 pm, olcott wrote:
    On 4/18/2023 4:55 PM, Mr Flibble wrote:
    On 18/04/2023 4:58 pm, olcott wrote:
    On 4/18/2023 6:32 AM, Richard Damon wrote:
    On 4/18/23 1:00 AM, olcott wrote:
[...]

    Nope. For H to be a halt decider it must return a halt decision to its
    caller in finite time

Although H must always return to some caller, H is not allowed to return
to any caller that essentially calls H in infinite recursion.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

  • From olcott@21:1/5 to Mr Flibble on Wed Apr 19 16:10:39 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/19/2023 3:32 PM, Mr Flibble wrote:
    On 19/04/2023 8:39 pm, olcott wrote:
    On 4/19/2023 1:47 PM, Mr Flibble wrote:
    On 18/04/2023 11:39 pm, olcott wrote:
    On 4/18/2023 4:55 PM, Mr Flibble wrote:
    On 18/04/2023 4:58 pm, olcott wrote:
    On 4/18/2023 6:32 AM, Richard Damon wrote:
    On 4/18/23 1:00 AM, olcott wrote:
[...]

    Nope. For H to be a halt decider it must return a halt decision to
    its caller in finite time

    Although H must always return to some caller H is not allowed to return
    to any caller that essentially calls H in infinite recursion.

    The Flibble Signaling Simulating Halt Decider (SSHD) does not have any infinite recursion thereby proving that

    It overrode that behavior that was specified by the machine code for Px.

such recursion is not a
necessary feature of SHDs invoked from the program being analyzed; the
infinite recursion in your H is present because your H has a critical
design flaw.

    /Flibble

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

  • From Mr Flibble@21:1/5 to olcott on Wed Apr 19 21:32:56 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 19/04/2023 8:39 pm, olcott wrote:
    On 4/19/2023 1:47 PM, Mr Flibble wrote:
    On 18/04/2023 11:39 pm, olcott wrote:
    On 4/18/2023 4:55 PM, Mr Flibble wrote:
    On 18/04/2023 4:58 pm, olcott wrote:
    On 4/18/2023 6:32 AM, Richard Damon wrote:
    On 4/18/23 1:00 AM, olcott wrote:
[...]

    Nope. For H to be a halt decider it must return a halt decision to its
    caller in finite time

    Although H must always return to some caller H is not allowed to return
    to any caller that essentially calls H in infinite recursion.

The Flibble Signaling Simulating Halt Decider (SSHD) does not have any
infinite recursion, thereby proving that such recursion is not a
necessary feature of SHDs invoked from the program being analyzed; the
infinite recursion in your H is present because your H has a critical
design flaw.

    /Flibble

  • From Mr Flibble@21:1/5 to olcott on Wed Apr 19 22:14:36 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 19/04/2023 10:10 pm, olcott wrote:
    On 4/19/2023 3:32 PM, Mr Flibble wrote:
    On 19/04/2023 8:39 pm, olcott wrote:
    On 4/19/2023 1:47 PM, Mr Flibble wrote:
    On 18/04/2023 11:39 pm, olcott wrote:
    On 4/18/2023 4:55 PM, Mr Flibble wrote:
    On 18/04/2023 4:58 pm, olcott wrote:
    On 4/18/2023 6:32 AM, Richard Damon wrote:
    On 4/18/23 1:00 AM, olcott wrote:
[...]

    The Flibble Signaling Simulating Halt Decider (SSHD) does not have any
    infinite recursion thereby proving that

    It overrode that behavior that was specified by the machine code for Px.

Nope. Your SHD is not a halt decider, as it has a critical design flaw:
it doesn't correctly report that Px halts.

    /Flibble.

  • From Richard Damon@21:1/5 to olcott on Wed Apr 19 18:49:20 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/19/23 11:05 AM, olcott wrote:
    On 4/19/2023 6:14 AM, Richard Damon wrote:
    On 4/18/23 11:48 PM, olcott wrote:

    *You keep slip sliding with the fallacy of equivocation error*
    The actual simulated input: ⟨Ĥ⟩ that embedded_H must compute its mapping
    from never reaches its simulated final state of ⟨Ĥ.qn⟩ even after 10,000
    necessarily correct recursive simulations because ⟨Ĥ⟩ is defined to have
    a pathological relationship to embedded_H.


And YOU keep on falling into your Strawman error. The question is NOT
    what does the "simulation by H" show, but what is the actual behavior
    of the actual machine the input represents.



    When a simulating halt decider correctly simulates N steps of its input
    it derives the exact same N steps that a pure UTM would derive because
    it is itself a UTM with extra features.


No, it ISN'T a UTM because it fails to meet the definition of a UTM.

    You are just proving that you are a pathological liar that doesn't know
    what he is talking about.

    My reviewers cannot show that any of the extra features added to the UTM change the behavior of the simulated input for the first N steps of simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it
    (c) Even aborting the simulation after N steps doesn't change the first
    N steps.

    Which don't matter, as the question


    The actual behavior that the actual input: ⟨Ĥ⟩ represents is the behavior of the simulation of N steps by embedded_H because embedded_H
    has the exact same behavior as a UTM for these first N steps, and you
    already agreed with this.

    No, the actual behavior of the input is what the MACHINE Ĥ applied to
    (Ĥ) does. You are just proving even more that you don't understand what
    you are talking about and are just a pathological liar.

Please show an actual credible reference that supports your idea that a
Halt Decider gets to use the fact that it can't simulate the input to a
final state to allow it to say the input is non-halting.

CREDIBLE, not your own words, or words you have tricked someone into
agreeing to without understanding your twisted interpretation of them.



    Did you quit believing in UTMs?


    Nope, are you going to learn what a UTM actually is?

    Remember UTM (Ĥ) (Ĥ) shows us the behavior of Ĥ (Ĥ) and that is Halting,
    so the actual behavior of a "Correct Simulation" of the input to H is
    Halting.

    That H gets something different shows that it doesn't actually do a
    "Correct Simulation" but only simulated for N steps, and then did some
    unsound logic.

    YOU FAIL.

  • From olcott@21:1/5 to Mr Flibble on Wed Apr 19 17:52:32 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/19/2023 4:14 PM, Mr Flibble wrote:
    On 19/04/2023 10:10 pm, olcott wrote:
    On 4/19/2023 3:32 PM, Mr Flibble wrote:
    On 19/04/2023 8:39 pm, olcott wrote:
    On 4/19/2023 1:47 PM, Mr Flibble wrote:
    On 18/04/2023 11:39 pm, olcott wrote:
    On 4/18/2023 4:55 PM, Mr Flibble wrote:
    On 18/04/2023 4:58 pm, olcott wrote:
    On 4/18/2023 6:32 AM, Richard Damon wrote:
    On 4/18/23 1:00 AM, olcott wrote:
[...]

    It overrode that behavior that was specified by the machine code for Px.

Nope. Your SHD is not a halt decider, as

I was not even talking about my SHD; I was talking about how your
program does its simulation incorrectly.

My new write-up proves that my Turing-machine based SHD necessarily must
simulate the first N steps of its input correctly, because for the first
N steps embedded_H <is> a pure UTM that can't possibly do any simulation
incorrectly.

it has a critical design flaw:
it doesn't correctly report that Px halts.

    /Flibble.


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

  • From olcott@21:1/5 to Richard Damon on Wed Apr 19 18:16:50 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/19/2023 5:49 PM, Richard Damon wrote:
    On 4/19/23 11:05 AM, olcott wrote:
    On 4/19/2023 6:14 AM, Richard Damon wrote:
    On 4/18/23 11:48 PM, olcott wrote:

[...]

    No, the actual behavior of the input is what the MACHINE Ĥ applied to
    (Ĥ) does.
    Because embedded_H is a UTM that has been augmented with three features
    that cannot possibly cause its simulation of its input to diverge from
    the simulation of a pure UTM for the first N steps of simulation we know
    that it necessarily does provide the actual behavior specified by this
    input for these N steps.

    Because these N steps can include 10,000 recursive simulations of ⟨Ĥ⟩ by embedded_H, these recursive simulations <are> the actual behavior
    specified by this input.



    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

  • From Richard Damon@21:1/5 to olcott on Wed Apr 19 20:07:37 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/19/23 7:16 PM, olcott wrote:
    On 4/19/2023 5:49 PM, Richard Damon wrote:
    On 4/19/23 11:05 AM, olcott wrote:
    On 4/19/2023 6:14 AM, Richard Damon wrote:
    On 4/18/23 11:48 PM, olcott wrote:

[...]
    Because embedded_H is a UTM that has been augmented with three features
    that cannot possibly cause its simulation of its input to diverge from
    the simulation of a pure UTM for the first N steps of simulation we know
    that it necessarily does provide the actual behavior specified by this
    input for these N steps.

And it is no longer a UTM, since it fails to meet the requirements of a UTM.

    You are just showing you don't understand what a "UTM" actually is.

Note that "UTM" isn't just a fancy word for "simulator", but a very
specific type of simulator, and since some of your "additions" break
those requirements, your H isn't actually a UTM.

    You are just proving your stupidity.


    Because these N steps can include 10,000 recursive simulations of ⟨Ĥ⟩ by embedded_H, these recursive simulations <are> the actual behavior
    specified by this input.


And no matter how many steps (N) you design your H / embedded_H to
simulate Ĥ (Ĥ), there will always be a slightly larger (but still
finite) number of steps at which, if the same input is given to an ACTUAL
UTM, that simulation will reach the point where the top-level embedded_H
decides to abort its simulation, transition to Ĥ.qn, and Ĥ halts.

Since you MUST choose your "N" when you design your H, it is a SINGLE
DEFINED VALUE, and always too small to determine the actual behavior of
the Ĥ[n] built on that H[n].

    The only Ĥ[n] that is non-halting is when N becomes infinite, but for
    that N, H never answers.

    In fact, your flawed logic is based on the LIE that embedded_H can have
    a different N than H does, which just means you LIED when you said you
    built Ĥ by the requirements.
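A toy C model of the H[n] / Ĥ[n] family described above (illustrative
only; direct execution stands in for simulation, and this H is hard-wired
to answer "non-halting" for its pathological input): for every finite
abort threshold N the directly executed D halts, at a nesting depth just
beyond N.

    #include <stdio.h>
    #include <stdbool.h>

    typedef void (*prog)(void);

    static int N;               /* abort threshold baked into this H[N]     */
    static int depth;           /* current nesting depth of "simulations"   */
    static int max_depth;       /* deepest nesting reached during a run     */

    static bool H(prog p, prog x);

    static void D(void)         /* pathological input built on this H       */
    {
        if (H(D, D))
            for (;;) { }        /* loop only if H reports halting           */
    }

    static bool H(prog p, prog x)
    {
        (void)x;
        if (depth >= N)         /* budget exhausted: abort the "simulation" */
            return false;
        depth++;
        if (depth > max_depth)
            max_depth = depth;
        p();                    /* toy "simulation" by direct execution     */
        depth--;
        return false;           /* this H is committed to "non-halting"     */
    }

    int main(void)
    {
        for (N = 1; N <= 5; N++) {
            depth = max_depth = 0;
            D();                /* the direct run of D always returns ...   */
            printf("N = %d: D halted after nesting %d levels\n", N, max_depth);
        }
        return 0;
    }

Only an H that never aborts (N unbounded) would make D non-halting, and
that H never returns an answer, which is the trade-off described above.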

  • From olcott@21:1/5 to Richard Damon on Wed Apr 19 19:31:31 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/19/2023 7:07 PM, Richard Damon wrote:
    On 4/19/23 7:16 PM, olcott wrote:
    On 4/19/2023 5:49 PM, Richard Damon wrote:
    On 4/19/23 11:05 AM, olcott wrote:
    On 4/19/2023 6:14 AM, Richard Damon wrote:
    On 4/18/23 11:48 PM, olcott wrote:

[...]

And it is no longer a UTM, since it fails to meet the requirements of a UTM.

    As you already agreed:
The behavior of N steps of ⟨Ĥ⟩ simulated by embedded_H must be the
actual behavior of these N steps because

    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it
    (c) Even aborting the simulation after N steps doesn't change the
    first N steps.
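
    As a small illustration of (a)-(c) (again only a toy Python model of my
    own, not the machinery under discussion; counter_machine and
    watched_simulation are hypothetical names): a simulator that watches
    each step, runs a pattern matcher, and aborts after N steps still
    produces exactly the same first N steps as an unrestricted simulation
    of the same input:

    from itertools import count, islice

    def counter_machine(start):
        # A trivially non-halting "machine": its configuration is a
        # number that keeps growing forever.
        for value in count(start):
            yield value

    def looks_non_halting(trace):
        return False          # dummy pattern matcher: never fires here

    def watched_simulation(machine, n):
        trace = []
        for step, config in enumerate(machine, start=1):
            trace.append(config)              # (a) watching each step
            if looks_non_halting(trace):      # (b) pattern matching
                break
            if step >= n:                     # (c) aborting after N steps
                break
        return trace

    N = 10
    limited = watched_simulation(counter_machine(0), N)
    pure_prefix = list(islice(counter_machine(0), N))   # pure simulator's
                                                        # first N steps
    print(limited == pure_prefix)             # True: the first N steps agree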



    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Wed Apr 19 20:45:40 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/19/23 8:31 PM, olcott wrote:
    On 4/19/2023 7:07 PM, Richard Damon wrote:
    On 4/19/23 7:16 PM, olcott wrote:
    On 4/19/2023 5:49 PM, Richard Damon wrote:
    On 4/19/23 11:05 AM, olcott wrote:
    On 4/19/2023 6:14 AM, Richard Damon wrote:
    On 4/18/23 11:48 PM, olcott wrote:

    *You keep slip sliding with the fallacy of equivocation error*
    The actual simulated input: ⟨Ĥ⟩ that embedded_H must compute its >>>>>>> mapping
    from never reaches its simulated final state of ⟨Ĥ.qn⟩ even after >>>>>>> 10,000
    necessarily correct recursive simulations because ⟨Ĥ⟩ is defined >>>>>>> to have
    a pathological relationship to embedded_H.


    An YOU keep on falling into your Strawman error. The question is
    NOT what does the "simulation by H" show, but what is the actual
    behavior of the actual machine the input represents.



    When a simulating halt decider correctly simulates N steps of its
    input
    it derives the exact same N steps that a pure UTM would derive because >>>>> it is itself a UTM with extra features.


    No, it ISN'T a UTM because if fails to meeet the definition of a UTM.

    You are just proving that you are a pathological liar that doesn't
    know what he is talking about.

    My reviewers cannot show that any of the extra features added to
    the UTM
    change the behavior of the simulated input for the first N steps of
    simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it
    (c) Even aborting the simulation after N steps doesn't change the
    first N steps.

    Which don't matter, as the question


    The actual behavior that the actual input: ⟨Ĥ⟩ represents is the >>>>> behavior of the simulation of N steps by embedded_H because embedded_H >>>>> has the exact same behavior as a UTM for these first N steps, and you >>>>> already agreed with this.

    No, the actual behavior of the input is what the MACHINE Ĥ applied
    to (Ĥ) does.
    Because embedded_H is a UTM that has been augmented with three features
    that cannot possibly cause its simulation of its input to diverge from
    the simulation of a pure UTM for the first N steps of simulation we know >>> that it necessarily does provide the actual behavior specified by this
    input for these N steps.

    And is no longer a UTM, since if fails to meet the requirement of a UTM

    As you already agreed:
    The behavior of N steps of ⟨Ĥ⟩ simulated by embedded_H must are the actual behavior of these N steps because

    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it
    (c) Even aborting the simulation after N steps doesn't change the
    first N steps.




    But a UTM doesn't simulate just "N" steps of its input, but ALL of them.

    Anything less and it isn't a UTM. DEFINITION.

    You're basically claiming that you're immortal since you haven't died, YET.

    You are just PROVING that you don't understand what you are talking about.


    Since you don't seem to want to hold to correct definitions, everything
    you say needs to be treated as a likely LIE or DECEPTION.

    You are just proving your incompetence.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Wed Apr 19 19:52:15 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/19/2023 7:45 PM, Richard Damon wrote:
    On 4/19/23 8:31 PM, olcott wrote:
    On 4/19/2023 7:07 PM, Richard Damon wrote:
    On 4/19/23 7:16 PM, olcott wrote:
    On 4/19/2023 5:49 PM, Richard Damon wrote:
    On 4/19/23 11:05 AM, olcott wrote:
    On 4/19/2023 6:14 AM, Richard Damon wrote:
    On 4/18/23 11:48 PM, olcott wrote:

    *You keep slip sliding with the fallacy of equivocation error* >>>>>>>> The actual simulated input: ⟨Ĥ⟩ that embedded_H must compute its >>>>>>>> mapping
    from never reaches its simulated final state of ⟨Ĥ.qn⟩ even >>>>>>>> after 10,000
    necessarily correct recursive simulations because ⟨Ĥ⟩ is defined >>>>>>>> to have
    a pathological relationship to embedded_H.


    An YOU keep on falling into your Strawman error. The question is >>>>>>> NOT what does the "simulation by H" show, but what is the actual >>>>>>> behavior of the actual machine the input represents.



    When a simulating halt decider correctly simulates N steps of its
    input
    it derives the exact same N steps that a pure UTM would derive
    because
    it is itself a UTM with extra features.


    No, it ISN'T a UTM because if fails to meeet the definition of a UTM. >>>>>
    You are just proving that you are a pathological liar that doesn't
    know what he is talking about.

    My reviewers cannot show that any of the extra features added to
    the UTM
    change the behavior of the simulated input for the first N steps of >>>>>> simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it
    (c) Even aborting the simulation after N steps doesn't change the
    first N steps.

    Which don't matter, as the question


    The actual behavior that the actual input: ⟨Ĥ⟩ represents is the >>>>>> behavior of the simulation of N steps by embedded_H because
    embedded_H
    has the exact same behavior as a UTM for these first N steps, and you >>>>>> already agreed with this.

    No, the actual behavior of the input is what the MACHINE Ĥ applied
    to (Ĥ) does.
    Because embedded_H is a UTM that has been augmented with three features >>>> that cannot possibly cause its simulation of its input to diverge from >>>> the simulation of a pure UTM for the first N steps of simulation we
    know
    that it necessarily does provide the actual behavior specified by this >>>> input for these N steps.

    And is no longer a UTM, since if fails to meet the requirement of a UTM

    As you already agreed:
    The behavior of N steps of ⟨Ĥ⟩ simulated by embedded_H must are the
    actual behavior of these N steps because

    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it
    (c) Even aborting the simulation after N steps doesn't change the
    first N steps.




    But a UTM doesn't simulate just "N" steps of its input, but ALL of them.


    Yet when embedded_H simulates N steps of ⟨Ĥ⟩ this is the actual behavior of ⟨Ĥ⟩ for these N steps, thus when embedded_H simulates 10,000
    recursive simulations these are the actual behavior of ⟨Ĥ⟩.


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Wed Apr 19 21:08:01 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/19/23 8:52 PM, olcott wrote:
    On 4/19/2023 7:45 PM, Richard Damon wrote:
    On 4/19/23 8:31 PM, olcott wrote:
    On 4/19/2023 7:07 PM, Richard Damon wrote:
    On 4/19/23 7:16 PM, olcott wrote:
    On 4/19/2023 5:49 PM, Richard Damon wrote:
    On 4/19/23 11:05 AM, olcott wrote:
    On 4/19/2023 6:14 AM, Richard Damon wrote:
    On 4/18/23 11:48 PM, olcott wrote:

    *You keep slip sliding with the fallacy of equivocation error* >>>>>>>>> The actual simulated input: ⟨Ĥ⟩ that embedded_H must compute >>>>>>>>> its mapping
    from never reaches its simulated final state of ⟨Ĥ.qn⟩ even >>>>>>>>> after 10,000
    necessarily correct recursive simulations because ⟨Ĥ⟩ is >>>>>>>>> defined to have
    a pathological relationship to embedded_H.


    An YOU keep on falling into your Strawman error. The question is >>>>>>>> NOT what does the "simulation by H" show, but what is the actual >>>>>>>> behavior of the actual machine the input represents.



    When a simulating halt decider correctly simulates N steps of its >>>>>>> input
    it derives the exact same N steps that a pure UTM would derive
    because
    it is itself a UTM with extra features.


    No, it ISN'T a UTM because if fails to meeet the definition of a UTM. >>>>>>
    You are just proving that you are a pathological liar that doesn't >>>>>> know what he is talking about.

    My reviewers cannot show that any of the extra features added to >>>>>>> the UTM
    change the behavior of the simulated input for the first N steps of >>>>>>> simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it
    (c) Even aborting the simulation after N steps doesn't change the >>>>>>> first N steps.

    Which don't matter, as the question


    The actual behavior that the actual input: ⟨Ĥ⟩ represents is the >>>>>>> behavior of the simulation of N steps by embedded_H because
    embedded_H
    has the exact same behavior as a UTM for these first N steps, and >>>>>>> you
    already agreed with this.

    No, the actual behavior of the input is what the MACHINE Ĥ applied >>>>>> to (Ĥ) does.
    Because embedded_H is a UTM that has been augmented with three
    features
    that cannot possibly cause its simulation of its input to diverge from >>>>> the simulation of a pure UTM for the first N steps of simulation we
    know
    that it necessarily does provide the actual behavior specified by this >>>>> input for these N steps.

    And is no longer a UTM, since if fails to meet the requirement of a UTM >>>>
    As you already agreed:
    The behavior of N steps of ⟨Ĥ⟩ simulated by embedded_H must are the >>> actual behavior of these N steps because

    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it
    (c) Even aborting the simulation after N steps doesn't change the
    first N steps.




    But a UTM doesn't simulate just "N" steps of its input, but ALL of them.


    Yet when embedded_H simulates N steps of ⟨Ĥ⟩ this is the actual behavior of ⟨Ĥ⟩ for these N steps, thus when embedded_H simulates 10,000 recursive simulations these are the actual behavior of ⟨Ĥ⟩.



    Yes, but that doesn't actually show the ACTUAL behavior of the input as
    defined, since that would be what the ACTUAL MACHINE does, or the
    COMPLETE SIMULATION done by a UTM, not the PARTIAL simulation done by H.

    You are using a STRAWMAN criterion, so you get the wrong answer.

    That is like driving 10 miles on a road, then getting off, and saying
    that since it all looked the same, this road must go on forever.

    Note, embedded_H simulates for the EXACT SAME number of steps as H, so
    it gets the exact same answer, because it INCORRECTLY aborts at the
    exact same point and decides its input is non-halting, returning that
    answer to its caller, which makes the ACTUAL BEHAVIOR that the input
    represents Halting.

    You are just proving that you don't understand the simple basics of the
    theory, and are proving yourself to be an ignorant liar.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Wed Apr 19 21:38:28 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/19/23 9:25 PM, olcott wrote:
    On 4/19/2023 8:08 PM, Richard Damon wrote:
    On 4/19/23 8:52 PM, olcott wrote:
    On 4/19/2023 7:45 PM, Richard Damon wrote:
    On 4/19/23 8:31 PM, olcott wrote:
    On 4/19/2023 7:07 PM, Richard Damon wrote:
    On 4/19/23 7:16 PM, olcott wrote:
    On 4/19/2023 5:49 PM, Richard Damon wrote:
    On 4/19/23 11:05 AM, olcott wrote:
    On 4/19/2023 6:14 AM, Richard Damon wrote:
    On 4/18/23 11:48 PM, olcott wrote:

    *You keep slip sliding with the fallacy of equivocation error* >>>>>>>>>>> The actual simulated input: ⟨Ĥ⟩ that embedded_H must compute >>>>>>>>>>> its mapping
    from never reaches its simulated final state of ⟨Ĥ.qn⟩ even >>>>>>>>>>> after 10,000
    necessarily correct recursive simulations because ⟨Ĥ⟩ is >>>>>>>>>>> defined to have
    a pathological relationship to embedded_H.


    An YOU keep on falling into your Strawman error. The question >>>>>>>>>> is NOT what does the "simulation by H" show, but what is the >>>>>>>>>> actual behavior of the actual machine the input represents. >>>>>>>>>>


    When a simulating halt decider correctly simulates N steps of >>>>>>>>> its input
    it derives the exact same N steps that a pure UTM would derive >>>>>>>>> because
    it is itself a UTM with extra features.


    No, it ISN'T a UTM because if fails to meeet the definition of a >>>>>>>> UTM.

    You are just proving that you are a pathological liar that
    doesn't know what he is talking about.

    My reviewers cannot show that any of the extra features added >>>>>>>>> to the UTM
    change the behavior of the simulated input for the first N
    steps of
    simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it >>>>>>>>> (c) Even aborting the simulation after N steps doesn't change >>>>>>>>> the first N steps.

    Which don't matter, as the question


    The actual behavior that the actual input: ⟨Ĥ⟩ represents is the >>>>>>>>> behavior of the simulation of N steps by embedded_H because
    embedded_H
    has the exact same behavior as a UTM for these first N steps, >>>>>>>>> and you
    already agreed with this.

    No, the actual behavior of the input is what the MACHINE Ĥ
    applied to (Ĥ) does.
    Because embedded_H is a UTM that has been augmented with three
    features
    that cannot possibly cause its simulation of its input to diverge >>>>>>> from
    the simulation of a pure UTM for the first N steps of simulation >>>>>>> we know
    that it necessarily does provide the actual behavior specified by >>>>>>> this
    input for these N steps.

    And is no longer a UTM, since if fails to meet the requirement of
    a UTM

    As you already agreed:
    The behavior of N steps of ⟨Ĥ⟩ simulated by embedded_H must are the >>>>> actual behavior of these N steps because

    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it
    (c) Even aborting the simulation after N steps doesn't change the
    first N steps.




    But a UTM doesn't simulate just "N" steps of its input, but ALL of
    them.


    Yet when embedded_H simulates N steps of ⟨Ĥ⟩ this is the actual behavior
    of ⟨Ĥ⟩ for these N steps, thus when embedded_H simulates 10,000
    recursive simulations these are the actual behavior of ⟨Ĥ⟩.



    Yes, but doesn't actually show the ACTUAL behavior of the input as
    defined,
    There is only one actual behavior of the actual input and this behavior
    is correctly demonstrated by N steps of ⟨Ĥ⟩ simulated by embedded_H.

    Nope. Read the problem definition.

    The behavior to be decided by a Halt Decider is the behavior of the
    ACTUAL MACHINE which is described by the input.

    You are just trying to pass off a DECEITFUL LIE about what your H is
    supposed to do, because you just don't understand the theory, because
    you are just too ignorant.


    If you simply don't "believe in" UTMs then you might not see this
    correctly.

    No, you are the one that doesn't seem to "believe" in UTMs. You don't
    seem to understand that a UTM ALWAYS recreates the behavior of the
    machine it is given the description of, because it NEVER stops until it
    is done.

    Anything less is NOT a UTM, and you LIE when you claim your H is one.


    If you fully comprehend UTMs then you understand that 10,000 recursive simulations of ⟨Ĥ⟩ by embedded_H are the actual behavior of ⟨Ĥ⟩.



    Nope, ALL the steps of Ĥ (Ĥ) are the behavior of the input, which is
    also what happens when you give a REAL UTM (one that doesn't stop) the
    input UTM (Ĥ) (Ĥ).

    You are just stuck in using INCORRECT definitions, so you get wrong
    answers. This shows that you really don't understand the very basics of
    what Truth is, because Truth is based on using the RIGHT definitions,
    the definitions established by the field.

    Thus, YOU LIE when you make your claims, because you are too ignorant.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Wed Apr 19 20:25:04 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/19/2023 8:08 PM, Richard Damon wrote:
    On 4/19/23 8:52 PM, olcott wrote:
    On 4/19/2023 7:45 PM, Richard Damon wrote:
    On 4/19/23 8:31 PM, olcott wrote:
    On 4/19/2023 7:07 PM, Richard Damon wrote:
    On 4/19/23 7:16 PM, olcott wrote:
    On 4/19/2023 5:49 PM, Richard Damon wrote:
    On 4/19/23 11:05 AM, olcott wrote:
    On 4/19/2023 6:14 AM, Richard Damon wrote:
    On 4/18/23 11:48 PM, olcott wrote:

    *You keep slip sliding with the fallacy of equivocation error* >>>>>>>>>> The actual simulated input: ⟨Ĥ⟩ that embedded_H must compute >>>>>>>>>> its mapping
    from never reaches its simulated final state of ⟨Ĥ.qn⟩ even >>>>>>>>>> after 10,000
    necessarily correct recursive simulations because ⟨Ĥ⟩ is >>>>>>>>>> defined to have
    a pathological relationship to embedded_H.


    An YOU keep on falling into your Strawman error. The question >>>>>>>>> is NOT what does the "simulation by H" show, but what is the >>>>>>>>> actual behavior of the actual machine the input represents.



    When a simulating halt decider correctly simulates N steps of
    its input
    it derives the exact same N steps that a pure UTM would derive >>>>>>>> because
    it is itself a UTM with extra features.


    No, it ISN'T a UTM because if fails to meeet the definition of a >>>>>>> UTM.

    You are just proving that you are a pathological liar that
    doesn't know what he is talking about.

    My reviewers cannot show that any of the extra features added to >>>>>>>> the UTM
    change the behavior of the simulated input for the first N steps of >>>>>>>> simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it
    (c) Even aborting the simulation after N steps doesn't change
    the first N steps.

    Which don't matter, as the question


    The actual behavior that the actual input: ⟨Ĥ⟩ represents is the >>>>>>>> behavior of the simulation of N steps by embedded_H because
    embedded_H
    has the exact same behavior as a UTM for these first N steps,
    and you
    already agreed with this.

    No, the actual behavior of the input is what the MACHINE Ĥ
    applied to (Ĥ) does.
    Because embedded_H is a UTM that has been augmented with three
    features
    that cannot possibly cause its simulation of its input to diverge
    from
    the simulation of a pure UTM for the first N steps of simulation
    we know
    that it necessarily does provide the actual behavior specified by
    this
    input for these N steps.

    And is no longer a UTM, since if fails to meet the requirement of a
    UTM

    As you already agreed:
    The behavior of N steps of ⟨Ĥ⟩ simulated by embedded_H must are the >>>> actual behavior of these N steps because

    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it
    (c) Even aborting the simulation after N steps doesn't change the
    first N steps.




    But a UTM doesn't simulate just "N" steps of its input, but ALL of them. >>>

    Yet when embedded_H simulates N steps of ⟨Ĥ⟩ this is the actual behavior
    of ⟨Ĥ⟩ for these N steps, thus when embedded_H simulates 10,000
    recursive simulations these are the actual behavior of ⟨Ĥ⟩.



    Yes, but doesn't actually show the ACTUAL behavior of the input as
    defined,
    There is only one actual behavior of the actual input and this behavior
    is correctly demonstrated by N steps of ⟨Ĥ⟩ simulated by embedded_H.

    If you simply don't "believe in" UTMs then you might not see this
    correctly.

    If you fully comprehend UTMs then you understand that 10,000 recursive simulations of ⟨Ĥ⟩ by embedded_H are the actual behavior of ⟨Ĥ⟩.


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Wed Apr 19 20:59:49 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/19/2023 8:38 PM, Richard Damon wrote:
    On 4/19/23 9:25 PM, olcott wrote:
    On 4/19/2023 8:08 PM, Richard Damon wrote:
    On 4/19/23 8:52 PM, olcott wrote:
    On 4/19/2023 7:45 PM, Richard Damon wrote:
    On 4/19/23 8:31 PM, olcott wrote:
    On 4/19/2023 7:07 PM, Richard Damon wrote:
    On 4/19/23 7:16 PM, olcott wrote:
    On 4/19/2023 5:49 PM, Richard Damon wrote:
    On 4/19/23 11:05 AM, olcott wrote:
    On 4/19/2023 6:14 AM, Richard Damon wrote:
    On 4/18/23 11:48 PM, olcott wrote:

    *You keep slip sliding with the fallacy of equivocation error* >>>>>>>>>>>> The actual simulated input: ⟨Ĥ⟩ that embedded_H must compute >>>>>>>>>>>> its mapping
    from never reaches its simulated final state of ⟨Ĥ.qn⟩ even >>>>>>>>>>>> after 10,000
    necessarily correct recursive simulations because ⟨Ĥ⟩ is >>>>>>>>>>>> defined to have
    a pathological relationship to embedded_H.


    An YOU keep on falling into your Strawman error. The question >>>>>>>>>>> is NOT what does the "simulation by H" show, but what is the >>>>>>>>>>> actual behavior of the actual machine the input represents. >>>>>>>>>>>


    When a simulating halt decider correctly simulates N steps of >>>>>>>>>> its input
    it derives the exact same N steps that a pure UTM would derive >>>>>>>>>> because
    it is itself a UTM with extra features.


    No, it ISN'T a UTM because if fails to meeet the definition of >>>>>>>>> a UTM.

    You are just proving that you are a pathological liar that
    doesn't know what he is talking about.

    My reviewers cannot show that any of the extra features added >>>>>>>>>> to the UTM
    change the behavior of the simulated input for the first N >>>>>>>>>> steps of
    simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it >>>>>>>>>> (c) Even aborting the simulation after N steps doesn't change >>>>>>>>>> the first N steps.

    Which don't matter, as the question


    The actual behavior that the actual input: ⟨Ĥ⟩ represents is the
    behavior of the simulation of N steps by embedded_H because >>>>>>>>>> embedded_H
    has the exact same behavior as a UTM for these first N steps, >>>>>>>>>> and you
    already agreed with this.

    No, the actual behavior of the input is what the MACHINE Ĥ
    applied to (Ĥ) does.
    Because embedded_H is a UTM that has been augmented with three >>>>>>>> features
    that cannot possibly cause its simulation of its input to
    diverge from
    the simulation of a pure UTM for the first N steps of simulation >>>>>>>> we know
    that it necessarily does provide the actual behavior specified >>>>>>>> by this
    input for these N steps.

    And is no longer a UTM, since if fails to meet the requirement of >>>>>>> a UTM

    As you already agreed:
    The behavior of N steps of ⟨Ĥ⟩ simulated by embedded_H must are >>>>>> the actual behavior of these N steps because

    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it
    (c) Even aborting the simulation after N steps doesn't change the
    first N steps.




    But a UTM doesn't simulate just "N" steps of its input, but ALL of
    them.


    Yet when embedded_H simulates N steps of ⟨Ĥ⟩ this is the actual
    behavior
    of ⟨Ĥ⟩ for these N steps, thus when embedded_H simulates 10,000
    recursive simulations these are the actual behavior of ⟨Ĥ⟩.



    Yes, but doesn't actually show the ACTUAL behavior of the input as
    defined,
    There is only one actual behavior of the actual input and this behavior
    is correctly demonstrated by N steps of ⟨Ĥ⟩ simulated by embedded_H.

    Nope, Read the problem definition.

    The behavior to be decided by a Halt Decider is the behavior of the
    ACTUAL MACHINE which is decribed by the input.

    No matter what the problem definition says, the actual behavior of the
    actual input must necessarily be the N steps simulated by embedded_H.

    The only alternative is to simply disbelieve in UTMs.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Wed Apr 19 22:16:09 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/19/23 9:59 PM, olcott wrote:
    On 4/19/2023 8:38 PM, Richard Damon wrote:
    On 4/19/23 9:25 PM, olcott wrote:
    On 4/19/2023 8:08 PM, Richard Damon wrote:
    On 4/19/23 8:52 PM, olcott wrote:
    On 4/19/2023 7:45 PM, Richard Damon wrote:
    On 4/19/23 8:31 PM, olcott wrote:
    On 4/19/2023 7:07 PM, Richard Damon wrote:
    On 4/19/23 7:16 PM, olcott wrote:
    On 4/19/2023 5:49 PM, Richard Damon wrote:
    On 4/19/23 11:05 AM, olcott wrote:
    On 4/19/2023 6:14 AM, Richard Damon wrote:
    On 4/18/23 11:48 PM, olcott wrote:

    *You keep slip sliding with the fallacy of equivocation error* >>>>>>>>>>>>> The actual simulated input: ⟨Ĥ⟩ that embedded_H must >>>>>>>>>>>>> compute its mapping
    from never reaches its simulated final state of ⟨Ĥ.qn⟩ even >>>>>>>>>>>>> after 10,000
    necessarily correct recursive simulations because ⟨Ĥ⟩ is >>>>>>>>>>>>> defined to have
    a pathological relationship to embedded_H.


    An YOU keep on falling into your Strawman error. The
    question is NOT what does the "simulation by H" show, but >>>>>>>>>>>> what is the actual behavior of the actual machine the input >>>>>>>>>>>> represents.



    When a simulating halt decider correctly simulates N steps of >>>>>>>>>>> its input
    it derives the exact same N steps that a pure UTM would
    derive because
    it is itself a UTM with extra features.


    No, it ISN'T a UTM because if fails to meeet the definition of >>>>>>>>>> a UTM.

    You are just proving that you are a pathological liar that >>>>>>>>>> doesn't know what he is talking about.

    My reviewers cannot show that any of the extra features added >>>>>>>>>>> to the UTM
    change the behavior of the simulated input for the first N >>>>>>>>>>> steps of
    simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it >>>>>>>>>>> (c) Even aborting the simulation after N steps doesn't change >>>>>>>>>>> the first N steps.

    Which don't matter, as the question


    The actual behavior that the actual input: ⟨Ĥ⟩ represents is the
    behavior of the simulation of N steps by embedded_H because >>>>>>>>>>> embedded_H
    has the exact same behavior as a UTM for these first N steps, >>>>>>>>>>> and you
    already agreed with this.

    No, the actual behavior of the input is what the MACHINE Ĥ >>>>>>>>>> applied to (Ĥ) does.
    Because embedded_H is a UTM that has been augmented with three >>>>>>>>> features
    that cannot possibly cause its simulation of its input to
    diverge from
    the simulation of a pure UTM for the first N steps of
    simulation we know
    that it necessarily does provide the actual behavior specified >>>>>>>>> by this
    input for these N steps.

    And is no longer a UTM, since if fails to meet the requirement >>>>>>>> of a UTM

    As you already agreed:
    The behavior of N steps of ⟨Ĥ⟩ simulated by embedded_H must are >>>>>>> the actual behavior of these N steps because

    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it
    (c) Even aborting the simulation after N steps doesn't change the >>>>>>> first N steps.




    But a UTM doesn't simulate just "N" steps of its input, but ALL of >>>>>> them.


    Yet when embedded_H simulates N steps of ⟨Ĥ⟩ this is the actual >>>>> behavior
    of ⟨Ĥ⟩ for these N steps, thus when embedded_H simulates 10,000 >>>>> recursive simulations these are the actual behavior of ⟨Ĥ⟩.



    Yes, but doesn't actually show the ACTUAL behavior of the input as
    defined,
    There is only one actual behavior of the actual input and this behavior
    is correctly demonstrated by N steps of ⟨Ĥ⟩ simulated by embedded_H. >>
    Nope, Read the problem definition.

    The behavior to be decided by a Halt Decider is the behavior of the
    ACTUAL MACHINE which is decribed by the input.

    No matter what the problem definition says the actual behavior of the
    actual input must necessarily be the N steps simulated by embedded_H.

    The only alternative is to simply disbelieve in UTMs.


    NOPE, Since H isn't a UTM, because it doesn't meet the REQUIREMENTS of a
    UTM, the statement is meaningless.

    A UTM is defined as a simulator whose behavior DOES match the behavior
    of the input. MATCHING is the criterion that determines that it IS one,
    not a result of just calling a machine one.
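
    A minimal sketch of that requirement (my own toy Python model;
    slow_halter and the two simulator functions are hypothetical names): a
    simulator only matches the behavior of a halting machine if it keeps
    going until that machine halts, and a step-limited simulator can fail
    the test even though every step it did simulate was correct:

    def slow_halter(n):
        # A halting "machine": takes n steps, then halts.
        for _ in range(n):
            yield

    def run_to_completion(machine):
        # UTM-style simulation: does not stop until the machine does.
        for _ in machine:
            pass
        return True      # reached only because the simulated run halted

    def run_with_budget(machine, budget):
        # Step-limited simulation: gives up after `budget` steps and
        # reports False ("did not halt").
        for step, _ in enumerate(machine, start=1):
            if step >= budget:
                return False
        return True

    steps_needed = 1000  # this machine halts, but only after 1000 steps
    print(run_to_completion(slow_halter(steps_needed)))     # True: matches
    print(run_with_budget(slow_halter(steps_needed), 100))  # False: fails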

    By your logic, I could change my cat into a dog by just calling it one.

    Calling your machine a UTM doesn't make it one; you need to make it
    meet the requirements to earn the title. Yours doesn't.

    What happens if you decide to call the color "Red" "Green", and run
    through a traffic light when the top light is on? You get a traffic
    ticket for running a "Red Light", because just calling it a Green
    Light doesn't make it one.

    YOU FAIL.

    You are showing you don't understand the basics of logic, so your ideas
    are just failures.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Wed Apr 19 23:04:41 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/19/2023 10:41 PM, Richard Damon wrote:
    On 4/19/23 11:29 PM, olcott wrote:
    On 4/19/2023 9:16 PM, Richard Damon wrote:
    On 4/19/23 9:59 PM, olcott wrote:
    On 4/19/2023 8:38 PM, Richard Damon wrote:
    On 4/19/23 9:25 PM, olcott wrote:
    On 4/19/2023 8:08 PM, Richard Damon wrote:
    On 4/19/23 8:52 PM, olcott wrote:
    On 4/19/2023 7:45 PM, Richard Damon wrote:
    On 4/19/23 8:31 PM, olcott wrote:
    On 4/19/2023 7:07 PM, Richard Damon wrote:
    On 4/19/23 7:16 PM, olcott wrote:
    On 4/19/2023 5:49 PM, Richard Damon wrote:
    On 4/19/23 11:05 AM, olcott wrote:
    On 4/19/2023 6:14 AM, Richard Damon wrote:
    On 4/18/23 11:48 PM, olcott wrote:

    *You keep slip sliding with the fallacy of equivocation >>>>>>>>>>>>>>>> error*
    The actual simulated input: ⟨Ĥ⟩ that embedded_H must >>>>>>>>>>>>>>>> compute its mapping
    from never reaches its simulated final state of ⟨Ĥ.qn⟩ >>>>>>>>>>>>>>>> even after 10,000
    necessarily correct recursive simulations because ⟨Ĥ⟩ is >>>>>>>>>>>>>>>> defined to have
    a pathological relationship to embedded_H.


    An YOU keep on falling into your Strawman error. The >>>>>>>>>>>>>>> question is NOT what does the "simulation by H" show, but >>>>>>>>>>>>>>> what is the actual behavior of the actual machine the >>>>>>>>>>>>>>> input represents.



    When a simulating halt decider correctly simulates N steps >>>>>>>>>>>>>> of its input
    it derives the exact same N steps that a pure UTM would >>>>>>>>>>>>>> derive because
    it is itself a UTM with extra features.


    No, it ISN'T a UTM because if fails to meeet the definition >>>>>>>>>>>>> of a UTM.

    You are just proving that you are a pathological liar that >>>>>>>>>>>>> doesn't know what he is talking about.

    My reviewers cannot show that any of the extra features >>>>>>>>>>>>>> added to the UTM
    change the behavior of the simulated input for the first N >>>>>>>>>>>>>> steps of
    simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it >>>>>>>>>>>>>> (c) Even aborting the simulation after N steps doesn't >>>>>>>>>>>>>> change the first N steps.

    Which don't matter, as the question


    The actual behavior that the actual input: ⟨Ĥ⟩ represents >>>>>>>>>>>>>> is the
    behavior of the simulation of N steps by embedded_H >>>>>>>>>>>>>> because embedded_H
    has the exact same behavior as a UTM for these first N >>>>>>>>>>>>>> steps, and you
    already agreed with this.

    No, the actual behavior of the input is what the MACHINE Ĥ >>>>>>>>>>>>> applied to (Ĥ) does.
    Because embedded_H is a UTM that has been augmented with >>>>>>>>>>>> three features
    that cannot possibly cause its simulation of its input to >>>>>>>>>>>> diverge from
    the simulation of a pure UTM for the first N steps of
    simulation we know
    that it necessarily does provide the actual behavior
    specified by this
    input for these N steps.

    And is no longer a UTM, since if fails to meet the
    requirement of a UTM

    As you already agreed:
    The behavior of N steps of ⟨Ĥ⟩ simulated by embedded_H must >>>>>>>>>> are the actual behavior of these N steps because

    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it >>>>>>>>>> (c) Even aborting the simulation after N steps doesn't change the >>>>>>>>>> first N steps.




    But a UTM doesn't simulate just "N" steps of its input, but ALL >>>>>>>>> of them.


    Yet when embedded_H simulates N steps of ⟨Ĥ⟩ this is the actual >>>>>>>> behavior
    of ⟨Ĥ⟩ for these N steps, thus when embedded_H simulates 10,000 >>>>>>>> recursive simulations these are the actual behavior of ⟨Ĥ⟩. >>>>>>>>


    Yes, but doesn't actually show the ACTUAL behavior of the input
    as defined,
    There is only one actual behavior of the actual input and this
    behavior
    is correctly demonstrated by N steps of ⟨Ĥ⟩ simulated by embedded_H.

    Nope, Read the problem definition.

    The behavior to be decided by a Halt Decider is the behavior of the
    ACTUAL MACHINE which is decribed by the input.

    No matter what the problem definition says the actual behavior of the
    actual input must necessarily be the N steps simulated by embedded_H.

    The only alternative is to simply disbelieve in UTMs.


    NOPE, Since H isn't a UTM, because it doesn't meet the REQUIREMENTS
    of a UTM, the statement is meaningless.
    It <is> equivalent to a UTM for the first N steps that can include
    10,000 recursive simulations.


    Which means it ISN'T the Equivalent of a UTM. PERIOD.

    Why are you playing head games with this?

    You know and acknowledged that the first N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are the actual behavior of ⟨Ĥ⟩ for these first N steps.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Wed Apr 19 22:29:50 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/19/2023 9:16 PM, Richard Damon wrote:
    On 4/19/23 9:59 PM, olcott wrote:
    On 4/19/2023 8:38 PM, Richard Damon wrote:
    On 4/19/23 9:25 PM, olcott wrote:
    On 4/19/2023 8:08 PM, Richard Damon wrote:
    On 4/19/23 8:52 PM, olcott wrote:
    On 4/19/2023 7:45 PM, Richard Damon wrote:
    On 4/19/23 8:31 PM, olcott wrote:
    On 4/19/2023 7:07 PM, Richard Damon wrote:
    On 4/19/23 7:16 PM, olcott wrote:
    On 4/19/2023 5:49 PM, Richard Damon wrote:
    On 4/19/23 11:05 AM, olcott wrote:
    On 4/19/2023 6:14 AM, Richard Damon wrote:
    On 4/18/23 11:48 PM, olcott wrote:

    *You keep slip sliding with the fallacy of equivocation >>>>>>>>>>>>>> error*
    The actual simulated input: ⟨Ĥ⟩ that embedded_H must >>>>>>>>>>>>>> compute its mapping
    from never reaches its simulated final state of ⟨Ĥ.qn⟩ >>>>>>>>>>>>>> even after 10,000
    necessarily correct recursive simulations because ⟨Ĥ⟩ is >>>>>>>>>>>>>> defined to have
    a pathological relationship to embedded_H.


    An YOU keep on falling into your Strawman error. The >>>>>>>>>>>>> question is NOT what does the "simulation by H" show, but >>>>>>>>>>>>> what is the actual behavior of the actual machine the input >>>>>>>>>>>>> represents.



    When a simulating halt decider correctly simulates N steps >>>>>>>>>>>> of its input
    it derives the exact same N steps that a pure UTM would >>>>>>>>>>>> derive because
    it is itself a UTM with extra features.


    No, it ISN'T a UTM because if fails to meeet the definition >>>>>>>>>>> of a UTM.

    You are just proving that you are a pathological liar that >>>>>>>>>>> doesn't know what he is talking about.

    My reviewers cannot show that any of the extra features >>>>>>>>>>>> added to the UTM
    change the behavior of the simulated input for the first N >>>>>>>>>>>> steps of
    simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it >>>>>>>>>>>> (c) Even aborting the simulation after N steps doesn't >>>>>>>>>>>> change the first N steps.

    Which don't matter, as the question


    The actual behavior that the actual input: ⟨Ĥ⟩ represents is >>>>>>>>>>>> the
    behavior of the simulation of N steps by embedded_H because >>>>>>>>>>>> embedded_H
    has the exact same behavior as a UTM for these first N >>>>>>>>>>>> steps, and you
    already agreed with this.

    No, the actual behavior of the input is what the MACHINE Ĥ >>>>>>>>>>> applied to (Ĥ) does.
    Because embedded_H is a UTM that has been augmented with three >>>>>>>>>> features
    that cannot possibly cause its simulation of its input to
    diverge from
    the simulation of a pure UTM for the first N steps of
    simulation we know
    that it necessarily does provide the actual behavior specified >>>>>>>>>> by this
    input for these N steps.

    And is no longer a UTM, since if fails to meet the requirement >>>>>>>>> of a UTM

    As you already agreed:
    The behavior of N steps of ⟨Ĥ⟩ simulated by embedded_H must are >>>>>>>> the actual behavior of these N steps because

    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it
    (c) Even aborting the simulation after N steps doesn't change the >>>>>>>> first N steps.




    But a UTM doesn't simulate just "N" steps of its input, but ALL
    of them.


    Yet when embedded_H simulates N steps of ⟨Ĥ⟩ this is the actual >>>>>> behavior
    of ⟨Ĥ⟩ for these N steps, thus when embedded_H simulates 10,000 >>>>>> recursive simulations these are the actual behavior of ⟨Ĥ⟩.



    Yes, but doesn't actually show the ACTUAL behavior of the input as
    defined,
    There is only one actual behavior of the actual input and this behavior >>>> is correctly demonstrated by N steps of ⟨Ĥ⟩ simulated by embedded_H. >>>
    Nope, Read the problem definition.

    The behavior to be decided by a Halt Decider is the behavior of the
    ACTUAL MACHINE which is decribed by the input.

    No matter what the problem definition says the actual behavior of the
    actual input must necessarily be the N steps simulated by embedded_H.

    The only alternative is to simply disbelieve in UTMs.


    NOPE, Since H isn't a UTM, because it doesn't meet the REQUIREMENTS of a
    UTM, the statement is meaningless.
    It <is> equivalent to a UTM for the first N steps that can include
    10,000 recursive simulations.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Wed Apr 19 23:41:54 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/19/23 11:29 PM, olcott wrote:
    On 4/19/2023 9:16 PM, Richard Damon wrote:
    On 4/19/23 9:59 PM, olcott wrote:
    On 4/19/2023 8:38 PM, Richard Damon wrote:
    On 4/19/23 9:25 PM, olcott wrote:
    On 4/19/2023 8:08 PM, Richard Damon wrote:
    On 4/19/23 8:52 PM, olcott wrote:
    On 4/19/2023 7:45 PM, Richard Damon wrote:
    On 4/19/23 8:31 PM, olcott wrote:
    On 4/19/2023 7:07 PM, Richard Damon wrote:
    On 4/19/23 7:16 PM, olcott wrote:
    On 4/19/2023 5:49 PM, Richard Damon wrote:
    On 4/19/23 11:05 AM, olcott wrote:
    On 4/19/2023 6:14 AM, Richard Damon wrote:
    On 4/18/23 11:48 PM, olcott wrote:

    *You keep slip sliding with the fallacy of equivocation >>>>>>>>>>>>>>> error*
    The actual simulated input: ⟨Ĥ⟩ that embedded_H must >>>>>>>>>>>>>>> compute its mapping
    from never reaches its simulated final state of ⟨Ĥ.qn⟩ >>>>>>>>>>>>>>> even after 10,000
    necessarily correct recursive simulations because ⟨Ĥ⟩ is >>>>>>>>>>>>>>> defined to have
    a pathological relationship to embedded_H.


    An YOU keep on falling into your Strawman error. The >>>>>>>>>>>>>> question is NOT what does the "simulation by H" show, but >>>>>>>>>>>>>> what is the actual behavior of the actual machine the >>>>>>>>>>>>>> input represents.



    When a simulating halt decider correctly simulates N steps >>>>>>>>>>>>> of its input
    it derives the exact same N steps that a pure UTM would >>>>>>>>>>>>> derive because
    it is itself a UTM with extra features.


    No, it ISN'T a UTM because if fails to meeet the definition >>>>>>>>>>>> of a UTM.

    You are just proving that you are a pathological liar that >>>>>>>>>>>> doesn't know what he is talking about.

    My reviewers cannot show that any of the extra features >>>>>>>>>>>>> added to the UTM
    change the behavior of the simulated input for the first N >>>>>>>>>>>>> steps of
    simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it >>>>>>>>>>>>> (c) Even aborting the simulation after N steps doesn't >>>>>>>>>>>>> change the first N steps.

    Which don't matter, as the question


    The actual behavior that the actual input: ⟨Ĥ⟩ represents >>>>>>>>>>>>> is the
    behavior of the simulation of N steps by embedded_H because >>>>>>>>>>>>> embedded_H
    has the exact same behavior as a UTM for these first N >>>>>>>>>>>>> steps, and you
    already agreed with this.

    No, the actual behavior of the input is what the MACHINE Ĥ >>>>>>>>>>>> applied to (Ĥ) does.
    Because embedded_H is a UTM that has been augmented with >>>>>>>>>>> three features
    that cannot possibly cause its simulation of its input to >>>>>>>>>>> diverge from
    the simulation of a pure UTM for the first N steps of
    simulation we know
    that it necessarily does provide the actual behavior
    specified by this
    input for these N steps.

    And is no longer a UTM, since if fails to meet the requirement >>>>>>>>>> of a UTM

    As you already agreed:
    The behavior of N steps of ⟨Ĥ⟩ simulated by embedded_H must are >>>>>>>>> the actual behavior of these N steps because

    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it >>>>>>>>> (c) Even aborting the simulation after N steps doesn't change the >>>>>>>>> first N steps.




    But a UTM doesn't simulate just "N" steps of its input, but ALL >>>>>>>> of them.


    Yet when embedded_H simulates N steps of ⟨Ĥ⟩ this is the actual >>>>>>> behavior
    of ⟨Ĥ⟩ for these N steps, thus when embedded_H simulates 10,000 >>>>>>> recursive simulations these are the actual behavior of ⟨Ĥ⟩. >>>>>>>


    Yes, but doesn't actually show the ACTUAL behavior of the input as >>>>>> defined,
    There is only one actual behavior of the actual input and this
    behavior
    is correctly demonstrated by N steps of ⟨Ĥ⟩ simulated by embedded_H. >>>>
    Nope, Read the problem definition.

    The behavior to be decided by a Halt Decider is the behavior of the
    ACTUAL MACHINE which is decribed by the input.

    No matter what the problem definition says the actual behavior of the
    actual input must necessarily be the N steps simulated by embedded_H.

    The only alternative is to simply disbelieve in UTMs.


    NOPE, Since H isn't a UTM, because it doesn't meet the REQUIREMENTS of
    a UTM, the statement is meaningless.
    It <is> equivalent to a UTM for the first N steps that can include
    10,000 recursive simulations.


    Which means it ISN'T the Equivalent of a UTM. PERIOD.

    The numbers from 1 to 10 are not the equivalent of the numbers from 1 to
    a hundred.

    NOT EQUIVALENT is NOT EQUIVALENT.

    It might correctly simulate the first N steps, but that doesn't make it
    a UTM.

    Like I have said before, your statement is like claiming you are
    immortal because you haven't died YET.

    Being a UTM implies MORE than just correctly simulating part of a
    machine's behavior.

    You are just proving you are not qualified to talk about Turing
    Machines, or Logic.

    You are just too ignorant, and make up too much stuff, which is the same
    as just lying.

    Your legacy is that of a kook.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Thu Apr 20 07:23:35 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/20/23 12:04 AM, olcott wrote:
    On 4/19/2023 10:41 PM, Richard Damon wrote:
    On 4/19/23 11:29 PM, olcott wrote:
    On 4/19/2023 9:16 PM, Richard Damon wrote:
    On 4/19/23 9:59 PM, olcott wrote:
    On 4/19/2023 8:38 PM, Richard Damon wrote:
    On 4/19/23 9:25 PM, olcott wrote:
    On 4/19/2023 8:08 PM, Richard Damon wrote:
    On 4/19/23 8:52 PM, olcott wrote:
    On 4/19/2023 7:45 PM, Richard Damon wrote:
    On 4/19/23 8:31 PM, olcott wrote:
    On 4/19/2023 7:07 PM, Richard Damon wrote:
    On 4/19/23 7:16 PM, olcott wrote:
    On 4/19/2023 5:49 PM, Richard Damon wrote:
    On 4/19/23 11:05 AM, olcott wrote:
    On 4/19/2023 6:14 AM, Richard Damon wrote:
    On 4/18/23 11:48 PM, olcott wrote:

    *You keep slip sliding with the fallacy of equivocation >>>>>>>>>>>>>>>>> error*
    The actual simulated input: ⟨Ĥ⟩ that embedded_H must >>>>>>>>>>>>>>>>> compute its mapping
    from never reaches its simulated final state of ⟨Ĥ.qn⟩ >>>>>>>>>>>>>>>>> even after 10,000
    necessarily correct recursive simulations because ⟨Ĥ⟩ >>>>>>>>>>>>>>>>> is defined to have
    a pathological relationship to embedded_H.


    An YOU keep on falling into your Strawman error. The >>>>>>>>>>>>>>>> question is NOT what does the "simulation by H" show, >>>>>>>>>>>>>>>> but what is the actual behavior of the actual machine >>>>>>>>>>>>>>>> the input represents.



    When a simulating halt decider correctly simulates N >>>>>>>>>>>>>>> steps of its input
    it derives the exact same N steps that a pure UTM would >>>>>>>>>>>>>>> derive because
    it is itself a UTM with extra features.


    No, it ISN'T a UTM because if fails to meeet the
    definition of a UTM.

    You are just proving that you are a pathological liar that >>>>>>>>>>>>>> doesn't know what he is talking about.

    My reviewers cannot show that any of the extra features >>>>>>>>>>>>>>> added to the UTM
    change the behavior of the simulated input for the first >>>>>>>>>>>>>>> N steps of
    simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it >>>>>>>>>>>>>>> (c) Even aborting the simulation after N steps doesn't >>>>>>>>>>>>>>> change the first N steps.

    Which don't matter, as the question


    The actual behavior that the actual input: ⟨Ĥ⟩ represents >>>>>>>>>>>>>>> is the
    behavior of the simulation of N steps by embedded_H >>>>>>>>>>>>>>> because embedded_H
    has the exact same behavior as a UTM for these first N >>>>>>>>>>>>>>> steps, and you
    already agreed with this.

    No, the actual behavior of the input is what the MACHINE Ĥ >>>>>>>>>>>>>> applied to (Ĥ) does.
    Because embedded_H is a UTM that has been augmented with >>>>>>>>>>>>> three features
    that cannot possibly cause its simulation of its input to >>>>>>>>>>>>> diverge from
    the simulation of a pure UTM for the first N steps of >>>>>>>>>>>>> simulation we know
    that it necessarily does provide the actual behavior >>>>>>>>>>>>> specified by this
    input for these N steps.

    And is no longer a UTM, since if fails to meet the
    requirement of a UTM

    As you already agreed:
    The behavior of N steps of ⟨Ĥ⟩ simulated by embedded_H must >>>>>>>>>>> are the actual behavior of these N steps because

    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it >>>>>>>>>>> (c) Even aborting the simulation after N steps doesn't change >>>>>>>>>>> the
    first N steps.




    But a UTM doesn't simulate just "N" steps of its input, but >>>>>>>>>> ALL of them.


    Yet when embedded_H simulates N steps of ⟨Ĥ⟩ this is the actual >>>>>>>>> behavior
    of ⟨Ĥ⟩ for these N steps, thus when embedded_H simulates 10,000 >>>>>>>>> recursive simulations these are the actual behavior of ⟨Ĥ⟩. >>>>>>>>>


    Yes, but doesn't actually show the ACTUAL behavior of the input >>>>>>>> as defined,
    There is only one actual behavior of the actual input and this
    behavior
    is correctly demonstrated by N steps of ⟨Ĥ⟩ simulated by embedded_H.

    Nope, Read the problem definition.

    The behavior to be decided by a Halt Decider is the behavior of
    the ACTUAL MACHINE which is decribed by the input.

    No matter what the problem definition says the actual behavior of the >>>>> actual input must necessarily be the N steps simulated by embedded_H. >>>>>
    The only alternative is to simply disbelieve in UTMs.


    NOPE, Since H isn't a UTM, because it doesn't meet the REQUIREMENTS
    of a UTM, the statement is meaningless.
    It <is> equivalent to a UTM for the first N steps that can include
    10,000 recursive simulations.


    Which means it ISN'T the Equivalent of a UTM. PERIOD.

    Why are you playing head games with this?

    You know and acknowledged that the first N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are the actual behavior of ⟨Ĥ⟩ for these first N steps.


    Right, but we don't care about that. We care about the TOTAL behavior of
    the input, which H never gets to see, because it gives up.

    We know that

    H (M) w needs to go to qy if M w will halt when actually run (By the
    definition of a Halt decider)

    H (Ĥ) (Ĥ) goes to qn (by your assertions)

    Ĥ (Ĥ) will go to Ĥ.qn and halt when actually run.

    THEREFORE, H was just WRONG, BY DEFINITION.

    Also UTM (Ĥ) (Ĥ) will halt just like Ĥ (Ĥ)

    So, the same holds even if you want to use the alternate definition, that

    H (M) w needs to go to qy if UTM (M) w halts.

    Note, it is UTM (M) w, which ALWAYS has the same behavior for a given
    input, not "the correct simulation done by H".
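
    Restated as an executable check (my own sketch; the three facts below
    are the ones asserted above, written down as booleans rather than
    computed here):

    def meets_halting_requirement(decider_says_halts, input_actually_halts):
        # Linz requirement: H <M> w must go to qy exactly when M applied
        # to w halts.
        return decider_says_halts == input_actually_halts

    H_says_H_hat_halts   = False   # H (Ĥ) (Ĥ) goes to qn ("non-halting")
    H_hat_halts_when_run = True    # Ĥ (Ĥ) reaches Ĥ.qn and halts
    UTM_simulation_halts = True    # UTM (Ĥ) (Ĥ) halts just like Ĥ (Ĥ)

    # Direct-execution criterion:
    print(meets_halting_requirement(H_says_H_hat_halts,
                                    H_hat_halts_when_run))   # False: wrong
    # Alternate (complete-simulation) criterion:
    print(meets_halting_requirement(H_says_H_hat_halts,
                                    UTM_simulation_halts))   # False: wrong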

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Thu Apr 20 06:56:46 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/20/2023 6:23 AM, Richard Damon wrote:
    On 4/20/23 12:04 AM, olcott wrote:
    On 4/19/2023 10:41 PM, Richard Damon wrote:
    On 4/19/23 11:29 PM, olcott wrote:
    On 4/19/2023 9:16 PM, Richard Damon wrote:
    On 4/19/23 9:59 PM, olcott wrote:
    On 4/19/2023 8:38 PM, Richard Damon wrote:
    On 4/19/23 9:25 PM, olcott wrote:
    On 4/19/2023 8:08 PM, Richard Damon wrote:
    On 4/19/23 8:52 PM, olcott wrote:
    On 4/19/2023 7:45 PM, Richard Damon wrote:
    On 4/19/23 8:31 PM, olcott wrote:
    On 4/19/2023 7:07 PM, Richard Damon wrote:
    On 4/19/23 7:16 PM, olcott wrote:
    On 4/19/2023 5:49 PM, Richard Damon wrote:
    On 4/19/23 11:05 AM, olcott wrote:
    On 4/19/2023 6:14 AM, Richard Damon wrote:
    On 4/18/23 11:48 PM, olcott wrote:

    *You keep slip sliding with the fallacy of >>>>>>>>>>>>>>>>>> equivocation error*
    The actual simulated input: ⟨Ĥ⟩ that embedded_H must >>>>>>>>>>>>>>>>>> compute its mapping
    from never reaches its simulated final state of ⟨Ĥ.qn⟩ >>>>>>>>>>>>>>>>>> even after 10,000
    necessarily correct recursive simulations because ⟨Ĥ⟩ >>>>>>>>>>>>>>>>>> is defined to have
    a pathological relationship to embedded_H.


    An YOU keep on falling into your Strawman error. The >>>>>>>>>>>>>>>>> question is NOT what does the "simulation by H" show, >>>>>>>>>>>>>>>>> but what is the actual behavior of the actual machine >>>>>>>>>>>>>>>>> the input represents.



    When a simulating halt decider correctly simulates N >>>>>>>>>>>>>>>> steps of its input
    it derives the exact same N steps that a pure UTM would >>>>>>>>>>>>>>>> derive because
    it is itself a UTM with extra features.


    No, it ISN'T a UTM because if fails to meeet the >>>>>>>>>>>>>>> definition of a UTM.

    You are just proving that you are a pathological liar >>>>>>>>>>>>>>> that doesn't know what he is talking about.

    My reviewers cannot show that any of the extra features >>>>>>>>>>>>>>>> added to the UTM
    change the behavior of the simulated input for the first >>>>>>>>>>>>>>>> N steps of
    simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't >>>>>>>>>>>>>>>> change it
    (c) Even aborting the simulation after N steps doesn't >>>>>>>>>>>>>>>> change the first N steps.

    Which don't matter, as the question


    The actual behavior that the actual input: ⟨Ĥ⟩ >>>>>>>>>>>>>>>> represents is the
    behavior of the simulation of N steps by embedded_H >>>>>>>>>>>>>>>> because embedded_H
    has the exact same behavior as a UTM for these first N >>>>>>>>>>>>>>>> steps, and you
    already agreed with this.

    No, the actual behavior of the input is what the MACHINE >>>>>>>>>>>>>>> Ĥ applied to (Ĥ) does.
    Because embedded_H is a UTM that has been augmented with >>>>>>>>>>>>>> three features
    that cannot possibly cause its simulation of its input to >>>>>>>>>>>>>> diverge from
    the simulation of a pure UTM for the first N steps of >>>>>>>>>>>>>> simulation we know
    that it necessarily does provide the actual behavior >>>>>>>>>>>>>> specified by this
    input for these N steps.

    And is no longer a UTM, since if fails to meet the
    requirement of a UTM

    As you already agreed:
    The behavior of N steps of ⟨Ĥ⟩ simulated by embedded_H must >>>>>>>>>>>> are the actual behavior of these N steps because

    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it >>>>>>>>>>>> (c) Even aborting the simulation after N steps doesn't >>>>>>>>>>>> change the
    first N steps.




    But a UTM doesn't simulate just "N" steps of its input, but >>>>>>>>>>> ALL of them.


    Yet when embedded_H simulates N steps of ⟨Ĥ⟩ this is the >>>>>>>>>> actual behavior
    of ⟨Ĥ⟩ for these N steps, thus when embedded_H simulates 10,000 >>>>>>>>>> recursive simulations these are the actual behavior of ⟨Ĥ⟩. >>>>>>>>>>


    Yes, but doesn't actually show the ACTUAL behavior of the input >>>>>>>>> as defined,
    There is only one actual behavior of the actual input and this >>>>>>>> behavior
    is correctly demonstrated by N steps of ⟨Ĥ⟩ simulated by
    embedded_H.

    Nope, Read the problem definition.

    The behavior to be decided by a Halt Decider is the behavior of
    the ACTUAL MACHINE which is decribed by the input.

    No matter what the problem definition says the actual behavior of the >>>>>> actual input must necessarily be the N steps simulated by embedded_H. >>>>>>
    The only alternative is to simply disbelieve in UTMs.


    NOPE, Since H isn't a UTM, because it doesn't meet the REQUIREMENTS
    of a UTM, the statement is meaningless.
    It <is> equivalent to a UTM for the first N steps that can include
    10,000 recursive simulations.


    Which means it ISN'T the Equivalent of a UTM. PERIOD.

    Why are you playing head games with this?

    You know and acknowledged that the first N steps of ⟨Ĥ⟩ correctly
    simulated by embedded_H are the actual behavior of ⟨Ĥ⟩ for these first N
    steps.


    Right, but we don't care about that. We care about the TOTAL behavior of
    the input, which H never gets to see, because it gives up.




    When Ĥ is applied to ⟨Ĥ⟩
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are the actual behavior of this input:
    (a) Ĥ.q0 The input ⟨Ĥ⟩ is copied then transitions to embedded_H
    (b) embedded_H is applied to ⟨Ĥ⟩ ⟨Ĥ⟩ (input and copy) which simulates ⟨Ĥ⟩ applied to ⟨Ĥ⟩
    (c) *which begins at its own simulated ⟨Ĥ.q0⟩ to repeat the process*

    When N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are performed (unless we are playing head games) we can see that ⟨Ĥ⟩ cannot possibly reach its own final state of ⟨Ĥ.qn⟩ in any finite number of steps.

N steps could be reaching (c) once, or N steps could be reaching (c) 10,000 times.
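As a rough illustration only, the nesting described in (a)-(c) can be sketched in C, with plain recursion standing in for simulation by embedded_H (simulate_H_hat, copies and N are names invented for this sketch, not part of the Linz construction):

#include <stdio.h>

#define N 10000              // stands in for the simulation step budget

static int copies;           // nested simulated copies of Ĥ so far

// Each simulated copy of Ĥ copies its input and hands ⟨Ĥ⟩ ⟨Ĥ⟩ to its own
// embedded_H, which begins simulating yet another copy, until the step
// budget runs out and the watching embedded_H gives up.
void simulate_H_hat(void)
{
    if (copies >= N) {
        printf("gave up after %d nested copies; none reached a final state\n", copies);
        return;
    }
    ++copies;                // (a)+(b): copy the input and enter embedded_H
    simulate_H_hat();        // (c): the simulated copy repeats the process
}

int main(void)
{
    simulate_H_hat();
    return 0;
}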


    We know that

    H (M) w needs to go to qy if M w will halt when actually run (By the definition of a Halt decider)

    H (Ĥ) (Ĥ) goes to qn (by your assertions)

    Ĥ (Ĥ) will go to Ĥ.qn and halt when actually run.

    THEREFORE, H was just WRONG, BY DEFINITION.

    Also UTM (Ĥ) (Ĥ) will halt just like Ĥ (Ĥ)

    So, if you want to use the alternate definition, that

    H (M) w needs to go to qy if UTM (M) w halts.

    Note, it is UTM (M) w, which ALWAYS will have the same behavior for a
    given input. Not "the correct simulation done by H".
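Stated in the same ⊢* notation used elsewhere in the thread for Ĥ, the requirement being applied here (the standard Linz halt-decider definition) is:

H.q0 ⟨M⟩ w ⊢* H.qy   iff M applied to w halts
H.q0 ⟨M⟩ w ⊢* H.qn   iff M applied to w does not halt

Since UTM ⟨M⟩ w reproduces the behavior of M applied to w, substituting "UTM ⟨M⟩ w halts" for "M applied to w halts" specifies exactly the same mapping.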

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Thu Apr 20 08:06:37 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/20/23 7:56 AM, olcott wrote:
    On 4/20/2023 6:23 AM, Richard Damon wrote:
    On 4/20/23 12:04 AM, olcott wrote:
    On 4/19/2023 10:41 PM, Richard Damon wrote:
    On 4/19/23 11:29 PM, olcott wrote:
    On 4/19/2023 9:16 PM, Richard Damon wrote:
    On 4/19/23 9:59 PM, olcott wrote:
    On 4/19/2023 8:38 PM, Richard Damon wrote:
    On 4/19/23 9:25 PM, olcott wrote:
    On 4/19/2023 8:08 PM, Richard Damon wrote:
    On 4/19/23 8:52 PM, olcott wrote:
    On 4/19/2023 7:45 PM, Richard Damon wrote:
    On 4/19/23 8:31 PM, olcott wrote:
    On 4/19/2023 7:07 PM, Richard Damon wrote:
    On 4/19/23 7:16 PM, olcott wrote:
    On 4/19/2023 5:49 PM, Richard Damon wrote:
    On 4/19/23 11:05 AM, olcott wrote:
    On 4/19/2023 6:14 AM, Richard Damon wrote:
    On 4/18/23 11:48 PM, olcott wrote:

    *You keep slip sliding with the fallacy of >>>>>>>>>>>>>>>>>>> equivocation error*
    The actual simulated input: ⟨Ĥ⟩ that embedded_H must >>>>>>>>>>>>>>>>>>> compute its mapping
    from never reaches its simulated final state of >>>>>>>>>>>>>>>>>>> ⟨Ĥ.qn⟩ even after 10,000
    necessarily correct recursive simulations because ⟨Ĥ⟩ >>>>>>>>>>>>>>>>>>> is defined to have
    a pathological relationship to embedded_H. >>>>>>>>>>>>>>>>>>

    An YOU keep on falling into your Strawman error. The >>>>>>>>>>>>>>>>>> question is NOT what does the "simulation by H" show, >>>>>>>>>>>>>>>>>> but what is the actual behavior of the actual machine >>>>>>>>>>>>>>>>>> the input represents.



    When a simulating halt decider correctly simulates N >>>>>>>>>>>>>>>>> steps of its input
    it derives the exact same N steps that a pure UTM would >>>>>>>>>>>>>>>>> derive because
    it is itself a UTM with extra features.


    No, it ISN'T a UTM because if fails to meeet the >>>>>>>>>>>>>>>> definition of a UTM.

    You are just proving that you are a pathological liar >>>>>>>>>>>>>>>> that doesn't know what he is talking about.

    My reviewers cannot show that any of the extra features >>>>>>>>>>>>>>>>> added to the UTM
    change the behavior of the simulated input for the >>>>>>>>>>>>>>>>> first N steps of
    simulation:
    (a) Watching the behavior doesn't change it. >>>>>>>>>>>>>>>>> (b) Matching non-halting behavior patterns doesn't >>>>>>>>>>>>>>>>> change it
    (c) Even aborting the simulation after N steps doesn't >>>>>>>>>>>>>>>>> change the first N steps.

    Which don't matter, as the question


    The actual behavior that the actual input: ⟨Ĥ⟩ >>>>>>>>>>>>>>>>> represents is the
    behavior of the simulation of N steps by embedded_H >>>>>>>>>>>>>>>>> because embedded_H
    has the exact same behavior as a UTM for these first N >>>>>>>>>>>>>>>>> steps, and you
    already agreed with this.

    No, the actual behavior of the input is what the MACHINE >>>>>>>>>>>>>>>> Ĥ applied to (Ĥ) does.
    Because embedded_H is a UTM that has been augmented with >>>>>>>>>>>>>>> three features
    that cannot possibly cause its simulation of its input to >>>>>>>>>>>>>>> diverge from
    the simulation of a pure UTM for the first N steps of >>>>>>>>>>>>>>> simulation we know
    that it necessarily does provide the actual behavior >>>>>>>>>>>>>>> specified by this
    input for these N steps.

    And is no longer a UTM, since if fails to meet the >>>>>>>>>>>>>> requirement of a UTM

    As you already agreed:
    The behavior of N steps of ⟨Ĥ⟩ simulated by embedded_H must >>>>>>>>>>>>> are the actual behavior of these N steps because

    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it >>>>>>>>>>>>> (c) Even aborting the simulation after N steps doesn't >>>>>>>>>>>>> change the
    first N steps.




    But a UTM doesn't simulate just "N" steps of its input, but >>>>>>>>>>>> ALL of them.


    Yet when embedded_H simulates N steps of ⟨Ĥ⟩ this is the >>>>>>>>>>> actual behavior
    of ⟨Ĥ⟩ for these N steps, thus when embedded_H simulates 10,000
    recursive simulations these are the actual behavior of ⟨Ĥ⟩. >>>>>>>>>>>


    Yes, but doesn't actually show the ACTUAL behavior of the
    input as defined,
    There is only one actual behavior of the actual input and this >>>>>>>>> behavior
    is correctly demonstrated by N steps of ⟨Ĥ⟩ simulated by >>>>>>>>> embedded_H.

    Nope, Read the problem definition.

    The behavior to be decided by a Halt Decider is the behavior of >>>>>>>> the ACTUAL MACHINE which is decribed by the input.

    No matter what the problem definition says the actual behavior of >>>>>>> the
    actual input must necessarily be the N steps simulated by
    embedded_H.

    The only alternative is to simply disbelieve in UTMs.


    NOPE, Since H isn't a UTM, because it doesn't meet the
    REQUIREMENTS of a UTM, the statement is meaningless.
    It <is> equivalent to a UTM for the first N steps that can include
    10,000 recursive simulations.


    Which means it ISN'T the Equivalent of a UTM. PERIOD.

    Why are you playing head games with this?

    You know and acknowledged that the first N steps of ⟨Ĥ⟩ correctly
    simulated by embedded_H are the actual behavior of ⟨Ĥ⟩ for these first N
    steps.


    Right, but we don't care about that. We care about the TOTAL behavior
    of the input, which H never gets to see, because it gives up.




    When Ĥ is applied to ⟨Ĥ⟩
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are the actual behavior of this input:
    (a) Ĥ.q0 The input ⟨Ĥ⟩ is copied then transitions to embedded_H
    (b) embedded_H is applied to ⟨Ĥ⟩ ⟨Ĥ⟩ (input and copy) which simulates
    ⟨Ĥ⟩ applied to ⟨Ĥ⟩
    (c) *which begins at its own simulated ⟨Ĥ.q0⟩ to repeat the process*

    Until the outer embedded_H used by Ĥ reaches the point that it decides
    to stop its simulation, and the whole simulation ends with just partial
    results and it decides to go to qn and Ĥ Halts.

    This MUST happen, as you say this is what H does.

    If not, you are just admitting to be a stinking liar.


    When N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are performed (unless we are playing head games) we can see that ⟨Ĥ⟩ cannot possibly reach its own final state of ⟨Ĥ.qn⟩ in any finite number of steps.


    no, (Ĥ) (Ĥ) CAN reach its own final state when simulated by an ACTUAL
    UTM, that doesn't stop. H / embedded_H isn't such a machine (so is not
    even a UTM). It doesn't matter what H gets, it matters what the actual
    machine does.

N steps could be reaching (c) once, or N steps could be reaching (c) 10,000 times.

    And then the top embedded_H aborts its simulation, and goes to qn,
    making the COMPLETE simulation of that input see that final end.

    You are just exhibiting your God complex. You think H must be God and
    able to get the right answer. H isn't God, and neither are you.

    H, and you, are restricted by the rules. H, by the rules, needs to
answer about the actual machine represented by the input, or the results
    of an actual UTM simulating its input, even though it can't do that itself.

    What matters is what ACTUALLY HAPPENS to the ACTUAL MACHINE, not what H "thinks" is going to happen by its limited senses and simulation.

    YOU FAIL.



    We know that

    H (M) w needs to go to qy if M w will halt when actually run (By the
    definition of a Halt decider)

    H (Ĥ) (Ĥ) goes to qn (by your assertions)

    Ĥ (Ĥ) will go to Ĥ.qn and halt when actually run.

    THEREFORE, H was just WRONG, BY DEFINITION.

    Also UTM (Ĥ) (Ĥ) will halt just like Ĥ (Ĥ)

    So, if you want to use the alternate definition, that

    H (M) w needs to go to qy if UTM (M) w halts.

    Note, it is UTM (M) w, which ALWAYS will have the same behavior for a
    given input. Not "the correct simulation done by H".


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Thu Apr 20 09:59:42 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/20/2023 7:06 AM, Richard Damon wrote:
    On 4/20/23 7:56 AM, olcott wrote:
    On 4/20/2023 6:23 AM, Richard Damon wrote:
    On 4/20/23 12:04 AM, olcott wrote:
    On 4/19/2023 10:41 PM, Richard Damon wrote:
    On 4/19/23 11:29 PM, olcott wrote:
    On 4/19/2023 9:16 PM, Richard Damon wrote:
    On 4/19/23 9:59 PM, olcott wrote:
    On 4/19/2023 8:38 PM, Richard Damon wrote:
    On 4/19/23 9:25 PM, olcott wrote:
    On 4/19/2023 8:08 PM, Richard Damon wrote:
    On 4/19/23 8:52 PM, olcott wrote:
    On 4/19/2023 7:45 PM, Richard Damon wrote:
    On 4/19/23 8:31 PM, olcott wrote:
    On 4/19/2023 7:07 PM, Richard Damon wrote:
    On 4/19/23 7:16 PM, olcott wrote:
    On 4/19/2023 5:49 PM, Richard Damon wrote:
    On 4/19/23 11:05 AM, olcott wrote:
    On 4/19/2023 6:14 AM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>> On 4/18/23 11:48 PM, olcott wrote:

    *You keep slip sliding with the fallacy of >>>>>>>>>>>>>>>>>>>> equivocation error*
    The actual simulated input: ⟨Ĥ⟩ that embedded_H must >>>>>>>>>>>>>>>>>>>> compute its mapping
    from never reaches its simulated final state of >>>>>>>>>>>>>>>>>>>> ⟨Ĥ.qn⟩ even after 10,000
    necessarily correct recursive simulations because >>>>>>>>>>>>>>>>>>>> ⟨Ĥ⟩ is defined to have
    a pathological relationship to embedded_H. >>>>>>>>>>>>>>>>>>>

    An YOU keep on falling into your Strawman error. The >>>>>>>>>>>>>>>>>>> question is NOT what does the "simulation by H" show, >>>>>>>>>>>>>>>>>>> but what is the actual behavior of the actual machine >>>>>>>>>>>>>>>>>>> the input represents.



    When a simulating halt decider correctly simulates N >>>>>>>>>>>>>>>>>> steps of its input
    it derives the exact same N steps that a pure UTM >>>>>>>>>>>>>>>>>> would derive because
    it is itself a UTM with extra features.


    No, it ISN'T a UTM because if fails to meeet the >>>>>>>>>>>>>>>>> definition of a UTM.

    You are just proving that you are a pathological liar >>>>>>>>>>>>>>>>> that doesn't know what he is talking about.

    My reviewers cannot show that any of the extra >>>>>>>>>>>>>>>>>> features added to the UTM
    change the behavior of the simulated input for the >>>>>>>>>>>>>>>>>> first N steps of
    simulation:
    (a) Watching the behavior doesn't change it. >>>>>>>>>>>>>>>>>> (b) Matching non-halting behavior patterns doesn't >>>>>>>>>>>>>>>>>> change it
    (c) Even aborting the simulation after N steps doesn't >>>>>>>>>>>>>>>>>> change the first N steps.

    Which don't matter, as the question


    The actual behavior that the actual input: ⟨Ĥ⟩ >>>>>>>>>>>>>>>>>> represents is the
    behavior of the simulation of N steps by embedded_H >>>>>>>>>>>>>>>>>> because embedded_H
    has the exact same behavior as a UTM for these first N >>>>>>>>>>>>>>>>>> steps, and you
    already agreed with this.

    No, the actual behavior of the input is what the >>>>>>>>>>>>>>>>> MACHINE Ĥ applied to (Ĥ) does.
    Because embedded_H is a UTM that has been augmented with >>>>>>>>>>>>>>>> three features
    that cannot possibly cause its simulation of its input >>>>>>>>>>>>>>>> to diverge from
    the simulation of a pure UTM for the first N steps of >>>>>>>>>>>>>>>> simulation we know
    that it necessarily does provide the actual behavior >>>>>>>>>>>>>>>> specified by this
    input for these N steps.

    And is no longer a UTM, since if fails to meet the >>>>>>>>>>>>>>> requirement of a UTM

    As you already agreed:
    The behavior of N steps of ⟨Ĥ⟩ simulated by embedded_H >>>>>>>>>>>>>> must are the actual behavior of these N steps because >>>>>>>>>>>>>>
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it >>>>>>>>>>>>>> (c) Even aborting the simulation after N steps doesn't >>>>>>>>>>>>>> change the
    first N steps.




    But a UTM doesn't simulate just "N" steps of its input, but >>>>>>>>>>>>> ALL of them.


    Yet when embedded_H simulates N steps of ⟨Ĥ⟩ this is the >>>>>>>>>>>> actual behavior
    of ⟨Ĥ⟩ for these N steps, thus when embedded_H simulates 10,000
    recursive simulations these are the actual behavior of ⟨Ĥ⟩. >>>>>>>>>>>>


    Yes, but doesn't actually show the ACTUAL behavior of the >>>>>>>>>>> input as defined,
    There is only one actual behavior of the actual input and this >>>>>>>>>> behavior
    is correctly demonstrated by N steps of ⟨Ĥ⟩ simulated by >>>>>>>>>> embedded_H.

    Nope, Read the problem definition.

    The behavior to be decided by a Halt Decider is the behavior of >>>>>>>>> the ACTUAL MACHINE which is decribed by the input.

    No matter what the problem definition says the actual behavior >>>>>>>> of the
    actual input must necessarily be the N steps simulated by
    embedded_H.

    The only alternative is to simply disbelieve in UTMs.


    NOPE, Since H isn't a UTM, because it doesn't meet the
    REQUIREMENTS of a UTM, the statement is meaningless.
    It <is> equivalent to a UTM for the first N steps that can include >>>>>> 10,000 recursive simulations.


    Which means it ISN'T the Equivalent of a UTM. PERIOD.

    Why are you playing head games with this?

    You know and acknowledged that the first N steps of ⟨Ĥ⟩ correctly >>>> simulated by embedded_H are the actual behavior of ⟨Ĥ⟩ for these
    first N
    steps.


    Right, but we don't care about that. We care about the TOTAL behavior
    of the input, which H never gets to see, because it gives up.




    When Ĥ is applied to ⟨Ĥ⟩
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are the actual
    behavior of this input:
    (a) Ĥ.q0 The input ⟨Ĥ⟩ is copied then transitions to embedded_H
    (b) embedded_H is applied to ⟨Ĥ⟩ ⟨Ĥ⟩ (input and copy) which simulates
    ⟨Ĥ⟩ applied to ⟨Ĥ⟩
    (c) *which begins at its own simulated ⟨Ĥ.q0⟩ to repeat the process*

    Until the outer embedded_H used by Ĥ reaches the point that it decides
    to stop its simulation, and the whole simulation ends with just partial results and it decides to go to qn and Ĥ Halts.


You keep dodging the key truth: when N steps of ⟨Ĥ⟩ are correctly simulated by embedded_H and N = 30,000, then we know that the actual
behavior of ⟨Ĥ⟩ is 10,000 recursive simulations that have never reached their final state of ⟨Ĥ.qn⟩.




    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mr Flibble@21:1/5 to olcott on Thu Apr 20 18:36:02 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 19/04/2023 8:39 pm, olcott wrote:
    On 4/19/2023 1:47 PM, Mr Flibble wrote:
    On 18/04/2023 11:39 pm, olcott wrote:
    On 4/18/2023 4:55 PM, Mr Flibble wrote:
    On 18/04/2023 4:58 pm, olcott wrote:
    On 4/18/2023 6:32 AM, Richard Damon wrote:
    On 4/18/23 1:00 AM, olcott wrote:
    A simulating halt decider correctly predicts whether or not its
    correctly simulated input can possibly reach its own final state and >>>>>>> halt. It does this by correctly recognizing several non-halting
    behavior
    patterns in a finite number of steps of correct simulation.
    Inputs that
    do terminate are simply simulated until they complete.



    Except t doesn't o this for the "pathological" program.

    The "Pathological Program" when built on such a Decider that does
    give an answer, which you say will be non-halting, and then
    "Correctly Simulated" by giving it representation to a UTM, we see >>>>>> that the simulation reaches a final state.

    Thus, your H was WRONG t make the answer. And the problem is you
    have added a pattern that isn't always non-halting.

    When a simulating halt decider correctly simulates N steps of its >>>>>>> input
    it derives the exact same N steps that a pure UTM would derive
    because
    it is itself a UTM with extra features.

    But if ISN'T a "UTM" any more, because some of the features you
    added have removed essential features needed for it to be an
    actual UTM. That you make this claim shows you don't actually know >>>>>> what a UTM is.

    This is like saying a NASCAR Racing Car is a Street Legal vehicle, >>>>>> since it started as one and just had some extra features axded.


    My reviewers cannot show that any of the extra features added to >>>>>>> the UTM
    change the behavior of the simulated input for the first N steps >>>>>>> of simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it
    (c) Even aborting the simulation after N steps doesn't change the >>>>>>> first N steps.

    No one claims that it doesn't correctly reproduce the first N
    steps of the behavior, that is a Strawman argumen.


    Because of all this we can know that the first N steps of input D >>>>>>> simulated by simulating halt decider H are the actual behavior
    that D
    presents to H for these same N steps.

    *computation that halts*… “the Turing machine will halt whenever >>>>>>> it enters a final state” (Linz:1990:234)rrr

    Right, so we are concerned about the behavior of the ACTUAL
    machine, not a partial simulation of it.
    H(D,D) returns non-halting, but D(D) Halts, so the answer is wrong. >>>>>>

    When we see (after N steps) that D correctly simulated by H cannot >>>>>>> possibly reach its simulated final state in any finite number of >>>>>>> steps
    of correct simulation then we have conclusive proof that D
    presents non-
    halting behavior to H.

    But it isn't "Correctly Simulated by H"
    You agreed that the first N steps are correctly simulated.

    It turns out that the non-halting behavior pattern is correctly
    recognized in the first N steps.

    Your assumption that a program that calls H is non-halting is
    erroneous:


    My new paper anchors its ideas in actual Turing machines so it is
unequivocal. The first two pages are only about the Linz Turing
    machine based proof.

    The H/D material is now on a single page and all reference
    to the x86 language has been stripped and replaced with
    analysis entirely in C.

With this new paper even Richard admits that the first N steps simulated
UTM-style by a simulating halt decider are necessarily the
actual behavior of these N steps.

    *Simulating (partial) Halt Deciders Defeat the Halting Problem Proofs*
    https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs

    void Px(void (*x)())
    {
         (void) H(x, x);
         return;
    }

    Px halts (it discards the result that H returns); your decider
    thinks that Px is non-halting which is an obvious error due to a
    design flaw in the architecture of your decider.  Only the Flibble
    Signaling Simulating Halt Decider (SSHD) correctly handles this case.

    Nope. For H to be a halt decider it must return a halt decision to its
    caller in finite time

    Although H must always return to some caller H is not allowed to return
    to any caller that essentially calls H in infinite recursion.

    That is a contradiction: either H MUST ALWAYS return to its caller or
    MUST NOT; your mistake is in thinking that there is some "get out"
    clause for SHDs.

    /Flibble

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Mr Flibble on Thu Apr 20 12:49:57 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/20/2023 12:32 PM, Mr Flibble wrote:
    On 19/04/2023 11:52 pm, olcott wrote:
    On 4/19/2023 4:14 PM, Mr Flibble wrote:
    On 19/04/2023 10:10 pm, olcott wrote:
    On 4/19/2023 3:32 PM, Mr Flibble wrote:
    On 19/04/2023 8:39 pm, olcott wrote:
    On 4/19/2023 1:47 PM, Mr Flibble wrote:
    On 18/04/2023 11:39 pm, olcott wrote:
    On 4/18/2023 4:55 PM, Mr Flibble wrote:
    On 18/04/2023 4:58 pm, olcott wrote:
    On 4/18/2023 6:32 AM, Richard Damon wrote:
    On 4/18/23 1:00 AM, olcott wrote:
    A simulating halt decider correctly predicts whether or not its >>>>>>>>>>>> correctly simulated input can possibly reach its own final >>>>>>>>>>>> state and
    halt. It does this by correctly recognizing several
    non-halting behavior
    patterns in a finite number of steps of correct simulation. >>>>>>>>>>>> Inputs that
    do terminate are simply simulated until they complete. >>>>>>>>>>>>


    Except t doesn't o this for the "pathological" program.

    The "Pathological Program" when built on such a Decider that >>>>>>>>>>> does give an answer, which you say will be non-halting, and >>>>>>>>>>> then "Correctly Simulated" by giving it representation to a >>>>>>>>>>> UTM, we see that the simulation reaches a final state.

    Thus, your H was WRONG t make the answer. And the problem is >>>>>>>>>>> you have added a pattern that isn't always non-halting.

    When a simulating halt decider correctly simulates N steps >>>>>>>>>>>> of its input
    it derives the exact same N steps that a pure UTM would >>>>>>>>>>>> derive because
    it is itself a UTM with extra features.

    But if ISN'T a "UTM" any more, because some of the features >>>>>>>>>>> you added have removed essential features needed for it to be >>>>>>>>>>> an actual UTM. That you make this claim shows you don't
    actually know what a UTM is.

    This is like saying a NASCAR Racing Car is a Street Legal >>>>>>>>>>> vehicle, since it started as one and just had some extra >>>>>>>>>>> features axded.


    My reviewers cannot show that any of the extra features >>>>>>>>>>>> added to the UTM
    change the behavior of the simulated input for the first N >>>>>>>>>>>> steps of simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it >>>>>>>>>>>> (c) Even aborting the simulation after N steps doesn't >>>>>>>>>>>> change the first N steps.

    No one claims that it doesn't correctly reproduce the first N >>>>>>>>>>> steps of the behavior, that is a Strawman argumen.


    Because of all this we can know that the first N steps of >>>>>>>>>>>> input D
    simulated by simulating halt decider H are the actual
    behavior that D
    presents to H for these same N steps.

    *computation that halts*… “the Turing machine will halt >>>>>>>>>>>> whenever it enters a final state” (Linz:1990:234)rrr

    Right, so we are concerned about the behavior of the ACTUAL >>>>>>>>>>> machine, not a partial simulation of it.
    H(D,D) returns non-halting, but D(D) Halts, so the answer is >>>>>>>>>>> wrong.


    When we see (after N steps) that D correctly simulated by H >>>>>>>>>>>> cannot
    possibly reach its simulated final state in any finite >>>>>>>>>>>> number of steps
    of correct simulation then we have conclusive proof that D >>>>>>>>>>>> presents non-
    halting behavior to H.

    But it isn't "Correctly Simulated by H"
    You agreed that the first N steps are correctly simulated. >>>>>>>>>>
    It turns out that the non-halting behavior pattern is correctly >>>>>>>>>> recognized in the first N steps.

    Your assumption that a program that calls H is non-halting is >>>>>>>>> erroneous:


    My new paper anchors its ideas in actual Turing machines so it is >>>>>>>> unequivocal. The first two pages re only about the Linz Turing >>>>>>>> machine based proof.

    The H/D material is now on a single page and all reference
    to the x86 language has been stripped and replaced with
    analysis entirely in C.

    With this new paper even Richard admits that the first N steps >>>>>>>> UTM based simulated by a simulating halt decider are necessarily >>>>>>>> the
    actual behavior of these N steps.

    *Simulating (partial) Halt Deciders Defeat the Halting Problem >>>>>>>> Proofs*
    https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs

    void Px(void (*x)())
    {
         (void) H(x, x);
         return;
    }

    Px halts (it discards the result that H returns); your decider >>>>>>>>> thinks that Px is non-halting which is an obvious error due to >>>>>>>>> a design flaw in the architecture of your decider.  Only the >>>>>>>>> Flibble Signaling Simulating Halt Decider (SSHD) correctly
    handles this case.

    Nope. For H to be a halt decider it must return a halt decision
    to its caller in finite time

    Although H must always return to some caller H is not allowed to
    return
    to any caller that essentially calls H in infinite recursion.

    The Flibble Signaling Simulating Halt Decider (SSHD) does not have
    any infinite recursion thereby proving that

    It overrode that behavior that was specified by the machine code for
    Px.

    Nope. You SHD is not a halt decider as

    I was not even talking about my SHD, I was talking about how your
    program does its simulation incorrectly.

    My SSHD does not do its simulation incorrectly: it does its simulation
    just like I have defined it as evidenced by the fact that it returns a correct halting decision for Px; something your broken SHD gets wrong.


    In order for you to have Px simulated by H terminate normally you must
    change the behavior of Px away from the behavior that its x86 code
    specifies.

void Px(void (*x)())
{
  (void) H(x, x);
  return;
}

    Px correctly simulated by H cannot possibly reach past its machine
    address of: [00001b3d].

    _Px()
    [00001b32] 55 push ebp
    [00001b33] 8bec mov ebp,esp
    [00001b35] 8b4508 mov eax,[ebp+08]
    [00001b38] 50 push eax // push address of Px
    [00001b39] 8b4d08 mov ecx,[ebp+08]
    [00001b3c] 51 push ecx // push address of Px
    [00001b3d] e800faffff call 00001542 // Call H
    [00001b42] 83c408 add esp,+08
    [00001b45] 5d pop ebp
    [00001b46] c3 ret
    Size in bytes:(0021) [00001b46]

What you are doing is the same as recognizing that _Infinite_Loop()
never halts, then forcing it to break out of its infinite loop and jump to
its "ret" instruction:

    _Infinite_Loop()
    [00001c62] 55 push ebp
    [00001c63] 8bec mov ebp,esp
    [00001c65] ebfe jmp 00001c65
    [00001c67] 5d pop ebp
    [00001c68] c3 ret
    Size in bytes:(0007) [00001c68]

Your system doesn't merely report on the behavior of its input; it also interferes with the behavior of its input.
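For concreteness, here is a minimal C sketch of the situation being argued over, with direct execution standing in for the x86 emulation and a single global flag standing in for pattern matching (abort_point and simulating are names invented for this sketch; H and Px follow the shapes shown above):

#include <stdio.h>
#include <setjmp.h>

static jmp_buf abort_point;  // where the outer H resumes when it aborts
static int     simulating;   // nonzero while H is "simulating" its input

// H "simulates" its input by running it; if the input calls H again while
// a simulation is in progress, H treats that as the non-halting recursion
// pattern, tears the whole simulation down, and answers 0.
int H(void (*x)(), void (*y)())
{
    if (simulating)
        longjmp(abort_point, 1);     // pattern detected: abort
    if (setjmp(abort_point) != 0) {
        simulating = 0;
        return 0;                    // report "does not halt"
    }
    simulating = 1;
    x(y);                            // run the input as the "simulation"
    simulating = 0;
    return 1;                        // the input reached its return
}

void Px(void (*x)())
{
    (void) H(x, x);                  // the verdict is discarded
    return;
}

int main(void)
{
    printf("H(Px, Px) = %d\n", H(Px, Px));   // prints 0
    Px(Px);                                   // run directly: Px returns
    puts("Px(Px) reached its return");
    return 0;
}

Compiled and run, this sketch prints H(Px, Px) = 0 and then reaches the final puts: the aborting H reports non-halting while Px, when actually executed, reaches its "return".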


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mr Flibble@21:1/5 to olcott on Thu Apr 20 18:32:12 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 19/04/2023 11:52 pm, olcott wrote:
    On 4/19/2023 4:14 PM, Mr Flibble wrote:
    On 19/04/2023 10:10 pm, olcott wrote:
    On 4/19/2023 3:32 PM, Mr Flibble wrote:
    On 19/04/2023 8:39 pm, olcott wrote:
    On 4/19/2023 1:47 PM, Mr Flibble wrote:
    On 18/04/2023 11:39 pm, olcott wrote:
    On 4/18/2023 4:55 PM, Mr Flibble wrote:
    On 18/04/2023 4:58 pm, olcott wrote:
    On 4/18/2023 6:32 AM, Richard Damon wrote:
    On 4/18/23 1:00 AM, olcott wrote:
    A simulating halt decider correctly predicts whether or not its >>>>>>>>>>> correctly simulated input can possibly reach its own final >>>>>>>>>>> state and
    halt. It does this by correctly recognizing several
    non-halting behavior
    patterns in a finite number of steps of correct simulation. >>>>>>>>>>> Inputs that
    do terminate are simply simulated until they complete.



    Except t doesn't o this for the "pathological" program.

    The "Pathological Program" when built on such a Decider that >>>>>>>>>> does give an answer, which you say will be non-halting, and >>>>>>>>>> then "Correctly Simulated" by giving it representation to a >>>>>>>>>> UTM, we see that the simulation reaches a final state.

    Thus, your H was WRONG t make the answer. And the problem is >>>>>>>>>> you have added a pattern that isn't always non-halting.

    When a simulating halt decider correctly simulates N steps of >>>>>>>>>>> its input
    it derives the exact same N steps that a pure UTM would
    derive because
    it is itself a UTM with extra features.

    But if ISN'T a "UTM" any more, because some of the features >>>>>>>>>> you added have removed essential features needed for it to be >>>>>>>>>> an actual UTM. That you make this claim shows you don't
    actually know what a UTM is.

    This is like saying a NASCAR Racing Car is a Street Legal
    vehicle, since it started as one and just had some extra
    features axded.


    My reviewers cannot show that any of the extra features added >>>>>>>>>>> to the UTM
    change the behavior of the simulated input for the first N >>>>>>>>>>> steps of simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it >>>>>>>>>>> (c) Even aborting the simulation after N steps doesn't change >>>>>>>>>>> the first N steps.

    No one claims that it doesn't correctly reproduce the first N >>>>>>>>>> steps of the behavior, that is a Strawman argumen.


    Because of all this we can know that the first N steps of >>>>>>>>>>> input D
    simulated by simulating halt decider H are the actual
    behavior that D
    presents to H for these same N steps.

    *computation that halts*… “the Turing machine will halt >>>>>>>>>>> whenever it enters a final state” (Linz:1990:234)rrr

    Right, so we are concerned about the behavior of the ACTUAL >>>>>>>>>> machine, not a partial simulation of it.
    H(D,D) returns non-halting, but D(D) Halts, so the answer is >>>>>>>>>> wrong.


    When we see (after N steps) that D correctly simulated by H >>>>>>>>>>> cannot
    possibly reach its simulated final state in any finite number >>>>>>>>>>> of steps
    of correct simulation then we have conclusive proof that D >>>>>>>>>>> presents non-
    halting behavior to H.

    But it isn't "Correctly Simulated by H"
    You agreed that the first N steps are correctly simulated.

    It turns out that the non-halting behavior pattern is correctly >>>>>>>>> recognized in the first N steps.

    Your assumption that a program that calls H is non-halting is
    erroneous:


    My new paper anchors its ideas in actual Turing machines so it is >>>>>>> unequivocal. The first two pages re only about the Linz Turing
    machine based proof.

    The H/D material is now on a single page and all reference
    to the x86 language has been stripped and replaced with
    analysis entirely in C.

    With this new paper even Richard admits that the first N steps
    UTM based simulated by a simulating halt decider are necessarily the >>>>>>> actual behavior of these N steps.

    *Simulating (partial) Halt Deciders Defeat the Halting Problem
    Proofs*
    https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs

    void Px(void (*x)())
    {
         (void) H(x, x);
         return;
    }

    Px halts (it discards the result that H returns); your decider >>>>>>>> thinks that Px is non-halting which is an obvious error due to a >>>>>>>> design flaw in the architecture of your decider.  Only the
    Flibble Signaling Simulating Halt Decider (SSHD) correctly
    handles this case.

    Nope. For H to be a halt decider it must return a halt decision to >>>>>> its caller in finite time

    Although H must always return to some caller H is not allowed to
    return
    to any caller that essentially calls H in infinite recursion.

    The Flibble Signaling Simulating Halt Decider (SSHD) does not have
    any infinite recursion thereby proving that

    It overrode that behavior that was specified by the machine code for Px.

    Nope. You SHD is not a halt decider as

    I was not even talking about my SHD, I was talking about how your
    program does its simulation incorrectly.

    My SSHD does not do its simulation incorrectly: it does its simulation
    just like I have defined it as evidenced by the fact that it returns a
    correct halting decision for Px; something your broken SHD gets wrong.


    My new write-up proves that my Turing-machine based SHD necessarily must simulate the first N steps of its input correctly because for the first
    N steps embedded_H <is> a pure UTM that can't possibly do any simulation incorrectly for the first N steps of simulation.

    Again, the mistake you are making is assuming that a program that
    invokes the decider whilst it is being decided upon must cause an
    infinite recursion when the halt decider is of the simulating type: I
    have shown otherwise.

    /Flibble

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mr Flibble@21:1/5 to olcott on Thu Apr 20 20:08:30 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 20/04/2023 6:49 pm, olcott wrote:
    On 4/20/2023 12:32 PM, Mr Flibble wrote:
    On 19/04/2023 11:52 pm, olcott wrote:
    On 4/19/2023 4:14 PM, Mr Flibble wrote:
    On 19/04/2023 10:10 pm, olcott wrote:
    On 4/19/2023 3:32 PM, Mr Flibble wrote:
    On 19/04/2023 8:39 pm, olcott wrote:
    On 4/19/2023 1:47 PM, Mr Flibble wrote:
    On 18/04/2023 11:39 pm, olcott wrote:
    On 4/18/2023 4:55 PM, Mr Flibble wrote:
    On 18/04/2023 4:58 pm, olcott wrote:
    On 4/18/2023 6:32 AM, Richard Damon wrote:
    On 4/18/23 1:00 AM, olcott wrote:
    A simulating halt decider correctly predicts whether or not >>>>>>>>>>>>> its
    correctly simulated input can possibly reach its own final >>>>>>>>>>>>> state and
    halt. It does this by correctly recognizing several
    non-halting behavior
    patterns in a finite number of steps of correct simulation. >>>>>>>>>>>>> Inputs that
    do terminate are simply simulated until they complete. >>>>>>>>>>>>>


    Except t doesn't o this for the "pathological" program. >>>>>>>>>>>>
    The "Pathological Program" when built on such a Decider that >>>>>>>>>>>> does give an answer, which you say will be non-halting, and >>>>>>>>>>>> then "Correctly Simulated" by giving it representation to a >>>>>>>>>>>> UTM, we see that the simulation reaches a final state. >>>>>>>>>>>>
    Thus, your H was WRONG t make the answer. And the problem is >>>>>>>>>>>> you have added a pattern that isn't always non-halting. >>>>>>>>>>>>
    When a simulating halt decider correctly simulates N steps >>>>>>>>>>>>> of its input
    it derives the exact same N steps that a pure UTM would >>>>>>>>>>>>> derive because
    it is itself a UTM with extra features.

    But if ISN'T a "UTM" any more, because some of the features >>>>>>>>>>>> you added have removed essential features needed for it to >>>>>>>>>>>> be an actual UTM. That you make this claim shows you don't >>>>>>>>>>>> actually know what a UTM is.

    This is like saying a NASCAR Racing Car is a Street Legal >>>>>>>>>>>> vehicle, since it started as one and just had some extra >>>>>>>>>>>> features axded.


    My reviewers cannot show that any of the extra features >>>>>>>>>>>>> added to the UTM
    change the behavior of the simulated input for the first N >>>>>>>>>>>>> steps of simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it >>>>>>>>>>>>> (c) Even aborting the simulation after N steps doesn't >>>>>>>>>>>>> change the first N steps.

    No one claims that it doesn't correctly reproduce the first >>>>>>>>>>>> N steps of the behavior, that is a Strawman argumen.


    Because of all this we can know that the first N steps of >>>>>>>>>>>>> input D
    simulated by simulating halt decider H are the actual >>>>>>>>>>>>> behavior that D
    presents to H for these same N steps.

    *computation that halts*… “the Turing machine will halt >>>>>>>>>>>>> whenever it enters a final state” (Linz:1990:234)rrr >>>>>>>>>>>>
    Right, so we are concerned about the behavior of the ACTUAL >>>>>>>>>>>> machine, not a partial simulation of it.
    H(D,D) returns non-halting, but D(D) Halts, so the answer is >>>>>>>>>>>> wrong.


    When we see (after N steps) that D correctly simulated by H >>>>>>>>>>>>> cannot
    possibly reach its simulated final state in any finite >>>>>>>>>>>>> number of steps
    of correct simulation then we have conclusive proof that D >>>>>>>>>>>>> presents non-
    halting behavior to H.

    But it isn't "Correctly Simulated by H"
    You agreed that the first N steps are correctly simulated. >>>>>>>>>>>
    It turns out that the non-halting behavior pattern is correctly >>>>>>>>>>> recognized in the first N steps.

    Your assumption that a program that calls H is non-halting is >>>>>>>>>> erroneous:


    My new paper anchors its ideas in actual Turing machines so it is >>>>>>>>> unequivocal. The first two pages re only about the Linz Turing >>>>>>>>> machine based proof.

    The H/D material is now on a single page and all reference
    to the x86 language has been stripped and replaced with
    analysis entirely in C.

    With this new paper even Richard admits that the first N steps >>>>>>>>> UTM based simulated by a simulating halt decider are
    necessarily the
    actual behavior of these N steps.

    *Simulating (partial) Halt Deciders Defeat the Halting Problem >>>>>>>>> Proofs*
    https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs

    void Px(void (*x)())
    {
         (void) H(x, x);
         return;
    }

    Px halts (it discards the result that H returns); your decider >>>>>>>>>> thinks that Px is non-halting which is an obvious error due to >>>>>>>>>> a design flaw in the architecture of your decider.  Only the >>>>>>>>>> Flibble Signaling Simulating Halt Decider (SSHD) correctly >>>>>>>>>> handles this case.

    Nope. For H to be a halt decider it must return a halt decision >>>>>>>> to its caller in finite time

    Although H must always return to some caller H is not allowed to >>>>>>> return
    to any caller that essentially calls H in infinite recursion.

    The Flibble Signaling Simulating Halt Decider (SSHD) does not have >>>>>> any infinite recursion thereby proving that

    It overrode that behavior that was specified by the machine code
    for Px.

    Nope. You SHD is not a halt decider as

    I was not even talking about my SHD, I was talking about how your
    program does its simulation incorrectly.

    My SSHD does not do its simulation incorrectly: it does its simulation
    just like I have defined it as evidenced by the fact that it returns a
    correct halting decision for Px; something your broken SHD gets wrong.


    In order for you to have Px simulated by H terminate normally you must
    change the behavior of Px away from the behavior that its x86 code
    specifies.

    Your "x86 code" has nothing to do with how my halt decider works; I am
    using an entirely different simulation method, one that actually works.


    void Px(void (*x)())
    {
      (void) H(x, x);
      return;
    }

    Px correctly simulated by H cannot possibly reach past its machine
    address of: [00001b3d].

    _Px()
    [00001b32] 55         push ebp
    [00001b33] 8bec       mov ebp,esp
    [00001b35] 8b4508     mov eax,[ebp+08]
[00001b38] 50         push eax      // push address of Px
[00001b39] 8b4d08     mov ecx,[ebp+08]
[00001b3c] 51         push ecx      // push address of Px
[00001b3d] e800faffff call 00001542 // Call H
    [00001b42] 83c408     add esp,+08
    [00001b45] 5d         pop ebp
    [00001b46] c3         ret
    Size in bytes:(0021) [00001b46]

What you are doing is the same as recognizing that _Infinite_Loop()
never halts, then forcing it to break out of its infinite loop and jump to
its "ret" instruction:

    _Infinite_Loop()
    [00001c62] 55         push ebp
    [00001c63] 8bec       mov ebp,esp
    [00001c65] ebfe       jmp 00001c65
    [00001c67] 5d         pop ebp
    [00001c68] c3         ret
    Size in bytes:(0007) [00001c68]

    No I am not: there is no infinite loop in Px above; forking the
    simulation into two branches and returning a different halt decision to
    each branch is a perfectly valid SHD design; again a design, unlike
    yours, that actually works.
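A minimal sketch of the branch idea (this is not Flibble's actual SSHD: branch_answer is a name invented here, and two sequential runs stand in for a real fork of the simulation). Each branch of Px is handed a different verdict from H, and Px reaches its "return" in both branches:

#include <stdio.h>

static int branch_answer;    // the halt verdict handed to the current branch

// Inside a branch, H simply reports the verdict that branch was given
// (1 = halts, 0 = does not halt).
int H(void (*x)(), void (*y)())
{
    (void)x; (void)y;
    return branch_answer;
}

void Px(void (*x)())
{
    (void) H(x, x);          // Px discards whatever H answers
    return;
}

int main(void)
{
    branch_answer = 0;       // branch 1: H answers "does not halt"
    Px(Px);                  // Px still reaches its return

    branch_answer = 1;       // branch 2: H answers "halts"
    Px(Px);                  // Px reaches its return here too

    puts("Px reached its return in both branches");
    return 0;
}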


Your system doesn't merely report on the behavior of its input; it also interferes with the behavior of its input.

No it doesn't; H returns a value to its caller in finite time, so it
satisfies the requirements of a halt decider, unlike your SHD, which you
have to "abort" because your decider doesn't satisfy the requirements,
because your design is broken.

    /Flibble

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Mr Flibble on Thu Apr 20 14:20:16 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/20/2023 2:08 PM, Mr Flibble wrote:
    On 20/04/2023 6:49 pm, olcott wrote:
    On 4/20/2023 12:32 PM, Mr Flibble wrote:
    On 19/04/2023 11:52 pm, olcott wrote:
    On 4/19/2023 4:14 PM, Mr Flibble wrote:
    On 19/04/2023 10:10 pm, olcott wrote:
    On 4/19/2023 3:32 PM, Mr Flibble wrote:
    On 19/04/2023 8:39 pm, olcott wrote:
    On 4/19/2023 1:47 PM, Mr Flibble wrote:
    On 18/04/2023 11:39 pm, olcott wrote:
    On 4/18/2023 4:55 PM, Mr Flibble wrote:
    On 18/04/2023 4:58 pm, olcott wrote:
    On 4/18/2023 6:32 AM, Richard Damon wrote:
    On 4/18/23 1:00 AM, olcott wrote:
    A simulating halt decider correctly predicts whether or >>>>>>>>>>>>>> not its
    correctly simulated input can possibly reach its own final >>>>>>>>>>>>>> state and
    halt. It does this by correctly recognizing several >>>>>>>>>>>>>> non-halting behavior
    patterns in a finite number of steps of correct
    simulation. Inputs that
    do terminate are simply simulated until they complete. >>>>>>>>>>>>>>


    Except t doesn't o this for the "pathological" program. >>>>>>>>>>>>>
    The "Pathological Program" when built on such a Decider >>>>>>>>>>>>> that does give an answer, which you say will be
    non-halting, and then "Correctly Simulated" by giving it >>>>>>>>>>>>> representation to a UTM, we see that the simulation reaches >>>>>>>>>>>>> a final state.

    Thus, your H was WRONG t make the answer. And the problem >>>>>>>>>>>>> is you have added a pattern that isn't always non-halting. >>>>>>>>>>>>>
    When a simulating halt decider correctly simulates N steps >>>>>>>>>>>>>> of its input
    it derives the exact same N steps that a pure UTM would >>>>>>>>>>>>>> derive because
    it is itself a UTM with extra features.

    But if ISN'T a "UTM" any more, because some of the features >>>>>>>>>>>>> you added have removed essential features needed for it to >>>>>>>>>>>>> be an actual UTM. That you make this claim shows you don't >>>>>>>>>>>>> actually know what a UTM is.

    This is like saying a NASCAR Racing Car is a Street Legal >>>>>>>>>>>>> vehicle, since it started as one and just had some extra >>>>>>>>>>>>> features axded.


    My reviewers cannot show that any of the extra features >>>>>>>>>>>>>> added to the UTM
    change the behavior of the simulated input for the first N >>>>>>>>>>>>>> steps of simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it >>>>>>>>>>>>>> (c) Even aborting the simulation after N steps doesn't >>>>>>>>>>>>>> change the first N steps.

    No one claims that it doesn't correctly reproduce the first >>>>>>>>>>>>> N steps of the behavior, that is a Strawman argumen. >>>>>>>>>>>>>

    Because of all this we can know that the first N steps of >>>>>>>>>>>>>> input D
    simulated by simulating halt decider H are the actual >>>>>>>>>>>>>> behavior that D
    presents to H for these same N steps.

    *computation that halts*… “the Turing machine will halt >>>>>>>>>>>>>> whenever it enters a final state” (Linz:1990:234)rrr >>>>>>>>>>>>>
    Right, so we are concerned about the behavior of the ACTUAL >>>>>>>>>>>>> machine, not a partial simulation of it.
    H(D,D) returns non-halting, but D(D) Halts, so the answer >>>>>>>>>>>>> is wrong.


    When we see (after N steps) that D correctly simulated by >>>>>>>>>>>>>> H cannot
    possibly reach its simulated final state in any finite >>>>>>>>>>>>>> number of steps
    of correct simulation then we have conclusive proof that D >>>>>>>>>>>>>> presents non-
    halting behavior to H.

    But it isn't "Correctly Simulated by H"
    You agreed that the first N steps are correctly simulated. >>>>>>>>>>>>
    It turns out that the non-halting behavior pattern is correctly >>>>>>>>>>>> recognized in the first N steps.

    Your assumption that a program that calls H is non-halting is >>>>>>>>>>> erroneous:


    My new paper anchors its ideas in actual Turing machines so it is >>>>>>>>>> unequivocal. The first two pages re only about the Linz Turing >>>>>>>>>> machine based proof.

    The H/D material is now on a single page and all reference >>>>>>>>>> to the x86 language has been stripped and replaced with
    analysis entirely in C.

    With this new paper even Richard admits that the first N steps >>>>>>>>>> UTM based simulated by a simulating halt decider are
    necessarily the
    actual behavior of these N steps.

    *Simulating (partial) Halt Deciders Defeat the Halting Problem >>>>>>>>>> Proofs*
    https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs

    void Px(void (*x)())
    {
         (void) H(x, x);
         return;
    }

    Px halts (it discards the result that H returns); your
    decider thinks that Px is non-halting which is an obvious >>>>>>>>>>> error due to a design flaw in the architecture of your
    decider.  Only the Flibble Signaling Simulating Halt Decider >>>>>>>>>>> (SSHD) correctly handles this case.

    Nope. For H to be a halt decider it must return a halt decision >>>>>>>>> to its caller in finite time

    Although H must always return to some caller H is not allowed to >>>>>>>> return
    to any caller that essentially calls H in infinite recursion.

    The Flibble Signaling Simulating Halt Decider (SSHD) does not
    have any infinite recursion thereby proving that

    It overrode that behavior that was specified by the machine code
    for Px.

    Nope. You SHD is not a halt decider as

    I was not even talking about my SHD, I was talking about how your
    program does its simulation incorrectly.

    My SSHD does not do its simulation incorrectly: it does its
    simulation just like I have defined it as evidenced by the fact that
    it returns a correct halting decision for Px; something your broken
    SHD gets wrong.


    In order for you to have Px simulated by H terminate normally you must
    change the behavior of Px away from the behavior that its x86 code
    specifies.

    Your "x86 code" has nothing to do with how my halt decider works; I am
    using an entirely different simulation method, one that actually works.


    void Px(void (*x)())
    {
       (void) H(x, x);
       return;
    }

    Px correctly simulated by H cannot possibly reach past its machine
    address of: [00001b3d].

    _Px()
    [00001b32] 55         push ebp
    [00001b33] 8bec       mov ebp,esp
    [00001b35] 8b4508     mov eax,[ebp+08]
    [00001b38] 50         push eax      // push address of Px
    [00001b39] 8b4d08     mov ecx,[ebp+08]
    [00001b3c] 51         push ecx      // push address of Px
    [00001b3d] e800faffff call 00001542 // Call H
    [00001b42] 83c408     add esp,+08
    [00001b45] 5d         pop ebp
    [00001b46] c3         ret
    Size in bytes:(0021) [00001b46]

What you are doing is the same as recognizing that _Infinite_Loop()
never halts, then forcing it to break out of its infinite loop and jump to
its "ret" instruction:

    _Infinite_Loop()
    [00001c62] 55         push ebp
    [00001c63] 8bec       mov ebp,esp
    [00001c65] ebfe       jmp 00001c65
    [00001c67] 5d         pop ebp
    [00001c68] c3         ret
    Size in bytes:(0007) [00001c68]

    No I am not: there is no infinite loop in Px above; forking the
    simulation into two branches and returning a different halt decision to
    each branch is a perfectly valid SHD design; again a design, unlike
    yours, that actually works.

If you say that Px correctly simulated by H ever reaches its own final
"return" statement and halts, you are incorrect.


Your system doesn't merely report on the behavior of its input; it also
interferes with the behavior of its input.

No it doesn't; H returns a value to its caller in finite time, so it
satisfies the requirements of a halt decider, unlike your SHD, which you
have to "abort" because your decider doesn't satisfy the requirements,
because your design is broken.

    /Flibble


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Thu Apr 20 17:51:02 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/20/2023 5:40 PM, Richard Damon wrote:
    On 4/20/23 10:59 AM, olcott wrote:
    On 4/20/2023 7:06 AM, Richard Damon wrote:
    On 4/20/23 7:56 AM, olcott wrote:
    On 4/20/2023 6:23 AM, Richard Damon wrote:
    On 4/20/23 12:04 AM, olcott wrote:
    On 4/19/2023 10:41 PM, Richard Damon wrote:
    On 4/19/23 11:29 PM, olcott wrote:
    On 4/19/2023 9:16 PM, Richard Damon wrote:
    On 4/19/23 9:59 PM, olcott wrote:
    On 4/19/2023 8:38 PM, Richard Damon wrote:
    On 4/19/23 9:25 PM, olcott wrote:
    On 4/19/2023 8:08 PM, Richard Damon wrote:
    On 4/19/23 8:52 PM, olcott wrote:
    On 4/19/2023 7:45 PM, Richard Damon wrote:
    On 4/19/23 8:31 PM, olcott wrote:
    On 4/19/2023 7:07 PM, Richard Damon wrote:
    On 4/19/23 7:16 PM, olcott wrote:
    On 4/19/2023 5:49 PM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>> On 4/19/23 11:05 AM, olcott wrote:
    On 4/19/2023 6:14 AM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>>>> On 4/18/23 11:48 PM, olcott wrote:

    *You keep slip sliding with the fallacy of >>>>>>>>>>>>>>>>>>>>>> equivocation error*
    The actual simulated input: ⟨Ĥ⟩ that embedded_H >>>>>>>>>>>>>>>>>>>>>> must compute its mapping
    from never reaches its simulated final state of >>>>>>>>>>>>>>>>>>>>>> ⟨Ĥ.qn⟩ even after 10,000
    necessarily correct recursive simulations because >>>>>>>>>>>>>>>>>>>>>> ⟨Ĥ⟩ is defined to have
    a pathological relationship to embedded_H. >>>>>>>>>>>>>>>>>>>>>

    [Richard Damon] And YOU keep on falling into your Strawman error. The
    question is NOT what the "simulation by H" shows, but what the actual
    behavior of the actual machine the input represents is.

    [olcott] When a simulating halt decider correctly simulates N steps of
    its input it derives the exact same N steps that a pure UTM would derive
    because it is itself a UTM with extra features.

    [Richard Damon] No, it ISN'T a UTM, because it fails to meet the
    definition of a UTM.

    You are just proving that you are a pathological liar that doesn't know
    what he is talking about.

    [olcott] My reviewers cannot show that any of the extra features added
    to the UTM change the behavior of the simulated input for the first N
    steps of simulation:
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it.
    (c) Even aborting the simulation after N steps doesn't change the first
    N steps.

    [Richard Damon] Which don't matter, as the question

    [olcott] The actual behavior that the actual input ⟨Ĥ⟩ represents is the
    behavior of the simulation of N steps by embedded_H, because embedded_H
    has the exact same behavior as a UTM for these first N steps, and you
    already agreed with this.

    [Richard Damon] No, the actual behavior of the input is what the MACHINE
    Ĥ applied to ⟨Ĥ⟩ does.

    [olcott] Because embedded_H is a UTM that has been augmented with three
    features that cannot possibly cause its simulation of its input to
    diverge from the simulation of a pure UTM for the first N steps of
    simulation, we know that it necessarily does provide the actual behavior
    specified by this input for these N steps.

    [Richard Damon] And it is no longer a UTM, since it fails to meet the
    requirement of a UTM.

    [olcott] As you already agreed:
    The behavior of N steps of ⟨Ĥ⟩ simulated by embedded_H must be the
    actual behavior of these N steps because
    (a) Watching the behavior doesn't change it.
    (b) Matching non-halting behavior patterns doesn't change it.
    (c) Even aborting the simulation after N steps doesn't change the first
    N steps.

    [Richard Damon] But a UTM doesn't simulate just "N" steps of its input,
    but ALL of them.

    [olcott] Yet when embedded_H simulates N steps of ⟨Ĥ⟩ this is the actual
    behavior of ⟨Ĥ⟩ for these N steps, thus when embedded_H simulates 10,000
    recursive simulations these are the actual behavior of ⟨Ĥ⟩.

    [Richard Damon] Yes, but that doesn't actually show the ACTUAL behavior
    of the input as defined.

    [olcott] There is only one actual behavior of the actual input and this
    behavior is correctly demonstrated by N steps of ⟨Ĥ⟩ simulated by
    embedded_H.

    [Richard Damon] Nope. Read the problem definition.

    The behavior to be decided by a Halt Decider is the behavior of the
    ACTUAL MACHINE which is described by the input.

    [olcott] No matter what the problem definition says, the actual behavior
    of the actual input must necessarily be the N steps simulated by
    embedded_H.

    The only alternative is to simply disbelieve in UTMs.

    [Richard Damon] NOPE. Since H isn't a UTM, because it doesn't meet the
    REQUIREMENTS of a UTM, the statement is meaningless.

    [olcott] It <is> equivalent to a UTM for the first N steps, which can
    include 10,000 recursive simulations.

    [Richard Damon] Which means it ISN'T the Equivalent of a UTM. PERIOD.

    [olcott] Why are you playing head games with this?

    You know and acknowledged that the first N steps of ⟨Ĥ⟩ correctly
    simulated by embedded_H are the actual behavior of ⟨Ĥ⟩ for these first N
    steps.

    [Richard Damon] Right, but we don't care about that. We care about the
    TOTAL behavior of the input, which H never gets to see, because it gives
    up.

    [olcott] When Ĥ is applied to ⟨Ĥ⟩
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are the actual behavior
    of this input:
    (a) Ĥ.q0 The input ⟨Ĥ⟩ is copied then transitions to embedded_H
    (b) embedded_H is applied to ⟨Ĥ⟩ ⟨Ĥ⟩ (input and copy) which simulates
    ⟨Ĥ⟩ applied to ⟨Ĥ⟩
    (c) *which begins at its own simulated ⟨Ĥ.q0⟩ to repeat the process*

    [Richard Damon] Until the outer embedded_H used by Ĥ reaches the point
    that it decides to stop its simulation, and the whole simulation ends
    with just partial results, and it decides to go to qn and Ĥ Halts.

    [olcott] You keep dodging the key truth: when N steps of embedded_H are
    correctly simulated by embedded_H and N = 30000, then we know that the
    actual behavior of ⟨Ĥ⟩ is 10,000 recursive simulations that have never
    reached their final state of ⟨Ĥ.qn⟩.

    [Richard Damon] No, it has been shown that if N = 3000, then

    the actual behavior of ⟨Ĥ⟩ is 10,000 recursive simulations that have
    never reached their final state of ⟨Ĥ.qn⟩ because ⟨Ĥ⟩ is defined to have
    a pathological relationship to embedded_H.

    Referring to an entirely different sequence where there is no such
    pathological relationship is like comparing apples to lemons and
    rejecting apples because lemons are too sour.

    Why do you continue to believe that you can get away with this?



    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
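
    To make the nesting being argued about here concrete, here is a minimal
    C sketch of the structure as one possible reading of the thread. It is a
    toy model, not actual Turing machines: H_hat and embedded_H stand in for
    Ĥ and embedded_H, while MAX_DEPTH and the fixed verdict 0 ("does not
    halt") are assumptions made only for illustration.

        #include <stdio.h>

        #define MAX_DEPTH 5     /* assumed abort budget for the simulation */
        static int depth = 0;

        static int embedded_H(void (*input)(void));

        /* H_hat hands its own description to embedded_H and then does the
           opposite of embedded_H's verdict. */
        static void H_hat(void) {
            if (embedded_H(H_hat))   /* verdict 1 ("halts")       -> loop forever */
                for (;;) ;
                                     /* verdict 0 ("never halts") -> halt         */
        }

        /* embedded_H "simulates" its input by calling it one level deeper,
           and gives the fixed verdict 0 that the thread assumes it gives.  */
        static int embedded_H(void (*input)(void)) {
            if (++depth >= MAX_DEPTH) {
                printf("depth %d: abort the simulation, answer 0\n", depth);
                return 0;
            }
            printf("depth %d: simulate one more level\n", depth);
            input();
            return 0;
        }

        int main(void) {
            H_hat();    /* corresponds to H_hat applied to <H_hat> */
            printf("the outermost H_hat halted anyway\n");
            return 0;
        }

    Each nested call is one more "recursive simulation"; the shared budget
    fires exactly once, at the deepest level, and then every copy of H_hat,
    including the outermost one whose behavior the verdict 0 was supposed to
    describe, halts.
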
  • From Richard Damon@21:1/5 to olcott on Thu Apr 20 18:40:41 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/20/23 10:59 AM, olcott wrote:
    > You keep dodging the key truth: when N steps of embedded_H are correctly
    > simulated by embedded_H and N = 30000, then we know that the actual
    > behavior of ⟨Ĥ⟩ is 10,000 recursive simulations that have never reached
    > their final state of ⟨Ĥ.qn⟩.


    No, it has been shown that if N = 3000, then when a UTM CORRECTLY AND
    COMPLETELY simulates this input it will see those 3000 steps, then see
    one more iteration simulated, and then the top-level embedded_H will
    abort its simulation and go to qn, which is also Ĥ.qn, and Ĥ will halt.

    Thus, this input represents a Halting Computation.

    It doesn't matter that H can't simulate the input to a final state if it
    gives up. What matters is what a REAL UTM (which never gives up) will do
    or what the actual machine does.

    You are just working in a fantasy world where you close your eyes to
    what is actually true, and try to pretend, by lying to yourself, that
    things work the way you want.

    This just makes all your logic invalid and your results worthless
    because you ignore the actual rules and truth of the systems.

    You have imprisoned your mind in this fantasy; it seems you have locked
    yourself in and cannot get out.

    It seems that this is likely your fate for all eternity.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
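
    Stripping away the ⟨Ĥ⟩ notation, the construction both posters are
    arguing about is the classic one from the proof: D is built to do the
    opposite of whatever H predicts about D applied to itself. A minimal C
    sketch, assuming a toy H that always answers 0 ("never halts") and
    collapsing the thread's two-argument H(D,D) to a one-argument H(D):

        #include <stdio.h>

        typedef void (*prog)(void);

        /* Assumed toy decider: it always predicts 0, "the input never halts". */
        static int H(prog p) { (void)p; return 0; }

        /* D is defined from H to do the opposite of whatever H predicts. */
        static void D(void) {
            if (H(D))      /* H predicted "halts"       -> loop forever          */
                for (;;) ;
                           /* H predicted "never halts" -> fall through and halt */
        }

        int main(void) {
            printf("H's verdict on D: %d (0 = never halts)\n", H(D));
            D();           /* the directly executed D ...                        */
            printf("...just halted, so the verdict was wrong.\n");
            return 0;
        }

    Whatever fixed verdict this toy H gives, directly executing D produces
    the opposite behavior, which is the sense in which the reply above says
    the question is the behavior of the actual machine rather than of H's
    partial simulation.
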
  • From Richard Damon@21:1/5 to olcott on Thu Apr 20 19:14:24 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/20/23 6:51 PM, olcott wrote:

    >> No, it has been shown that if N = 3000, then
    >
    > the actual behavior of ⟨Ĥ⟩ is 10,000 recursive simulations that have
    > never reached their final state of ⟨Ĥ.qn⟩ because ⟨Ĥ⟩ is defined to have
    > a pathological relationship to embedded_H.

    No, because the ACTUAL BEHAVIOR is defined by the machine that the input describes.

    PERIOD.


    > Referring to an entirely different sequence where there is no such
    > pathological relationship is like comparing apples to lemons and
    > rejecting apples because lemons are too sour.

    So, you just don't understand the meaning of ACTUAL BEHAVIOR


    > Why do you continue to believe that you can get away with this?



    Why do YOU?

    Can you name a reliable source that supports your definition? (NOT YOU)

    Not just someone you have "tricked" into agreeing to a poorly worded
    statement that you misinterpret to agree with you.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Thu Apr 20 18:54:18 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/20/2023 6:14 PM, Richard Damon wrote:
    > Can you name a reliable source that supports your definition? (NOT YOU)


    Professor Sipser.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Thu Apr 20 20:38:34 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/20/23 7:54 PM, olcott wrote:
    >> Why do YOU?
    >>
    >> Can you name a reliable source that supports your definition? (NOT YOU)
    >
    > Professor Sipser.


    Nope, he agreed that IF H correctly predicted that a CORRECT SIMULATION
    (which by his definition, is the simulation done by an ACTUAL UTM, which
    always agrees with the behavior of the ACTUAL MACHINE) would never halt,
    then H could abort its simulation.

    Thus, he does not actually agree to your definition.

    The fact that you had the phrase "its correct simulation" is irrelevant, because in his mind, the input has a COPY of H, and thus varying H to
    simulate longer doesn't affect the input.

    That is why I included the line (which you snipped to deceive):

    Not just someone you have "tricked" into agreeing to a poorly worded
    statement that you misinterpret to agree with you.

    Which just shows that something in you understands that your logic
    doesn't actually hold. You are interpreting the statement you gave him
    differently than he would understand it, because you are just too stupid
    to know what things actually mean.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Thu Apr 20 21:05:57 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/20/2023 6:14 PM, Richard Damon wrote:
    > Why do YOU?
    >
    > Can you name a reliable source that supports your definition? (NOT YOU)
    >
    > Not just someone you have "tricked" into agreeing to a poorly worded
    > statement that you misinterpret to agree with you.


    MIT Professor Michael Sipser has agreed that the following verbatim
    paragraph is correct:

    "If simulating halt decider H correctly simulates its input D until H
    correctly determines that its simulated D would never stop running
    unless aborted then H can abort its simulation of D and correctly report
    that D specifies a non-halting sequence of configurations."

    He understood that the above paragraph is a tautology. That you do not understand that it is a tautology provides zero evidence that it is not
    a tautology.

    You have already agreed that N steps of an input simulated by a
    simulating halt decider are the actual behavior for these N steps.

    The fact that you agreed with this seems to prove that you will not
    disagree with me at the expense of truth and that you do actually care
    about the truth.



    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
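
    The disputed phrase in the paragraph quoted above is "would never stop
    running unless aborted". A minimal sketch of the gap being argued over,
    using a toy transition-table machine rather than a real Turing machine
    (LIMIT, the table encoding, and decides_halting are illustrative
    assumptions): a decider that simulates only LIMIT steps reports "never
    halts" for a machine that a complete, UTM-style run shows halting just
    after that budget.

        #include <stdio.h>

        #define HALT  -1
        #define LIMIT 10                      /* assumed step budget N */

        /* next[s] is the successor of state s; HALT marks the final state. */
        static int decides_halting(const int *next, int start) {
            int s = start;
            for (int step = 0; step < LIMIT; step++) {
                if (s == HALT) return 1;      /* reached the final state        */
                s = next[s];
            }
            return 0;                         /* gave up: reports "never halts" */
        }

        int main(void) {
            /* A machine that walks through states 0..12 and then halts. */
            int slow[13];
            for (int i = 0; i < 12; i++) slow[i] = i + 1;
            slow[12] = HALT;

            printf("partial decider's verdict: %d\n", decides_halting(slow, 0));

            /* A complete (UTM-style) run of the same machine: */
            int s = 0, steps = 0;
            while (s != HALT) { s = slow[s]; steps++; }
            printf("the actual machine halts after %d steps\n", steps);
            return 0;
        }

    Whether a step-limited verdict like this can ever count as having
    "correctly determined" that the simulated input would never stop running
    unless aborted is exactly the point on which the two posters disagree.
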
  • From olcott@21:1/5 to Richard Damon on Thu Apr 20 21:43:09 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/20/2023 9:20 PM, Richard Damon wrote:
    >> MIT Professor Michael Sipser has agreed that the following verbatim
    >> paragraph is correct:
    >>
    >> "If simulating halt decider H correctly simulates its input D until H
    >> correctly determines that its simulated D would never stop running
    >> unless aborted then H can abort its simulation of D and correctly report
    >> that D specifies a non-halting sequence of configurations."

    > Right, like I said, *IF* the decider correctly simulates its input D
    > until H *CORRECTLY* determines that its simulated D would never stop
    > running unless aborted.
    >
    > NOTE: THAT MEANS THE ACTUAL MACHINE OR A UTM SIMULATION OF THE MACHINE,
    > NOT JUST A PARTIAL SIMULATION BY H.

    Unless the simulation is done from the frame of reference of the
    pathological relationship, it is rejecting apples because lemons are too
    sour.

    Thus when N steps of ⟨Ĥ⟩ correctly simulated by embedded_H conclusively
    prove, by a form of mathematical induction, that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly
    simulated by embedded_H cannot possibly reach its simulated final state
    of ⟨Ĥ.qn⟩ in any finite number of steps, the Sipser-approved criteria
    have been met.



    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Thu Apr 20 22:20:04 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/20/23 10:05 PM, olcott wrote:
    On 4/20/2023 6:14 PM, Richard Damon wrote:
    On 4/20/23 6:51 PM, olcott wrote:
    On 4/20/2023 5:40 PM, Richard Damon wrote:
    On 4/20/23 10:59 AM, olcott wrote:
    On 4/20/2023 7:06 AM, Richard Damon wrote:
    On 4/20/23 7:56 AM, olcott wrote:
    On 4/20/2023 6:23 AM, Richard Damon wrote:
    On 4/20/23 12:04 AM, olcott wrote:
    On 4/19/2023 10:41 PM, Richard Damon wrote:
    On 4/19/23 11:29 PM, olcott wrote:
    On 4/19/2023 9:16 PM, Richard Damon wrote:
    On 4/19/23 9:59 PM, olcott wrote:
    On 4/19/2023 8:38 PM, Richard Damon wrote:
    On 4/19/23 9:25 PM, olcott wrote:
    On 4/19/2023 8:08 PM, Richard Damon wrote:
    On 4/19/23 8:52 PM, olcott wrote:
    On 4/19/2023 7:45 PM, Richard Damon wrote:
    On 4/19/23 8:31 PM, olcott wrote:
    On 4/19/2023 7:07 PM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>>> On 4/19/23 7:16 PM, olcott wrote:
    On 4/19/2023 5:49 PM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>>>>> On 4/19/23 11:05 AM, olcott wrote: >>>>>>>>>>>>>>>>>>>>>>> On 4/19/2023 6:14 AM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>>>>>>> On 4/18/23 11:48 PM, olcott wrote: >>>>>>>>>>>>>>>>>>>>>>>>
    *You keep slip sliding with the fallacy of >>>>>>>>>>>>>>>>>>>>>>>>> equivocation error*
    The actual simulated input: ⟨Ĥ⟩ that embedded_H >>>>>>>>>>>>>>>>>>>>>>>>> must compute its mapping
    from never reaches its simulated final state of >>>>>>>>>>>>>>>>>>>>>>>>> ⟨Ĥ.qn⟩ even after 10,000 >>>>>>>>>>>>>>>>>>>>>>>>> necessarily correct recursive simulations >>>>>>>>>>>>>>>>>>>>>>>>> because ⟨Ĥ⟩ is defined to have >>>>>>>>>>>>>>>>>>>>>>>>> a pathological relationship to embedded_H. >>>>>>>>>>>>>>>>>>>>>>>>

    An YOU keep on falling into your Strawman error. >>>>>>>>>>>>>>>>>>>>>>>> The question is NOT what does the "simulation by >>>>>>>>>>>>>>>>>>>>>>>> H" show, but what is the actual behavior of the >>>>>>>>>>>>>>>>>>>>>>>> actual machine the input represents. >>>>>>>>>>>>>>>>>>>>>>>>


    When a simulating halt decider correctly >>>>>>>>>>>>>>>>>>>>>>> simulates N steps of its input
    it derives the exact same N steps that a pure UTM >>>>>>>>>>>>>>>>>>>>>>> would derive because
    it is itself a UTM with extra features. >>>>>>>>>>>>>>>>>>>>>>>

    No, it ISN'T a UTM because if fails to meeet the >>>>>>>>>>>>>>>>>>>>>> definition of a UTM.

    You are just proving that you are a pathological >>>>>>>>>>>>>>>>>>>>>> liar that doesn't know what he is talking about. >>>>>>>>>>>>>>>>>>>>>>
    My reviewers cannot show that any of the extra >>>>>>>>>>>>>>>>>>>>>>> features added to the UTM
    change the behavior of the simulated input for >>>>>>>>>>>>>>>>>>>>>>> the first N steps of
    simulation:
    (a) Watching the behavior doesn't change it. >>>>>>>>>>>>>>>>>>>>>>> (b) Matching non-halting behavior patterns >>>>>>>>>>>>>>>>>>>>>>> doesn't change it
    (c) Even aborting the simulation after N steps >>>>>>>>>>>>>>>>>>>>>>> doesn't change the first N steps. >>>>>>>>>>>>>>>>>>>>>>
    Which don't matter, as the question >>>>>>>>>>>>>>>>>>>>>>

    The actual behavior that the actual input: ⟨Ĥ⟩ >>>>>>>>>>>>>>>>>>>>>>> represents is the
    behavior of the simulation of N steps by >>>>>>>>>>>>>>>>>>>>>>> embedded_H because embedded_H
    has the exact same behavior as a UTM for these >>>>>>>>>>>>>>>>>>>>>>> first N steps, and you
    already agreed with this.

    No, the actual behavior of the input is what the >>>>>>>>>>>>>>>>>>>>>> MACHINE Ĥ applied to (Ĥ) does.
    Because embedded_H is a UTM that has been augmented >>>>>>>>>>>>>>>>>>>>> with three features
    that cannot possibly cause its simulation of its >>>>>>>>>>>>>>>>>>>>> input to diverge from
    the simulation of a pure UTM for the first N steps >>>>>>>>>>>>>>>>>>>>> of simulation we know
    that it necessarily does provide the actual >>>>>>>>>>>>>>>>>>>>> behavior specified by this
    input for these N steps.

    And is no longer a UTM, since if fails to meet the >>>>>>>>>>>>>>>>>>>> requirement of a UTM

    As you already agreed:
    The behavior of N steps of ⟨Ĥ⟩ simulated by >>>>>>>>>>>>>>>>>>> embedded_H must are the actual behavior of these N >>>>>>>>>>>>>>>>>>> steps because

    (a) Watching the behavior doesn't change it. >>>>>>>>>>>>>>>>>>> (b) Matching non-halting behavior patterns doesn't >>>>>>>>>>>>>>>>>>> change it
    (c) Even aborting the simulation after N steps >>>>>>>>>>>>>>>>>>> doesn't change the
    first N steps.




    But a UTM doesn't simulate just "N" steps of its >>>>>>>>>>>>>>>>>> input, but ALL of them.


    Yet when embedded_H simulates N steps of ⟨Ĥ⟩ this is >>>>>>>>>>>>>>>>> the actual behavior
    of ⟨Ĥ⟩ for these N steps, thus when embedded_H >>>>>>>>>>>>>>>>> simulates 10,000
    recursive simulations these are the actual behavior of >>>>>>>>>>>>>>>>> ⟨Ĥ⟩.



    Yes, but doesn't actually show the ACTUAL behavior of >>>>>>>>>>>>>>>> the input as defined,
    There is only one actual behavior of the actual input and >>>>>>>>>>>>>>> this behavior
    is correctly demonstrated by N steps of ⟨Ĥ⟩ simulated by >>>>>>>>>>>>>>> embedded_H.

    Nope, Read the problem definition.

    The behavior to be decided by a Halt Decider is the behavior of the ACTUAL MACHINE which is described by the input.

    No matter what the problem definition says the actual >>>>>>>>>>>>> behavior of the
    actual input must necessarily be the N steps simulated by >>>>>>>>>>>>> embedded_H.

    The only alternative is to simply disbelieve in UTMs. >>>>>>>>>>>>>

    NOPE, Since H isn't a UTM, because it doesn't meet the >>>>>>>>>>>> REQUIREMENTS of a UTM, the statement is meaningless.
    It <is> equivalent to a UTM for the first N steps that can >>>>>>>>>>> include 10,000 recursive simulations.


    Which means it ISN'T the Equivalent of a UTM. PERIOD.

    Why are you playing head games with this?

    You know and acknowledged that the first N steps of ⟨Ĥ⟩ correctly
    simulated by embedded_H are the actual behavior of ⟨Ĥ⟩ for >>>>>>>>> these first N
    steps.


    Right, but we don't care about that. We care about the TOTAL
    behavior of the input, which H never gets to see, because it
    gives up.




    When Ĥ is applied to ⟨Ĥ⟩
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are the actual behavior of this input:
    (a) Ĥ.q0 The input ⟨Ĥ⟩ is copied then transitions to embedded_H
    (b) embedded_H is applied to ⟨Ĥ⟩ ⟨Ĥ⟩ (input and copy) which simulates ⟨Ĥ⟩ applied to ⟨Ĥ⟩
    (c) *which begins at its own simulated ⟨Ĥ.q0⟩ to repeat the process*

    Until the outer embedded_H used by Ĥ reaches the point that it
    decides to stop its simulation, and the whole simulation ends with
    just partial results and it decides to go to qn and Ĥ Halts.


    You keep dodging the key truth when N steps of embedded_H are correctly
    simulated by embedded_H and N = 30000 then we know that the actual
    behavior of ⟨Ĥ⟩ is 10,000 recursive simulations that have never reached
    their final state of ⟨Ĥ.qn⟩.


    No, it has been shown that if N = 3000, then

    the actual behavior of ⟨Ĥ⟩ is 10,000 recursive simulations that have
    never reached their final state of ⟨Ĥ.qn⟩ because ⟨Ĥ⟩ is defined to have
    a pathological relationship to embedded_H.

    No, because the ACTUAL BEHAVIOR is defined by the machine that the
    input describes.

    PERIOD.


    Referring to an entirely different sequence where there is no such
    pathological relationship is like comparing apples to lemons and
    rejecting apples because lemons are too sour.

    So, you just don't understand the meaning of ACTUAL BEHAVIOR


    Why do you continue to believe that you can get away with this?



    Why do YOU?

    Can you name a reliable source that supports your definition? (NOT YOU)

    Not just someone you have "tricked" into agreeing to a poorly worded
    statement that you misinterpret to agree with you.


    MIT Professor Michael Sipser has agreed that the following verbatim
    paragraph is correct:

    "If simulating halt decider H correctly simulates its input D until H correctly determines that its simulated D would never stop running
    unless aborted then H can abort its simulation of D and correctly report
    that D specifies a non-halting sequence of configurations."

    He understood that the above paragraph is a tautology. That you do not understand that it is a tautology provides zero evidence that it is not
    a tautology.

    You have already agreed that N steps of an input simulated by a
    simulating halt decider are the actual behavior for these N steps.

    The fact that you agreed with this seems to prove that you will not
    disagree with me at the expense of truth and that you do actually care
    about the truth.




    Right, like I said, *IF* the decider correctly simulates its input D
    until H *CORRECTLY* determines that its simulated D would never stop
    running unless aborted.

    NOTE. THAT MEANS THE ACTUAL MACHINE OR A UTM SIMULATION OF THE MACHINE.
    NOT JUST A PARTIAL SIMULATION BY H.

    then H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.


    THAT is a Tautology, if you can simulate a program to the point that you
    can determine that the PROGRAM ITSELF, if run (or simulated by a UTM),
    would run forever unless aborted.

    That your "simulation" by H can't get there doesn't count.

    Yes, the N steps of the simulation done by H will match the first N
    steps of the simulation done by a UTM.

    It is also a fact that if the UTM simulates a bit farther, it will reach
    a halting state, so H did not "correctly" determine that that would not
    happen.

    You have a GIVEN H, and that H simulates for ONLY N Steps, but the
    correct determination needs to be about what would happen with a CORRECT SIMULATION of an unlimited number of steps, which this H doesn't do, and
    you can't imagine changing H, because the way you do it changes the
    input, which violates the requirements of the problem.

    Since UTM(D,D) halts, H(D,D) saying that it has "Correctly Determined"
    that the correct simulation would not halt unless aborted is just a
    FALSE statement, as UTM(D,D) does halt, and never aborted its simulation.

    You just don't understand the meaning of the words you are using, so you
    lie to yourself and trap yourself in a prison of falsehood.
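
    To make the point concrete, here is a minimal, self-contained C sketch
    (hypothetical stand-ins only, nobody's actual H or D): if some H answers
    0 ("non-halting") for its pathological input in finite time, then the
    program built to do the opposite of that answer plainly halts, so the
    answer contradicts the behavior of the machine the input describes.

    #include <stdio.h>

    typedef void (*fp)(void);

    /* Hypothetical stand-in for an H that has decided "non-halting" for
       its pathological input and returns 0 in finite time. */
    int H_stub(fp p, fp q)
    {
        (void)p; (void)q;
        return 0;                 /* 0 = "does not halt" */
    }

    /* The pathological construction: do the opposite of the answer. */
    void D(void)
    {
        if (H_stub(D, D))         /* if the answer were "halts" ...       */
            for (;;) { }          /* ... loop forever                     */
        /* the answer is "does not halt", so fall through and halt        */
    }

    int main(void)
    {
        D();                      /* D terminates because H_stub said 0   */
        printf("D halted although H_stub(D, D) == %d (non-halting)\n",
               H_stub(D, D));
        return 0;
    }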

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Fri Apr 21 07:18:06 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/20/23 10:43 PM, olcott wrote:
    On 4/20/2023 9:20 PM, Richard Damon wrote:

    Right, like I said, *IF* the decider correctly simulates its input D
    until H *CORRECTLY* determines that its simulated D would never stop
    running unless aborted.

    NOTE. THAT MEANS THE ACTUAL MACHINE OR A UTM SIMULATION OF THE
    MACHINE. NOT JUST A PARTIAL SIMULATION BY H.

    Unless the simulation is from the frame-of-reference of the pathological relationship it is rejecting apples because lemons are too sour.

    So, you don't understand the nature of simulation.

    Simulation is NOT "From a frame of reference", but is a recreation of
    what actually happens.

    Remember, the DEFINITION of a Halting Decider is dependent on the actual behavior of the machine represented, and the replacement of that criteria
    with a simulation is based on the fact that the "Simulation" so defined
    will ALWAYS reproduce that result.

    If you claim that the simulation can create a different result, then you
    can't use that simulation as a replacement for the actual behavior that
    was required, so you are just admitting that your logic is flawed, and
    that you are using a strawman.


    Thus when N steps of ⟨Ĥ⟩ correctly simulated by embedded_H conclusively prove, by a form of mathematical induction, that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly simulated by embedded_H cannot possibly reach its simulated final state
    of ⟨Ĥ.qn⟩ in any finite number of steps, the Sipser-approved criteria have been met.


    Nope. Please provide your actual proof by induction.

    Note, I am not going to tell you the steps you need to prove to make a
    proof by induction, but you need to clearly make the statements about them.

    I suspect, you don't even know what it actually means to prove something
    by induction.
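
    To be clear about what is being asked for, here is only the generic shape
    of the obligation, not a claim that the steps can be discharged: let P(k)
    be the claim at issue, for example "after k nested simulations of ⟨Ĥ⟩ by
    embedded_H, no simulated copy has reached ⟨Ĥ.qn⟩". Then the proof needs

        P(1)                                      (base case)
        for every k ≥ 1: if P(k) then P(k+1)      (inductive step)
        therefore P(k) holds for every k ≥ 1      (conclusion)

    and, separately, an argument that "P(k) for every finite k" establishes
    the property the halting criterion actually asks about: that the machine
    the input describes never reaches a final state.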

    Remember too, the goal is to show that the input machine is actually non-halting, and that a correct simulation of this exact input will
    never reach a halting state.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mr Flibble@21:1/5 to olcott on Fri Apr 21 13:17:44 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 20/04/2023 8:20 pm, olcott wrote:
    On 4/20/2023 2:08 PM, Mr Flibble wrote:

    But it isn't "Correctly Simulated by H"
    You agreed that the first N steps are correctly simulated.
    It turns out that the non-halting behavior pattern is correctly
    recognized in the first N steps.

    Your assumption that a program that calls H is non-halting is erroneous:

    My new paper anchors its ideas in actual Turing machines so it is
    unequivocal. The first two pages are only about the Linz Turing
    machine based proof.

    The H/D material is now on a single page and all reference
    to the x86 language has been stripped and replaced with
    analysis entirely in C.

    With this new paper even Richard admits that the first N steps of a
    UTM-based simulation by a simulating halt decider are necessarily the
    actual behavior of these N steps.

    *Simulating (partial) Halt Deciders Defeat the Halting Problem Proofs*
    https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs

    void Px(void (*x)())
    {
         (void) H(x, x);
         return;
    }

    Px halts (it discards the result that H returns); your decider thinks that Px is non-halting which is an obvious error due to a design flaw in the architecture of your decider. Only the Flibble Signaling Simulating Halt Decider (SSHD) correctly handles this case.

    Nope. For H to be a halt decider it must return a halt
    decision to its caller in finite time

    Although H must always return to some caller H is not allowed to return
    to any caller that essentially calls H in infinite recursion.
    The Flibble Signaling Simulating Halt Decider (SSHD) does not
    have any infinite recursion thereby proving that

    It overrode that behavior that was specified by the machine code for Px.

    Nope. Your SHD is not a halt decider as

    I was not even talking about my SHD, I was talking about how your
    program does its simulation incorrectly.

    My SSHD does not do its simulation incorrectly: it does its
    simulation just like I have defined it as evidenced by the fact that
    it returns a correct halting decision for Px; something your broken
    SHD gets wrong.


    In order for you to have Px simulated by H terminate normally you
    must change the behavior of Px away from the behavior that its x86
    code specifies.

    Your "x86 code" has nothing to do with how my halt decider works; I am
    using an entirely different simulation method, one that actually works.


    void Px(void (*x)())
    {
       (void) H(x, x);
       return;
    }

    Px correctly simulated by H cannot possibly reach past its machine
    address of: [00001b3d].

    _Px()
    [00001b32] 55         push ebp
    [00001b33] 8bec       mov ebp,esp
    [00001b35] 8b4508     mov eax,[ebp+08]
    [00001b38] 50         push eax      // push address of Px
    [00001b39] 8b4d08     mov ecx,[ebp+08]
    [00001b3c] 51         push ecx      // push address of Px
    [00001b3d] e800faffff call 00001542 // Call H
    [00001b42] 83c408     add esp,+08
    [00001b45] 5d         pop ebp
    [00001b46] c3         ret
    Size in bytes:(0021) [00001b46]

    What you are doing is the same as recognizing that _Infinite_Loop()
    never halts, forcing it to break out of its infinite loop and jump to
    its "ret" instruction

    _Infinite_Loop()
    [00001c62] 55         push ebp
    [00001c63] 8bec       mov ebp,esp
    [00001c65] ebfe       jmp 00001c65
    [00001c67] 5d         pop ebp
    [00001c68] c3         ret
    Size in bytes:(0007) [00001c68]

    No I am not: there is no infinite loop in Px above; forking the
    simulation into two branches and returning a different halt decision
    to each branch is a perfectly valid SHD design; again a design, unlike
    yours, that actually works.

    If you say that Px correctly simulated by H ever reaches its own final "return" statement and halts you are incorrect.

    Px halts if H is (or is part of) a genuine halt decider. Your H is not
    a genuine halt decider as it aborts rather than returning a value to its
    caller in finite time. Think of it this way: if H was not of the
    simulating type then there would be no need to abort any recursion as H
    would not be directly invoking Px, i.e., there would be no recursion.
    Recursion is a problem for you because your halt decider is based on a
    broken design.
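
    A tiny self-contained sketch of that last point (hypothetical names, not
    the SSHD and not olcott's H): when H never invokes its input there is no
    recursion at all, and Px halts as soon as H returns, whatever value it
    returns.

    #include <stdio.h>

    /* Non-simulating stand-in: it answers without ever invoking its input,
       so calling it from Px cannot create any recursion. */
    int H(void (*p)(), void (*q)())
    {
        (void)p; (void)q;
        return 0;             /* the value is irrelevant to whether Px halts */
    }

    void Px(void (*x)())
    {
        (void) H(x, x);       /* H returns at once; no recursion occurs */
        return;
    }

    int main(void)
    {
        Px(Px);               /* Px plainly halts */
        printf("Px returned\n");
        return 0;
    }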

    /Flibble

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Fri Apr 21 10:35:54 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/21/2023 6:18 AM, Richard Damon wrote:
    On 4/20/23 10:43 PM, olcott wrote:
    On 4/20/2023 9:20 PM, Richard Damon wrote:

    Right, like I said, *IF* the decider correctly simulates its input D
    until H *CORRECTLY* determines that its simulated D would never stop
    running unless aborted.

    NOTE. THAT MEANS THE ACTUAL MACHINE OR A UTM SIMULATION OF THE
    MACHINE. NOT JUST A PARTIAL SIMULATION BY H.

    Unless the simulation is from the frame-of-reference of the pathological
    relationship it is rejecting apples because lemons are too sour.

    So, you don't understand the nature of simulation.



    MIT Professor Michael Sipser has agreed that the following verbatim
    paragraph is correct:

    (a) If simulating halt decider H correctly simulates its input D until H correctly determines that its simulated D would never stop running
    unless aborted then

    (b) H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.

    Thus it is established that:

    The behavior of D correctly simulated by H
    is the correct behavior to measure.

    The behavior of ⟨Ĥ⟩ correctly simulated by embedded_H
    is the correct behavior to measure.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Mr Flibble on Fri Apr 21 10:16:19 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/21/2023 7:17 AM, Mr Flibble wrote:
    On 20/04/2023 8:20 pm, olcott wrote:

    In order for you to have Px simulated by H terminate normally you
    must change the behavior of Px away from the behavior that its x86
    code specifies.

    Your "x86 code" has nothing to do with how my halt decider works; I
    am using an entirely different simulation method, one that actually
    works.


    void Px(void (*x)())
    {
       (void) H(x, x);
       return;
    }

    Px correctly simulated by H cannot possibly reach past its machine
    address of: [00001b3d].

    _Px()
    [00001b32] 55         push ebp
    [00001b33] 8bec       mov ebp,esp
    [00001b35] 8b4508     mov eax,[ebp+08]
    [00001b38] 50         push eax      // push address of Px
    [00001b39] 8b4d08     mov ecx,[ebp+08]
    [00001b3c] 51         push ecx      // push address of Px
    [00001b3d] e800faffff call 00001542 // Call H
    [00001b42] 83c408     add esp,+08
    [00001b45] 5d         pop ebp
    [00001b46] c3         ret
    Size in bytes:(0021) [00001b46]

    What you are doing is the same as recognizing that _Infinite_Loop()
    never halts, forcing it to break out of its infinite loop and jump to
    its "ret" instruction

    _Infinite_Loop()
    [00001c62] 55         push ebp
    [00001c63] 8bec       mov ebp,esp
    [00001c65] ebfe       jmp 00001c65
    [00001c67] 5d         pop ebp
    [00001c68] c3         ret
    Size in bytes:(0007) [00001c68]

    No I am not: there is no infinite loop in Px above; forking the
    simulation into two branches and returning a different halt decision
    to each branch is a perfectly valid SHD design; again a design,
    unlike yours, that actually works.

    If you say that Px correctly simulated by H ever reaches its own final
    "return" statement and halts you are incorrect.

    Px halts if H is (or is part of) a genuine halt decider.

    The simulated Px only halts if it reaches its own final state in a
    finite number of steps of correct simulation. It can't possibly do this.

      Your H is not
    a genuine halt decider as it aborts rather than returning a value to its caller in finite time. Think of it this way: if H was not of the
    simulating type then there would be no need to abort any recursion as H
    would not be directly invoking Px, i.e., there would be no recursion. Recursion is a problem for you because your halt decider is based on a
    broken design.

    /Flibble

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Mr Flibble on Fri Apr 21 11:41:47 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/21/2023 11:36 AM, Mr Flibble wrote:
    On 21/04/2023 4:16 pm, olcott wrote:

    If you say that Px correctly simulated by H ever reaches its own final "return" statement and halts you are incorrect.

    Px halts if H is (or is part of) a genuine halt decider.

    The simulated Px only halts if it reaches its own final state in a
    finite number of steps of correct simulation. It can't possibly do this.

    Nope, a correctly simulated Px will allow it to reach its own final
    state (termination); your H does NOT perform a correct simulation
    because your H is broken.

    /Flibble


    Strawman deception
    Px correctly simulated by H will never reach its own simulated final
    state of "return" because Px and H have a pathological relationship to
    each other.

    Measuring the behavior of Px simulated by a simulator that has no such
    pathological relationship is the same as rejecting apples because lemons
    are too sour. One must compare apples to apples.
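
    A minimal self-contained sketch of the nesting being described
    (hypothetical stand-ins: "simulating" the input is modelled by simply
    calling it, and the abort decision by a fixed depth cap): every copy of
    Px is waiting on a deeper copy of H, so no copy reaches its "return"
    until some level of H gives up, and only then do the copies above that
    level finish.

    #include <stdio.h>

    static int depth = 0;     /* current nesting level of "simulations" */

    /* Stand-in for a simulating H: running the input is modelled by calling
       it, and the abort criterion by a fixed depth cap. This illustrates
       only the nested call structure, nothing more. */
    int H(void (*p)(), void (*q)())
    {
        if (++depth > 3)      /* an inner H gives up ...                   */
        {
            printf("level %d: abort, report non-halting\n", depth--);
            return 0;
        }
        p(q);                 /* one more nested copy of the input runs    */
        printf("level %d: its simulated Px reached \"return\"\n", depth--);
        return 1;
    }

    void Px(void (*x)())
    {
        (void) H(x, x);
        return;
    }

    int main(void)
    {
        printf("H(Px, Px) == %d\n", H(Px, Px));
        return 0;
    }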

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mr Flibble@21:1/5 to olcott on Fri Apr 21 17:36:37 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 21/04/2023 4:16 pm, olcott wrote:

    But it isn't "Correctly Simulated by H"
You agreed that the first N steps are correctly simulated.
It turns out that the non-halting behavior pattern is correctly
recognized in the first N steps.

Your assumption that a program that calls H is non-halting is erroneous:


My new paper anchors its ideas in actual Turing machines so it is
unequivocal. The first two pages are only about the Linz Turing machine
based proof.

The H/D material is now on a single page and all reference to the x86
language has been stripped and replaced with analysis entirely in C.

With this new paper even Richard admits that the first N steps of
UTM-based simulation by a simulating halt decider are necessarily the
actual behavior of these N steps.

*Simulating (partial) Halt Deciders Defeat the Halting Problem Proofs*
https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs

    void Px(void (*x)())
    {
         (void) H(x, x);
         return;
    }

Px halts (it discards the result that H returns); your decider thinks
that Px is non-halting, which is an obvious error due to a design flaw
in the architecture of your decider. Only the Flibble Signaling
Simulating Halt Decider (SSHD) correctly handles this case.

Nope. For H to be a halt decider it must return a halt decision to its
caller in finite time.

Although H must always return to some caller, H is not allowed to return
to any caller that essentially calls H in infinite recursion.

The Flibble Signaling Simulating Halt Decider (SSHD) does not have any
infinite recursion, thereby proving that

It overrode the behavior that was specified by the machine code for Px.

Nope. Your SHD is not a halt decider as

I was not even talking about my SHD, I was talking about how your
program does its simulation incorrectly.

    My SSHD does not do its simulation incorrectly: it does its
    simulation just like I have defined it as evidenced by the fact
    that it returns a correct halting decision for Px; something your
    broken SHD gets wrong.


    In order for you to have Px simulated by H terminate normally you
    must change the behavior of Px away from the behavior that its x86
    code specifies.

    Your "x86 code" has nothing to do with how my halt decider works; I
    am using an entirely different simulation method, one that actually
    works.


    void Px(void (*x)())
    {
       (void) H(x, x);
       return;
    }

    Px correctly simulated by H cannot possibly reach past its machine
    address of: [00001b3d].

    _Px()
    [00001b32] 55         push ebp
    [00001b33] 8bec       mov ebp,esp
    [00001b35] 8b4508     mov eax,[ebp+08]
[00001b38] 50         push eax      // push address of Px
[00001b39] 8b4d08     mov ecx,[ebp+08]
[00001b3c] 51         push ecx      // push address of Px
[00001b3d] e800faffff call 00001542 // Call H
    [00001b42] 83c408     add esp,+08
    [00001b45] 5d         pop ebp
    [00001b46] c3         ret
    Size in bytes:(0021) [00001b46]

What you are doing is the same as recognizing that
_Infinite_Loop()
never halts, forcing it to break out of its infinite loop and jump to
its "ret" instruction:

    _Infinite_Loop()
    [00001c62] 55         push ebp
    [00001c63] 8bec       mov ebp,esp
    [00001c65] ebfe       jmp 00001c65
    [00001c67] 5d         pop ebp
    [00001c68] c3         ret
    Size in bytes:(0007) [00001c68]

    No I am not: there is no infinite loop in Px above; forking the
    simulation into two branches and returning a different halt decision
    to each branch is a perfectly valid SHD design; again a design,
    unlike yours, that actually works.

    If you say that Px correctly simulated by H ever reaches its own final
    "return" statement and halts you are incorrect.

    Px halts if H is (or is part of) a genuine halt decider.

    The simulated Px only halts if it reaches its own final state in a
    finite number of steps of correct simulation. It can't possibly do this.

    Nope, a correctly simulated Px will allow it to reach its own final
    state (termination); your H does NOT perform a correct simulation
    because your H is broken.

    /Flibble

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mr Flibble@21:1/5 to olcott on Fri Apr 21 18:42:58 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 21/04/2023 5:41 pm, olcott wrote:

    Strawman deception
    Px correctly simulated by H will never reach its own simulated final
    state of "return" because Px and H have a pathological relationship to
    each other.

    Nope, there is no pathological relationship between Px and H because Px discards the result of H (i.e. it does not try to do the opposite of the
    H halting result as per the definition of the Halting Problem).
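
For contrast, a minimal sketch of the conventional counter-example that
the proofs use (the stub H below is only a stand-in, not any real
decider): unlike Px, D consults H's verdict and then does the opposite,
which is what creates the contradiction.

#include <stdio.h>

typedef void (*prog)(void);

static int H(prog program, prog input)   /* stand-in for a claimed halt decider */
{
    (void)program; (void)input;
    return 0;                            /* pretend it answers "does not halt"  */
}

static void D(void)
{
    if (H(D, D))
        for (;;) { }                     /* H said "halts": loop forever        */
    /* H said "does not halt": fall through and halt */
}

int main(void)
{
    D();
    puts("D halted, the opposite of what the stand-in H answered");
    return 0;
}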


Measuring the behavior of Px simulated by a simulator that has no such
pathological relationship is the same as rejecting apples because lemons
are too sour. One must compare apples to apples.

    LOLWUT?!

    /Flibble

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Mr Flibble on Fri Apr 21 13:36:29 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/21/2023 12:42 PM, Mr Flibble wrote:

    Strawman deception
    Px correctly simulated by H will never reach its own simulated final
    state of "return" because Px and H have a pathological relationship to
    each other.

    Nope, there is no pathological relationship between Px and H because Px discards the result of H (i.e. it does not try to do the opposite of the
    H halting result as per the definition of the Halting Problem).


    It seems that you continue to fail to see the nested simulation

    01 void Px(void (*x)())
    02 {
    03 (void) H(x, x);
    04 return;
    05 }
    06
    07 void main()
    08 {
    09 H(Px,Px);
    10 }

    *Execution Trace when H never aborts its simulation*
    main() calls H(Px,Px) that simulates Px(Px) at line 09
    *keeps repeating*
    simulated Px(Px) calls simulated H(Px,Px) that simulates Px(Px) at
    line 03 ...
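
A minimal C sketch of that trace, assuming a toy H that "simulates" by
direct execution and aborts the whole nested simulation after a fixed
depth (an illustration only, not the actual x86-emulation-based H; the
depth limit merely stands in for recognizing the repeating pattern after
N steps of simulation):

#include <setjmp.h>
#include <stdio.h>

#define DEPTH_LIMIT 3

typedef void (*fn)(void *);

static int depth = 0;
static jmp_buf abort_simulation;

int H(void *p, void *a)
{
    if (depth == 0) {
        if (setjmp(abort_simulation)) {   /* a nested level aborted the simulation */
            depth = 0;
            return 0;                     /* report non-halting                    */
        }
    }
    if (++depth > DEPTH_LIMIT)
        longjmp(abort_simulation, 1);     /* abort every nested simulation level   */
    printf("depth %d: simulating Px(Px)\n", depth);
    ((fn)p)(a);                           /* "simulate" the input by running it    */
    --depth;
    return 1;                             /* the simulated input reached its end   */
}

void Px(void *x)                          /* its call to H is line 03 above        */
{
    (void) H(x, x);
}

int main(void)
{
    /* function pointer <-> void * casts are a common compiler extension */
    printf("H(Px,Px) = %d\n", H((void *)Px, (void *)Px));
    return 0;
}

Run as is, it prints three levels of nesting and then reports 0
(non-halting) for H(Px,Px); remove the depth check and the nesting never
ends, matching the trace above.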



Measuring the behavior of Px simulated by a simulator that has no such
pathological relationship is the same as rejecting apples because lemons
are too sour. One must compare apples to apples.

    LOLWUT?!

    /Flibble

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mr Flibble@21:1/5 to olcott on Fri Apr 21 21:02:21 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 21/04/2023 7:36 pm, olcott wrote:

    Strawman deception
    Px correctly simulated by H will never reach its own simulated final
    state of "return" because Px and H have a pathological relationship to
    each other.

    Nope, there is no pathological relationship between Px and H because
    Px discards the result of H (i.e. it does not try to do the opposite
    of the H halting result as per the definition of the Halting Problem).


    It seems that you continue to fail to see the nested simulation

    01 void Px(void (*x)())
    02 {
    03  (void) H(x, x);
    04  return;
    05 }
    06
    07 void main()
    08 {
    09  H(Px,Px);
    10 }

    *Execution Trace when H never aborts its simulation*
    main() calls H(Px,Px) that simulates Px(Px) at line 09
    *keeps repeating*
       simulated Px(Px) calls simulated H(Px,Px) that simulates Px(Px) at
    line 03 ...

    "nested simulation" (recursion) is a property of your broken halt
    decider and not a property of the Halting Problem itself; the Flibble
    SSHD avoids the problem of nested simulation (recursion) by forking
    (branching) the simulation instead.
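
A rough sketch of one way such a fork-based decider could be organized
(an assumption about the design, not the actual SSHD source; the names
H, Px and preliminary_answer are illustrative, and it is POSIX-only):
each forked branch is handed a different preliminary answer, and the
parent reports "halting" if a branch runs the simulated routine to
completion.

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

typedef void (*fn)(void *);

static int simulating = 0;          /* set inside a forked branch                 */
static int preliminary_answer;      /* the answer handed to that branch           */

int H(void *p, void *a)
{
    if (simulating)
        return preliminary_answer;  /* nested call: hand back the branch's answer */

    for (int answer = 0; answer <= 1; ++answer) {
        pid_t pid = fork();
        if (pid < 0)
            return 0;               /* fork failed: give up                       */
        if (pid == 0) {             /* child process = one simulated branch       */
            simulating = 1;
            preliminary_answer = answer;
            ((fn)p)(a);             /* "simulate" the input by running it         */
            _exit(0);               /* reached the end of the routine: it halted  */
        }
        int status;
        waitpid(pid, &status, 0);
        if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
            return 1;               /* some branch halted: report halting         */
    }
    return 0;                       /* no branch halted: report non-halting       */
}

void Px(void *x)
{
    (void) H(x, x);                 /* Px discards H's answer                     */
}

int main(void)
{
    printf("H(Px,Px) = %d\n", H((void *)Px, (void *)Px));
    return 0;
}

With Px as the input every branch discards its preliminary answer and
returns, so this toy decider reports 1 (halting), which is the behavior
claimed for the SSHD above; whether such a design satisfies the
requirements of a halt decider is exactly what is disputed in this
thread.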

    /Flibble

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Mr Flibble on Fri Apr 21 16:06:38 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/21/2023 3:02 PM, Mr Flibble wrote:

    It seems that you continue to fail to see the nested simulation

    01 void Px(void (*x)())
    02 {
    03  (void) H(x, x);
    04  return;
    05 }
    06
    07 void main()
    08 {
    09  H(Px,Px);
    10 }

    *Execution Trace when H never aborts its simulation*
    main() calls H(Px,Px) that simulates Px(Px) at line 09
    *keeps repeating*
        simulated Px(Px) calls simulated H(Px,Px) that simulates Px(Px) at
    line 03 ...

    "nested simulation" (recursion) is a property of your broken halt
    decider

    Nested simulation is inherent when any simulating halt decider is
    applied to any of the conventional halting problem counter-example
    inputs. That you may fail to comprehend this is not my mistake.


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mr Flibble@21:1/5 to olcott on Fri Apr 21 22:27:46 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 21/04/2023 10:06 pm, olcott wrote:
    On 4/21/2023 3:02 PM, Mr Flibble wrote:
    On 21/04/2023 7:36 pm, olcott wrote:
    On 4/21/2023 12:42 PM, Mr Flibble wrote:
    On 21/04/2023 5:41 pm, olcott wrote:
    On 4/21/2023 11:36 AM, Mr Flibble wrote:
    On 21/04/2023 4:16 pm, olcott wrote:
    On 4/21/2023 7:17 AM, Mr Flibble wrote:
    On 20/04/2023 8:20 pm, olcott wrote:
    On 4/20/2023 2:08 PM, Mr Flibble wrote:
    On 20/04/2023 6:49 pm, olcott wrote:
    On 4/20/2023 12:32 PM, Mr Flibble wrote:
    On 19/04/2023 11:52 pm, olcott wrote:
    On 4/19/2023 4:14 PM, Mr Flibble wrote:
    On 19/04/2023 10:10 pm, olcott wrote:
    On 4/19/2023 3:32 PM, Mr Flibble wrote:
    On 19/04/2023 8:39 pm, olcott wrote:
    On 4/19/2023 1:47 PM, Mr Flibble wrote:
    On 18/04/2023 11:39 pm, olcott wrote:
    On 4/18/2023 4:55 PM, Mr Flibble wrote: >>>>>>>>>>>>>>>>>>>> On 18/04/2023 4:58 pm, olcott wrote:
    On 4/18/2023 6:32 AM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>>>>> On 4/18/23 1:00 AM, olcott wrote: >>>>>>>>>>>>>>>>>>>>>>> A simulating halt decider correctly predicts >>>>>>>>>>>>>>>>>>>>>>> whether or not its
    correctly simulated input can possibly reach its >>>>>>>>>>>>>>>>>>>>>>> own final state and
    halt. It does this by correctly recognizing >>>>>>>>>>>>>>>>>>>>>>> several non-halting behavior
    patterns in a finite number of steps of correct >>>>>>>>>>>>>>>>>>>>>>> simulation. Inputs that
    do terminate are simply simulated until they >>>>>>>>>>>>>>>>>>>>>>> complete.



    Except t doesn't o this for the "pathological" >>>>>>>>>>>>>>>>>>>>>> program.

    The "Pathological Program" when built on such a >>>>>>>>>>>>>>>>>>>>>> Decider that does give an answer, which you say >>>>>>>>>>>>>>>>>>>>>> will be non-halting, and then "Correctly >>>>>>>>>>>>>>>>>>>>>> Simulated" by giving it representation to a UTM, >>>>>>>>>>>>>>>>>>>>>> we see that the simulation reaches a final state. >>>>>>>>>>>>>>>>>>>>>>
    Thus, your H was WRONG t make the answer. And the >>>>>>>>>>>>>>>>>>>>>> problem is you have added a pattern that isn't >>>>>>>>>>>>>>>>>>>>>> always non-halting.

    When a simulating halt decider correctly >>>>>>>>>>>>>>>>>>>>>>> simulates N steps of its input
    it derives the exact same N steps that a pure UTM >>>>>>>>>>>>>>>>>>>>>>> would derive because
    it is itself a UTM with extra features. >>>>>>>>>>>>>>>>>>>>>>
    But if ISN'T a "UTM" any more, because some of the >>>>>>>>>>>>>>>>>>>>>> features you added have removed essential features >>>>>>>>>>>>>>>>>>>>>> needed for it to be an actual UTM. That you make >>>>>>>>>>>>>>>>>>>>>> this claim shows you don't actually know what a >>>>>>>>>>>>>>>>>>>>>> UTM is.

    This is like saying a NASCAR Racing Car is a >>>>>>>>>>>>>>>>>>>>>> Street Legal vehicle, since it started as one and >>>>>>>>>>>>>>>>>>>>>> just had some extra features axded. >>>>>>>>>>>>>>>>>>>>>>

    My reviewers cannot show that any of the extra >>>>>>>>>>>>>>>>>>>>>>> features added to the UTM
    change the behavior of the simulated input for >>>>>>>>>>>>>>>>>>>>>>> the first N steps of simulation: >>>>>>>>>>>>>>>>>>>>>>> (a) Watching the behavior doesn't change it. >>>>>>>>>>>>>>>>>>>>>>> (b) Matching non-halting behavior patterns >>>>>>>>>>>>>>>>>>>>>>> doesn't change it
    (c) Even aborting the simulation after N steps >>>>>>>>>>>>>>>>>>>>>>> doesn't change the first N steps. >>>>>>>>>>>>>>>>>>>>>>
    No one claims that it doesn't correctly reproduce >>>>>>>>>>>>>>>>>>>>>> the first N steps of the behavior, that is a >>>>>>>>>>>>>>>>>>>>>> Strawman argumen.


    Because of all this we can know that the first N >>>>>>>>>>>>>>>>>>>>>>> steps of input D
    simulated by simulating halt decider H are the >>>>>>>>>>>>>>>>>>>>>>> actual behavior that D
    presents to H for these same N steps. >>>>>>>>>>>>>>>>>>>>>>>
    *computation that halts*… “the Turing machine >>>>>>>>>>>>>>>>>>>>>>> will halt whenever it enters a final state” >>>>>>>>>>>>>>>>>>>>>>> (Linz:1990:234)rrr

    Right, so we are concerned about the behavior of >>>>>>>>>>>>>>>>>>>>>> the ACTUAL machine, not a partial simulation of it. >>>>>>>>>>>>>>>>>>>>>> H(D,D) returns non-halting, but D(D) Halts, so the >>>>>>>>>>>>>>>>>>>>>> answer is wrong.


    When we see (after N steps) that D correctly >>>>>>>>>>>>>>>>>>>>>>> simulated by H cannot
    possibly reach its simulated final state in any >>>>>>>>>>>>>>>>>>>>>>> finite number of steps
    of correct simulation then we have conclusive >>>>>>>>>>>>>>>>>>>>>>> proof that D presents non-
    halting behavior to H.

    But it isn't "Correctly Simulated by H" >>>>>>>>>>>>>>>>>>>>> You agreed that the first N steps are correctly >>>>>>>>>>>>>>>>>>>>> simulated.

    It turns out that the non-halting behavior pattern >>>>>>>>>>>>>>>>>>>>> is correctly
    recognized in the first N steps.

    Your assumption that a program that calls H is >>>>>>>>>>>>>>>>>>>> non-halting is erroneous:


    My new paper anchors its ideas in actual Turing >>>>>>>>>>>>>>>>>>> machines so it is
    unequivocal. The first two pages re only about the >>>>>>>>>>>>>>>>>>> Linz Turing
    machine based proof.

    The H/D material is now on a single page and all >>>>>>>>>>>>>>>>>>> reference
    to the x86 language has been stripped and replaced with >>>>>>>>>>>>>>>>>>> analysis entirely in C.

    With this new paper even Richard admits that the >>>>>>>>>>>>>>>>>>> first N steps
    UTM based simulated by a simulating halt decider are >>>>>>>>>>>>>>>>>>> necessarily the
    actual behavior of these N steps.

    *Simulating (partial) Halt Deciders Defeat the >>>>>>>>>>>>>>>>>>> Halting Problem Proofs*
    https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs

    void Px(void (*x)())
    {
         (void) H(x, x);
         return;
    }

    Px halts (it discards the result that H returns); >>>>>>>>>>>>>>>>>>>> your decider thinks that Px is non-halting which is >>>>>>>>>>>>>>>>>>>> an obvious error due to a design flaw in the >>>>>>>>>>>>>>>>>>>> architecture of your decider.  Only the Flibble >>>>>>>>>>>>>>>>>>>> Signaling Simulating Halt Decider (SSHD) correctly >>>>>>>>>>>>>>>>>>>> handles this case.

    Nope. For H to be a halt decider it must return a halt >>>>>>>>>>>>>>>>>> decision to its caller in finite time

    Although H must always return to some caller H is not >>>>>>>>>>>>>>>>> allowed to return
    to any caller that essentially calls H in infinite >>>>>>>>>>>>>>>>> recursion.

    The Flibble Signaling Simulating Halt Decider (SSHD) >>>>>>>>>>>>>>>> does not have any infinite recursion thereby proving that >>>>>>>>>>>>>>>
    It overrode that behavior that was specified by the >>>>>>>>>>>>>>> machine code for Px.

    Nope. You SHD is not a halt decider as

    I was not even talking about my SHD, I was talking about >>>>>>>>>>>>> how your program does its simulation incorrectly.

    My SSHD does not do its simulation incorrectly: it does its >>>>>>>>>>>> simulation just like I have defined it as evidenced by the >>>>>>>>>>>> fact that it returns a correct halting decision for Px; >>>>>>>>>>>> something your broken SHD gets wrong.


    In order for you to have Px simulated by H terminate normally you
    must change the behavior of Px away from the behavior that its x86
    code specifies.

    Your "x86 code" has nothing to do with how my halt decider works; I
    am using an entirely different simulation method, one that actually
    works.


    void Px(void (*x)())
    {
       (void) H(x, x);
       return;
    }

    Px correctly simulated by H cannot possibly reach past its machine
    address of: [00001b3d].

    _Px()
    [00001b32] 55         push ebp
    [00001b33] 8bec       mov ebp,esp
    [00001b35] 8b4508     mov eax,[ebp+08]
    [00001b38] 50         push eax      // push address of Px
    [00001b39] 8b4d08     mov ecx,[ebp+08]
    [00001b3c] 51         push ecx      // push address of Px
    [00001b3d] e800faffff call 00001542 // Call H
    [00001b42] 83c408     add esp,+08
    [00001b45] 5d         pop ebp
    [00001b46] c3         ret
    Size in bytes:(0021) [00001b46]

    What you are doing is the same as recognizing that
    _Infinite_Loop()
    never halts, forcing it to break out of its infinite loop and jump to
    its "ret" instruction.

    _Infinite_Loop()
    [00001c62] 55         push ebp
    [00001c63] 8bec       mov ebp,esp
    [00001c65] ebfe       jmp 00001c65
    [00001c67] 5d         pop ebp
    [00001c68] c3         ret
    Size in bytes:(0007) [00001c68]
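
    The recognition being described can be made concrete with a small
    illustrative sketch (plain C, invented names, not the poster's actual
    x86 emulator): a simulator that notices a jump whose target is its own
    address can report non-halting without running forever.

    #include <stdio.h>

    enum opcode { OP_NOP, OP_JMP, OP_RET };

    struct insn { enum opcode op; int target; };

    /* Returns 1 if the simulated program reaches OP_RET, 0 if the
       jump-to-self pattern is recognized (or max_steps runs out). */
    int simulate(const struct insn *prog, int max_steps)
    {
        int pc = 0;
        for (int step = 0; step < max_steps; step++) {
            switch (prog[pc].op) {
            case OP_RET:
                return 1;                       /* reached final state      */
            case OP_JMP:
                if (prog[pc].target == pc)
                    return 0;                   /* jmp to self: non-halting */
                pc = prog[pc].target;
                break;
            case OP_NOP:
                pc++;
                break;
            }
        }
        return 0;                               /* gave up after max_steps  */
    }

    int main(void)
    {
        struct insn infinite_loop[] = {
            { OP_NOP, 0 },                      /* stand-in for the prologue */
            { OP_JMP, 1 },                      /* like "jmp 00001c65"       */
            { OP_RET, 0 },
        };
        printf("halts: %d\n", simulate(infinite_loop, 1000));   /* prints 0 */
        return 0;
    }

    Recognizing a jump-to-self is a sound non-halting test; the dispute in
    this thread is whether the pattern matched for Px is equally sound.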

    No I am not: there is no infinite loop in Px above; forking the
    simulation into two branches and returning a different halt decision
    to each branch is a perfectly valid SHD design; again a design,
    unlike yours, that actually works.

    If you say that Px correctly simulated by H ever reaches its own final
    "return" statement and halts you are incorrect.

    Px halts if H is (or is part of) a genuine halt decider.

    The simulated Px only halts if it reaches its own final state in a
    finite number of steps of correct simulation. It can't possibly
    do this.

    Nope, a correctly simulated Px will allow it to reach its own
    final state (termination); your H does NOT perform a correct
    simulation because your H is broken.

    /Flibble


    Strawman deception
    Px correctly simulated by H will never reach its own simulated final
    state of "return" because Px and H have a pathological relationship
    to each other.

    Nope, there is no pathological relationship between Px and H because
    Px discards the result of H (i.e. it does not try to do the opposite
    of the H halting result as per the definition of the Halting Problem).
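
    For contrast, the counter-example that the halting problem proofs
    actually rely on does consult H's verdict and then does the opposite.
    A minimal sketch (plain C, with a trivial stand-in H used only so the
    fragment is self-contained):

    #include <stdio.h>

    /* Stand-in H for illustration only: it always answers "halts" (1). */
    int H(void (*p)(), void (*a)()) { (void)p; (void)a; return 1; }

    /* The classical counter-example: unlike Px it uses H's verdict. */
    void D(void (*x)())
    {
        if (H(x, x))            /* H says "x(x) halts" ...             */
            for (;;) ;          /* ... so D loops forever              */
                                /* otherwise D falls through and halts */
    }

    int main(void)
    {
        printf("With this stand-in H, D(D) would loop forever,\n"
               "doing the opposite of what H said.\n");
        /* D(D);  uncomment to watch the loop */
        return 0;
    }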

    It seems that you continue to fail to see the nested simulation

    01 void Px(void (*x)())
    02 {
    03  (void) H(x, x);
    04  return;
    05 }
    06
    07 void main()
    08 {
    09  H(Px,Px);
    10 }

    *Execution Trace when H never aborts its simulation*
    main() calls H(Px,Px) that simulates Px(Px) at line 09
    *keeps repeating*
        simulated Px(Px) calls simulated H(Px,Px) that simulates Px(Px)
    at line 03 ...
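
    A minimal sketch of this trace (plain C, assumed names, a direct call
    standing in for simulation, and an arbitrary cut-off added only so the
    demo terminates) shows the nesting: every simulated Px(Px) immediately
    starts another simulated H(Px,Px).

    #include <stdio.h>

    static int depth = 0;

    void Px(void (*x)());

    int H(void (*p)(), void (*a)())
    {
        if (depth == 3) {                   /* arbitrary cut-off for the demo */
            printf("depth %d: still no final state in sight\n", depth);
            return 0;
        }
        depth++;
        printf("depth %d: H simulates Px(Px)\n", depth);
        p(a);                               /* "simulation" is a direct call  */
        depth--;
        return 0;
    }

    void Px(void (*x)())
    {
        (void) H(x, x);                     /* line 03 above */
    }

    int main(void)
    {
        H(Px, Px);                          /* line 09 above */
        return 0;
    }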

    "nested simulation" (recursion) is a property of your broken halt decider

    Nested simulation is inherent when any simulating halt decider is
    applied to any of the conventional halting problem counter-example
    inputs. That you may fail to comprehend this is not my mistake.

    I have shown otherwise, dear.

    /Flibble

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Fri Apr 21 18:36:25 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/21/23 11:35 AM, olcott wrote:
    On 4/21/2023 6:18 AM, Richard Damon wrote:

    So, you don't understand the nature of simulation.



    MIT Professor Michael Sipser has agreed that the following verbatim
    paragraph is correct:

    a) If simulating halt decider H correctly simulates its input D until H correctly determines that its simulated D would never stop running
    unless aborted then

    (b) H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.

    Thus it is established that:

    The behavior of D correctly simulated by H
    is the correct behavior to measure.

    *IF* H correctly simulates per the definition of a UTM

    It doesn't, so it isn't.


    The behavior of ⟨Ĥ⟩ correctly simulated by embedded_H
    is the correct behavior to measure.


    Since the simulation done by embedded_H does not meet the definition of
    "correct simulation" that Professor Sipser uses, your argument is VOID.


    You are just PROVING your stupidity.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Fri Apr 21 18:34:19 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/21/23 11:16 AM, olcott wrote:
    On 4/21/2023 7:17 AM, Mr Flibble wrote:
    On 20/04/2023 8:20 pm, olcott wrote:
    On 4/20/2023 2:08 PM, Mr Flibble wrote:
    On 20/04/2023 6:49 pm, olcott wrote:
    On 4/20/2023 12:32 PM, Mr Flibble wrote:
    On 19/04/2023 11:52 pm, olcott wrote:
    On 4/19/2023 4:14 PM, Mr Flibble wrote:
    On 19/04/2023 10:10 pm, olcott wrote:
    On 4/19/2023 3:32 PM, Mr Flibble wrote:
    On 19/04/2023 8:39 pm, olcott wrote:
    On 4/19/2023 1:47 PM, Mr Flibble wrote:
    On 18/04/2023 11:39 pm, olcott wrote:
    On 4/18/2023 4:55 PM, Mr Flibble wrote:
    On 18/04/2023 4:58 pm, olcott wrote:
    On 4/18/2023 6:32 AM, Richard Damon wrote:
    On 4/18/23 1:00 AM, olcott wrote:
    A simulating halt decider correctly predicts whether or >>>>>>>>>>>>>>>>> not its
    correctly simulated input can possibly reach its own >>>>>>>>>>>>>>>>> final state and
    halt. It does this by correctly recognizing several >>>>>>>>>>>>>>>>> non-halting behavior
    patterns in a finite number of steps of correct >>>>>>>>>>>>>>>>> simulation. Inputs that
    do terminate are simply simulated until they complete. >>>>>>>>>>>>>>>>>


    Except t doesn't o this for the "pathological" program. >>>>>>>>>>>>>>>>
    The "Pathological Program" when built on such a Decider >>>>>>>>>>>>>>>> that does give an answer, which you say will be >>>>>>>>>>>>>>>> non-halting, and then "Correctly Simulated" by giving it >>>>>>>>>>>>>>>> representation to a UTM, we see that the simulation >>>>>>>>>>>>>>>> reaches a final state.

    Thus, your H was WRONG t make the answer. And the >>>>>>>>>>>>>>>> problem is you have added a pattern that isn't always >>>>>>>>>>>>>>>> non-halting.

    When a simulating halt decider correctly simulates N >>>>>>>>>>>>>>>>> steps of its input
    it derives the exact same N steps that a pure UTM would >>>>>>>>>>>>>>>>> derive because
    it is itself a UTM with extra features.

    But if ISN'T a "UTM" any more, because some of the >>>>>>>>>>>>>>>> features you added have removed essential features >>>>>>>>>>>>>>>> needed for it to be an actual UTM. That you make this >>>>>>>>>>>>>>>> claim shows you don't actually know what a UTM is. >>>>>>>>>>>>>>>>
    This is like saying a NASCAR Racing Car is a Street >>>>>>>>>>>>>>>> Legal vehicle, since it started as one and just had some >>>>>>>>>>>>>>>> extra features axded.


    My reviewers cannot show that any of the extra features >>>>>>>>>>>>>>>>> added to the UTM
    change the behavior of the simulated input for the >>>>>>>>>>>>>>>>> first N steps of simulation:
    (a) Watching the behavior doesn't change it. >>>>>>>>>>>>>>>>> (b) Matching non-halting behavior patterns doesn't >>>>>>>>>>>>>>>>> change it
    (c) Even aborting the simulation after N steps doesn't >>>>>>>>>>>>>>>>> change the first N steps.

    No one claims that it doesn't correctly reproduce the >>>>>>>>>>>>>>>> first N steps of the behavior, that is a Strawman argumen. >>>>>>>>>>>>>>>>

    Because of all this we can know that the first N steps >>>>>>>>>>>>>>>>> of input D
    simulated by simulating halt decider H are the actual >>>>>>>>>>>>>>>>> behavior that D
    presents to H for these same N steps.

    *computation that halts*… “the Turing machine will halt >>>>>>>>>>>>>>>>> whenever it enters a final state” (Linz:1990:234)rrr >>>>>>>>>>>>>>>>
    Right, so we are concerned about the behavior of the >>>>>>>>>>>>>>>> ACTUAL machine, not a partial simulation of it. >>>>>>>>>>>>>>>> H(D,D) returns non-halting, but D(D) Halts, so the >>>>>>>>>>>>>>>> answer is wrong.


    When we see (after N steps) that D correctly simulated >>>>>>>>>>>>>>>>> by H cannot
    possibly reach its simulated final state in any finite >>>>>>>>>>>>>>>>> number of steps
    of correct simulation then we have conclusive proof >>>>>>>>>>>>>>>>> that D presents non-
    halting behavior to H.

    But it isn't "Correctly Simulated by H"
    You agreed that the first N steps are correctly simulated. >>>>>>>>>>>>>>>
    It turns out that the non-halting behavior pattern is >>>>>>>>>>>>>>> correctly
    recognized in the first N steps.

    Your assumption that a program that calls H is non-halting >>>>>>>>>>>>>> is erroneous:


    My new paper anchors its ideas in actual Turing machines so >>>>>>>>>>>>> it is
    unequivocal. The first two pages re only about the Linz Turing >>>>>>>>>>>>> machine based proof.

    The H/D material is now on a single page and all reference >>>>>>>>>>>>> to the x86 language has been stripped and replaced with >>>>>>>>>>>>> analysis entirely in C.

    With this new paper even Richard admits that the first N steps >>>>>>>>>>>>> UTM based simulated by a simulating halt decider are >>>>>>>>>>>>> necessarily the
    actual behavior of these N steps.

    *Simulating (partial) Halt Deciders Defeat the Halting >>>>>>>>>>>>> Problem Proofs*
    https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs

    void Px(void (*x)())
    {
         (void) H(x, x);
         return;
    }

    Px halts (it discards the result that H returns); your >>>>>>>>>>>>>> decider thinks that Px is non-halting which is an obvious >>>>>>>>>>>>>> error due to a design flaw in the architecture of your >>>>>>>>>>>>>> decider.  Only the Flibble Signaling Simulating Halt >>>>>>>>>>>>>> Decider (SSHD) correctly handles this case.

    Nope. For H to be a halt decider it must return a halt >>>>>>>>>>>> decision to its caller in finite time

    Although H must always return to some caller H is not allowed >>>>>>>>>>> to return
    to any caller that essentially calls H in infinite recursion. >>>>>>>>>>
    The Flibble Signaling Simulating Halt Decider (SSHD) does not >>>>>>>>>> have any infinite recursion thereby proving that

    It overrode that behavior that was specified by the machine
    code for Px.

    Nope. You SHD is not a halt decider as

    I was not even talking about my SHD, I was talking about how your >>>>>>> program does its simulation incorrectly.

    My SSHD does not do its simulation incorrectly: it does its
    simulation just like I have defined it as evidenced by the fact
    that it returns a correct halting decision for Px; something your
    broken SHD gets wrong.


    In order for you to have Px simulated by H terminate normally you
    must change the behavior of Px away from the behavior that its x86
    code specifies.

    Your "x86 code" has nothing to do with how my halt decider works; I
    am using an entirely different simulation method, one that actually
    works.


    void Px(void (*x)())
    {
       (void) H(x, x);
       return;
    }

    Px correctly simulated by H cannot possibly reach past its machine
    address of: [00001b3d].

    _Px()
    [00001b32] 55         push ebp
    [00001b33] 8bec       mov ebp,esp
    [00001b35] 8b4508     mov eax,[ebp+08]
    [00001b38] 50         push eax      // push address of Px >>>>> [00001b39] 8b4d08     mov ecx,[ebp+08]
    [00001b3c] 51         push ecx      // push address of Px >>>>> [00001b3d] e800faffff call 00001542 // Call H
    [00001b42] 83c408     add esp,+08
    [00001b45] 5d         pop ebp
    [00001b46] c3         ret
    Size in bytes:(0021) [00001b46]

    What you are doing is the the same as recognizing that
    _Infinite_Loop()
    never halts, forcing it to break out of its infinite loop and jump to >>>>> its "ret" instruction

    _Infinite_Loop()
    [00001c62] 55         push ebp
    [00001c63] 8bec       mov ebp,esp
    [00001c65] ebfe       jmp 00001c65
    [00001c67] 5d         pop ebp
    [00001c68] c3         ret
    Size in bytes:(0007) [00001c68]

    No I am not: there is no infinite loop in Px above; forking the
    simulation into two branches and returning a different halt decision
    to each branch is a perfectly valid SHD design; again a design,
    unlike yours, that actually works.

    If you say that Px correctly simulated by H ever reaches its own final
    "return" statement and halts you are incorrect.

    Px halts if H is (or is part of) a genuine halt decider.

    The simulated Px only halts if it reaches its own final state in a
    finite number of steps of correct simulation. It can't possibly do this.

    So, you're saying that a UTM doesn't do a "Correct Simulation"?

    UTM(Px,Px) will see Px call H, then H simulating its copy of Px(Px),
    then H aborting its simulation and returning non-halting to Px, and
    then Px halting.


    It is only the PARTIAL simulation by whatever H Px is built on that
    can't reach that state. The UTM will ALWAYS reach that state slightly
    (one recursion) after your H stops its simulation.
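
    A minimal sketch of this point (plain C, assumed names and an assumed
    one-level abort, not the H being debated): because H returns in finite
    time, a direct run of Px(Px), which is the behavior a full UTM-style
    simulation reproduces, reaches its "return" and halts.

    #include <stdio.h>

    static int nesting = 0;

    void Px(void (*x)());

    int H(void (*p)(), void (*a)())
    {
        if (nesting > 0)
            return 0;               /* already inside a simulation: give up */
        nesting++;
        p(a);                       /* "simulate" Px(Px) once (direct call) */
        nesting--;
        return 0;                   /* report non-halting                   */
    }

    void Px(void (*x)())
    {
        (void) H(x, x);             /* verdict discarded, as in the thread  */
    }

    int main(void)
    {
        Px(Px);                     /* the actual machine runs to completion */
        printf("Px(Px) halted even though H answered non-halting\n");
        return 0;
    }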

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Fri Apr 21 18:22:11 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/21/2023 5:36 PM, Richard Damon wrote:
    On 4/21/23 11:35 AM, olcott wrote:
    On 4/21/2023 6:18 AM, Richard Damon wrote:

    So, you don't understand the nature of simulation.



    MIT Professor Michael Sipser has agreed that the following verbatim
    paragraph is correct:

    a) If simulating halt decider H correctly simulates its input D until H
    correctly determines that its simulated D would never stop running
    unless aborted then

    (b) H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.

    Thus it is established that:

    The behavior of D correctly simulated by H
    is the correct behavior to measure.

    *IF* H correctly simulates per the definition of a UTM

    It doesn't, so it isn't.


    The behavior of ⟨Ĥ⟩ correctly simulated by embedded_H
    is the correct behavior to measure.


    Since the simulation done by embedded_H does not meet the definition of "correct simulation" that Professer Sipser uses, your arguement is VOID.


    You are just PROVING your stupidity.

    Always with the strawman error.
    I am saying that when ⟨Ĥ⟩ is correctly simulated by embedded_H it cannot possibly reach its own simulated final state of ⟨Ĥ.qn⟩ in any finite number of steps because Ĥ is defined to have a pathological relationship
    to embedded_H.

    When we examine the behavior of ⟨Ĥ⟩ simulated by a pure UTM, or even
    by another simulating halt decider such as embedded_H1 that has no such
    pathological relationship, as the basis of the actual behavior of the
    input to embedded_H, we are comparing apples to lemons and rejecting
    the apples because lemons are too sour.



    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Fri Apr 21 18:18:12 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/21/2023 5:34 PM, Richard Damon wrote:
    On 4/21/23 11:16 AM, olcott wrote:
    On 4/21/2023 7:17 AM, Mr Flibble wrote:
    On 20/04/2023 8:20 pm, olcott wrote:
    On 4/20/2023 2:08 PM, Mr Flibble wrote:
    On 20/04/2023 6:49 pm, olcott wrote:
    On 4/20/2023 12:32 PM, Mr Flibble wrote:
    On 19/04/2023 11:52 pm, olcott wrote:
    On 4/19/2023 4:14 PM, Mr Flibble wrote:
    On 19/04/2023 10:10 pm, olcott wrote:
    On 4/19/2023 3:32 PM, Mr Flibble wrote:
    On 19/04/2023 8:39 pm, olcott wrote:
    On 4/19/2023 1:47 PM, Mr Flibble wrote:
    On 18/04/2023 11:39 pm, olcott wrote:
    On 4/18/2023 4:55 PM, Mr Flibble wrote:
    On 18/04/2023 4:58 pm, olcott wrote:
    On 4/18/2023 6:32 AM, Richard Damon wrote:
    On 4/18/23 1:00 AM, olcott wrote:
    A simulating halt decider correctly predicts whether >>>>>>>>>>>>>>>>>> or not its
    correctly simulated input can possibly reach its own >>>>>>>>>>>>>>>>>> final state and
    halt. It does this by correctly recognizing several >>>>>>>>>>>>>>>>>> non-halting behavior
    patterns in a finite number of steps of correct >>>>>>>>>>>>>>>>>> simulation. Inputs that
    do terminate are simply simulated until they complete. >>>>>>>>>>>>>>>>>>


    Except t doesn't o this for the "pathological" program. >>>>>>>>>>>>>>>>>
    The "Pathological Program" when built on such a Decider >>>>>>>>>>>>>>>>> that does give an answer, which you say will be >>>>>>>>>>>>>>>>> non-halting, and then "Correctly Simulated" by giving >>>>>>>>>>>>>>>>> it representation to a UTM, we see that the simulation >>>>>>>>>>>>>>>>> reaches a final state.

    Thus, your H was WRONG t make the answer. And the >>>>>>>>>>>>>>>>> problem is you have added a pattern that isn't always >>>>>>>>>>>>>>>>> non-halting.

    When a simulating halt decider correctly simulates N >>>>>>>>>>>>>>>>>> steps of its input
    it derives the exact same N steps that a pure UTM >>>>>>>>>>>>>>>>>> would derive because
    it is itself a UTM with extra features.

    But if ISN'T a "UTM" any more, because some of the >>>>>>>>>>>>>>>>> features you added have removed essential features >>>>>>>>>>>>>>>>> needed for it to be an actual UTM. That you make this >>>>>>>>>>>>>>>>> claim shows you don't actually know what a UTM is. >>>>>>>>>>>>>>>>>
    This is like saying a NASCAR Racing Car is a Street >>>>>>>>>>>>>>>>> Legal vehicle, since it started as one and just had >>>>>>>>>>>>>>>>> some extra features axded.


    My reviewers cannot show that any of the extra >>>>>>>>>>>>>>>>>> features added to the UTM
    change the behavior of the simulated input for the >>>>>>>>>>>>>>>>>> first N steps of simulation:
    (a) Watching the behavior doesn't change it. >>>>>>>>>>>>>>>>>> (b) Matching non-halting behavior patterns doesn't >>>>>>>>>>>>>>>>>> change it
    (c) Even aborting the simulation after N steps doesn't >>>>>>>>>>>>>>>>>> change the first N steps.

    No one claims that it doesn't correctly reproduce the >>>>>>>>>>>>>>>>> first N steps of the behavior, that is a Strawman argumen. >>>>>>>>>>>>>>>>>

    Because of all this we can know that the first N steps >>>>>>>>>>>>>>>>>> of input D
    simulated by simulating halt decider H are the actual >>>>>>>>>>>>>>>>>> behavior that D
    presents to H for these same N steps.

    *computation that halts*… “the Turing machine will >>>>>>>>>>>>>>>>>> halt whenever it enters a final state” (Linz:1990:234)rrr >>>>>>>>>>>>>>>>>
    Right, so we are concerned about the behavior of the >>>>>>>>>>>>>>>>> ACTUAL machine, not a partial simulation of it. >>>>>>>>>>>>>>>>> H(D,D) returns non-halting, but D(D) Halts, so the >>>>>>>>>>>>>>>>> answer is wrong.


    When we see (after N steps) that D correctly simulated >>>>>>>>>>>>>>>>>> by H cannot
    possibly reach its simulated final state in any finite >>>>>>>>>>>>>>>>>> number of steps
    of correct simulation then we have conclusive proof >>>>>>>>>>>>>>>>>> that D presents non-
    halting behavior to H.

    But it isn't "Correctly Simulated by H"
    You agreed that the first N steps are correctly simulated. >>>>>>>>>>>>>>>>
    It turns out that the non-halting behavior pattern is >>>>>>>>>>>>>>>> correctly
    recognized in the first N steps.

    Your assumption that a program that calls H is
    non-halting is erroneous:


    My new paper anchors its ideas in actual Turing machines >>>>>>>>>>>>>> so it is
    unequivocal. The first two pages re only about the Linz >>>>>>>>>>>>>> Turing
    machine based proof.

    The H/D material is now on a single page and all reference >>>>>>>>>>>>>> to the x86 language has been stripped and replaced with >>>>>>>>>>>>>> analysis entirely in C.

    With this new paper even Richard admits that the first N >>>>>>>>>>>>>> steps
    UTM based simulated by a simulating halt decider are >>>>>>>>>>>>>> necessarily the
    actual behavior of these N steps.

    *Simulating (partial) Halt Deciders Defeat the Halting >>>>>>>>>>>>>> Problem Proofs*
    https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs

    void Px(void (*x)())
    {
         (void) H(x, x);
         return;
    }

    Px halts (it discards the result that H returns); your >>>>>>>>>>>>>>> decider thinks that Px is non-halting which is an obvious >>>>>>>>>>>>>>> error due to a design flaw in the architecture of your >>>>>>>>>>>>>>> decider.  Only the Flibble Signaling Simulating Halt >>>>>>>>>>>>>>> Decider (SSHD) correctly handles this case.

    Nope. For H to be a halt decider it must return a halt >>>>>>>>>>>>> decision to its caller in finite time

    Although H must always return to some caller H is not
    allowed to return
    to any caller that essentially calls H in infinite recursion. >>>>>>>>>>>
    The Flibble Signaling Simulating Halt Decider (SSHD) does not >>>>>>>>>>> have any infinite recursion thereby proving that

    It overrode that behavior that was specified by the machine >>>>>>>>>> code for Px.

    Nope. You SHD is not a halt decider as

    I was not even talking about my SHD, I was talking about how
    your program does its simulation incorrectly.

    My SSHD does not do its simulation incorrectly: it does its
    simulation just like I have defined it as evidenced by the fact
    that it returns a correct halting decision for Px; something your >>>>>>> broken SHD gets wrong.


    In order for you to have Px simulated by H terminate normally you
    must change the behavior of Px away from the behavior that its x86 >>>>>> code specifies.

    Your "x86 code" has nothing to do with how my halt decider works; I
    am using an entirely different simulation method, one that actually
    works.


    void Px(void (*x)())
    {
       (void) H(x, x);
       return;
    }

    Px correctly simulated by H cannot possibly reach past its machine >>>>>> address of: [00001b3d].

    _Px()
    [00001b32] 55         push ebp
    [00001b33] 8bec       mov ebp,esp
    [00001b35] 8b4508     mov eax,[ebp+08]
    [00001b38] 50         push eax      // push address of Px >>>>>> [00001b39] 8b4d08     mov ecx,[ebp+08]
    [00001b3c] 51         push ecx      // push address of Px >>>>>> [00001b3d] e800faffff call 00001542 // Call H
    [00001b42] 83c408     add esp,+08
    [00001b45] 5d         pop ebp
    [00001b46] c3         ret
    Size in bytes:(0021) [00001b46]

    What you are doing is the the same as recognizing that
    _Infinite_Loop()
    never halts, forcing it to break out of its infinite loop and jump to >>>>>> its "ret" instruction

    _Infinite_Loop()
    [00001c62] 55         push ebp
    [00001c63] 8bec       mov ebp,esp
    [00001c65] ebfe       jmp 00001c65
    [00001c67] 5d         pop ebp
    [00001c68] c3         ret
    Size in bytes:(0007) [00001c68]

    No I am not: there is no infinite loop in Px above; forking the
    simulation into two branches and returning a different halt
    decision to each branch is a perfectly valid SHD design; again a
    design, unlike yours, that actually works.

    If you say that Px correctly simulated by H ever reaches its own final >>>> "return" statement and halts you are incorrect.

    Px halts if H is (or is part of) a genuine halt decider.

    The simulated Px only halts if it reaches its own final state in a
    finite number of steps of correct simulation. It can't possibly do this.

    So, you're saying that a UTM doesn't do a "Correct Simulation"?


    Always with the strawman error.
    I am saying that when Px is correctly simulated by H it cannot possibly
    reach its own simulated "return" instruction in any finite number of
    steps because Px is defined to have a pathological relationship to H.

    When we examine the behavior of Px simulated by a pure simulator, or
    even by another simulating halt decider such as H1 that has no such
    pathological relationship, as the basis of the actual behavior of the
    input to H, we are comparing apples to lemons and rejecting the apples
    because lemons are too sour.


    UTM(Px,Px) will see Px call H, and then H simulation its copy of Px(Px),
    then aborting its simulaiton and returning non-halting to Px and then Px halting


    It is only the PARTIAL simulation by whatever H Px is built on that
    can't reach that state. The UTM will ALWAYS reach that state slightly
    (one recursion) after your H stops its simulation.




    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Fri Apr 21 19:33:35 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/21/23 7:18 PM, olcott wrote:
    On 4/21/2023 5:34 PM, Richard Damon wrote:
    On 4/21/23 11:16 AM, olcott wrote:
    On 4/21/2023 7:17 AM, Mr Flibble wrote:
    On 20/04/2023 8:20 pm, olcott wrote:
    On 4/20/2023 2:08 PM, Mr Flibble wrote:
    On 20/04/2023 6:49 pm, olcott wrote:
    On 4/20/2023 12:32 PM, Mr Flibble wrote:
    On 19/04/2023 11:52 pm, olcott wrote:
    On 4/19/2023 4:14 PM, Mr Flibble wrote:
    On 19/04/2023 10:10 pm, olcott wrote:
    On 4/19/2023 3:32 PM, Mr Flibble wrote:
    On 19/04/2023 8:39 pm, olcott wrote:
    On 4/19/2023 1:47 PM, Mr Flibble wrote:
    On 18/04/2023 11:39 pm, olcott wrote:
    On 4/18/2023 4:55 PM, Mr Flibble wrote:
    On 18/04/2023 4:58 pm, olcott wrote:
    On 4/18/2023 6:32 AM, Richard Damon wrote:
    On 4/18/23 1:00 AM, olcott wrote:
    A simulating halt decider correctly predicts whether >>>>>>>>>>>>>>>>>>> or not its
    correctly simulated input can possibly reach its own >>>>>>>>>>>>>>>>>>> final state and
    halt. It does this by correctly recognizing several >>>>>>>>>>>>>>>>>>> non-halting behavior
    patterns in a finite number of steps of correct >>>>>>>>>>>>>>>>>>> simulation. Inputs that
    do terminate are simply simulated until they complete. >>>>>>>>>>>>>>>>>>>


    Except t doesn't o this for the "pathological" program. >>>>>>>>>>>>>>>>>>
    The "Pathological Program" when built on such a >>>>>>>>>>>>>>>>>> Decider that does give an answer, which you say will >>>>>>>>>>>>>>>>>> be non-halting, and then "Correctly Simulated" by >>>>>>>>>>>>>>>>>> giving it representation to a UTM, we see that the >>>>>>>>>>>>>>>>>> simulation reaches a final state.

    Thus, your H was WRONG t make the answer. And the >>>>>>>>>>>>>>>>>> problem is you have added a pattern that isn't always >>>>>>>>>>>>>>>>>> non-halting.

    When a simulating halt decider correctly simulates N >>>>>>>>>>>>>>>>>>> steps of its input
    it derives the exact same N steps that a pure UTM >>>>>>>>>>>>>>>>>>> would derive because
    it is itself a UTM with extra features.

    But if ISN'T a "UTM" any more, because some of the >>>>>>>>>>>>>>>>>> features you added have removed essential features >>>>>>>>>>>>>>>>>> needed for it to be an actual UTM. That you make this >>>>>>>>>>>>>>>>>> claim shows you don't actually know what a UTM is. >>>>>>>>>>>>>>>>>>
    This is like saying a NASCAR Racing Car is a Street >>>>>>>>>>>>>>>>>> Legal vehicle, since it started as one and just had >>>>>>>>>>>>>>>>>> some extra features axded.


    My reviewers cannot show that any of the extra >>>>>>>>>>>>>>>>>>> features added to the UTM
    change the behavior of the simulated input for the >>>>>>>>>>>>>>>>>>> first N steps of simulation:
    (a) Watching the behavior doesn't change it. >>>>>>>>>>>>>>>>>>> (b) Matching non-halting behavior patterns doesn't >>>>>>>>>>>>>>>>>>> change it
    (c) Even aborting the simulation after N steps >>>>>>>>>>>>>>>>>>> doesn't change the first N steps.

    No one claims that it doesn't correctly reproduce the >>>>>>>>>>>>>>>>>> first N steps of the behavior, that is a Strawman >>>>>>>>>>>>>>>>>> argumen.


    Because of all this we can know that the first N >>>>>>>>>>>>>>>>>>> steps of input D
    simulated by simulating halt decider H are the actual >>>>>>>>>>>>>>>>>>> behavior that D
    presents to H for these same N steps.

    *computation that halts*… “the Turing machine will >>>>>>>>>>>>>>>>>>> halt whenever it enters a final state” >>>>>>>>>>>>>>>>>>> (Linz:1990:234)rrr

    Right, so we are concerned about the behavior of the >>>>>>>>>>>>>>>>>> ACTUAL machine, not a partial simulation of it. >>>>>>>>>>>>>>>>>> H(D,D) returns non-halting, but D(D) Halts, so the >>>>>>>>>>>>>>>>>> answer is wrong.


    When we see (after N steps) that D correctly >>>>>>>>>>>>>>>>>>> simulated by H cannot
    possibly reach its simulated final state in any >>>>>>>>>>>>>>>>>>> finite number of steps
    of correct simulation then we have conclusive proof >>>>>>>>>>>>>>>>>>> that D presents non-
    halting behavior to H.

    But it isn't "Correctly Simulated by H"
    You agreed that the first N steps are correctly simulated. >>>>>>>>>>>>>>>>>
    It turns out that the non-halting behavior pattern is >>>>>>>>>>>>>>>>> correctly
    recognized in the first N steps.

    Your assumption that a program that calls H is >>>>>>>>>>>>>>>> non-halting is erroneous:


    My new paper anchors its ideas in actual Turing machines >>>>>>>>>>>>>>> so it is
    unequivocal. The first two pages re only about the Linz >>>>>>>>>>>>>>> Turing
    machine based proof.

    The H/D material is now on a single page and all reference >>>>>>>>>>>>>>> to the x86 language has been stripped and replaced with >>>>>>>>>>>>>>> analysis entirely in C.

    With this new paper even Richard admits that the first N >>>>>>>>>>>>>>> steps
    UTM based simulated by a simulating halt decider are >>>>>>>>>>>>>>> necessarily the
    actual behavior of these N steps.

    *Simulating (partial) Halt Deciders Defeat the Halting >>>>>>>>>>>>>>> Problem Proofs*
    https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs

    void Px(void (*x)())
    {
         (void) H(x, x);
         return;
    }

    Px halts (it discards the result that H returns); your >>>>>>>>>>>>>>>> decider thinks that Px is non-halting which is an >>>>>>>>>>>>>>>> obvious error due to a design flaw in the architecture >>>>>>>>>>>>>>>> of your decider.  Only the Flibble Signaling Simulating >>>>>>>>>>>>>>>> Halt Decider (SSHD) correctly handles this case. >>>>>>>>>>>>>>
    Nope. For H to be a halt decider it must return a halt >>>>>>>>>>>>>> decision to its caller in finite time

    Although H must always return to some caller H is not >>>>>>>>>>>>> allowed to return
    to any caller that essentially calls H in infinite recursion. >>>>>>>>>>>>
    The Flibble Signaling Simulating Halt Decider (SSHD) does >>>>>>>>>>>> not have any infinite recursion thereby proving that

    It overrode that behavior that was specified by the machine >>>>>>>>>>> code for Px.

    Nope. You SHD is not a halt decider as

    I was not even talking about my SHD, I was talking about how >>>>>>>>> your program does its simulation incorrectly.

    My SSHD does not do its simulation incorrectly: it does its
    simulation just like I have defined it as evidenced by the fact >>>>>>>> that it returns a correct halting decision for Px; something
    your broken SHD gets wrong.


    In order for you to have Px simulated by H terminate normally you >>>>>>> must change the behavior of Px away from the behavior that its
    x86 code specifies.

    Your "x86 code" has nothing to do with how my halt decider works;
    I am using an entirely different simulation method, one that
    actually works.


    void Px(void (*x)())
    {
       (void) H(x, x);
       return;
    }

    Px correctly simulated by H cannot possibly reach past its
    machine address of: [00001b3d].

    _Px()
    [00001b32] 55         push ebp
    [00001b33] 8bec       mov ebp,esp
    [00001b35] 8b4508     mov eax,[ebp+08]
    [00001b38] 50         push eax      // push address of Px >>>>>>> [00001b39] 8b4d08     mov ecx,[ebp+08]
    [00001b3c] 51         push ecx      // push address of Px >>>>>>> [00001b3d] e800faffff call 00001542 // Call H
    [00001b42] 83c408     add esp,+08
    [00001b45] 5d         pop ebp
    [00001b46] c3         ret
    Size in bytes:(0021) [00001b46]

    What you are doing is the the same as recognizing that
    _Infinite_Loop()
    never halts, forcing it to break out of its infinite loop and
    jump to
    its "ret" instruction

    _Infinite_Loop()
    [00001c62] 55         push ebp
    [00001c63] 8bec       mov ebp,esp
    [00001c65] ebfe       jmp 00001c65
    [00001c67] 5d         pop ebp
    [00001c68] c3         ret
    Size in bytes:(0007) [00001c68]

    No I am not: there is no infinite loop in Px above; forking the
    simulation into two branches and returning a different halt
    decision to each branch is a perfectly valid SHD design; again a
    design, unlike yours, that actually works.

    If you say that Px correctly simulated by H ever reaches its own final >>>>> "return" statement and halts you are incorrect.

    Px halts if H is (or is part of) a genuine halt decider.

    The simulated Px only halts if it reaches its own final state in a
    finite number of steps of correct simulation. It can't possibly do this.

    So, you're saying that a UTM doesn't do a "Correct Simulation"?


    Always with the strawman error.
    I am saying that when Px is correctly simulated by H it cannot possibly
    reach its own simulated "return" instruction in any finite number of
    steps because Px is defined to have a pathological relationship to H.

    Since H never "Correctly Simulates" the input per the definition that
    allows using a simulation instead of the actual machine's behavior, YOUR
    method is the STRAWMAN.


    When we examine the behavior of Px simulated by a pure simulator or even another simulating halt decider such as H1 having no such pathological relationship as the basis of the actual behavior of the input to H we
    are comparing apples to lemons and rejecting the apples because lemons
    are too sour.

    Maybe, but the question is asking for the oranges that the pure
    simulator gives, not the apples that your H gives.

    H is just doing the wrong thing.

    Your failure to see that just shows how blind you are to the actual
    truth of the system.

    H MUST answer about the behavior of the actual machine to be a Halt
    Decider, since that is what the mapping a Halt Decider is supposed to
    compute is based on.



    UTM(Px,Px) will see Px call H, and then H simulation its copy of
    Px(Px), then aborting its simulaiton and returning non-halting to Px
    and then Px halting


    It is only the PARTIAL simulation by whatever H Px is built on that
    can't reach that state. The UTM will ALWAYS reach that state slightly
    (one recursion) after your H stops its simulation.





    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Fri Apr 21 19:35:32 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/21/23 7:22 PM, olcott wrote:
    On 4/21/2023 5:36 PM, Richard Damon wrote:
    On 4/21/23 11:35 AM, olcott wrote:
    On 4/21/2023 6:18 AM, Richard Damon wrote:

    So, you don't understand the nature of simulation.



    MIT Professor Michael Sipser has agreed that the following verbatim
    paragraph is correct:

    a) If simulating halt decider H correctly simulates its input D until H
    correctly determines that its simulated D would never stop running
    unless aborted then

    (b) H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.

    Thus it is established that:

    The behavior of D correctly simulated by H
    is the correct behavior to measure.

    *IF* H correctly simulates per the definition of a UTM

    It doesn't, so it isn't.


    The behavior of ⟨Ĥ⟩ correctly simulated by embedded_H
    is the correct behavior to measure.


    Since the simulation done by embedded_H does not meet the definition
    of "correct simulation" that Professer Sipser uses, your arguement is
    VOID.


    You are just PROVING your stupidity.

    Always with the strawman error.
    I am saying that when ⟨Ĥ⟩ is correctly simulated by embedded_H it cannot possibly reach its own simulated final state of ⟨Ĥ.qn⟩ in any finite number of steps because Ĥ is defined to have a pathological relationship
    to embedded_H.

    Since H never "Correctly Simulates" the input per the definition that
    allows using a simulation instead of the actual machine's behavior, YOUR
    method is the STRAWMAN.




    When we examine the behavior of ⟨Ĥ⟩ simulated by a pure UTM or even another simulating halt decider such as embedded_H1 having no such pathological relationship as the basis of the actual behavior of the
    input to embedded_H we are comparing apples to lemons and rejecting the apples because lemons are too sour.


    Maybe, but the question is asking for the lemons that the pure simulator
    gives, not the apples that your H gives.

    H is just doing the wrong thing.

    Your failure to see that just shows how blind you are to the actual
    truth of the system.

    H MUST answer about the behavior of the actual machine to be a Halt
    Decider, since that is what the mapping a Halt Decider is supposed to
    compute is based on.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Fri Apr 21 19:51:05 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/21/2023 6:35 PM, Richard Damon wrote:
    On 4/21/23 7:22 PM, olcott wrote:
    On 4/21/2023 5:36 PM, Richard Damon wrote:
    On 4/21/23 11:35 AM, olcott wrote:
    On 4/21/2023 6:18 AM, Richard Damon wrote:

    So, you don't understand the nature of simulation.



    MIT Professor Michael Sipser has agreed that the following verbatim
    paragraph is correct:

    a) If simulating halt decider H correctly simulates its input D until H >>>> correctly determines that its simulated D would never stop running
    unless aborted then

    (b) H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.

    Thus it is established that:

    The behavior of D correctly simulated by H
    is the correct behavior to measure.

    *IF* H correctly simulates per the definition of a UTM

    It doesn't, so it isn't.


    The behavior of ⟨Ĥ⟩ correctly simulated by embedded_H
    is the correct behavior to measure.


    Since the simulation done by embedded_H does not meet the definition
    of "correct simulation" that Professer Sipser uses, your arguement is
    VOID.


    You are just PROVING your stupidity.

    Always with the strawman error.
    I am saying that when ⟨Ĥ⟩ is correctly simulated by embedded_H it cannot
    possibly reach its own simulated final state of ⟨Ĥ.qn⟩ in any finite
    number of steps because Ĥ is defined to have a pathological relationship
    to embedded_H.

    Since H never "Correctly Simulates" the input per the definition that
    allows using a simulation instead of the actual machines behavior, YOUR method is the STRAWMAN.




    When we examine the behavior of ⟨Ĥ⟩ simulated by a pure UTM or even
    another simulating halt decider such as embedded_H1 having no such
    pathological relationship as the basis of the actual behavior of the
    input to embedded_H we are comparing apples to lemons and rejecting the
    apples because lemons are too sour.


    Maybe, but the question is asking for the lemons that the pure simulator gives, not the apples that you H gives.

    H is just doing the wrong thing.

    Your failure to see that just shows how blind you are to the actual
    truth of the system.

    H MUST answer about the behavior of the actual machine to be a Halt
    Decider, since that is what the mapping a Halt Decider is supposed to
    answer is based on.


    When a simulating halt decider or even a plain UTM examines the behavior
    of its input and the SHD or UTM has a pathological relationship to its
    input then when another SHD or UTM not having a pathological
    relationship to this input is an incorrect proxy for the actual behavior
    of this actual input to the original SHD or UTM.

    I used to think that you were simply lying to play head games, I no
    longer believe this. Now I believe that you are ensnared by group-think.

    Group-think is the way that 40% of the electorate could honestly believe
    that significant voter fraud changed the outcome of the 2020 election
    even though there has very persistently been zero evidence of this. https://www.psychologytoday.com/us/basics/groupthink

    Hopefully they will not believe that Fox news paid $787 million to trick
    people into believing that there was no voter fraud.

    Maybe they will believe that tiny space aliens living in the heads of
    Fox leadership took control of their brains and forced them to pay.

    The actual behavior of the actual input is correctly determined by an
    embedded UTM that has been adapted to watch the behavior of its
    simulation of its input and match any non-halting behavior patterns.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Fri Apr 21 21:02:25 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/21/23 8:51 PM, olcott wrote:
    On 4/21/2023 6:35 PM, Richard Damon wrote:
    On 4/21/23 7:22 PM, olcott wrote:
    On 4/21/2023 5:36 PM, Richard Damon wrote:
    On 4/21/23 11:35 AM, olcott wrote:
    On 4/21/2023 6:18 AM, Richard Damon wrote:

    So, you don't understand the nature of simulation.



    MIT Professor Michael Sipser has agreed that the following verbatim
    paragraph is correct:

    a) If simulating halt decider H correctly simulates its input D
    until H
    correctly determines that its simulated D would never stop running
    unless aborted then

    (b) H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.

    Thus it is established that:

    The behavior of D correctly simulated by H
    is the correct behavior to measure.

    *IF* H correctly simulates per the definition of a UTM

    It doesn't, so it isn't.


    The behavior of ⟨Ĥ⟩ correctly simulated by embedded_H
    is the correct behavior to measure.


    Since the simulation done by embedded_H does not meet the definition
    of "correct simulation" that Professer Sipser uses, your arguement
    is VOID.


    You are just PROVING your stupidity.

    Always with the strawman error.
    I am saying that when ⟨Ĥ⟩ is correctly simulated by embedded_H it cannot
    possibly reach its own simulated final state of ⟨Ĥ.qn⟩ in any finite >>> number of steps because Ĥ is defined to have a pathological relationship >>> to embedded_H.

    Since H never "Correctly Simulates" the input per the definition that
    allows using a simulation instead of the actual machines behavior,
    YOUR method is the STRAWMAN.




    When we examine the behavior of ⟨Ĥ⟩ simulated by a pure UTM or even >>> another simulating halt decider such as embedded_H1 having no such
    pathological relationship as the basis of the actual behavior of the
    input to embedded_H we are comparing apples to lemons and rejecting the
    apples because lemons are too sour.


    Maybe, but the question is asking for the lemons that the pure
    simulator gives, not the apples that you H gives.

    H is just doing the wrong thing.

    Your failure to see that just shows how blind you are to the actual
    truth of the system.

    H MUST answer about the behavior of the actual machine to be a Halt
    Decider, since that is what the mapping a Halt Decider is supposed to
    answer is based on.


    When a simulating halt decider or even a plain UTM examines the behavior
    of its input and the SHD or UTM has a pathological relationship to its
    input then when another SHD or UTM not having a pathological
    relationship to this input is an incorrect proxy for the actual behavior
    of this actual input to the original SHD or UTM.

    Nope. If an input has your "pathological" relationship to a UTM, then
    YES, the UTM will generate an infinite behavior, but so does the machine itself, and ANY UTM will see that same infinite behavior.

    The problem is that your SHD is NOT a UTM, and thus the fact that it
    aborts its simulation and returns an answer changes the behavior of the
    machine that USED it (compared to a UTM), and thus to be "correct", the
    SHD needs to take that into account.


    I used to think that you were simply lying to play head games, I no
    longer believe this. Now I believe that you are ensnared by group-think.


    Nope, YOU are the one ensnared in your own fantasy world of lies.


    Group-think is the way that 40% of the electorate could honestly believe
    that significant voter fraud changed the outcome of the 2020 election
    even though there has very persistently been zero evidence of this. https://www.psychologytoday.com/us/basics/groupthink

    And your fantasy world is why you think that a Halt Decider, which is
    DEFINED such that H(D,D) needs to return the answer "Halting" if D(D)
    Halts, is correct to give the answer non-halting even though D(D) Halts.

    You are just believing your own lies.


    Hopefully they will not believe that Fox news paid $787 million to trick people into believing that there was no voter fraud.

    No, they are paying $787 million BECAUSE they tried to gain views by
    telling them the lies they wanted to hear.

    At least they KNEW they were lying, but didn't care, and had to pay the
    price.

    You don't seem to understand that you are lying just as bad as they were.


    Maybe they will believe that tiny space aliens living in the heads of
    Fox leadership took control of their brains and forced them to pay.

    The actual behavior of the actual input is correctly determined by an embedded UTM that has been adapted to watch the behavior of its
    simulation of its input and match any non-halting behavior patterns.


    But embedded_H isn't "embedded_UTM", so you are just living a lie.

    You are just too ignorant to understand that a UTM can't be modified to
    stop its simulation and still be a UTM.

    That is like saying that all racing cars are street legal, because they
    are based on the design of cars that were street legal.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Fri Apr 21 21:10:51 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/21/2023 8:02 PM, Richard Damon wrote:
    On 4/21/23 8:51 PM, olcott wrote:
    On 4/21/2023 6:35 PM, Richard Damon wrote:
    On 4/21/23 7:22 PM, olcott wrote:
    On 4/21/2023 5:36 PM, Richard Damon wrote:
    On 4/21/23 11:35 AM, olcott wrote:
    On 4/21/2023 6:18 AM, Richard Damon wrote:

    So, you don't understand the nature of simulation.



    MIT Professor Michael Sipser has agreed that the following
    verbatim paragraph is correct:

    a) If simulating halt decider H correctly simulates its input D
    until H
    correctly determines that its simulated D would never stop running >>>>>> unless aborted then

    (b) H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.

    Thus it is established that:

    The behavior of D correctly simulated by H
    is the correct behavior to measure.

    *IF* H correctly simulates per the definition of a UTM

    It doesn't, so it isn't.


    The behavior of ⟨Ĥ⟩ correctly simulated by embedded_H
    is the correct behavior to measure.


    Since the simulation done by embedded_H does not meet the
    definition of "correct simulation" that Professer Sipser uses, your
    arguement is VOID.


    You are just PROVING your stupidity.

    Always with the strawman error.
    I am saying that when ⟨Ĥ⟩ is correctly simulated by embedded_H it >>>> cannot
    possibly reach its own simulated final state of ⟨Ĥ.qn⟩ in any finite >>>> number of steps because Ĥ is defined to have a pathological
    relationship
    to embedded_H.

    Since H never "Correctly Simulates" the input per the definition that
    allows using a simulation instead of the actual machines behavior,
    YOUR method is the STRAWMAN.




    When we examine the behavior of ⟨Ĥ⟩ simulated by a pure UTM or even >>>> another simulating halt decider such as embedded_H1 having no such
    pathological relationship as the basis of the actual behavior of the
    input to embedded_H we are comparing apples to lemons and rejecting the >>>> apples because lemons are too sour.


    Maybe, but the question is asking for the lemons that the pure
    simulator gives, not the apples that you H gives.

    H is just doing the wrong thing.

    Your failure to see that just shows how blind you are to the actual
    truth of the system.

    H MUST answer about the behavior of the actual machine to be a Halt
    Decider, since that is what the mapping a Halt Decider is supposed to
    answer is based on.


    When a simulating halt decider or even a plain UTM examines the behavior
    of its input and the SHD or UTM has a pathological relationship to its
    input then when another SHD or UTM not having a pathological
    relationship to this input is an incorrect proxy for the actual behavior
    of this actual input to the original SHD or UTM.

    Nope. If an input has your "pathological" relationship to a UTM, then
    YES, the UTM will generate an infinite behavior, but so does the machine itself, and ANY UTM will see that same infinite behavior.


    The point is that that behavior of the input to embedded_H must be
    measured relative to the pathological relationship or it is not
    measuring the actual behavior of the actual input.

    I know that this is totally obvious thus I had to conclude that anyone
    denying it must be a liar that is only playing head games for sadistic pleasure.

    I did not take into account the power of group think that got at least
    100 million Americans to believe the election fraud changed the outcome
    of the 2020 election even though there is zero evidence of this
    anywhere. Even a huge cash prize offered by the Lt. governor of Texas
    only turned up one Republican that cheated.

    Only during the 2022 election did it look like this was starting to turn
    around a little bit.

    The problem is that you SHD is NOT a UTM, and thus the fact that it
    aborts its simulation and returns an answer changes the behavior of the machine that USED it (compared to a UTM), and thus to be "correct", the
    SHD needs to take that into account.


    I used to think that you were simply lying to play head games, I no
    longer believe this. Now I believe that you are ensnared by group-think.


    Nope, YOU are the one ensnared in your own fantasy world of lies.


    Group-think is the way that 40% of the electorate could honestly believe
    that significant voter fraud changed the outcome of the 2020 election
    even though there has very persistently been zero evidence of this.
    https://www.psychologytoday.com/us/basics/groupthink

    And you fantasy world is why you think that a Halt Decider, which is
    DEFINIED that H(D,D) needs to return the answer "Halting" if D(D) Halts,
    is correct to give the answer non-halting even though D(D) Ha;ts.

    You are just beliving your own lies.


    Hopefully they will not believe that Fox news paid $787 million to trick
    people into believing that there was no voter fraud.

    No, they are paying $787 million BECAUSE they tried to gain views by
    telling them the lies they wanted to hear.


    Yes, but even now 30% of the electorate may still believe the lies.

    At least they KNEW they were lying, but didn't care, and had to pay the price.

    You don't seem to understand that you are lying just as bad as they were.


    I am absolutely not lying. Truth is the most important thing to me,
    even more important than love.

    All of this work is aimed at formalizing the notion of truth because the
    HP, LP, IT and Tarski's Undefinability theorem are all instances of the
    same Olcott(2004) pathological self-reference error.


    Maybe they will believe that tiny space aliens living in the heads of
    Fox leadership took control of their brains and forced them to pay.

    The actual behavior of the actual input is correctly determined by an
    embedded UTM that has been adapted to watch the behavior of its
    simulation of its input and match any non-halting behavior patterns.


    But embedded_H isn't "embedded_UTM", so you are just living a lie.


    embedded_H is embedded_UTM for the first N steps even when these N steps include 10,000 recursive simulations.

    After 10,000 recursive simulations even an idiot can infer that more
    will not cause ⟨Ĥ⟩ simulated by embedded_H to reach its own final state
    of ⟨Ĥ.qn⟩ in any finite number of steps.

    You and I both know that mathematical induction proves this in far less
    than 10,000 recursive simulations. Why you deny it when you should know
    this is true is beyond me.
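
    The induction being appealed to here has roughly the following shape
    (stated in LaTeX; this only restates the claim, since whether its
    premises actually hold is exactly what the thread disputes):

    Let $P(k)$ be the claim that after $k$ nested simulations the
    simulated $\langle \hat{H} \rangle$ has not reached
    $\langle \hat{H}.qn \rangle$.

    \[
      P(1) \;\wedge\; \forall k\,\bigl(P(k) \rightarrow P(k+1)\bigr)
      \;\Longrightarrow\; \forall k\, P(k)
    \]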

    You are just to ignorant to understand that a UTM can't be modified to
    stop its simulation and still be a UTM.

    That is like saying that all racing cars are street legal, because they
    are based on the design of cars that were street legal.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Fri Apr 21 22:37:44 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/21/23 10:10 PM, olcott wrote:
    On 4/21/2023 8:02 PM, Richard Damon wrote:
    On 4/21/23 8:51 PM, olcott wrote:
    On 4/21/2023 6:35 PM, Richard Damon wrote:
    On 4/21/23 7:22 PM, olcott wrote:
    On 4/21/2023 5:36 PM, Richard Damon wrote:
    On 4/21/23 11:35 AM, olcott wrote:
    On 4/21/2023 6:18 AM, Richard Damon wrote:

    So, you don't understand the nature of simulation.



    MIT Professor Michael Sipser has agreed that the following
    verbatim paragraph is correct:

    a) If simulating halt decider H correctly simulates its input D
    until H
    correctly determines that its simulated D would never stop running >>>>>>> unless aborted then

    (b) H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.

    Thus it is established that:

    The behavior of D correctly simulated by H
    is the correct behavior to measure.

    *IF* H correctly simulates per the definition of a UTM

    It doesn't, so it isn't.


    The behavior of ⟨Ĥ⟩ correctly simulated by embedded_H
    is the correct behavior to measure.


    Since the simulation done by embedded_H does not meet the
    definition of "correct simulation" that Professer Sipser uses,
    your arguement is VOID.


    You are just PROVING your stupidity.

    Always with the strawman error.
    I am saying that when ⟨Ĥ⟩ is correctly simulated by embedded_H it >>>>> cannot
    possibly reach its own simulated final state of ⟨Ĥ.qn⟩ in any finite >>>>> number of steps because Ĥ is defined to have a pathological
    relationship
    to embedded_H.

    Since H never "Correctly Simulates" the input per the definition
    that allows using a simulation instead of the actual machine's
    behavior, YOUR method is the STRAWMAN.




    When we examine the behavior of ⟨Ĥ⟩ simulated by a pure UTM or even
    another simulating halt decider such as embedded_H1 having no such
    pathological relationship as the basis of the actual behavior of the
    input to embedded_H we are comparing apples to lemons and rejecting the
    apples because lemons are too sour.


    Maybe, but the question is asking for the lemons that the pure
    simulator gives, not the apples that your H gives.

    H is just doing the wrong thing.

    Your failure to see that just shows how blind you are to the actual
    truth of the system.

    H MUST answer about the behavior of the actual machine to be a Halt
    Decider, since that is what the mapping a Halt Decider is supposed
    to answer is based on.


    When a simulating halt decider (SHD) or even a plain UTM examines the
    behavior of its input and has a pathological relationship to that input,
    then another SHD or UTM that has no such pathological relationship to
    this input is an incorrect proxy for the actual behavior of this actual
    input to the original SHD or UTM.

    Nope. If an input has your "pathological" relationship to a UTM, then
    YES, the UTM will generate an infinite behavior, but so does the
    machine itself, and ANY UTM will see that same infinite behavior.


    The point is that the behavior of the input to embedded_H must be
    measured relative to the pathological relationship or it is not
    measuring the actual behavior of the actual input.


    No, the behavior measured must be the DEFINED behavior, which IS the
    behavior of the ACTUAL MACHINE.

    That Halts, so H gets the wrong answer.


    I know that this is totally obvious, thus I had to conclude that anyone
    denying it must be a liar who is only playing head games for sadistic
    pleasure.

    No, the fact that you think what you say shows that you are a TOTAL IDIOT.




    I did not take into account the power of group-think that got at least
    100 million Americans to believe that election fraud changed the outcome
    of the 2020 election even though there is zero evidence of this
    anywhere. Even a huge cash prize offered by the Lt. governor of Texas
    only turned up one Republican that cheated.

    Nope, you just don't understand the truth. You aren't ready for the truth,
    because it shows that you have been wrong, and your fragile ego can't
    handle that.


    Only during the 2022 election did it look like this was starting to turn around a little bit.

    You have been wrong a lot longer than that.



    The problem is that your SHD is NOT a UTM, and thus the fact that it
    aborts its simulation and returns an answer changes the behavior of
    the machine that USED it (compared to a UTM), and thus to be
    "correct", the SHD needs to take that into account.


    I used to think that you were simply lying to play head games, I no
    longer believe this. Now I believe that you are ensnared by group-think.


    Nope, YOU are the one ensnared in your own fantasy world of lies.


    Group-think is the way that 40% of the electorate could honestly believe
    that significant voter fraud changed the outcome of the 2020 election
    even though there has very persistently been zero evidence of this.
    https://www.psychologytoday.com/us/basics/groupthink

    And your fantasy world is why you think that a Halt Decider, which is
    DEFINED such that H(D,D) needs to return the answer "Halting" if D(D)
    Halts, is correct to give the answer non-halting even though D(D) Halts.
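    [Editorial note: for readers without the context, this is the standard
    textbook construction of D that the exchange keeps referring to, sketched
    in C. The stand-in H below just returns a fixed verdict; the point is
    that whatever fixed verdict H(D,D) returns, D(D) does the opposite, so
    that verdict is wrong. This is not the actual H or D from the paper, and
    it uses the old-style "void (*x)()" parameter lists the thread's own
    snippets use (pre-C23 semantics).]

    #include <stdio.h>

    static int H(void (*p)(), void (*i)());   /* the claimed halt decider */

    /* D does the opposite of whatever H predicts about D(D). */
    static void D(void (*p)())
    {
        if (H(p, p))
            for (;;) ;                        /* H said "halts"  -> loop forever */
        /* H said "does not halt" -> simply return (halt) */
    }

    static int H(void (*p)(), void (*i)())
    {
        (void)p; (void)i;
        return 0;                             /* stand-in verdict: "non-halting" */
    }

    int main(void)
    {
        D(D);                                 /* returns, because H reported 0 */
        puts("D(D) halted, so the verdict H(D,D) == 0 was wrong");
        return 0;
    }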

    You are just believing your own lies.


    Hopefully they will not believe that Fox News paid $787 million to trick
    people into believing that there was no voter fraud.

    No, they are paying $787 million BECAUSE they tried to gain views by
    telling them the lies they wanted to hear.


    Yes, but even now 30% of the electorate may still believe the lies.

    So, you seem to believe in 100% of your lies.

    Yes, there is a portion of the population that fails to see what is
    true, because, like you, they think their own ideas are more important
    than what actually is true. As was philosophized, they ignore the truth,
    but listen to what their itching ears want to hear. That fits you to the
    T, as you won't see the errors that are pointed out to you, and you make
    up more lies to try to hide your errors.


    At least they KNEW they were lying, but didn't care, and had to pay
    the price.

    You don't seem to understand that you are lying just as bad as they were.


    I am absolutely not lying. Truth is the most important thing to me, even
    much more important than love.

    Then why do you lie so much, or are you just that stupid?

    It is clear you just don't know what you are talking about and are just
    making stuff up.

    It seems you have lied so much that you have convinced yourself of your
    lies, and can no longer bear to let the truth in, so you just deny
    anything that goes against your lies.

    You have killed your own mind.



    All of this work is aimed at formalizing the notion of truth because the
    HP, LP, IT and Tarski's Undefinability theorem are all instances of the
    same Olcott(2004) pathological self-reference error.


    So, maybe you need to realize that Truth has to match what is actually
    true, and you need to work with the definitions that exist, not the
    alternate ideas you make up.

    A Halt Decider is DEFINED that

    H(M,w) needs to answer about the behavior of M(w).
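    [Editorial note: written out in the standard textbook form (as in Linz or
    Sipser), the mapping being appealed to here is

        H(⟨M⟩, w) = accept    if machine M halts on input w
        H(⟨M⟩, w) = reject    if machine M does not halt on input w

    i.e. the required answer is fixed by the behavior of M(w) itself.]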

    You don't seem to understand that, and it seems to even be a blind spot,
    as you like dropping that part when you quote what H is supposed to do.

    You seem to see "see" self-references where there are not actual self-references, but the effect of the "self-reference" is built from
    simpler components. It seems you don't even understand what a
    "Self-Reference" actually is, maybe even what a "reference" actually is.

    For the halt decider, P is built on a COPY of the claimed decider and
    given a representation of that resultant machine. Not a single reference
    in sight.



    Maybe they will believe that tiny space aliens living in the heads of
    Fox leadership took control of their brains and forced them to pay.

    The actual behavior of the actual input is correctly determined by an
    embedded UTM that has been adapted to watch the behavior of its
    simulation of its input and match any non-halting behavior patterns.


    But embedded_H isn't "embedded_UTM", so you are just living a lie.


    embedded_H is embedded_UTM for the first N steps even when these N steps include 10,000 recursive simulations.

    Nope. Just your LIES. You clearly don't understand what a UTM is.


    After 10,000 recursive simulations even an idiot can infer that more
    will not cause ⟨Ĥ⟩ simulated by embedded_H to reach its own final state of ⟨Ĥ.qn⟩ in any finite number of steps.

    The fact that embedded_H aborts after 10,000 recursive simulations
    means that Ĥ will halt after 10,001.

    Your problem is that your logic only works if you can find an N that is
    bigger than N+1.


    You and I both know that mathematical induction proves this in far less
    than 10,000 recursive simulations. Why you deny it when you should know
    this is true is beyond me.

    Nope, you are just proving that you don't even know what mathematical
    induction means.

    You are just too stupid.

    You are just proving you are a liar.

    You have met someone who calls you out on that, and you don't have answers.

    You have just killed your reputation and any hope that someone might
    look at your ideas about truth, as clearly you don't understand what
    truth is.


    You are just too ignorant to understand that a UTM can't be modified to
    stop its simulation and still be a UTM.

    That is like saying that all racing cars are street legal, because
    they are based on the design of cars that were street legal.


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mr Flibble@21:1/5 to olcott on Sat Apr 22 05:46:56 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 22/04/2023 12:18 am, olcott wrote:
    On 4/21/2023 5:34 PM, Richard Damon wrote:
    On 4/21/23 11:16 AM, olcott wrote:
    On 4/21/2023 7:17 AM, Mr Flibble wrote:
    On 20/04/2023 8:20 pm, olcott wrote:
    On 4/20/2023 2:08 PM, Mr Flibble wrote:
    On 20/04/2023 6:49 pm, olcott wrote:
    On 4/20/2023 12:32 PM, Mr Flibble wrote:
    On 19/04/2023 11:52 pm, olcott wrote:
    On 4/19/2023 4:14 PM, Mr Flibble wrote:
    On 19/04/2023 10:10 pm, olcott wrote:
    On 4/19/2023 3:32 PM, Mr Flibble wrote:
    On 19/04/2023 8:39 pm, olcott wrote:
    On 4/19/2023 1:47 PM, Mr Flibble wrote:
    On 18/04/2023 11:39 pm, olcott wrote:
    On 4/18/2023 4:55 PM, Mr Flibble wrote:
    On 18/04/2023 4:58 pm, olcott wrote:
    On 4/18/2023 6:32 AM, Richard Damon wrote:
    On 4/18/23 1:00 AM, olcott wrote:
    [...]

    But it isn't "Correctly Simulated by H"
    You agreed that the first N steps are correctly simulated.
    It turns out that the non-halting behavior pattern is correctly
    recognized in the first N steps.

    Your assumption that a program that calls H is non-halting is erroneous:


    My new paper anchors its ideas in actual Turing machines so it is
    unequivocal. The first two pages are only about the Linz Turing
    machine based proof.

    The H/D material is now on a single page and all reference to the x86
    language has been stripped and replaced with analysis entirely in C.

    With this new paper even Richard admits that the first N steps of a
    UTM-based simulation by a simulating halt decider are necessarily the
    actual behavior of these N steps.

    *Simulating (partial) Halt Deciders Defeat the Halting >>>>>>>>>>>>>>> Problem Proofs*
    https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs

    void Px(void (*x)())
    {
         (void) H(x, x);
         return;
    }

    Px halts (it discards the result that H returns); your decider thinks
    that Px is non-halting which is an obvious error due to a design flaw
    in the architecture of your decider.  Only the Flibble Signaling
    Simulating Halt Decider (SSHD) correctly handles this case.
    Nope. For H to be a halt decider it must return a halt decision to its
    caller in finite time.

    Although H must always return to some caller, H is not allowed to
    return to any caller that essentially calls H in infinite recursion.
    The Flibble Signaling Simulating Halt Decider (SSHD) does not have any
    infinite recursion thereby proving that

    It overrode that behavior that was specified by the machine code for Px.

    Nope. Your SHD is not a halt decider as

    I was not even talking about my SHD, I was talking about how your
    program does its simulation incorrectly.

    My SSHD does not do its simulation incorrectly: it does its simulation
    just like I have defined it, as evidenced by the fact that it returns a
    correct halting decision for Px; something your broken SHD gets wrong.


    In order for you to have Px simulated by H terminate normally you must
    change the behavior of Px away from the behavior that its x86 code
    specifies.

    Your "x86 code" has nothing to do with how my halt decider works;
    I am using an entirely different simulation method, one that
    actually works.


    void Px(void (*x)())
    {
       (void) H(x, x);
       return;
    }

    Px correctly simulated by H cannot possibly reach past its
    machine address of: [00001b3d].

    _Px()
    [00001b32] 55         push ebp
    [00001b33] 8bec       mov ebp,esp
    [00001b35] 8b4508     mov eax,[ebp+08]
    [00001b38] 50         push eax      // push address of Px
    [00001b39] 8b4d08     mov ecx,[ebp+08]
    [00001b3c] 51         push ecx      // push address of Px
    [00001b3d] e800faffff call 00001542 // Call H
    [00001b42] 83c408     add esp,+08
    [00001b45] 5d         pop ebp
    [00001b46] c3         ret
    Size in bytes:(0021) [00001b46]

    What you are doing is the same as recognizing that
    _Infinite_Loop()
    never halts, forcing it to break out of its infinite loop and jump to
    its "ret" instruction

    _Infinite_Loop()
    [00001c62] 55         push ebp
    [00001c63] 8bec       mov ebp,esp
    [00001c65] ebfe       jmp 00001c65
    [00001c67] 5d         pop ebp
    [00001c68] c3         ret
    Size in bytes:(0007) [00001c68]
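    [Editorial note: the C source for _Infinite_Loop is not shown in this
    post; a plausible reconstruction that compiles to a self-jump like the
    "ebfe / jmp 00001c65" above is the following sketch -- an assumption,
    not the author's code.]

    void Infinite_Loop(void)
    {
    HERE:
        goto HERE;     /* jumps to itself, matching the self-jump in the listing */
    }

    int main(void)
    {
        /* Intentionally not calling Infinite_Loop(): a call would never return. */
        return 0;
    }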

    No I am not: there is no infinite loop in Px above; forking the
    simulation into two branches and returning a different halt
    decision to each branch is a perfectly valid SHD design; again a
    design, unlike yours, that actually works.

    If you say that Px correctly simulated by H ever reaches its own final
    "return" statement and halts, you are incorrect.
    Px halts if H is (or is part of) a genuine halt decider.

    The simulated Px only halts if it reaches its own final state in a
    finite number of steps of correct simulation. It can't possibly do this.

    So, you're saying that a UTM doesn't do a "Correct Simulation"?


    Always with the strawman error.
    I am saying that when Px is correctly simulated by H it cannot possibly
    reach its own simulated "return" instruction in any finite number of
    steps because Px is defined to have a pathological relationship to H.

    When we examine the behavior of Px simulated by a pure simulator or even
    another simulating halt decider such as H1 having no such pathological
    relationship as the basis of the actual behavior of the input to H we
    are comparing apples to lemons and rejecting the apples because lemons
    are too sour.

    When Px is correctly simulated, Px will terminate (halt), as there is no
    pathological relationship between Px and H, because Px discards the
    result of H rather than trying to do the opposite of H's result.
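    [Editorial note: Flibble's point above can be checked concretely. In the
    sketch below the stand-in H does no simulation at all and just returns a
    fixed value in finite time; because Px discards that value, Px(Px) falls
    through to its "return" and halts no matter which verdict H gives. This
    only illustrates that one point, it is not the thread's actual H, and it
    uses the pre-C23 "void (*x)()" parameter style of the thread's snippets.]

    #include <stdio.h>

    static int H(void (*x)(), void (*y)())
    {
        (void)x; (void)y;
        return 0;                  /* any verdict returned in finite time will do */
    }

    void Px(void (*x)())
    {
        (void) H(x, x);            /* result discarded, exactly as in the thread */
        return;
    }

    int main(void)
    {
        Px(Px);
        puts("Px(Px) returned");   /* reached: Px ignores H's verdict */
        return 0;
    }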

    /Flibble

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Mon Apr 24 09:36:07 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/21/2023 9:37 PM, Richard Damon wrote:
    On 4/21/23 10:10 PM, olcott wrote:
    On 4/21/2023 8:02 PM, Richard Damon wrote:
    On 4/21/23 8:51 PM, olcott wrote:
    On 4/21/2023 6:35 PM, Richard Damon wrote:
    On 4/21/23 7:22 PM, olcott wrote:
    On 4/21/2023 5:36 PM, Richard Damon wrote:
    On 4/21/23 11:35 AM, olcott wrote:
    On 4/21/2023 6:18 AM, Richard Damon wrote:

    [...]


    You know that a halt decider must compute the mapping from its actual
    input based on the actual specified behavior of this input and then
    contradict yourself insisting that the actual behavior of this actual
    input is the wrong behavior to measure.


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Mon Apr 24 19:35:40 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/24/23 10:36 AM, olcott wrote:
    On 4/21/2023 9:37 PM, Richard Damon wrote:
    On 4/21/23 10:10 PM, olcott wrote:
    On 4/21/2023 8:02 PM, Richard Damon wrote:
    On 4/21/23 8:51 PM, olcott wrote:
    On 4/21/2023 6:35 PM, Richard Damon wrote:
    On 4/21/23 7:22 PM, olcott wrote:
    On 4/21/2023 5:36 PM, Richard Damon wrote:
    On 4/21/23 11:35 AM, olcott wrote:
    On 4/21/2023 6:18 AM, Richard Damon wrote:

    [...]


    You know that a halt decider must compute the mapping from its actual
    input based on the actual specified behavior of this input and then
    contradict yourself insisting that the actual behavior of this actual
    input is the wrong behavior to measure.



    Right, and the "ACtual Specified Behavior" of the input is DEFINED to be
    the ACTUAL BEHAVIOR of the machine that input represents, which will be identical to the actual behavior of that input processed by an ACTUAL
    UTM (which, by definition don't stop until they reach a final step).

    By THAT definition, D(D) Halts since H(D,D) returns non-halting, and
    thus is wrong.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Mon Apr 24 23:29:24 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/24/2023 6:35 PM, Richard Damon wrote:
    On 4/24/23 10:36 AM, olcott wrote:
    On 4/21/2023 9:37 PM, Richard Damon wrote:
    On 4/21/23 10:10 PM, olcott wrote:
    On 4/21/2023 8:02 PM, Richard Damon wrote:
    On 4/21/23 8:51 PM, olcott wrote:
    On 4/21/2023 6:35 PM, Richard Damon wrote:
    On 4/21/23 7:22 PM, olcott wrote:
    On 4/21/2023 5:36 PM, Richard Damon wrote:
    On 4/21/23 11:35 AM, olcott wrote:
    On 4/21/2023 6:18 AM, Richard Damon wrote:

    [...]


    You know that a halt decider must compute the mapping from its actual
    input based on the actual specified behavior of this input and then
    contradict yourself insisting that the actual behavior of this actual
    input is the wrong behavior to measure.



    Right, and the "ACtual Specified Behavior" of the input is DEFINED to be
    the ACTUAL BEHAVIOR of the machine that input represents,

    *When you say that P must be ~P instead of P we know that you are wacky*

    The actual behavior of ⟨Ĥ⟩ correctly simulated by embedded_H is
    necessarily the behavior of the first N steps of ⟨Ĥ⟩ correctly simulated
    by embedded_H. From these N steps we can prove by mathematical induction
    that ⟨Ĥ⟩ correctly simulated by embedded_H cannot possibly reach its own
    final state of ⟨Ĥ.qn⟩ in any finite number of steps.

    which will be
    identical to the actual behavior of that input processed by an ACTUAL
    UTM (which, by definition, doesn't stop until it reaches a final step).


    The verified facts prove otherwise; people who persistently deny
    verified facts may be in danger of Hell fire, depending on their
    motives.

    My motive is to mathematically formalize the notion of True(L,x), thus
    refuting Tarski and Gödel.

    We really need this now because AI systems are hallucinating: https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)

    By THAT definition, D(D) Halts since H(D,D) returns non-halting, and
    thus is wrong.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Tue Apr 25 07:56:28 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/25/23 12:29 AM, olcott wrote:
    On 4/24/2023 6:35 PM, Richard Damon wrote:
    On 4/24/23 10:36 AM, olcott wrote:
    On 4/21/2023 9:37 PM, Richard Damon wrote:
    On 4/21/23 10:10 PM, olcott wrote:
    On 4/21/2023 8:02 PM, Richard Damon wrote:
    On 4/21/23 8:51 PM, olcott wrote:
    On 4/21/2023 6:35 PM, Richard Damon wrote:
    On 4/21/23 7:22 PM, olcott wrote:
    On 4/21/2023 5:36 PM, Richard Damon wrote:
    On 4/21/23 11:35 AM, olcott wrote:
    On 4/21/2023 6:18 AM, Richard Damon wrote:

    So, you don't understand the nature of simulation.



    MIT Professor Michael Sipser has agreed that the following >>>>>>>>>>> verbatim paragraph is correct:

    a) If simulating halt decider H correctly simulates its input >>>>>>>>>>> D until H
    correctly determines that its simulated D would never stop >>>>>>>>>>> running
    unless aborted then

    (b) H can abort its simulation of D and correctly report that D >>>>>>>>>>> specifies a non-halting sequence of configurations.

    Thus it is established that:

    The behavior of D correctly simulated by H
    is the correct behavior to measure.

    *IF* H correctly simulates per the definition of a UTM

    It doesn't, so it isn't.


    The behavior of ⟨Ĥ⟩ correctly simulated by embedded_H >>>>>>>>>>> is the correct behavior to measure.


    Since the simulation done by embedded_H does not meet the
    definition of "correct simulation" that Professer Sipser uses, >>>>>>>>>> your arguement is VOID.


    You are just PROVING your stupidity.

    Always with the strawman error.
    I am saying that when ⟨Ĥ⟩ is correctly simulated by embedded_H >>>>>>>>> it cannot
    possibly reach its own simulated final state of ⟨Ĥ.qn⟩ in any >>>>>>>>> finite
    number of steps because Ĥ is defined to have a pathological >>>>>>>>> relationship
    to embedded_H.

    Since H never "Correctly Simulates" the input per the definition >>>>>>>> that allows using a simulation instead of the actual machines
    behavior, YOUR method is the STRAWMAN.




    When we examine the behavior of ⟨Ĥ⟩ simulated by a pure UTM or >>>>>>>>> even
    another simulating halt decider such as embedded_H1 having no such >>>>>>>>> pathological relationship as the basis of the actual behavior >>>>>>>>> of the
    input to embedded_H we are comparing apples to lemons and
    rejecting the
    apples because lemons are too sour.


    Maybe, but the question is asking for the lemons that the pure >>>>>>>> simulator gives, not the apples that you H gives.

    H is just doing the wrong thing.

    Your failure to see that just shows how blind you are to the
    actual truth of the system.

    H MUST answer about the behavior of the actual machine to be a >>>>>>>> Halt Decider, since that is what the mapping a Halt Decider is >>>>>>>> supposed to answer is based on.


    When a simulating halt decider or even a plain UTM examines the
    behavior
    of its input and the SHD or UTM has a pathological relationship
    to its
    input then when another SHD or UTM not having a pathological
    relationship to this input is an incorrect proxy for the actual
    behavior
    of this actual input to the original SHD or UTM.

    Nope. If an input has your "pathological" relationship to a UTM,
    then YES, the UTM will generate an infinite behavior, but so does
    the machine itself, and ANY UTM will see that same infinite behavior. >>>>>>

    The point is that that behavior of the input to embedded_H must be
    measured relative to the pathological relationship or it is not
    measuring the actual behavior of the actual input.


    No, the behavior measured must be the DEFINED behavior, which IS the
    behavior of the ACTUAL MACHINE.

    That Halts, so H gets the wrong answer.


    I know that this is totally obvious thus I had to conclude that anyone >>>>> denying it must be a liar that is only playing head games for sadistic >>>>> pleasure.

    No, the fact that you think what you say shows that you are a TOTAL
    IDIOT.




    I did not take into account the power of group think that got at least >>>>> 100 million Americans to believe the election fraud changed the
    outcome
    of the 2020 election even though there is zero evidence of this
    anywhere. Even a huge cash prize offered by the Lt. governor of Texas >>>>> only turned up one Republican that cheated.

    Nope, you just don't understand the truth. You are ready for the
    truth, because it shows that you have been wrong, and you fragile
    ego can't handle that.


    Only during the 2022 election did it look like this was starting to
    turn
    around a little bit.

    You have been wrong a lot longer than that.



    The problem is that you SHD is NOT a UTM, and thus the fact that
    it aborts its simulation and returns an answer changes the
    behavior of the machine that USED it (compared to a UTM), and thus >>>>>> to be "correct", the SHD needs to take that into account.


    I used to think that you were simply lying to play head games, I no >>>>>>> longer believe this. Now I believe that you are ensnared by
    group-think.


    Nope, YOU are the one ensnared in your own fantasy world of lies.


    Group-think is the way that 40% of the electorate could honestly >>>>>>> believe
    that significant voter fraud changed the outcome of the 2020
    election
    even though there has very persistently been zero evidence of this. >>>>>>> https://www.psychologytoday.com/us/basics/groupthink

    And you fantasy world is why you think that a Halt Decider, which
    is DEFINIED that H(D,D) needs to return the answer "Halting" if
    D(D) Halts, is correct to give the answer non-halting even though
    D(D) Ha;ts.

    You are just beliving your own lies.


    Hopefully they will not believe that Fox news paid $787 million
    to trick
    people into believing that there was no voter fraud.

    No, they are paying $787 million BECAUSE they tried to gain views
    by telling them the lies they wanted to hear.


    Yes, but even now 30% of the electorate may still believe the lies.

    So, you seem to beleive in 100% of your lies.

    Yes, there is a portion of the population that fails to see what is
    true, because, like you, they think their own ideas are more
    important that what actually is true. As was philosophized, they
    ignore the truth, but listen to what their itching ears what to
    hear. That fits you to the T, as you won't see the errors that are
    pointed out to you, and you make up more lies to try to hide your
    errors.


    At least they KNEW they were lying, but didn't care, and had to
    pay the price.

    You don't seem to understand that you are lying just as bad as
    they were.


    I am absolutely not lying Truth is the most important thing to me even >>>>> much more important than love.

    Then why do you lie so much, or are you just that stupid?

    It is clear you just don't know what you are talking about and are
    just making stuff up.

    It seems you have lied so much that you have convinced yourself of
    your lies, and can no longer bear to let the truth in, so you just
    deny anything that goes against your lies.

    You have killed your own mind.



    All of this work is aimed at formalizing the notion of truth because
    the HP (Halting Problem), LP (Liar Paradox), IT (Incompleteness
    Theorem) and Tarski's Undefinability theorem are all instances of the
    same Olcott(2004) pathological self-reference error.


    So, maybe you need to realize that Truth has to match what is
    actually true, and you need to work with the definitions that exist,
    not the alternate ideas you make up.

    A Halt Decider is DEFINED that

    H(M,w) needs to answer about the behavior of M(w).

    You don't seem to understand that, and it seems to even be a blind
    spot, as you like dropping that part when you quote what H is
    supposed to do.

    You seem to "see" self-references where there are no actual
    self-references, but the effect of the "self-reference" is built
    from simpler components. It seems you don't even understand what a
    "Self-Reference" actually is, maybe even what a "reference" actually
    is.

    For the halt decider, P is built on a COPY of the claimed decider
    and given a representation of that resultant machine. Not a single
    reference in sight.



    Maybe they will believe that tiny space aliens living in the heads of
    Fox leadership took control of their brains and forced them to pay.

    The actual behavior of the actual input is correctly determined by an
    embedded UTM that has been adapted to watch the behavior of its
    simulation of its input and match any non-halting behavior patterns.

    But embedded_H isn't "embedded_UTM", so you are just living a lie.

    embedded_H is embedded_UTM for the first N steps even when these N
    steps include 10,000 recursive simulations.

    Nope. Just your LIES. You clearly don't understand what a UTM is.


    After 10,000 recursive simulations even an idiot can infer that more
    will not cause ⟨Ĥ⟩ simulated by embedded_H to reach its own final
    state of ⟨Ĥ.qn⟩ in any finite number of steps.

    The fact that embedded_H does 10,000 recursive simulations and then
    aborts means that H^ will halt after 10,001.

    Your problem is that your logic only works if you can find an N that
    is bigger than N+1.
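
    A minimal sketch of this N-versus-N+1 point, using a toy step-budget checker.
    The names are assumptions for illustration, and the "simulation" is faked by
    comparing a known step count against the budget rather than actually stepping
    a machine:

        def halts_after(k):
            """Build a toy program that halts after exactly k steps."""
            def prog():
                for _ in range(k):
                    pass
                return "halted"
            prog.steps = k           # expose the step count to the toy checker
            return prog

        def budget_checker(prog, budget):
            """Toy checker standing in for "watch at most `budget` steps":
            if the program has not halted by then, abort and report
            non-halting."""
            return "halting" if prog.steps <= budget else "non-halting"

        N = 10_000
        p = halts_after(N + 1)       # halts, but only at step N+1
        print(budget_checker(p, N))  # -> "non-halting": aborted one step early

    Whatever fixed N the checker uses, a program that halts at step N+1 gets the
    wrong verdict, which is the "N that is bigger than N+1" remark above.
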


    You and I both know that mathematical induction proves this in far
    less than 10,000 recursive simulations. Why you deny it when you
    should know this is true is beyond me.

    Nope, you are just proving that you don't even know what
    mathematical induction means.

    You are just too stupid.

    You are just proving you are a liar.


    You know that a halt decider must compute the mapping from its actual
    input based on the actual specified behavior of this input, and then
    you contradict yourself by insisting that the actual behavior of this
    actual input is the wrong behavior to measure.



    Right, and the "ACtual Specified Behavior" of the input is DEFINED to
    be the ACTUAL BEHAVIOR of the machine that input represents,

    *When you say that P must be ~P instead of P we know that you are wacky*

    What ~P?


    The actual behavior of ⟨Ĥ⟩ correctly simulated by embedded_H is
    necessarily the behavior of the first N steps of ⟨Ĥ⟩ correctly
    simulated by embedded_H. From these N steps we can prove by
    mathematical induction that ⟨Ĥ⟩ correctly simulated by embedded_H
    cannot possibly reach its own final state of ⟨Ĥ.qn⟩ in any finite
    number of steps.
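
    As a point of comparison, here is a toy sketch of the never-aborting case: a
    stand-in "pure simulator" applied to a program whose first act is to resubmit
    itself for simulation just keeps nesting deeper and never reaches any final
    state. The names are illustrative assumptions, and Python's recursion limit
    is lowered only to keep the demonstration finite:

        import sys

        def pure_sim(prog, arg, depth=0):
            """Toy stand-in for a never-aborting simulator: just run the
            program on its argument, with no step limit of any kind."""
            return prog(arg, depth)

        def H_hat(arg, depth):
            """Toy self-referential input: it immediately has the simulator
            simulate H_hat applied to itself, one level deeper."""
            return pure_sim(H_hat, arg, depth + 1)

        sys.setrecursionlimit(50)    # keep the demo finite
        try:
            pure_sim(H_hat, H_hat)
        except RecursionError:
            print("nesting grew without bound; no final state was reached")

    This only illustrates the non-aborting case; whether an aborting simulator
    may treat its first N steps as settling the question is exactly what the two
    posters dispute.
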

    But we don't care about the "first N steps of ⟨Ĥ⟩ correctly simulated";
    we care about the behavior of the actual machine Ĥ ⟨Ĥ⟩, or the actual
    FULL correct simulation of UTM ⟨Ĥ⟩ ⟨Ĥ⟩ [i.e. the input to H],

    which will be identical to the actual behavior of that input processed
    by an ACTUAL UTM (which, by definition, doesn't stop until it reaches a
    final step).


    The verified facts prove otherwise; people who persistently deny
    verified facts may be in danger of Hell fire, depending on their
    motives.

    Nope, the actual VERIFIED FACTS prove what I say.


    My motive is to mathematically formalize the notion of True(L,x) thus refuting Tarski and Gödel.

    Except that you don't even seem to understand your own terminology.


    We really need this now because AI systems are hallucinating: https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)

    By THAT definition, D(D) Halts since H(D,D) returns non-halting, and
    thus is wrong.


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Tue Apr 25 22:45:43 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/25/2023 6:56 AM, Richard Damon wrote:
    On 4/25/23 12:29 AM, olcott wrote:
    On 4/24/2023 6:35 PM, Richard Damon wrote:
    On 4/24/23 10:36 AM, olcott wrote:
    On 4/21/2023 9:37 PM, Richard Damon wrote:
    On 4/21/23 10:10 PM, olcott wrote:
    On 4/21/2023 8:02 PM, Richard Damon wrote:
    On 4/21/23 8:51 PM, olcott wrote:
    On 4/21/2023 6:35 PM, Richard Damon wrote:
    On 4/21/23 7:22 PM, olcott wrote:
    On 4/21/2023 5:36 PM, Richard Damon wrote:
    On 4/21/23 11:35 AM, olcott wrote:
    On 4/21/2023 6:18 AM, Richard Damon wrote:

    So, you don't understand the nature of simulation.



    MIT Professor Michael Sipser has agreed that the following verbatim
    paragraph is correct:

    (a) If simulating halt decider H correctly simulates its input D until H
    correctly determines that its simulated D would never stop running
    unless aborted then

    (b) H can abort its simulation of D and correctly report that D
    specifies a non-halting sequence of configurations.

    Thus it is established that:

    The behavior of D correctly simulated by H
    is the correct behavior to measure.

    *IF* H correctly simulates per the definition of a UTM

    It doesn't, so it isn't.


    The behavior of ⟨Ĥ⟩ correctly simulated by embedded_H
    is the correct behavior to measure.


    Since the simulation done by embedded_H does not meet the >>>>>>>>>>> definition of "correct simulation" that Professer Sipser >>>>>>>>>>> uses, your arguement is VOID.


    You are just PROVING your stupidity.

    Always with the strawman error.
    I am saying that when ⟨Ĥ⟩ is correctly simulated by embedded_H >>>>>>>>>> it cannot
    possibly reach its own simulated final state of ⟨Ĥ.qn⟩ in any >>>>>>>>>> finite
    number of steps because Ĥ is defined to have a pathological >>>>>>>>>> relationship
    to embedded_H.

    Since H never "Correctly Simulates" the input per the
    definition that allows using a simulation instead of the actual >>>>>>>>> machines behavior, YOUR method is the STRAWMAN.




    When we examine the behavior of ⟨Ĥ⟩ simulated by a pure UTM or >>>>>>>>>> even
    another simulating halt decider such as embedded_H1 having no >>>>>>>>>> such
    pathological relationship as the basis of the actual behavior >>>>>>>>>> of the
    input to embedded_H we are comparing apples to lemons and
    rejecting the
    apples because lemons are too sour.


    Maybe, but the question is asking for the lemons that the pure >>>>>>>>> simulator gives, not the apples that you H gives.

    H is just doing the wrong thing.

    Your failure to see that just shows how blind you are to the >>>>>>>>> actual truth of the system.

    H MUST answer about the behavior of the actual machine to be a >>>>>>>>> Halt Decider, since that is what the mapping a Halt Decider is >>>>>>>>> supposed to answer is based on.


    When a simulating halt decider or even a plain UTM examines the >>>>>>>> behavior
    of its input and the SHD or UTM has a pathological relationship >>>>>>>> to its
    input then when another SHD or UTM not having a pathological
    relationship to this input is an incorrect proxy for the actual >>>>>>>> behavior
    of this actual input to the original SHD or UTM.

    Nope. If an input has your "pathological" relationship to a UTM, >>>>>>> then YES, the UTM will generate an infinite behavior, but so does >>>>>>> the machine itself, and ANY UTM will see that same infinite
    behavior.


    The point is that that behavior of the input to embedded_H must be >>>>>> measured relative to the pathological relationship or it is not
    measuring the actual behavior of the actual input.


    No, the behavior measured must be the DEFINED behavior, which IS
    the behavior of the ACTUAL MACHINE.

    That Halts, so H gets the wrong answer.


    I know that this is totally obvious thus I had to conclude that
    anyone
    denying it must be a liar that is only playing head games for
    sadistic
    pleasure.

    No, the fact that you think what you say shows that you are a TOTAL
    IDIOT.




    I did not take into account the power of group think that got at
    least
    100 million Americans to believe the election fraud changed the
    outcome
    of the 2020 election even though there is zero evidence of this
    anywhere. Even a huge cash prize offered by the Lt. governor of Texas >>>>>> only turned up one Republican that cheated.

    Nope, you just don't understand the truth. You are ready for the
    truth, because it shows that you have been wrong, and you fragile
    ego can't handle that.


    Only during the 2022 election did it look like this was starting
    to turn
    around a little bit.

    You have been wrong a lot longer than that.



    The problem is that you SHD is NOT a UTM, and thus the fact that >>>>>>> it aborts its simulation and returns an answer changes the
    behavior of the machine that USED it (compared to a UTM), and
    thus to be "correct", the SHD needs to take that into account.


    I used to think that you were simply lying to play head games, I no >>>>>>>> longer believe this. Now I believe that you are ensnared by
    group-think.


    Nope, YOU are the one ensnared in your own fantasy world of lies. >>>>>>>

    Group-think is the way that 40% of the electorate could honestly >>>>>>>> believe
    that significant voter fraud changed the outcome of the 2020
    election
    even though there has very persistently been zero evidence of this. >>>>>>>> https://www.psychologytoday.com/us/basics/groupthink

    And you fantasy world is why you think that a Halt Decider, which >>>>>>> is DEFINIED that H(D,D) needs to return the answer "Halting" if
    D(D) Halts, is correct to give the answer non-halting even though >>>>>>> D(D) Ha;ts.

    You are just beliving your own lies.


    Hopefully they will not believe that Fox news paid $787 million >>>>>>>> to trick
    people into believing that there was no voter fraud.

    No, they are paying $787 million BECAUSE they tried to gain views >>>>>>> by telling them the lies they wanted to hear.


    Yes, but even now 30% of the electorate may still believe the lies. >>>>>
    So, you seem to beleive in 100% of your lies.

    Yes, there is a portion of the population that fails to see what is
    true, because, like you, they think their own ideas are more
    important that what actually is true. As was philosophized, they
    ignore the truth, but listen to what their itching ears what to
    hear. That fits you to the T, as you won't see the errors that are
    pointed out to you, and you make up more lies to try to hide your
    errors.


    At least they KNEW they were lying, but didn't care, and had to
    pay the price.

    You don't seem to understand that you are lying just as bad as
    they were.


    I am absolutely not lying Truth is the most important thing to me
    even
    much more important than love.

    THen why to you lie so much, or are you just that stupid.

    It is clear you just don't know what you are talking about and are
    just making stuff up.

    It seems you have lied so much that you have convinced yourself of
    your lies, and can no longer bear to let the truth in, so you just
    deny anything that goes against your lies.

    You have killed your own mind.



    All of this work is aimed at formalizing the notion of truth
    because the
    HP, LP, IT and Tarski's Undefinability theorem are all instances
    of the
    same Olcott(2004) pathological self-reference error.


    So, maybe you need to realize that Truth has to match what is
    actually true, and you need to work with the definitions that
    exist, not the alternate ideas you make up.

    A Halt Decider is DEFINED that

    H(M,w) needs to answer about the behavior of M(w).

    You don't see to understand that, and it seems to even be a blind
    spot, as you like dropping that part when you quote what H is
    supposed to do.

    You seem to see "see" self-references where there are not actual
    self-references, but the effect of the "self-reference" is built
    from simpler components. It seems you don't even understand what a
    "Self-Reference" actually is, maybe even what a "reference"
    actually is.

    For the halt decider, P is built on a COPY of the claimed decider
    and given a representation of that resultand machine. Not a single
    reference in sight.



    Maybe they will believe that tiny space aliens living in the
    heads of
    Fox leadership took control of their brains and forced them to pay. >>>>>>>>
    The actual behavior of the actual input is correctly determined >>>>>>>> by an
    embedded UTM that has been adapted to watch the behavior of its >>>>>>>> simulation of its input and match any non-halting behavior
    patterns.


    But embedded_H isn't "embedded_UTM", so you are just living a lie. >>>>>>>

    embedded_H is embedded_UTM for the first N steps even when these N >>>>>> steps
    include 10,000 recursive simulations.

    Nope. Just your LIES. You clearly don't understand what a UTM is.


    After 10,000 recursive simulations even an idiot can infer that more >>>>>> will not cause ⟨Ĥ⟩ simulated by embedded_H to reach its own final >>>>>> state
    of ⟨Ĥ.qn⟩ in any finite number of steps.

    The fact that if embedded_H does 10,000 recursive simulations and
    aborts means that H^ will halt after 10,001.

    Your propblem is you logic only works if you can find an N that is
    bigger than N+1


    You and I both know that mathematical induction proves this in far >>>>>> less
    than 10,000 recursive simulations. Why you deny it when you should >>>>>> know
    this is true is beyond me.

    Nope, you are just proving that you don't even know what
    mathematical induction means.

    You are just too stupid.

    You are just proving you are a liar.


    You know that a halt decider must compute the mapping from its actual
    input based on the actual specified behavior of this input and then
    contradict yourself insisting that the actual behavior of this actual
    input is the wrong behavior to measure.



    Right, and the "ACtual Specified Behavior" of the input is DEFINED to
    be the ACTUAL BEHAVIOR of the machine that input represents,

    *When you say that P must be ~P instead of P we know that you are wacky*

    What ~P


    The actual behavior of ⟨Ĥ⟩ correctly simulated by embedded_H is
    necessarily the behavior of the first N steps of ⟨Ĥ⟩ correctly simulated
    by embedded_H. From these N steps we can prove by mathematical induction
    that ⟨Ĥ⟩ correctly simulated by embedded_H cannot possibly reach it own >> final state of ⟨Ĥ.qn⟩ in any finite number of steps.

    But we don't care about the "First N steps of (Ĥ) correctly simulated",
    we care about the behavior of the actual machine Ĥ (Ĥ) or the actual
    FULL correct simulation of UTM (Ĥ) (Ĥ) [ie the input to H]

    The actual behavior of the input is the behavior of N steps correctly
    simulated by embedded_H because embedded_H remains a UTM until it aborts
    its simulation.

    That these N steps provide a sufficient mathematical induction proof
    that ⟨Ĥ⟩ correctly simulated by embedded_H cannot possibly reach its
    own final state of ⟨Ĥ.qn⟩ in any finite number of steps is the correct
    basis for the halt status decision by embedded_H.

    That no textbook ever noticed that the behavior under pathological
    self-reference (Olcott 2004) could possibly vary from the behavior when
    PSR does not exist is only because everyone rejected the notion of a
    simulation as any basis for a halt decider out-of-hand without review.

    For the whole history of the halting problem everyone simply assumed
    that the halt decider must provide a correct yes/no answer when no
    correct yes/no answer exists.

    No one ever noticed that the pathological input would be trapped in
    recursive simulation that never reaches any final state when this counter-example input is input to a simulating halt decider.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Wed Apr 26 08:07:29 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/25/23 11:45 PM, olcott wrote:
    On 4/25/2023 6:56 AM, Richard Damon wrote:
    On 4/25/23 12:29 AM, olcott wrote:
    On 4/24/2023 6:35 PM, Richard Damon wrote:
    On 4/24/23 10:36 AM, olcott wrote:
    On 4/21/2023 9:37 PM, Richard Damon wrote:
    On 4/21/23 10:10 PM, olcott wrote:
    On 4/21/2023 8:02 PM, Richard Damon wrote:
    On 4/21/23 8:51 PM, olcott wrote:
    On 4/21/2023 6:35 PM, Richard Damon wrote:
    On 4/21/23 7:22 PM, olcott wrote:
    On 4/21/2023 5:36 PM, Richard Damon wrote:
    On 4/21/23 11:35 AM, olcott wrote:
    On 4/21/2023 6:18 AM, Richard Damon wrote:

    So, you don't understand the nature of simulation. >>>>>>>>>>>>>>


    MIT Professor Michael Sipser has agreed that the following >>>>>>>>>>>>> verbatim paragraph is correct:

    a) If simulating halt decider H correctly simulates its >>>>>>>>>>>>> input D until H
    correctly determines that its simulated D would never stop >>>>>>>>>>>>> running
    unless aborted then

    (b) H can abort its simulation of D and correctly report >>>>>>>>>>>>> that D
    specifies a non-halting sequence of configurations.

    Thus it is established that:

    The behavior of D correctly simulated by H
    is the correct behavior to measure.

    *IF* H correctly simulates per the definition of a UTM >>>>>>>>>>>>
    It doesn't, so it isn't.


    The behavior of ⟨Ĥ⟩ correctly simulated by embedded_H >>>>>>>>>>>>> is the correct behavior to measure.


    Since the simulation done by embedded_H does not meet the >>>>>>>>>>>> definition of "correct simulation" that Professer Sipser >>>>>>>>>>>> uses, your arguement is VOID.


    You are just PROVING your stupidity.

    Always with the strawman error.
    I am saying that when ⟨Ĥ⟩ is correctly simulated by >>>>>>>>>>> embedded_H it cannot
    possibly reach its own simulated final state of ⟨Ĥ.qn⟩ in any >>>>>>>>>>> finite
    number of steps because Ĥ is defined to have a pathological >>>>>>>>>>> relationship
    to embedded_H.

    Since H never "Correctly Simulates" the input per the
    definition that allows using a simulation instead of the
    actual machines behavior, YOUR method is the STRAWMAN.




    When we examine the behavior of ⟨Ĥ⟩ simulated by a pure UTM >>>>>>>>>>> or even
    another simulating halt decider such as embedded_H1 having no >>>>>>>>>>> such
    pathological relationship as the basis of the actual behavior >>>>>>>>>>> of the
    input to embedded_H we are comparing apples to lemons and >>>>>>>>>>> rejecting the
    apples because lemons are too sour.


    Maybe, but the question is asking for the lemons that the pure >>>>>>>>>> simulator gives, not the apples that you H gives.

    H is just doing the wrong thing.

    Your failure to see that just shows how blind you are to the >>>>>>>>>> actual truth of the system.

    H MUST answer about the behavior of the actual machine to be a >>>>>>>>>> Halt Decider, since that is what the mapping a Halt Decider is >>>>>>>>>> supposed to answer is based on.


    When a simulating halt decider or even a plain UTM examines the >>>>>>>>> behavior
    of its input and the SHD or UTM has a pathological relationship >>>>>>>>> to its
    input then when another SHD or UTM not having a pathological >>>>>>>>> relationship to this input is an incorrect proxy for the actual >>>>>>>>> behavior
    of this actual input to the original SHD or UTM.

    Nope. If an input has your "pathological" relationship to a UTM, >>>>>>>> then YES, the UTM will generate an infinite behavior, but so
    does the machine itself, and ANY UTM will see that same infinite >>>>>>>> behavior.


    The point is that that behavior of the input to embedded_H must be >>>>>>> measured relative to the pathological relationship or it is not
    measuring the actual behavior of the actual input.


    No, the behavior measured must be the DEFINED behavior, which IS
    the behavior of the ACTUAL MACHINE.

    That Halts, so H gets the wrong answer.


    I know that this is totally obvious thus I had to conclude that
    anyone
    denying it must be a liar that is only playing head games for
    sadistic
    pleasure.

    No, the fact that you think what you say shows that you are a
    TOTAL IDIOT.




    I did not take into account the power of group think that got at >>>>>>> least
    100 million Americans to believe the election fraud changed the
    outcome
    of the 2020 election even though there is zero evidence of this
    anywhere. Even a huge cash prize offered by the Lt. governor of
    Texas
    only turned up one Republican that cheated.

    Nope, you just don't understand the truth. You are ready for the
    truth, because it shows that you have been wrong, and you fragile
    ego can't handle that.


    Only during the 2022 election did it look like this was starting >>>>>>> to turn
    around a little bit.

    You have been wrong a lot longer than that.



    The problem is that you SHD is NOT a UTM, and thus the fact that >>>>>>>> it aborts its simulation and returns an answer changes the
    behavior of the machine that USED it (compared to a UTM), and
    thus to be "correct", the SHD needs to take that into account. >>>>>>>>

    I used to think that you were simply lying to play head games, >>>>>>>>> I no
    longer believe this. Now I believe that you are ensnared by
    group-think.


    Nope, YOU are the one ensnared in your own fantasy world of lies. >>>>>>>>

    Group-think is the way that 40% of the electorate could
    honestly believe
    that significant voter fraud changed the outcome of the 2020 >>>>>>>>> election
    even though there has very persistently been zero evidence of >>>>>>>>> this.
    https://www.psychologytoday.com/us/basics/groupthink

    And you fantasy world is why you think that a Halt Decider,
    which is DEFINIED that H(D,D) needs to return the answer
    "Halting" if D(D) Halts, is correct to give the answer
    non-halting even though D(D) Ha;ts.

    You are just beliving your own lies.


    Hopefully they will not believe that Fox news paid $787 million >>>>>>>>> to trick
    people into believing that there was no voter fraud.

    No, they are paying $787 million BECAUSE they tried to gain
    views by telling them the lies they wanted to hear.


    Yes, but even now 30% of the electorate may still believe the lies. >>>>>>
    So, you seem to beleive in 100% of your lies.

    Yes, there is a portion of the population that fails to see what
    is true, because, like you, they think their own ideas are more
    important that what actually is true. As was philosophized, they
    ignore the truth, but listen to what their itching ears what to
    hear. That fits you to the T, as you won't see the errors that are >>>>>> pointed out to you, and you make up more lies to try to hide your
    errors.


    At least they KNEW they were lying, but didn't care, and had to >>>>>>>> pay the price.

    You don't seem to understand that you are lying just as bad as >>>>>>>> they were.


    I am absolutely not lying Truth is the most important thing to me >>>>>>> even
    much more important than love.

    THen why to you lie so much, or are you just that stupid.

    It is clear you just don't know what you are talking about and are >>>>>> just making stuff up.

    It seems you have lied so much that you have convinced yourself of >>>>>> your lies, and can no longer bear to let the truth in, so you just >>>>>> deny anything that goes against your lies.

    You have killed your own mind.



    All of this work is aimed at formalizing the notion of truth
    because the
    HP, LP, IT and Tarski's Undefinability theorem are all instances >>>>>>> of the
    same Olcott(2004) pathological self-reference error.


    So, maybe you need to realize that Truth has to match what is
    actually true, and you need to work with the definitions that
    exist, not the alternate ideas you make up.

    A Halt Decider is DEFINED that

    H(M,w) needs to answer about the behavior of M(w).

    You don't see to understand that, and it seems to even be a blind
    spot, as you like dropping that part when you quote what H is
    supposed to do.

    You seem to see "see" self-references where there are not actual
    self-references, but the effect of the "self-reference" is built
    from simpler components. It seems you don't even understand what a >>>>>> "Self-Reference" actually is, maybe even what a "reference"
    actually is.

    For the halt decider, P is built on a COPY of the claimed decider
    and given a representation of that resultand machine. Not a single >>>>>> reference in sight.



    Maybe they will believe that tiny space aliens living in the >>>>>>>>> heads of
    Fox leadership took control of their brains and forced them to >>>>>>>>> pay.

    The actual behavior of the actual input is correctly determined >>>>>>>>> by an
    embedded UTM that has been adapted to watch the behavior of its >>>>>>>>> simulation of its input and match any non-halting behavior
    patterns.


    But embedded_H isn't "embedded_UTM", so you are just living a lie. >>>>>>>>

    embedded_H is embedded_UTM for the first N steps even when these >>>>>>> N steps
    include 10,000 recursive simulations.

    Nope. Just your LIES. You clearly don't understand what a UTM is.


    After 10,000 recursive simulations even an idiot can infer that more >>>>>>> will not cause ⟨Ĥ⟩ simulated by embedded_H to reach its own final >>>>>>> state
    of ⟨Ĥ.qn⟩ in any finite number of steps.

    The fact that if embedded_H does 10,000 recursive simulations and
    aborts means that H^ will halt after 10,001.

    Your propblem is you logic only works if you can find an N that is >>>>>> bigger than N+1


    You and I both know that mathematical induction proves this in
    far less
    than 10,000 recursive simulations. Why you deny it when you
    should know
    this is true is beyond me.

    Nope, you are just proving that you don't even know what
    mathematical induction means.

    You are just too stupid.

    You are just proving you are a liar.


    You know that a halt decider must compute the mapping from its actual >>>>> input based on the actual specified behavior of this input and then
    contradict yourself insisting that the actual behavior of this actual >>>>> input is the wrong behavior to measure.



    Right, and the "ACtual Specified Behavior" of the input is DEFINED
    to be the ACTUAL BEHAVIOR of the machine that input represents,

    *When you say that P must be ~P instead of P we know that you are wacky*

    What ~P


    The actual behavior of ⟨Ĥ⟩ correctly simulated by embedded_H is
    necessarily the behavior of the first N steps of ⟨Ĥ⟩ correctly simulated
    by embedded_H. From these N steps we can prove by mathematical induction >>> that ⟨Ĥ⟩ correctly simulated by embedded_H cannot possibly reach it own
    final state of ⟨Ĥ.qn⟩ in any finite number of steps.

    But we don't care about the "First N steps of (Ĥ) correctly
    simulated", we care about the behavior of the actual machine Ĥ (Ĥ) or
    the actual FULL correct simulation of UTM (Ĥ) (Ĥ) [ie the input to H]

    The actual behavior of the input is the behavior of N steps correctly simulated by embedded_H because embedded_H remains a UTM until it aborts
    its simulation.


    ILLOGICAL STATEMENT.

    Something can't be "a UTM until" as a UTM is a full identity, and
    something is or isn't one.

    That is like saying you are immortal until you die.

    False premise means unsound logic.


    Actual Behavior of the input is DEFINED to be the behavior of the actual machine run on the actual input, which Halts. PERIOD.

    That these N steps provide a sufficient mathematical induction proof
    that ⟨Ĥ⟩ correctly simulated by embedded_H cannot possibly reach it own final state of ⟨Ĥ.qn⟩ in any finite number of steps is the correct basis for the halt status decision by embedded_H.

    Nope. Please show the ACTUAL "induction proof" of your claim.


    That no textbook ever noticed that the behavior under pathological self- reference(Olcott 2004) could possibly vary from behavior when PSR does
    not exist is only because everyone rejected to notion of a simulation as
    any basis for halt decider out-of-hand without review.

    Because you don't understand what actual behavior means, and that it
    can't change based on who is looking at it.

    For the whole history of the halting problem everyone simply assumed
    that the halt decider must provide a correct yes/no answer when no
    correct yes/no answer exists.

    Except that a correct answer does exist: it is whatever is the opposite
    of what the decider H gives. The fact that H can't give it doesn't mean
    it doesn't exist.
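
    Concretely, whichever verdict a particular H gives on its own pathological
    case, the opposite verdict is the correct classification of D(D), even though
    that H itself cannot produce it. A small sketch, reusing the same illustrative
    stub names as before (not anyone's actual decider):

        def H(p, x):
            """Stub claimed decider; here it answers False ("non-halting")."""
            return False

        def D(x):
            if H(x, x):
                while True:          # would loop if H had said "halts"
                    pass
            return True              # halts, because H said "non-halting"

        verdict = H(D, D)            # what this H actually answers: False
        correct = not verdict        # the opposite verdict: True ("halting")
        print(verdict, "is H's verdict;", correct, "matches what D(D) does")

    Flip the stub to answer True and D's behavior flips with it, so the correct
    answer always exists; it is just never the one this particular H gives.
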


    No one ever noticed that the pathological input would be trapped in
    recursive simulation that never reaches any final state when this counter-example input is input to a simulating halt decider.


    Except that if the recursive simulation is never stopped, then the
    decider isn't a decider. So the decider MUST make up its mind, or be
    disqualified, and when it does answer it is always wrong.
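
    A toy way to see the "must make up its mind" point: a would-be decider that
    only ever keeps running its input never returns any verdict on a non-halting
    input, so it fails to be a decider at all, while one that cuts off at a budget
    does answer, but only by giving up. The names and the generator-of-steps model
    are assumptions for illustration:

        import itertools

        def spin_steps():
            """A non-halting input modelled as an endless stream of steps."""
            while True:
                yield "step"

        def pure_runner(steps):
            """Just execute the input; on a non-halting input this loop never
            ends, so this function never returns any verdict."""
            for _ in steps:
                pass
            return "halting"

        def aborting_decider(steps, budget=1000):
            """Answers in finite time by cutting off after `budget` steps."""
            for _ in itertools.islice(steps, budget):
                pass
            return "non-halting (guessed after giving up at the budget)"

        print(aborting_decider(spin_steps()))
        # pure_runner(spin_steps()) would never return, so it never answers.
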

    YOU are just showing you don't understand how programs work.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Wed Apr 26 21:34:35 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/26/2023 7:07 AM, Richard Damon wrote:
    On 4/25/23 11:45 PM, olcott wrote:
    On 4/25/2023 6:56 AM, Richard Damon wrote:
    On 4/25/23 12:29 AM, olcott wrote:
    On 4/24/2023 6:35 PM, Richard Damon wrote:
    On 4/24/23 10:36 AM, olcott wrote:
    On 4/21/2023 9:37 PM, Richard Damon wrote:
    On 4/21/23 10:10 PM, olcott wrote:
    On 4/21/2023 8:02 PM, Richard Damon wrote:
    On 4/21/23 8:51 PM, olcott wrote:
    On 4/21/2023 6:35 PM, Richard Damon wrote:
    On 4/21/23 7:22 PM, olcott wrote:
    On 4/21/2023 5:36 PM, Richard Damon wrote:
    On 4/21/23 11:35 AM, olcott wrote:
    On 4/21/2023 6:18 AM, Richard Damon wrote:

    So, you don't understand the nature of simulation. >>>>>>>>>>>>>>>


    MIT Professor Michael Sipser has agreed that the following >>>>>>>>>>>>>> verbatim paragraph is correct:

    a) If simulating halt decider H correctly simulates its >>>>>>>>>>>>>> input D until H
    correctly determines that its simulated D would never stop >>>>>>>>>>>>>> running
    unless aborted then

    (b) H can abort its simulation of D and correctly report >>>>>>>>>>>>>> that D
    specifies a non-halting sequence of configurations. >>>>>>>>>>>>>>
    Thus it is established that:

    The behavior of D correctly simulated by H
    is the correct behavior to measure.

    *IF* H correctly simulates per the definition of a UTM >>>>>>>>>>>>>
    It doesn't, so it isn't.


    The behavior of ⟨Ĥ⟩ correctly simulated by embedded_H >>>>>>>>>>>>>> is the correct behavior to measure.


    Since the simulation done by embedded_H does not meet the >>>>>>>>>>>>> definition of "correct simulation" that Professer Sipser >>>>>>>>>>>>> uses, your arguement is VOID.


    You are just PROVING your stupidity.

    Always with the strawman error.
    I am saying that when ⟨Ĥ⟩ is correctly simulated by >>>>>>>>>>>> embedded_H it cannot
    possibly reach its own simulated final state of ⟨Ĥ.qn⟩ in >>>>>>>>>>>> any finite
    number of steps because Ĥ is defined to have a pathological >>>>>>>>>>>> relationship
    to embedded_H.

    Since H never "Correctly Simulates" the input per the
    definition that allows using a simulation instead of the >>>>>>>>>>> actual machines behavior, YOUR method is the STRAWMAN.




    When we examine the behavior of ⟨Ĥ⟩ simulated by a pure UTM >>>>>>>>>>>> or even
    another simulating halt decider such as embedded_H1 having >>>>>>>>>>>> no such
    pathological relationship as the basis of the actual
    behavior of the
    input to embedded_H we are comparing apples to lemons and >>>>>>>>>>>> rejecting the
    apples because lemons are too sour.


    Maybe, but the question is asking for the lemons that the >>>>>>>>>>> pure simulator gives, not the apples that you H gives.

    H is just doing the wrong thing.

    Your failure to see that just shows how blind you are to the >>>>>>>>>>> actual truth of the system.

    H MUST answer about the behavior of the actual machine to be >>>>>>>>>>> a Halt Decider, since that is what the mapping a Halt Decider >>>>>>>>>>> is supposed to answer is based on.


    When a simulating halt decider or even a plain UTM examines >>>>>>>>>> the behavior
    of its input and the SHD or UTM has a pathological
    relationship to its
    input then when another SHD or UTM not having a pathological >>>>>>>>>> relationship to this input is an incorrect proxy for the
    actual behavior
    of this actual input to the original SHD or UTM.

    Nope. If an input has your "pathological" relationship to a
    UTM, then YES, the UTM will generate an infinite behavior, but >>>>>>>>> so does the machine itself, and ANY UTM will see that same
    infinite behavior.


    The point is that that behavior of the input to embedded_H must be >>>>>>>> measured relative to the pathological relationship or it is not >>>>>>>> measuring the actual behavior of the actual input.


    No, the behavior measured must be the DEFINED behavior, which IS >>>>>>> the behavior of the ACTUAL MACHINE.

    That Halts, so H gets the wrong answer.


    I know that this is totally obvious thus I had to conclude that >>>>>>>> anyone
    denying it must be a liar that is only playing head games for
    sadistic
    pleasure.

    No, the fact that you think what you say shows that you are a
    TOTAL IDIOT.




    I did not take into account the power of group think that got at >>>>>>>> least
    100 million Americans to believe the election fraud changed the >>>>>>>> outcome
    of the 2020 election even though there is zero evidence of this >>>>>>>> anywhere. Even a huge cash prize offered by the Lt. governor of >>>>>>>> Texas
    only turned up one Republican that cheated.

    Nope, you just don't understand the truth. You are ready for the >>>>>>> truth, because it shows that you have been wrong, and you fragile >>>>>>> ego can't handle that.


    Only during the 2022 election did it look like this was starting >>>>>>>> to turn
    around a little bit.

    You have been wrong a lot longer than that.



    The problem is that you SHD is NOT a UTM, and thus the fact
    that it aborts its simulation and returns an answer changes the >>>>>>>>> behavior of the machine that USED it (compared to a UTM), and >>>>>>>>> thus to be "correct", the SHD needs to take that into account. >>>>>>>>>

    I used to think that you were simply lying to play head games, >>>>>>>>>> I no
    longer believe this. Now I believe that you are ensnared by >>>>>>>>>> group-think.


    Nope, YOU are the one ensnared in your own fantasy world of lies. >>>>>>>>>

    Group-think is the way that 40% of the electorate could
    honestly believe
    that significant voter fraud changed the outcome of the 2020 >>>>>>>>>> election
    even though there has very persistently been zero evidence of >>>>>>>>>> this.
    https://www.psychologytoday.com/us/basics/groupthink

    And you fantasy world is why you think that a Halt Decider,
    which is DEFINIED that H(D,D) needs to return the answer
    "Halting" if D(D) Halts, is correct to give the answer
    non-halting even though D(D) Ha;ts.

    You are just beliving your own lies.


    Hopefully they will not believe that Fox news paid $787
    million to trick
    people into believing that there was no voter fraud.

    No, they are paying $787 million BECAUSE they tried to gain
    views by telling them the lies they wanted to hear.


    Yes, but even now 30% of the electorate may still believe the lies. >>>>>>>
    So, you seem to beleive in 100% of your lies.

    Yes, there is a portion of the population that fails to see what >>>>>>> is true, because, like you, they think their own ideas are more
    important that what actually is true. As was philosophized, they >>>>>>> ignore the truth, but listen to what their itching ears what to
    hear. That fits you to the T, as you won't see the errors that
    are pointed out to you, and you make up more lies to try to hide >>>>>>> your errors.


    At least they KNEW they were lying, but didn't care, and had to >>>>>>>>> pay the price.

    You don't seem to understand that you are lying just as bad as >>>>>>>>> they were.


    I am absolutely not lying Truth is the most important thing to >>>>>>>> me even
    much more important than love.

    THen why to you lie so much, or are you just that stupid.

    It is clear you just don't know what you are talking about and
    are just making stuff up.

    It seems you have lied so much that you have convinced yourself
    of your lies, and can no longer bear to let the truth in, so you >>>>>>> just deny anything that goes against your lies.

    You have killed your own mind.



    All of this work is aimed at formalizing the notion of truth
    because the
    HP, LP, IT and Tarski's Undefinability theorem are all instances >>>>>>>> of the
    same Olcott(2004) pathological self-reference error.


    So, maybe you need to realize that Truth has to match what is
    actually true, and you need to work with the definitions that
    exist, not the alternate ideas you make up.

    A Halt Decider is DEFINED that

    H(M,w) needs to answer about the behavior of M(w).

    You don't see to understand that, and it seems to even be a blind >>>>>>> spot, as you like dropping that part when you quote what H is
    supposed to do.

    You seem to see "see" self-references where there are not actual >>>>>>> self-references, but the effect of the "self-reference" is built >>>>>>> from simpler components. It seems you don't even understand what >>>>>>> a "Self-Reference" actually is, maybe even what a "reference"
    actually is.

    For the halt decider, P is built on a COPY of the claimed decider >>>>>>> and given a representation of that resultand machine. Not a
    single reference in sight.



    Maybe they will believe that tiny space aliens living in the >>>>>>>>>> heads of
    Fox leadership took control of their brains and forced them to >>>>>>>>>> pay.

    The actual behavior of the actual input is correctly
    determined by an
    embedded UTM that has been adapted to watch the behavior of its >>>>>>>>>> simulation of its input and match any non-halting behavior >>>>>>>>>> patterns.


    But embedded_H isn't "embedded_UTM", so you are just living a lie. >>>>>>>>>

    embedded_H is embedded_UTM for the first N steps even when these >>>>>>>> N steps
    include 10,000 recursive simulations.

    Nope. Just your LIES. You clearly don't understand what a UTM is. >>>>>>>

    After 10,000 recursive simulations even an idiot can infer that >>>>>>>> more
    will not cause ⟨Ĥ⟩ simulated by embedded_H to reach its own >>>>>>>> final state
    of ⟨Ĥ.qn⟩ in any finite number of steps.

    The fact that if embedded_H does 10,000 recursive simulations and >>>>>>> aborts means that H^ will halt after 10,001.

    Your propblem is you logic only works if you can find an N that
    is bigger than N+1


    You and I both know that mathematical induction proves this in >>>>>>>> far less
    than 10,000 recursive simulations. Why you deny it when you
    should know
    this is true is beyond me.

    Nope, you are just proving that you don't even know what
    mathematical induction means.

    You are just too stupid.

    You are just proving you are a liar.


    You know that a halt decider must compute the mapping from its actual >>>>>> input based on the actual specified behavior of this input and then >>>>>> contradict yourself insisting that the actual behavior of this actual >>>>>> input is the wrong behavior to measure.



    Right, and the "ACtual Specified Behavior" of the input is DEFINED
    to be the ACTUAL BEHAVIOR of the machine that input represents,

    *When you say that P must be ~P instead of P we know that you are
    wacky*

    What ~P


    The actual behavior of ⟨Ĥ⟩ correctly simulated by embedded_H is
    necessarily the behavior of the first N steps of ⟨Ĥ⟩ correctly
    simulated
    by embedded_H. From these N steps we can prove by mathematical
    induction
    that ⟨Ĥ⟩ correctly simulated by embedded_H cannot possibly reach it own
    final state of ⟨Ĥ.qn⟩ in any finite number of steps.

    But we don't care about the "First N steps of (Ĥ) correctly
    simulated", we care about the behavior of the actual machine Ĥ (Ĥ) or
    the actual FULL correct simulation of UTM (Ĥ) (Ĥ) [ie the input to H]

    The actual behavior of the input is the behavior of N steps correctly
    simulated by embedded_H because embedded_H remains a UTM until it aborts
    its simulation.


    ILLOGICAL STATEMENT.

    The actual behavior of the actual input is not necessarily the behavior
    of a non-input, as has been assumed since forever.


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Thu Apr 27 07:19:23 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/26/23 10:34 PM, olcott wrote:

    The actual behavior of the actual input is not necessarily the behavior
    of a non-input as it has been assumed since forever.



    But it isn't a "non-input" but is an actual property of the actual
    input, and the property DEFINED as what the decider is supposed to decide.

    Your inability to understand this simple requirement has made your life
    a total waste.

    You just don't seem to understand even the simplest of truths, likely
    because you are just a pathological liar and truth means nothing to you.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Thu Apr 27 20:15:07 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/27/2023 6:19 AM, Richard Damon wrote:
    On 4/26/23 10:34 PM, olcott wrote:

    The actual behavior of the actual input is not necessarily the
    behavior of a non-input as it has been assumed since forever.



    But it isn't a "non-input" but is an actual property of the actual
    input, and the property DEFINED as what the decider is supposed to decide.


    The actual behavior of the actual input MUST take into account the
    pathological relationship between Ĥ and embedded_H.

    Your inability to understand this simple requirement has made you life a total waste.

    You just don't seem to understand even the simplest of truths, likely
    because you are just a pathological liar and truth means nothing to you.

    I have said that this is my life's one legacy.
    Everyone besides you believes that I believe what I say.
    I can't be an actual liar if I believe what I say.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Thu Apr 27 22:41:24 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/27/23 9:15 PM, olcott wrote:
    On 4/27/2023 6:19 AM, Richard Damon wrote:
    On 4/26/23 10:34 PM, olcott wrote:

    The actual behavior of the actual input is not necessarily the
    behavior of a non-input as it has been assumed since forever.



    But it isn't a "non-input" but is an actual property of the actual
    input, and the property DEFINED as what the decider is supposed to
    decide.


    The actual behavior of the actual input MUST take into account that pathological relationship between Ĥ and embedded_H.

    Your inability to understand this simple requirement has made you life
    a total waste.

    You just don't seem to understand even the simplest of truths, likely
    because you are just a pathological liar and truth means nothing to you.
    I have said that this is my life's one legacy.
    Everyone besides you believes that I believe what I say.
    I can't be an actual liar if I believe what I say.


    You are just proving yourself to be a liar.

    Just because you "believe" it doesn't totally make it not a lie. An
    "innocent" mistake is not a lie, but when said with a blatant disregard
    for the actual truth, it becomes a lie.

    You "Legacy" is that you were an ignorant lying idiot.

    If you REALLY actually believe the CRAP that you spew out, then you are
    just proving that you are mentally incompetent.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Thu Apr 27 22:15:55 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/27/2023 9:41 PM, Richard Damon wrote:
    On 4/27/23 9:15 PM, olcott wrote:
    On 4/27/2023 6:19 AM, Richard Damon wrote:
    On 4/26/23 10:34 PM, olcott wrote:

    The actual behavior of the actual input is not necessarily the
    behavior of a non-input as it has been assumed since forever.



    But it isn't a "non-input" but is an actual property of the actual
    input, and the property DEFINED as what the decider is supposed to
    decide.


    The actual behavior of the actual input MUST take into account that
    pathological relationship between Ĥ and embedded_H.

    Your inability to understand this simple requirement has made you
    life a total waste.

    You just don't seem to understand even the simplest of truths, likely
    because you are just a pathological liar and truth means nothing to you.
    I have said that this is my life's one legacy.
    Everyone besides you believes that I believe what I say.
    I can't be an actual liar if I believe what I say.


    You are just proving yourself to be a liar.

    Just because you "believe" it doesn't totally make it not a lie.

    YES IT DOES (and you call me stupid) !!!
    a false statement made with deliberate intent to deceive; an intentional untruth. https://www.dictionary.com/browse/lie

    YES IT DOES (and you call me stupid) !!!
    a false statement made with deliberate intent to deceive; an intentional untruth. https://www.dictionary.com/browse/lie

    YES IT DOES (and you call me stupid) !!!
    a false statement made with deliberate intent to deceive; an intentional untruth. https://www.dictionary.com/browse/lie

    YES IT DOES (and you call me stupid) !!!
    a false statement made with deliberate intent to deceive; an intentional untruth. https://www.dictionary.com/browse/lie

    YES IT DOES (and you call me stupid) !!!
    a false statement made with deliberate intent to deceive; an intentional untruth. https://www.dictionary.com/browse/lie




    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Fri Apr 28 07:40:37 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/27/23 11:15 PM, olcott wrote:
    On 4/27/2023 9:41 PM, Richard Damon wrote:
    On 4/27/23 9:15 PM, olcott wrote:
    On 4/27/2023 6:19 AM, Richard Damon wrote:
    On 4/26/23 10:34 PM, olcott wrote:

    The actual behavior of the actual input is not necessarily the
    behavior of a non-input as it has been assumed since forever.



    But it isn't a "non-input" but is an actual property of the actual
    input, and the property DEFINED as what the decider is supposed to
    decide.


    The actual behavior of the actual input MUST take into account that
    pathological relationship between Ĥ and embedded_H.

    Your inability to understand this simple requirement has made you
    life a total waste.

    You just don't seem to understand even the simplest of truths,
    likely because you are just a pathological liar and truth means
    nothing to you.
    I have said that this is my life's one legacy.
    Everyone besides you believes that I believe what I say.
    I can't be an actual liar if I believe what I say.


    You are just proving yourself to be a liar.

    Just because you "believe" it doesn't totally make it not a lie.

    YES IT DOES (and you call me stupid) !!!
    a false statement made with deliberate intent to deceive; an intentional untruth. https://www.dictionary.com/browse/lie

    YES IT DOES (and you call me stupid) !!!
    a false statement made with deliberate intent to deceive; an intentional untruth. https://www.dictionary.com/browse/lie

    YES IT DOES (and you call me stupid) !!!
    a false statement made with deliberate intent to deceive; an intentional untruth. https://www.dictionary.com/browse/lie

    YES IT DOES (and you call me stupid) !!!
    a false statement made with deliberate intent to deceive; an intentional untruth. https://www.dictionary.com/browse/lie

    YES IT DOES (and you call me stupid) !!!
    a false statement made with deliberate intent to deceive; an intentional untruth. https://www.dictionary.com/browse/lie


    https://www.dictionary.com/browse/lie

    3 an inaccurate or untrue statement; falsehood:
    When I went to school, history books were full of lies, and I won't
    teach lies to kids.

    5 to express what is false; convey a false impression.


    It does not ALWAYS require actual knowledge that the statement is incorrect.

    Again, you fail by the fallacy of attempting proof by example.


    For example, note that in the recent defamation suit, it wasn't necessary
    to prove that they "knew" the statement to be false for certain, only that
    they had a blatant disregard for what is true.


    You have been presented ample evidence that your statements are untrue,
    and any normal competent person would see it; therefore your repeating
    the statements is just pathological lying. Lies because they are wrong,
    and pathological because you appear to be incapable of actually knowing
    the truth.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Fri Apr 28 11:14:04 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/28/23 10:59 AM, olcott wrote:
    On 4/28/2023 6:40 AM, Richard Damon wrote:

    https://www.dictionary.com/browse/lie

    3 an inaccurate or untrue statement; falsehood:
       When I went to school, history books were full of lies, and I won't
      teach lies to kids.

    5 to express what is false; convey a false impression.


    It does not ALWAYS require actual knowledge that the statement is
    incorrect.


    Yes it does and you are stupid for saying otherwise.


    Then why does the definition I quoted say otherwise?

    That just shows you are the one that is stupid, and a liar.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Fri Apr 28 09:59:33 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/28/2023 6:40 AM, Richard Damon wrote:
    On 4/27/23 11:15 PM, olcott wrote:
    On 4/27/2023 9:41 PM, Richard Damon wrote:
    On 4/27/23 9:15 PM, olcott wrote:
    On 4/27/2023 6:19 AM, Richard Damon wrote:
    On 4/26/23 10:34 PM, olcott wrote:

    The actual behavior of the actual input is not necessarily the
    behavior of a non-input as it has been assumed since forever.



    But it isn't a "non-input" but is an actual property of the actual
    input, and the property DEFINED as what the decider is supposed to
    decide.


    The actual behavior of the actual input MUST take into account that
    pathological relationship between Ĥ and embedded_H.

    Your inability to understand this simple requirement has made you
    life a total waste.

    You just don't seem to understand even the simplest of truths,
    likely because you are just a pathological liar and truth means
    nothing to you.
    I have said that this is my life's one legacy.
    Everyone besides you believes that I believe what I say.
    I can't be an actual liar if I believe what I say.


    You are just proving yourself to be a liar.

    Just because you "believe" it doesn't totally make it not a lie.

    YES IT DOES (and you call me stupid) !!!
    a false statement made with deliberate intent to deceive; an
    intentional untruth. https://www.dictionary.com/browse/lie

    YES IT DOES (and you call me stupid) !!!
    a false statement made with deliberate intent to deceive; an
    intentional untruth. https://www.dictionary.com/browse/lie

    YES IT DOES (and you call me stupid) !!!
    a false statement made with deliberate intent to deceive; an
    intentional untruth. https://www.dictionary.com/browse/lie

    YES IT DOES (and you call me stupid) !!!
    a false statement made with deliberate intent to deceive; an
    intentional untruth. https://www.dictionary.com/browse/lie

    YES IT DOES (and you call me stupid) !!!
    a false statement made with deliberate intent to deceive; an
    intentional untruth. https://www.dictionary.com/browse/lie


    https://www.dictionary.com/browse/lie

    3 an inaccurate or untrue statement; falsehood:
      When I went to school, history books were full of lies, and I won't
     teach lies to kids.

    5 to express what is false; convey a false impression.


    It does not ALWAYS require actual knowledge that the statement is
    incorrect.


    Yes it does and you are stupid for saying otherwise.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Fri Apr 28 10:21:20 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/28/2023 10:14 AM, Richard Damon wrote:
    On 4/28/23 10:59 AM, olcott wrote:
    On 4/28/2023 6:40 AM, Richard Damon wrote:

    https://www.dictionary.com/browse/lie

    3 an inaccurate or untrue statement; falsehood:
       When I went to school, history books were full of lies, and I
    won't   teach lies to kids.

    5 to express what is false; convey a false impression.


    It does not ALWAYS require actual knowledge that the statement is
    incorrect.


    Yes it does and you are stupid for saying otherwise.


    Then why do the definition I quoted say otherwise?

    That just shows you are the one that is stupid, and a liar.

    In other words you honestly believe that an honest mistake is a lie.
    THAT MAKES YOU STUPID !!! (yet not a liar)


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Fri Apr 28 10:26:45 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/28/2023 10:14 AM, Richard Damon wrote:
    On 4/28/23 10:59 AM, olcott wrote:
    On 4/28/2023 6:40 AM, Richard Damon wrote:

    https://www.dictionary.com/browse/lie

    3 an inaccurate or untrue statement; falsehood:
       When I went to school, history books were full of lies, and I
    won't   teach lies to kids.

    5 to express what is false; convey a false impression.


    It does not ALWAYS require actual knowledge that the statement is
    incorrect.


    Yes it does and you are stupid for saying otherwise.


Then why does the definition I quoted say otherwise?

    That just shows you are the one that is stupid, and a liar.



    In this case you are proving to be stupid: (yet not a liar)

    1. Traditional Definition of Lying
    There is no universally accepted definition of lying to others. The
    dictionary definition of lying is “to make a false statement with the intention to deceive” (OED 1989) but there are numerous problems with
    this definition. It is both too narrow, since it requires falsity, and
    too broad, since it allows for lying about something other than what is
    being stated, and lying to someone who is believed to be listening in
    but who is not being addressed.

    The most widely accepted definition of lying is the following: “A lie is
    a statement made by one who does not believe it with the intention that
    someone else shall be led to believe it” (Isenberg 1973, 248) (cf.
    “[lying is] making a statement believed to be false, with the intention
    of getting another to accept it as true” (Primoratz 1984, 54n2)). This definition does not specify the addressee, however. It may be restated
    as follows:

    (L1) To lie =df to make a believed-false statement to another person
    with the intention that the other person believe that statement to be true.

    L1 is the traditional definition of lying. According to L1, there are at
    least four necessary conditions for lying.

    First, lying requires that a person make a statement (statement condition).

    Second, lying requires that the person believe the statement to be
    false; that is, lying requires that the statement be untruthful
    (untruthfulness condition).

    Third, lying requires that the untruthful statement be made to another
    person (addressee condition).

    Fourth, lying requires that the person intend that that other person
    believe the untruthful statement to be true (intention to deceive the
    addressee condition).

    https://plato.stanford.edu/entries/lying-definition/#TraDefLyi
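[Illustration only, not part of the SEP entry or either post: read as a
conjunction, the four necessary conditions of L1 can be sketched as a
predicate. The field names below are invented for the sketch.]

    from dataclasses import dataclass

    @dataclass
    class Utterance:
        is_statement: bool               # statement condition
        believed_false_by_speaker: bool  # untruthfulness condition
        made_to_another_person: bool     # addressee condition
        intended_to_be_believed: bool    # intention-to-deceive condition

    def is_lie_L1(u: Utterance) -> bool:
        # L1: a lie only when all four necessary conditions hold together.
        return (u.is_statement
                and u.believed_false_by_speaker
                and u.made_to_another_person
                and u.intended_to_be_believed)

    # An honest mistake is believed true by the speaker, so under L1 it is
    # not a lie, which is exactly the point in dispute in this thread.
    honest_mistake = Utterance(True, False, True, True)
    print(is_lie_L1(honest_mistake))    # False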


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

  • From Richard Damon@21:1/5 to olcott on Fri Apr 28 11:44:32 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/28/23 11:26 AM, olcott wrote:
    On 4/28/2023 10:14 AM, Richard Damon wrote:
    On 4/28/23 10:59 AM, olcott wrote:
    On 4/28/2023 6:40 AM, Richard Damon wrote:

    https://www.dictionary.com/browse/lie

    3 an inaccurate or untrue statement; falsehood:
       When I went to school, history books were full of lies, and I
    won't   teach lies to kids.

    5 to express what is false; convey a false impression.


    It does not ALWAYS require actual knowledge that the statement is
    incorrect.


    Yes it does and you are stupid for saying otherwise.


Then why does the definition I quoted say otherwise?

    That just shows you are the one that is stupid, and a liar.



    In this case you are proving to be stupid: (yet not a liar)

    1. Traditional Definition of Lying
    There is no universally accepted definition of lying to others. The dictionary definition of lying is “to make a false statement with the intention to deceive” (OED 1989) but there are numerous problems with
    this definition. It is both too narrow, since it requires falsity, and
    too broad, since it allows for lying about something other than what is
    being stated, and lying to someone who is believed to be listening in
    but who is not being addressed.

    The most widely accepted definition of lying is the following: “A lie is
    a statement made by one who does not believe it with the intention that someone else shall be led to believe it” (Isenberg 1973, 248) (cf. “[lying is] making a statement believed to be false, with the intention
    of getting another to accept it as true” (Primoratz 1984, 54n2)). This definition does not specify the addressee, however. It may be restated
    as follows:

    (L1) To lie =df to make a believed-false statement to another person
    with the intention that the other person believe that statement to be true.

    L1 is the traditional definition of lying. According to L1, there are at least four necessary conditions for lying.

    First, lying requires that a person make a statement (statement condition).

    Second, lying requires that the person believe the statement to be
    false; that is, lying requires that the statement be untruthful (untruthfulness condition).

    Third, lying requires that the untruthful statement be made to another
    person (addressee condition).

    Fourth, lying requires that the person intend that that other person
    believe the untruthful statement to be true (intention to deceive the addressee condition).

    https://plato.stanford.edu/entries/lying-definition/#TraDefLyi



    So, you are trying to use arguments to justify that you can say "false statements" and not be considered a liar.

The fact that you seem to have KNOWN that the generally accepted truth
differed from your ideas does not excuse you from claiming that you can
say them as FACT, and not be a liar.

The fact that your error has been pointed out an enormous number of
times makes your blatant disregard for the actual truth a suitable
stand-in for your own belief.

    If you don't understand from all instruction you have been given that
    you are wrong, you are just proved to be totally mentally incapable.

    If you want to claim that you are not a liar by reason of insanity, make
    that plea, but that just becomes an admission that you are a
    pathological liar, a liar because of a mental illness.

  • From olcott@21:1/5 to Richard Damon on Fri Apr 28 11:05:49 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/28/2023 10:44 AM, Richard Damon wrote:
    On 4/28/23 11:21 AM, olcott wrote:
    On 4/28/2023 10:14 AM, Richard Damon wrote:
    On 4/28/23 10:59 AM, olcott wrote:
    On 4/28/2023 6:40 AM, Richard Damon wrote:

    https://www.dictionary.com/browse/lie

    3 an inaccurate or untrue statement; falsehood:
       When I went to school, history books were full of lies, and I
    won't   teach lies to kids.

    5 to express what is false; convey a false impression.


    It does not ALWAYS require actual knowledge that the statement is
    incorrect.


    Yes it does and you are stupid for saying otherwise.


Then why does the definition I quoted say otherwise?

    That just shows you are the one that is stupid, and a liar.

    In other words you honestly believe that an honest mistake is a lie.
    THAT MAKES YOU STUPID !!!  (yet not a liar)


So, you ADMIT that your ideas are a "Mistake"?


    No, to the best of my knowledge I have correctly proved all of my
    assertions are semantic tautologies thus necessarily true.

    The fact that few besides me understand that they are semantic
    tautologies is not actual rebuttal at all.

You ADMIT that your statements are untrue because your ideas, while
sincerely held by you, are admitted to be WRONG?

Note, these definitions point out that statements which are made that
are clearly false can be considered as lies on their face value.


    I can call you a liar on the basis that when you sleep at night you
    probably lie down. This is not what is meant by liar.

Note also, I tend to use the term "Pathological liar", which implies
this sort of error: the speaker, due to mental deficiencies, has lost the
ability to actually know what is true or false. This seems to describe you
to a T.

    I also use the term "Ignorant Liar" which means you lie out of a lack of knowledge of the truth.

I am not a liar in any sense of the commonly accepted definition of liar,
which requires that four conditions be met.

    there are at least four necessary conditions for lying:

    First, lying requires that a person make a statement (statement
    condition).

    Second, lying requires that the person believe the statement to be
    false; that is, lying requires that the statement be untruthful
    (untruthfulness condition).

    Third, lying requires that the untruthful statement be made to another
    person (addressee condition).

    Fourth, lying requires that the person intend that that other person
    believe the untruthful statement to be true (intention to deceive the
    addressee condition).

    https://plato.stanford.edu/entries/lying-definition/#TraDefLyi

That you continue to call me a "liar" while failing to disclose that you
are not referring to what everyone else means by the term meets the
legal definition of "actual malice"

    https://www.mtsu.edu/first-amendment/article/889/actual-malice

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

  • From olcott@21:1/5 to Richard Damon on Fri Apr 28 10:50:57 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/28/2023 10:44 AM, Richard Damon wrote:
    On 4/28/23 11:26 AM, olcott wrote:
    On 4/28/2023 10:14 AM, Richard Damon wrote:
    On 4/28/23 10:59 AM, olcott wrote:
    On 4/28/2023 6:40 AM, Richard Damon wrote:

    https://www.dictionary.com/browse/lie

    3 an inaccurate or untrue statement; falsehood:
       When I went to school, history books were full of lies, and I
    won't   teach lies to kids.

    5 to express what is false; convey a false impression.


    It does not ALWAYS require actual knowledge that the statement is
    incorrect.


    Yes it does and you are stupid for saying otherwise.


Then why does the definition I quoted say otherwise?

    That just shows you are the one that is stupid, and a liar.



    In this case you are proving to be stupid: (yet not a liar)

    1. Traditional Definition of Lying
    There is no universally accepted definition of lying to others. The
    dictionary definition of lying is “to make a false statement with the
    intention to deceive” (OED 1989) but there are numerous problems with
    this definition. It is both too narrow, since it requires falsity, and
    too broad, since it allows for lying about something other than what
    is being stated, and lying to someone who is believed to be listening
    in but who is not being addressed.

    The most widely accepted definition of lying is the following: “A lie
    is a statement made by one who does not believe it with the intention
    that someone else shall be led to believe it” (Isenberg 1973, 248)
    (cf. “[lying is] making a statement believed to be false, with the
    intention of getting another to accept it as true” (Primoratz 1984,
    54n2)). This definition does not specify the addressee, however. It
    may be restated as follows:

    (L1) To lie =df to make a believed-false statement to another person
    with the intention that the other person believe that statement to be
    true.

    L1 is the traditional definition of lying. According to L1, there are
    at least four necessary conditions for lying.

    First, lying requires that a person make a statement (statement
    condition).

    Second, lying requires that the person believe the statement to be
    false; that is, lying requires that the statement be untruthful
    (untruthfulness condition).

    Third, lying requires that the untruthful statement be made to another
    person (addressee condition).

    Fourth, lying requires that the person intend that that other person
    believe the untruthful statement to be true (intention to deceive the
    addressee condition).

    https://plato.stanford.edu/entries/lying-definition/#TraDefLyi



    So, you are trying to use arguments to justify that you can say "false statements" and not be considered a liar.

    The fact that you seem to have KNOWN that the generally accept truth
    differed from your ideas does not excuse you from claiming that you can
    say them as FACT, and not be a liar.


    When I say that an idea is a fact I mean that it is a semantic
    tautology. That you don't understand things well enough to verify that
    it is a semantic tautology does not even make my assertion false.

    The fact that your error has been pointed out an enormous number of
    times, makes you blatant disregard for the actual truth, a suitable
    stand in for your own belief.


The fact that no one has understood my semantic tautologies only proves
that no one has understood my semantic tautologies. It does not even
prove that my assertion is incorrect.

    If you don't understand from all instruction you have been given that
    you are wrong, you are just proved to be totally mentally incapable.

    If you want to claim that you are not a liar by reason of insanity, make
    that plea, but that just becomes an admission that you are a
    pathological liar, a liar because of a mental illness.


    That you continue to believe that lies do not require an intention to
    deceive after the above has been pointed out makes you willfully
    ignorant, yet still not a liar.



    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

  • From Richard Damon@21:1/5 to olcott on Fri Apr 28 12:41:53 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/28/23 12:05 PM, olcott wrote:
    On 4/28/2023 10:44 AM, Richard Damon wrote:
    On 4/28/23 11:21 AM, olcott wrote:
    On 4/28/2023 10:14 AM, Richard Damon wrote:
    On 4/28/23 10:59 AM, olcott wrote:
    On 4/28/2023 6:40 AM, Richard Damon wrote:

    https://www.dictionary.com/browse/lie

    3 an inaccurate or untrue statement; falsehood:
   When I went to school, history books were full of lies, and I won't teach lies to kids.

    5 to express what is false; convey a false impression.


    It does not ALWAYS require actual knowledge that the statement is
    incorrect.


    Yes it does and you are stupid for saying otherwise.


Then why does the definition I quoted say otherwise?

    That just shows you are the one that is stupid, and a liar.

    In other words you honestly believe that an honest mistake is a lie.
    THAT MAKES YOU STUPID !!!  (yet not a liar)


    So, you ADMIT that you ideas are a "Mistake"?


    No, to the best of my knowledge I have correctly proved all of my
    assertions are semantic tautologies thus necessarily true.

    The fact that few besides me understand that they are semantic
    tautologies is not actual rebuttal at all.

No, but the fact that you can't rebut the claims against your
    arguments, and really haven't tried, implies that you know that your
    claims are baseless.


IF your counter to the fact that you have made clearly factually
incorrect statements is that "Honest Mistakes" are not lies, that just
shows what you consider your grounds to defend yourself.


    You ADMIT that your statements are untrue because you ideas, while
    sincerly held by you, are admitted to be WRONG?

    Note, these definition point to statements which are made that are
    clearly false can be considered as lies on their face value.


    I can call you a liar on the basis that when you sleep at night you
    probably lie down. This is not what is meant by liar.

So, you admit you don't understand the definition of liar?


    Note also, I tend to use the term "Pathological liar", which implies
    this sort error, the speaker, due to mental deficiencies have lost the
    ability to actual know what is true or false. This seems to describe
    you to the T.

    I also use the term "Ignorant Liar" which means you lie out of a lack
    of knowledge of the truth.

    I am not a liar in any sense of the common accepted definition of liar
    that requires that four conditions be met.

But you are by MY definition that I posted, one who makes false or
misleading statements.


    there are at least four necessary conditions for lying:

    First, lying requires that a person make a statement (statement
    condition).

    Second, lying requires that the person believe the statement to be
    false; that is, lying requires that the statement be untruthful (untruthfulness condition).

    Third, lying requires that the untruthful statement be made to another
    person (addressee condition).

    Fourth, lying requires that the person intend that that other person
    believe the untruthful statement to be true (intention to deceive the addressee condition).

    https://plato.stanford.edu/entries/lying-definition/#TraDefLyi

    That you continue to call me a "liar" while failing to disclose that you
    are are not referring to what everyone else means by the term meets the
    legal definition of "actual malice"

    https://www.mtsu.edu/first-amendment/article/889/actual-malice


    So, you don't think that definition 3 or 5 of the reference you made,
    that did NOT require knowledge of the error by the person.

    Note, YOU don't get to limit the definition of a word as it is used by
    another. That shows YOU don't understand how communication works.

    There is a significant difference between an "Honest Mistake" and being
    a denier of the truth when presented.

Unless you want to retract all your statements about the "Trump Lie"
    since some of the people seem to honestly believe it.

  • From olcott@21:1/5 to Richard Damon on Fri Apr 28 11:58:50 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/28/2023 11:41 AM, Richard Damon wrote:
    On 4/28/23 12:05 PM, olcott wrote:
    On 4/28/2023 10:44 AM, Richard Damon wrote:
    On 4/28/23 11:21 AM, olcott wrote:
    On 4/28/2023 10:14 AM, Richard Damon wrote:
    On 4/28/23 10:59 AM, olcott wrote:
    On 4/28/2023 6:40 AM, Richard Damon wrote:

    https://www.dictionary.com/browse/lie

    3 an inaccurate or untrue statement; falsehood:
   When I went to school, history books were full of lies, and I won't teach lies to kids.

    5 to express what is false; convey a false impression.


It does not ALWAYS require actual knowledge that the statement is incorrect.


    Yes it does and you are stupid for saying otherwise.


Then why does the definition I quoted say otherwise?

    That just shows you are the one that is stupid, and a liar.

    In other words you honestly believe that an honest mistake is a lie.
    THAT MAKES YOU STUPID !!!  (yet not a liar)


    So, you ADMIT that you ideas are a "Mistake"?


    No, to the best of my knowledge I have correctly proved all of my
    assertions are semantic tautologies thus necessarily true.

    The fact that few besides me understand that they are semantic
    tautologies is not actual rebuttal at all.

    No, but the fact that you can't rebute the claims against your
    arguments, and really haven't tried, implies that you know that your
    claims are baseless.


    IF your counter to the fact that you have made clearly factually
    incorrect statements is that "Honest Mistakes" are not lies, just shows
    what you consider your grounds to defined yourself.


    You ADMIT that your statements are untrue because you ideas, while
    sincerly held by you, are admitted to be WRONG?

    Note, these definition point to statements which are made that are
    clearly false can be considered as lies on their face value.


    I can call you a liar on the basis that when you sleep at night you
    probably lie down. This is not what is meant by liar.

    So, you admit you don't understand the defintion of liar?


    Note also, I tend to use the term "Pathological liar", which implies
    this sort error, the speaker, due to mental deficiencies have lost
    the ability to actual know what is true or false. This seems to
    describe you to the T.

    I also use the term "Ignorant Liar" which means you lie out of a lack
    of knowledge of the truth.

    I am not a liar in any sense of the common accepted definition of liar
    that requires that four conditions be met.

    But are by MY definition that I posted, one who makes false or
    misleading statments.


    there are at least four necessary conditions for lying:

    First, lying requires that a person make a statement (statement
    condition).

    Second, lying requires that the person believe the statement to be
    false; that is, lying requires that the statement be untruthful
    (untruthfulness condition).

    Third, lying requires that the untruthful statement be made to another
    person (addressee condition).

    Fourth, lying requires that the person intend that that other person
    believe the untruthful statement to be true (intention to deceive the
    addressee condition).

    https://plato.stanford.edu/entries/lying-definition/#TraDefLyi

    That you continue to call me a "liar" while failing to disclose that you
    are are not referring to what everyone else means by the term meets the
    legal definition of "actual malice"

    https://www.mtsu.edu/first-amendment/article/889/actual-malice


    So, you don't think that definition 3 or 5 of the reference you made,
    that did NOT require knowledge of the error by the person.


    The SEP article references the four required conditions for
    "The most widely accepted definition of lying"

    The most widely accepted definition of lying is the following: “A lie is
    a statement made by one who does not believe it with the intention that
    someone else shall be led to believe it” (Isenberg 1973, 248) (cf.
    “[lying is] making a statement believed to be false, with the intention
    of getting another to accept it as true” (Primoratz 1984, 54n2)). This definition does not specify the addressee, however. It may be restated
    as follows:

    (L1) To lie =df to make a believed-false statement to another person
    with the intention that the other person believe that statement to be true.

    L1 is the traditional definition of lying. According to L1, there are at
    least four necessary conditions for lying.



    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

  • From olcott@21:1/5 to Richard Damon on Fri Apr 28 12:15:33 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/28/2023 11:41 AM, Richard Damon wrote:
    On 4/28/23 11:50 AM, olcott wrote:
    On 4/28/2023 10:44 AM, Richard Damon wrote:
    On 4/28/23 11:26 AM, olcott wrote:
    On 4/28/2023 10:14 AM, Richard Damon wrote:
    On 4/28/23 10:59 AM, olcott wrote:
    On 4/28/2023 6:40 AM, Richard Damon wrote:

    https://www.dictionary.com/browse/lie

    3 an inaccurate or untrue statement; falsehood:
   When I went to school, history books were full of lies, and I won't teach lies to kids.

    5 to express what is false; convey a false impression.


It does not ALWAYS require actual knowledge that the statement is incorrect.


    Yes it does and you are stupid for saying otherwise.


Then why does the definition I quoted say otherwise?

    That just shows you are the one that is stupid, and a liar.



    In this case you are proving to be stupid: (yet not a liar)

    1. Traditional Definition of Lying
    There is no universally accepted definition of lying to others. The
    dictionary definition of lying is “to make a false statement with
the intention to deceive” (OED 1989) but there are numerous problems with this definition. It is both too narrow, since it requires
    falsity, and too broad, since it allows for lying about something
    other than what is being stated, and lying to someone who is
    believed to be listening in but who is not being addressed.

    The most widely accepted definition of lying is the following: “A
    lie is a statement made by one who does not believe it with the
    intention that someone else shall be led to believe it” (Isenberg
1973, 248) (cf. “[lying is] making a statement believed to be false, with the intention of getting another to accept it as true”
    (Primoratz 1984, 54n2)). This definition does not specify the
    addressee, however. It may be restated as follows:

    (L1) To lie =df to make a believed-false statement to another person
    with the intention that the other person believe that statement to
    be true.

    L1 is the traditional definition of lying. According to L1, there
    are at least four necessary conditions for lying.

    First, lying requires that a person make a statement (statement
    condition).

    Second, lying requires that the person believe the statement to be
    false; that is, lying requires that the statement be untruthful
    (untruthfulness condition).

    Third, lying requires that the untruthful statement be made to
    another person (addressee condition).

    Fourth, lying requires that the person intend that that other person
    believe the untruthful statement to be true (intention to deceive
    the addressee condition).

    https://plato.stanford.edu/entries/lying-definition/#TraDefLyi



    So, you are trying to use arguments to justify that you can say
    "false statements" and not be considered a liar.

    The fact that you seem to have KNOWN that the generally accept truth
    differed from your ideas does not excuse you from claiming that you
    can say them as FACT, and not be a liar.


    When I say that an idea is a fact I mean that it is a semantic
    tautology. That you don't understand things well enough to verify that
    it is a semantic tautology does not even make my assertion false.


So, you admit that you don't know the actual meaning of a FACT.


I mean true in the absolute sense of the word true such as:
    2 + 3 = 5 is verified as necessarily true on the basis of its meaning.

    Semantic tautologies are the only kind of facts that are necessarily
    true in all possible worlds.

    The fact that your error has been pointed out an enormous number of
    times, makes you blatant disregard for the actual truth, a suitable
    stand in for your own belief.


    That fact that no one has understood my semantic tautologies only proves
    that no one has understood my semantic tautologies. It does not even
    prove that my assertion is incorrect.

    No, the fact that you ACCEPT most existing logic is valid, but then try
    to change the rules at the far end, without understanding that you are accepting things your logic likely rejects, shows that you don't
    understand how logic actually works.


    That I do not have a complete grasp of every nuance of mathematical
    logic does not show that I do not have a sufficient grasp of those
    aspects that I refer to.

    My next goal is to attain a complete understanding of all of the basic terminology of model theory. I had a key insight about model theory
    sometime in the last month that indicates that I must master its basic terminology.

    You present "semantic tautologies" based on FALSE definition and results
    that you can not prove.


    It may seem that way from the POV of not understanding what I am saying.
    The entire body of analytical truth is a set of semantic tautologies.
    That you are unfamiliar with the meaning of these terms is no actual
    rebuttal at all.


    If you don't understand from all instruction you have been given that
    you are wrong, you are just proved to be totally mentally incapable.

    If you want to claim that you are not a liar by reason of insanity,
    make that plea, but that just becomes an admission that you are a
    pathological liar, a liar because of a mental illness.


    That you continue to believe that lies do not require an intention to
    deceive after the above has been pointed out makes you willfully
    ignorant, yet still not a liar.


But, by the definition I use, since it has been made clear to you that
you are wrong but you continue to spout words that have been proven
incorrect, that makes YOU a pathological liar.


    No it only proves that you continue to have no grasp of what a semantic tautology could possibly be. Any expression that is verified as
    necessarily true entirely on the basis of its meaning is a semantic
    tautology.

    Cats are animals is necessarily true even if no cats ever physically
    existed.

    Also, I am not "ignorant", since that means not having knowledge or
    awareness of something, but I do understand what you are saying and
    aware of your ideas, AND I POINT OUT YOUR ERRORS.

    Until you fully understand what a semantic tautology is and why it is necessarily true you remain sufficiently ignorant.

YOU are the ignorant one, as you don't seem to understand enough to
even comment about the rebuttals to your claims.

THAT shows ignorance, and stupidity.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

  • From Richard Damon@21:1/5 to olcott on Fri Apr 28 17:21:53 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/28/23 1:15 PM, olcott wrote:
    On 4/28/2023 11:41 AM, Richard Damon wrote:
    On 4/28/23 11:50 AM, olcott wrote:
    On 4/28/2023 10:44 AM, Richard Damon wrote:
    On 4/28/23 11:26 AM, olcott wrote:
    On 4/28/2023 10:14 AM, Richard Damon wrote:
    On 4/28/23 10:59 AM, olcott wrote:
    On 4/28/2023 6:40 AM, Richard Damon wrote:

    https://www.dictionary.com/browse/lie

    3 an inaccurate or untrue statement; falsehood:
   When I went to school, history books were full of lies, and I won't teach lies to kids.

    5 to express what is false; convey a false impression.


It does not ALWAYS require actual knowledge that the statement is incorrect.


    Yes it does and you are stupid for saying otherwise.


Then why does the definition I quoted say otherwise?

    That just shows you are the one that is stupid, and a liar.



    In this case you are proving to be stupid: (yet not a liar)

    1. Traditional Definition of Lying
    There is no universally accepted definition of lying to others. The
    dictionary definition of lying is “to make a false statement with
    the intention to deceive” (OED 1989) but there are numerous
    problems with this definition. It is both too narrow, since it
    requires falsity, and too broad, since it allows for lying about
    something other than what is being stated, and lying to someone who
    is believed to be listening in but who is not being addressed.

    The most widely accepted definition of lying is the following: “A
    lie is a statement made by one who does not believe it with the
    intention that someone else shall be led to believe it” (Isenberg
    1973, 248) (cf. “[lying is] making a statement believed to be
false, with the intention of getting another to accept it as true” (Primoratz 1984, 54n2)). This definition does not specify the
    addressee, however. It may be restated as follows:

    (L1) To lie =df to make a believed-false statement to another
    person with the intention that the other person believe that
    statement to be true.

    L1 is the traditional definition of lying. According to L1, there
    are at least four necessary conditions for lying.

    First, lying requires that a person make a statement (statement
    condition).

    Second, lying requires that the person believe the statement to be
    false; that is, lying requires that the statement be untruthful
    (untruthfulness condition).

    Third, lying requires that the untruthful statement be made to
    another person (addressee condition).

    Fourth, lying requires that the person intend that that other
    person believe the untruthful statement to be true (intention to
    deceive the addressee condition).

    https://plato.stanford.edu/entries/lying-definition/#TraDefLyi



    So, you are trying to use arguments to justify that you can say
    "false statements" and not be considered a liar.

    The fact that you seem to have KNOWN that the generally accept truth
    differed from your ideas does not excuse you from claiming that you
    can say them as FACT, and not be a liar.


    When I say that an idea is a fact I mean that it is a semantic
    tautology. That you don't understand things well enough to verify that
    it is a semantic tautology does not even make my assertion false.


    So, you admit that you don't know that actually meaning of a FACT.


I mean true in the absolute sense of the word true such as:
    2 + 3 = 5 is verified as necessarily true on the basis of its meaning.

    Semantic tautologies are the only kind of facts that are necessarily
    true in all possible worlds.

    The fact that your error has been pointed out an enormous number of
    times, makes you blatant disregard for the actual truth, a suitable
    stand in for your own belief.


The fact that no one has understood my semantic tautologies only proves
that no one has understood my semantic tautologies. It does not even
    prove that my assertion is incorrect.

    No, the fact that you ACCEPT most existing logic is valid, but then
    try to change the rules at the far end, without understanding that you
    are accepting things your logic likely rejects, shows that you don't
    understand how logic actually works.


    That I do not have a complete grasp of every nuance of mathematical
    logic does not show that I do not have a sufficient grasp of those
    aspects that I refer to.

    My next goal is to attain a complete understanding of all of the basic terminology of model theory. I had a key insight about model theory
    sometime in the last month that indicates that I must master its basic terminology.

    You present "semantic tautologies" based on FALSE definition and
    results that you can not prove.


    It may seem that way from the POV of not understanding what I am saying.
    The entire body of analytical truth is a set of semantic tautologies.
    That you are unfamiliar with the meaning of these terms is no actual
    rebuttal at all.


    If you don't understand from all instruction you have been given
    that you are wrong, you are just proved to be totally mentally
    incapable.

    If you want to claim that you are not a liar by reason of insanity,
    make that plea, but that just becomes an admission that you are a
    pathological liar, a liar because of a mental illness.


    That you continue to believe that lies do not require an intention to
    deceive after the above has been pointed out makes you willfully
    ignorant, yet still not a liar.


    But, by the definiton I use, since it has been made clear to you that
    you are wrong, but you continue to spout words that have been proven
    incorrect make YOU a pathological liar.


    No it only proves that you continue to have no grasp of what a semantic tautology could possibly be. Any expression that is verified as
    necessarily true entirely on the basis of its meaning is a semantic tautology.

    Except that isn't the meaning of a "Tautology".

    The COMMON definition is "the saying of the same thing twice in
    different words, generally considered to be a fault of style (e.g., they arrived one after the other in succession)".

The Meaning in the field of Logic is "In mathematical logic, a tautology
    (from Greek: ταυτολογία) is a formula or assertion that is true in every
    possible interpretation."

    So, neither of them point to the meaning of the words.

    If you are just making up words, you are admitting you have lost from
    the start.

The problem is that word meanings, especially for "natural" language, are
too ill-defined to be used to form the basis of formal logic. You need to
work with FORMAL definitions, which become part of the Truth Makers of
the system. At that point, either your semantic tautologies are real
tautologies because they are always true in every model, or they are not
tautologies.


    Cats are animals is necessarily true even if no cats ever physically
    existed.

    Nope. If cats don't exist in the system, the statement is not
    necessarily true. For instance, the statement is NOT true in the system
    of the Natural Numbers.


    Also, I am not "ignorant", since that means not having knowledge or
    awareness of something, but I do understand what you are saying and
    aware of your ideas, AND I POINT OUT YOUR ERRORS.

    Until you fully understand what a semantic tautology is and why it is necessarily true you remain sufficiently ignorant.

    As far as you have explained, it is an illogical concept based on
    undefined grounds. You refuse to state whether your "semantic" is "by
the meaning of the words" at which point you need to understand that either
    you are using the "natural" meaning and break the rules of formal logic,
    or you mean the formal meaning within the system, at which point what is
    the difference between your "semantic" connections as you define them
    and the classical meaning of semantic being related to showable by a
    chain of connections to the truth makers of the system.

Note, if you take that latter definition, then either you need to cripple
the logic you allow or the implication operator and the principle of
explosion both exist in your system. (If you don't define the
implication operator as a base operation, but do include "not", "and"
and "or" as operations, it can just be defined in the system).


    YOU are the ignorant one, as you don't seem to understand enough to
    even comment about the rebutalls to your claims.

    THAT show ignorance, and stupidity.


  • From olcott@21:1/5 to Richard Damon on Fri Apr 28 17:17:09 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/28/2023 4:21 PM, Richard Damon wrote:
    On 4/28/23 1:15 PM, olcott wrote:
    On 4/28/2023 11:41 AM, Richard Damon wrote:
    On 4/28/23 11:50 AM, olcott wrote:
    On 4/28/2023 10:44 AM, Richard Damon wrote:
    On 4/28/23 11:26 AM, olcott wrote:
    On 4/28/2023 10:14 AM, Richard Damon wrote:
    On 4/28/23 10:59 AM, olcott wrote:
    On 4/28/2023 6:40 AM, Richard Damon wrote:

    https://www.dictionary.com/browse/lie

    3 an inaccurate or untrue statement; falsehood:
   When I went to school, history books were full of lies, and I won't teach lies to kids.

    5 to express what is false; convey a false impression.


It does not ALWAYS require actual knowledge that the statement is incorrect.


    Yes it does and you are stupid for saying otherwise.


Then why does the definition I quoted say otherwise?

    That just shows you are the one that is stupid, and a liar.



    In this case you are proving to be stupid: (yet not a liar)

    1. Traditional Definition of Lying
    There is no universally accepted definition of lying to others.
The dictionary definition of lying is “to make a false statement
with the intention to deceive” (OED 1989) but there are numerous
problems with this definition. It is both too narrow, since it
requires falsity, and too broad, since it allows for lying about
something other than what is being stated, and lying to someone
who is believed to be listening in but who is not being addressed.

The most widely accepted definition of lying is the following: “A
lie is a statement made by one who does not believe it with the
intention that someone else shall be led to believe it” (Isenberg
1973, 248) (cf. “[lying is] making a statement believed to be
false, with the intention of getting another to accept it as true”
(Primoratz 1984, 54n2)). This definition does not specify the
    addressee, however. It may be restated as follows:

    (L1) To lie =df to make a believed-false statement to another
    person with the intention that the other person believe that
    statement to be true.

    L1 is the traditional definition of lying. According to L1, there
    are at least four necessary conditions for lying.

    First, lying requires that a person make a statement (statement
    condition).

Second, lying requires that the person believe the statement to be
false; that is, lying requires that the statement be untruthful
    (untruthfulness condition).

    Third, lying requires that the untruthful statement be made to
    another person (addressee condition).

    Fourth, lying requires that the person intend that that other
    person believe the untruthful statement to be true (intention to
    deceive the addressee condition).

    https://plato.stanford.edu/entries/lying-definition/#TraDefLyi



    So, you are trying to use arguments to justify that you can say
    "false statements" and not be considered a liar.

    The fact that you seem to have KNOWN that the generally accept
    truth differed from your ideas does not excuse you from claiming
    that you can say them as FACT, and not be a liar.


    When I say that an idea is a fact I mean that it is a semantic
tautology. That you don't understand things well enough to verify that
it is a semantic tautology does not even make my assertion false.


    So, you admit that you don't know that actually meaning of a FACT.


I mean true in the absolute sense of the word true such as:
    2 + 3 = 5 is verified as necessarily true on the basis of its meaning.

    Semantic tautologies are the only kind of facts that are necessarily
    true in all possible worlds.

    The fact that your error has been pointed out an enormous number of
    times, makes you blatant disregard for the actual truth, a suitable
    stand in for your own belief.


The fact that no one has understood my semantic tautologies only proves
that no one has understood my semantic tautologies. It does not even
prove that my assertion is incorrect.

    No, the fact that you ACCEPT most existing logic is valid, but then
    try to change the rules at the far end, without understanding that
    you are accepting things your logic likely rejects, shows that you
    don't understand how logic actually works.


    That I do not have a complete grasp of every nuance of mathematical
    logic does not show that I do not have a sufficient grasp of those
    aspects that I refer to.

    My next goal is to attain a complete understanding of all of the basic
    terminology of model theory. I had a key insight about model theory
    sometime in the last month that indicates that I must master its basic
    terminology.

    You present "semantic tautologies" based on FALSE definition and
    results that you can not prove.


    It may seem that way from the POV of not understanding what I am saying.
    The entire body of analytical truth is a set of semantic tautologies.
    That you are unfamiliar with the meaning of these terms is no actual
    rebuttal at all.


    If you don't understand from all instruction you have been given
    that you are wrong, you are just proved to be totally mentally
    incapable.

    If you want to claim that you are not a liar by reason of insanity,
    make that plea, but that just becomes an admission that you are a
    pathological liar, a liar because of a mental illness.


    That you continue to believe that lies do not require an intention to
    deceive after the above has been pointed out makes you willfully
    ignorant, yet still not a liar.


    But, by the definiton I use, since it has been made clear to you that
    you are wrong, but you continue to spout words that have been proven
    incorrect make YOU a pathological liar.


    No it only proves that you continue to have no grasp of what a semantic
    tautology could possibly be. Any expression that is verified as
    necessarily true entirely on the basis of its meaning is a semantic
    tautology.

    Except that isn't the meaning of a "Tautology".


    In logic, a formula is satisfiable if it is true under at least one interpretation, and thus a tautology is a formula whose negation is unsatisfiable. In other words, it cannot be false. It cannot be untrue.

    https://en.wikipedia.org/wiki/Tautology_(logic)#:~:text=In%20logic%2C%20a%20formula%20is,are%20known%20formally%20as%20contradictions.
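[Illustration only: the textbook sense quoted above, "true in every possible
interpretation", equivalently "its negation is unsatisfiable", can be checked
mechanically for propositional formulas. A minimal brute-force sketch, not a
method proposed by either poster.]

    from itertools import product

    def is_tautology(formula, variables):
        # True iff the formula holds under every assignment of truth values,
        # i.e. its negation is unsatisfiable.
        return all(formula(dict(zip(variables, values)))
                   for values in product([False, True], repeat=len(variables)))

    # p -> p is a tautology; p -> q is not (false when p=True, q=False).
    print(is_tautology(lambda v: (not v["p"]) or v["p"], ["p"]))       # True
    print(is_tautology(lambda v: (not v["p"]) or v["q"], ["p", "q"]))  # False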

    What I actually mean is analytic truth, yet math people will have no
    clue about this because all of math is syntactic rather than semantic. https://plato.stanford.edu/entries/analytic-synthetic/

    Because of this I coined my own term [semantic tautology] as the most self-descriptive term that I could find as a place-holder for my notion.

    The COMMON definition is "the saying of the same thing twice in
    different words, generally considered to be a fault of style (e.g., they arrived one after the other in succession)".

The Meaning in the field of Logic is "In mathematical logic, a tautology (from Greek: ταυτολογία) is a formula or assertion that is true in every
    possible interpretation."

    So, neither of them point to the meaning of the words.


Did I say that I am limiting the application of [semantic tautology] to words?

When dealing with logic a [semantic tautology] may simply be a
tautology (logic). When dealing with formalized natural language it may
be more clear to refer to it as a [semantic tautology] in that the
semantic meanings of natural language expressions are formalized as
axioms.

    If you are just making up words, you are admitting you have lost from
    the start.

    The problem is that word meanings, especially for "natural" language are
    to ill defined to be used to form the basis of formal logic. You need to

    Not when natural language is formalized.
    Semantic Grammar and the Power of Computational Language

    https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

    work with FORMAL definitions, which become part of the Truth Makers of
    the system. At that point, either you semantic tautologies are real tautologies because they are alway true in every model, or they are not tautologies.


Cats are animals in the currently existing model of the world; cats may
not exist in other possible worlds. [semantic tautology] applies within a
model of the world.


    Cats are animals is necessarily true even if no cats ever physically
    existed.

    Nope. If cats don't exist in the system, the statement is not
    necessarily true. For instance, the statement is NOT true in the system
    of the Natural Numbers.


    Cats are animals at the semantic level in the current model of the
    world. The model of the world has GUID placeholders for the notion of
    {cats} and {animals} and for every other unique sense meaning.
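[Illustration only: one way to picture the GUID-placeholder idea described
above. The identifiers and the tiny is-a table are invented for the sketch,
not taken from any actual knowledge base.]

    import uuid

    # Each unique sense meaning gets its own GUID placeholder.
    CAT = uuid.uuid4()
    ANIMAL = uuid.uuid4()

    # A miniature "model of the world": an is-a relation over sense GUIDs.
    IS_A = {CAT: {ANIMAL}}

    def is_a(x, y, table=IS_A):
        # Follow is-a links transitively from x, looking for y.
        seen, frontier = set(), {x}
        while frontier:
            node = frontier.pop()
            if node == y:
                return True
            seen.add(node)
            frontier |= table.get(node, set()) - seen
        return False

    print(is_a(CAT, ANIMAL))   # True: {cats} <are> {animals} in this model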


    Also, I am not "ignorant", since that means not having knowledge or
    awareness of something, but I do understand what you are saying and
    aware of your ideas, AND I POINT OUT YOUR ERRORS.

    Until you fully understand what a semantic tautology is and why it is
    necessarily true you remain sufficiently ignorant.

    As far as you have explained, it is an illogical concept based on
    undefined grounds. You refuse to state whether your "semantic" is "by
    the meaning of the words" at which point you need understand that either

    When I refer to {semantic} and don't restrict this to the meaning of
    words then it applies to every formal language expression, natural
    language expression and formalized natural language expression.

    That you assume otherwise is your mistake.

    you are using the "natural" meaning and break the rules of formal logic,
    or you mean the formal meaning within the system, at which point what is
    the difference between your "semantic" connections as you define them
    and the classical meaning of semantic being related to showable by a
    chain of connections to the truth makers of the system.


    We don't need to formalize the notions of {cats} and {animals} to know
    that cats <are> animals according to the meaning of those terms.

    Note, if you take that later definition, then either you need to cripple
    the logic you allow or the implication operator and the principle of explosion both exist in your system. (If you don't define the
    implication operator as a base operation,

    I have already said quite a few times that I am probably replacing the implication operator with the Semantic Necessity operator: ⊨□

    That you can't seem to remember key points that I make and repeat many
    times is very annoying.

    but do include "not", "and"
    and "or" as operation, it can just be defined in the system).


    YOU are the ignorant one, as you don't seem to understand enough to
    even comment about the rebutalls to your claims.

    THAT show ignorance, and stupidity.



    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

  • From Richard Damon@21:1/5 to olcott on Fri Apr 28 23:05:30 2023
    XPost: sci.logic, comp.theory, sci.math
    XPost: alt.philosophy

    On 4/28/23 6:17 PM, olcott wrote:
    On 4/28/2023 4:21 PM, Richard Damon wrote:
    On 4/28/23 1:15 PM, olcott wrote:
    On 4/28/2023 11:41 AM, Richard Damon wrote:
    On 4/28/23 11:50 AM, olcott wrote:
    On 4/28/2023 10:44 AM, Richard Damon wrote:
    On 4/28/23 11:26 AM, olcott wrote:
    On 4/28/2023 10:14 AM, Richard Damon wrote:
    On 4/28/23 10:59 AM, olcott wrote:
    On 4/28/2023 6:40 AM, Richard Damon wrote:

    https://www.dictionary.com/browse/lie

    3 an inaccurate or untrue statement; falsehood:
   When I went to school, history books were full of lies, and I won't teach lies to kids.

    5 to express what is false; convey a false impression.


It does not ALWAYS require actual knowledge that the statement is incorrect.


    Yes it does and you are stupid for saying otherwise.


Then why does the definition I quoted say otherwise?

    That just shows you are the one that is stupid, and a liar.



    In this case you are proving to be stupid: (yet not a liar)

    1. Traditional Definition of Lying
    There is no universally accepted definition of lying to others.
The dictionary definition of lying is “to make a false statement
with the intention to deceive” (OED 1989) but there are numerous
problems with this definition. It is both too narrow, since it
requires falsity, and too broad, since it allows for lying about
something other than what is being stated, and lying to someone
who is believed to be listening in but who is not being addressed.

The most widely accepted definition of lying is the following: “A
lie is a statement made by one who does not believe it with the
intention that someone else shall be led to believe it” (Isenberg
1973, 248) (cf. “[lying is] making a statement believed to be
false, with the intention of getting another to accept it as
true” (Primoratz 1984, 54n2)). This definition does not specify
the addressee, however. It may be restated as follows:

    (L1) To lie =df to make a believed-false statement to another
    person with the intention that the other person believe that
    statement to be true.

L1 is the traditional definition of lying. According to L1, there
are at least four necessary conditions for lying.

    First, lying requires that a person make a statement (statement
    condition).

    Second, lying requires that the person believe the statement to
    be false; that is, lying requires that the statement be
    untruthful (untruthfulness condition).

    Third, lying requires that the untruthful statement be made to
    another person (addressee condition).

    Fourth, lying requires that the person intend that that other
person believe the untruthful statement to be true (intention to
deceive the addressee condition).

    https://plato.stanford.edu/entries/lying-definition/#TraDefLyi



    So, you are trying to use arguments to justify that you can say
    "false statements" and not be considered a liar.

    The fact that you seem to have KNOWN that the generally accept
    truth differed from your ideas does not excuse you from claiming
    that you can say them as FACT, and not be a liar.


    When I say that an idea is a fact I mean that it is a semantic
tautology. That you don't understand things well enough to verify that
it is a semantic tautology does not even make my assertion false.


    So, you admit that you don't know that actually meaning of a FACT.


I mean true in the absolute sense of the word true such as:
    2 + 3 = 5 is verified as necessarily true on the basis of its meaning.

    Semantic tautologies are the only kind of facts that are necessarily
    true in all possible worlds.

    The fact that your error has been pointed out an enormous number
    of times, makes you blatant disregard for the actual truth, a
    suitable stand in for your own belief.


The fact that no one has understood my semantic tautologies only proves
that no one has understood my semantic tautologies. It does not even
prove that my assertion is incorrect.

    No, the fact that you ACCEPT most existing logic is valid, but then
    try to change the rules at the far end, without understanding that
    you are accepting things your logic likely rejects, shows that you
    don't understand how logic actually works.


    That I do not have a complete grasp of every nuance of mathematical
    logic does not show that I do not have a sufficient grasp of those
    aspects that I refer to.

    My next goal is to attain a complete understanding of all of the basic
    terminology of model theory. I had a key insight about model theory
    sometime in the last month that indicates that I must master its basic
    terminology.

    You present "semantic tautologies" based on FALSE definition and
    results that you can not prove.


It may seem that way from the POV of not understanding what I am saying.
The entire body of analytical truth is a set of semantic tautologies.
    That you are unfamiliar with the meaning of these terms is no actual
    rebuttal at all.


    If you don't understand from all instruction you have been given
    that you are wrong, you are just proved to be totally mentally
    incapable.

    If you want to claim that you are not a liar by reason of
    insanity, make that plea, but that just becomes an admission that
    you are a pathological liar, a liar because of a mental illness.


That you continue to believe that lies do not require an intention to
deceive after the above has been pointed out makes you willfully
    ignorant, yet still not a liar.


    But, by the definition I use, since it has been made clear to you
    that you are wrong, your continuing to spout words that have been
    proven incorrect makes YOU a pathological liar.


    No it only proves that you continue to have no grasp of what a semantic
    tautology could possibly be. Any expression that is verified as
    necessarily true entirely on the basis of its meaning is a semantic
    tautology.

    Except that isn't the meaning of a "Tautology".


    In logic, a formula is satisfiable if it is true under at least one interpretation, and thus a tautology is a formula whose negation is unsatisfiable. In other words, it cannot be false. It cannot be untrue.
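    A minimal sketch of the quoted definition for propositional formulas
    (plain Python; the helper name is_tautology is chosen only for this
    illustration): a formula is a tautology exactly when it is true under
    every interpretation of its atoms, equivalently when its negation is
    unsatisfiable.

        from itertools import product

        def is_tautology(formula, atoms):
            # True only if the formula holds under every assignment of the atoms.
            return all(formula(**dict(zip(atoms, values)))
                       for values in product([True, False], repeat=len(atoms)))

        print(is_tautology(lambda p: p or not p, ["p"]))      # True
        print(is_tautology(lambda p, q: p or q, ["p", "q"]))  # False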

    Right, but that means using the rules of the field, so only definition
    of that field.

    Thus, your "Meaning of the Words" needs to quote ONLY actual definitions
    that have been accepted in the field.


    https://en.wikipedia.org/wiki/Tautology_(logic)#:~:text=In%20logic%2C%20a%20formula%20is,are%20known%20formally%20as%20contradictions.

    What I actually mean is analytic truth, yet math people will have no
    clue about this because all of math is syntactic rather than semantic.
    https://plato.stanford.edu/entries/analytic-synthetic/

    I thought you previously were claiming that all of mathematics had to be analytic!

    And why do you call out an article about analytic-synthetic when you are
    making a distinction between semantic and syntactic? That seems to be a
    non-sequitur.

    And math is NOT just syntactic, as syntax can't express many of the
    properties used in math.


    Because of this I coined my own term [semantic tautology] as the most self-descriptive term that I could find as a place-holder for my notion.


    Right, so you don't understand how math works, so you make up terms that you
    can't actually define to fix it.


    The COMMON definition is "the saying of the same thing twice in
    different words, generally considered to be a fault of style (e.g.,
    they arrived one after the other in succession)".

    The meaning in the field of Logic is "In mathematical logic, a
    tautology (from Greek: ταυτολογία) is a formula or assertion that is
    true in every possible interpretation."

    So, neither of them point to the meaning of the words.


    Did I say that I am limiting the application of [semantic tautology] to words?

    You haven't given any other definition, so yes, by default you have.

    You can't use the classic semantic of logic, since you disagree with how
    that works, so you only have words. (Classic logic semantics lets you
    show the principle of explosion works, so you can't be using that).
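    For reference, the principle of explosion mentioned here is the
    derivation of an arbitrary proposition from a contradiction; a one-line
    rendering in Lean 4 (using only the core function absurd, nothing from
    this thread) is:

        -- From P together with not-P, any Q whatsoever follows.
        example (P Q : Prop) (h : P ∧ ¬P) : Q := absurd h.1 h.2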


    When dealing with logic a [semantic tautology] may simply be a
    tautology(logic). When dealing with formalized natural language it may
    be more clear to refer to it as a [semantic tautology] in that the
    semantic meanings of natural language expressions are formalized as
    axioms.

    In other words, you don't know what you are talking about and using word
    salad.


    If you are just making up words, you are admitting you have lost from
    the start.

    The problem is that word meanings, especially for "natural" language
    are too ill-defined to be used to form the basis of formal logic. You
    need to

    Not when natural language is formalized.
    Semantic Grammar and the Power of Computational Language

    But then you need to use that formalized version, and be in a system that
    uses it.


    https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

    So, you are admitting you don't know how formal logic works.

    Note, ChatGPT is proven to not understand how to get actually correct
    answer (or at least doesn't always apply those rules).


    work with FORMAL definitions, which become part of the Truth Makers of
    the system. At that point, either your semantic tautologies are real
    tautologies because they are always true in every model, or they are
    not tautologies.


    Cats are animals in the currently existing model of the world; cats may
    not exist in other possible worlds. [semantic tautology] applies within a
    model of the world.

    WHICH model of the world?

    (Note, you didn't use any UUID's, so you can't argue with them)

    Cats are also a type of tractor.

    It depends on WHICH model of (what part of) the world you are working.

    Also, it depends on actually being in a model of "the world" and not
    something else.

    You are just showing how little you understand about the basis of formal
    logic.



    Cats are animals is necessarily true even if no cats ever physically
    existed.

    Nope. If cats don't exist in the system, the statement is not
    necessarily true. For instance, the statement is NOT true in the
    system of the Natural Numbers.


    Cats are animals at the semantic level in the current model of the
    world. The model of the world has GUID placeholders for the notion of
    {cats} and {animals} and for every other unique sense meaning.

    No, you didn't use them, and the GUIDs only apply to the system that
    actually defines them.

    So in *A* model of the world, with the addition of the GUIDs on the
    terms, you can make that claim.

    There is not a unique "The" model of the world.



    Also, I am not "ignorant", since that means not having knowledge or
    awareness of something, but I do understand what you are saying and
    aware of your ideas, AND I POINT OUT YOUR ERRORS.

    Until you fully understand what a semantic tautology is and why it is
    necessarily true you remain sufficiently ignorant.

    As far as you have explained, it is an illogical concept based on
    undefined grounds. You refuse to state whether your "semantic" is "by
    the meaning of the words" at which point you need understand that either

    When I refer to {semantic} and don't restrict this to the meaning of
    words then it applies to every formal language expression, natural
    language expression and formalized natural language expression.

    So, you don't understand what you are talking about.

    SO, you admit that your system falls to the principle of explosion, as
    the classic definition of semantic in classic logic is enough to allow it.


    That you assume otherwise is your mistake.

    In other words, you don't know how to say things precisely,


    you are using the "natural" meaning and break the rules of formal
    logic, or you mean the formal meaning within the system, at which
    point what is the difference between your "semantic" connections as
    you define them and the classical meaning of semantic being related to
    showable by a chain of connections to the truth makers of the system.


    We don't need to formalize the notions of {cats} and {animals} to know
    that cats <are> animals according to the meaning of those terms.

    Unless they are tractors, or something else using the word.


    Note, if you take that later definition, then either you need to
    cripple the logic you allow or the implication operator and the
    principle of explosion both exist in your system. (If you don't define
    the implication operator as a base operation,

    I have already said quite a few times that I am probably replacing the implication operator with the Semantic Necessity operator: ⊨□

    But are you removing the AND and OR and NOT operator, if not, anything
    done by implication can be done with a combination of those.
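    A small check of that claim for material implication (plain Python; the
    helper name implies is chosen only for this sketch): defining p -> q as
    (not p) or q reproduces the usual truth table, so dropping the arrow as
    a primitive removes no expressive power.

        from itertools import product

        def implies(p, q):
            # material implication built from NOT and OR only
            return (not p) or q

        for p, q in product([True, False], repeat=2):
            print(f"p={p!s:5} q={q!s:5}  p->q = {implies(p, q)}")
        # Only the row p=True, q=False comes out False, matching the usual table.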

    I don't think you actually understand how the operator works.

    Also, can you actually DEFINE (not just show an example of) what this
    operator means?


    That you can't seem to remember key points that I make and repeat many
    times is very annoying.

    The fact that you never actually define things, and ignore my comments
    makes that your fault.

    I think the problem is you don't know how to do any of the things I ask
    about, so when I keep asking you to do them, you get annoyed because I
    keep showing how stupid you are.



    but do include "not", "and" and "or" as operation, it can just be
    defined in the system).


    YOU are the ignorant one, as you don't seem to understand enough to
    even comment about the rebuttals to your claims.

    THAT shows ignorance, and stupidity.




    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Sat Apr 29 11:51:14 2023
    XPost: sci.logic, comp.theory

    On 4/28/2023 10:05 PM, Richard Damon wrote:
    On 4/28/23 6:17 PM, olcott wrote:
    On 4/28/2023 4:21 PM, Richard Damon wrote:
    On 4/28/23 1:15 PM, olcott wrote:
    On 4/28/2023 11:41 AM, Richard Damon wrote:
    On 4/28/23 11:50 AM, olcott wrote:
    On 4/28/2023 10:44 AM, Richard Damon wrote:
    On 4/28/23 11:26 AM, olcott wrote:
    On 4/28/2023 10:14 AM, Richard Damon wrote:
    On 4/28/23 10:59 AM, olcott wrote:
    On 4/28/2023 6:40 AM, Richard Damon wrote:

    https://www.dictionary.com/browse/lie

    3 an inaccurate or untrue statement; falsehood:
       When I went to school, history books were full of lies,
       and I won't teach lies to kids.

    5 to express what is false; convey a false impression.


    It does not ALWAYS require actual knowledge that the
    statement is incorrect.


    Yes it does and you are stupid for saying otherwise.


    Then why do the definition I quoted say otherwise?

    That just shows you are the one that is stupid, and a liar.



    In this case you are proving to be stupid: (yet not a liar)

    1. Traditional Definition of Lying
    There is no universally accepted definition of lying to others. >>>>>>>> The dictionary definition of lying is “to make a false statement >>>>>>>> with the intention to deceive” (OED 1989) but there are numerous >>>>>>>> problems with this definition. It is both too narrow, since it >>>>>>>> requires falsity, and too broad, since it allows for lying about >>>>>>>> something other than what is being stated, and lying to someone >>>>>>>> who is believed to be listening in but who is not being addressed. >>>>>>>>
    The most widely accepted definition of lying is the following: >>>>>>>> “A lie is a statement made by one who does not believe it with >>>>>>>> the intention that someone else shall be led to believe it”
    (Isenberg 1973, 248) (cf. “[lying is] making a statement
    believed to be false, with the intention of getting another to >>>>>>>> accept it as true” (Primoratz 1984, 54n2)). This definition does >>>>>>>> not specify the addressee, however. It may be restated as follows: >>>>>>>>
    (L1) To lie =df to make a believed-false statement to another
    person with the intention that the other person believe that
    statement to be true.

    L1 is the traditional definition of lying. According to L1,
    there are at least four necessary conditions for lying.

    First, lying requires that a person make a statement (statement >>>>>>>> condition).

    Second, lying requires that the person believe the statement to >>>>>>>> be false; that is, lying requires that the statement be
    untruthful (untruthfulness condition).

    Third, lying requires that the untruthful statement be made to >>>>>>>> another person (addressee condition).

    Fourth, lying requires that the person intend that that other
    person believe the untruthful statement to be true (intention to >>>>>>>> deceive the addressee condition).

    https://plato.stanford.edu/entries/lying-definition/#TraDefLyi >>>>>>>>


    So, you are trying to use arguments to justify that you can say
    "false statements" and not be considered a liar.

    The fact that you seem to have KNOWN that the generally accepted
    truth differed from your ideas does not excuse you from claiming
    that you can say them as FACT, and not be a liar.


    When I say that an idea is a fact I mean that it is a semantic
    tautology. That you don't understand things well enough to verify
    that
    it is a semantic tautology does not even make my assertion false.


    So, you admit that you don't know that actually meaning of a FACT.


    I mean true in the absolute sense of the word true such as:
    2 + 3 = 5 is verified as necessarily true on the basis of its meaning. >>>>
    Semantic tautologies are the only kind of facts that are necessarily
    true in all possible worlds.

    The fact that your error has been pointed out an enormous number >>>>>>> of times, makes you blatant disregard for the actual truth, a
    suitable stand in for your own belief.


    The fact that no one has understood my semantic tautologies only proves
    that no one has understood my semantic tautologies. It does not even
    prove that my assertion is incorrect.

    No, the fact that you ACCEPT most existing logic is valid, but then
    try to change the rules at the far end, without understanding that
    you are accepting things your logic likely rejects, shows that you
    don't understand how logic actually works.


    That I do not have a complete grasp of every nuance of mathematical
    logic does not show that I do not have a sufficient grasp of those
    aspects that I refer to.

    My next goal is to attain a complete understanding of all of the basic >>>> terminology of model theory. I had a key insight about model theory
    sometime in the last month that indicates that I must master its basic >>>> terminology.

    You present "semantic tautologies" based on FALSE definition and
    results that you can not prove.


    It may seem that way from the POV of not understanding what I am
    saying.
    The entire body of analytical truth is a set of semantic tautologies.
    That you are unfamiliar with the meaning of these terms is no actual
    rebuttal at all.


    If you don't understand from all instruction you have been given >>>>>>> that you are wrong, you are just proved to be totally mentally
    incapable.

    If you want to claim that you are not a liar by reason of
    insanity, make that plea, but that just becomes an admission that >>>>>>> you are a pathological liar, a liar because of a mental illness. >>>>>>>

    That you continue to believe that lies do not require an intention to >>>>>> deceive after the above has been pointed out makes you willfully
    ignorant, yet still not a liar.


    But, by the definiton I use, since it has been made clear to you
    that you are wrong, but you continue to spout words that have been
    proven incorrect make YOU a pathological liar.


    No it only proves that you continue to have no grasp of what a semantic >>>> tautology could possibly be. Any expression that is verified as
    necessarily true entirely on the basis of its meaning is a semantic
    tautology.

    Except that isn't the meaning of a "Tautology".


    In logic, a formula is satisfiable if it is true under at least one
    interpretation, and thus a tautology is a formula whose negation is
    unsatisfiable. In other words, it cannot be false. It cannot be untrue.

    Right, but that means using the rules of the field, so only definition
    of that field.


    I could augment this field yet this might not be required for
    mathematical expressions. It might be the case that ordinary model
    theory will work just fine.

    Non-standard models of arithmetic seem a little too strange.

    Thus, your "Meaning of the Words" needs to quote ONLY actual definitions
    that have been accepted in the field.

    There have not been any accepted definitions of formalized natural
    language in the field of mathematics. The closest thing in mathematics
    is the categorical propositions. In the field of formalized natural
    language different approaches are used.


    https://en.wikipedia.org/wiki/Tautology_(logic)#:~:text=In%20logic%2C%20a%20formula%20is,are%20known%20formally%20as%20contradictions.

    What I actually mean is analytic truth, yet math people will have no
    clue about this because all of math is syntactic rather than semantic.
    https://plato.stanford.edu/entries/analytic-synthetic/

    I thought you previously were claiming that all of mathematics had to be analytic!


    This is probably beyond your knowledge of philosophy.
    The key philosopher in the field, Quine, seems to be a
    blithering idiot that can't even understand that bachelors
    are unmarried. I am referring to the logical positivist
    view of the analytic / synthetic distinction.

    *Logical positivist definitions*

    analytic proposition: a proposition whose truth depends solely on the
    meaning of its terms

    analytic proposition: a proposition that is true (or false) by definition

    analytic proposition: a proposition that is made true (or false) solely
    by the conventions of language

    https://en.wikipedia.org/wiki/Analytic%E2%80%93synthetic_distinction

    And why do you call out an article about analytic-synthetic when you are
    making a distinction between semantic and syntactic? That seems to be a
    non-sequitur.


    This again is your lack of knowledge of philosophy: analytic <is>
    semantic.

    And math is NOT just syntactic, as syntax can't express many of the properties used in math.


    I am in the process of learning much more about model theory, it seems
    to have some weird quirks.


    Because of this I coined my own term [semantic tautology] as the most
    self-descriptive term that I could find as a place-holder for my notion.


    Right, so you don't understand how math works, so you make up terms that you can't actually define to fix it.


    The analytic/synthetic distinction is from philosophy; likewise, the
    formalization of natural language is not within mathematics.


    The COMMON definition is "the saying of the same thing twice in
    different words, generally considered to be a fault of style (e.g.,
    they arrived one after the other in succession)".

    The meaning in the field of Logic is "In mathematical logic, a
    tautology (from Greek: ταυτολογία) is a formula or assertion that is
    true in every possible interpretation."

    So, neither of them point to the meaning of the words.


    Did I say that I am limiting the application [semantic tautology] to
    words?

    You haven't given any other definition, so yes, by default you have.


    That may seem that way to someone not very familiar with the term.

    You can't use the classic semantic of logic, since you disagree with how
    that works, so you only have words. (Classic logic semantics lets you
    show the principle of explosion works, so you can't be using that).


    I am starting with the syllogism as my logical basis; it makes sure to
    anchor the meaning of its terms in defined sets. This may end up being
    very much like model theory.
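    A toy sketch of anchoring terms in defined sets (plain Python with
    hypothetical example sets, not the system being described): a
    Barbara-form syllogism then reduces to transitivity of the subset
    relation.

        # All cats are animals; all animals are mortal; so all cats are mortal.
        cats = {"felix", "tom"}
        animals = cats | {"rex", "tweety"}
        mortals = animals | {"socrates"}

        if cats <= animals and animals <= mortals:  # both premises hold
            assert cats <= mortals                  # conclusion follows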


    When dealing with logic a [semantic tautology] may simply be a
    tautology(logic). When dealing with formalized natural language it may
    be more clear to refer to it as as [semantic tautology] in that the
    semantic meaning of natural language expression are formalized as
    axioms.

    In other words, you don't know what you are talking about and using word salad.

    I don't know enough about what I am talking about when referring to
    model theory. My knowledge of formalized semantics comes from Rudolf
    Carnap's (1952) meaning postulates. These same ideas can be applied to
    math.
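    For reference, the standard example from Carnap's 1952 paper is a
    meaning postulate of roughly the form

        ∀x (Bachelor(x) → ¬Married(x))

    i.e. an axiom that fixes part of the meaning of the non-logical terms
    rather than describing contingent facts about the world.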



    If you are just making up words, you are admitting you have lost from
    the start.

    The problem is that word meanings, especially for "natural" language
    are to ill defined to be used to form the basis of formal logic. You
    need to

    Not when natural language is formalized.
    Semantic Grammar and the Power of Computational Language

    But then you need to use that formalize version, and be in a system that
    uses it.

    Not at all, no human can do this. Stephen Wolfram is referring to what
    large language models are doing. These models computed literally one
    billion years' worth of human research in a short amount of time. I am
    referring to the 60 Minutes story.


    https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

    So, you are admitting you don't know how formal logic works.


    I am not saying anything like that. It is more along the lines that you
    do not know enough about formalized natural language.

    Note, ChatGPT is proven to not understand how to get actually correct
    answer (or at least doesn't always apply those rules).


    It does deduction stochastically.


    work with FORMAL definitions, which become part of the Truth Makers
    of the system. At that point, either you semantic tautologies are
    real tautologies because they are alway true in every model, or they
    are not tautologies.


    Cats are animals in the currently existing model of the world, Cats may
    not exist in other possible worlds. [semantic tautology] applies with
    a model of the world.

    WHICH model of the world?


    the currently existing model of the world
    the currently existing model of the world
    the currently existing model of the world

    (Note, you didn't use any UUID's, so you can't argue with them)


    I don't need to use GUID's myself to point out that they can be used in
    place of ambiguous finite strings that have many subtly different sense meanings. A "cat" could be an abbreviation for a brand of earth moving equipment.

    Cats are also a type of tractor.

    It depends on WHICH model of (what part of) the world you are working.


    I am assuming that the complete model of the current world already
    exists as a type hierarchy of GUIDs that are mapped to equivalent
    English words.
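    A minimal sketch of such a hierarchy (plain Python with hypothetical
    identifiers and helper names, not CYC itself): each sense meaning gets
    its own GUID node, and an is-a lookup walks the hierarchy, so the
    animal sense of "cat" and the tractor sense never collide.

        import uuid

        ANIMAL = uuid.uuid4()
        TRACTOR = uuid.uuid4()
        CAT_ANIMAL = uuid.uuid4()    # the animal sense of "cat"
        CAT_TRACTOR = uuid.uuid4()   # the earth-moving-equipment sense of "Cat"

        is_a = {CAT_ANIMAL: ANIMAL, CAT_TRACTOR: TRACTOR}  # child -> parent
        word_senses = {"cat": [CAT_ANIMAL, CAT_TRACTOR]}   # one word, two senses

        def subsumed_by(node, ancestor):
            while node is not None:      # walk the is-a chain upward
                if node == ancestor:
                    return True
                node = is_a.get(node)
            return False

        print(subsumed_by(CAT_ANIMAL, ANIMAL))   # True
        print(subsumed_by(CAT_TRACTOR, ANIMAL))  # False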

    Also, it depends on actually being in a model of "the world" and not somethint else.


    The "cats" are animals is an aspect of the current model of the world in English. That "cats" are also the abbreviation of a brand of Earth
    moving equipment is mapped from a different GUID.

    You are just showing how little you understand about the basis of formal logic.


    No, you are showing how little you understand of knowledge ontologies.



    Cats are animals is necessarily true even if no cats ever physically
    existed.

    Nope. If cats don't exist in the system, the statement is not
    necessarily true. For instance, the statement is NOT true in the
    system of the Natural Numbers.


    Cats are animals at the semantic level in the current model of the
    world. The model of the world has GUID placeholders for the notion of
    {cats} and {animals} and for every other unique sense meaning.

    No, you didn't use them, and the GUIDs only apply to the system that
    actually defines them.


    I have said that I am talking about knowledge ontology type hierarchies
    about 5000 times, none recently.

    So in *A* model of the world, with the addition of the GUIDs on the
    terms, you can make that claim.

    The is not a unique "The" model of the world.

    I am only referring to the current world of all possible worlds.
    Possible worlds is from philosophy so you probably won't know about it.

    You continue to conflate your own lack of knowledge of philosophy for my
    lack of knowledge of logic. My knowledge of logic is pretty good with
    the exception of model theory.


    Also, I am not "ignorant", since that means not having knowledge or
    awareness of something, but I do understand what you are saying and
    aware of your ideas, AND I POINT OUT YOUR ERRORS.

    Until you fully understand what a semantic tautology is and why it is
    necessarily true you remain sufficiently ignorant.

    As far as you have explained, it is an illogical concept based on
    undefined grounds. You refuse to state whether your "semantic" is "by
    the meaning of the words" at which point you need understand that either

    When I refer to {semantic} and don't restrict this to the meaning of
    words then it applies to every formal language expression, natural
    language expression and formalized natural language expression.

    So, you don't understand what you are talking about.


    Again this is your ignorance and not mine.

    Semantics (from Ancient Greek σημαντικός (sēmantikós) 'significant')[a][1] is the study of reference, meaning, or truth. The
    term can be used to refer to subfields of several distinct disciplines,
    including philosophy, linguistics and computer science.
    https://en.wikipedia.org/wiki/Semantics

    SO, you admit that you system falls to the principle of explosion, as
    the classic definition of semantic in classic logic is enough to allow it.


    I am not sure. I have to learn more model theory first.
    I am sure that no semantic meaning can be correctly
    derived on the basis of a contradiction or a falsehood.


    That you assume otherwise is your mistake.

    In other words, you don't know how to say things precisly,

    A notable feature of relevance logics is that they are paraconsistent
    logics: the existence of a contradiction will not cause "explosion".
    This follows from the fact that a conditional with a contradictory
    antecedent that does not share any propositional or predicate letters
    with the consequent cannot be true (or derivable).
    https://en.wikipedia.org/wiki/Relevance_logic


    you are using the "natural" meaning and break the rules of formal
    logic, or you mean the formal meaning within the system, at which
    point what is the difference between your "semantic" connections as
    you define them and the classical meaning of semantic being related
    to showable by a chain of connections to the truth makers of the system. >>>

    We don't need to formalize the notions of {cats} and {animals} to know
    that cats <are> animals according to the meaning of those terms.

    Unless they are tractors, or something else using the word.
    That is why I (and the CYC project) use GUIDs.



    Note, if you take that later definition, then either you need to
    cripple the logic you allow or the implication operator and the
    principle of explosion both exist in your system. (If you don't
    define the implication operator as a base operation,

    I have already said quite a few times that I am probably replacing the
    implication operator with the Semantic Necessity operator: ⊨□

    But are you removing the AND and OR and NOT operator,

    I never said anything like that, where do you get this stuff from?

    if not, anything
    done by implication can be done with a combination of those.


    Propositional logic has been adapted so that there is some semantic
    connection between its terms. Relevance logic may be sufficient.

    I am examining these things at the foundational basic architecture level;
    you mistake this for a lack of understanding of the details. All of
    the details have not been fully reverse engineered yet.

    I don't think you actually understand how the operator works.


    Its truth table tells me everything that I need to know.

    Also, can you actually DEFINE (not just show an exampe) of what this
    operator defines?

    I can't do that because you do not have a sufficient understanding of the
    term semantic, in that you assumed it only applies to the meaning of
    words.


    That you can't seem to remember key points that I make and repeat many
    times is very annoying.

    The fact that you never actually define things, and ignore my comments
    make that your fault.

    I think the problem is you don't know how to do any of the things I ask about, so when I keep asking you to do them, you get annoyed because I
    keep showing how stupid you are.


    I am mostly ignorant of model theory and am actually correcting that.
    You seem mostly ignorant of philosophy thus cannot understand the
    philosophy of logic.


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sat Apr 29 14:25:07 2023
    XPost: sci.logic, comp.theory

    On 4/29/23 12:51 PM, olcott wrote:
    On 4/28/2023 10:05 PM, Richard Damon wrote:
    On 4/28/23 6:17 PM, olcott wrote:
    On 4/28/2023 4:21 PM, Richard Damon wrote:
    On 4/28/23 1:15 PM, olcott wrote:
    On 4/28/2023 11:41 AM, Richard Damon wrote:
    On 4/28/23 11:50 AM, olcott wrote:
    On 4/28/2023 10:44 AM, Richard Damon wrote:
    On 4/28/23 11:26 AM, olcott wrote:
    On 4/28/2023 10:14 AM, Richard Damon wrote:
    On 4/28/23 10:59 AM, olcott wrote:
    On 4/28/2023 6:40 AM, Richard Damon wrote:

    https://www.dictionary.com/browse/lie

    3 an inaccurate or untrue statement; falsehood:
       When I went to school, history books were full of lies,
       and I won't teach lies to kids.

    5 to express what is false; convey a false impression. >>>>>>>>>>>>

    It does not ALWAYS require actual knowledge that the
    statement is incorrect.


    Yes it does and you are stupid for saying otherwise.


    Then why do the definition I quoted say otherwise?

    That just shows you are the one that is stupid, and a liar. >>>>>>>>>>


    In this case you are proving to be stupid: (yet not a liar)

    1. Traditional Definition of Lying
    There is no universally accepted definition of lying to others. >>>>>>>>> The dictionary definition of lying is “to make a false
    statement with the intention to deceive” (OED 1989) but there >>>>>>>>> are numerous problems with this definition. It is both too
    narrow, since it requires falsity, and too broad, since it
    allows for lying about something other than what is being
    stated, and lying to someone who is believed to be listening in >>>>>>>>> but who is not being addressed.

    The most widely accepted definition of lying is the following: >>>>>>>>> “A lie is a statement made by one who does not believe it with >>>>>>>>> the intention that someone else shall be led to believe it” >>>>>>>>> (Isenberg 1973, 248) (cf. “[lying is] making a statement
    believed to be false, with the intention of getting another to >>>>>>>>> accept it as true” (Primoratz 1984, 54n2)). This definition >>>>>>>>> does not specify the addressee, however. It may be restated as >>>>>>>>> follows:

    (L1) To lie =df to make a believed-false statement to another >>>>>>>>> person with the intention that the other person believe that >>>>>>>>> statement to be true.

    L1 is the traditional definition of lying. According to L1,
    there are at least four necessary conditions for lying.

    First, lying requires that a person make a statement (statement >>>>>>>>> condition).

    Second, lying requires that the person believe the statement to >>>>>>>>> be false; that is, lying requires that the statement be
    untruthful (untruthfulness condition).

    Third, lying requires that the untruthful statement be made to >>>>>>>>> another person (addressee condition).

    Fourth, lying requires that the person intend that that other >>>>>>>>> person believe the untruthful statement to be true (intention >>>>>>>>> to deceive the addressee condition).

    https://plato.stanford.edu/entries/lying-definition/#TraDefLyi >>>>>>>>>


    So, you are trying to use arguments to justify that you can say >>>>>>>> "false statements" and not be considered a liar.

    The fact that you seem to have KNOWN that the generally accept >>>>>>>> truth differed from your ideas does not excuse you from claiming >>>>>>>> that you can say them as FACT, and not be a liar.


    When I say that an idea is a fact I mean that it is a semantic
    tautology. That you don't understand things well enough to verify >>>>>>> that
    it is a semantic tautology does not even make my assertion false. >>>>>>>

    So, you admit that you don't know that actually meaning of a FACT. >>>>>>

    I mean true in the absolute sense of the word true such as:
    2 + 3 = 5 is verified as necessarily true on the basis of its meaning. >>>>>
    Semantic tautologies are the only kind of facts that are necessarily >>>>> true in all possible worlds.

    The fact that your error has been pointed out an enormous number >>>>>>>> of times, makes you blatant disregard for the actual truth, a
    suitable stand in for your own belief.


    That fact that no one has understood my semantic tautologies only >>>>>>> proves
    that no one has understood my semantic tautologies. It does not even >>>>>>> prove that my assertion is incorrect.

    No, the fact that you ACCEPT most existing logic is valid, but
    then try to change the rules at the far end, without understanding >>>>>> that you are accepting things your logic likely rejects, shows
    that you don't understand how logic actually works.


    That I do not have a complete grasp of every nuance of mathematical
    logic does not show that I do not have a sufficient grasp of those
    aspects that I refer to.

    My next goal is to attain a complete understanding of all of the basic >>>>> terminology of model theory. I had a key insight about model theory
    sometime in the last month that indicates that I must master its basic >>>>> terminology.

    You present "semantic tautologies" based on FALSE definition and
    results that you can not prove.


    It may seem that way from the POV of not understanding what I am
    saying.
    The entire body of analytical truth is a set of semantic tautologies. >>>>> That you are unfamiliar with the meaning of these terms is no actual >>>>> rebuttal at all.


    If you don't understand from all instruction you have been given >>>>>>>> that you are wrong, you are just proved to be totally mentally >>>>>>>> incapable.

    If you want to claim that you are not a liar by reason of
    insanity, make that plea, but that just becomes an admission
    that you are a pathological liar, a liar because of a mental
    illness.


    That you continue to believe that lies do not require an
    intention to
    deceive after the above has been pointed out makes you willfully >>>>>>> ignorant, yet still not a liar.


    But, by the definiton I use, since it has been made clear to you
    that you are wrong, but you continue to spout words that have been >>>>>> proven incorrect make YOU a pathological liar.


    No it only proves that you continue to have no grasp of what a
    semantic
    tautology could possibly be. Any expression that is verified as
    necessarily true entirely on the basis of its meaning is a semantic
    tautology.

    Except that isn't the meaning of a "Tautology".


    In logic, a formula is satisfiable if it is true under at least one
    interpretation, and thus a tautology is a formula whose negation is
    unsatisfiable. In other words, it cannot be false. It cannot be untrue.

    Right, but that means using the rules of the field, so only definition
    of that field.


    I could augment this field yet this might not be required for
    mathematical expressions. It might be the case that ordinary model
    theory will work just fine.

    You can add axioms to a field to make a "meta" or an extension to the field.

    You can not "restrict" a field and still be related to it, if you do
    that, you need to start all over.

    Since you want to eliminate some logic procedure as being valid, you
    need to start back at the beginning, which you don't seem to understand,
    which makes all your arguments moot.

    And, you can't "Augment" a field and then use a definition that doesn't understand about the augmentation to show anything about the original field.


    Non-standard models of arithmetic seems a little too strange.

    Why do you need non-standard models of arithmetic?


    Thus, your "Meaning of the Words" needs to quote ONLY actual
    definitions that have been accepted in the field.

    There have not been any accepted definitions of formalized natural
    language in the field of mathematics. The closest thing in mathematics
    is the categorical propositions. In the field of formalized natural
    language different approaches are used.

    So? You are making claims about things IN the field of mathematics.
    Since you can't show your "rules" work in it, you are just making
    word/symbol salad.

    Note, "Categorical Propositions" are much simpler than the logic used in
    actual mathematics. From everything I have seen, you are just ignorant
    of how Formal Logic actually works. Your "Formalized Natural Language"
    seems much more to be just in the hand-wavy Philosophy side of things.



    https://en.wikipedia.org/wiki/Tautology_(logic)#:~:text=In%20logic%2C%20a%20formula%20is,are%20known%20formally%20as%20contradictions.

    What I actually mean is analytic truth, yet math people will have no
    clue about this because all of math is syntactic rather than semantic.
    https://plato.stanford.edu/entries/analytic-synthetic/

    I thought you previously were claiming that all of mathematics had to
    be analytic!


    This is probably beyond your knowledge of philosophy.
    The key philosopher in the field Quine seems to be a
    blithering idiot that can't even understand that bachelors
    are unmarried. I am referring to the logical positivist
    view of the analytic / synthetic distinction.

    Again, hand-wavy Philosophy, not real Formal Logic.


    *Logical positivist definitions*

    analytic proposition: a proposition whose truth depends solely on the
    meaning of its terms

    analytic proposition: a proposition that is true (or false) by definition

    analytic proposition: a proposition that is made true (or false) solely
    by the conventions of language

    https://en.wikipedia.org/wiki/Analytic%E2%80%93synthetic_distinction

    And that form of logic can't prove the Pythagorean Theorem because it
    doesn't have the needed tools.


    And why do you call out an article about analytic-synthetic when you
    are making a distintion between semantic and syntactic? That seems to
    be a non-sequitor.


    This again is your lack of knowledge of philosophy analytic <is>
    semantic.




    And math is NOT just syntactic, as syntax can't express many of the
    properties used in math.


    I am in the process of learning much more about model theory, it seems
    to have some weird quirks.

    So, you are just NOW looking to learn things about which you have been
    making bold claims for years?




    Because of this I coined my own term [semantic tautology] as the most
    self-descriptive term that I could find as a place-holder for my notion.


    Right, do don't understand how math works, so you make up terms that
    you can't actually define to fix it.


    The analytic synthetic distinction is from philosophy as well as the formalization of natural language is not within mathematics.

    So, why do you bring something not related to the fields of logic that
    you claim to be working in?

    Halting Problem, Incompleteness, these are not just "philosophical"
    arguments, but proofs in well-defined formal systems, which it appears
    you totally do not understand.



    The COMMON definition is "the saying of the same thing twice in
    different words, generally considered to be a fault of style (e.g.,
    they arrived one after the other in succession)".

    The Meaning in the fielc of Logic is "In mathematical logic, a
    tautology (from Greek: ταυτολογία) is a formula or assertion that is
    true in every possible interpretation."

    So, neither of them point to the meaning of the words.


    Did I say that I am limiting the application [semantic tautology] to
    words?

    You haven't given any other definition, so yes, by default you have.


    That may seem that way to someone not very familiar with the term.

    So DEFINE IT.

    Your failure to do so is PROOF that you don't actually know what it means.


    You can't use the classic semantic of logic, since you disagree with
    how that works, so you only have words. (Classic logic semantics lets
    you show the principle of explosion works, so you can't be using that).


    I am starting with the syllogism as my logical basis, it makes sure to
    anchor the meaning of its terms in defined sets. This may end up being
    very much like model theory.

    Yes, you need to start at the beginning.



    When dealing with logic a [semantic tautology] may simply be a
    tautology(logic). When dealing with formalized natural language it may
    be more clear to refer to it as as [semantic tautology] in that the
    semantic meaning of natural language expression are formalized as
    axioms.

    In other words, you don't know what you are talking about and using
    word salad.

    I don't know enough about what I am talking about when referring to
    model theory. My knowledge of formalized semantics comes from Rudolf
    Carnap's (1952) meaning postulates. These same idea can be applied to
    math.

    IF you admit you don't know enough about it, how can you be so positive
    you have things that depend on it right?




    If you are just making up words, you are admitting you have lost
    from the start.

    The problem is that word meanings, especially for "natural" language
    are to ill defined to be used to form the basis of formal logic. You
    need to

    Not when natural language is formalized.
    Semantic Grammar and the Power of Computational Language

    But then you need to use that formalize version, and be in a system
    that uses it.

    Not at all, no human can do this. Steven Wolfram is referring to what
    large language models are doing. These models computed literally one
    billion years worth of human research in a short amount of time. I am referring to the 60 minutes story.

    So, you don't understand the nature of Formal Systems.



    https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

    So, you are admitting you don't know how formal logic works.


    I am not saying anything like that. It is more along the lines that you
    do not know enough about formalized natural language.

    And you don't understand how Formal Logic works. Since you are making
    claims about Formal Logic, that shows your error.


    Note, ChatGPT is proven to not understand how to get actually correct
    answer (or at least doesn't always apply those rules).


    It does deduction stochastically.

    Nope. It does NOT use "formally correct logic" in its processing, as
    such, it is quite able to come up with incorrect answers, as it has been
    well documented as doing.



    work with FORMAL definitions, which become part of the Truth Makers
    of the system. At that point, either you semantic tautologies are
    real tautologies because they are alway true in every model, or they
    are not tautologies.


    Cats are animals in the currently existing model of the world, Cats may
    not exist in other possible worlds. [semantic tautology] applies with
    a model of the world.

    WHICH model of the world?


    the currently existing model of the world
    the currently existing model of the world
    the currently existing model of the world

    So, you think there is only one model?

    You are just proving you don't understand logic.


    (Note, you didn't use any UUID's, so you can't argue with them)


    I don't need to use GUID's myself to point out that they can be used in
    place of ambiguous finite strings that have many subtly different sense meanings. A "cat" could be an abbreviation for a brand of earth moving equipment.

    But then you can't actually make the claim that the thing called "a cat"
    needs to be the thing you are thinking of and not the machine.


    Cats are also a type of tractor.

    It depends on WHICH model of (what part of) the world you are working.


    I am assuming that the complete model of the current world already
    exists as a type hierarchy of GUIDs that are mapped to equivalent
    English words.

    So, you assume something that isn't true, thus your logic is unsound.


    Also, it depends on actually being in a model of "the world" and not
    somethint else.


    The "cats" are animals is an aspect of the current model of the world in English. That "cats" are also the abbreviation of a brand of Earth
    moving equipment is mapped from a different GUID.

    But "cats" means many different things. Note "cats" isn't an
    abbreviation, but a "slang" term, note it has a number of other
    different meanings. You can't


    You are just showing how little you understand about the basis of
    formal logic.


    No, you are showing how little you understand of knowledge ontologies.

    Nope, you are, given that you think there is just one model.

    There are MANY ways to organize the knowledge, so there isn't just one
    model to use.




    Cats are animals is necessarily true even if no cats ever
    physically existed.

    Nope. If cats don't exist in the system, the statement is not
    necessarily true. For instance, the statement is NOT true in the
    system of the Natural Numbers.


    Cats are animals at the semantic level in the current model of the
    world. The model of the world has GUID placeholders for the notion of
    {cats} and {animals} and for every other unique sense meaning.

    No, you didn't use them, and the GUIDs only apply to the system that
    actually defines them.


    I have said that I am talking about knowledge ontology type hierarchies
    about 5000 times, none recently.

    Then why do you poke your nose into things not based on it?

    Also, why do you think there is just one?


    So in *A* model of the world, with the addition of the GUIDs on the
    terms, you can make that claim.

    The is not a unique "The" model of the world.

    I am only referring to the current worlds of all possible worlds.
    Possible worlds is from philosophy so you probably won't know about it.

    I'm not talking about multiple worlds, but multiple MODELS.


    You continue to conflate your own lack of knowledge of philosophy for my
    lack of knowledge of logic. My knowledge of logic is pretty good with
    the exception of model theory.

    Then why do you make so many errors? You make just about every error
    possible.



    Also, I am not "ignorant", since that means not having knowledge
    or awareness of something, but I do understand what you are saying >>>>>> and aware of your ideas, AND I POINT OUT YOUR ERRORS.

    Until you fully understand what a semantic tautology is and why it is >>>>> necessarily true you remain sufficiently ignorant.

    As far as you have explained, it is an illogical concept based on
    undefined grounds. You refuse to state whether your "semantic" is
    "by the meaning of the words" at which point you need understand
    that either

    When I refer to {semantic} and don't restrict this to the meaning of
    words then it applies to every formal language expression, natural
    language expression and formalized natural language expression.

    So, you don't understand what you are talking about.


    Again this is your ignorance and not mine.

    Semantics (from Ancient Greek σημαντικός (sēmantikós) 'significant')[a][1] is the study of reference, meaning, or truth. The
    term can be used to refer to subfields of several distinct disciplines, including philosophy, linguistics and computer science. https://en.wikipedia.org/wiki/Semantics

    So, when you say there must be a "semantic relationship" what do you
    mean? Note in the article how broad "semantics" is described.

    That is like saying there needs to be an "Arithmetic Relationship"
    between two numbers; there can be MANY different ways to build an
    Arithmetic Relationship, just like there are many different things that
    can be looked at as "semantic".


    SO, you admit that you system falls to the principle of explosion, as
    the classic definition of semantic in classic logic is enough to allow
    it.


    I am not sure. I have to learn more model theory first.
    I am sure that no semantic meaning can be correctly
    derived on the basis of a contradiction or a falsehood.

    The issue is that a system that supports a contradiction has had all of
    its semantics canceled out of it. In the presence of that level of
    error, "meaning" is lost.



    That you assume otherwise is your mistake.

    In other words, you don't know how to say things precisly,

    A notable feature of relevance logics is that they are paraconsistent
    logics: the existence of a contradiction will not cause "explosion".
    This follows from the fact that a conditional with a contradictory
    antecedent that does not share any propositional or predicate letters
    with the consequent cannot be true (or derivable). https://en.wikipedia.org/wiki/Relevance_logic

    And you understand that Relevance Logic is weaker than "classical logic"?



    you are using the "natural" meaning and break the rules of formal
    logic, or you mean the formal meaning within the system, at which
    point what is the difference between your "semantic" connections as
    you define them and the classical meaning of semantic being related
    to showable by a chain of connections to the truth makers of the
    system.


    We don't need to formalize the notions of {cats} and {animals} to know
    that cats <are> animals according to the meaning of those terms.

    Unless they are tractors, or something else using the word.
    That is why I (and the CYC project) use GUIDs.

    So USE the GUIDs (and state the source of the definition of them) and
    admit you are only working in the system that has defined those GUIDs.




    Note, if you take that later definition, then either you need to
    cripple the logic you allow or the implication operator and the
    principle of explosion both exist in your system. (If you don't
    define the implication operator as a base operation,

    I have already said quite a few times that I am probably replacing
    the implication operator with the Semantic Necessity operator: ⊨□

    But are you removing the AND and OR and NOT operator,

    I never said anything like that, where do you get this stuff from?

    I am asking you that question. Are you removing AND, OR, and NOT from
    your logic?

    (Note, this is one thing that your "Relevance Logic" does in part; its
    "atoms" cannot use logical connectives.)


    if not, anything done by implication can be done with a combination of
    those.


    Propositional logic has been adapted so that there is some semantic connection between its terms. Relevance logic may be sufficient.

    I am examining these things at the foundational basic architecture level
    you mistake this for a lack of understanding of the details. All of the
    the details have not been fully reverse engineered yet.

    So, you don't know how, or IF, your logic system can work, but you still
    claim that it disproves all sorts of things that you also don't understand.

    Seems typical for you.



    I don't think you actually understand how the operator works.


    Its truth table tells me everything that I need to know.

    It should, but you don't seem to understand what a truth table actually
    tells you, or maybe it is the logic about how to use them.

    Perhaps your problems distinguishing some from all are a foundation for your errors.


    Also, can you actually DEFINE (not just show an exampe) of what this
    operator defines?

    I can't do that because you do not have a sufficient understand of the
    term semantic in the you assumed it only applies to the meaning of
    words.

    So, DEFINE it. You seem to like using terms you can't actually formally
    define.



    That you can't seem to remember key points that I make and repeat many
    times is very annoying.

    The fact that you never actually define things, and ignore my comments
    make that your fault.

    I think the problem is you don't know how to do any of the things I
    ask about, so when I keep asking you to do them, you get annoyed
    because I keep showing how stupid you are.


    I am mostly ignorant of model theory and am actually correcting that.
    You seem mostly ignorant of philosophy thus cannot understand the
    philosophy of logic.


    I will admit that some of the deeper philosophies are not in my
    wheelhouse of knowledge, but I do understand a lot of the basics.

    The key thing YOU seem to miss is that FORMAL LOGIC is fairly distinct
    from the Philosophies. Philosophy likes to argue about what would be the
    best set of rules. Formal Logic says we will establish the rules we will
    work with and see where it goes.

    Note, this means you can't argue that a Formal Logic has "incorrect"
    rules, as it has the rules that it has. You can try to argue that a
    given Formal Logic has rules that have problems (like when Naive Set
    theory collapsed under its paradox).

    So, you can TRY to create an alternate Formal Logic system built on your
    new rules of logic, but you will need to do all that work to establish
    what it can actually do.

    I note that you have posted a number of papers claiming errors in
    classical proofs, where your own paper is filled with errors due to you
    not understanding what you are talking about, but I have yet to see you
    posting an actual formal paper about even the basics of this new logic
    system.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Mon May 1 17:01:24 2023
    XPost: sci.logic, comp.theory, sci.math

    On 4/28/2023 10:05 PM, Richard Damon wrote:
    On 4/28/23 6:17 PM, olcott wrote:
    On 4/28/2023 4:21 PM, Richard Damon wrote:
    On 4/28/23 1:15 PM, olcott wrote:
    On 4/28/2023 11:41 AM, Richard Damon wrote:
    On 4/28/23 11:50 AM, olcott wrote:
    On 4/28/2023 10:44 AM, Richard Damon wrote:
    On 4/28/23 11:26 AM, olcott wrote:
    On 4/28/2023 10:14 AM, Richard Damon wrote:
    On 4/28/23 10:59 AM, olcott wrote:
    On 4/28/2023 6:40 AM, Richard Damon wrote:

    https://www.dictionary.com/browse/lie

    3 an inaccurate or untrue statement; falsehood:
       When I went to school, history books were full of lies,
       and I won't teach lies to kids.

    5 to express what is false; convey a false impression.


    It does not ALWAYS require actual knowledge that the
    statement is incorrect.


    Yes it does and you are stupid for saying otherwise.


    Then why do the definition I quoted say otherwise?

    That just shows you are the one that is stupid, and a liar.



    In this case you are proving to be stupid: (yet not a liar)

    1. Traditional Definition of Lying
    There is no universally accepted definition of lying to others.
    The dictionary definition of lying is “to make a false statement
    with the intention to deceive” (OED 1989) but there are numerous
    problems with this definition. It is both too narrow, since it
    requires falsity, and too broad, since it allows for lying about
    something other than what is being stated, and lying to someone
    who is believed to be listening in but who is not being addressed.

    The most widely accepted definition of lying is the following:
    “A lie is a statement made by one who does not believe it with
    the intention that someone else shall be led to believe it”
    (Isenberg 1973, 248) (cf. “[lying is] making a statement
    believed to be false, with the intention of getting another to
    accept it as true” (Primoratz 1984, 54n2)). This definition does
    not specify the addressee, however. It may be restated as follows:

    (L1) To lie =df to make a believed-false statement to another
    person with the intention that the other person believe that
    statement to be true.

    L1 is the traditional definition of lying. According to L1,
    there are at least four necessary conditions for lying.

    First, lying requires that a person make a statement (statement
    condition).

    Second, lying requires that the person believe the statement to
    be false; that is, lying requires that the statement be
    untruthful (untruthfulness condition).

    Third, lying requires that the untruthful statement be made to
    another person (addressee condition).

    Fourth, lying requires that the person intend that that other
    person believe the untruthful statement to be true (intention to
    deceive the addressee condition).

    https://plato.stanford.edu/entries/lying-definition/#TraDefLyi


    So, you are trying to use arguments to justify that you can say
    "false statements" and not be considered a liar.

    The fact that you seem to have KNOWN that the generally accepted
    truth differed from your ideas does not excuse you from claiming
    that you can say them as FACT, and not be a liar.


    When I say that an idea is a fact I mean that it is a semantic
    tautology. That you don't understand things well enough to verify that
    it is a semantic tautology does not even make my assertion false.


    So, you admit that you don't know the actual meaning of a FACT.


    I mean true in the absolute sense of the word, such as:
    2 + 3 = 5 is verified as necessarily true on the basis of its meaning.
    Semantic tautologies are the only kind of facts that are necessarily
    true in all possible worlds.
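
    A side note on "verified as necessarily true on the basis of its meaning":
    in a proof assistant such as Lean 4 the arithmetic fact closes by
    definitional computation alone, with no appeal to any model of the
    physical world. A minimal sketch, independent of either poster's system:

        -- 2 + 3 = 5 holds by unfolding the definition of addition on Nat.
        example : 2 + 3 = 5 := rfl

        -- The same fact checked by the decision procedure for decidable propositions.
        example : 2 + 3 = 5 := by decide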

    The fact that your error has been pointed out an enormous number
    of times makes your blatant disregard for the actual truth a
    suitable stand-in for your own belief.


    The fact that no one has understood my semantic tautologies only
    proves that no one has understood my semantic tautologies. It does not even
    prove that my assertion is incorrect.

    No, the fact that you ACCEPT that most existing logic is valid, but then
    try to change the rules at the far end, without understanding that
    you are accepting things your logic likely rejects, shows that you
    don't understand how logic actually works.


    That I do not have a complete grasp of every nuance of mathematical
    logic does not show that I do not have a sufficient grasp of those
    aspects that I refer to.

    My next goal is to attain a complete understanding of all of the basic
    terminology of model theory. I had a key insight about model theory
    sometime in the last month that indicates that I must master its basic
    terminology.

    You present "semantic tautologies" based on FALSE definitions and
    results that you cannot prove.


    It may seem that way from the POV of not understanding what I am
    saying.
    The entire body of analytical truth is a set of semantic tautologies.
    That you are unfamiliar with the meaning of these terms is no actual
    rebuttal at all.


    If you don't understand from all instruction you have been given
    that you are wrong, you are just proved to be totally mentally
    incapable.

    If you want to claim that you are not a liar by reason of
    insanity, make that plea, but that just becomes an admission that
    you are a pathological liar, a liar because of a mental illness.

    That you continue to believe that lies do not require an intention to
    deceive after the above has been pointed out makes you willfully
    ignorant, yet still not a liar.


    But, by the definition I use, since it has been made clear to you
    that you are wrong, your continuing to spout words that have been
    proven incorrect makes YOU a pathological liar.


    No, it only proves that you continue to have no grasp of what a semantic
    tautology could possibly be. Any expression that is verified as
    necessarily true entirely on the basis of its meaning is a semantic
    tautology.

    Except that isn't the meaning of a "Tautology".


    In logic, a formula is satisfiable if it is true under at least one
    interpretation, and thus a tautology is a formula whose negation is
    unsatisfiable. In other words, it cannot be false. It cannot be untrue.
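
    For the propositional case that definition can be checked mechanically:
    enumerate every interpretation and confirm the formula is never false,
    i.e. that its negation is unsatisfiable. A minimal sketch in Python,
    illustrative only and not tied to either poster's system:

        from itertools import product

        def is_tautology(formula, variables):
            """True iff formula is true under every interpretation,
            i.e. its negation is unsatisfiable."""
            return all(formula(dict(zip(variables, values)))
                       for values in product([False, True], repeat=len(variables)))

        # p -> p (written as (not p) or p) is a tautology; p -> q is not.
        print(is_tautology(lambda v: (not v["p"]) or v["p"], ["p"]))       # True
        print(is_tautology(lambda v: (not v["p"]) or v["q"], ["p", "q"]))  # False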

    Right, but that means using the rules of the field, so only the definitions
    of that field.

    Thus, your "Meaning of the Words" needs to quote ONLY actual definitions
    that have been accepted in the field.


    https://en.wikipedia.org/wiki/Tautology_(logic)#:~:text=In%20logic%2C%20a%20formula%20is,are%20known%20formally%20as%20contradictions.

    What I actually mean is analytic truth, yet math people will have no
    clue about this because all of math is syntactic rather than semantic.
    https://plato.stanford.edu/entries/analytic-synthetic/

    I thought you previously were claiming that all of mathematics had to be analytic!


    Everyone that knows philosophy of mathematics knows that this is true.

    And why do you call out an article about analytic-synthetic when you are making a distinction between semantic and syntactic? That seems to be a non sequitur.

    And math is NOT just syntactic, as syntax can't express many of the properties used in math.


    Because of this I coined my own term [semantic tautology] as the most
    self-descriptive term that I could find as a place-holder for my notion.


    Right, so you don't understand how math works, so you make up terms that you can't actually define to fix it.


    The COMMON definition is "the saying of the same thing twice in
    different words, generally considered to be a fault of style (e.g.,
    they arrived one after the other in succession)".

    The Meaning in the field of Logic is "In mathematical logic, a
    tautology (from Greek: ταυτολογία) is a formula or assertion that is
    true in every possible interpretation."

    So, neither of them points to the meaning of the words.


    Did I say that I am limiting the application of [semantic tautology] to
    words?

    You haven't given any other definition, so yes, by default you have.


    Only for people that don't have a clue what the term [semantics] means.

    You can't use the classic semantics of logic, since you disagree with how
    that works, so you only have words. (Classic logic semantics lets you
    show the principle of explosion works, so you can't be using that).


    When dealing with logic a [semantic tautology] may simply be a
    tautology(logic). When dealing with formalized natural language it may
    be clearer to refer to it as a [semantic tautology] in that the
    semantic meanings of natural language expressions are formalized as
    axioms.

    In other words, you don't know what you are talking about and are using word salad.


    If you are just making up words, you are admitting you have lost from
    the start.

    The problem is that word meanings, especially for "natural" language,
    are too ill-defined to be used to form the basis of formal logic. You
    need to

    Not when natural language is formalized.
    Semantic Grammar and the Power of Computational Language

    But then you need to use that formalized version, and be in a system that
    uses it.


    https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

    So, you are admitting you don't know how formal logic works.

    Note, ChatGPT is proven to not understand how to get the actually correct
    answer (or at least doesn't always apply those rules).


    work with FORMAL definitions, which become part of the Truth Makers
    of the system. At that point, either your semantic tautologies are
    real tautologies because they are always true in every model, or they
    are not tautologies.


    Cats are animals in the currently existing model of the world; cats may
    not exist in other possible worlds. [semantic tautology] applies within
    a model of the world.

    WHICH model of the world?


    Clearly you have never heard of possible worlds semantics. https://en.wikipedia.org/wiki/Possible_world
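
    For readers who have not met possible-worlds semantics: a statement is
    necessarily true when it holds in every accessible world, and merely
    possibly true when it holds in at least one. A toy sketch in Python with
    a total accessibility relation (S5-style); the worlds and facts below are
    invented purely for illustration:

        # Toy Kripke-style model: each world assigns truth values to propositions.
        worlds = {
            "w_actual":  {"cats_exist": True,  "cats_are_animals": True},
            "w_no_cats": {"cats_exist": False, "cats_are_animals": True},
        }

        def necessarily(prop):
            """Box: true iff prop holds in every world (total accessibility assumed)."""
            return all(facts[prop] for facts in worlds.values())

        def possibly(prop):
            """Diamond: true iff prop holds in at least one world."""
            return any(facts[prop] for facts in worlds.values())

        print(necessarily("cats_exist"))        # False: fails in w_no_cats
        print(necessarily("cats_are_animals"))  # True in this toy model
        print(possibly("cats_exist"))           # True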

    (Note, you didn't use any UUID's, so you can't argue with them)

    Cats are also a type of tractor.

    It depends on WHICH model of (what part of) the world you are working.

    Also, it depends on actually being in a model of "the world" and not something else.

    You are just showing how little you understand about the basis of formal logic.


    I am mostly focusing on the philosophical foundation of logic rather
    than logic itself. To most logicians this is just silly nonsense.
    They don't care whether or not the rules are consistent; the rules
    are the word of God to logicians.



    Cats are animals is necessarily true even if no cats ever physically
    existed.

    Nope. If cats don't exist in the system, the statement is not
    necessarily true. For instance, the statement is NOT true in the
    system of the Natural Numbers.


    Cats are animals at the semantic level in the current model of the
    world. The model of the world has GUID placeholders for the notion of
    {cats} and {animals} and for every other unique sense meaning.

    No, you didn't use them, and the GUIDs only apply to the system that
    actually defines them.

    So in *A* model of the world, with the addition of the GUIDs on the
    terms, you can make that claim.

    There is not a unique "The" model of the world.

    Sure maybe the living animal: "cat" has always been a ten story office
    building and everyone has been fooled into thinking otherwise.



    Also, I am not "ignorant", since that means not having knowledge or
    awareness of something, but I do understand what you are saying and am
    aware of your ideas, AND I POINT OUT YOUR ERRORS.

    Until you fully understand what a semantic tautology is and why it is
    necessarily true you remain sufficiently ignorant.

    As far as you have explained, it is an illogical concept based on
    undefined grounds. You refuse to state whether your "semantic" is "by
    the meaning of the words", at which point you need to understand that either

    When I refer to {semantic} and don't restrict this to the meaning of
    words then it applies to every formal language expression, natural
    language expression and formalized natural language expression.

    So, you don't understand what you are talking about.

    So, you admit that your system falls to the principle of explosion, as
    the classic definition of semantic in classic logic is enough to allow it.


    I am not stupid enough to believe that
    FALSE <proves> Donald Trump is the Christ.

    Anyone with any sense rejects this nonsense:
    ex falso [sequitur] quodlibet
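
    For reference, the derivation behind ex falso quodlibet uses nothing beyond
    and-elimination, or-introduction, and disjunctive syllogism, which is why
    rejecting the principle forces weakening at least one of those rules as
    well. A standard sketch, plus a one-line machine check in Lean 4:

        1. P and not-P      (assumption)
        2. P                (1, and-elimination)
        3. P or Q           (2, or-introduction)
        4. not-P            (1, and-elimination)
        5. Q                (3, 4, disjunctive syllogism)

        -- Lean 4: from P and its negation, any Q follows.
        example (P Q : Prop) (h : P ∧ ¬P) : Q := absurd h.1 h.2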


    That you assume otherwise is your mistake.

    In other words, you don't know how to say things precisely,

    you are using the "natural" meaning and break the rules of formal
    logic, or you mean the formal meaning within the system, at which
    point what is the difference between your "semantic" connections as
    you define them and the classical meaning of semantic being related
    to showable by a chain of connections to the truth makers of the system.

    We don't need to formalize the notions of {cats} and {animals} to know
    that cats <are> animals according to the meaning of those terms.

    Unless they are tractors, or something else using the word.

    That is why I stipulated that in the hypothetical formal system that I
    am referring to each unique sense meaning has its own GUID.


    Note, if you take that latter definition, then either you need to
    cripple the logic you allow or the implication operator and the
    principle of explosion both exist in your system. (If you don't
    define the implication operator as a base operation,

    I have already said quite a few times that I am probably replacing the
    implication operator with the Semantic Necessity operator: ⊨□

    But are you removing the AND and OR and NOT operators? If not, anything
    done by implication can be done with a combination of those.


    Show me.

    I don't think you actually understand how the operator works.


    It is a freaking truth table.
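
    The point at issue is easy to exhibit: material implication has exactly
    the same truth table as NOT-p OR q, so a system that keeps NOT and OR has
    not actually removed it. A minimal check in Python, illustrative only:

        from itertools import product

        def implies(p, q):
            """Material implication by its truth table: false only when p is true and q is false."""
            return not (p and not q)

        # p -> q agrees with (not p) or q on every row of the truth table.
        for p, q in product([False, True], repeat=2):
            assert implies(p, q) == ((not p) or q)
            print(f"p={p!s:5}  q={q!s:5}  p->q={implies(p, q)}")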

    Also, can you actually DEFINE (not just give an example of) what this
    operator defines?


    Assume that everything known to mankind is in a type hierarchy; the GUID
    is the identifier in the type hierarchy for each unique sense meaning.
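
    A minimal sketch of that idea in Python; the identifiers, the two senses
    of "cat", and the single-parent hierarchy below are all invented here
    purely for illustration, not taken from any actual system:

        import uuid

        # Hypothetical sense identifiers: one GUID per unique sense meaning.
        ANIMAL      = uuid.uuid4()
        CAT_ANIMAL  = uuid.uuid4()   # "cat" in the living-animal sense
        CAT_TRACTOR = uuid.uuid4()   # "cat" in the Caterpillar-tractor sense

        # Type hierarchy as child -> parent links (only the animal sense sits under ANIMAL).
        supertype = {CAT_ANIMAL: ANIMAL}

        def is_a(sense, ancestor):
            """Walk the subtype links upward looking for ancestor."""
            while sense is not None:
                if sense == ancestor:
                    return True
                sense = supertype.get(sense)
            return False

        print(is_a(CAT_ANIMAL, ANIMAL))    # True:  "cats are animals" in this toy hierarchy
        print(is_a(CAT_TRACTOR, ANIMAL))   # False: the tractor sense is a different GUID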


    That you can't seem to remember key points that I make and repeat many
    times is very annoying.

    The fact that you never actually define things, and ignore my comments,
    makes that your fault.

    I think the problem is you don't know how to do any of the things I ask about, so when I keep asking you to do them, you get annoyed because I
    keep showing how stupid you are.



    but do include "not", "and" and "or" as operations, it can just be
    defined in the system).


    YOU are the ignorant one, as you don't seem to understand enough to
    even comment on the rebuttals to your claims.

    THAT shows ignorance, and stupidity.





    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Mon May 1 19:09:46 2023
    XPost: sci.logic, comp.theory, sci.math

    On 5/1/23 6:01 PM, olcott wrote:
    On 4/28/2023 10:05 PM, Richard Damon wrote:
    On 4/28/23 6:17 PM, olcott wrote:
    On 4/28/2023 4:21 PM, Richard Damon wrote:
    On 4/28/23 1:15 PM, olcott wrote:
    On 4/28/2023 11:41 AM, Richard Damon wrote:
    On 4/28/23 11:50 AM, olcott wrote:
    On 4/28/2023 10:44 AM, Richard Damon wrote:
    On 4/28/23 11:26 AM, olcott wrote:
    On 4/28/2023 10:14 AM, Richard Damon wrote:
    On 4/28/23 10:59 AM, olcott wrote:
    On 4/28/2023 6:40 AM, Richard Damon wrote:

    https://www.dictionary.com/browse/lie

    3 an inaccurate or untrue statement; falsehood:
       When I went to school, history books were full of lies, and I won't teach lies to kids.

    5 to express what is false; convey a false impression.

    It does not ALWAYS require actual knowledge that the
    statement is incorrect.


    Yes it does and you are stupid for saying otherwise.


    Then why does the definition I quoted say otherwise?

    That just shows you are the one that is stupid, and a liar.


    In this case you are proving to be stupid: (yet not a liar)

    1. Traditional Definition of Lying
    There is no universally accepted definition of lying to others.
    The dictionary definition of lying is “to make a false statement
    with the intention to deceive” (OED 1989) but there are numerous
    problems with this definition. It is both too narrow, since it
    requires falsity, and too broad, since it allows for lying about
    something other than what is being stated, and lying to someone
    who is believed to be listening in but who is not being addressed.

    The most widely accepted definition of lying is the following:
    “A lie is a statement made by one who does not believe it with
    the intention that someone else shall be led to believe it”
    (Isenberg 1973, 248) (cf. “[lying is] making a statement
    believed to be false, with the intention of getting another to
    accept it as true” (Primoratz 1984, 54n2)). This definition does
    not specify the addressee, however. It may be restated as follows:

    (L1) To lie =df to make a believed-false statement to another
    person with the intention that the other person believe that
    statement to be true.

    L1 is the traditional definition of lying. According to L1,
    there are at least four necessary conditions for lying.

    First, lying requires that a person make a statement (statement
    condition).

    Second, lying requires that the person believe the statement to
    be false; that is, lying requires that the statement be
    untruthful (untruthfulness condition).

    Third, lying requires that the untruthful statement be made to
    another person (addressee condition).

    Fourth, lying requires that the person intend that that other
    person believe the untruthful statement to be true (intention to
    deceive the addressee condition).

    https://plato.stanford.edu/entries/lying-definition/#TraDefLyi


    So, you are trying to use arguments to justify that you can say
    "false statements" and not be considered a liar.

    The fact that you seem to have KNOWN that the generally accepted
    truth differed from your ideas does not excuse you from claiming
    that you can say them as FACT, and not be a liar.


    When I say that an idea is a fact I mean that it is a semantic
    tautology. That you don't understand things well enough to verify that
    it is a semantic tautology does not even make my assertion false.

    So, you admit that you don't know the actual meaning of a FACT.

    I mean true in the absolute sense of the word, such as:
    2 + 3 = 5 is verified as necessarily true on the basis of its meaning.
    Semantic tautologies are the only kind of facts that are necessarily
    true in all possible worlds.

    The fact that your error has been pointed out an enormous number
    of times makes your blatant disregard for the actual truth a
    suitable stand-in for your own belief.


    The fact that no one has understood my semantic tautologies only proves
    that no one has understood my semantic tautologies. It does not even
    prove that my assertion is incorrect.

    No, the fact that you ACCEPT that most existing logic is valid, but
    then try to change the rules at the far end, without understanding
    that you are accepting things your logic likely rejects, shows
    that you don't understand how logic actually works.


    That I do not have a complete grasp of every nuance of mathematical
    logic does not show that I do not have a sufficient grasp of those
    aspects that I refer to.

    My next goal is to attain a complete understanding of all of the basic
    terminology of model theory. I had a key insight about model theory
    sometime in the last month that indicates that I must master its basic
    terminology.

    You present "semantic tautologies" based on FALSE definitions and
    results that you cannot prove.


    It may seem that way from the POV of not understanding what I am
    saying.
    The entire body of analytical truth is a set of semantic tautologies.
    That you are unfamiliar with the meaning of these terms is no actual
    rebuttal at all.


    If you don't understand from all instruction you have been given
    that you are wrong, you are just proved to be totally mentally
    incapable.

    If you want to claim that you are not a liar by reason of
    insanity, make that plea, but that just becomes an admission
    that you are a pathological liar, a liar because of a mental
    illness.


    That you continue to believe that lies do not require an intention to
    deceive after the above has been pointed out makes you willfully
    ignorant, yet still not a liar.


    But, by the definition I use, since it has been made clear to you
    that you are wrong, your continuing to spout words that have been
    proven incorrect makes YOU a pathological liar.


    No, it only proves that you continue to have no grasp of what a
    semantic
    tautology could possibly be. Any expression that is verified as
    necessarily true entirely on the basis of its meaning is a semantic
    tautology.

    Except that isn't the meaning of a "Tautology".


    In logic, a formula is satisfiable if it is true under at least one
    interpretation, and thus a tautology is a formula whose negation is
    unsatisfiable. In other words, it cannot be false. It cannot be untrue.

    Right, but that means using the rules of the field, so only the definitions
    of that field.

    Thus, your "Meaning of the Words" needs to quote ONLY actual
    definitions that have been accepted in the field.


    https://en.wikipedia.org/wiki/Tautology_(logic)#:~:text=In%20logic%2C%20a%20formula%20is,are%20known%20formally%20as%20contradictions.

    What I actually mean is analytic truth, yet math people will have no
    clue about this because all of math is syntactic rather than semantic.
    https://plato.stanford.edu/entries/analytic-synthetic/

    I thought you previously were claiming that all of mathematics had to
    be analytic!


    Everyone that knows philosophy of mathematics knows that this is true.

    So, which is it? Is Mathematics analytical, or is it synthetic?

    Note, if you claim "synthetic", what "World" applies, since mathematics
    has concepts that can't be directly applied to the physical universe,
    which, at least what we can know of it, is Finite.


    And why do you call out an article about analytic-synthetic when you
    are making a distinction between semantic and syntactic? That seems to
    be a non sequitur.

    And math is NOT just syntactic, as syntax can't express many of the
    properties used in math.


    Because of this I coined my own term [semantic tautology] as the most
    self-descriptive term that I could find as a place-holder for my notion.


    Right, so you don't understand how math works, so you make up terms that
    you can't actually define to fix it.


    The COMMON definition is "the saying of the same thing twice in
    different words, generally considered to be a fault of style (e.g.,
    they arrived one after the other in succession)".

    The Meaning in the field of Logic is "In mathematical logic, a
    tautology (from Greek: ταυτολογία) is a formula or assertion that is
    true in every possible interpretation."

    So, neither of them points to the meaning of the words.


    Did I say that I am limiting the application of [semantic tautology] to
    words?

    You haven't given any other definition, so yes, by default you have.


    Only for people that don't have a clue what the term [semantics] means.

    Which seems to be you, since you can't supply the actual definition you
    are using.

    My question is WHICH of the definitions of "Semantic" you are using;
    there are several that could be applied.


    You can't use the classic semantics of logic, since you disagree with
    how that works, so you only have words. (Classic logic semantics lets
    you show the principle of explosion works, so you can't be using that).


    When dealing with logic a [semantic tautology] may simply be a
    tautology(logic). When dealing with formalized natural language it may
    be clearer to refer to it as a [semantic tautology] in that the
    semantic meanings of natural language expressions are formalized as
    axioms.

    In other words, you don't know what you are talking about and are using
    word salad.


    If you are just making up words, you are admitting you have lost
    from the start.

    The problem is that word meanings, especially for "natural" language,
    are too ill-defined to be used to form the basis of formal logic. You
    need to

    Not when natural language is formalized.
    Semantic Grammar and the Power of Computational Language

    But then you need to use that formalized version, and be in a system
    that uses it.


    https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

    So, you are admitting you don't know how formal logic works.

    Note, ChatGPT is proven to not understand how to get the actually correct
    answer (or at least doesn't always apply those rules).


    work with FORMAL definitions, which become part of the Truth Makers
    of the system. At that point, either your semantic tautologies are
    real tautologies because they are always true in every model, or they
    are not tautologies.


    Cats are animals in the currently existing model of the world; cats may
    not exist in other possible worlds. [semantic tautology] applies within
    a model of the world.

    WHICH model of the world?


    Clearly you have never heard of possible worlds semantics. https://en.wikipedia.org/wiki/Possible_world

    But that isn't the only one.

    Again, you are confusing "a case" with "the universal".

    That seems to be your Achilles heel.


    (Note, you didn't use any UUID's, so you can't argue with them)

    Cats are also a type of tractor.

    It depends on WHICH model of (what part of) the world you are working.

    Also, it depends on actually being in a model of "the world" and not
    something else.

    You are just showing how little you understand about the basis of
    formal logic.


    I am mostly focusing on the philosophical foundation of logic rather
    than logic itself. To most logicians this is just silly nonsense.
    They don't care whether or not the rules are consistent; the rules
    are the word of God to logicians.

    So, why are you making claims about things proven with the logic you
    want to replace?

    And, your statement just shows you don't really understand how logic works.




    Cats are animals is necessarily true even if no cats ever
    physically existed.

    Nope. If cats don't exist in the system, the statement is not
    necessarily true. For instance, the statement is NOT true in the
    system of the Natural Numbers.


    Cats are animals at the semantic level in the current model of the
    world. The model of the world has GUID placeholders for the notion of
    {cats} and {animals} and for every other unique sense meaning.

    No, you didn't use them, and the GUIDs only apply to the system that
    actually defines them.

    So in *A* model of the world, with the addition of the GUIDs on the
    terms, you can make that claim.

    There is not a unique "The" model of the world.

    Sure maybe the living animal: "cat" has always been a ten story office building and everyone has been fooled into thinking otherwise.

    So, you really don't understand that there are MANY models in use, and
    in not all of them does "cat" mean the animal, like you claim.




    Also, I am not "ignorant", since that means not having knowledge
    or awareness of something, but I do understand what you are saying and am
    aware of your ideas, AND I POINT OUT YOUR ERRORS.

    Until you fully understand what a semantic tautology is and why it is
    necessarily true you remain sufficiently ignorant.

    As far as you have explained, it is an illogical concept based on
    undefined grounds. You refuse to state whether your "semantic" is
    "by the meaning of the words", at which point you need to understand
    that either

    When I refer to {semantic} and don't restrict this to the meaning of
    words then it applies to every formal language expression, natural
    language expression and formalized natural language expression.

    So, you don't understand what you are talking about.

    So, you admit that your system falls to the principle of explosion, as
    the classic definition of semantic in classic logic is enough to allow
    it.


    I am not stupid enough to believe that
    FALSE <proves> Donald Trump is the Christ.

    Good, because the statement that you are looking at doesn't claim that.

    You apparently are just unable to understand what it does say, because
    you have made yourself too ignorant of what logic actually is.


    Anyone with any sense rejects this nonsense:
    ex falso [sequitur] quodlibet

    So, you don't understand what "falso" actually means.

    Note, it isn't the logical value "False"



    That you assume otherwise is your mistake.

    In other words, you don't know how to say things precisely,

    you are using the "natural" meaning and break the rules of formal
    logic, or you mean the formal meaning within the system, at which
    point what is the difference between your "semantic" connections as
    you define them and the classical meaning of semantic being related
    to showable by a chain of connections to the truth makers of the
    system.


    We don't need to formalize the notions of {cats} and {animals} to know
    that cats <are> animals according to the meaning of those terms.

    Unless they are tractors, or something else using the word.

    That is why I stipulated that in the hypothetical formal system that I
    am referring to each unique sense meaning has its own GUID.

    So, you are stipulating that none of your arguments apply to the systems
    that the arguments were about.



    Note, if you take that latter definition, then either you need to
    cripple the logic you allow or the implication operator and the
    principle of explosion both exist in your system. (If you don't
    define the implication operator as a base operation,

    I have already said quite a few times that I am probably replacing
    the implication operator with the Semantic Necessity operator: ⊨□

    But are you removing the AND and OR and NOT operators? If not, anything
    done by implication can be done with a combination of those.


    Show me.

    I DID, but apparently you didn't understand it.


    I don't think you actually understand how the operator works.


    It is a freaking truth table.

    Right, but you don't seem to understand how to apply that truth table.


    Also, can you actually DEFINE (not just give an example of) what this
    operator defines?


    Assume that everything known to mankind is in a type hierarchy; the GUID
    is the identifier in the type hierarchy for each unique sense meaning.


    Which would be ONE system to work in. Some systems actually pre-suppose
    things that we know are not in this physical world. You don't seem to
    understand that not all logic is tied to the physical world, so you
    can't go back to it as your source. There are very many logical systems,
    and many of them are based on assumptions contradictory to others. This
    BREAKS your concept of a single universal system.

    For instance, you can build one logical system with the assumption that
    the Twin Primes conjecture is true, and another where it is False. These
    both form viable logic systems, but they are contradictory to each
    other. Yes, perhaps eventually one will be proven to be based on a false
    truth maker, and lead to contradictions, but until that happens, both
    can be worked in.


    That you can't seem to remember key points that I make and repeat many
    times is very annoying.

    The fact that you never actually define things, and ignore my comments,
    makes that your fault.

    I think the problem is you don't know how to do any of the things I
    ask about, so when I keep asking you to do them, you get annoyed
    because I keep showing how stupid you are.



    but do include "not", "and" and "or" as operations, it can just be
    defined in the system).


    YOU are the ignorant one, as you don't seem to understand enough
    to even comment on the rebuttals to your claims.

    THAT shows ignorance, and stupidity.






    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)