A simulating halt decider correctly predicts whether or not its
correctly simulated input can possibly reach its own final state and
halt. It does this by correctly recognizing several non-halting behavior patterns in a finite number of steps of correct simulation. Inputs that
do terminate are simply simulated until they complete.
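A minimal illustrative sketch in C of this simulate-and-check idea, using a hypothetical toy instruction set of my own (not anything from the actual H): the simulator reports halting when the simulated program reaches HALT, and reports non-halting when it matches the simplest non-halting pattern, an exactly repeated configuration.

  #include <stdio.h>

  /* Toy machine: one register, two instructions plus HALT. */
  typedef enum { INC, JMP, HALT } Op;
  typedef struct { Op op; int arg; } Insn;

  /* Simulate up to max_steps steps.
     Returns 1 if the simulated program halts,
             0 if a repeated configuration (a provable non-halting
               pattern for this deterministic toy machine) is seen,
            -1 if it gave up without deciding. */
  int decide_halting(const Insn *prog, int max_steps)
  {
      int pc = 0, reg = 0;
      int seen_pc[1024], seen_reg[1024], seen = 0;

      for (int step = 0; step < max_steps; step++) {
          if (prog[pc].op == HALT)
              return 1;                              /* simulated program halted */
          for (int i = 0; i < seen; i++)
              if (seen_pc[i] == pc && seen_reg[i] == reg)
                  return 0;                          /* repeated configuration: non-halting */
          if (seen < 1024) {
              seen_pc[seen] = pc;
              seen_reg[seen] = reg;
              seen++;
          }
          if (prog[pc].op == INC) { reg++; pc++; }
          else                    { pc = prog[pc].arg; }   /* JMP */
      }
      return -1;                                     /* undecided within max_steps */
  }

  int main(void)
  {
      Insn halts[] = { {INC, 0}, {INC, 0}, {HALT, 0} };
      Insn loops[] = { {INC, 0}, {JMP, 1} };
      printf("halts: %d\n", decide_halting(halts, 1000));   /* prints 1 */
      printf("loops: %d\n", decide_halting(loops, 1000));   /* prints 0 */
      return 0;
  }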
When a simulating halt decider correctly simulates N steps of its input
it derives the exact same N steps that a pure UTM would derive because
it is itself a UTM with extra features.
My reviewers cannot show that any of the extra features added to the UTM change the behavior of the simulated input for the first N steps of simulation:
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it.
(c) Even aborting the simulation after N steps doesn't change the first
N steps.
Because of all this we can know that the first N steps of input D
simulated by simulating halt decider H are the actual behavior that D presents to H for these same N steps.
*computation that halts*… “the Turing machine will halt whenever it enters a final state” (Linz:1990:234)
When we see (after N steps) that D correctly simulated by H cannot
possibly reach its simulated final state in any finite number of steps
of correct simulation then we have conclusive proof that D presents
non-halting behavior to H.
*Simulating (partial) Halt Deciders Defeat the Halting Problem Proofs* https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs
On 4/18/23 1:00 AM, olcott wrote:
A simulating halt decider correctly predicts whether or not its
correctly simulated input can possibly reach its own final state and
halt. It does this by correctly recognizing several non-halting behavior
patterns in a finite number of steps of correct simulation. Inputs that
do terminate are simply simulated until they complete.
Except it doesn't do this for the "pathological" program.
The "Pathological Program" when built on such a Decider that does give
an answer, which you say will be non-halting, and then "Correctly
Simulated" by giving it representation to a UTM, we see that the
simulation reaches a final state.
Thus, your H was WRONG to give that answer. And the problem is that you have
added a pattern that isn't always non-halting.
When a simulating halt decider correctly simulates N steps of its input
it derives the exact same N steps that a pure UTM would derive because
it is itself a UTM with extra features.
But it ISN'T a "UTM" any more, because some of the features you added
have removed essential features needed for it to be an actual UTM. That
you make this claim shows you don't actually know what a UTM is.
This is like saying a NASCAR Racing Car is a Street Legal vehicle, since
it started as one and just had some extra features added.
My reviewers cannot show that any of the extra features added to the UTM
change the behavior of the simulated input for the first N steps of
simulation:
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it
(c) Even aborting the simulation after N steps doesn't change the
first N steps.
No one claims that it doesn't correctly reproduce the first N steps of
the behavior; that is a Strawman argument.
Because of all this we can know that the first N steps of input D
simulated by simulating halt decider H are the actual behavior that D
presents to H for these same N steps.
*computation that halts*… “the Turing machine will halt whenever it
enters a final state” (Linz:1990:234)
Right, so we are concerned about the behavior of the ACTUAL machine, not
a partial simulation of it.
H(D,D) returns non-halting, but D(D) Halts, so the answer is wrong.
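For reference, a self-contained sketch in C of the conventional "pathological" construction being referred to here; the stub H and its prototype are assumptions standing in for the decider under discussion (it simply answers 0, "non-halting"), which makes the contradiction visible: D(D) then halts.

  #include <stdio.h>

  typedef int (*ptr)();

  /* Stub standing in for the claimed halt decider: it answers 0
     ("non-halting") for every input, the answer discussed for H(D,D). */
  int H(ptr p, ptr i) { (void)p; (void)i; return 0; }

  /* The conventional "pathological" program built on H. */
  int D(ptr p)
  {
      int halt_status = H(p, p);
      if (halt_status)
          for (;;) ;            /* loop forever if H said "halts" */
      return halt_status;       /* otherwise return, i.e. halt */
  }

  int main(void)
  {
      printf("H(D,D) = %d (0 = non-halting)\n", H((ptr)D, (ptr)D));
      printf("D(D) returned %d, i.e. it halted\n", D((ptr)D));
      return 0;
  }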
When we see (after N steps) that D correctly simulated by H cannot
possibly reach its simulated final state in any finite number of steps
of correct simulation then we have conclusive proof that D presents non-
halting behavior to H.
But it isn't "Correctly Simulated by H"
On 4/18/2023 6:32 AM, Richard Damon wrote:
On 4/18/23 1:00 AM, olcott wrote:
You agreed that the first N steps are correctly simulated.
A simulating halt decider correctly predicts whether or not its
correctly simulated input can possibly reach its own final state and
halt. It does this by correctly recognizing several non-halting behavior >>> patterns in a finite number of steps of correct simulation. Inputs that
do terminate are simply simulated until they complete.
Except t doesn't o this for the "pathological" program.
The "Pathological Program" when built on such a Decider that does give
an answer, which you say will be non-halting, and then "Correctly
Simulated" by giving it representation to a UTM, we see that the
simulation reaches a final state.
Thus, your H was WRONG t make the answer. And the problem is you have
added a pattern that isn't always non-halting.
When a simulating halt decider correctly simulates N steps of its input
it derives the exact same N steps that a pure UTM would derive because
it is itself a UTM with extra features.
But if ISN'T a "UTM" any more, because some of the features you added
have removed essential features needed for it to be an actual UTM.
That you make this claim shows you don't actually know what a UTM is.
This is like saying a NASCAR Racing Car is a Street Legal vehicle,
since it started as one and just had some extra features axded.
My reviewers cannot show that any of the extra features added to the UTM >>> change the behavior of the simulated input for the first N steps of
simulation:
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it
(c) Even aborting the simulation after N steps doesn't change the
first N steps.
No one claims that it doesn't correctly reproduce the first N steps of
the behavior, that is a Strawman argumen.
Because of all this we can know that the first N steps of input D
simulated by simulating halt decider H are the actual behavior that D
presents to H for these same N steps.
*computation that halts*… “the Turing machine will halt whenever it
enters a final state” (Linz:1990:234)
Right, so we are concerned about the behavior of the ACTUAL machine,
not a partial simulation of it.
H(D,D) returns non-halting, but D(D) Halts, so the answer is wrong.
When we see (after N steps) that D correctly simulated by H cannot
possibly reach its simulated final state in any finite number of steps
of correct simulation then we have conclusive proof that D presents non- >>> halting behavior to H.
But it isn't "Correctly Simulated by H"
It turns out that the non-halting behavior pattern is correctly
recognized in the first N steps.
On 4/18/2023 4:55 PM, Mr Flibble wrote:
On 18/04/2023 4:58 pm, olcott wrote:
On 4/18/2023 6:32 AM, Richard Damon wrote:
On 4/18/23 1:00 AM, olcott wrote:
You agreed that the first N steps are correctly simulated.
A simulating halt decider correctly predicts whether or not its
correctly simulated input can possibly reach its own final state and >>>>> halt. It does this by correctly recognizing several non-halting
behavior
patterns in a finite number of steps of correct simulation. Inputs
that
do terminate are simply simulated until they complete.
Except t doesn't o this for the "pathological" program.
The "Pathological Program" when built on such a Decider that does
give an answer, which you say will be non-halting, and then
"Correctly Simulated" by giving it representation to a UTM, we see
that the simulation reaches a final state.
Thus, your H was WRONG t make the answer. And the problem is you
have added a pattern that isn't always non-halting.
When a simulating halt decider correctly simulates N steps of its
input
it derives the exact same N steps that a pure UTM would derive because >>>>> it is itself a UTM with extra features.
But if ISN'T a "UTM" any more, because some of the features you
added have removed essential features needed for it to be an actual
UTM. That you make this claim shows you don't actually know what a
UTM is.
This is like saying a NASCAR Racing Car is a Street Legal vehicle,
since it started as one and just had some extra features axded.
My reviewers cannot show that any of the extra features added to
the UTM
change the behavior of the simulated input for the first N steps of
simulation:
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it
(c) Even aborting the simulation after N steps doesn't change the
first N steps.
No one claims that it doesn't correctly reproduce the first N steps
of the behavior, that is a Strawman argumen.
Because of all this we can know that the first N steps of input D
simulated by simulating halt decider H are the actual behavior that D >>>>> presents to H for these same N steps.
*computation that halts*… “the Turing machine will halt whenever it >>>>> enters a final state” (Linz:1990:234)rrr
Right, so we are concerned about the behavior of the ACTUAL machine,
not a partial simulation of it.
H(D,D) returns non-halting, but D(D) Halts, so the answer is wrong.
When we see (after N steps) that D correctly simulated by H cannot
possibly reach its simulated final state in any finite number of steps >>>>> of correct simulation then we have conclusive proof that D presents
non-
halting behavior to H.
But it isn't "Correctly Simulated by H"
It turns out that the non-halting behavior pattern is correctly
recognized in the first N steps.
Your assumption that a program that calls H is non-halting is erroneous:
My new paper anchors its ideas in actual Turing machines so it is
unequivocal. The first two pages are only about the Linz Turing
machine based proof.
The H/D material is now on a single page and all reference
to the x86 language has been stripped and replaced with
analysis entirely in C.
With this new paper even Richard admits that the first N steps of
UTM-based simulation by a simulating halt decider are necessarily the
actual behavior of these N steps.
*Simulating (partial) Halt Deciders Defeat the Halting Problem Proofs* https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs
On 4/18/2023 6:32 AM, Richard Damon wrote:
On 4/18/23 1:00 AM, olcott wrote:
You agreed that the first N steps are correctly simulated.
A simulating halt decider correctly predicts whether or not its
correctly simulated input can possibly reach its own final state and
halt. It does this by correctly recognizing several non-halting behavior >>> patterns in a finite number of steps of correct simulation. Inputs that
do terminate are simply simulated until they complete.
Except t doesn't o this for the "pathological" program.
The "Pathological Program" when built on such a Decider that does give
an answer, which you say will be non-halting, and then "Correctly
Simulated" by giving it representation to a UTM, we see that the
simulation reaches a final state.
Thus, your H was WRONG t make the answer. And the problem is you have
added a pattern that isn't always non-halting.
When a simulating halt decider correctly simulates N steps of its input
it derives the exact same N steps that a pure UTM would derive because
it is itself a UTM with extra features.
But if ISN'T a "UTM" any more, because some of the features you added
have removed essential features needed for it to be an actual UTM.
That you make this claim shows you don't actually know what a UTM is.
This is like saying a NASCAR Racing Car is a Street Legal vehicle,
since it started as one and just had some extra features axded.
My reviewers cannot show that any of the extra features added to the UTM >>> change the behavior of the simulated input for the first N steps of
simulation:
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it
(c) Even aborting the simulation after N steps doesn't change the
first N steps.
No one claims that it doesn't correctly reproduce the first N steps of
the behavior, that is a Strawman argumen.
Because of all this we can know that the first N steps of input D
simulated by simulating halt decider H are the actual behavior that D
presents to H for these same N steps.
*computation that halts*… “the Turing machine will halt whenever it
enters a final state” (Linz:1990:234)
Right, so we are concerned about the behavior of the ACTUAL machine,
not a partial simulation of it.
H(D,D) returns non-halting, but D(D) Halts, so the answer is wrong.
When we see (after N steps) that D correctly simulated by H cannot
possibly reach its simulated final state in any finite number of steps
of correct simulation then we have conclusive proof that D presents non- >>> halting behavior to H.
But it isn't "Correctly Simulated by H"
It turns out that the non-halting behavior pattern is correctly
recognized in the first N steps.
On 18/04/2023 4:58 pm, olcott wrote:
On 4/18/2023 6:32 AM, Richard Damon wrote:
On 4/18/23 1:00 AM, olcott wrote:
You agreed that the first N steps are correctly simulated.
A simulating halt decider correctly predicts whether or not its
correctly simulated input can possibly reach its own final state and
halt. It does this by correctly recognizing several non-halting
behavior
patterns in a finite number of steps of correct simulation. Inputs that >>>> do terminate are simply simulated until they complete.
Except t doesn't o this for the "pathological" program.
The "Pathological Program" when built on such a Decider that does
give an answer, which you say will be non-halting, and then
"Correctly Simulated" by giving it representation to a UTM, we see
that the simulation reaches a final state.
Thus, your H was WRONG t make the answer. And the problem is you have
added a pattern that isn't always non-halting.
When a simulating halt decider correctly simulates N steps of its input >>>> it derives the exact same N steps that a pure UTM would derive because >>>> it is itself a UTM with extra features.
But if ISN'T a "UTM" any more, because some of the features you added
have removed essential features needed for it to be an actual UTM.
That you make this claim shows you don't actually know what a UTM is.
This is like saying a NASCAR Racing Car is a Street Legal vehicle,
since it started as one and just had some extra features axded.
My reviewers cannot show that any of the extra features added to the
UTM
change the behavior of the simulated input for the first N steps of
simulation:
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it
(c) Even aborting the simulation after N steps doesn't change the
first N steps.
No one claims that it doesn't correctly reproduce the first N steps
of the behavior, that is a Strawman argumen.
Because of all this we can know that the first N steps of input D
simulated by simulating halt decider H are the actual behavior that D
presents to H for these same N steps.
*computation that halts*… “the Turing machine will halt whenever it >>>> enters a final state” (Linz:1990:234)rrr
Right, so we are concerned about the behavior of the ACTUAL machine,
not a partial simulation of it.
H(D,D) returns non-halting, but D(D) Halts, so the answer is wrong.
When we see (after N steps) that D correctly simulated by H cannot
possibly reach its simulated final state in any finite number of steps >>>> of correct simulation then we have conclusive proof that D presents
non-
halting behavior to H.
But it isn't "Correctly Simulated by H"
It turns out that the non-halting behavior pattern is correctly
recognized in the first N steps.
Your assumption that a program that calls H is non-halting is erroneous:
void Px(void (*x)())
{
  (void) H(x, x);  /* the result returned by H is deliberately discarded */
  return;
}
Px halts (it discards the result that H returns); your decider thinks
that Px is non-halting, which is an obvious error due to a design flaw in
the architecture of your decider. Only the Flibble Signaling Simulating Halt Decider (SSHD) correctly handles this case.
/Flibble
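A self-contained sketch of Flibble's point, with a stub (assumed prototype, not the actual H) standing in for the decider: Px returns as soon as H returns, no matter what value H produced, so any decider that reports Px as non-halting contradicts Px's actual behavior.

  #include <stdio.h>

  /* Stub standing in for the decider under discussion; whatever value it
     returns, Px discards it. */
  int H(void (*p)(), void (*i)()) { (void)p; (void)i; return 0; }

  void Px(void (*x)())
  {
      (void) H(x, x);           /* result deliberately discarded */
      return;
  }

  int main(void)
  {
      Px((void (*)())Px);       /* Px halts as soon as H returns */
      printf("Px halted\n");
      return 0;
  }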
On 4/18/23 11:58 AM, olcott wrote:
On 4/18/2023 6:32 AM, Richard Damon wrote:
On 4/18/23 1:00 AM, olcott wrote:
You agreed that the first N steps are correctly simulated.
A simulating halt decider correctly predicts whether or not its
correctly simulated input can possibly reach its own final state and
halt. It does this by correctly recognizing several non-halting
behavior
patterns in a finite number of steps of correct simulation. Inputs that >>>> do terminate are simply simulated until they complete.
Except t doesn't o this for the "pathological" program.
The "Pathological Program" when built on such a Decider that does
give an answer, which you say will be non-halting, and then
"Correctly Simulated" by giving it representation to a UTM, we see
that the simulation reaches a final state.
Thus, your H was WRONG t make the answer. And the problem is you have
added a pattern that isn't always non-halting.
When a simulating halt decider correctly simulates N steps of its input >>>> it derives the exact same N steps that a pure UTM would derive because >>>> it is itself a UTM with extra features.
But if ISN'T a "UTM" any more, because some of the features you added
have removed essential features needed for it to be an actual UTM.
That you make this claim shows you don't actually know what a UTM is.
This is like saying a NASCAR Racing Car is a Street Legal vehicle,
since it started as one and just had some extra features axded.
My reviewers cannot show that any of the extra features added to the
UTM
change the behavior of the simulated input for the first N steps of
simulation:
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it
(c) Even aborting the simulation after N steps doesn't change the
first N steps.
No one claims that it doesn't correctly reproduce the first N steps
of the behavior, that is a Strawman argumen.
Because of all this we can know that the first N steps of input D
simulated by simulating halt decider H are the actual behavior that D
presents to H for these same N steps.
*computation that halts*… “the Turing machine will halt whenever it >>>> enters a final state” (Linz:1990:234)rrr
Right, so we are concerned about the behavior of the ACTUAL machine,
not a partial simulation of it.
H(D,D) returns non-halting, but D(D) Halts, so the answer is wrong.
When we see (after N steps) that D correctly simulated by H cannot
possibly reach its simulated final state in any finite number of steps >>>> of correct simulation then we have conclusive proof that D presents
non-
halting behavior to H.
But it isn't "Correctly Simulated by H"
It turns out that the non-halting behavior pattern is correctly
recognized in the first N steps.
Nope, the pattern you detect isn't a "Non-Halting" pattern, as is shown
by the fact that D(D) does halt.
It might show that no possible H could simulate the input to a final
state, but that isn't the definition of Halting. Halting is strictly
about the behavior of the machine itself.
On 4/18/23 7:13 PM, olcott wrote:
On 4/18/2023 5:30 PM, Richard Damon wrote:
On 4/18/23 11:58 AM, olcott wrote:
On 4/18/2023 6:32 AM, Richard Damon wrote:
On 4/18/23 1:00 AM, olcott wrote:
You agreed that the first N steps are correctly simulated.
A simulating halt decider correctly predicts whether or not its
correctly simulated input can possibly reach its own final state and >>>>>> halt. It does this by correctly recognizing several non-halting
behavior
patterns in a finite number of steps of correct simulation. Inputs >>>>>> that
do terminate are simply simulated until they complete.
Except t doesn't o this for the "pathological" program.
The "Pathological Program" when built on such a Decider that does
give an answer, which you say will be non-halting, and then
"Correctly Simulated" by giving it representation to a UTM, we see
that the simulation reaches a final state.
Thus, your H was WRONG t make the answer. And the problem is you
have added a pattern that isn't always non-halting.
When a simulating halt decider correctly simulates N steps of its
input
it derives the exact same N steps that a pure UTM would derive
because
it is itself a UTM with extra features.
But if ISN'T a "UTM" any more, because some of the features you
added have removed essential features needed for it to be an actual
UTM. That you make this claim shows you don't actually know what a
UTM is.
This is like saying a NASCAR Racing Car is a Street Legal vehicle,
since it started as one and just had some extra features axded.
My reviewers cannot show that any of the extra features added to
the UTM
change the behavior of the simulated input for the first N steps
of simulation:
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it
(c) Even aborting the simulation after N steps doesn't change the
first N steps.
No one claims that it doesn't correctly reproduce the first N steps
of the behavior, that is a Strawman argumen.
Because of all this we can know that the first N steps of input D
simulated by simulating halt decider H are the actual behavior that D >>>>>> presents to H for these same N steps.
*computation that halts*… “the Turing machine will halt whenever >>>>>> it enters a final state” (Linz:1990:234)rrr
Right, so we are concerned about the behavior of the ACTUAL
machine, not a partial simulation of it.
H(D,D) returns non-halting, but D(D) Halts, so the answer is wrong.
When we see (after N steps) that D correctly simulated by H cannot >>>>>> possibly reach its simulated final state in any finite number of
steps
of correct simulation then we have conclusive proof that D
presents non-
halting behavior to H.
But it isn't "Correctly Simulated by H"
It turns out that the non-halting behavior pattern is correctly
recognized in the first N steps.
Nope, the pattern you detect isn't a "Nobn-Halting" pattern, as is
shown by the fact that D(D) does halt.
It might show that no possible H could simulate the input to a final
state, but that isn't the definition of Halting. Halting is strictly
about the behavior of the machine itself.
When Ĥ is applied to ⟨Ĥ⟩
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
computation that halts… “the Turing machine will halt whenever it enters a final state” (Linz:1990:234)
Right, and Ĥ (Ĥ) will reach Ĥ.qn and halt if H (Ĥ) (Ĥ) goes to qn, as it must if it is to say that its input is non-halting.
This is because embedded_H and H must be identical machines, and thus do exactly the same thing when given the same input.
Non-halting behavior patterns can be matched in N steps.
For ⟨Ĥ⟩, halting is reaching its simulated final state ⟨Ĥ.qn⟩ in a finite
number of steps.
Nope, Halting is the MACHINE Ĥ (Ĥ) reaching its final state Ĥ.qn in a finite number of steps.
You can also use UTM (Ĥ) (Ĥ), which also reaches that final state,
because it doesn't stop simulating until it reaches a final state, or it
just keeps simulating.
H / embedded_H are NOT a UTM, as they don't have that necessary property.
N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are the actual
behavior of this input:
(a) Ĥ.q0 The input ⟨Ĥ⟩ is copied then transitions to embedded_H
(b) embedded_H is applied to ⟨Ĥ⟩ ⟨Ĥ⟩ (input and copy) which simulates
⟨Ĥ⟩ applied to ⟨Ĥ⟩
(c) *which begins at its own simulated ⟨Ĥ.q0⟩ to repeat the process*
Except you have defined that H, and thus embedded_H, doesn't do (c), but
when it sees the attempt to go into embedded_H with the same input it
actually aborts its simulation and goes to Ĥ.qn, which causes the machine
Ĥ to halt.
On 4/18/2023 5:30 PM, Richard Damon wrote:
On 4/18/23 11:58 AM, olcott wrote:
On 4/18/2023 6:32 AM, Richard Damon wrote:
On 4/18/23 1:00 AM, olcott wrote:
You agreed that the first N steps are correctly simulated.
A simulating halt decider correctly predicts whether or not its
correctly simulated input can possibly reach its own final state and >>>>> halt. It does this by correctly recognizing several non-halting
behavior
patterns in a finite number of steps of correct simulation. Inputs
that
do terminate are simply simulated until they complete.
Except t doesn't o this for the "pathological" program.
The "Pathological Program" when built on such a Decider that does
give an answer, which you say will be non-halting, and then
"Correctly Simulated" by giving it representation to a UTM, we see
that the simulation reaches a final state.
Thus, your H was WRONG t make the answer. And the problem is you
have added a pattern that isn't always non-halting.
When a simulating halt decider correctly simulates N steps of its
input
it derives the exact same N steps that a pure UTM would derive because >>>>> it is itself a UTM with extra features.
But if ISN'T a "UTM" any more, because some of the features you
added have removed essential features needed for it to be an actual
UTM. That you make this claim shows you don't actually know what a
UTM is.
This is like saying a NASCAR Racing Car is a Street Legal vehicle,
since it started as one and just had some extra features axded.
My reviewers cannot show that any of the extra features added to
the UTM
change the behavior of the simulated input for the first N steps of
simulation:
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it
(c) Even aborting the simulation after N steps doesn't change the
first N steps.
No one claims that it doesn't correctly reproduce the first N steps
of the behavior, that is a Strawman argumen.
Because of all this we can know that the first N steps of input D
simulated by simulating halt decider H are the actual behavior that D >>>>> presents to H for these same N steps.
*computation that halts*… “the Turing machine will halt whenever it >>>>> enters a final state” (Linz:1990:234)rrr
Right, so we are concerned about the behavior of the ACTUAL machine,
not a partial simulation of it.
H(D,D) returns non-halting, but D(D) Halts, so the answer is wrong.
When we see (after N steps) that D correctly simulated by H cannot
possibly reach its simulated final state in any finite number of steps >>>>> of correct simulation then we have conclusive proof that D presents
non-
halting behavior to H.
But it isn't "Correctly Simulated by H"
It turns out that the non-halting behavior pattern is correctly
recognized in the first N steps.
Nope, the pattern you detect isn't a "Nobn-Halting" pattern, as is
shown by the fact that D(D) does halt.
It might show that no possible H could simulate the input to a final
state, but that isn't the definition of Halting. Halting is strictly
about the behavior of the machine itself.
When Ĥ is applied to ⟨Ĥ⟩
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
computation that halts… “the Turing machine will halt whenever it enters a final state” (Linz:1990:234)
Non-halting behavior patterns can be matched in N steps
⟨Ĥ⟩ Halting is reaching its simulated final state of ⟨Ĥ.qn⟩ in a finite
number of steps
N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are the actual behavior of this input:
(a) Ĥ.q0 The input ⟨Ĥ⟩ is copied then transitions to embedded_H
(b) embedded_H is applied to ⟨Ĥ⟩ ⟨Ĥ⟩ (input and copy) which simulates
⟨Ĥ⟩ applied to ⟨Ĥ⟩
(c) *which begins at its own simulated ⟨Ĥ.q0⟩ to repeat the process*
The above N steps prove that ⟨Ĥ⟩ correctly simulated by embedded_H could not possibly reach the final state ⟨Ĥ.qn⟩ in any finite number of steps of correct simulation *because ⟨Ĥ⟩ is defined to have*
*a pathological relationship to embedded_H*
That a UTM applied to ⟨Ĥ⟩ ⟨Ĥ⟩ halts shows an entirely different sequence
*because UTM and ⟨Ĥ⟩ ⟨Ĥ⟩ do not have a pathological relationship*
On 4/18/2023 9:31 PM, Richard Damon wrote:
On 4/18/23 7:13 PM, olcott wrote:
On 4/18/2023 5:30 PM, Richard Damon wrote:
On 4/18/23 11:58 AM, olcott wrote:
On 4/18/2023 6:32 AM, Richard Damon wrote:
On 4/18/23 1:00 AM, olcott wrote:
You agreed that the first N steps are correctly simulated.
A simulating halt decider correctly predicts whether or not its
correctly simulated input can possibly reach its own final state and >>>>>>> halt. It does this by correctly recognizing several non-halting
behavior
patterns in a finite number of steps of correct simulation.
Inputs that
do terminate are simply simulated until they complete.
Except t doesn't o this for the "pathological" program.
The "Pathological Program" when built on such a Decider that does
give an answer, which you say will be non-halting, and then
"Correctly Simulated" by giving it representation to a UTM, we see >>>>>> that the simulation reaches a final state.
Thus, your H was WRONG t make the answer. And the problem is you
have added a pattern that isn't always non-halting.
When a simulating halt decider correctly simulates N steps of its >>>>>>> input
it derives the exact same N steps that a pure UTM would derive
because
it is itself a UTM with extra features.
But if ISN'T a "UTM" any more, because some of the features you
added have removed essential features needed for it to be an
actual UTM. That you make this claim shows you don't actually know >>>>>> what a UTM is.
This is like saying a NASCAR Racing Car is a Street Legal vehicle, >>>>>> since it started as one and just had some extra features axded.
My reviewers cannot show that any of the extra features added to >>>>>>> the UTM
change the behavior of the simulated input for the first N steps >>>>>>> of simulation:
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it
(c) Even aborting the simulation after N steps doesn't change the >>>>>>> first N steps.
No one claims that it doesn't correctly reproduce the first N
steps of the behavior, that is a Strawman argumen.
Because of all this we can know that the first N steps of input D >>>>>>> simulated by simulating halt decider H are the actual behavior
that D
presents to H for these same N steps.
*computation that halts*… “the Turing machine will halt whenever >>>>>>> it enters a final state” (Linz:1990:234)rrr
Right, so we are concerned about the behavior of the ACTUAL
machine, not a partial simulation of it.
H(D,D) returns non-halting, but D(D) Halts, so the answer is wrong. >>>>>>
When we see (after N steps) that D correctly simulated by H cannot >>>>>>> possibly reach its simulated final state in any finite number of >>>>>>> steps
of correct simulation then we have conclusive proof that D
presents non-
halting behavior to H.
But it isn't "Correctly Simulated by H"
It turns out that the non-halting behavior pattern is correctly
recognized in the first N steps.
Nope, the pattern you detect isn't a "Nobn-Halting" pattern, as is
shown by the fact that D(D) does halt.
It might show that no possible H could simulate the input to a final
state, but that isn't the definition of Halting. Halting is strictly
about the behavior of the machine itself.
When Ĥ is applied to ⟨Ĥ⟩
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
computation that halts… “the Turing machine will halt whenever it enters
a final state” (Linz:1990:234)
Right and Ĥ (Ĥ) will reach Ĥ.qn and halt if H (Ĥ) (Ĥ) goes to qn, as
it must to be saying that its input is non-halting.
This is because embedded_H and H must be identical machines, and thus
do exactly the same thing when given the same input.
Non-halting behavior patterns can be matched in N steps
⟨Ĥ⟩ Halting is reaching its simulated final state of ⟨Ĥ.qn⟩ in a finite
number of steps
Nope, Halting is the MACHINE Ĥ (Ĥ) reaching its final state Ĥ.qn in a
finite number of steps.
You can also use UTM (Ĥ) (Ĥ), which also reaches that final stste,
because it doesn't stop simulating until it reaches a final state, or
it just keeps simulating.
H / embedded_H are NOT a UTM, as they don't have that necessary property.
Except you have defined that H, and thus embeded_H doesn't do (c), but
N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are the actual
behavior of this input:
(a) Ĥ.q0 The input ⟨Ĥ⟩ is copied then transitions to embedded_H
(b) embedded_H is applied to ⟨Ĥ⟩ ⟨Ĥ⟩ (input and copy) which simulates
⟨Ĥ⟩ applied to ⟨Ĥ⟩
(c) *which begins at its own simulated ⟨Ĥ.q0⟩ to repeat the process* >>
when it sees the attempted to go into embedded_H with the same input
actually aborts its simulation and goes to Ĥ.qn which causes the
machine Ĥ to halt.
embedded_H could do (c) 10,000 times before aborting, which would have to
be the actual behavior of the actual input because embedded_H remains in
pure UTM mode until it aborts.
How many times does it take for you to understand that ⟨Ĥ⟩ can't possibly reach ⟨Ĥ.qn⟩ because of its pathological relationship to embedded_H?
On 4/18/23 11:21 PM, olcott wrote:
On 4/18/2023 10:10 PM, Richard Damon wrote:
On 4/18/23 10:57 PM, olcott wrote:
On 4/18/2023 9:31 PM, Richard Damon wrote:
On 4/18/23 7:13 PM, olcott wrote:
On 4/18/2023 5:30 PM, Richard Damon wrote:
On 4/18/23 11:58 AM, olcott wrote:
On 4/18/2023 6:32 AM, Richard Damon wrote:
On 4/18/23 1:00 AM, olcott wrote:
You agreed that the first N steps are correctly simulated.
A simulating halt decider correctly predicts whether or not its >>>>>>>>>> correctly simulated input can possibly reach its own final >>>>>>>>>> state and
halt. It does this by correctly recognizing several
non-halting behavior
patterns in a finite number of steps of correct simulation. >>>>>>>>>> Inputs that
do terminate are simply simulated until they complete.
Except t doesn't o this for the "pathological" program.
The "Pathological Program" when built on such a Decider that >>>>>>>>> does give an answer, which you say will be non-halting, and
then "Correctly Simulated" by giving it representation to a
UTM, we see that the simulation reaches a final state.
Thus, your H was WRONG t make the answer. And the problem is >>>>>>>>> you have added a pattern that isn't always non-halting.
When a simulating halt decider correctly simulates N steps of >>>>>>>>>> its input
it derives the exact same N steps that a pure UTM would derive >>>>>>>>>> because
it is itself a UTM with extra features.
But if ISN'T a "UTM" any more, because some of the features you >>>>>>>>> added have removed essential features needed for it to be an >>>>>>>>> actual UTM. That you make this claim shows you don't actually >>>>>>>>> know what a UTM is.
This is like saying a NASCAR Racing Car is a Street Legal
vehicle, since it started as one and just had some extra
features axded.
My reviewers cannot show that any of the extra features added >>>>>>>>>> to the UTM
change the behavior of the simulated input for the first N >>>>>>>>>> steps of simulation:
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it >>>>>>>>>> (c) Even aborting the simulation after N steps doesn't change >>>>>>>>>> the first N steps.
No one claims that it doesn't correctly reproduce the first N >>>>>>>>> steps of the behavior, that is a Strawman argumen.
Because of all this we can know that the first N steps of input D >>>>>>>>>> simulated by simulating halt decider H are the actual behavior >>>>>>>>>> that D
presents to H for these same N steps.
*computation that halts*… “the Turing machine will halt >>>>>>>>>> whenever it enters a final state” (Linz:1990:234)rrr
Right, so we are concerned about the behavior of the ACTUAL
machine, not a partial simulation of it.
H(D,D) returns non-halting, but D(D) Halts, so the answer is >>>>>>>>> wrong.
When we see (after N steps) that D correctly simulated by H >>>>>>>>>> cannot
possibly reach its simulated final state in any finite number >>>>>>>>>> of steps
of correct simulation then we have conclusive proof that D >>>>>>>>>> presents non-
halting behavior to H.
But it isn't "Correctly Simulated by H"
It turns out that the non-halting behavior pattern is correctly >>>>>>>> recognized in the first N steps.
Nope, the pattern you detect isn't a "Nobn-Halting" pattern, as
is shown by the fact that D(D) does halt.
It might show that no possible H could simulate the input to a
final state, but that isn't the definition of Halting. Halting is >>>>>>> strictly about the behavior of the machine itself.
When Ĥ is applied to ⟨Ĥ⟩
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
computation that halts… “the Turing machine will halt whenever it >>>>>> enters
a final state” (Linz:1990:234)
Right and Ĥ (Ĥ) will reach Ĥ.qn and halt if H (Ĥ) (Ĥ) goes to qn, >>>>> as it must to be saying that its input is non-halting.
This is because embedded_H and H must be identical machines, and
thus do exactly the same thing when given the same input.
Non-halting behavior patterns can be matched in N steps
⟨Ĥ⟩ Halting is reaching its simulated final state of ⟨Ĥ.qn⟩ in a
finite
number of steps
Nope, Halting is the MACHINE Ĥ (Ĥ) reaching its final state Ĥ.qn in >>>>> a finite number of steps.
You can also use UTM (Ĥ) (Ĥ), which also reaches that final stste, >>>>> because it doesn't stop simulating until it reaches a final state,
or it just keeps simulating.
H / embedded_H are NOT a UTM, as they don't have that necessary
property.
N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are the actual >>>>>> behavior of this input:
(a) Ĥ.q0 The input ⟨Ĥ⟩ is copied then transitions to embedded_H >>>>>> (b) embedded_H is applied to ⟨Ĥ⟩ ⟨Ĥ⟩ (input and copy) which >>>>>> simulates ⟨Ĥ⟩ applied to ⟨Ĥ⟩
(c) *which begins at its own simulated ⟨Ĥ.q0⟩ to repeat the process*
Except you have defined that H, and thus embeded_H doesn't do (c),
but when it sees the attempted to go into embedded_H with the same
input actually aborts its simulation and goes to Ĥ.qn which causes
the machine Ĥ to halt.
embedded_H could do (c) 10,000 times before aborting which would
have to
be the actual behavior of the actual input because embedded_H
remains in
pure UTM mode until it aborts.
No such thing. UTM isn't a "Mode" but an identity.
if embedded_H aborts its simulation, it NEVER was a UTM. PERIOD.
But that is flat out not the truth. The input simulated by embedded_H
necessarily must have the exact same behavior as simulated by a pure UTM
until the simulation of this input is aborted, because aborting the
simulation of its input is the only one of the three features added to a UTM
that changes the behavior of its input relative to a pure UTM.
Which makes it NOT a UTM, so embedded_H doesn't actually act like a UTM.
It MUST act like H, or you have LIED about following the requirement for building Ĥ.
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it.
(c) Even aborting the simulation after N steps doesn't change the
first N steps.
N steps could be 10,000 recursive simulations.
Right, and then one more recursive simulation by a REAL UTM past that
point will see the outer embedded_H abort its simulation, go to Qn, and Ĥ will then halt, showing embedded_H was wrong to say it couldn't.
Aborted simulations don't, by themselves, show non-halting behavior.
The only case that this doesn't work is if embedded_H actually never
does abort, but then H can't either, so H doesn't answer, and fails to
be a decider.
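A deliberately trivial sketch in C of this point, unrelated to embedded_H specifically: a watcher that gives up after N steps has shown nothing about a program that simply halts a little later.

  #include <stdio.h>

  /* A toy "program": counts down from k, then halts.
     Returns the number of steps it actually takes. */
  static int run_to_completion(int k)
  {
      int steps = 0;
      while (k > 0) { k--; steps++; }
      return steps;
  }

  /* A step-limited "watcher": simulates at most max_steps steps and gives
     up (returns 0, "did not see it halt") if the program has not halted yet. */
  static int watch_n_steps(int k, int max_steps)
  {
      int steps = 0;
      while (k > 0 && steps < max_steps) { k--; steps++; }
      return (k == 0);   /* 1 = saw it halt, 0 = gave up */
  }

  int main(void)
  {
      int k = 10000;
      printf("watched 9999 steps, saw halt: %d\n", watch_n_steps(k, 9999));  /* 0 */
      printf("actual run halts after %d steps\n", run_to_completion(k));     /* 10000 */
      return 0;
  }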
On 4/18/2023 10:10 PM, Richard Damon wrote:
On 4/18/23 10:57 PM, olcott wrote:
On 4/18/2023 9:31 PM, Richard Damon wrote:
On 4/18/23 7:13 PM, olcott wrote:
On 4/18/2023 5:30 PM, Richard Damon wrote:
On 4/18/23 11:58 AM, olcott wrote:
On 4/18/2023 6:32 AM, Richard Damon wrote:
On 4/18/23 1:00 AM, olcott wrote:
You agreed that the first N steps are correctly simulated.
A simulating halt decider correctly predicts whether or not its >>>>>>>>> correctly simulated input can possibly reach its own final
state and
halt. It does this by correctly recognizing several non-halting >>>>>>>>> behavior
patterns in a finite number of steps of correct simulation.
Inputs that
do terminate are simply simulated until they complete.
Except t doesn't o this for the "pathological" program.
The "Pathological Program" when built on such a Decider that
does give an answer, which you say will be non-halting, and then >>>>>>>> "Correctly Simulated" by giving it representation to a UTM, we >>>>>>>> see that the simulation reaches a final state.
Thus, your H was WRONG t make the answer. And the problem is you >>>>>>>> have added a pattern that isn't always non-halting.
When a simulating halt decider correctly simulates N steps of >>>>>>>>> its input
it derives the exact same N steps that a pure UTM would derive >>>>>>>>> because
it is itself a UTM with extra features.
But if ISN'T a "UTM" any more, because some of the features you >>>>>>>> added have removed essential features needed for it to be an
actual UTM. That you make this claim shows you don't actually
know what a UTM is.
This is like saying a NASCAR Racing Car is a Street Legal
vehicle, since it started as one and just had some extra
features axded.
My reviewers cannot show that any of the extra features added >>>>>>>>> to the UTM
change the behavior of the simulated input for the first N
steps of simulation:
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it >>>>>>>>> (c) Even aborting the simulation after N steps doesn't change >>>>>>>>> the first N steps.
No one claims that it doesn't correctly reproduce the first N
steps of the behavior, that is a Strawman argumen.
Because of all this we can know that the first N steps of input D >>>>>>>>> simulated by simulating halt decider H are the actual behavior >>>>>>>>> that D
presents to H for these same N steps.
*computation that halts*… “the Turing machine will halt
whenever it enters a final state” (Linz:1990:234)rrr
Right, so we are concerned about the behavior of the ACTUAL
machine, not a partial simulation of it.
H(D,D) returns non-halting, but D(D) Halts, so the answer is wrong. >>>>>>>>
When we see (after N steps) that D correctly simulated by H cannot >>>>>>>>> possibly reach its simulated final state in any finite number >>>>>>>>> of steps
of correct simulation then we have conclusive proof that D
presents non-
halting behavior to H.
But it isn't "Correctly Simulated by H"
It turns out that the non-halting behavior pattern is correctly
recognized in the first N steps.
Nope, the pattern you detect isn't a "Nobn-Halting" pattern, as is >>>>>> shown by the fact that D(D) does halt.
It might show that no possible H could simulate the input to a
final state, but that isn't the definition of Halting. Halting is
strictly about the behavior of the machine itself.
When Ĥ is applied to ⟨Ĥ⟩
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
computation that halts… “the Turing machine will halt whenever it >>>>> enters
a final state” (Linz:1990:234)
Right and Ĥ (Ĥ) will reach Ĥ.qn and halt if H (Ĥ) (Ĥ) goes to qn, as >>>> it must to be saying that its input is non-halting.
This is because embedded_H and H must be identical machines, and
thus do exactly the same thing when given the same input.
Non-halting behavior patterns can be matched in N steps
⟨Ĥ⟩ Halting is reaching its simulated final state of ⟨Ĥ.qn⟩ in a
finite
number of steps
Nope, Halting is the MACHINE Ĥ (Ĥ) reaching its final state Ĥ.qn in >>>> a finite number of steps.
You can also use UTM (Ĥ) (Ĥ), which also reaches that final stste,
because it doesn't stop simulating until it reaches a final state,
or it just keeps simulating.
H / embedded_H are NOT a UTM, as they don't have that necessary
property.
Except you have defined that H, and thus embeded_H doesn't do (c),
N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are the actual >>>>> behavior of this input:
(a) Ĥ.q0 The input ⟨Ĥ⟩ is copied then transitions to embedded_H >>>>> (b) embedded_H is applied to ⟨Ĥ⟩ ⟨Ĥ⟩ (input and copy) which >>>>> simulates ⟨Ĥ⟩ applied to ⟨Ĥ⟩
(c) *which begins at its own simulated ⟨Ĥ.q0⟩ to repeat the process* >>>>
but when it sees the attempted to go into embedded_H with the same
input actually aborts its simulation and goes to Ĥ.qn which causes
the machine Ĥ to halt.
embedded_H could do (c) 10,000 times before aborting which would have to >>> be the actual behavior of the actual input because embedded_H remains in >>> pure UTM mode until it aborts.
No such thing. UTM isn't a "Mode" but an identity.
if embedded_H aborts its simulation, it NEVER was a UTM. PERIOD.
But that is flat out not the truth. The input simulated by embedded_H
necessarily must have the exact same behavior as simulated by a pure UTM
until the simulation of this input is aborted, because aborting the
simulation of its input is the only one of the three features added to a UTM
that changes the behavior of its input relative to a pure UTM.
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it.
(c) Even aborting the simulation after N steps doesn't change the first
N steps.
N steps could be 10,000 recursive simulations.
On 4/18/23 10:57 PM, olcott wrote:
But that is flat out not the truth. The input simulated by embedded_H necessarily must have exact same behavior as simulated by a pure UTM
On 4/18/2023 9:31 PM, Richard Damon wrote:
On 4/18/23 7:13 PM, olcott wrote:
On 4/18/2023 5:30 PM, Richard Damon wrote:
On 4/18/23 11:58 AM, olcott wrote:
On 4/18/2023 6:32 AM, Richard Damon wrote:
On 4/18/23 1:00 AM, olcott wrote:
You agreed that the first N steps are correctly simulated.
A simulating halt decider correctly predicts whether or not its >>>>>>>> correctly simulated input can possibly reach its own final state >>>>>>>> and
halt. It does this by correctly recognizing several non-halting >>>>>>>> behavior
patterns in a finite number of steps of correct simulation.
Inputs that
do terminate are simply simulated until they complete.
Except t doesn't o this for the "pathological" program.
The "Pathological Program" when built on such a Decider that does >>>>>>> give an answer, which you say will be non-halting, and then
"Correctly Simulated" by giving it representation to a UTM, we
see that the simulation reaches a final state.
Thus, your H was WRONG t make the answer. And the problem is you >>>>>>> have added a pattern that isn't always non-halting.
When a simulating halt decider correctly simulates N steps of
its input
it derives the exact same N steps that a pure UTM would derive >>>>>>>> because
it is itself a UTM with extra features.
But if ISN'T a "UTM" any more, because some of the features you
added have removed essential features needed for it to be an
actual UTM. That you make this claim shows you don't actually
know what a UTM is.
This is like saying a NASCAR Racing Car is a Street Legal
vehicle, since it started as one and just had some extra features >>>>>>> axded.
My reviewers cannot show that any of the extra features added to >>>>>>>> the UTM
change the behavior of the simulated input for the first N steps >>>>>>>> of simulation:
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it
(c) Even aborting the simulation after N steps doesn't change
the first N steps.
No one claims that it doesn't correctly reproduce the first N
steps of the behavior, that is a Strawman argumen.
Because of all this we can know that the first N steps of input D >>>>>>>> simulated by simulating halt decider H are the actual behavior >>>>>>>> that D
presents to H for these same N steps.
*computation that halts*… “the Turing machine will halt whenever >>>>>>>> it enters a final state” (Linz:1990:234)rrr
Right, so we are concerned about the behavior of the ACTUAL
machine, not a partial simulation of it.
H(D,D) returns non-halting, but D(D) Halts, so the answer is wrong. >>>>>>>
When we see (after N steps) that D correctly simulated by H cannot >>>>>>>> possibly reach its simulated final state in any finite number of >>>>>>>> steps
of correct simulation then we have conclusive proof that D
presents non-
halting behavior to H.
But it isn't "Correctly Simulated by H"
It turns out that the non-halting behavior pattern is correctly
recognized in the first N steps.
Nope, the pattern you detect isn't a "Nobn-Halting" pattern, as is
shown by the fact that D(D) does halt.
It might show that no possible H could simulate the input to a
final state, but that isn't the definition of Halting. Halting is
strictly about the behavior of the machine itself.
When Ĥ is applied to ⟨Ĥ⟩
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
computation that halts… “the Turing machine will halt whenever it
enters
a final state” (Linz:1990:234)
Right and Ĥ (Ĥ) will reach Ĥ.qn and halt if H (Ĥ) (Ĥ) goes to qn, as >>> it must to be saying that its input is non-halting.
This is because embedded_H and H must be identical machines, and thus
do exactly the same thing when given the same input.
Non-halting behavior patterns can be matched in N steps
⟨Ĥ⟩ Halting is reaching its simulated final state of ⟨Ĥ.qn⟩ in a finite
number of steps
Nope, Halting is the MACHINE Ĥ (Ĥ) reaching its final state Ĥ.qn in a >>> finite number of steps.
You can also use UTM (Ĥ) (Ĥ), which also reaches that final stste,
because it doesn't stop simulating until it reaches a final state, or
it just keeps simulating.
H / embedded_H are NOT a UTM, as they don't have that necessary
property.
Except you have defined that H, and thus embeded_H doesn't do (c),
N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are the actual
behavior of this input:
(a) Ĥ.q0 The input ⟨Ĥ⟩ is copied then transitions to embedded_H
(b) embedded_H is applied to ⟨Ĥ⟩ ⟨Ĥ⟩ (input and copy) which
simulates ⟨Ĥ⟩ applied to ⟨Ĥ⟩
(c) *which begins at its own simulated ⟨Ĥ.q0⟩ to repeat the process* >>>
but when it sees the attempted to go into embedded_H with the same
input actually aborts its simulation and goes to Ĥ.qn which causes
the machine Ĥ to halt.
embedded_H could do (c) 10,000 times before aborting which would have to
be the actual behavior of the actual input because embedded_H remains in
pure UTM mode until it aborts.
No such thing. UTM isn't a "Mode" but an identity.
if embedded_H aborts its simulation, it NEVER was a UTM. PERIOD.
*You keep slip-sliding with the fallacy of equivocation*
The actual simulated input ⟨Ĥ⟩, from which embedded_H must compute its mapping, never reaches its simulated final state of ⟨Ĥ.qn⟩ even after 10,000 necessarily correct recursive simulations, because ⟨Ĥ⟩ is defined to have a pathological relationship to embedded_H.
On 4/18/23 11:48 PM, olcott wrote:
*You keep slip sliding with the fallacy of equivocation error*
The actual simulated input: ⟨Ĥ⟩ that embedded_H must compute its mapping
from never reaches its simulated final state of ⟨Ĥ.qn⟩ even after 10,000
necessarily correct recursive simulations because ⟨Ĥ⟩ is defined to have
a pathological relationship to embedded_H.
And YOU keep on falling into your Strawman error. The question is NOT
what the "simulation by H" shows; it is the actual behavior of
the actual machine the input represents.
On 4/18/2023 4:55 PM, Mr Flibble wrote:
On 18/04/2023 4:58 pm, olcott wrote:
On 4/18/2023 6:32 AM, Richard Damon wrote:
On 4/18/23 1:00 AM, olcott wrote:
You agreed that the first N steps are correctly simulated.
A simulating halt decider correctly predicts whether or not its
correctly simulated input can possibly reach its own final state and >>>>> halt. It does this by correctly recognizing several non-halting
behavior
patterns in a finite number of steps of correct simulation. Inputs
that
do terminate are simply simulated until they complete.
Except t doesn't o this for the "pathological" program.
The "Pathological Program" when built on such a Decider that does
give an answer, which you say will be non-halting, and then
"Correctly Simulated" by giving it representation to a UTM, we see
that the simulation reaches a final state.
Thus, your H was WRONG t make the answer. And the problem is you
have added a pattern that isn't always non-halting.
When a simulating halt decider correctly simulates N steps of its
input
it derives the exact same N steps that a pure UTM would derive because >>>>> it is itself a UTM with extra features.
But if ISN'T a "UTM" any more, because some of the features you
added have removed essential features needed for it to be an actual
UTM. That you make this claim shows you don't actually know what a
UTM is.
This is like saying a NASCAR Racing Car is a Street Legal vehicle,
since it started as one and just had some extra features axded.
My reviewers cannot show that any of the extra features added to
the UTM
change the behavior of the simulated input for the first N steps of
simulation:
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it
(c) Even aborting the simulation after N steps doesn't change the
first N steps.
No one claims that it doesn't correctly reproduce the first N steps
of the behavior, that is a Strawman argumen.
Because of all this we can know that the first N steps of input D
simulated by simulating halt decider H are the actual behavior that D >>>>> presents to H for these same N steps.
*computation that halts*… “the Turing machine will halt whenever it >>>>> enters a final state” (Linz:1990:234)rrr
Right, so we are concerned about the behavior of the ACTUAL machine,
not a partial simulation of it.
H(D,D) returns non-halting, but D(D) Halts, so the answer is wrong.
When we see (after N steps) that D correctly simulated by H cannot
possibly reach its simulated final state in any finite number of steps >>>>> of correct simulation then we have conclusive proof that D presents
non-
halting behavior to H.
But it isn't "Correctly Simulated by H"
It turns out that the non-halting behavior pattern is correctly
recognized in the first N steps.
Your assumption that a program that calls H is non-halting is erroneous:
My new paper anchors its ideas in actual Turing machines so it is unequivocal. The first two pages re only about the Linz Turing
machine based proof.
The H/D material is now on a single page and all reference
to the x86 language has been stripped and replaced with
analysis entirely in C.
With this new paper even Richard admits that the first N steps
UTM based simulated by a simulating halt decider are necessarily the
actual behavior of these N steps.
*Simulating (partial) Halt Deciders Defeat the Halting Problem Proofs* https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs
void Px(void (*x)())
{
(void) H(x, x);
return;
}
Px halts (it discards the result that H returns); your decider thinks
that Px is non-halting which is an obvious error due to a design flaw
in the architecture of your decider. Only the Flibble Signaling
Simulating Halt Decider (SSHD) correctly handles this case.
On 18/04/2023 11:39 pm, olcott wrote:
On 4/18/2023 4:55 PM, Mr Flibble wrote:
On 18/04/2023 4:58 pm, olcott wrote:
On 4/18/2023 6:32 AM, Richard Damon wrote:
On 4/18/23 1:00 AM, olcott wrote:
You agreed that the first N steps are correctly simulated.
A simulating halt decider correctly predicts whether or not its
correctly simulated input can possibly reach its own final state and >>>>>> halt. It does this by correctly recognizing several non-halting
behavior
patterns in a finite number of steps of correct simulation. Inputs >>>>>> that
do terminate are simply simulated until they complete.
Except t doesn't o this for the "pathological" program.
The "Pathological Program" when built on such a Decider that does
give an answer, which you say will be non-halting, and then
"Correctly Simulated" by giving it representation to a UTM, we see
that the simulation reaches a final state.
Thus, your H was WRONG t make the answer. And the problem is you
have added a pattern that isn't always non-halting.
When a simulating halt decider correctly simulates N steps of its
input
it derives the exact same N steps that a pure UTM would derive
because
it is itself a UTM with extra features.
But if ISN'T a "UTM" any more, because some of the features you
added have removed essential features needed for it to be an actual
UTM. That you make this claim shows you don't actually know what a
UTM is.
This is like saying a NASCAR Racing Car is a Street Legal vehicle,
since it started as one and just had some extra features axded.
My reviewers cannot show that any of the extra features added to
the UTM
change the behavior of the simulated input for the first N steps
of simulation:
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it
(c) Even aborting the simulation after N steps doesn't change the
first N steps.
No one claims that it doesn't correctly reproduce the first N steps
of the behavior, that is a Strawman argumen.
Because of all this we can know that the first N steps of input D
simulated by simulating halt decider H are the actual behavior that D >>>>>> presents to H for these same N steps.
*computation that halts*… “the Turing machine will halt whenever >>>>>> it enters a final state” (Linz:1990:234)rrr
Right, so we are concerned about the behavior of the ACTUAL
machine, not a partial simulation of it.
H(D,D) returns non-halting, but D(D) Halts, so the answer is wrong.
When we see (after N steps) that D correctly simulated by H cannot >>>>>> possibly reach its simulated final state in any finite number of
steps
of correct simulation then we have conclusive proof that D
presents non-
halting behavior to H.
But it isn't "Correctly Simulated by H"
It turns out that the non-halting behavior pattern is correctly
recognized in the first N steps.
Your assumption that a program that calls H is non-halting is erroneous: >>>
My new paper anchors its ideas in actual Turing machines so it is
unequivocal. The first two pages re only about the Linz Turing
machine based proof.
The H/D material is now on a single page and all reference
to the x86 language has been stripped and replaced with
analysis entirely in C.
With this new paper even Richard admits that the first N steps
UTM based simulated by a simulating halt decider are necessarily the
actual behavior of these N steps.
*Simulating (partial) Halt Deciders Defeat the Halting Problem Proofs*
https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs
void Px(void (*x)())
{
(void) H(x, x);
return;
}
Px halts (it discards the result that H returns); your decider thinks
that Px is non-halting which is an obvious error due to a design flaw
in the architecture of your decider. Only the Flibble Signaling
Simulating Halt Decider (SSHD) correctly handles this case.
Nope. For H to be a halt decider it must return a halt decision to its
caller in finite time
On 19/04/2023 8:39 pm, olcott wrote:
On 4/19/2023 1:47 PM, Mr Flibble wrote:
On 18/04/2023 11:39 pm, olcott wrote:
On 4/18/2023 4:55 PM, Mr Flibble wrote:
On 18/04/2023 4:58 pm, olcott wrote:
On 4/18/2023 6:32 AM, Richard Damon wrote:
[...]
Nope. For H to be a halt decider it must return a halt decision to
its caller in finite time
Although H must always return to some caller, H is not allowed to return
to any caller that essentially calls H in infinite recursion.
The Flibble Signaling Simulating Halt Decider (SSHD) does not have any
infinite recursion, thereby proving that such recursion is not a
necessary feature of SHDs invoked from the program being analyzed; the
infinite recursion in your H is present because your H has a critical
design flaw.
/Flibble
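The "infinite recursion" both sides refer to can be seen in a toy
model. The sketch below is neither olcott's H nor the Flibble SSHD; it
only illustrates the regress H(D,D) -> D(D) -> H(D,D) -> ... that
occurs if a decider simply runs its input, and why some cutoff (an
abort, a signal, or the artificial depth cap used here) is needed for
control to return to any caller at all.

#include <stdio.h>

/* Toy model only: a fake "decider" that runs its input directly and
   gives up after a fixed nesting depth so that the demo terminates. */

static int depth = 0;          /* how many nested H invocations are active */

int H(void (*x)());

void D(void)
{
    H(D);                      /* D consults H about itself, as in D(D) */
}

int H(void (*x)())
{
    printf("H invoked, nesting depth %d\n", ++depth);
    if (depth >= 5) {          /* artificial cutoff standing in for "abort" */
        depth--;
        return 0;              /* report "non-halting" and unwind          */
    }
    x();                       /* naively run the input                    */
    depth--;
    return 1;
}

int main(void)
{
    H(D);                      /* prints five nesting levels, then halts   */
    return 0;
}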
On 4/19/2023 1:47 PM, Mr Flibble wrote:
On 18/04/2023 11:39 pm, olcott wrote:
On 4/18/2023 4:55 PM, Mr Flibble wrote:
On 18/04/2023 4:58 pm, olcott wrote:
On 4/18/2023 6:32 AM, Richard Damon wrote:
[...]
Nope. For H to be a halt decider it must return a halt decision to its
caller in finite time
Although H must always return to some caller, H is not allowed to return
to any caller that essentially calls H in infinite recursion.
On 4/19/2023 3:32 PM, Mr Flibble wrote:
On 19/04/2023 8:39 pm, olcott wrote:
On 4/19/2023 1:47 PM, Mr Flibble wrote:
On 18/04/2023 11:39 pm, olcott wrote:
[...]
The Flibble Signaling Simulating Halt Decider (SSHD) does not have any
infinite recursion thereby proving that
It overrode that behavior that was specified by the machine code for Px.
On 4/19/2023 6:14 AM, Richard Damon wrote:
On 4/18/23 11:48 PM, olcott wrote:
*You keep slip sliding with the fallacy of equivocation error*
The actual simulated input ⟨Ĥ⟩, from which embedded_H must compute its
mapping, never reaches its simulated final state of ⟨Ĥ.qn⟩ even after
10,000 necessarily correct recursive simulations, because ⟨Ĥ⟩ is
defined to have a pathological relationship to embedded_H.
And YOU keep falling into your Strawman error. The question is NOT
what the "simulation by H" shows, but what the actual behavior of the
actual machine the input represents is.
When a simulating halt decider correctly simulates N steps of its input
it derives the exact same N steps that a pure UTM would derive because
it is itself a UTM with extra features.
My reviewers cannot show that any of the extra features added to the
UTM change the behavior of the simulated input for the first N steps
of simulation:
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it
(c) Even aborting the simulation after N steps doesn't change the first
N steps.
The actual behavior that the actual input ⟨Ĥ⟩ represents is the
behavior of the simulation of N steps by embedded_H, because embedded_H
has the exact same behavior as a UTM for these first N steps, and you
already agreed with this.
Did you quit believing in UTMs?
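olcott's (a)-(c) claim above is narrower than it may look: it only says
that a run truncated after N steps agrees with an unlimited run on
those first N steps. A toy illustration follows (this is not the
thread's H and not a UTM, just a stepped loop over a stand-in
instruction stream).

#include <stdio.h>
#include <string.h>

/* Toy illustration: "simulate" a fixed instruction stream, either to
   completion or truncated after 'limit' steps, recording the trace. */

enum { PROG_LEN = 6 };
static const char prog[PROG_LEN + 1] = "abcdef";   /* stand-in program  */

static int simulate(int limit, char *trace)
{
    int steps = 0;
    while (steps < PROG_LEN && steps < limit) {
        trace[steps] = prog[steps];                /* "execute" one step */
        steps++;
    }
    trace[steps] = '\0';
    return steps;
}

int main(void)
{
    char full[PROG_LEN + 1], partial[PROG_LEN + 1];
    const int n = 3;                               /* abort after N steps */

    simulate(PROG_LEN, full);                      /* unlimited run       */
    simulate(n, partial);                          /* aborted run         */

    printf("first %d steps agree: %s\n", n,
           strncmp(full, partial, n) == 0 ? "yes" : "no");
    return 0;
}

Whether those N steps settle the halting status of the complete
computation is exactly what the rest of the thread disputes.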
On 19/04/2023 10:10 pm, olcott wrote:
On 4/19/2023 3:32 PM, Mr Flibble wrote:
On 19/04/2023 8:39 pm, olcott wrote:
On 4/19/2023 1:47 PM, Mr Flibble wrote:
On 18/04/2023 11:39 pm, olcott wrote:
[...]
It overrode that behavior that was specified by the machine code for Px.
Nope. Your SHD is not a halt decider, as it has a critical design flaw:
it doesn't correctly report that Px halts.
/Flibble.
On 4/19/23 11:05 AM, olcott wrote:
On 4/19/2023 6:14 AM, Richard Damon wrote:
On 4/18/23 11:48 PM, olcott wrote:
*You keep slip sliding with the fallacy of equivocation error*
The actual simulated input ⟨Ĥ⟩, from which embedded_H must compute its
mapping, never reaches its simulated final state of ⟨Ĥ.qn⟩ even after
10,000 necessarily correct recursive simulations, because ⟨Ĥ⟩ is
defined to have a pathological relationship to embedded_H.
And YOU keep falling into your Strawman error. The question is NOT
what the "simulation by H" shows, but what the actual behavior of the
actual machine the input represents is.
When a simulating halt decider correctly simulates N steps of its input
it derives the exact same N steps that a pure UTM would derive because
it is itself a UTM with extra features.
No, it ISN'T a UTM because it fails to meet the definition of a UTM.
You are just proving that you are a pathological liar that doesn't know
what he is talking about.
My reviewers cannot show that any of the extra features added to the UTM
change the behavior of the simulated input for the first N steps of
simulation:
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it
(c) Even aborting the simulation after N steps doesn't change the
first N steps.
Which don't matter, as the question
The actual behavior that the actual input: ⟨Ĥ⟩ represents is the
behavior of the simulation of N steps by embedded_H because embedded_H
has the exact same behavior as a UTM for these first N steps, and you
already agreed with this.
No, the actual behavior of the input is what the MACHINE Ĥ applied to
(Ĥ) does.
On 4/19/2023 5:49 PM, Richard Damon wrote:
On 4/19/23 11:05 AM, olcott wrote:
[...]
No, the actual behavior of the input is what the MACHINE Ĥ applied to
(Ĥ) does.
Because embedded_H is a UTM that has been augmented with three features
that cannot possibly cause its simulation of its input to diverge from
the simulation of a pure UTM for the first N steps of simulation, we
know that it necessarily does provide the actual behavior specified by
this input for these N steps.
Because these N steps can include 10,000 recursive simulations of ⟨Ĥ⟩
by embedded_H, these recursive simulations <are> the actual behavior
specified by this input.
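For readers who have not seen the Linz material referenced here: on the
standard reading of Linz's proof (restated only for context, not quoted
from this thread), Ĥ is built from H by first duplicating the input and
then running the embedded copy of H (embedded_H) on the duplicated
description, so that:

Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞   if embedded_H decides its input halts
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn      if embedded_H decides its input does not halt

So ⟨Ĥ⟩ is the machine description that embedded_H is applied to, and
Ĥ.qn is the final state that the simulated ⟨Ĥ⟩ would have to reach in
order to halt, in the sense being argued about here.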
On 4/19/23 7:16 PM, olcott wrote:
On 4/19/2023 5:49 PM, Richard Damon wrote:
On 4/19/23 11:05 AM, olcott wrote:
[...]
Because embedded_H is a UTM that has been augmented with three features
that cannot possibly cause its simulation of its input to diverge from
the simulation of a pure UTM for the first N steps of simulation, we
know that it necessarily does provide the actual behavior specified by
this input for these N steps.
And it is no longer a UTM, since it fails to meet the requirement of a UTM.
On 4/19/2023 7:07 PM, Richard Damon wrote:
On 4/19/23 7:16 PM, olcott wrote:
[...]
And it is no longer a UTM, since it fails to meet the requirement of a
UTM.
As you already agreed:
The behavior of N steps of ⟨Ĥ⟩ simulated by embedded_H is the actual
behavior of these N steps because
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it
(c) Even aborting the simulation after N steps doesn't change the
first N steps.
On 4/19/23 8:31 PM, olcott wrote:
On 4/19/2023 7:07 PM, Richard Damon wrote:
[...]
The behavior of N steps of ⟨Ĥ⟩ simulated by embedded_H is the actual
behavior of these N steps because
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it
(c) Even aborting the simulation after N steps doesn't change the
first N steps.
But a UTM doesn't simulate just "N" steps of its input, but ALL of them.
On 4/19/2023 7:45 PM, Richard Damon wrote:
On 4/19/23 8:31 PM, olcott wrote:
On 4/19/2023 7:07 PM, Richard Damon wrote:
[...]
But a UTM doesn't simulate just "N" steps of its input, but ALL of them.
Yet when embedded_H simulates N steps of ⟨Ĥ⟩ this is the actual
behavior of ⟨Ĥ⟩ for these N steps, thus when embedded_H simulates
10,000 recursive simulations these are the actual behavior of ⟨Ĥ⟩.
On 4/19/2023 8:08 PM, Richard Damon wrote:
On 4/19/23 8:52 PM, olcott wrote:
[...]
Yes, but doesn't actually show the ACTUAL behavior of the input as
defined,
There is only one actual behavior of the actual input and this behavior
is correctly demonstrated by N steps of ⟨Ĥ⟩ simulated by embedded_H.
If you simply don't "believe in" UTMs then you might not see this
correctly.
If you fully comprehend UTMs then you understand that 10,000 recursive
simulations of ⟨Ĥ⟩ by embedded_H are the actual behavior of ⟨Ĥ⟩.
On 4/19/23 8:52 PM, olcott wrote:
[...]
Yet when embedded_H simulates N steps of ⟨Ĥ⟩ this is the actual
behavior of ⟨Ĥ⟩ for these N steps, thus when embedded_H simulates
10,000 recursive simulations these are the actual behavior of ⟨Ĥ⟩.
Yes, but doesn't actually show the ACTUAL behavior of the input as
defined,
On 4/19/23 9:25 PM, olcott wrote:
On 4/19/2023 8:08 PM, Richard Damon wrote:
On 4/19/23 8:52 PM, olcott wrote:
[...]
Yes, but doesn't actually show the ACTUAL behavior of the input as
defined,
There is only one actual behavior of the actual input and this behavior
is correctly demonstrated by N steps of ⟨Ĥ⟩ simulated by embedded_H.
Nope. Read the problem definition.
The behavior to be decided by a Halt Decider is the behavior of the
ACTUAL MACHINE which is described by the input.
On 4/19/2023 8:38 PM, Richard Damon wrote:
On 4/19/23 9:25 PM, olcott wrote:
On 4/19/2023 8:08 PM, Richard Damon wrote:
[...]
Nope. Read the problem definition.
The behavior to be decided by a Halt Decider is the behavior of the
ACTUAL MACHINE which is described by the input.
No matter what the problem definition says, the actual behavior of the
actual input must necessarily be the N steps simulated by embedded_H.
The only alternative is to simply disbelieve in UTMs.
On 4/19/23 11:29 PM, olcott wrote:
On 4/19/2023 9:16 PM, Richard Damon wrote:
On 4/19/23 9:59 PM, olcott wrote:
[...]
NOPE, Since H isn't a UTM, because it doesn't meet the REQUIREMENTS of
a UTM, the statement is meaningless.
It <is> equivalent to a UTM for the first N steps that can include
10,000 recursive simulations.
Which means it ISN'T the Equivalent of a UTM. PERIOD.
On 4/19/23 9:59 PM, olcott wrote:
[...]
No matter what the problem definition says the actual behavior of the
actual input must necessarily be the N steps simulated by embedded_H.
The only alternative is to simply disbelieve in UTMs.
NOPE, Since H isn't a UTM, because it doesn't meet the REQUIREMENTS of a
UTM, the statement is meaningless.
On 4/19/2023 9:16 PM, Richard Damon wrote:
On 4/19/23 9:59 PM, olcott wrote:It <is> equivalent to a UTM for the first N steps that can include
On 4/19/2023 8:38 PM, Richard Damon wrote:
On 4/19/23 9:25 PM, olcott wrote:
On 4/19/2023 8:08 PM, Richard Damon wrote:Nope, Read the problem definition.
On 4/19/23 8:52 PM, olcott wrote:There is only one actual behavior of the actual input and this
On 4/19/2023 7:45 PM, Richard Damon wrote:
On 4/19/23 8:31 PM, olcott wrote:
On 4/19/2023 7:07 PM, Richard Damon wrote:
On 4/19/23 7:16 PM, olcott wrote:As you already agreed:
On 4/19/2023 5:49 PM, Richard Damon wrote:
On 4/19/23 11:05 AM, olcott wrote:Because embedded_H is a UTM that has been augmented with >>>>>>>>>>> three features
On 4/19/2023 6:14 AM, Richard Damon wrote:
On 4/18/23 11:48 PM, olcott wrote:
*You keep slip sliding with the fallacy of equivocation >>>>>>>>>>>>>>> error*
The actual simulated input: ⟨Ĥ⟩ that embedded_H must >>>>>>>>>>>>>>> compute its mapping
from never reaches its simulated final state of ⟨Ĥ.qn⟩ >>>>>>>>>>>>>>> even after 10,000
necessarily correct recursive simulations because ⟨Ĥ⟩ is >>>>>>>>>>>>>>> defined to have
a pathological relationship to embedded_H.
An YOU keep on falling into your Strawman error. The >>>>>>>>>>>>>> question is NOT what does the "simulation by H" show, but >>>>>>>>>>>>>> what is the actual behavior of the actual machine the >>>>>>>>>>>>>> input represents.
When a simulating halt decider correctly simulates N steps >>>>>>>>>>>>> of its input
it derives the exact same N steps that a pure UTM would >>>>>>>>>>>>> derive because
it is itself a UTM with extra features.
No, it ISN'T a UTM because if fails to meeet the definition >>>>>>>>>>>> of a UTM.
You are just proving that you are a pathological liar that >>>>>>>>>>>> doesn't know what he is talking about.
My reviewers cannot show that any of the extra features >>>>>>>>>>>>> added to the UTM
change the behavior of the simulated input for the first N >>>>>>>>>>>>> steps of
simulation:
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it >>>>>>>>>>>>> (c) Even aborting the simulation after N steps doesn't >>>>>>>>>>>>> change the first N steps.
Which don't matter, as the question
The actual behavior that the actual input: ⟨Ĥ⟩ represents >>>>>>>>>>>>> is the
behavior of the simulation of N steps by embedded_H because >>>>>>>>>>>>> embedded_H
has the exact same behavior as a UTM for these first N >>>>>>>>>>>>> steps, and you
already agreed with this.
No, the actual behavior of the input is what the MACHINE Ĥ >>>>>>>>>>>> applied to (Ĥ) does.
that cannot possibly cause its simulation of its input to >>>>>>>>>>> diverge from
the simulation of a pure UTM for the first N steps of
simulation we know
that it necessarily does provide the actual behavior
specified by this
input for these N steps.
And is no longer a UTM, since if fails to meet the requirement >>>>>>>>>> of a UTM
The behavior of N steps of ⟨Ĥ⟩ simulated by embedded_H must are >>>>>>>>> the actual behavior of these N steps because
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it >>>>>>>>> (c) Even aborting the simulation after N steps doesn't change the >>>>>>>>> first N steps.
But a UTM doesn't simulate just "N" steps of its input, but ALL >>>>>>>> of them.
Yet when embedded_H simulates N steps of ⟨Ĥ⟩ this is the actual >>>>>>> behavior
of ⟨Ĥ⟩ for these N steps, thus when embedded_H simulates 10,000 >>>>>>> recursive simulations these are the actual behavior of ⟨Ĥ⟩. >>>>>>>
Yes, but doesn't actually show the ACTUAL behavior of the input as >>>>>> defined,
behavior
is correctly demonstrated by N steps of ⟨Ĥ⟩ simulated by embedded_H. >>>>
The behavior to be decided by a Halt Decider is the behavior of the
ACTUAL MACHINE which is decribed by the input.
No matter what the problem definition says the actual behavior of the
actual input must necessarily be the N steps simulated by embedded_H.
The only alternative is to simply disbelieve in UTMs.
NOPE, Since H isn't a UTM, because it doesn't meet the REQUIREMENTS of
a UTM, the statement is meaningless.
10,000 recursive simulations.
On 4/19/2023 10:41 PM, Richard Damon wrote:
On 4/19/23 11:29 PM, olcott wrote:
On 4/19/2023 9:16 PM, Richard Damon wrote:
On 4/19/23 9:59 PM, olcott wrote:It <is> equivalent to a UTM for the first N steps that can include
On 4/19/2023 8:38 PM, Richard Damon wrote:
On 4/19/23 9:25 PM, olcott wrote:
On 4/19/2023 8:08 PM, Richard Damon wrote:
On 4/19/23 8:52 PM, olcott wrote:There is only one actual behavior of the actual input and this
On 4/19/2023 7:45 PM, Richard Damon wrote:
On 4/19/23 8:31 PM, olcott wrote:
On 4/19/2023 7:07 PM, Richard Damon wrote:
On 4/19/23 7:16 PM, olcott wrote:As you already agreed:
On 4/19/2023 5:49 PM, Richard Damon wrote:
On 4/19/23 11:05 AM, olcott wrote:Because embedded_H is a UTM that has been augmented with >>>>>>>>>>>>> three features
On 4/19/2023 6:14 AM, Richard Damon wrote:
On 4/18/23 11:48 PM, olcott wrote:
*You keep slip sliding with the fallacy of equivocation >>>>>>>>>>>>>>>>> error*
The actual simulated input: ⟨Ĥ⟩ that embedded_H must >>>>>>>>>>>>>>>>> compute its mapping
from never reaches its simulated final state of ⟨Ĥ.qn⟩ >>>>>>>>>>>>>>>>> even after 10,000
necessarily correct recursive simulations because ⟨Ĥ⟩ >>>>>>>>>>>>>>>>> is defined to have
a pathological relationship to embedded_H.
An YOU keep on falling into your Strawman error. The >>>>>>>>>>>>>>>> question is NOT what does the "simulation by H" show, >>>>>>>>>>>>>>>> but what is the actual behavior of the actual machine >>>>>>>>>>>>>>>> the input represents.
When a simulating halt decider correctly simulates N >>>>>>>>>>>>>>> steps of its input
it derives the exact same N steps that a pure UTM would >>>>>>>>>>>>>>> derive because
it is itself a UTM with extra features.
No, it ISN'T a UTM because if fails to meeet the
definition of a UTM.
You are just proving that you are a pathological liar that >>>>>>>>>>>>>> doesn't know what he is talking about.
My reviewers cannot show that any of the extra features >>>>>>>>>>>>>>> added to the UTM
change the behavior of the simulated input for the first >>>>>>>>>>>>>>> N steps of
simulation:
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it >>>>>>>>>>>>>>> (c) Even aborting the simulation after N steps doesn't >>>>>>>>>>>>>>> change the first N steps.
Which don't matter, as the question
The actual behavior that the actual input: ⟨Ĥ⟩ represents >>>>>>>>>>>>>>> is the
behavior of the simulation of N steps by embedded_H >>>>>>>>>>>>>>> because embedded_H
has the exact same behavior as a UTM for these first N >>>>>>>>>>>>>>> steps, and you
already agreed with this.
No, the actual behavior of the input is what the MACHINE Ĥ >>>>>>>>>>>>>> applied to (Ĥ) does.
that cannot possibly cause its simulation of its input to >>>>>>>>>>>>> diverge from
the simulation of a pure UTM for the first N steps of >>>>>>>>>>>>> simulation we know
that it necessarily does provide the actual behavior >>>>>>>>>>>>> specified by this
input for these N steps.
And is no longer a UTM, since if fails to meet the
requirement of a UTM
The behavior of N steps of ⟨Ĥ⟩ simulated by embedded_H must >>>>>>>>>>> are the actual behavior of these N steps because
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it >>>>>>>>>>> (c) Even aborting the simulation after N steps doesn't change >>>>>>>>>>> the
first N steps.
But a UTM doesn't simulate just "N" steps of its input, but >>>>>>>>>> ALL of them.
Yet when embedded_H simulates N steps of ⟨Ĥ⟩ this is the actual >>>>>>>>> behavior
of ⟨Ĥ⟩ for these N steps, thus when embedded_H simulates 10,000 >>>>>>>>> recursive simulations these are the actual behavior of ⟨Ĥ⟩. >>>>>>>>>
Yes, but doesn't actually show the ACTUAL behavior of the input >>>>>>>> as defined,
behavior
is correctly demonstrated by N steps of ⟨Ĥ⟩ simulated by embedded_H.
Nope, Read the problem definition.
The behavior to be decided by a Halt Decider is the behavior of
the ACTUAL MACHINE which is decribed by the input.
No matter what the problem definition says the actual behavior of the >>>>> actual input must necessarily be the N steps simulated by embedded_H. >>>>>
The only alternative is to simply disbelieve in UTMs.
NOPE, Since H isn't a UTM, because it doesn't meet the REQUIREMENTS
of a UTM, the statement is meaningless.
10,000 recursive simulations.
Which means it ISN'T the Equivalent of a UTM. PERIOD.
Why are you playing head games with this?
You know and acknowledged that the first N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are the actual behavior of ⟨Ĥ⟩ for these first N steps.
Right, but we don't care about that. We care about the TOTAL behavior of
the input, which H never gets to see, because it gives up.
We know that:
H ⟨M⟩ w needs to go to qy if M w will halt when actually run (by the
definition of a halt decider).
H ⟨Ĥ⟩ ⟨Ĥ⟩ goes to qn (by your assertions).
Ĥ ⟨Ĥ⟩ will go to Ĥ.qn and halt when actually run.
THEREFORE, H was just WRONG, BY DEFINITION.
Also UTM ⟨Ĥ⟩ ⟨Ĥ⟩ will halt just like Ĥ ⟨Ĥ⟩.
So, if you want to use the alternate definition, that H ⟨M⟩ w needs to
go to qy if UTM ⟨M⟩ w halts: note that it is UTM ⟨M⟩ w, which ALWAYS
will have the same behavior for a given input, not "the correct
simulation done by H".
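A purely illustrative C sketch of this definitional point (H_stub and D
below are assumptions made only for this example, not anyone's actual
decider): if the decider answers 0 (non-halting) about a program that,
when actually run, reaches its return, then that answer is wrong by the
definition of halting.

#include <stdio.h>

typedef void (*func)(void);

/* Hypothetical stand-in for H: per the assertions above it reports
   0 ("the input does not halt") for the pathological input.         */
int H_stub(func machine, func input)
{
    (void)machine; (void)input;
    return 0;
}

void D(void)                 /* the pathological program built on H_stub */
{
    if (H_stub(D, D) == 0)   /* H_stub claims D does not halt ...        */
        return;              /* ... yet D simply halts                   */
    for (;;) { }             /* never reached with this H_stub           */
}

int main(void)
{
    D();   /* actually running D: it reaches its return and halts */
    printf("D halted, so the 0 (non-halting) verdict was wrong by the "
           "definition of a halt decider.\n");
    return 0;
}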
When Ĥ is applied to ⟨Ĥ⟩
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are the actual behavior of this input:
(a) Ĥ.q0 The input ⟨Ĥ⟩ is copied then transitions to embedded_H
(b) embedded_H is applied to ⟨Ĥ⟩ ⟨Ĥ⟩ (input and copy) which simulates
⟨Ĥ⟩ applied to ⟨Ĥ⟩
(c) *which begins at its own simulated ⟨Ĥ.q0⟩ to repeat the process*
When N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are performed (unless we are playing head games) we can see that ⟨Ĥ⟩ cannot possibly reach its own final state of ⟨Ĥ.qn⟩ in any finite number of steps.
N steps could reach (c) once, or N steps could reach (c) 10,000 times.
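A toy C sketch of this (a)-(c) cycle (illustrative only; the function
name and the 10,000 cut-off are assumptions for the example, not actual
Turing-machine code): each call models one recursive simulation, no
branch of the recursion ever reaches a simulated ⟨Ĥ.qn⟩, and the only
way the run ends is that the outermost simulator stops simulating.

#include <stdio.h>

static unsigned long simulations = 0;

void simulated_H_hat(unsigned long depth_cap)
{
    /* (a) simulated <H^.q0>: the input <H^> is copied                 */
    /* (b) simulated embedded_H is applied to <H^> <H^>                */
    simulations++;
    if (simulations >= depth_cap)
        return;                 /* only exit: the simulator gives up   */
    /* (c) which begins at its own simulated <H^.q0> again             */
    simulated_H_hat(depth_cap);
    /* no branch above ever reaches a simulated <H^.qn>                */
}

int main(void)
{
    simulated_H_hat(10000);     /* e.g. 10,000 recursive simulations   */
    printf("%lu recursive simulations; simulated <H^.qn> never reached\n",
           simulations);
    return 0;
}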
Until the outer embedded_H used by Ĥ reaches the point where it decides
to stop its simulation; then the whole simulation ends with only partial
results, embedded_H goes to qn, and Ĥ halts.
On 4/19/2023 1:47 PM, Mr Flibble wrote:
It turns out that the non-halting behavior pattern is correctly
recognized in the first N steps.
Your assumption that a program that calls H is non-halting is
erroneous:
My new paper anchors its ideas in actual Turing machines so it is
unequivocal. The first two pages are only about the Linz Turing
machine based proof.
The H/D material is now on a single page and all reference
to the x86 language has been stripped and replaced with
analysis entirely in C.
With this new paper even Richard admits that the first N steps
of UTM-based simulation by a simulating halt decider are necessarily the
actual behavior of these N steps.
*Simulating (partial) Halt Deciders Defeat the Halting Problem Proofs*
https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs
void Px(void (*x)())
{
  (void) H(x, x);
  return;
}
Px halts (it discards the result that H returns); your decider
thinks that Px is non-halting which is an obvious error due to a
design flaw in the architecture of your decider. Only the Flibble
Signaling Simulating Halt Decider (SSHD) correctly handles this case.
Nope. For H to be a halt decider it must return a halt decision to its
caller in finite time.
Although H must always return to some caller, H is not allowed to return
to any caller that essentially calls H in infinite recursion.
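A neutral C sketch of the disputed case (H_stub and Px_demo are
assumptions for this example only): it shows only the uncontested
execution-side fact that if H returns any value in finite time, then Px,
which discards that value, reaches its return and halts when run
directly. It does not settle what H's own simulation of Px sees, which
is the point in dispute.

#include <stdio.h>

typedef void (*func)(void);

/* Hypothetical stub for H (an assumption for this example only):
   it returns some verdict, here 0, in finite time.                  */
int H_stub(func machine, func input)
{
    (void)machine; (void)input;
    return 0;
}

void Px_demo(void)
{
    (void)H_stub(Px_demo, Px_demo);  /* result is discarded, as in Px */
    return;                          /* so Px_demo reaches its return */
}

int main(void)
{
    Px_demo();
    printf("Px_demo halted: whenever H returns any value in finite time, "
           "the directly executed Px reaches its return.\n");
    return 0;
}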
The Flibble Signaling Simulating Halt Decider (SSHD) does not have
any infinite recursion thereby proving that
It overrode that behavior that was specified by the machine code for
Px.
Nope. Your SHD is not a halt decider as
I was not even talking about my SHD, I was talking about how your
program does its simulation incorrectly.
My SSHD does not do its simulation incorrectly: it does its simulation
just like I have defined it, as evidenced by the fact that it returns a
correct halting decision for Px; something your broken SHD gets wrong.
My new write-up proves that my Turing-machine based SHD necessarily
simulates the first N steps of its input correctly, because for those
first N steps embedded_H <is> a pure UTM that can't possibly do any of
that simulation incorrectly.
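A small C sketch of the uncontested part of this claim (everything
below is an assumed toy example, not the Turing-machine construction
itself): aborting a simulation after N steps does not change what those
first N steps were, so a step-capped run and a much longer run agree on
their common prefix.

#include <stdio.h>

enum { INC, JMP_BACK, HALT };

/* three-instruction toy program: 0: INC, 1: JMP_BACK (to 0), 2: HALT
   (the HALT is never reached; the program loops between 0 and 1)     */
static const int program[] = { INC, JMP_BACK, HALT };

/* run the program, recording the instruction pointer at every step;
   stop after max_steps or when HALT is reached; return steps taken   */
static int run(int trace[], int max_steps)
{
    int ip = 0, steps = 0;
    while (steps < max_steps && program[ip] != HALT) {
        trace[steps++] = ip;
        ip = (program[ip] == JMP_BACK) ? 0 : ip + 1;
    }
    return steps;
}

int main(void)
{
    enum { N = 10, LONGER = 1000 };
    int capped[LONGER], longer[LONGER];
    int n = run(capped, N);        /* aborted after N steps            */
    (void)run(longer, LONGER);     /* kept simulating far longer       */
    int agree = 1;
    for (int i = 0; i < n; i++)
        if (capped[i] != longer[i])
            agree = 0;
    printf("first %d steps identical in both runs: %s\n",
           n, agree ? "yes" : "no");
    return 0;
}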
In order for you to have Px simulated by H terminate normally you must
change the behavior of Px away from the behavior that its x86 code
specifies.
void Px(void (*x)())
{
  (void) H(x, x);
  return;
}
Px correctly simulated by H cannot possibly reach past its machine
address of: [00001b3d].
_Px()
[00001b32] 55 push ebp
[00001b33] 8bec mov ebp,esp
[00001b35] 8b4508 mov eax,[ebp+08]
[00001b38] 50 push eax // push address of Px
[00001b39] 8b4d08 mov ecx,[ebp+08]
[00001b3c] 51 push ecx // push address of Px
[00001b3d] e800faffff call 00001542 // Call H
[00001b42] 83c408 add esp,+08
[00001b45] 5d pop ebp
[00001b46] c3 ret
Size in bytes:(0021) [00001b46]
What you are doing is the same as recognizing that _Infinite_Loop()
never halts, forcing it to break out of its infinite loop and jump to
its "ret" instruction
_Infinite_Loop()
[00001c62] 55 push ebp
[00001c63] 8bec mov ebp,esp
[00001c65] ebfe jmp 00001c65
[00001c67] 5d pop ebp
[00001c68] c3 ret
Size in bytes:(0007) [00001c68]
Your system doesn't merely report on the behavior of its input; it also
interferes with the behavior of its input.
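A toy C sketch of recognizing the _Infinite_Loop() pattern shown above
(an unconditional jump to itself); the table below is an assumption
that stands in for real x86 emulation. Because this toy machine has no
registers or memory, revisiting the same instruction means the complete
configuration has repeated, which is a correct non-halting criterion
here; it says nothing about the disputed Px case.

#include <stdio.h>

/* next_ip[i] is where control goes after the instruction at position i,
   and -1 models "ret"; position 2 jumps to itself, like "jmp 00001c65" */
static const int next_ip[] = { 1, 2, 2, 4, -1 };

int main(void)
{
    int ip = 0, steps = 0;
    int seen[16] = { 0 };
    while (ip != -1) {
        if (seen[ip]) {
            /* no other state exists in this toy, so a repeated ip is a
               repeated configuration: the run can never halt          */
            printf("non-halting pattern detected at instruction %d "
                   "after %d steps\n", ip, steps);
            return 0;
        }
        seen[ip] = 1;
        ip = next_ip[ip];
        steps++;
    }
    printf("program halted\n");
    return 0;
}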
Your "x86 code" has nothing to do with how my halt decider works; I am
using an entirely different simulation method, one that actually works.
No I am not: there is no infinite loop in Px above; forking the
simulation into two branches and returning a different halt decision to
each branch is a perfectly valid SHD design; again a design, unlike
yours, that actually works.
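Purely as an illustration of the "fork the simulation into two branches
and give each branch a different answer" idea (an assumed sketch, not
Mr Flibble's actual SSHD code): the subject program is modelled as a
function that receives whatever answer the decider hands back to it,
and the decider reports halting only if the subject halts in both
branches.

#include <stdio.h>

/* Px ignores whatever the decider told it and simply returns (halts);
   returning 1 means "this branch reached its return".                */
static int px_branch(int answer_given_to_px)
{
    (void)answer_given_to_px;   /* the result of H is discarded, as in Px */
    return 1;
}

/* try both possible answers and report halting only if the subject
   halts no matter which answer it was given                          */
static int fork_style_decider(int (*subject)(int))
{
    int halts_if_told_halting     = subject(1);
    int halts_if_told_non_halting = subject(0);
    return halts_if_told_halting && halts_if_told_non_halting;
}

int main(void)
{
    printf("fork-style decider says Px %s\n",
           fork_style_decider(px_branch) ? "halts" : "does not halt");
    return 0;
}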
No it doesn't; H returns a value to its caller in finite time and so
satisfies the requirements of a halt decider, unlike your SHD, which you
have to "abort" because your decider doesn't satisfy the requirements
because your design is broken.
/Flibble
On 4/20/23 10:59 AM, olcott wrote:
Right, but we don't care about that. We care about the TOTAL
behavior of the input, which H never gets to see, because it gives up. >>>>>
When Ĥ is applied to ⟨Ĥ⟩
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
N steps of ⟨Ĥ⟩ correctly simulated by embedded_H are the actual
behavior of this input:
(a) Ĥ.q0 The input ⟨Ĥ⟩ is copied then transitions to embedded_H
(b) embedded_H is applied to ⟨Ĥ⟩ ⟨Ĥ⟩ (input and copy) which
simulates ⟨Ĥ⟩ applied to ⟨Ĥ⟩
(c) *which begins at its own simulated ⟨Ĥ.q0⟩ to repeat the process* >>>
decides to stop its simulation, and the whole simulation ends with
just partial results and it decides to go to qn and Ĥ Halts.
You keep dodging the key truth when N steps of embedded_H are correctly
simulated by embedded_H and N = 30000 then we know that the actual
behavior of ⟨Ĥ⟩ is 10,000 recursive simulations that have never reached >> their final state of ⟨Ĥ.qn⟩.
No, it has been shown that if N = 3000, then
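For readers who have not seen the Linz notation before, the two ⊢* lines above describe the standard Ĥ template: the embedded copy of H is run on the description of Ĥ together with a copy of itself, and Ĥ then does the opposite of a "halts" verdict. Below is a C-flavored analogue of that template; all names are assumptions and the tape-level details are only loosely mirrored.

typedef void *Desc;                /* stands in for a machine description such as ⟨Ĥ⟩ */

extern int H(Desc m, Desc input);  /* assumed simulating halt decider: 1 = halts, 0 = not */

/* Ĥ applied to ⟨M⟩: copy the input, then hand ⟨M⟩ ⟨M⟩ to the embedded copy of H. */
void H_hat(Desc m)
{
    if (H(m, m))                   /* embedded_H ⟨M⟩ ⟨M⟩                 */
        for (;;) { }               /* Ĥ.qy branch: loop forever          */
    /* Ĥ.qn branch: fall through, so Ĥ halts */
}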
olcott wrote:
The actual behavior of ⟨Ĥ⟩ is 10,000 recursive simulations that have
never reached their final state of ⟨Ĥ.qn⟩, because ⟨Ĥ⟩ is defined to have
a pathological relationship to embedded_H. Referring to an entirely
different sequence where there is no such pathological relationship is
like comparing apples to lemons and rejecting apples because lemons are
too sour. Why do you continue to believe that you can get away with this?
Richard Damon wrote:
No, because the ACTUAL BEHAVIOR is defined by the machine that the input
describes. PERIOD.
So you just don't understand the meaning of ACTUAL BEHAVIOR.
Why do YOU? Can you name a reliable source that supports your definition?
(NOT YOU)
olcott wrote:
Professor Sipser.
Richard Damon wrote:
Not just someone you have "tricked" into agreeing to a poorly worded
statement that you misinterpret to agree with you.
olcott wrote:
MIT Professor Michael Sipser has agreed that the following verbatim
paragraph is correct:
"If simulating halt decider H correctly simulates its input D until H
correctly determines that its simulated D would never stop running
unless aborted then H can abort its simulation of D and correctly report
that D specifies a non-halting sequence of configurations."
He understood that the above paragraph is a tautology. That you do not
understand that it is a tautology provides zero evidence that it is not
a tautology.
You have already agreed that N steps of an input simulated by a
simulating halt decider are the actual behavior for these N steps. The
fact that you agreed with this seems to prove that you will not disagree
with me at the expense of truth and that you do actually care about the
truth.
Richard Damon wrote:
Right, like I said, *IF* the decider correctly simulates its input D
until H *CORRECTLY* determines that its simulated D would never stop
running unless aborted.
NOTE: THAT MEANS THE ACTUAL MACHINE OR A UTM SIMULATION OF THE MACHINE,
NOT JUST A PARTIAL SIMULATION BY H.
olcott wrote:
Unless the simulation is from the frame-of-reference of the pathological
relationship, it is rejecting apples because lemons are too sour.
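For what it is worth, the quoted Sipser criterion can be laid out as a skeleton; everything the two sides actually disagree about lives inside would_never_stop_unless_aborted(), which is why it is left as an undefined stub here. All of the names are my assumptions, not anyone's published code.

#include <stdbool.h>

typedef struct Simulation Simulation;     /* opaque single-step simulation state         */

extern bool step(Simulation *s);          /* advance one step; false once the simulated
                                             input reaches a final state                 */
extern bool would_never_stop_unless_aborted(const Simulation *s);  /* the contested part */

/* Skeleton of "simulate D until H determines that the simulated D would never
   stop running unless aborted, then abort and report non-halting". */
int simulating_halt_decider(Simulation *s)
{
    for (;;) {
        if (!step(s))
            return 1;                     /* simulated D halted on its own               */
        if (would_never_stop_unless_aborted(s))
            return 0;                     /* abort the simulation, report non-halting    */
    }
}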
On 4/20/2023 6:14 PM, Richard Damon wrote:
On 4/20/23 6:51 PM, olcott wrote:
On 4/20/2023 5:40 PM, Richard Damon wrote:
On 4/20/23 10:59 AM, olcott wrote:
On 4/20/2023 7:06 AM, Richard Damon wrote:
On 4/20/23 7:56 AM, olcott wrote:
On 4/20/2023 6:23 AM, Richard Damon wrote:
On 4/20/23 12:04 AM, olcott wrote:
On 4/19/2023 10:41 PM, Richard Damon wrote:
On 4/19/23 11:29 PM, olcott wrote:
On 4/19/2023 9:16 PM, Richard Damon wrote:
On 4/19/23 9:59 PM, olcott wrote:
On 4/19/2023 8:38 PM, Richard Damon wrote:
On 4/19/23 9:25 PM, olcott wrote:
On 4/19/2023 8:08 PM, Richard Damon wrote:
On 4/19/23 8:52 PM, olcott wrote:
On 4/19/2023 7:45 PM, Richard Damon wrote:
On 4/19/23 8:31 PM, olcott wrote:
On 4/19/2023 7:07 PM, Richard Damon wrote:
On 4/19/23 7:16 PM, olcott wrote:
On 4/19/2023 5:49 PM, Richard Damon wrote:
On 4/19/23 11:05 AM, olcott wrote:
On 4/19/2023 6:14 AM, Richard Damon wrote:
On 4/18/23 11:48 PM, olcott wrote:
*You keep slip sliding with the fallacy of equivocation error*
The actual simulated input ⟨Ĥ⟩ that embedded_H must compute its mapping
from never reaches its simulated final state of ⟨Ĥ.qn⟩ even after 10,000
necessarily correct recursive simulations because ⟨Ĥ⟩ is defined to have
a pathological relationship to embedded_H.
And YOU keep on falling into your Strawman error. The question is NOT
what the "simulation by H" shows, but what is the actual behavior of the
actual machine the input represents.
No, it ISN'T a UTM because it fails to meet the definition of a UTM.
You are just proving that you are a pathological liar that doesn't know
what he is talking about.
The actual behavior that the actual input ⟨Ĥ⟩ represents is the
behavior of the simulation of N steps by embedded_H because embedded_H
has the exact same behavior as a UTM for these first N steps, and you
already agreed with this.
No, the actual behavior of the input is what the MACHINE Ĥ applied to
⟨Ĥ⟩ does.
As you already agreed:
Because embedded_H is a UTM that has been augmented with three features
that cannot possibly cause its simulation of its input to diverge from
the simulation of a pure UTM for the first N steps of simulation we know
that it necessarily does provide the actual behavior specified by this
input for these N steps.
And it is no longer a UTM, since it fails to meet the requirement of a UTM.
The behavior of N steps of ⟨Ĥ⟩ simulated by embedded_H is the actual
behavior of these N steps because
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it.
(c) Even aborting the simulation after N steps doesn't change the
first N steps.
But a UTM doesn't simulate just "N" steps of its input, but ALL of them.
Yet when embedded_H simulates N steps of ⟨Ĥ⟩ this is the actual behavior
of ⟨Ĥ⟩ for these N steps, thus when embedded_H simulates 10,000
recursive simulations these are the actual behavior of ⟨Ĥ⟩.
Yes, but that doesn't actually show the ACTUAL behavior of the input as
defined.
There is only one actual behavior of the actual input and this behavior
is correctly demonstrated by N steps of ⟨Ĥ⟩ simulated by embedded_H.
Nope, read the problem definition.
The behavior to be decided by a Halt Decider is the behavior of the
ACTUAL MACHINE which is described by the input.
No matter what the problem definition says the actual behavior of the
actual input must necessarily be the N steps simulated by embedded_H.
The only alternative is to simply disbelieve in UTMs.
NOPE. Since H isn't a UTM, because it doesn't meet the REQUIREMENTS of
a UTM, the statement is meaningless.
It <is> equivalent to a UTM for the first N steps that can include
10,000 recursive simulations.
Which means it ISN'T the Equivalent of a UTM. PERIOD.
Right, like I said, *IF* the decider correctly simulates its input D
until H *CORRECTLY* determines that its simulated D would never stop
running unless aborted.
NOTE: THAT MEANS THE ACTUAL MACHINE OR A UTM SIMULATION OF THE
MACHINE, NOT JUST A PARTIAL SIMULATION BY H.
Thus when N steps of ⟨Ĥ⟩ correctly simulated by embedded_H conclusively
prove, by a form of mathematical induction, that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly
simulated by embedded_H cannot possibly reach its simulated final state
of ⟨Ĥ.qn⟩ in any finite number of steps, the Sipser-approved criterion
has been met.
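For readers who prefer code to Turing-machine notation, here is a rough C analogue of the structure being argued about. It is only an illustrative sketch with hypothetical names (embedded_H is assumed to be some simulating decider), not actual Turing-machine code or anyone's real implementation: H_hat copies its input and hands both copies to embedded_H, so a simulated H_hat asks a simulated embedded_H to simulate yet another H_hat, which is the nesting that the induction claim is about.

int embedded_H(void (*M)(), void (*w)());  /* assumed simulating halt decider */

void H_hat(void (*w)())                    /* C analogue of the Linz Ĥ template */
{
    if (embedded_H(w, w))                  /* a simulated H_hat reaches this same call again */
        for (;;) { }                       /* corresponds to Ĥ.qy: loop forever */
    return;                                /* corresponds to Ĥ.qn: halt */
}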
On 4/20/2023 2:08 PM, Mr Flibble wrote:
On 20/04/2023 6:49 pm, olcott wrote:
On 4/20/2023 12:32 PM, Mr Flibble wrote:
On 19/04/2023 11:52 pm, olcott wrote:
On 4/19/2023 4:14 PM, Mr Flibble wrote:
On 19/04/2023 10:10 pm, olcott wrote:
On 4/19/2023 3:32 PM, Mr Flibble wrote:
On 19/04/2023 8:39 pm, olcott wrote:
On 4/19/2023 1:47 PM, Mr Flibble wrote:
On 18/04/2023 11:39 pm, olcott wrote:
On 4/18/2023 4:55 PM, Mr Flibble wrote:
On 18/04/2023 4:58 pm, olcott wrote:
On 4/18/2023 6:32 AM, Richard Damon wrote:
On 4/18/23 1:00 AM, olcott wrote:
It turns out that the non-halting behavior pattern is correctly
recognized in the first N steps.
Your assumption that a program that calls H is non-halting is erroneous:
My new paper anchors its ideas in actual Turing machines so it is
unequivocal. The first two pages are only about the Linz Turing
machine based proof.
The H/D material is now on a single page and all reference to the x86
language has been stripped and replaced with analysis entirely in C.
With this new paper even Richard admits that the first N steps of a
UTM-based simulation by a simulating halt decider are necessarily the
actual behavior of these N steps.
*Simulating (partial) Halt Deciders Defeat the Halting Problem Proofs*
https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs
void Px(void (*x)())
{
(void) H(x, x);
return;
}
Px halts (it discards the result that H returns); your decider thinks
that Px is non-halting which is an obvious error due to a design flaw
in the architecture of your decider. Only the Flibble Signaling
Simulating Halt Decider (SSHD) correctly handles this case.
Nope. For H to be a halt decider it must return a halt
decision to its caller in finite time.
Although H must always return to some caller, H is not allowed to return
to any caller that essentially calls H in infinite recursion.
The Flibble Signaling Simulating Halt Decider (SSHD) does not
have any infinite recursion, thereby proving that
It overrode that behavior that was specified by the machine code for Px.
Nope. Your SHD is not a halt decider as
I was not even talking about my SHD, I was talking about how your
program does its simulation incorrectly.
My SSHD does not do its simulation incorrectly: it does its
simulation just like I have defined it as evidenced by the fact that
it returns a correct halting decision for Px; something your broken
SHD gets wrong.
In order for you to have Px simulated by H terminate normally you
must change the behavior of Px away from the behavior that its x86
code specifies.
Your "x86 code" has nothing to do with how my halt decider works; I am
using an entirely different simulation method, one that actually works.
void Px(void (*x)())
{
(void) H(x, x);
return;
}
Px correctly simulated by H cannot possibly reach past its machine
address of: [00001b3d].
_Px()
[00001b32] 55 push ebp
[00001b33] 8bec mov ebp,esp
[00001b35] 8b4508 mov eax,[ebp+08]
[00001b38] 50 push eax // push address of Px
[00001b39] 8b4d08 mov ecx,[ebp+08]
[00001b3c] 51 push ecx // push address of Px
[00001b3d] e800faffff call 00001542 // Call H
[00001b42] 83c408 add esp,+08
[00001b45] 5d pop ebp
[00001b46] c3 ret
Size in bytes:(0021) [00001b46]
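The kind of check being claimed here can be sketched in C. This is a hypothetical illustration only (next_simulated_call() and the address parameters are assumed stand-ins, not the actual H): while single-stepping Px, a simulating decider could watch for the simulated code calling H again with the same two arguments and treat that as its non-halting pattern.

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t call_target;   /* address the simulated code is about to call */
    uint32_t arg1, arg2;    /* the two arguments it pushed */
} SimCall;

bool next_simulated_call(SimCall *out);   /* assumed: run the simulation to its next CALL */

bool shows_recursive_simulation(uint32_t h_address, uint32_t px_address)
{
    SimCall c;
    while (next_simulated_call(&c)) {
        if (c.call_target == h_address &&
            c.arg1 == px_address && c.arg2 == px_address)
            return true;    /* simulated Px is about to invoke H(Px, Px) again */
    }
    return false;           /* the simulated code finished without re-invoking H */
}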
What you are doing is the same as recognizing that _Infinite_Loop()
never halts, forcing it to break out of its infinite loop and jump to
its "ret" instruction.
_Infinite_Loop()
[00001c62] 55 push ebp
[00001c63] 8bec mov ebp,esp
[00001c65] ebfe jmp 00001c65
[00001c67] 5d pop ebp
[00001c68] c3 ret
Size in bytes:(0007) [00001c68]
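For comparison, the _Infinite_Loop() case needs only a much simpler check; the sketch below is again just an illustration rather than anyone's actual decider. The x86 encoding EB FE is an unconditional two-byte jump whose displacement (-2) targets its own address, so a simulating decider that reaches such an instruction can report non-halting immediately.

#include <stdbool.h>
#include <stdint.h>

bool is_self_jump(const uint8_t *code, uint32_t offset)
{
    /* "jmp rel8" is EB xx; a displacement of 0xFE (-2) jumps back onto itself */
    return code[offset] == 0xEB && code[offset + 1] == 0xFE;
}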
No I am not: there is no infinite loop in Px above; forking the
simulation into two branches and returning a different halt decision
to each branch is a perfectly valid SHD design; again a design, unlike
yours, that actually works.
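No source for the SSHD is given in the thread, so the following is only a guess at what "forking the simulation into two branches" could look like, written with POSIX fork() and hypothetical signatures; it is not Flibble's actual code. The point it illustrates is that each forked branch of Px receives a different answer from H, and both branches still reach Px's final return.

#include <unistd.h>
#include <sys/wait.h>

int H(void (*x)(), void (*y)())
{
    pid_t pid = fork();            /* split the simulation into two branches */
    return (pid == 0) ? 0 : 1;     /* one branch is told "loops", the other "halts" */
}

void Px(void (*x)())
{
    (void) H(x, x);                /* Px discards H's answer */
    return;                        /* every branch still reaches this return */
}

int main(void)
{
    Px(Px);                        /* both forked branches terminate here */
    wait(0);                       /* parent also reaps the child branch */
    return 0;
}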
If you say that Px correctly simulated by H ever reaches its own final "return" statement and halts you are incorrect.
On 4/20/23 10:43 PM, olcott wrote:
So, you don't understand the nature of simulation.
On 20/04/2023 8:20 pm, olcott wrote:
Px halts if H is (or is part of) a genuine halt decider.
Your H is not
a genuine halt decider as it aborts rather than returning a value to its caller in finite time. Think of it this way: if H was not of the
simulating type then there would be no need to abort any recursion as H
would not be directly invoking Px, i.e., there would be no recursion. Recursion is a problem for you because your halt decider is based on a
broken design.
/Flibble
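Flibble's "if H was not of the simulating type" remark can be illustrated with a short sketch. The names are hypothetical (oracle_says_halts() is just an assumed stand-in for any non-simulating decision procedure): when H does not simulate its argument, Px calling H is one ordinary nested call that returns, and no recursive simulation ever arises.

#include <stdbool.h>

bool oracle_says_halts(void (*x)(), void (*y)());   /* assumed stand-in, not simulation-based */

int H(void (*x)(), void (*y)())                     /* a non-simulating variant of H */
{
    return oracle_says_halts(x, y) ? 1 : 0;
}

void Px(void (*x)())
{
    (void) H(x, x);                                 /* one plain call; H never re-enters Px */
    return;
}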
On 21/04/2023 4:16 pm, olcott wrote:
The simulated Px only halts if it reaches its own final state in a
finite number of steps of correct simulation. It can't possibly do this.
Nope, a correctly simulated Px will allow it to reach its own final
state (termination); your H does NOT perform a correct simulation
because your H is broken.
/Flibble
On 4/21/2023 11:36 AM, Mr Flibble wrote:
Strawman deception
Px correctly simulated by H will never reach its own simulated final
state of "return" because Px and H have a pathological relationship to
each other.
Measuring the behavior of Px simulated by a simulator that has no such
pathological relationship is the same as rejecting apples because lemons
are too sour. One must compare apples to apples.
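The "frame of reference" distinction being argued here can be written down as two different measurements. The sketch below is purely illustrative and every name in it is an assumed stand-in (H_simulates() for the partial simulation that H itself performs while deciding, UTM_simulates() for an independent complete simulation); the thread's disagreement is over which of the two runs defines the behavior a halt decider must report.

#include <stdbool.h>

bool H_simulates(void (*p)(), void (*i)());    /* assumed: the simulation H performs while deciding */
bool UTM_simulates(void (*p)(), void (*i)());  /* assumed: an independent, complete simulation */

void Px(void (*x)());                          /* as in the thread: calls H(x, x), then returns */

int main(void)
{
    bool as_seen_by_H   = H_simulates(Px, Px);    /* Px measured from inside H's own simulation */
    bool as_seen_by_UTM = UTM_simulates(Px, Px);  /* Px measured by a simulator it never calls */
    (void)as_seen_by_H;
    (void)as_seen_by_UTM;
    return 0;
}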
On 21/04/2023 5:41 pm, olcott wrote:
Nope, there is no pathological relationship between Px and H because Px discards the result of H (i.e. it does not try to do the opposite of the
H halting result as per the definition of the Halting Problem).
Measuring the behavior of Px simulated by a simulator that has no such
pathological relationship is the same as rejecting apples because lemons
are too sour. One must compare apples to apples.
LOLWUT?!
/Flibble
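For contrast with Px, the conventional halting-problem counter-example that
Flibble alludes to does consult H and does the opposite of its answer. A
minimal sketch follows, with a hypothetical stub standing in for an H that
answers 0 ("does not halt") for these inputs:

#include <stdio.h>

typedef void (*fptr)();

/* Hypothetical stand-in for a halt decider that answers 0 for these inputs */
int H(fptr x, fptr y) { (void)x; (void)y; return 0; }

/* Conventional counter-example: consults H and does the opposite */
void D(fptr x)
{
    if (H(x, x))     /* H says "halts"         -> loop forever       */
        for (;;) ;
                     /* H says "does not halt" -> return immediately */
}

/* Px, by contrast, discards H's answer entirely and always returns */
void Px(fptr x)
{
    (void) H(x, x);
}

int main(void)
{
    D((fptr) D);     /* with H answering 0, D(D) simply returns */
    Px((fptr) Px);   /* Px(Px) returns no matter what H answers */
    printf("both D(D) and Px(Px) halted here\n");
    return 0;
}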
On 4/21/2023 12:42 PM, Mr Flibble wrote:
It seems that you continue to fail to see the nested simulation
01 void Px(void (*x)())
02 {
03 (void) H(x, x);
04 return;
05 }
06
07 void main()
08 {
09 H(Px,Px);
10 }
*Execution Trace when H never aborts its simulation*
main() calls H(Px,Px) that simulates Px(Px) at line 09
*keeps repeating*
simulated Px(Px) calls simulated H(Px,Px) that simulates Px(Px) at
line 03 ...
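That trace can be sketched directly in C by modelling a never-aborting H as
"just run the input" (a non-aborting simulator reproduces the behavior of
what it simulates). The depth counter below is an artificial cap added only
so the sketch terminates instead of overflowing the stack; none of these
names are olcott's actual code.

#include <stdio.h>
#include <stdlib.h>

typedef void (*fptr)();

static int depth = 0;

int H(fptr x, fptr y)              /* stand-in for an H that never aborts */
{
    printf("depth %d: H(Px,Px) simulates Px(Px)\n", ++depth);
    if (depth >= 5)                /* artificial cap so this sketch ends  */
        exit(0);
    x(y);                          /* "simulate" the input by running it  */
    return 1;
}

void Px(fptr x)
{
    (void) H(x, x);                /* line 03 of the trace above          */
}

int main(void)
{
    H((fptr) Px, (fptr) Px);       /* line 09 of the trace above          */
    return 0;
}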
On 21/04/2023 7:36 pm, olcott wrote:
"nested simulation" (recursion) is a property of your broken halt
decider
On 4/21/2023 3:02 PM, Mr Flibble wrote:
Nested simulation is inherent when any simulating halt decider is
applied to any of the conventional halting problem counter-example
inputs. That you may fail to comprehend this is not my mistake.
On 4/21/2023 6:18 AM, Richard Damon wrote:
So, you don't understand the nature of simulation.
MIT Professor Michael Sipser has agreed that the following verbatim
paragraph is correct:
(a) If simulating halt decider H correctly simulates its input D until H correctly determines that its simulated D would never stop running
unless aborted then
(b) H can abort its simulation of D and correctly report that D
specifies a non-halting sequence of configurations.
Thus it is established that:
The behavior of D correctly simulated by H
is the correct behavior to measure.
The behavior of ⟨Ĥ⟩ correctly simulated by embedded_H
is the correct behavior to measure.
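The abort-and-report idea in that paragraph can be sketched as follows. This
is an illustration only, not olcott's actual H: the "simulation" here is a
direct call, and the only non-halting pattern recognized is the simplest one,
namely "the simulated input has called H on the same arguments again".

#include <stdio.h>
#include <setjmp.h>

typedef void (*fptr)();

static jmp_buf abort_point;
static int simulating = 0;

int H(fptr x, fptr y)
{
    if (simulating)              /* the simulated input called H again:     */
        longjmp(abort_point, 1); /* abort the whole simulation              */
    if (setjmp(abort_point)) {   /* control returns here after an abort     */
        simulating = 0;
        return 0;                /* report: the input does not halt         */
    }
    simulating = 1;
    x(y);                        /* "simulate" the input by running it      */
    simulating = 0;
    return 1;                    /* the input finished: it halts            */
}

void Px(fptr x)
{
    (void) H(x, x);              /* Px ignores whatever H reports           */
}

int main(void)
{
    printf("H(Px,Px) == %d\n", H((fptr) Px, (fptr) Px));   /* prints 0 */
    return 0;
}

Here H(Px,Px) reports 0; whether that report is correct, given that the
directly executed Px(Px) then halts, is the question the rest of this thread
argues about.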
On 4/21/23 11:35 AM, olcott wrote:
*IF* H correctly simulates per the definition of a UTM
It doesn't, so it isn't.
The behavior of ⟨Ĥ⟩ correctly simulated by embedded_H
is the correct behavior to measure.
Since the simulation done by embedded_H does not meet the definition of "correct simulation" that Professor Sipser uses, your argument is VOID.
You are just PROVING your stupidity.
On 4/21/23 11:16 AM, olcott wrote:
The simulated Px only halts if it reaches its own final state in a
finite number of steps of correct simulation. It can't possibly do this.
So, you're saying that a UTM doesn't do a "Correct Simulation"?
UTM(Px,Px) will see Px call H, and then H simulating its copy of Px(Px),
then aborting its simulation and returning non-halting to Px, and then Px halting.
It is only the PARTIAL simulation by whatever H Px is built on that
can't reach that state. The UTM will ALWAYS reach that state slightly
(one recursion) after your H stops its simulation.
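Richard's point can be checked with a stub: as long as H returns anything at
all in finite time, the actual computation Px(Px) (and therefore any faithful,
non-aborting simulation of it) reaches its final state. The stub below is
hypothetical and simply returns 0 immediately.

#include <stdio.h>

typedef void (*fptr)();

int H(fptr x, fptr y) { (void)x; (void)y; return 0; }  /* hypothetical stand-in */

void Px(fptr x)
{
    (void) H(x, x);      /* H returns some answer ...              */
}                        /* ... and Px then reaches its own return */

int main(void)
{
    Px((fptr) Px);       /* the actual computation Px(Px) halts */
    printf("Px(Px) reached its final state\n");
    return 0;
}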
On 4/21/2023 5:34 PM, Richard Damon wrote:
On 4/21/23 11:16 AM, olcott wrote:
On 4/21/2023 7:17 AM, Mr Flibble wrote:
On 20/04/2023 8:20 pm, olcott wrote:
On 4/20/2023 2:08 PM, Mr Flibble wrote:
On 20/04/2023 6:49 pm, olcott wrote:
On 4/20/2023 12:32 PM, Mr Flibble wrote:
On 19/04/2023 11:52 pm, olcott wrote:
On 4/19/2023 4:14 PM, Mr Flibble wrote:
On 19/04/2023 10:10 pm, olcott wrote:
On 4/19/2023 3:32 PM, Mr Flibble wrote:
On 19/04/2023 8:39 pm, olcott wrote:
On 4/19/2023 1:47 PM, Mr Flibble wrote:
The Flibble Signaling Simulating Halt Decider (SSHD) does not have any infinite recursion, thereby proving that
On 18/04/2023 11:39 pm, olcott wrote:
On 4/18/2023 4:55 PM, Mr Flibble wrote:
Nope. For H to be a halt decider it must return a halt decision to its caller in finite time
On 18/04/2023 4:58 pm, olcott wrote:
On 4/18/2023 6:32 AM, Richard Damon wrote:
On 4/18/23 1:00 AM, olcott wrote:
You agreed that the first N steps are correctly simulated.
A simulating halt decider correctly predicts whether or not its correctly simulated input can possibly reach its own final state and halt. It does this by correctly recognizing several non-halting behavior patterns in a finite number of steps of correct simulation. Inputs that do terminate are simply simulated until they complete.
Except it doesn't do this for the "pathological" program.
The "Pathological Program", when built on such a Decider that does give an answer, which you say will be non-halting, and then "Correctly Simulated" by giving its representation to a UTM, we see that the simulation reaches a final state.
Thus, your H was WRONG to make the answer. And the problem is you have added a pattern that isn't always non-halting.
When a simulating halt decider correctly simulates N steps of its input it derives the exact same N steps that a pure UTM would derive because it is itself a UTM with extra features.
But it ISN'T a "UTM" any more, because some of the features you added have removed essential features needed for it to be an actual UTM. That you make this claim shows you don't actually know what a UTM is.
This is like saying a NASCAR Racing Car is a Street Legal vehicle, since it started as one and just had some extra features added.
My reviewers cannot show that any of the extra features added to the UTM change the behavior of the simulated input for the first N steps of simulation:
(a) Watching the behavior doesn't change it.
(b) Matching non-halting behavior patterns doesn't change it.
(c) Even aborting the simulation after N steps doesn't change the first N steps.
No one claims that it doesn't correctly reproduce the first N steps of the behavior; that is a Strawman argument.
Because of all this we can know that the first N steps of input D simulated by simulating halt decider H are the actual behavior that D presents to H for these same N steps.
*computation that halts*… “the Turing machine will halt whenever it enters a final state” (Linz:1990:234)
Right, so we are concerned about the behavior of the ACTUAL machine, not a partial simulation of it. H(D,D) returns non-halting, but D(D) Halts, so the answer is wrong.
When we see (after N steps) that D correctly simulated by H cannot possibly reach its simulated final state in any finite number of steps of correct simulation then we have conclusive proof that D presents non-halting behavior to H.
But it isn't "Correctly Simulated by H"
It turns out that the non-halting behavior pattern is correctly recognized in the first N steps.
Your assumption that a program that calls H is non-halting is erroneous:
My new paper anchors its ideas in actual Turing machines so it is unequivocal. The first two pages are only about the Linz Turing machine based proof.
The H/D material is now on a single page and all reference to the x86 language has been stripped and replaced with analysis entirely in C.
With this new paper even Richard admits that the first N steps of UTM-based simulation by a simulating halt decider are necessarily the actual behavior of these N steps.
*Simulating (partial) Halt Deciders Defeat the Halting >>>>>>>>>>>>>>> Problem Proofs*
https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs
void Px(void (*x)())
{
(void) H(x, x);
return;
}
Px halts (it discards the result that H returns); your decider thinks that Px is non-halting which is an obvious error due to a design flaw in the architecture of your decider. Only the Flibble Signaling Simulating Halt Decider (SSHD) correctly handles this case.
Although H must always return to some caller, H is not allowed to return to any caller that essentially calls H in infinite recursion.
It overrode that behavior that was specified by the machine code for Px.
Nope. Your SHD is not a halt decider as
I was not even talking about my SHD; I was talking about how your program does its simulation incorrectly.
My SSHD does not do its simulation incorrectly: it does its simulation just as I have defined it, as evidenced by the fact that it returns a correct halting decision for Px; something your broken SHD gets wrong.
In order for you to have Px simulated by H terminate normally you must change the behavior of Px away from the behavior that its x86 code specifies.
Your "x86 code" has nothing to do with how my halt decider works;
I am using an entirely different simulation method, one that
actually works.
void Px(void (*x)())
{
(void) H(x, x);
return;
}
Px correctly simulated by H cannot possibly reach past its
machine address of: [00001b3d].
_Px()
[00001b32] 55 push ebp
[00001b33] 8bec mov ebp,esp
[00001b35] 8b4508 mov eax,[ebp+08]
[00001b38] 50 push eax // push address of Px
[00001b39] 8b4d08 mov ecx,[ebp+08]
[00001b3c] 51 push ecx // push address of Px
[00001b3d] e800faffff call 00001542 // Call H
[00001b42] 83c408 add esp,+08
[00001b45] 5d pop ebp
[00001b46] c3 ret
Size in bytes:(0021) [00001b46]
What you are doing is the same as recognizing that
_Infinite_Loop()
never halts, forcing it to break out of its infinite loop and
jump to
its "ret" instruction
_Infinite_Loop()
[00001c62] 55 push ebp
[00001c63] 8bec mov ebp,esp
[00001c65] ebfe jmp 00001c65
[00001c67] 5d pop ebp
[00001c68] c3 ret
Size in bytes:(0007) [00001c68]
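To contrast with the infinite-loop example, here is a rough sketch of the recursive-simulation detection being claimed for H. It is not the x86-emulation-based H from the paper, only my guess at the shape of the idea: it "simulates" by direct execution and aborts, via longjmp, as soon as the same input reaches H again before halting. Every name in it is illustrative.

#include <setjmp.h>
#include <stdio.h>

static jmp_buf abort_point;      /* where the outermost H resumes after aborting */
static void *seen_arg = NULL;    /* input seen by an H call that has not yet returned */
static int   active = 0;

/* returns 1 = input halted, 0 = input judged non-halting */
static int H(void (*p)(void (*)()), void (*a)(void (*)()))
{
    if (active && seen_arg == (void *)a)
        longjmp(abort_point, 1); /* same input re-entered H: abort the simulation */
    seen_arg = (void *)a;
    active = 1;
    if (setjmp(abort_point) != 0) {
        active = 0;
        return 0;                /* aborted: report non-halting */
    }
    p(a);                        /* "simulate" by direct execution */
    active = 0;
    return 1;                    /* the input reached its own return */
}

void Px(void (*x)())
{
    (void) H(x, x);
    return;
}

int main(void)
{
    printf("H(Px,Px) = %d\n", H(Px, Px));   /* prints 0 under this sketch */
    return 0;
}

Under this sketch H(Px,Px) reports 0 (non-halting) even though Px, run directly with that answer in hand, returns; that disagreement is exactly what the two sides of this exchange are arguing about.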
No I am not: there is no infinite loop in Px above; forking the
simulation into two branches and returning a different halt
decision to each branch is a perfectly valid SHD design; again a
design, unlike yours, that actually works.
If you say that Px correctly simulated by H ever reaches its own final "return" statement and halts you are incorrect.
Px halts if H is (or is part of) a genuine halt decider.
The simulated Px only halts if it reaches its own final state in a
finite number of steps of correct simulation. It can't possibly do this.
So, you're saying that a UTM doesn't do a "Correct Simulation"?
Always with the strawman error.
I am saying that when Px is correctly simulated by H it cannot possibly
reach its own simulated "return" instruction in any finite number of
steps because Px is defined to have a pathological relationship to H.
When we take the behavior of Px simulated by a pure simulator, or even by another simulating halt decider such as H1 that has no such pathological relationship, as the basis of the actual behavior of the input to H, we are comparing apples to lemons and rejecting the apples because lemons are too sour.
UTM(Px,Px) will see Px call H, then H simulating its copy of Px(Px), then H aborting its simulation and returning non-halting to Px, and then Px halting.
It is only the PARTIAL simulation by whatever H Px is built on that
can't reach that state. The UTM will ALWAYS reach that state slightly
(one recursion) after your H stops its simulation.
On 4/21/2023 5:36 PM, Richard Damon wrote:
On 4/21/23 11:35 AM, olcott wrote:
On 4/21/2023 6:18 AM, Richard Damon wrote:
So, you don't understand the nature of simulation.
MIT Professor Michael Sipser has agreed that the following verbatim
paragraph is correct:
a) If simulating halt decider H correctly simulates its input D until H
correctly determines that its simulated D would never stop running
unless aborted then
(b) H can abort its simulation of D and correctly report that D
specifies a non-halting sequence of configurations.
Thus it is established that:
The behavior of D correctly simulated by H
is the correct behavior to measure.
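One hedged way to formalize the quoted criterion, in my own notation rather than Professor Sipser's: write sim_H(D, k) for the configuration reached after k steps of H's simulation of D, and F for the set of D's final configurations. The dispute that follows is over whether a partial simulation can ever establish the antecedent.

\[
  \bigl[\, \forall k \in \mathbb{N} :\ \mathrm{sim}_H(D,k) \notin F \,\bigr]
  \;\Longrightarrow\;
  \text{$H$ may abort its simulation and report that $D$ specifies a non-halting sequence.}
\]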
*IF* H correctly simulates per the definition of a UTM
It doesn't, so it isn't.
The behavior of ⟨Ĥ⟩ correctly simulated by embedded_H
is the correct behavior to measure.
Since the simulation done by embedded_H does not meet the definition of "correct simulation" that Professor Sipser uses, your argument is VOID.
You are just PROVING your stupidity.
Always with the strawman error.
I am saying that when ⟨Ĥ⟩ is correctly simulated by embedded_H it cannot possibly reach its own simulated final state of ⟨Ĥ.qn⟩ in any finite number of steps because Ĥ is defined to have a pathological relationship
to embedded_H.
When we take the behavior of ⟨Ĥ⟩ simulated by a pure UTM, or even by another simulating halt decider such as embedded_H1 that has no such pathological relationship, as the basis of the actual behavior of the input to embedded_H, we are comparing apples to lemons and rejecting the apples because lemons are too sour.
On 4/21/23 7:22 PM, olcott wrote:
On 4/21/2023 5:36 PM, Richard Damon wrote:
On 4/21/23 11:35 AM, olcott wrote:
On 4/21/2023 6:18 AM, Richard Damon wrote:
So, you don't understand the nature of simulation.
MIT Professor Michael Sipser has agreed that the following verbatim
paragraph is correct:
a) If simulating halt decider H correctly simulates its input D until H >>>> correctly determines that its simulated D would never stop running
unless aborted then
(b) H can abort its simulation of D and correctly report that D
specifies a non-halting sequence of configurations.
Thus it is established that:
The behavior of D correctly simulated by H
is the correct behavior to measure.
*IF* H correctly simulates per the definition of a UTM
It doesn't, so it isn't.
The behavior of ⟨Ĥ⟩ correctly simulated by embedded_H
is the correct behavior to measure.
Since the simulation done by embedded_H does not meet the definition
of "correct simulation" that Professer Sipser uses, your arguement is
VOID.
You are just PROVING your stupidity.
Always with the strawman error.
I am saying that when ⟨Ĥ⟩ is correctly simulated by embedded_H it cannot
possibly reach its own simulated final state of ⟨Ĥ.qn⟩ in any finite
number of steps because Ĥ is defined to have a pathological relationship
to embedded_H.
Since H never "Correctly Simulates" the input per the definition that
allows using a simulation instead of the actual machines behavior, YOUR method is the STRAWMAN.
Maybe, but the question is asking for the lemons that the pure simulator gives, not the apples that you H gives.
When we examine the behavior of ⟨Ĥ⟩ simulated by a pure UTM or even
another simulating halt decider such as embedded_H1 having no such
pathological relationship as the basis of the actual behavior of the
input to embedded_H we are comparing apples to lemons and rejecting the
apples because lemons are too sour.
H is just doing the wrong thing.
Your failure to see that just shows how blind you are to the actual
truth of the system.
H MUST answer about the behavior of the actual machine to be a Halt
Decider, since that is what the mapping a Halt Decider is supposed to
answer is based on.
On 4/21/2023 6:35 PM, Richard Damon wrote:
On 4/21/23 7:22 PM, olcott wrote:
On 4/21/2023 5:36 PM, Richard Damon wrote:
On 4/21/23 11:35 AM, olcott wrote:
On 4/21/2023 6:18 AM, Richard Damon wrote:
So, you don't understand the nature of simulation.
MIT Professor Michael Sipser has agreed that the following verbatim
paragraph is correct:
a) If simulating halt decider H correctly simulates its input D
until H
correctly determines that its simulated D would never stop running
unless aborted then
(b) H can abort its simulation of D and correctly report that D
specifies a non-halting sequence of configurations.
Thus it is established that:
The behavior of D correctly simulated by H
is the correct behavior to measure.
*IF* H correctly simulates per the definition of a UTM
It doesn't, so it isn't.
The behavior of ⟨Ĥ⟩ correctly simulated by embedded_H
is the correct behavior to measure.
Since the simulation done by embedded_H does not meet the definition
of "correct simulation" that Professer Sipser uses, your arguement
is VOID.
You are just PROVING your stupidity.
Always with the strawman error.
I am saying that when ⟨Ĥ⟩ is correctly simulated by embedded_H it cannot
possibly reach its own simulated final state of ⟨Ĥ.qn⟩ in any finite >>> number of steps because Ĥ is defined to have a pathological relationship >>> to embedded_H.
Since H never "Correctly Simulates" the input per the definition that
allows using a simulation instead of the actual machines behavior,
YOUR method is the STRAWMAN.
Maybe, but the question is asking for the lemons that the pure
When we examine the behavior of ⟨Ĥ⟩ simulated by a pure UTM or even >>> another simulating halt decider such as embedded_H1 having no such
pathological relationship as the basis of the actual behavior of the
input to embedded_H we are comparing apples to lemons and rejecting the
apples because lemons are too sour.
simulator gives, not the apples that you H gives.
H is just doing the wrong thing.
Your failure to see that just shows how blind you are to the actual
truth of the system.
H MUST answer about the behavior of the actual machine to be a Halt
Decider, since that is what the mapping a Halt Decider is supposed to
answer is based on.
When a simulating halt decider or even a plain UTM examines the behavior of its input, and the SHD or UTM has a pathological relationship to its input, then another SHD or UTM that does not have a pathological relationship to this input is an incorrect proxy for the actual behavior of this actual input to the original SHD or UTM.
I used to think that you were simply lying to play head games, I no
longer believe this. Now I believe that you are ensnared by group-think.
Group-think is the way that 40% of the electorate could honestly believe
that significant voter fraud changed the outcome of the 2020 election
even though there has very persistently been zero evidence of this. https://www.psychologytoday.com/us/basics/groupthink
Hopefully they will not believe that Fox news paid $787 million to trick people into believing that there was no voter fraud.
Maybe they will believe that tiny space aliens living in the heads of
Fox leadership took control of their brains and forced them to pay.
The actual behavior of the actual input is correctly determined by an embedded UTM that has been adapted to watch the behavior of its
simulation of its input and match any non-halting behavior patterns.
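Read as an algorithm, "an embedded UTM adapted to watch its simulation and match non-halting patterns" amounts to a loop of the following shape. The types and names below are placeholders of mine, not anyone's actual implementation; whether any finite set of matchers is sound for the pathological input is the question the rest of the thread argues about.

/* Schematic simulating-watcher loop (illustrative placeholders only). */
typedef struct { int state; unsigned long tape_hash; } Config;
typedef enum { RUNNING, HALTED, NONHALTING_PATTERN } Status;
typedef int (*Matcher)(const Config *trace, int len);   /* 1 = pattern matched */

Status watchful_simulate(Config cur,
                         int (*step)(Config *),   /* advance one step; 0 once a final state is reached */
                         const Matcher *matchers, int n_matchers,
                         Config *trace, int max_steps)
{
    int len = 0;
    trace[len++] = cur;                      /* record the start configuration */
    for (int i = 0; i < max_steps; i++) {
        if (!step(&cur))
            return HALTED;                   /* the simulated machine reached a final state */
        trace[len++] = cur;                  /* record the new configuration */
        for (int m = 0; m < n_matchers; m++)
            if (matchers[m](trace, len))
                return NONHALTING_PATTERN;   /* matched a pattern: abort and report non-halting */
    }
    return RUNNING;                          /* no verdict within max_steps */
}

A caller supplies the step function, a trace buffer of at least max_steps + 1 entries, and the matchers; the first N configurations it records are the same whether or not a matcher later fires, which is the uncontested part of the argument.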
On 4/21/23 8:51 PM, olcott wrote:
On 4/21/2023 6:35 PM, Richard Damon wrote:
On 4/21/23 7:22 PM, olcott wrote:
On 4/21/2023 5:36 PM, Richard Damon wrote:
On 4/21/23 11:35 AM, olcott wrote:
On 4/21/2023 6:18 AM, Richard Damon wrote:
So, you don't understand the nature of simulation.
MIT Professor Michael Sipser has agreed that the following
verbatim paragraph is correct:
a) If simulating halt decider H correctly simulates its input D
until H
correctly determines that its simulated D would never stop running >>>>>> unless aborted then
(b) H can abort its simulation of D and correctly report that D
specifies a non-halting sequence of configurations.
Thus it is established that:
The behavior of D correctly simulated by H
is the correct behavior to measure.
*IF* H correctly simulates per the definition of a UTM
It doesn't, so it isn't.
The behavior of ⟨Ĥ⟩ correctly simulated by embedded_H
is the correct behavior to measure.
Since the simulation done by embedded_H does not meet the
definition of "correct simulation" that Professer Sipser uses, your
arguement is VOID.
You are just PROVING your stupidity.
Always with the strawman error.
I am saying that when ⟨Ĥ⟩ is correctly simulated by embedded_H it >>>> cannot
possibly reach its own simulated final state of ⟨Ĥ.qn⟩ in any finite >>>> number of steps because Ĥ is defined to have a pathological
relationship
to embedded_H.
Since H never "Correctly Simulates" the input per the definition that
allows using a simulation instead of the actual machines behavior,
YOUR method is the STRAWMAN.
Maybe, but the question is asking for the lemons that the pure
When we examine the behavior of ⟨Ĥ⟩ simulated by a pure UTM or even >>>> another simulating halt decider such as embedded_H1 having no such
pathological relationship as the basis of the actual behavior of the
input to embedded_H we are comparing apples to lemons and rejecting the >>>> apples because lemons are too sour.
simulator gives, not the apples that you H gives.
H is just doing the wrong thing.
Your failure to see that just shows how blind you are to the actual
truth of the system.
H MUST answer about the behavior of the actual machine to be a Halt
Decider, since that is what the mapping a Halt Decider is supposed to
answer is based on.
When a simulating halt decider or even a plain UTM examines the behavior
of its input and the SHD or UTM has a pathological relationship to its
input then when another SHD or UTM not having a pathological
relationship to this input is an incorrect proxy for the actual behavior
of this actual input to the original SHD or UTM.
Nope. If an input has your "pathological" relationship to a UTM, then
YES, the UTM will generate an infinite behavior, but so does the machine itself, and ANY UTM will see that same infinite behavior.
The problem is that your SHD is NOT a UTM, and thus the fact that it
aborts its simulation and returns an answer changes the behavior of the machine that USED it (compared to a UTM), and thus to be "correct", the
SHD needs to take that into account.
I used to think that you were simply lying to play head games, I no
longer believe this. Now I believe that you are ensnared by group-think.
Nope, YOU are the one ensnared in your own fantasy world of lies.
Group-think is the way that 40% of the electorate could honestly believe
that significant voter fraud changed the outcome of the 2020 election
even though there has very persistently been zero evidence of this.
https://www.psychologytoday.com/us/basics/groupthink
And your fantasy world is why you think that a Halt Decider, which is DEFINED such that H(D,D) needs to return the answer "Halting" if D(D) Halts, is correct to give the answer non-halting even though D(D) Halts.
You are just believing your own lies.
Hopefully they will not believe that Fox news paid $787 million to trick
people into believing that there was no voter fraud.
No, they are paying $787 million BECAUSE they tried to gain views by
telling them the lies they wanted to hear.
At least they KNEW they were lying, but didn't care, and had to pay the price.
You don't seem to understand that you are lying just as bad as they were.
Maybe they will believe that tiny space aliens living in the heads of
Fox leadership took control of their brains and forced them to pay.
The actual behavior of the actual input is correctly determined by an
embedded UTM that has been adapted to watch the behavior of its
simulation of its input and match any non-halting behavior patterns.
But embedded_H isn't "embedded_UTM", so you are just living a lie.
You are just too ignorant to understand that a UTM can't be modified to
stop its simulation and still be a UTM.
That is like saying that all racing cars are street legal, because they
are based on the design of cars that were street legal.
On 4/21/2023 8:02 PM, Richard Damon wrote:
On 4/21/23 8:51 PM, olcott wrote:
On 4/21/2023 6:35 PM, Richard Damon wrote:
On 4/21/23 7:22 PM, olcott wrote:
On 4/21/2023 5:36 PM, Richard Damon wrote:
On 4/21/23 11:35 AM, olcott wrote:
On 4/21/2023 6:18 AM, Richard Damon wrote:
So, you don't understand the nature of simulation.
MIT Professor Michael Sipser has agreed that the following
verbatim paragraph is correct:
a) If simulating halt decider H correctly simulates its input D
until H
correctly determines that its simulated D would never stop running >>>>>>> unless aborted then
(b) H can abort its simulation of D and correctly report that D
specifies a non-halting sequence of configurations.
Thus it is established that:
The behavior of D correctly simulated by H
is the correct behavior to measure.
*IF* H correctly simulates per the definition of a UTM
It doesn't, so it isn't.
The behavior of ⟨Ĥ⟩ correctly simulated by embedded_H
is the correct behavior to measure.
Since the simulation done by embedded_H does not meet the
definition of "correct simulation" that Professer Sipser uses,
your arguement is VOID.
You are just PROVING your stupidity.
Always with the strawman error.
I am saying that when ⟨Ĥ⟩ is correctly simulated by embedded_H it >>>>> cannot
possibly reach its own simulated final state of ⟨Ĥ.qn⟩ in any finite >>>>> number of steps because Ĥ is defined to have a pathological
relationship
to embedded_H.
Since H never "Correctly Simulates" the input per the definition
that allows using a simulation instead of the actual machines
behavior, YOUR method is the STRAWMAN.
Maybe, but the question is asking for the lemons that the pure
When we examine the behavior of ⟨Ĥ⟩ simulated by a pure UTM or even >>>>> another simulating halt decider such as embedded_H1 having no such
pathological relationship as the basis of the actual behavior of the >>>>> input to embedded_H we are comparing apples to lemons and rejecting
the
apples because lemons are too sour.
simulator gives, not the apples that you H gives.
H is just doing the wrong thing.
Your failure to see that just shows how blind you are to the actual
truth of the system.
H MUST answer about the behavior of the actual machine to be a Halt
Decider, since that is what the mapping a Halt Decider is supposed
to answer is based on.
When a simulating halt decider or even a plain UTM examines the behavior >>> of its input and the SHD or UTM has a pathological relationship to its
input then when another SHD or UTM not having a pathological
relationship to this input is an incorrect proxy for the actual behavior >>> of this actual input to the original SHD or UTM.
Nope. If an input has your "pathological" relationship to a UTM, then
YES, the UTM will generate an infinite behavior, but so does the
machine itself, and ANY UTM will see that same infinite behavior.
The point is that the behavior of the input to embedded_H must be
measured relative to the pathological relationship or it is not
measuring the actual behavior of the actual input.
I know that this is totally obvious, thus I had to conclude that anyone denying it must be a liar who is only playing head games for sadistic pleasure.
I did not take into account the power of group think that got at least
100 million Americans to believe the election fraud changed the outcome
of the 2020 election even though there is zero evidence of this
anywhere. Even a huge cash prize offered by the Lt. governor of Texas
only turned up one Republican that cheated.
Only during the 2022 election did it look like this was starting to turn around a little bit.
The problem is that you SHD is NOT a UTM, and thus the fact that it
aborts its simulation and returns an answer changes the behavior of
the machine that USED it (compared to a UTM), and thus to be
"correct", the SHD needs to take that into account.
I used to think that you were simply lying to play head games, I no
longer believe this. Now I believe that you are ensnared by group-think.
Nope, YOU are the one ensnared in your own fantasy world of lies.
Group-think is the way that 40% of the electorate could honestly believe >>> that significant voter fraud changed the outcome of the 2020 election
even though there has very persistently been zero evidence of this.
https://www.psychologytoday.com/us/basics/groupthink
And your fantasy world is why you think that a Halt Decider, which is
DEFINED such that H(D,D) needs to return the answer "Halting" if D(D)
Halts, is correct to give the answer non-halting even though D(D) Halts.
You are just believing your own lies.
Hopefully they will not believe that Fox news paid $787 million to trick >>> people into believing that there was no voter fraud.
No, they are paying $787 million BECAUSE they tried to gain views by
telling them the lies they wanted to hear.
Yes, but even now 30% of the electorate may still believe the lies.
At least they KNEW they were lying, but didn't care, and had to pay
the price.
You don't seem to understand that you are lying just as bad as they were.
I am absolutely not lying. Truth is the most important thing to me, even much more important than love.
All of this work is aimed at formalizing the notion of truth because the
HP, LP, IT and Tarski's Undefinability theorem are all instances of the
same Olcott(2004) pathological self-reference error.
Maybe they will believe that tiny space aliens living in the heads of
Fox leadership took control of their brains and forced them to pay.
The actual behavior of the actual input is correctly determined by an
embedded UTM that has been adapted to watch the behavior of its
simulation of its input and match any non-halting behavior patterns.
But embedded_H isn't "embedded_UTM", so you are just living a lie.
embedded_H is embedded_UTM for the first N steps even when these N steps include 10,000 recursive simulations.
After 10,000 recursive simulations even an idiot can infer that more
will not cause ⟨Ĥ⟩ simulated by embedded_H to reach its own final state of ⟨Ĥ.qn⟩ in any finite number of steps.
You and I both know that mathematical induction proves this in far less
than 10,000 recursive simulations. Why you deny it when you should know
this is true is beyond me.
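The induction being appealed to here has the usual shape; stated as a schema only, with P(k) meaning "after k levels of recursive simulation the simulated ⟨Ĥ⟩ has not reached ⟨Ĥ.qn⟩". Whether both premises actually hold for embedded_H is exactly what is disputed in the reply below.

\[
  \bigl( P(1) \;\wedge\; \forall k\,\bigl(P(k) \rightarrow P(k+1)\bigr) \bigr)
  \;\Longrightarrow\; \forall k\, P(k)
\]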
You are just too ignorant to understand that a UTM can't be modified to
stop its simulation and still be a UTM.
That is like saying that all racing cars are street legal, because
they are based on the design of cars that were street legal.
On 4/21/2023 5:34 PM, Richard Damon wrote:
On 4/21/23 11:16 AM, olcott wrote:
On 4/21/2023 7:17 AM, Mr Flibble wrote:
On 20/04/2023 8:20 pm, olcott wrote:
On 4/20/2023 2:08 PM, Mr Flibble wrote:
On 20/04/2023 6:49 pm, olcott wrote:
On 4/20/2023 12:32 PM, Mr Flibble wrote:
On 19/04/2023 11:52 pm, olcott wrote:
On 4/19/2023 4:14 PM, Mr Flibble wrote:
On 19/04/2023 10:10 pm, olcott wrote:
On 4/19/2023 3:32 PM, Mr Flibble wrote:
On 19/04/2023 8:39 pm, olcott wrote:
On 4/19/2023 1:47 PM, Mr Flibble wrote:The Flibble Signaling Simulating Halt Decider (SSHD) does >>>>>>>>>>>> not have any infinite recursion thereby proving that
On 18/04/2023 11:39 pm, olcott wrote:
On 4/18/2023 4:55 PM, Mr Flibble wrote:Nope. For H to be a halt decider it must return a halt >>>>>>>>>>>>>> decision to its caller in finite time
On 18/04/2023 4:58 pm, olcott wrote:
On 4/18/2023 6:32 AM, Richard Damon wrote:
On 4/18/23 1:00 AM, olcott wrote:You agreed that the first N steps are correctly simulated. >>>>>>>>>>>>>>>>>
A simulating halt decider correctly predicts whether >>>>>>>>>>>>>>>>>>> or not its
correctly simulated input can possibly reach its own >>>>>>>>>>>>>>>>>>> final state and
halt. It does this by correctly recognizing several >>>>>>>>>>>>>>>>>>> non-halting behavior
patterns in a finite number of steps of correct >>>>>>>>>>>>>>>>>>> simulation. Inputs that
do terminate are simply simulated until they complete. >>>>>>>>>>>>>>>>>>>
Except t doesn't o this for the "pathological" program. >>>>>>>>>>>>>>>>>>
The "Pathological Program" when built on such a >>>>>>>>>>>>>>>>>> Decider that does give an answer, which you say will >>>>>>>>>>>>>>>>>> be non-halting, and then "Correctly Simulated" by >>>>>>>>>>>>>>>>>> giving it representation to a UTM, we see that the >>>>>>>>>>>>>>>>>> simulation reaches a final state.
Thus, your H was WRONG t make the answer. And the >>>>>>>>>>>>>>>>>> problem is you have added a pattern that isn't always >>>>>>>>>>>>>>>>>> non-halting.
When a simulating halt decider correctly simulates N >>>>>>>>>>>>>>>>>>> steps of its input
it derives the exact same N steps that a pure UTM >>>>>>>>>>>>>>>>>>> would derive because
it is itself a UTM with extra features.
But if ISN'T a "UTM" any more, because some of the >>>>>>>>>>>>>>>>>> features you added have removed essential features >>>>>>>>>>>>>>>>>> needed for it to be an actual UTM. That you make this >>>>>>>>>>>>>>>>>> claim shows you don't actually know what a UTM is. >>>>>>>>>>>>>>>>>>
This is like saying a NASCAR Racing Car is a Street >>>>>>>>>>>>>>>>>> Legal vehicle, since it started as one and just had >>>>>>>>>>>>>>>>>> some extra features axded.
My reviewers cannot show that any of the extra >>>>>>>>>>>>>>>>>>> features added to the UTM
change the behavior of the simulated input for the >>>>>>>>>>>>>>>>>>> first N steps of simulation:
(a) Watching the behavior doesn't change it. >>>>>>>>>>>>>>>>>>> (b) Matching non-halting behavior patterns doesn't >>>>>>>>>>>>>>>>>>> change it
(c) Even aborting the simulation after N steps >>>>>>>>>>>>>>>>>>> doesn't change the first N steps.
No one claims that it doesn't correctly reproduce the >>>>>>>>>>>>>>>>>> first N steps of the behavior, that is a Strawman >>>>>>>>>>>>>>>>>> argumen.
Because of all this we can know that the first N >>>>>>>>>>>>>>>>>>> steps of input D
simulated by simulating halt decider H are the actual >>>>>>>>>>>>>>>>>>> behavior that D
presents to H for these same N steps.
*computation that halts*… “the Turing machine will >>>>>>>>>>>>>>>>>>> halt whenever it enters a final state” >>>>>>>>>>>>>>>>>>> (Linz:1990:234)rrr
Right, so we are concerned about the behavior of the >>>>>>>>>>>>>>>>>> ACTUAL machine, not a partial simulation of it. >>>>>>>>>>>>>>>>>> H(D,D) returns non-halting, but D(D) Halts, so the >>>>>>>>>>>>>>>>>> answer is wrong.
When we see (after N steps) that D correctly >>>>>>>>>>>>>>>>>>> simulated by H cannot
possibly reach its simulated final state in any >>>>>>>>>>>>>>>>>>> finite number of steps
of correct simulation then we have conclusive proof >>>>>>>>>>>>>>>>>>> that D presents non-
halting behavior to H.
But it isn't "Correctly Simulated by H"
It turns out that the non-halting behavior pattern is >>>>>>>>>>>>>>>>> correctly
recognized in the first N steps.
Your assumption that a program that calls H is >>>>>>>>>>>>>>>> non-halting is erroneous:
My new paper anchors its ideas in actual Turing machines >>>>>>>>>>>>>>> so it is
unequivocal. The first two pages re only about the Linz >>>>>>>>>>>>>>> Turing
machine based proof.
The H/D material is now on a single page and all reference >>>>>>>>>>>>>>> to the x86 language has been stripped and replaced with >>>>>>>>>>>>>>> analysis entirely in C.
With this new paper even Richard admits that the first N >>>>>>>>>>>>>>> steps
UTM based simulated by a simulating halt decider are >>>>>>>>>>>>>>> necessarily the
actual behavior of these N steps.
*Simulating (partial) Halt Deciders Defeat the Halting >>>>>>>>>>>>>>> Problem Proofs*
https://www.researchgate.net/publication/369971402_Simulating_partial_Halt_Deciders_Defeat_the_Halting_Problem_Proofs
void Px(void (*x)())
{
(void) H(x, x);
return;
}
Px halts (it discards the result that H returns); your >>>>>>>>>>>>>>>> decider thinks that Px is non-halting which is an >>>>>>>>>>>>>>>> obvious error due to a design flaw in the architecture >>>>>>>>>>>>>>>> of your decider. Only the Flibble Signaling Simulating >>>>>>>>>>>>>>>> Halt Decider (SSHD) correctly handles this case. >>>>>>>>>>>>>>
Although H must always return to some caller H is not >>>>>>>>>>>>> allowed to return
to any caller that essentially calls H in infinite recursion. >>>>>>>>>>>>
It overrode that behavior that was specified by the machine >>>>>>>>>>> code for Px.
Nope. You SHD is not a halt decider as
I was not even talking about my SHD, I was talking about how >>>>>>>>> your program does its simulation incorrectly.
My SSHD does not do its simulation incorrectly: it does its
simulation just like I have defined it as evidenced by the fact >>>>>>>> that it returns a correct halting decision for Px; something
your broken SHD gets wrong.
In order for you to have Px simulated by H terminate normally you >>>>>>> must change the behavior of Px away from the behavior that its
x86 code specifies.
Your "x86 code" has nothing to do with how my halt decider works;
I am using an entirely different simulation method, one that
actually works.
void Px(void (*x)())
{
(void) H(x, x);
return;
}
Px correctly simulated by H cannot possibly reach past its
machine address of: [00001b3d].
_Px()
[00001b32] 55 push ebp
[00001b33] 8bec mov ebp,esp
[00001b35] 8b4508 mov eax,[ebp+08]
[00001b38] 50 push eax // push address of Px >>>>>>> [00001b39] 8b4d08 mov ecx,[ebp+08]
[00001b3c] 51 push ecx // push address of Px >>>>>>> [00001b3d] e800faffff call 00001542 // Call H
[00001b42] 83c408 add esp,+08
[00001b45] 5d pop ebp
[00001b46] c3 ret
Size in bytes:(0021) [00001b46]
What you are doing is the the same as recognizing that
_Infinite_Loop()
never halts, forcing it to break out of its infinite loop and
jump to
its "ret" instruction
_Infinite_Loop()
[00001c62] 55 push ebp
[00001c63] 8bec mov ebp,esp
[00001c65] ebfe jmp 00001c65
[00001c67] 5d pop ebp
[00001c68] c3 ret
Size in bytes:(0007) [00001c68]
No I am not: there is no infinite loop in Px above; forking the
simulation into two branches and returning a different halt
decision to each branch is a perfectly valid SHD design; again a
design, unlike yours, that actually works.
If you say that Px correctly simulated by H ever reaches its own final >>>>> "return" statement and halts you are incorrect.
Px halts if H is (or is part of) a genuine halt decider.
The simulated Px only halts if it reaches its own final state in a
finite number of steps of correct simulation. It can't possibly do this.
So, you're saying that a UTM doesn't do a "Correct Simulation"?
Always with the strawman error.
I am saying that when Px is correctly simulated by H it cannot possibly
reach its own simulated "return" instruction in any finite number of
steps because Px is defined to have a pathological relationship to H.
When we examine the behavior of Px simulated by a pure simulator or even another simulating halt decider such as H1 having no such pathological relationship as the basis of the actual behavior of the input to H we
are comparing apples to lemons and rejecting the apples because lemons
are too sour.
On 4/21/23 10:10 PM, olcott wrote:
On 4/21/2023 8:02 PM, Richard Damon wrote:
On 4/21/23 8:51 PM, olcott wrote:
On 4/21/2023 6:35 PM, Richard Damon wrote:
On 4/21/23 7:22 PM, olcott wrote:
On 4/21/2023 5:36 PM, Richard Damon wrote:
On 4/21/23 11:35 AM, olcott wrote:
On 4/21/2023 6:18 AM, Richard Damon wrote:
So, you don't understand the nature of simulation.
MIT Professor Michael Sipser has agreed that the following
verbatim paragraph is correct:
a) If simulating halt decider H correctly simulates its input D >>>>>>>> until H
correctly determines that its simulated D would never stop running >>>>>>>> unless aborted then
(b) H can abort its simulation of D and correctly report that D >>>>>>>> specifies a non-halting sequence of configurations.
Thus it is established that:
The behavior of D correctly simulated by H
is the correct behavior to measure.
*IF* H correctly simulates per the definition of a UTM
It doesn't, so it isn't.
The behavior of ⟨Ĥ⟩ correctly simulated by embedded_H
is the correct behavior to measure.
Since the simulation done by embedded_H does not meet the
definition of "correct simulation" that Professer Sipser uses,
your arguement is VOID.
You are just PROVING your stupidity.
Always with the strawman error.
I am saying that when ⟨Ĥ⟩ is correctly simulated by embedded_H it >>>>>> cannot
possibly reach its own simulated final state of ⟨Ĥ.qn⟩ in any finite
number of steps because Ĥ is defined to have a pathological
relationship
to embedded_H.
Since H never "Correctly Simulates" the input per the definition
that allows using a simulation instead of the actual machines
behavior, YOUR method is the STRAWMAN.
Maybe, but the question is asking for the lemons that the pure
When we examine the behavior of ⟨Ĥ⟩ simulated by a pure UTM or even >>>>>> another simulating halt decider such as embedded_H1 having no such >>>>>> pathological relationship as the basis of the actual behavior of the >>>>>> input to embedded_H we are comparing apples to lemons and
rejecting the
apples because lemons are too sour.
simulator gives, not the apples that you H gives.
H is just doing the wrong thing.
Your failure to see that just shows how blind you are to the actual
truth of the system.
H MUST answer about the behavior of the actual machine to be a Halt
Decider, since that is what the mapping a Halt Decider is supposed
to answer is based on.
When a simulating halt decider or even a plain UTM examines the
behavior
of its input and the SHD or UTM has a pathological relationship to its >>>> input then when another SHD or UTM not having a pathological
relationship to this input is an incorrect proxy for the actual
behavior
of this actual input to the original SHD or UTM.
Nope. If an input has your "pathological" relationship to a UTM, then
YES, the UTM will generate an infinite behavior, but so does the
machine itself, and ANY UTM will see that same infinite behavior.
The point is that that behavior of the input to embedded_H must be
measured relative to the pathological relationship or it is not
measuring the actual behavior of the actual input.
No, the behavior measured must be the DEFINED behavior, which IS the
behavior of the ACTUAL MACHINE.
That Halts, so H gets the wrong answer.
I know that this is totally obvious thus I had to conclude that anyone
denying it must be a liar that is only playing head games for sadistic
pleasure.
No, the fact that you think what you say shows that you are a TOTAL IDIOT.
I did not take into account the power of group think that got at least
100 million Americans to believe the election fraud changed the outcome
of the 2020 election even though there is zero evidence of this
anywhere. Even a huge cash prize offered by the Lt. governor of Texas
only turned up one Republican that cheated.
Nope, you just don't understand the truth. You aren't ready for the truth, because it shows that you have been wrong, and your fragile ego can't handle that.
Only during the 2022 election did it look like this was starting to turn
around a little bit.
You have been wrong a lot longer than that.
The problem is that you SHD is NOT a UTM, and thus the fact that it
aborts its simulation and returns an answer changes the behavior of
the machine that USED it (compared to a UTM), and thus to be
"correct", the SHD needs to take that into account.
I used to think that you were simply lying to play head games, I no
longer believe this. Now I believe that you are ensnared by
group-think.
Nope, YOU are the one ensnared in your own fantasy world of lies.
Group-think is the way that 40% of the electorate could honestly
believe
that significant voter fraud changed the outcome of the 2020 election
even though there has very persistently been zero evidence of this.
https://www.psychologytoday.com/us/basics/groupthink
And you fantasy world is why you think that a Halt Decider, which is
DEFINIED that H(D,D) needs to return the answer "Halting" if D(D)
Halts, is correct to give the answer non-halting even though D(D) Ha;ts. >>>
You are just beliving your own lies.
Hopefully they will not believe that Fox news paid $787 million to
trick
people into believing that there was no voter fraud.
No, they are paying $787 million BECAUSE they tried to gain views by
telling them the lies they wanted to hear.
Yes, but even now 30% of the electorate may still believe the lies.
So, you seem to believe in 100% of your lies.
Yes, there is a portion of the population that fails to see what is true, because, like you, they think their own ideas are more important than what actually is true. As was philosophized, they ignore the truth, but listen to what their itching ears want to hear. That fits you to a T, as you won't see the errors that are pointed out to you, and you make up more lies to try to hide your errors.
At least they KNEW they were lying, but didn't care, and had to pay
the price.
You don't seem to understand that you are lying just as bad as they
were.
I am absolutely not lying Truth is the most important thing to me even
much more important than love.
Then why do you lie so much, or are you just that stupid?
It is clear you just don't know what you are talking about and are just making stuff up.
It seems you have lied so much that you have convinced yourself of your
lies, and can no longer bear to let the truth in, so you just deny
anything that goes against your lies.
You have killed your own mind.
All of this work is aimed at formalizing the notion of truth because the
HP, LP, IT and Tarski's Undefinability theorem are all instances of the
same Olcott(2004) pathological self-reference error.
So, maybe you need to realize that Truth has to match what is actually
true, and you need to work with the definitions that exist, not the
alternate ideas you make up.
A Halt Decider is DEFINED such that
H(M,w) needs to answer about the behavior of M(w).
You don't seem to understand that, and it seems to even be a blind spot,
as you like dropping that part when you quote what H is supposed to do.
You seem to see "see" self-references where there are not actual self-references, but the effect of the "self-reference" is built from
simpler components. It seems you don't even understand what a "Self-Reference" actually is, maybe even what a "reference" actually is.
For the halt decider, P is built on a COPY of the claimed decider and
given a representation of that resultant machine. Not a single reference
in sight.
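A minimal sketch of the construction being described, with a stub decider so that it runs; the names are mine and H below is a placeholder that always answers "loops", not a real halt decider. The only point illustrated is structural: D embeds a copy of the decider through an ordinary call and is handed a description of itself from outside, with no literal self-reference anywhere.

#include <stdio.h>

typedef int (*prog_t)(void *);        /* a "program": takes one input, returns int */

/* Stub stand-in for the claimed decider: 1 = "halts", 0 = "loops".
 * It always answers "loops", purely so this sketch is runnable. */
static int H(prog_t p, void *input) { (void)p; (void)input; return 0; }

static int D(void *self_description)  /* D's own description is handed in from outside */
{
    prog_t self = *(prog_t *)self_description;
    if (H(self, self_description))    /* D embeds a COPY of H via an ordinary call */
        for (;;) ;                    /* do the opposite: loop if H said "halts"   */
    return 0;                         /* and halt if H said "loops"                */
}

int main(void)
{
    prog_t d = D;                     /* a "description" of D, built outside of D */
    printf("D(D) returned %d, i.e. it halted, "
           "contradicting the stub H's answer of \"loops\"\n", D(&d));
    return 0;
}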
Maybe they will believe that tiny space aliens living in the heads of
Fox leadership took control of their brains and forced them to pay.
The actual behavior of the actual input is correctly determined by an
embedded UTM that has been adapted to watch the behavior of its
simulation of its input and match any non-halting behavior patterns.
But embedded_H isn't "embedded_UTM", so you are just living a lie.
embedded_H is embedded_UTM for the first N steps even when these N steps
include 10,000 recursive simulations.
Nope. Just your LIES. You clearly don't understand what a UTM is.
After 10,000 recursive simulations even an idiot can infer that more
will not cause ⟨Ĥ⟩ simulated by embedded_H to reach its own final state >> of ⟨Ĥ.qn⟩ in any finite number of steps.
The fact that embedded_H does 10,000 recursive simulations and then aborts means that H^ will halt after 10,001.
Your problem is that your logic only works if you can find an N that is bigger than N+1.
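Stated compactly in my own notation, with the premise (from this post) that Ĥ applied to ⟨Ĥ⟩ halts one recursion level after embedded_H stops simulating: for the abort-at-depth-N report of non-halting to match that behavior, one would need

\[
  \exists N \in \mathbb{N} :\; N > N + 1,
\]

which no N satisfies.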
You and I both know that mathematical induction proves this in far less
than 10,000 recursive simulations. Why you deny it when you should know
this is true is beyond me.
Nope, you are just proving that you don't even know what mathematical induction means.
You are just too stupid.
You are just proving you are a liar.
On 4/21/2023 9:37 PM, Richard Damon wrote:
On 4/21/23 10:10 PM, olcott wrote:
On 4/21/2023 8:02 PM, Richard Damon wrote:
On 4/21/23 8:51 PM, olcott wrote:
On 4/21/2023 6:35 PM, Richard Damon wrote:
On 4/21/23 7:22 PM, olcott wrote:
On 4/21/2023 5:36 PM, Richard Damon wrote:
On 4/21/23 11:35 AM, olcott wrote:
On 4/21/2023 6:18 AM, Richard Damon wrote:
So, you don't understand the nature of simulation.
MIT Professor Michael Sipser has agreed that the following
verbatim paragraph is correct:
a) If simulating halt decider H correctly simulates its input D >>>>>>>>> until H
correctly determines that its simulated D would never stop running >>>>>>>>> unless aborted then
(b) H can abort its simulation of D and correctly report that D >>>>>>>>> specifies a non-halting sequence of configurations.
Thus it is established that:
The behavior of D correctly simulated by H
is the correct behavior to measure.
*IF* H correctly simulates per the definition of a UTM
It doesn't, so it isn't.
The behavior of ⟨Ĥ⟩ correctly simulated by embedded_H
is the correct behavior to measure.
Since the simulation done by embedded_H does not meet the
definition of "correct simulation" that Professer Sipser uses, >>>>>>>> your arguement is VOID.
You are just PROVING your stupidity.
Always with the strawman error.
I am saying that when ⟨Ĥ⟩ is correctly simulated by embedded_H it >>>>>>> cannot
possibly reach its own simulated final state of ⟨Ĥ.qn⟩ in any finite
number of steps because Ĥ is defined to have a pathological
relationship
to embedded_H.
Since H never "Correctly Simulates" the input per the definition
that allows using a simulation instead of the actual machines
behavior, YOUR method is the STRAWMAN.
Maybe, but the question is asking for the lemons that the pure
When we examine the behavior of ⟨Ĥ⟩ simulated by a pure UTM or even
another simulating halt decider such as embedded_H1 having no such >>>>>>> pathological relationship as the basis of the actual behavior of the >>>>>>> input to embedded_H we are comparing apples to lemons and
rejecting the
apples because lemons are too sour.
simulator gives, not the apples that you H gives.
H is just doing the wrong thing.
Your failure to see that just shows how blind you are to the
actual truth of the system.
H MUST answer about the behavior of the actual machine to be a
Halt Decider, since that is what the mapping a Halt Decider is
supposed to answer is based on.
When a simulating halt decider or even a plain UTM examines the
behavior
of its input and the SHD or UTM has a pathological relationship to its >>>>> input then when another SHD or UTM not having a pathological
relationship to this input is an incorrect proxy for the actual
behavior
of this actual input to the original SHD or UTM.
Nope. If an input has your "pathological" relationship to a UTM,
then YES, the UTM will generate an infinite behavior, but so does
the machine itself, and ANY UTM will see that same infinite behavior.
The point is that that behavior of the input to embedded_H must be
measured relative to the pathological relationship or it is not
measuring the actual behavior of the actual input.
No, the behavior measured must be the DEFINED behavior, which IS the
behavior of the ACTUAL MACHINE.
That Halts, so H gets the wrong answer.
I know that this is totally obvious thus I had to conclude that anyone
denying it must be a liar that is only playing head games for sadistic
pleasure.
No, the fact that you think what you say shows that you are a TOTAL
IDIOT.
I did not take into account the power of group think that got at least
100 million Americans to believe the election fraud changed the outcome
of the 2020 election even though there is zero evidence of this
anywhere. Even a huge cash prize offered by the Lt. governor of Texas
only turned up one Republican that cheated.
Nope, you just don't understand the truth. You are ready for the
truth, because it shows that you have been wrong, and you fragile ego
can't handle that.
Only during the 2022 election did it look like this was starting to turn >>> around a little bit.
You have been wrong a lot longer than that.
The problem is that you SHD is NOT a UTM, and thus the fact that it
aborts its simulation and returns an answer changes the behavior of
the machine that USED it (compared to a UTM), and thus to be
"correct", the SHD needs to take that into account.
I used to think that you were simply lying to play head games, I no
longer believe this. Now I believe that you are ensnared by
group-think.
Nope, YOU are the one ensnared in your own fantasy world of lies.
Group-think is the way that 40% of the electorate could honestly
believe
that significant voter fraud changed the outcome of the 2020 election >>>>> even though there has very persistently been zero evidence of this.
https://www.psychologytoday.com/us/basics/groupthink
And you fantasy world is why you think that a Halt Decider, which is
DEFINIED that H(D,D) needs to return the answer "Halting" if D(D)
Halts, is correct to give the answer non-halting even though D(D)
Ha;ts.
You are just beliving your own lies.
Hopefully they will not believe that Fox news paid $787 million to
trick
people into believing that there was no voter fraud.
No, they are paying $787 million BECAUSE they tried to gain views by
telling them the lies they wanted to hear.
Yes, but even now 30% of the electorate may still believe the lies.
So, you seem to beleive in 100% of your lies.
Yes, there is a portion of the population that fails to see what is
true, because, like you, they think their own ideas are more important
that what actually is true. As was philosophized, they ignore the
truth, but listen to what their itching ears what to hear. That fits
you to the T, as you won't see the errors that are pointed out to you,
and you make up more lies to try to hide your errors.
At least they KNEW they were lying, but didn't care, and had to pay
the price.
You don't seem to understand that you are lying just as bad as they
were.
I am absolutely not lying Truth is the most important thing to me even
much more important than love.
THen why to you lie so much, or are you just that stupid.
It is clear you just don't know what you are talking about and are
just making stuff up.
It seems you have lied so much that you have convinced yourself of
your lies, and can no longer bear to let the truth in, so you just
deny anything that goes against your lies.
You have killed your own mind.
All of this work is aimed at formalizing the notion of truth because the >>> HP, LP, IT and Tarski's Undefinability theorem are all instances of the
same Olcott(2004) pathological self-reference error.
So, maybe you need to realize that Truth has to match what is actually
true, and you need to work with the definitions that exist, not the
alternate ideas you make up.
A Halt Decider is DEFINED that
H(M,w) needs to answer about the behavior of M(w).
You don't see to understand that, and it seems to even be a blind
spot, as you like dropping that part when you quote what H is supposed
to do.
You seem to see "see" self-references where there are not actual
self-references, but the effect of the "self-reference" is built from
simpler components. It seems you don't even understand what a
"Self-Reference" actually is, maybe even what a "reference" actually is.
For the halt decider, P is built on a COPY of the claimed decider and
given a representation of that resultand machine. Not a single
reference in sight.
Maybe they will believe that tiny space aliens living in the heads of >>>>> Fox leadership took control of their brains and forced them to pay.
The actual behavior of the actual input is correctly determined by an >>>>> embedded UTM that has been adapted to watch the behavior of its
simulation of its input and match any non-halting behavior patterns. >>>>>
But embedded_H isn't "embedded_UTM", so you are just living a lie.
embedded_H is embedded_UTM for the first N steps even when these N steps >>> include 10,000 recursive simulations.
Nope. Just your LIES. You clearly don't understand what a UTM is.
After 10,000 recursive simulations even an idiot can infer that more
will not cause ⟨Ĥ⟩ simulated by embedded_H to reach its own final state
of ⟨Ĥ.qn⟩ in any finite number of steps.
The fact that if embedded_H does 10,000 recursive simulations and
aborts means that H^ will halt after 10,001.
Your propblem is you logic only works if you can find an N that is
bigger than N+1
You and I both know that mathematical induction proves this in far less
than 10,000 recursive simulations. Why you deny it when you should know
this is true is beyond me.
Nope, you are just proving that you don't even know what mathematical
induction means.
You are just too stupid.
You are just proving you are a liar.
You know that a halt decider must compute the mapping from its actual input based on the actual specified behavior of this input, and then you contradict yourself by insisting that the actual behavior of this actual input is the wrong behavior to measure.
On 4/24/23 10:36 AM, olcott wrote:
On 4/21/2023 9:37 PM, Richard Damon wrote:
On 4/21/23 10:10 PM, olcott wrote:
On 4/21/2023 8:02 PM, Richard Damon wrote:
On 4/21/23 8:51 PM, olcott wrote:
On 4/21/2023 6:35 PM, Richard Damon wrote:
On 4/21/23 7:22 PM, olcott wrote:
On 4/21/2023 5:36 PM, Richard Damon wrote:
On 4/21/23 11:35 AM, olcott wrote:
On 4/21/2023 6:18 AM, Richard Damon wrote:
So, you don't understand the nature of simulation.
MIT Professor Michael Sipser has agreed that the following >>>>>>>>>> verbatim paragraph is correct:
a) If simulating halt decider H correctly simulates its input >>>>>>>>>> D until H
correctly determines that its simulated D would never stop >>>>>>>>>> running
unless aborted then
(b) H can abort its simulation of D and correctly report that D >>>>>>>>>> specifies a non-halting sequence of configurations.
Thus it is established that:
The behavior of D correctly simulated by H
is the correct behavior to measure.
*IF* H correctly simulates per the definition of a UTM
It doesn't, so it isn't.
The behavior of ⟨Ĥ⟩ correctly simulated by embedded_H >>>>>>>>>> is the correct behavior to measure.
Since the simulation done by embedded_H does not meet the
definition of "correct simulation" that Professer Sipser uses, >>>>>>>>> your arguement is VOID.
You are just PROVING your stupidity.
Always with the strawman error.
I am saying that when ⟨Ĥ⟩ is correctly simulated by embedded_H >>>>>>>> it cannot
possibly reach its own simulated final state of ⟨Ĥ.qn⟩ in any >>>>>>>> finite
number of steps because Ĥ is defined to have a pathological
relationship
to embedded_H.
Since H never "Correctly Simulates" the input per the definition >>>>>>> that allows using a simulation instead of the actual machines
behavior, YOUR method is the STRAWMAN.
Maybe, but the question is asking for the lemons that the pure
When we examine the behavior of ⟨Ĥ⟩ simulated by a pure UTM or even
another simulating halt decider such as embedded_H1 having no such >>>>>>>> pathological relationship as the basis of the actual behavior of >>>>>>>> the
input to embedded_H we are comparing apples to lemons and
rejecting the
apples because lemons are too sour.
simulator gives, not the apples that you H gives.
H is just doing the wrong thing.
Your failure to see that just shows how blind you are to the
actual truth of the system.
H MUST answer about the behavior of the actual machine to be a
Halt Decider, since that is what the mapping a Halt Decider is
supposed to answer is based on.
When a simulating halt decider or even a plain UTM examines the
behavior
of its input and the SHD or UTM has a pathological relationship to >>>>>> its
input then when another SHD or UTM not having a pathological
relationship to this input is an incorrect proxy for the actual
behavior
of this actual input to the original SHD or UTM.
Nope. If an input has your "pathological" relationship to a UTM,
then YES, the UTM will generate an infinite behavior, but so does
the machine itself, and ANY UTM will see that same infinite behavior. >>>>>
The point is that that behavior of the input to embedded_H must be
measured relative to the pathological relationship or it is not
measuring the actual behavior of the actual input.
No, the behavior measured must be the DEFINED behavior, which IS the
behavior of the ACTUAL MACHINE.
That Halts, so H gets the wrong answer.
I know that this is totally obvious thus I had to conclude that anyone >>>> denying it must be a liar that is only playing head games for sadistic >>>> pleasure.
No, the fact that you think what you say shows that you are a TOTAL
IDIOT.
I did not take into account the power of group think that got at least >>>> 100 million Americans to believe the election fraud changed the outcome >>>> of the 2020 election even though there is zero evidence of this
anywhere. Even a huge cash prize offered by the Lt. governor of Texas
only turned up one Republican that cheated.
Nope, you just don't understand the truth. You are ready for the
truth, because it shows that you have been wrong, and you fragile ego
can't handle that.
Only during the 2022 election did it look like this was starting to
turn
around a little bit.
You have been wrong a lot longer than that.
The problem is that you SHD is NOT a UTM, and thus the fact that it
aborts its simulation and returns an answer changes the behavior of
the machine that USED it (compared to a UTM), and thus to be
"correct", the SHD needs to take that into account.
I used to think that you were simply lying to play head games, I no >>>>>> longer believe this. Now I believe that you are ensnared by
group-think.
Nope, YOU are the one ensnared in your own fantasy world of lies.
Group-think is the way that 40% of the electorate could honestly
believe
that significant voter fraud changed the outcome of the 2020 election >>>>>> even though there has very persistently been zero evidence of this. >>>>>> https://www.psychologytoday.com/us/basics/groupthink
And you fantasy world is why you think that a Halt Decider, which
is DEFINIED that H(D,D) needs to return the answer "Halting" if
D(D) Halts, is correct to give the answer non-halting even though
D(D) Ha;ts.
You are just beliving your own lies.
Hopefully they will not believe that Fox news paid $787 million to >>>>>> trick
people into believing that there was no voter fraud.
No, they are paying $787 million BECAUSE they tried to gain views
by telling them the lies they wanted to hear.
Yes, but even now 30% of the electorate may still believe the lies.
So, you seem to beleive in 100% of your lies.
Yes, there is a portion of the population that fails to see what is
true, because, like you, they think their own ideas are more
important that what actually is true. As was philosophized, they
ignore the truth, but listen to what their itching ears what to hear.
That fits you to the T, as you won't see the errors that are pointed
out to you, and you make up more lies to try to hide your errors.
At least they KNEW they were lying, but didn't care, and had to pay
the price.
You don't seem to understand that you are lying just as bad as they
were.
I am absolutely not lying. Truth is the most important thing to me, even much more important than love.
Then why do you lie so much, or are you just that stupid?
It is clear you just don't know what you are talking about and are
just making stuff up.
It seems you have lied so much that you have convinced yourself of
your lies, and can no longer bear to let the truth in, so you just
deny anything that goes against your lies.
You have killed your own mind.
All of this work is aimed at formalizing the notion of truth because the HP, LP, IT and Tarski's Undefinability theorem are all instances of the same Olcott(2004) pathological self-reference error.
So, maybe you need to realize that Truth has to match what is
actually true, and you need to work with the definitions that exist,
not the alternate ideas you make up.
A Halt Decider is DEFINED such that H(M,w) needs to answer about the behavior of M(w).
You don't seem to understand that, and it seems to even be a blind spot, as you like dropping that part when you quote what H is supposed to do.
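Spelled out (a standard restatement, not a quote from either poster), the mapping a halt decider is required to compute is:

   H(⟨M⟩, w) = 1  if M applied to w halts
   H(⟨M⟩, w) = 0  if M applied to w does not halt

and H(⟨M⟩, w) must itself halt with one of these two answers for every ⟨M⟩ and w.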
You seem to "see" self-references where there are not actual self-references, but the effect of the "self-reference" is built from simpler components. It seems you don't even understand what a "Self-Reference" actually is, maybe even what a "reference" actually is.
For the halt decider, P is built on a COPY of the claimed decider and given a representation of that resultant machine. Not a single reference in sight.
Maybe they will believe that tiny space aliens living in the heads of Fox leadership took control of their brains and forced them to pay.
The actual behavior of the actual input is correctly determined by an embedded UTM that has been adapted to watch the behavior of its simulation of its input and match any non-halting behavior patterns.
But embedded_H isn't "embedded_UTM", so you are just living a lie.
embedded_H is embedded_UTM for the first N steps even when these N
steps
include 10,000 recursive simulations.
Nope. Just your LIES. You clearly don't understand what a UTM is.
After 10,000 recursive simulations even an idiot can infer that more
will not cause ⟨Ĥ⟩ simulated by embedded_H to reach its own final state
of ⟨Ĥ.qn⟩ in any finite number of steps.
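For concreteness, a sketch of the kind of "non-halting behavior pattern" test being appealed to, written in C with every name assumed (none of this is from anyone's actual decider): the simulator records each simulated call into the decider and flags the input when the same machine/input pair recurs with nothing in between that could change the outcome.

#include <stddef.h>

/* Hypothetical trace entry: one per simulated call into the decider. */
struct frame {
    const void *machine;   /* description of the machine being decided on */
    const void *input;     /* the input it is being decided on            */
};

/* Flag the "recursive simulation" pattern: the newest simulated call into
   the decider has exactly the same arguments as the one before it.  This
   is only the detection step; whether matching it justifies a non-halting
   verdict is precisely what the rest of the thread disputes.             */
int matches_recursive_simulation(const struct frame *trace, size_t n)
{
    return n >= 2
        && trace[n - 1].machine == trace[n - 2].machine
        && trace[n - 1].input   == trace[n - 2].input;
}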
The fact is that if embedded_H does 10,000 recursive simulations and aborts, then Ĥ will halt after 10,001.
Your problem is your logic only works if you can find an N that is bigger than N+1.
You and I both know that mathematical induction proves this in far less than 10,000 recursive simulations. Why you deny it when you should know this is true is beyond me.
Nope, you are just proving that you don't even know what mathematical
induction means.
You are just too stupid.
You are just proving you are a liar.
You know that a halt decider must compute the mapping from its actual
input based on the actual specified behavior of this input and then
contradict yourself insisting that the actual behavior of this actual
input is the wrong behavior to measure.
Right, and the "ACtual Specified Behavior" of the input is DEFINED to be
the ACTUAL BEHAVIOR of the machine that input represents,
which will be
identical to the actual behavior of that input processed by an ACTUAL
UTM (which, by definition don't stop until they reach a final step).
By THAT definition, D(D) Halts since H(D,D) returns non-halting, and
thus is wrong.
On 4/24/2023 6:35 PM, Richard Damon wrote:
On 4/24/23 10:36 AM, olcott wrote:
On 4/21/2023 9:37 PM, Richard Damon wrote:
On 4/21/23 10:10 PM, olcott wrote:
On 4/21/2023 8:02 PM, Richard Damon wrote:
On 4/21/23 8:51 PM, olcott wrote:
On 4/21/2023 6:35 PM, Richard Damon wrote:
On 4/21/23 7:22 PM, olcott wrote:
On 4/21/2023 5:36 PM, Richard Damon wrote:
On 4/21/23 11:35 AM, olcott wrote:
On 4/21/2023 6:18 AM, Richard Damon wrote:
So, you don't understand the nature of simulation.
MIT Professor Michael Sipser has agreed that the following verbatim paragraph is correct:
a) If simulating halt decider H correctly simulates its input D until H correctly determines that its simulated D would never stop running unless aborted then
(b) H can abort its simulation of D and correctly report that D specifies a non-halting sequence of configurations.
Thus it is established that:
The behavior of D correctly simulated by H
is the correct behavior to measure.
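Read operationally, the quoted paragraph describes a decider of roughly the following shape. A minimal C sketch with hypothetical helpers (step, reached_final_state, matches_nonhalting_pattern are assumed names, not anything from the thread's code); it only shows where conditions (a) and (b) sit in the control flow:

struct sim_state;                                            /* simulated machine + tape (assumed) */
int  reached_final_state(const struct sim_state *s);         /* the simulated input has halted     */
int  matches_nonhalting_pattern(const struct sim_state *s);  /* condition (a)                      */
void step(struct sim_state *s);                              /* derive one more simulated step     */

/* Returns 1 when the simulated input halts, 0 when it is reported
   as non-halting per condition (b).                               */
int simulating_halt_decider(struct sim_state *s)
{
    for (;;) {
        if (reached_final_state(s))
            return 1;    /* halting inputs are simply simulated until they complete */
        if (matches_nonhalting_pattern(s))
            return 0;    /* abort the simulation and report non-halting             */
        step(s);         /* otherwise derive the next step exactly as a UTM would   */
    }
}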
*IF* H correctly simulates per the definition of a UTM
It doesn't, so it isn't.
The behavior of ⟨Ĥ⟩ correctly simulated by embedded_H
is the correct behavior to measure.
Since the simulation done by embedded_H does not meet the definition of "correct simulation" that Professor Sipser uses, your argument is VOID.
You are just PROVING your stupidity.
Always with the strawman error.
I am saying that when ⟨Ĥ⟩ is correctly simulated by embedded_H it cannot possibly reach its own simulated final state of ⟨Ĥ.qn⟩ in any finite number of steps because Ĥ is defined to have a pathological relationship to embedded_H.
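For reference, the "pathological relationship" both posters mean is the usual Linz-style template, restated here in the thread's own notation (a paraphrase, not a quote of either poster):

   Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ⊢* ∞   (if embedded_H decides its input halts)
   Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn        (if embedded_H decides its input does not halt)

where embedded_H is the copy of H sitting inside Ĥ and ⟨Ĥ⟩ is Ĥ's own description, so whichever verdict embedded_H reaches, Ĥ applied to ⟨Ĥ⟩ does the opposite.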
Since H never "Correctly Simulates" the input per the definition that allows using a simulation instead of the actual machine's behavior, YOUR method is the STRAWMAN.
Maybe, but the question is asking for the lemons that the pure simulator gives, not the apples that your H gives.
When we examine the behavior of ⟨Ĥ⟩ simulated by a pure UTM or even another simulating halt decider such as embedded_H1 having no such pathological relationship as the basis of the actual behavior of the input to embedded_H, we are comparing apples to lemons and rejecting the apples because lemons are too sour.
H is just doing the wrong thing.
Your failure to see that just shows how blind you are to the
actual truth of the system.
H MUST answer about the behavior of the actual machine to be a Halt Decider, since that is what the mapping a Halt Decider is supposed to answer is based on.
Right, and the "ACtual Specified Behavior" of the input is DEFINED to
be the ACTUAL BEHAVIOR of the machine that input represents,
*When you say that P must be ~P instead of P we know that you are wacky*
The actual behavior of ⟨Ĥ⟩ correctly simulated by embedded_H is necessarily the behavior of the first N steps of ⟨Ĥ⟩ correctly simulated by embedded_H. From these N steps we can prove by mathematical induction
that ⟨Ĥ⟩ correctly simulated by embedded_H cannot possibly reach its own final state of ⟨Ĥ.qn⟩ in any finite number of steps.
which will be identical to the actual behavior of that input processed by an ACTUAL UTM (which, by definition, doesn't stop until it reaches a final step).
The verified facts prove otherwise; people that persistently deny verified facts may be in danger of Hell fire, depending on their motives.
My motive is to mathematically formalize the notion of True(L,x) thus refuting Tarski and Gödel.
We really need this now because AI systems are hallucinating: https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)
By THAT definition, D(D) Halts since H(D,D) returns non-halting, and
thus is wrong.
On 4/25/23 12:29 AM, olcott wrote:
*When you say that P must be ~P instead of P we know that you are wacky*
What ~P
The actual behavior of ⟨Ĥ⟩ correctly simulated by embedded_H is
necessarily the behavior of the first N steps of ⟨Ĥ⟩ correctly simulated
by embedded_H. From these N steps we can prove by mathematical induction
that ⟨Ĥ⟩ correctly simulated by embedded_H cannot possibly reach its own final state of ⟨Ĥ.qn⟩ in any finite number of steps.
But we don't care about the "first N steps of ⟨Ĥ⟩ correctly simulated"; we care about the behavior of the actual machine Ĥ ⟨Ĥ⟩ or the actual FULL correct simulation of UTM ⟨Ĥ⟩ ⟨Ĥ⟩ [i.e. the input to H].
On 4/25/2023 6:56 AM, Richard Damon wrote:
But we don't care about the "first N steps of ⟨Ĥ⟩ correctly simulated"; we care about the behavior of the actual machine Ĥ ⟨Ĥ⟩ or the actual FULL correct simulation of UTM ⟨Ĥ⟩ ⟨Ĥ⟩ [i.e. the input to H].
The actual behavior of the input is the behavior of N steps correctly simulated by embedded_H because embedded_H remains a UTM until it aborts
its simulation.
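The claim that embedded_H "remains a UTM until it aborts" amounts to saying the abort branch is the only difference. Using the same assumed helpers as the earlier sketch (and again only as a sketch): with the pattern test removed, the loop is a plain step-by-step simulation, and for a genuinely non-halting input it simply never returns.

/* Same hypothetical helpers as before: step() and reached_final_state(). */
long utm_style_simulation(struct sim_state *s)
{
    long steps = 0;
    while (!reached_final_state(s)) {   /* no pattern check and no abort:      */
        step(s);                        /* keep deriving steps exactly as the  */
        steps++;                        /* simulated machine itself would take */
    }
    return steps;                       /* reached only if the input halts     */
}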
That these N steps provide a sufficient mathematical induction proof
that ⟨Ĥ⟩ correctly simulated by embedded_H cannot possibly reach its own final state of ⟨Ĥ.qn⟩ in any finite number of steps is the correct basis for the halt status decision by embedded_H.
That no textbook ever noticed that the behavior under pathological self-reference (Olcott 2004) could possibly vary from the behavior when PSR does not exist is only because everyone rejected the notion of a simulation as any basis for a halt decider out-of-hand without review.
For the whole history of the halting problem everyone simply assumed
that the halt decider must provide a correct yes/no answer when no
correct yes/no answer exists.
No one ever noticed that the pathological input would be trapped in
recursive simulation that never reaches any final state when this counter-example input is input to a simulating halt decider.
On 4/25/23 11:45 PM, olcott wrote:
The actual behavior of the input is the behavior of N steps correctly
simulated by embedded_H because embedded_H remains a UTM until it aborts
its simulation.
ILLOGICAL STATEMENT.
The actual behavior of the actual input is not necessarily the behavior
of a non-input as it has been assumed since forever.
On 4/26/23 10:34 PM, olcott wrote:
The actual behavior of the actual input is not necessarily the
behavior of a non-input as it has been assumed since forever.
But it isn't a "non-input" but is an actual property of the actual
input, and the property DEFINED as what the decider is supposed to decide.
Your inability to understand this simple requirement has made your life a total waste.
You just don't seem to understand even the simplest of truths, likely
because you are just a pathological liar and truth means nothing to you.
On 4/27/2023 6:19 AM, Richard Damon wrote:
On 4/26/23 10:34 PM, olcott wrote:
The actual behavior of the actual input is not necessarily the
behavior of a non-input as it has been assumed since forever.
But it isn't a "non-input" but is an actual property of the actual
input, and the property DEFINED as what the decider is supposed to
decide.
The actual behavior of the actual input MUST take into account that pathological relationship between Ĥ and embedded_H.
Your inability to understand this simple requirement has made your life a total waste.
I have said that this is my life's one legacy.
You just don't seem to understand even the simplest of truths, likely
because you are just a pathological liar and truth means nothing to you.
Everyone besides you believes that I believe what I say.
I can't be an actual liar if I believe what I say.
On 4/27/23 9:15 PM, olcott wrote:
Everyone besides you believes that I believe what I say.
I can't be an actual liar if I believe what I say.
You are just proving yourself to be a liar.
Just because you "believe" it doesn't totally make it not a lie.
On 4/27/2023 9:41 PM, Richard Damon wrote:
You are just proving yourself to be a liar.
Just because you "believe" it doesn't totally make it not a lie.
YES IT DOES (and you call me stupid) !!!
a false statement made with deliberate intent to deceive; an intentional untruth. https://www.dictionary.com/browse/lie
On 4/28/2023 6:40 AM, Richard Damon wrote:
https://www.dictionary.com/browse/lie
3 an inaccurate or untrue statement; falsehood:
When I went to school, history books were full of lies, and I won't
teach lies to kids.
5 to express what is false; convey a false impression.
It does not ALWAYS require actual knowledge that the statement is
incorrect.
Yes it does and you are stupid for saying otherwise.
On 4/28/23 10:59 AM, olcott wrote:
On 4/28/2023 6:40 AM, Richard Damon wrote:
https://www.dictionary.com/browse/lie
3 an inaccurate or untrue statement; falsehood:
When I went to school, history books were full of lies, and I
won't teach lies to kids.
5 to express what is false; convey a false impression.
It does not ALWAYS require actual knowledge that the statement is
incorrect.
Yes it does and you are stupid for saying otherwise.
Then why does the definition I quoted say otherwise?
That just shows you are the one that is stupid, and a liar.
On 4/28/2023 10:14 AM, Richard Damon wrote:
Then why does the definition I quoted say otherwise?
That just shows you are the one that is stupid, and a liar.
In this case you are proving to be stupid: (yet not a liar)
1. Traditional Definition of Lying
There is no universally accepted definition of lying to others. The dictionary definition of lying is “to make a false statement with the intention to deceive” (OED 1989) but there are numerous problems with
this definition. It is both too narrow, since it requires falsity, and
too broad, since it allows for lying about something other than what is
being stated, and lying to someone who is believed to be listening in
but who is not being addressed.
The most widely accepted definition of lying is the following: “A lie is
a statement made by one who does not believe it with the intention that someone else shall be led to believe it” (Isenberg 1973, 248) (cf. “[lying is] making a statement believed to be false, with the intention
of getting another to accept it as true” (Primoratz 1984, 54n2)). This definition does not specify the addressee, however. It may be restated
as follows:
(L1) To lie =df to make a believed-false statement to another person
with the intention that the other person believe that statement to be true.
L1 is the traditional definition of lying. According to L1, there are at least four necessary conditions for lying.
First, lying requires that a person make a statement (statement condition).
Second, lying requires that the person believe the statement to be
false; that is, lying requires that the statement be untruthful (untruthfulness condition).
Third, lying requires that the untruthful statement be made to another
person (addressee condition).
Fourth, lying requires that the person intend that that other person
believe the untruthful statement to be true (intention to deceive the addressee condition).
https://plato.stanford.edu/entries/lying-definition/#TraDefLyi
On 4/28/23 11:21 AM, olcott wrote:
On 4/28/2023 10:14 AM, Richard Damon wrote:
On 4/28/23 10:59 AM, olcott wrote:
On 4/28/2023 6:40 AM, Richard Damon wrote:
https://www.dictionary.com/browse/lie
3 an inaccurate or untrue statement; falsehood:
When I went to school, history books were full of lies, and I
won't teach lies to kids.
5 to express what is false; convey a false impression.
It does not ALWAYS require actual knowledge that the statement is
incorrect.
Yes it does and you are stupid for saying otherwise.
Then why do the definitions I quoted say otherwise?
That just shows you are the one that is stupid, and a liar.
In other words you honestly believe that an honest mistake is a lie.
THAT MAKES YOU STUPID !!! (yet not a liar)
So, you ADMIT that your ideas are a "Mistake"?
You ADMIT that your statements are untrue because your ideas, while
sincerely held by you, are admitted to be WRONG?
Note, these definitions point out that statements which are clearly
false can be considered as lies on their face value.
Note also, I tend to use the term "Pathological liar", which implies
this sort of error: the speaker, due to mental deficiencies, has lost the
ability to actually know what is true or false. This seems to describe you
to a T.
I also use the term "Ignorant Liar" which means you lie out of a lack of knowledge of the truth.
On 4/28/23 11:26 AM, olcott wrote:
On 4/28/2023 10:14 AM, Richard Damon wrote:
On 4/28/23 10:59 AM, olcott wrote:
On 4/28/2023 6:40 AM, Richard Damon wrote:
https://www.dictionary.com/browse/lie
3 an inaccurate or untrue statement; falsehood:
When I went to school, history books were full of lies, and I
won't teach lies to kids.
5 to express what is false; convey a false impression.
It does not ALWAYS require actual knowledge that the statement is
incorrect.
Yes it does and you are stupid for saying otherwise.
Then why do the definitions I quoted say otherwise?
That just shows you are the one that is stupid, and a liar.
In this case you are proving to be stupid: (yet not a liar)
1. Traditional Definition of Lying
There is no universally accepted definition of lying to others. The
dictionary definition of lying is “to make a false statement with the
intention to deceive” (OED 1989) but there are numerous problems with
this definition. It is both too narrow, since it requires falsity, and
too broad, since it allows for lying about something other than what
is being stated, and lying to someone who is believed to be listening
in but who is not being addressed.
The most widely accepted definition of lying is the following: “A lie
is a statement made by one who does not believe it with the intention
that someone else shall be led to believe it” (Isenberg 1973, 248)
(cf. “[lying is] making a statement believed to be false, with the
intention of getting another to accept it as true” (Primoratz 1984,
54n2)). This definition does not specify the addressee, however. It
may be restated as follows:
(L1) To lie =df to make a believed-false statement to another person
with the intention that the other person believe that statement to be
true.
L1 is the traditional definition of lying. According to L1, there are
at least four necessary conditions for lying.
First, lying requires that a person make a statement (statement
condition).
Second, lying requires that the person believe the statement to be
false; that is, lying requires that the statement be untruthful
(untruthfulness condition).
Third, lying requires that the untruthful statement be made to another
person (addressee condition).
Fourth, lying requires that the person intend that that other person
believe the untruthful statement to be true (intention to deceive the
addressee condition).
https://plato.stanford.edu/entries/lying-definition/#TraDefLyi
So, you are trying to use arguments to justify that you can say "false statements" and not be considered a liar.
The fact that you seem to have KNOWN that the generally accepted truth
differed from your ideas does not excuse you from claiming that you can
say them as FACT, and not be a liar.
The fact that your error has been pointed out an enormous number of
times makes your blatant disregard for the actual truth a suitable
stand-in for your own belief.
If you don't understand from all instruction you have been given that
you are wrong, you are just proved to be totally mentally incapable.
If you want to claim that you are not a liar by reason of insanity, make
that plea, but that just becomes an admission that you are a
pathological liar, a liar because of a mental illness.
On 4/28/2023 10:44 AM, Richard Damon wrote:
On 4/28/23 11:21 AM, olcott wrote:
On 4/28/2023 10:14 AM, Richard Damon wrote:
On 4/28/23 10:59 AM, olcott wrote:
On 4/28/2023 6:40 AM, Richard Damon wrote:
https://www.dictionary.com/browse/lie
3 an inaccurate or untrue statement; falsehood:
When I went to school, history books were full of lies, and I
won't teach lies to kids.
5 to express what is false; convey a false impression.
It does not ALWAYS require actual knowledge that the statement is
incorrect.
Yes it does and you are stupid for saying otherwise.
Then why do the definitions I quoted say otherwise?
That just shows you are the one that is stupid, and a liar.
In other words you honestly believe that an honest mistake is a lie.
THAT MAKES YOU STUPID !!! (yet not a liar)
So, you ADMIT that your ideas are a "Mistake"?
No, to the best of my knowledge I have correctly proved all of my
assertions are semantic tautologies thus necessarily true.
The fact that few besides me understand that they are semantic
tautologies is not actual rebuttal at all.
You ADMIT that your statements are untrue because your ideas, while
sincerely held by you, are admitted to be WRONG?
Note, these definitions point out that statements which are clearly
false can be considered as lies on their face value.
I can call you a liar on the basis that when you sleep at night you
probably lie down. This is not what is meant by liar.
Note also, I tend to use the term "Pathological liar", which implies
this sort of error: the speaker, due to mental deficiencies, has lost the
ability to actually know what is true or false. This seems to describe
you to a T.
I also use the term "Ignorant Liar" which means you lie out of a lack
of knowledge of the truth.
I am not a liar in any sense of the common accepted definition of liar
that requires that four conditions be met.
there are at least four necessary conditions for lying:
First, lying requires that a person make a statement (statement
condition).
Second, lying requires that the person believe the statement to be
false; that is, lying requires that the statement be untruthful (untruthfulness condition).
Third, lying requires that the untruthful statement be made to another
person (addressee condition).
Fourth, lying requires that the person intend that that other person
believe the untruthful statement to be true (intention to deceive the addressee condition).
https://plato.stanford.edu/entries/lying-definition/#TraDefLyi
That you continue to call me a "liar" while failing to disclose that you
are not referring to what everyone else means by the term meets the
legal definition of "actual malice"
https://www.mtsu.edu/first-amendment/article/889/actual-malice
On 4/28/23 12:05 PM, olcott wrote:
On 4/28/2023 10:44 AM, Richard Damon wrote:
On 4/28/23 11:21 AM, olcott wrote:
On 4/28/2023 10:14 AM, Richard Damon wrote:
On 4/28/23 10:59 AM, olcott wrote:
On 4/28/2023 6:40 AM, Richard Damon wrote:
https://www.dictionary.com/browse/lie
3 an inaccurate or untrue statement; falsehood:
When I went to school, history books were full of lies, and I
won't teach lies to kids.
5 to express what is false; convey a false impression.
It does not ALWAYS require actual knowledge that the statement is
incorrect.
Yes it does and you are stupid for saying otherwise.
Then why do the definitions I quoted say otherwise?
That just shows you are the one that is stupid, and a liar.
In other words you honestly believe that an honest mistake is a lie.
THAT MAKES YOU STUPID !!! (yet not a liar)
So, you ADMIT that your ideas are a "Mistake"?
No, to the best of my knowledge I have correctly proved all of my
assertions are semantic tautologies thus necessarily true.
The fact that few besides me understand that they are semantic
tautologies is not actual rebuttal at all.
No, but the fact that you can't rebut the claims against your
arguments, and really haven't tried, implies that you know that your
claims are baseless.
IF your counter to the fact that you have made clearly factually
incorrect statements is that "Honest Mistakes" are not lies, that just shows
what you consider your grounds to defend yourself.
You ADMIT that your statements are untrue because your ideas, while
sincerely held by you, are admitted to be WRONG?
Note, these definitions point out that statements which are clearly
false can be considered as lies on their face value.
I can call you a liar on the basis that when you sleep at night you
probably lie down. This is not what is meant by liar.
So, you admit you don't understand the definition of liar?
Note also, I tend to use the term "Pathological liar", which implies
this sort of error: the speaker, due to mental deficiencies, has lost
the ability to actually know what is true or false. This seems to
describe you to a T.
I also use the term "Ignorant Liar" which means you lie out of a lack
of knowledge of the truth.
I am not a liar in any sense of the common accepted definition of liar
that requires that four conditions be met.
But you are by MY definition that I posted: one who makes false or
misleading statements.
there are at least four necessary conditions for lying:
First, lying requires that a person make a statement (statement
condition).
Second, lying requires that the person believe the statement to be
false; that is, lying requires that the statement be untruthful
(untruthfulness condition).
Third, lying requires that the untruthful statement be made to another
person (addressee condition).
Fourth, lying requires that the person intend that that other person
believe the untruthful statement to be true (intention to deceive the
addressee condition).
https://plato.stanford.edu/entries/lying-definition/#TraDefLyi
That you continue to call me a "liar" while failing to disclose that you
are not referring to what everyone else means by the term meets the
legal definition of "actual malice"
https://www.mtsu.edu/first-amendment/article/889/actual-malice
So, you don't think that definitions 3 or 5 of the reference you made,
which did NOT require knowledge of the error by the person, apply?
On 4/28/23 11:50 AM, olcott wrote:
On 4/28/2023 10:44 AM, Richard Damon wrote:
On 4/28/23 11:26 AM, olcott wrote:
On 4/28/2023 10:14 AM, Richard Damon wrote:
On 4/28/23 10:59 AM, olcott wrote:
On 4/28/2023 6:40 AM, Richard Damon wrote:
https://www.dictionary.com/browse/lie
3 an inaccurate or untrue statement; falsehood:
When I went to school, history books were full of lies, and I
won't teach lies to kids.
5 to express what is false; convey a false impression.
It does not ALWAYS require actual knowledge that the statement is
incorrect.
Yes it does and you are stupid for saying otherwise.
Then why do the definitions I quoted say otherwise?
That just shows you are the one that is stupid, and a liar.
In this case you are proving to be stupid: (yet not a liar)
1. Traditional Definition of Lying
There is no universally accepted definition of lying to others. The
dictionary definition of lying is “to make a false statement with
the intention to deceive” (OED 1989) but there are numerous problems
with this definition. It is both too narrow, since it requires
falsity, and too broad, since it allows for lying about something
other than what is being stated, and lying to someone who is
believed to be listening in but who is not being addressed.
The most widely accepted definition of lying is the following: “A
lie is a statement made by one who does not believe it with the
intention that someone else shall be led to believe it” (Isenberg
1973, 248) (cf. “[lying is] making a statement believed to be false,
with the intention of getting another to accept it as true”
(Primoratz 1984, 54n2)). This definition does not specify the
addressee, however. It may be restated as follows:
(L1) To lie =df to make a believed-false statement to another person
with the intention that the other person believe that statement to
be true.
L1 is the traditional definition of lying. According to L1, there
are at least four necessary conditions for lying.
First, lying requires that a person make a statement (statement
condition).
Second, lying requires that the person believe the statement to be
false; that is, lying requires that the statement be untruthful
(untruthfulness condition).
Third, lying requires that the untruthful statement be made to
another person (addressee condition).
Fourth, lying requires that the person intend that that other person
believe the untruthful statement to be true (intention to deceive
the addressee condition).
https://plato.stanford.edu/entries/lying-definition/#TraDefLyi
So, you are trying to use arguments to justify that you can say
"false statements" and not be considered a liar.
The fact that you seem to have KNOWN that the generally accepted truth
differed from your ideas does not excuse you from claiming that you
can say them as FACT, and not be a liar.
When I say that an idea is a fact I mean that it is a semantic
tautology. That you don't understand things well enough to verify that
it is a semantic tautology does not even make my assertion false.
So, you admit that you don't know the actual meaning of a FACT.
The fact that your error has been pointed out an enormous number of
times makes your blatant disregard for the actual truth a suitable
stand-in for your own belief.
The fact that no one has understood my semantic tautologies only proves
that no one has understood my semantic tautologies. It does not even
prove that my assertion is incorrect.
No, the fact that you ACCEPT most existing logic is valid, but then try
to change the rules at the far end, without understanding that you are accepting things your logic likely rejects, shows that you don't
understand how logic actually works.
You present "semantic tautologies" based on FALSE definition and results
that you can not prove.
If you don't understand from all instruction you have been given that
you are wrong, you are just proved to be totally mentally incapable.
If you want to claim that you are not a liar by reason of insanity,
make that plea, but that just becomes an admission that you are a
pathological liar, a liar because of a mental illness.
That you continue to believe that lies do not require an intention to
deceive after the above has been pointed out makes you willfully
ignorant, yet still not a liar.
But, by the definition I use, since it has been made clear to you that
you are wrong, but you continue to spout words that have been proven incorrect make YOU a pathological liar.
Also, I am not "ignorant", since that means not having knowledge or
awareness of something, but I do understand what you are saying and
aware of your ideas, AND I POINT OUT YOUR ERRORS.
YOU are the ignorant
one, as you don't seem to understand enough to even comment about the
rebuttals to your claims.
THAT shows ignorance, and stupidity.
On 4/28/2023 11:41 AM, Richard Damon wrote:
On 4/28/23 11:50 AM, olcott wrote:
On 4/28/2023 10:44 AM, Richard Damon wrote:
On 4/28/23 11:26 AM, olcott wrote:
On 4/28/2023 10:14 AM, Richard Damon wrote:
On 4/28/23 10:59 AM, olcott wrote:
On 4/28/2023 6:40 AM, Richard Damon wrote:
https://www.dictionary.com/browse/lie
3 an inaccurate or untrue statement; falsehood:
When I went to school, history books were full of lies, and I
won't teach lies to kids.
5 to express what is false; convey a false impression.
It does not ALWAYS require actual knowledge that the statement
is incorrect.
Yes it does and you are stupid for saying otherwise.
Then why do the definitions I quoted say otherwise?
That just shows you are the one that is stupid, and a liar.
In this case you are proving to be stupid: (yet not a liar)
1. Traditional Definition of Lying
There is no universally accepted definition of lying to others. The
dictionary definition of lying is “to make a false statement with
the intention to deceive” (OED 1989) but there are numerous
problems with this definition. It is both too narrow, since it
requires falsity, and too broad, since it allows for lying about
something other than what is being stated, and lying to someone who
is believed to be listening in but who is not being addressed.
The most widely accepted definition of lying is the following: “A
lie is a statement made by one who does not believe it with the
intention that someone else shall be led to believe it” (Isenberg
1973, 248) (cf. “[lying is] making a statement believed to be
false, with the intention of getting another to accept it as true”
(Primoratz 1984, 54n2)). This definition does not specify the
addressee, however. It may be restated as follows:
(L1) To lie =df to make a believed-false statement to another
person with the intention that the other person believe that
statement to be true.
L1 is the traditional definition of lying. According to L1, there
are at least four necessary conditions for lying.
First, lying requires that a person make a statement (statement
condition).
Second, lying requires that the person believe the statement to be
false; that is, lying requires that the statement be untruthful
(untruthfulness condition).
Third, lying requires that the untruthful statement be made to
another person (addressee condition).
Fourth, lying requires that the person intend that that other
person believe the untruthful statement to be true (intention to
deceive the addressee condition).
https://plato.stanford.edu/entries/lying-definition/#TraDefLyi
So, you are trying to use arguments to justify that you can say
"false statements" and not be considered a liar.
The fact that you seem to have KNOWN that the generally accepted truth
differed from your ideas does not excuse you from claiming that you
can say them as FACT, and not be a liar.
When I say that an idea is a fact I mean that it is a semantic
tautology. That you don't understand things well enough to verify that
it is a semantic tautology does not even make my assertion false.
So, you admit that you don't know the actual meaning of a FACT.
I mean true in the absolute sense of the word true, such as:
2 + 3 = 5 is verified as necessarily true on the basis of its meaning.
Semantic tautologies are the only kind of facts that are necessarily
true in all possible worlds.
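As a minimal illustration of what "verified as necessarily true on the basis
of its meaning" can amount to in a formal setting (a sketch only; the use of
the Lean 4 proof assistant here is my own choice and is not mentioned anywhere
in this thread), the arithmetic claim can be checked by pure definitional
computation:

    -- Lean 4: 2 + 3 reduces to 5 by unfolding the definitions of the
    -- numerals and of +, so reflexivity (rfl) closes the goal.
    example : 2 + 3 = 5 := rfl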
The fact that your error has been pointed out an enormous number of
times makes your blatant disregard for the actual truth a suitable
stand-in for your own belief.
The fact that no one has understood my semantic tautologies only proves
that no one has understood my semantic tautologies. It does not even
prove that my assertion is incorrect.
No, the fact that you ACCEPT most existing logic is valid, but then
try to change the rules at the far end, without understanding that you
are accepting things your logic likely rejects, shows that you don't
understand how logic actually works.
That I do not have a complete grasp of every nuance of mathematical
logic does not show that I do not have a sufficient grasp of those
aspects that I refer to.
My next goal is to attain a complete understanding of all of the basic terminology of model theory. I had a key insight about model theory
sometime in the last month that indicates that I must master its basic terminology.
You present "semantic tautologies" based on FALSE definition and
results that you can not prove.
It may seem that way from the POV of not understanding what I am saying.
The entire body of analytical truth is a set of semantic tautologies.
That you are unfamiliar with the meaning of these terms is no actual
rebuttal at all.
If you don't understand from all instruction you have been given
that you are wrong, you are just proved to be totally mentally
incapable.
If you want to claim that you are not a liar by reason of insanity,
make that plea, but that just becomes an admission that you are a
pathological liar, a liar because of a mental illness.
That you continue to believe that lies do not require an intention to
deceive after the above has been pointed out makes you willfully
ignorant, yet still not a liar.
But, by the definition I use, since it has been made clear to you that
you are wrong, but you continue to spout words that have been proven
incorrect make YOU a pathological liar.
No it only proves that you continue to have no grasp of what a semantic tautology could possibly be. Any expression that is verified as
necessarily true entirely on the basis of its meaning is a semantic tautology.
Cats are animals is necessarily true even if no cats ever physically
existed.
Also, I am not "ignorant", since that means not having knowledge or
awareness of something, but I do understand what you are saying and
aware of your ideas, AND I POINT OUT YOUR ERRORS.
Until you fully understand what a semantic tautology is and why it is necessarily true you remain sufficiently ignorant.
YOU are the ignorant one, as you don't seem to understand enough to
even comment about the rebuttals to your claims.
THAT shows ignorance, and stupidity.
On 4/28/23 1:15 PM, olcott wrote:
On 4/28/2023 11:41 AM, Richard Damon wrote:
On 4/28/23 11:50 AM, olcott wrote:
On 4/28/2023 10:44 AM, Richard Damon wrote:
On 4/28/23 11:26 AM, olcott wrote:
On 4/28/2023 10:14 AM, Richard Damon wrote:
On 4/28/23 10:59 AM, olcott wrote:
On 4/28/2023 6:40 AM, Richard Damon wrote:
https://www.dictionary.com/browse/lie
3 an inaccurate or untrue statement; falsehood:
When I went to school, history books were full of lies, and
I won't teach lies to kids.
5 to express what is false; convey a false impression.
It does not ALWAYS require actual knowledge that the statement
is incorrect.
Yes it does and you are stupid for saying otherwise.
Then why do the definitions I quoted say otherwise?
That just shows you are the one that is stupid, and a liar.
In this case you are proving to be stupid: (yet not a liar)
1. Traditional Definition of Lying
There is no universally accepted definition of lying to others.
The dictionary definition of lying is “to make a false statement
with the intention to deceive” (OED 1989) but there are numerous
problems with this definition. It is both too narrow, since it
requires falsity, and too broad, since it allows for lying about
something other than what is being stated, and lying to someone
who is believed to be listening in but who is not being addressed.
The most widely accepted definition of lying is the following: “A
lie is a statement made by one who does not believe it with the
intention that someone else shall be led to believe it” (Isenberg
1973, 248) (cf. “[lying is] making a statement believed to be
false, with the intention of getting another to accept it as true”
(Primoratz 1984, 54n2)). This definition does not specify the
addressee, however. It may be restated as follows:
(L1) To lie =df to make a believed-false statement to another
person with the intention that the other person believe that
statement to be true.
L1 is the traditional definition of lying. According to L1, there
are at least four necessary conditions for lying.
First, lying requires that a person make a statement (statement
condition).
Second, lying requires that the person believe the statement to be
false; that is, lying requires that the statement be untruthful
(untruthfulness condition).
Third, lying requires that the untruthful statement be made to
another person (addressee condition).
Fourth, lying requires that the person intend that that other
person believe the untruthful statement to be true (intention to
deceive the addressee condition).
https://plato.stanford.edu/entries/lying-definition/#TraDefLyi
So, you are trying to use arguments to justify that you can say
"false statements" and not be considered a liar.
The fact that you seem to have KNOWN that the generally accepted
truth differed from your ideas does not excuse you from claiming
that you can say them as FACT, and not be a liar.
When I say that an idea is a fact I mean that it is a semantic
tautology. That you don't understand things well enough to verify that
it is a semantic tautology does not even make my assertion false.
So, you admit that you don't know the actual meaning of a FACT.
I mean true in the absolute sense of the word true, such as:
2 + 3 = 5 is verified as necessarily true on the basis of its meaning.
Semantic tautologies are the only kind of facts that are necessarily
true in all possible worlds.
The fact that your error has been pointed out an enormous number of
times makes your blatant disregard for the actual truth a suitable
stand-in for your own belief.
The fact that no one has understood my semantic tautologies only proves
that no one has understood my semantic tautologies. It does not even
prove that my assertion is incorrect.
No, the fact that you ACCEPT most existing logic is valid, but then
try to change the rules at the far end, without understanding that
you are accepting things your logic likely rejects, shows that you
don't understand how logic actually works.
That I do not have a complete grasp of every nuance of mathematical
logic does not show that I do not have a sufficient grasp of those
aspects that I refer to.
My next goal is to attain a complete understanding of all of the basic
terminology of model theory. I had a key insight about model theory
sometime in the last month that indicates that I must master its basic
terminology.
You present "semantic tautologies" based on FALSE definition and
results that you can not prove.
It may seem that way from the POV of not understanding what I am saying.
The entire body of analytical truth is a set of semantic tautologies.
That you are unfamiliar with the meaning of these terms is no actual
rebuttal at all.
If you don't understand from all instruction you have been given
that you are wrong, you are just proved to be totally mentally
incapable.
If you want to claim that you are not a liar by reason of insanity,
make that plea, but that just becomes an admission that you are a
pathological liar, a liar because of a mental illness.
That you continue to believe that lies do not require an intention to
deceive after the above has been pointed out makes you willfully
ignorant, yet still not a liar.
But, by the definition I use, since it has been made clear to you that
you are wrong, but you continue to spout words that have been proven
incorrect make YOU a pathological liar.
No it only proves that you continue to have no grasp of what a semantic
tautology could possibly be. Any expression that is verified as
necessarily true entirely on the basis of its meaning is a semantic
tautology.
Except that isn't the meaning of a "Tautology".
The COMMON definition is "the saying of the same thing twice in
different words, generally considered to be a fault of style (e.g., they arrived one after the other in succession)".
The Meaning in the field of Logic is "In mathematical logic, a tautology (from Greek: ταυτολογία) is a formula or assertion that is true in every
possible interpretation."
So, neither of them point to the meaning of the words.
If you are just making up words, you are admitting you have lost from
the start.
The problem is that word meanings, especially for "natural" language, are
too ill-defined to be used to form the basis of formal logic. You need to
work with FORMAL definitions, which become part of the Truth Makers of
the system. At that point, either your semantic tautologies are real
tautologies because they are always true in every model, or they are not
tautologies.
Cats are animals is necessarily true even if no cats ever physically
existed.
Nope. If cats don't exist in the system, the statement is not
necessarily true. For instance, the statement is NOT true in the system
of the Natural Numbers.
Also, I am not "ignorant", since that means not having knowledge or
awareness of something, but I do understand what you are saying and
aware of your ideas, AND I POINT OUT YOUR ERRORS.
Until you fully understand what a semantic tautology is and why it is
necessarily true you remain sufficiently ignorant.
As far as you have explained, it is an illogical concept based on
undefined grounds. You refuse to state whether your "semantic" is "by
the meaning of the words" at which point you need to understand that either
you are using the "natural" meaning and break the rules of formal logic,
or you mean the formal meaning within the system, at which point what is
the difference between your "semantic" connections as you define them
and the classical meaning of semantic being related to showable by a
chain of connections to the truth makers of the system.
Note, if you take that latter definition, then either you need to cripple
the logic you allow or the implication operator and the principle of
explosion both exist in your system. (If you don't define the implication
operator as a base operation, but do include "not", "and" and "or" as
operations, it can just be defined in the system).
YOU are the ignorant one, as you don't seem to understand enough to
even comment about the rebuttals to your claims.
THAT shows ignorance, and stupidity.
On 4/28/2023 4:21 PM, Richard Damon wrote:
On 4/28/23 1:15 PM, olcott wrote:
On 4/28/2023 11:41 AM, Richard Damon wrote:
On 4/28/23 11:50 AM, olcott wrote:
On 4/28/2023 10:44 AM, Richard Damon wrote:
On 4/28/23 11:26 AM, olcott wrote:
On 4/28/2023 10:14 AM, Richard Damon wrote:
On 4/28/23 10:59 AM, olcott wrote:
On 4/28/2023 6:40 AM, Richard Damon wrote:
https://www.dictionary.com/browse/lie
3 an inaccurate or untrue statement; falsehood:
When I went to school, history books were full of lies, and
I won't teach lies to kids.
5 to express what is false; convey a false impression.
It does not ALWAYS require actual knowledge that the statement
is incorrect.
Yes it does and you are stupid for saying otherwise.
Then why do the definitions I quoted say otherwise?
That just shows you are the one that is stupid, and a liar.
In this case you are proving to be stupid: (yet not a liar)
1. Traditional Definition of Lying
There is no universally accepted definition of lying to others.
The dictionary definition of lying is “to make a false statement
with the intention to deceive” (OED 1989) but there are numerous
problems with this definition. It is both too narrow, since it
requires falsity, and too broad, since it allows for lying about
something other than what is being stated, and lying to someone
who is believed to be listening in but who is not being addressed.
The most widely accepted definition of lying is the following: “A
lie is a statement made by one who does not believe it with the
intention that someone else shall be led to believe it” (Isenberg
1973, 248) (cf. “[lying is] making a statement believed to be
false, with the intention of getting another to accept it as
true” (Primoratz 1984, 54n2)). This definition does not specify
the addressee, however. It may be restated as follows:
(L1) To lie =df to make a believed-false statement to another
person with the intention that the other person believe that
statement to be true.
L1 is the traditional definition of lying. According to L1, there
are at least four necessary conditions for lying.
First, lying requires that a person make a statement (statement
condition).
Second, lying requires that the person believe the statement to
be false; that is, lying requires that the statement be
untruthful (untruthfulness condition).
Third, lying requires that the untruthful statement be made to
another person (addressee condition).
Fourth, lying requires that the person intend that that other
person believe the untruthful statement to be true (intention to
deceive the addressee condition).
https://plato.stanford.edu/entries/lying-definition/#TraDefLyi
So, you are trying to use arguments to justify that you can say
"false statements" and not be considered a liar.
The fact that you seem to have KNOWN that the generally accepted
truth differed from your ideas does not excuse you from claiming
that you can say them as FACT, and not be a liar.
When I say that an idea is a fact I mean that it is a semantic
tautology. That you don't understand things well enough to verify that
it is a semantic tautology does not even make my assertion false.
So, you admit that you don't know the actual meaning of a FACT.
I mean true in the absolute sense of the word true, such as:
2 + 3 = 5 is verified as necessarily true on the basis of its meaning.
Semantic tautologies are the only kind of facts that are necessarily
true in all possible worlds.
The fact that your error has been pointed out an enormous number
of times makes your blatant disregard for the actual truth a
suitable stand-in for your own belief.
The fact that no one has understood my semantic tautologies only proves
that no one has understood my semantic tautologies. It does not even
prove that my assertion is incorrect.
No, the fact that you ACCEPT most existing logic is valid, but then
try to change the rules at the far end, without understanding that
you are accepting things your logic likely rejects, shows that you
don't understand how logic actually works.
That I do not have a complete grasp of every nuance of mathematical
logic does not show that I do not have a sufficient grasp of those
aspects that I refer to.
My next goal is to attain a complete understanding of all of the basic
terminology of model theory. I had a key insight about model theory
sometime in the last month that indicates that I must master its basic
terminology.
You present "semantic tautologies" based on FALSE definition and
results that you can not prove.
It may seem that way from the POV of not understanding what I am saying.
The entire body of analytical truth is a set of semantic tautologies.
That you are unfamiliar with the meaning of these terms is no actual
rebuttal at all.
If you don't understand from all instruction you have been given
that you are wrong, you are just proved to be totally mentally
incapable.
If you want to claim that you are not a liar by reason of
insanity, make that plea, but that just becomes an admission that
you are a pathological liar, a liar because of a mental illness.
That you continue to believe that lies do not require an intention to
deceive after the above has been pointed out makes you willfully
ignorant, yet still not a liar.
But, by the definition I use, since it has been made clear to you
that you are wrong, but you continue to spout words that have been
proven incorrect make YOU a pathological liar.
No it only proves that you continue to have no grasp of what a semantic
tautology could possibly be. Any expression that is verified as
necessarily true entirely on the basis of its meaning is a semantic
tautology.
Except that isn't the meaning of a "Tautology".
In logic, a formula is satisfiable if it is true under at least one interpretation, and thus a tautology is a formula whose negation is unsatisfiable. In other words, it cannot be false. It cannot be untrue.
https://en.wikipedia.org/wiki/Tautology_(logic)#:~:text=In%20logic%2C%20a%20formula%20is,are%20known%20formally%20as%20contradictions.
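To make the quoted definition concrete (a minimal sketch under my own
assumptions: propositional formulas over finitely many variables, checked by
brute force in Python; none of this comes from the thread itself), a formula
is a tautology exactly when no truth assignment satisfies its negation:

    from itertools import product

    def is_tautology(formula, num_vars):
        # A formula is a tautology iff its negation is unsatisfiable,
        # i.e. no truth assignment makes the formula false.
        return not any(not formula(*vals)
                       for vals in product([False, True], repeat=num_vars))

    print(is_tautology(lambda p: p or (not p), 1))  # True: excluded middle
    print(is_tautology(lambda p: p, 1))             # False: negation satisfiable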
What I actually mean is analytic truth, yet math people will have no
clue about this because all of math is syntactic rather than semantic.
https://plato.stanford.edu/entries/analytic-synthetic/
Because of this I coined my own term [semantic tautology] as the most self-descriptive term that I could find as a place-holder for my notion.
The COMMON definition is "the saying of the same thing twice in
different words, generally considered to be a fault of style (e.g.,
they arrived one after the other in succession)".
The Meaning in the field of Logic is "In mathematical logic, a
tautology (from Greek: ταυτολογία) is a formula or assertion that is
true in every possible interpretation."
So, neither of them point to the meaning of the words.
Did I say that I am limiting the application of [semantic tautology] to words?
When dealing with logic a [semantic tautology] may simply be a
tautology (logic). When dealing with formalized natural language it may
be clearer to refer to it as a [semantic tautology] in that the
semantic meanings of natural language expressions are formalized as
axioms.
If you are just making up words, you are admitting you have lost from
the start.
The problem is that word meanings, especially for "natural" language,
are too ill-defined to be used to form the basis of formal logic. You
need to
Not when natural language is formalized.
Semantic Grammar and the Power of Computational Language
https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/
work with FORMAL definitions, which become part of the Truth Makers of
the system. At that point, either your semantic tautologies are real
tautologies because they are always true in every model, or they are
not tautologies.
Cats are animals in the currently existing model of the world; cats may
not exist in other possible worlds. [semantic tautology] applies within a
model of the world.
Cats are animals is necessarily true even if no cats ever physically
existed.
Nope. If cats don't exist in the system, the statement is not
necessarily true. For instance, the statement is NOT true in the
system of the Natural Numbers.
Cats are animals at the semantic level in the current model of the
world. The model of the world has GUID placeholders for the notion of
{cats} and {animals} and for every other unique sense meaning.
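A rough sketch of what GUID placeholders for word senses might look like
(purely illustrative; the dictionary layout, the names, and the is-a relation
below are my own assumptions, not the CYC representation or anything defined
in this thread):

    import uuid

    # One GUID per unique word sense, so "cat (felid)" and "Cat (tractor)"
    # get distinct identifiers even though they share a spelling.
    CAT_FELID = uuid.uuid4()
    CAT_TRACTOR = uuid.uuid4()
    ANIMAL = uuid.uuid4()

    labels = {CAT_FELID: "cat (felid)", CAT_TRACTOR: "Cat (tractor)",
              ANIMAL: "animal"}

    # "Cats are animals" becomes an is-a edge between sense identifiers,
    # relative to whichever model of the world defines those identifiers.
    isa = {(CAT_FELID, ANIMAL)}

    print((CAT_FELID, ANIMAL) in isa)    # True in this model
    print((CAT_TRACTOR, ANIMAL) in isa)  # False in this model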
Also, I am not "ignorant", since that means not having knowledge or
awareness of something, but I do understand what you are saying and
aware of your ideas, AND I POINT OUT YOUR ERRORS.
Until you fully understand what a semantic tautology is and why it is
necessarily true you remain sufficiently ignorant.
As far as you have explained, it is an illogical concept based on
undefined grounds. You refuse to state whether your "semantic" is "by
the meaning of the words" at which point you need to understand that either
When I refer to {semantic} and don't restrict this to the meaning of
words then it applies to every formal language expression, natural
language expression and formalized natural language expression.
That you assume otherwise is your mistake.
you are using the "natural" meaning and break the rules of formal
logic, or you mean the formal meaning within the system, at which
point what is the difference between your "semantic" connections as
you define them and the classical meaning of semantic being related to
showable by a chain of connections to the truth makers of the system.
We don't need to formalize the notions of {cats} and {animals} to know
that cats <are> animals according to the meaning of those terms.
Note, if you take that latter definition, then either you need to
cripple the logic you allow or the implication operator and the
principle of explosion both exist in your system. (If you don't define
the implication operator as a base operation,
I have already said quite a few times that I am probably replacing the implication operator with the Semantic Necessity operator: ⊨□
That you can't seem to remember key points that I make and repeat many
times is very annoying.
but do include "not", "and" and "or" as operations, it can just be
defined in the system).
YOU are the ignorant one, as you don't seem to understand enough to
even comment about the rebuttals to your claims.
THAT shows ignorance, and stupidity.
On 4/28/23 6:17 PM, olcott wrote:
On 4/28/2023 4:21 PM, Richard Damon wrote:
On 4/28/23 1:15 PM, olcott wrote:
On 4/28/2023 11:41 AM, Richard Damon wrote:
On 4/28/23 11:50 AM, olcott wrote:
On 4/28/2023 10:44 AM, Richard Damon wrote:
On 4/28/23 11:26 AM, olcott wrote:
On 4/28/2023 10:14 AM, Richard Damon wrote:
On 4/28/23 10:59 AM, olcott wrote:
On 4/28/2023 6:40 AM, Richard Damon wrote:
https://www.dictionary.com/browse/lie
3 an inaccurate or untrue statement; falsehood:
When I went to school, history books were full of lies,
and I won't teach lies to kids.
5 to express what is false; convey a false impression.
It does not ALWAYS require actual knowledge that the
statement is incorrect.
Yes it does and you are stupid for saying otherwise.
Then why do the definitions I quoted say otherwise?
That just shows you are the one that is stupid, and a liar.
In this case you are proving to be stupid: (yet not a liar)
1. Traditional Definition of Lying
There is no universally accepted definition of lying to others.
The dictionary definition of lying is “to make a false statement
with the intention to deceive” (OED 1989) but there are numerous
problems with this definition. It is both too narrow, since it
requires falsity, and too broad, since it allows for lying about
something other than what is being stated, and lying to someone
who is believed to be listening in but who is not being addressed.
The most widely accepted definition of lying is the following:
“A lie is a statement made by one who does not believe it with
the intention that someone else shall be led to believe it”
(Isenberg 1973, 248) (cf. “[lying is] making a statement
believed to be false, with the intention of getting another to
accept it as true” (Primoratz 1984, 54n2)). This definition does
not specify the addressee, however. It may be restated as follows:
(L1) To lie =df to make a believed-false statement to another
person with the intention that the other person believe that
statement to be true.
L1 is the traditional definition of lying. According to L1,
there are at least four necessary conditions for lying.
First, lying requires that a person make a statement (statement
condition).
Second, lying requires that the person believe the statement to
be false; that is, lying requires that the statement be
untruthful (untruthfulness condition).
Third, lying requires that the untruthful statement be made to
another person (addressee condition).
Fourth, lying requires that the person intend that that other
person believe the untruthful statement to be true (intention to
deceive the addressee condition).
https://plato.stanford.edu/entries/lying-definition/#TraDefLyi
So, you are trying to use arguments to justify that you can say
"false statements" and not be considered a liar.
The fact that you seem to have KNOWN that the generally accepted
truth differed from your ideas does not excuse you from claiming
that you can say them as FACT, and not be a liar.
When I say that an idea is a fact I mean that it is a semantic
tautology. That you don't understand things well enough to verify
that
it is a semantic tautology does not even make my assertion false.
So, you admit that you don't know the actual meaning of a FACT.
I mean true in the absolute sense of the word true, such as:
2 + 3 = 5 is verified as necessarily true on the basis of its meaning.
Semantic tautologies are the only kind of facts that are necessarily
true in all possible worlds.
The fact that your error has been pointed out an enormous number
of times makes your blatant disregard for the actual truth a
suitable stand-in for your own belief.
The fact that no one has understood my semantic tautologies only proves
that no one has understood my semantic tautologies. It does not even
prove that my assertion is incorrect.
No, the fact that you ACCEPT most existing logic is valid, but then
try to change the rules at the far end, without understanding that
you are accepting things your logic likely rejects, shows that you
don't understand how logic actually works.
That I do not have a complete grasp of every nuance of mathematical
logic does not show that I do not have a sufficient grasp of those
aspects that I refer to.
My next goal is to attain a complete understanding of all of the basic
terminology of model theory. I had a key insight about model theory
sometime in the last month that indicates that I must master its basic
terminology.
You present "semantic tautologies" based on FALSE definition and
results that you can not prove.
It may seem that way from the POV of not understanding what I am
saying.
The entire body of analytical truth is a set of semantic tautologies.
That you are unfamiliar with the meaning of these terms is no actual
rebuttal at all.
If you don't understand from all instruction you have been given
that you are wrong, you are just proved to be totally mentally
incapable.
If you want to claim that you are not a liar by reason of
insanity, make that plea, but that just becomes an admission that
you are a pathological liar, a liar because of a mental illness.
That you continue to believe that lies do not require an intention to
deceive after the above has been pointed out makes you willfully
ignorant, yet still not a liar.
But, by the definition I use, since it has been made clear to you
that you are wrong, but you continue to spout words that have been
proven incorrect make YOU a pathological liar.
No it only proves that you continue to have no grasp of what a semantic
tautology could possibly be. Any expression that is verified as
necessarily true entirely on the basis of its meaning is a semantic
tautology.
Except that isn't the meaning of a "Tautology".
In logic, a formula is satisfiable if it is true under at least one
interpretation, and thus a tautology is a formula whose negation is
unsatisfiable. In other words, it cannot be false. It cannot be untrue.
Right, but that means using the rules of the field, so only definitions
of that field.
Thus, your "Meaning of the Words" needs to quote ONLY actual definitions
that have been accepted in the field.
https://en.wikipedia.org/wiki/Tautology_(logic)#:~:text=In%20logic%2C%20a%20formula%20is,are%20known%20formally%20as%20contradictions.
What I actually mean is analytic truth, yet math people will have no
clue about this because all of math is syntactic rather than semantic.
https://plato.stanford.edu/entries/analytic-synthetic/
I thought you previously were claiming that all of mathematics had to be analytic!
And why do you call out an article about analytic-synthetic when you are making a distinction between semantic and syntactic? That seems to be a non sequitur.
And math is NOT just syntactic, as syntax can't express many of the properties used in math.
Because of this I coined my own term [semantic tautology] as the most
self-descriptive term that I could find as a place-holder for my notion.
Right, so you don't understand how math works, so you make up terms that you can't actually define to fix it.
The COMMON definition is "the saying of the same thing twice in
different words, generally considered to be a fault of style (e.g.,
they arrived one after the other in succession)".
The Meaning in the field of Logic is "In mathematical logic, a
tautology (from Greek: ταυτολογία) is a formula or assertion that is
true in every possible interpretation."
So, neither of them point to the meaning of the words.
Did I say that I am limiting the application of [semantic tautology] to
words?
You haven't given any other definition, so yes, by default you have.
You can't use the classic semantic of logic, since you disagree with how
that works, so you only have words. (Classic logic semantics lets you
show the principle of explosion works, so you can't be using that).
When dealing with logic a [semantic tautology] may simply be a
tautology (logic). When dealing with formalized natural language it may
be clearer to refer to it as a [semantic tautology] in that the
semantic meanings of natural language expressions are formalized as
axioms.
In other words, you don't know what you are talking about and using word salad.
If you are just making up words, you are admitting you have lost from
the start.
The problem is that word meanings, especially for "natural" language,
are too ill-defined to be used to form the basis of formal logic. You
need to
Not when natural language is formalized.
Semantic Grammar and the Power of Computational Language
But then you need to use that formalized version, and be in a system that
uses it.
https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/
So, you are admitting you don't know how formal logic works.
Note, ChatGPT is proven to not understand how to get actually correct
answers (or at least doesn't always apply those rules).
work with FORMAL definitions, which become part of the Truth Makers
of the system. At that point, either your semantic tautologies are
real tautologies because they are always true in every model, or they
are not tautologies.
Cats are animals in the currently existing model of the world; cats may
not exist in other possible worlds. [semantic tautology] applies within
a model of the world.
WHICH model of the world?
(Note, you didn't use any UUID's, so you can't argue with them)
Cats are also a type of tractor.
It depends on WHICH model of (what part of) the world you are working.
Also, it depends on actually being in a model of "the world" and not something else.
You are just showing how little you understand about the basis of formal logic.
Cats are animals is necessarily true even if no cats ever physically
existed.
Nope. If cats don't exist in the system, the statement is not
necessarily true. For instance, the statement is NOT true in the
system of the Natural Numbers.
Cats are animals at the semantic level in the current model of the
world. The model of the world has GUID placeholders for the notion of
{cats} and {animals} and for every other unique sense meaning.
No, you didn't use them, and the GUIDs only apply to the system that
actually defines them.
So in *A* model of the world, with the addition of the GUIDs on the
terms, you can make that claim.
There is not a unique "The" model of the world.
Also, I am not "ignorant", since that means not having knowledge or
awareness of something, but I do understand what you are saying and
aware of your ideas, AND I POINT OUT YOUR ERRORS.
Until you fully understand what a semantic tautology is and why it is
necessarily true you remain sufficiently ignorant.
As far as you have explained, it is an illogical concept based on
undefined grounds. You refuse to state whether your "semantic" is "by
the meaning of the words" at which point you need to understand that either
When I refer to {semantic} and don't restrict this to the meaning of
words then it applies to every formal language expression, natural
language expression and formalized natural language expression.
So, you don't understand what you are talking about.
SO, you admit that your system falls to the principle of explosion, as
the classic definition of semantic in classic logic is enough to allow it.
That you assume otherwise is your mistake.
In other words, you don't know how to say things precisely,
That is why I (and the CYC project) use GUIDs.
you are using the "natural" meaning and break the rules of formal
logic, or you mean the formal meaning within the system, at which
point what is the difference between your "semantic" connections as
you define them and the classical meaning of semantic being related
to showable by a chain of connections to the truth makers of the system.
We don't need to formalize the notions of {cats} and {animals} to know
that cats <are> animals according to the meaning of those terms.
Unless they are tractors, or something else using the word.
Note, if you take that latter definition, then either you need to
cripple the logic you allow or the implication operator and the
principle of explosion both exist in your system. (If you don't
define the implication operator as a base operation,
I have already said quite a few times that I am probably replacing the
implication operator with the Semantic Necessity operator: ⊨□
But are you removing the AND and OR and NOT operators?
If not, anything
done by implication can be done with a combination of those.
I don't think you actually understand how the operator works.
Also, can you actually DEFINE (not just show an example of) what this
operator defines?
That you can't seem to remember key points that I make and repeat many
times is very annoying.
The fact that you never actually define things, and ignore my comments
make that your fault.
I think the problem is you don't know how to do any of the things I ask about, so when I keep asking you to do them, you get annoyed because I
keep showing how stupid you are.
On 4/28/2023 10:05 PM, Richard Damon wrote:
On 4/28/23 6:17 PM, olcott wrote:
On 4/28/2023 4:21 PM, Richard Damon wrote:
On 4/28/23 1:15 PM, olcott wrote:
On 4/28/2023 11:41 AM, Richard Damon wrote:
On 4/28/23 11:50 AM, olcott wrote:
On 4/28/2023 10:44 AM, Richard Damon wrote:
On 4/28/23 11:26 AM, olcott wrote:
On 4/28/2023 10:14 AM, Richard Damon wrote:
On 4/28/23 10:59 AM, olcott wrote:
On 4/28/2023 6:40 AM, Richard Damon wrote:
https://www.dictionary.com/browse/lie
3 an inaccurate or untrue statement; falsehood:
When I went to school, history books were full of lies,
and I won't teach lies to kids.
5 to express what is false; convey a false impression.
It does not ALWAYS require actual knowledge that the
statement is incorrect.
Yes it does and you are stupid for saying otherwise.
Then why do the definitions I quoted say otherwise?
That just shows you are the one that is stupid, and a liar.
In this case you are proving to be stupid: (yet not a liar)
1. Traditional Definition of Lying
There is no universally accepted definition of lying to others.
The dictionary definition of lying is “to make a false
statement with the intention to deceive” (OED 1989) but there
are numerous problems with this definition. It is both too
narrow, since it requires falsity, and too broad, since it
allows for lying about something other than what is being
stated, and lying to someone who is believed to be listening in
but who is not being addressed.
The most widely accepted definition of lying is the following:
“A lie is a statement made by one who does not believe it with
the intention that someone else shall be led to believe it”
(Isenberg 1973, 248) (cf. “[lying is] making a statement
believed to be false, with the intention of getting another to
accept it as true” (Primoratz 1984, 54n2)). This definition
does not specify the addressee, however. It may be restated as
follows:
(L1) To lie =df to make a believed-false statement to another
person with the intention that the other person believe that
statement to be true.
L1 is the traditional definition of lying. According to L1,
there are at least four necessary conditions for lying.
First, lying requires that a person make a statement (statement
condition).
Second, lying requires that the person believe the statement to
be false; that is, lying requires that the statement be
untruthful (untruthfulness condition).
Third, lying requires that the untruthful statement be made to
another person (addressee condition).
Fourth, lying requires that the person intend that that other
person believe the untruthful statement to be true (intention
to deceive the addressee condition).
https://plato.stanford.edu/entries/lying-definition/#TraDefLyi
So, you are trying to use arguments to justify that you can say
"false statements" and not be considered a liar.
The fact that you seem to have KNOWN that the generally accepted
truth differed from your ideas does not excuse you from claiming
that you can say them as FACT, and not be a liar.
When I say that an idea is a fact I mean that it is a semantic
tautology. That you don't understand things well enough to verify that
it is a semantic tautology does not even make my assertion false.
So, you admit that you don't know the actual meaning of a FACT.
I mean true in the absolute sense of the word true, such as:
2 + 3 = 5 is verified as necessarily true on the basis of its meaning.
Semantic tautologies are the only kind of facts that are necessarily
true in all possible worlds.
The fact that your error has been pointed out an enormous number
of times makes your blatant disregard for the actual truth a
suitable stand-in for your own belief.
The fact that no one has understood my semantic tautologies only proves
that no one has understood my semantic tautologies. It does not even
prove that my assertion is incorrect.
No, the fact that you ACCEPT most existing logic as valid, but
then try to change the rules at the far end, without understanding
that you are accepting things your logic likely rejects, shows
that you don't understand how logic actually works.
That I do not have a complete grasp of every nuance of mathematical
logic does not show that I do not have a sufficient grasp of those
aspects that I refer to.
My next goal is to attain a complete understanding of all of the basic terminology of model theory. I had a key insight about model theory
sometime in the last month that indicates that I must master its basic terminology.
You present "semantic tautologies" based on FALSE definition and
results that you can not prove.
It may seem that way from the POV of not understanding what I am
saying.
The entire body of analytical truth is a set of semantic tautologies. That you are unfamiliar with the meaning of these terms is no actual rebuttal at all.
If you don't understand from all the instruction you have been given that you are wrong, you are just proved to be totally mentally incapable.
If you want to claim that you are not a liar by reason of
insanity, make that plea, but that just becomes an admission
that you are a pathological liar, a liar because of a mental
illness.
That you continue to believe that lies do not require an intention to
deceive after the above has been pointed out makes you willfully
ignorant, yet still not a liar.
But, by the definition I use, since it has been made clear to you
that you are wrong, your continuing to spout words that have been
proven incorrect makes YOU a pathological liar.
No it only proves that you continue to have no grasp of what a
semantic
tautology could possibly be. Any expression that is verified as
necessarily true entirely on the basis of its meaning is a semantic
tautology.
Except that isn't the meaning of a "Tautology".
In logic, a formula is satisfiable if it is true under at least one
interpretation, and thus a tautology is a formula whose negation is
unsatisfiable. In other words, it cannot be false. It cannot be untrue.
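(Illustrative aside, not part of the original exchange: the logic-textbook sense of "tautology" quoted above can be checked by brute force for propositional formulas. The function name and encoding below are invented for this sketch, written in Python.)

from itertools import product

def is_tautology(formula, variables):
    # True iff no assignment of truth values makes the formula false,
    # i.e. its negation is unsatisfiable
    return all(formula(dict(zip(variables, values)))
               for values in product([False, True], repeat=len(variables)))

print(is_tautology(lambda v: v["p"] or not v["p"], ["p"]))   # True
print(is_tautology(lambda v: v["p"] and not v["p"], ["p"]))  # False (a contradiction)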
Right, but that means using the rules of the field, so only definitions
of that field.
I could augment this field yet this might not be required for
mathematical expressions. It might be the case that ordinary model
theory will work just fine.
Non-standard models of arithmetic seem a little too strange.
Thus, your "Meaning of the Words" needs to quote ONLY actual
definitions that have been accepted in the field.
There have not been any accepted definitions of formalized natural
language in the field of mathematics. The closest thing in mathematics
is the categorical propositions. In the field of formalized natural
language different approaches are used.
https://en.wikipedia.org/wiki/Tautology_(logic)#:~:text=In%20logic%2C%20a%20formula%20is,are%20known%20formally%20as%20contradictions.
What I actually mean is analytic truth, yet math people will have no
clue about this because all of math is syntactic rather than semantic.
https://plato.stanford.edu/entries/analytic-synthetic/
I thought you previously were claiming that all of mathematics had to
be analytic!
This is probably beyond your knowledge of philosophy.
The key philosopher in the field, Quine, seems to be a
blithering idiot who can't even understand that bachelors
are unmarried. I am referring to the logical positivist
view of the analytic / synthetic distinction.
*Logical positivist definitions*
analytic proposition: a proposition whose truth depends solely on the
meaning of its terms
analytic proposition: a proposition that is true (or false) by definition
analytic proposition: a proposition that is made true (or false) solely
by the conventions of language
https://en.wikipedia.org/wiki/Analytic%E2%80%93synthetic_distinction
And why do you call out an article about analytic-synthetic when you
are making a distinction between semantic and syntactic? That seems to
be a non sequitur.
This again is your lack of knowledge of philosophy: analytic <is>
semantic.
And math is NOT just syntactic, as syntax can't express many of the
properties used in math.
I am in the process of learning much more about model theory, it seems
to have some weird quirks.
Because of this I coined my own term [semantic tautology] as the most
self-descriptive term that I could find as a place-holder for my notion.
Right, so you don't understand how math works, so you make up terms that
you can't actually define to fix it.
The analytic/synthetic distinction is from philosophy; likewise the formalization of natural language is not within mathematics.
The COMMON definition is "the saying of the same thing twice in
different words, generally considered to be a fault of style (e.g.,
they arrived one after the other in succession)".
The Meaning in the field of Logic is "In mathematical logic, a
tautology (from Greek: ταυτολογία) is a formula or assertion that is
true in every possible interpretation."
So, neither of them point to the meaning of the words.
Did I say that I am limiting the application [semantic tautology] to
words?
You haven't given any other definition, so yes, by default you have.
That may seem that way to someone not very familiar with the term.
You can't use the classic semantic of logic, since you disagree with
how that works, so you only have words. (Classic logic semantics lets
you show the principle of explosion works, so you can't be using that).
I am starting with the syllogism as my logical basis, it makes sure to
anchor the meaning of its terms in defined sets. This may end up being
very much like model theory.
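(Illustrative aside: one way to read "anchoring the meaning of its terms in defined sets" is the standard set-theoretic reading of a categorical syllogism. The sets and names below are invented placeholders, not anyone's actual system; Python sketch.)

cats = {"tom", "felix"}
animals = cats | {"rex", "tweety"}

# Major premise: All cats are animals  ->  cats is a subset of animals
# Minor premise: Tom is a cat          ->  "tom" is a member of cats
# Conclusion:    Tom is an animal      ->  "tom" is a member of animals
assert cats <= animals
assert "tom" in cats
assert "tom" in animals  # follows from the two premises by set inclusion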
When dealing with logic a [semantic tautology] may simply be a
tautology (logic). When dealing with formalized natural language it may
be clearer to refer to it as a [semantic tautology], in that the
semantic meanings of natural language expressions are formalized as
axioms.
In other words, you don't know what you are talking about and using
word salad.
I don't know enough about what I am talking about when referring to
model theory. My knowledge of formalized semantics comes from Rudolf
Carnap's (1952) meaning postulates. These same ideas can be applied to
math.
If you are just making up words, you are admitting you have lost
from the start.
The problem is that word meanings, especially for "natural" language,
are too ill-defined to be used to form the basis of formal logic. You
need to
Not when natural language is formalized.
Semantic Grammar and the Power of Computational Language
But then you need to use that formalize version, and be in a system
that uses it.
Not at all, no human can do this. Stephen Wolfram is referring to what
large language models are doing. These models computed literally one
billion years' worth of human research in a short amount of time. I am referring to the 60 Minutes story.
https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/
So, you are admitting you don't know how formal logic works.
I am not saying anything like that. It is more along the lines that you
do not know enough about formalized natural language.
Note, ChatGPT is proven to not understand how to get actually correct
answer (or at least doesn't always apply those rules).
It does deduction stochastically.
work with FORMAL definitions, which become part of the Truth Makers
of the system. At that point, either your semantic tautologies are
real tautologies because they are always true in every model, or they
are not tautologies.
Cats are animals in the currently existing model of the world; cats may
not exist in other possible worlds. [semantic tautology] applies within
a model of the world.
WHICH model of the world?
the currently existing model of the world
the currently existing model of the world
the currently existing model of the world
(Note, you didn't use any UUID's, so you can't argue with them)
I don't need to use GUID's myself to point out that they can be used in
place of ambiguous finite strings that have many subtly different sense meanings. A "cat" could be an abbreviation for a brand of earth moving equipment.
Cats are also a type of tractor.
It depends on WHICH model of (what part of) the world you are working.
I am assuming that the complete model of the current world already
exists as a type hierarchy of GUIDs that are mapped to equivalent
English words.
Also, it depends on actually being in a model of "the world" and not
something else.
The "cats" are animals is an aspect of the current model of the world in English. That "cats" are also the abbreviation of a brand of Earth
moving equipment is mapped from a different GUID.
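(Illustrative aside: a minimal sketch of the GUID idea being described, with invented identifiers rather than any real ontology such as CYC. Each distinct sense meaning gets its own identifier, so the animal sense of "cat" and the tractor-brand sense never collide.)

import uuid

CAT_ANIMAL  = uuid.uuid4()   # placeholder GUID for the animal sense
CAT_TRACTOR = uuid.uuid4()   # placeholder GUID for the equipment-brand sense
ANIMAL      = uuid.uuid4()
EQUIPMENT   = uuid.uuid4()

is_a = {CAT_ANIMAL: ANIMAL, CAT_TRACTOR: EQUIPMENT}  # child GUID -> parent GUID
senses = {"cat": [CAT_ANIMAL, CAT_TRACTOR]}          # one English word, two senses

def subtype_of(guid, ancestor):
    # walk up the type hierarchy looking for the ancestor
    while guid in is_a:
        guid = is_a[guid]
        if guid == ancestor:
            return True
    return False

print(subtype_of(CAT_ANIMAL, ANIMAL))   # True: the animal sense is an animal
print(subtype_of(CAT_TRACTOR, ANIMAL))  # False: the tractor sense is not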
You are just showing how little you understand about the basis of
formal logic.
No, you are showing how little you understand of knowledge ontologies.
Cats are animals is necessarily true even if no cats ever
physically existed.
Nope. If cats don't exist in the system, the statement is not
necessarily true. For instance, the statement is NOT true in the
system of the Natural Numbers.
Cats are animals at the semantic level in the current model of the
world. The model of the world has GUID placeholders for the notion of
{cats} and {animals} and for every other unique sense meaning.
No, you didn't use them, and the GUIDs only apply to the system that
actually defines them.
I have said that I am talking about knowledge ontology type hierarchies
about 5000 times, none recently.
So in *A* model of the world, with the addition of the GUIDs on the
terms, you can make that claim.
There is not a unique "The" model of the world.
I am only referring to the current worlds of all possible worlds.
Possible worlds is from philosophy so you probably won't know about it.
You continue to conflate your own lack of knowledge of philosophy for my
lack of knowledge of logic. My knowledge of logic is pretty good with
the exception of model theory.
Also, I am not "ignorant", since that means not having knowledge
or awareness of something, but I do understand what you are saying >>>>>> and aware of your ideas, AND I POINT OUT YOUR ERRORS.
Until you fully understand what a semantic tautology is and why it is >>>>> necessarily true you remain sufficiently ignorant.
As far as you have explained, it is an illogical concept based on
undefined grounds. You refuse to state whether your "semantic" is
"by the meaning of the words" at which point you need understand
that either
When I refer to {semantic} and don't restrict this to the meaning of
words then it applies to every formal language expression, natural
language expression and formalized natural language expression.
So, you don't understand what you are talking about.
Again this is your ignorance and not mine.
Semantics (from Ancient Greek σημαντικός (sēmantikós) 'significant') is the study of reference, meaning, or truth. The
term can be used to refer to subfields of several distinct disciplines, including philosophy, linguistics and computer science. https://en.wikipedia.org/wiki/Semantics
So, you admit that your system falls to the principle of explosion, as
the classic definition of semantic in classic logic is enough to allow
it.
I am not sure. I have to learn more model theory first.
I am sure that no semantic meaning can be correctly
derived on the basis of a contradiction or a falsehood.
That you assume otherwise is your mistake.
In other words, you don't know how to say things precisely,
A notable feature of relevance logics is that they are paraconsistent
logics: the existence of a contradiction will not cause "explosion".
This follows from the fact that a conditional with a contradictory
antecedent that does not share any propositional or predicate letters
with the consequent cannot be true (or derivable). https://en.wikipedia.org/wiki/Relevance_logic
That is why I (and the CYC project) use GUIDs.
you are using the "natural" meaning and break the rules of formal
logic, or you mean the formal meaning within the system, at which
point what is the difference between your "semantic" connections as
you define them and the classical meaning of semantic, being related
to what is showable by a chain of connections to the truth makers of the
system.
We don't need to formalize the notions of {cats} and {animals} to know
that cats <are> animals according to the meaning of those terms.
Unless they are tractors, or something else using the word.
Note, if you take that latter definition, then either you need to
cripple the logic you allow or the implication operator and the
principle of explosion both exist in your system. (If you don't
define the implication operator as a base operation,
I have already said quite a few times that I am probably replacing
the implication operator with the Semantic Necessity operator: ⊨□
But are you removing the AND and OR and NOT operator,
I never said anything like that, where do you get this stuff from?
if not, anything done by implication can be done with a combination of
those.
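(Illustrative aside: the point just made is the standard one that the material conditional is definable from NOT and OR, so dropping "->" as a primitive does not remove it from a system that keeps those operators. A quick check over all four truth assignments, in Python; the helper name is invented.)

from itertools import product

def implies(p, q):
    # the usual truth table of the material conditional:
    # false only when p is true and q is false
    return not (p and not q)

for p, q in product([False, True], repeat=2):
    assert implies(p, q) == ((not p) or q)
print("p -> q and (not p) or q agree on all four rows")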
Propositional logic has been adapted so that there is some semantic connection between its terms. Relevance logic may be sufficient.
I am examining these things at the foundational basic architecture level;
you mistake this for a lack of understanding of the details. All of
the details have not been fully reverse engineered yet.
I don't think you actually understand how the operator works.
Its truth table tells me everything that I need to know.
Also, can you actually DEFINE (not just show an example of) what this
operator means?
I can't do that because you do not have a sufficient understanding of the
term semantic, in that you assumed it only applies to the meaning of
words.
That you can't seem to remember key points that I make and repeat many
times is very annoying.
The fact that you never actually define things, and ignore my comments
make that your fault.
I think the problem is you don't know how to do any of the things I
ask about, so when I keep asking you to do them, you get annoyed
because I keep showing how stupid you are.
I am mostly ignorant of model theory and am actually correcting that.
You seem mostly ignorant of philosophy thus cannot understand the
philosophy of logic.
On 4/28/23 6:17 PM, olcott wrote:
On 4/28/2023 4:21 PM, Richard Damon wrote:
On 4/28/23 1:15 PM, olcott wrote:
On 4/28/2023 11:41 AM, Richard Damon wrote:
On 4/28/23 11:50 AM, olcott wrote:
On 4/28/2023 10:44 AM, Richard Damon wrote:
On 4/28/23 11:26 AM, olcott wrote:
On 4/28/2023 10:14 AM, Richard Damon wrote:
On 4/28/23 10:59 AM, olcott wrote:
On 4/28/2023 6:40 AM, Richard Damon wrote:
https://www.dictionary.com/browse/lie
3 an inaccurate or untrue statement; falsehood:
When I went to school, history books were full of lies, and I won't teach lies to kids.
5 to express what is false; convey a false impression.
It does not ALWAYS require actual knowledge that the
statement is incorrect.
Yes it does and you are stupid for saying otherwise.
Then why does the definition I quoted say otherwise?
That just shows you are the one that is stupid, and a liar.
In this case you are proving to be stupid: (yet not a liar)
1. Traditional Definition of Lying
There is no universally accepted definition of lying to others. The dictionary definition of lying is “to make a false statement with the intention to deceive” (OED 1989) but there are numerous problems with this definition. It is both too narrow, since it requires falsity, and too broad, since it allows for lying about something other than what is being stated, and lying to someone who is believed to be listening in but who is not being addressed.
The most widely accepted definition of lying is the following: “A lie is a statement made by one who does not believe it with the intention that someone else shall be led to believe it” (Isenberg 1973, 248) (cf. “[lying is] making a statement believed to be false, with the intention of getting another to accept it as true” (Primoratz 1984, 54n2)). This definition does not specify the addressee, however. It may be restated as follows:
(L1) To lie =df to make a believed-false statement to another person with the intention that the other person believe that statement to be true.
L1 is the traditional definition of lying. According to L1, there are at least four necessary conditions for lying.
First, lying requires that a person make a statement (statement condition).
Second, lying requires that the person believe the statement to be false; that is, lying requires that the statement be untruthful (untruthfulness condition).
Third, lying requires that the untruthful statement be made to another person (addressee condition).
Fourth, lying requires that the person intend that that other person believe the untruthful statement to be true (intention to deceive the addressee condition).
https://plato.stanford.edu/entries/lying-definition/#TraDefLyi
So, you are trying to use arguments to justify that you can say
"false statements" and not be considered a liar.
The fact that you seem to have KNOWN that the generally accepted
truth differed from your ideas does not excuse you from claiming
that you can say them as FACT, and not be a liar.
When I say that an idea is a fact I mean that it is a semantic
tautology. That you don't understand things well enough to verify
that
it is a semantic tautology does not even make my assertion false.
So, you admit that you don't know the actual meaning of a FACT.
I mean true in the absolute sense of the word true, such as:
2 + 3 = 5 is verified as necessarily true on the basis of its meaning.
Semantic tautologies are the only kind of facts that are necessarily
true in all possible worlds.
The fact that your error has been pointed out an enormous number of times makes your blatant disregard for the actual truth a
suitable stand-in for your own belief.
The fact that no one has understood my semantic tautologies only
proves
that no one has understood my semantic tautologies. It does not even prove that my assertion is incorrect.
No, the fact that you ACCEPT most existing logic as valid, but then
try to change the rules at the far end, without understanding that
you are accepting things your logic likely rejects, shows that you
don't understand how logic actually works.
That I do not have a complete grasp of every nuance of mathematical
logic does not show that I do not have a sufficient grasp of those
aspects that I refer to.
My next goal is to attain a complete understanding of all of the basic terminology of model theory. I had a key insight about model theory
sometime in the last month that indicates that I must master its basic terminology.
You present "semantic tautologies" based on FALSE definition and
results that you can not prove.
It may seem that way from the POV of not understanding what I am
saying.
The entire body of analytical truth is a set of semantic tautologies.
That you are unfamiliar with the meaning of these terms is no actual
rebuttal at all.
If you don't understand from all the instruction you have been given that you are wrong, you are just proved to be totally mentally
incapable.
If you want to claim that you are not a liar by reason of
insanity, make that plea, but that just becomes an admission that you are a pathological liar, a liar because of a mental illness.
That you continue to believe that lies do not require an intention to deceive after the above has been pointed out makes you willfully
ignorant, yet still not a liar.
But, by the definition I use, since it has been made clear to you
that you are wrong, your continuing to spout words that have been
proven incorrect makes YOU a pathological liar.
No it only proves that you continue to have no grasp of what a semantic tautology could possibly be. Any expression that is verified as
necessarily true entirely on the basis of its meaning is a semantic
tautology.
Except that isn't the meaning of a "Tautology".
In logic, a formula is satisfiable if it is true under at least one
interpretation, and thus a tautology is a formula whose negation is
unsatisfiable. In other words, it cannot be false. It cannot be untrue.
Right, but that means using the rules of the field, so only definitions
of that field.
Thus, your "Meaning of the Words" needs to quote ONLY actual definitions
that have been accepted in the field.
https://en.wikipedia.org/wiki/Tautology_(logic)#:~:text=In%20logic%2C%20a%20formula%20is,are%20known%20formally%20as%20contradictions.
What I actually mean is analytic truth, yet math people will have no
clue about this because all of math is syntactic rather than semantic.
https://plato.stanford.edu/entries/analytic-synthetic/
I thought you previously were claiming that all of mathematics had to be analytic!
And why do you call out an article about analytic-synthetic when you are making a distinction between semantic and syntactic? That seems to be a non sequitur.
And math is NOT just syntactic, as syntax can't express many of the properties used in math.
Because of this I coined my own term [semantic tautology] as the most
self-descriptive term that I could find as a place-holder for my notion.
Right, so you don't understand how math works, so you make up terms that you can't actually define to fix it.
The COMMON definition is "the saying of the same thing twice in
different words, generally considered to be a fault of style (e.g.,
they arrived one after the other in succession)".
The Meaning in the field of Logic is "In mathematical logic, a
tautology (from Greek: ταυτολογία) is a formula or assertion that is
true in every possible interpretation."
So, neither of them point to the meaning of the words.
Did I say that I am limiting the application [semantic tautology] to
words?
You haven't given any other definition, so yes, by default you have.
You can't use the classic semantic of logic, since you disagree with how
that works, so you only have words. (Classic logic semantics lets you
show the principle of explosion works, so you can't be using that).
When dealing with logic a [semantic tautology] may simply be a
tautology(logic). When dealing with formalized natural language it may
be clearer to refer to it as a [semantic tautology], in that the
semantic meanings of natural language expressions are formalized as
axioms.
In other words, you don't know what you are talking about and using word salad.
If you are just making up words, you are admitting you have lost from
the start.
The problem is that word meanings, especially for "natural" language,
are too ill-defined to be used to form the basis of formal logic. You
need to
Not when natural language is formalized.
Semantic Grammar and the Power of Computational Language
But then you need to use that formalize version, and be in a system that
uses it.
https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/
So, you are admitting you don't know how formal logic works.
Note, ChatGPT is proven to not understand how to get actually correct
answer (or at least doesn't always apply those rules).
work with FORMAL definitions, which become part of the Truth Makers
of the system. At that point, either your semantic tautologies are
real tautologies because they are always true in every model, or they
are not tautologies.
Cats are animals in the currently existing model of the world; cats may
not exist in other possible worlds. [semantic tautology] applies within
a model of the world.
WHICH model of the world?
(Note, you didn't use any UUID's, so you can't argue with them)
Cats are also a type of tractor.
It depends on WHICH model of (what part of) the world you are working.
Also, it depends on actually being in a model of "the world" and not something else.
You are just showing how little you understand about the basis of formal logic.
Cats are animals is necessarily true even if no cats ever physically
existed.
Nope. If cats don't exist in the system, the statement is not
necessarily true. For instance, the statement is NOT true in the
system of the Natural Numbers.
Cats are animals at the semantic level in the current model of the
world. The model of the world has GUID placeholders for the notion of
{cats} and {animals} and for every other unique sense meaning.
No, you didn't use them, and the GUIDs only apply to the system that
actually defines them.
So in *A* model of the world, with the addition of the GUIDs on the
terms, you can make that claim.
There is not a unique "The" model of the world.
Also, I am not "ignorant", since that means not having knowledge or
awareness of something, but I do understand what you are saying and
aware of your ideas, AND I POINT OUT YOUR ERRORS.
Until you fully understand what a semantic tautology is and why it is
necessarily true you remain sufficiently ignorant.
As far as you have explained, it is an illogical concept based on
undefined grounds. You refuse to state whether your "semantic" is "by
the meaning of the words" at which point you need understand that either
When I refer to {semantic} and don't restrict this to the meaning of
words then it applies to every formal language expression, natural
language expression and formalized natural language expression.
So, you don't understand what you are talking about.
So, you admit that your system falls to the principle of explosion, as
the classic definition of semantic in classic logic is enough to allow it.
That you assume otherwise is your mistake.
In other words, you don't know how to say things precisely,
you are using the "natural" meaning and break the rules of formal
logic, or you mean the formal meaning within the system, at which
point what is the difference between your "semantic" connections as
you define them and the classical meaning of semantic being related
to what is showable by a chain of connections to the truth makers of the system.
We don't need to formalize the notions of {cats} and {animals} to know
that cats <are> animals according to the meaning of those terms.
Unless they are tractors, or something else using the word.
Note, if you take that latter definition, then either you need to
cripple the logic you allow or the implication operator and the
principle of explosion both exist in your system. (If you don't
define the implication operator as a base operation,
I have already said quite a few times that I am probably replacing the
implication operator with the Semantic Necessity operator: ⊨□
But are you removing the AND and OR and NOT operator, if not, anything
done by implication can be done with a combination of those.
I don't think you actually understand how the operator works.
Also, can you actually DEFINE (not just show an example of) what this
operator means?
That you can't seem to remember key points that I make and repeat many
times is very annoying.
The fact that you never actually define things, and ignore my comments
make that your fault.
I think the problem is you don't know how to do any of the things I ask about, so when I keep asking you to do them, you get annoyed because I
keep showing how stupid you are.
but do include "not", "and" and "or" as operation, it can just be
defined in the system).
YOU are the ignorant one, as you don't seem to understand enough to
even comment about the rebuttals to your claims.
THAT shows ignorance, and stupidity.
On 4/28/2023 10:05 PM, Richard Damon wrote:
On 4/28/23 6:17 PM, olcott wrote:
On 4/28/2023 4:21 PM, Richard Damon wrote:
On 4/28/23 1:15 PM, olcott wrote:
On 4/28/2023 11:41 AM, Richard Damon wrote:
On 4/28/23 11:50 AM, olcott wrote:
On 4/28/2023 10:44 AM, Richard Damon wrote:
On 4/28/23 11:26 AM, olcott wrote:
On 4/28/2023 10:14 AM, Richard Damon wrote:
On 4/28/23 10:59 AM, olcott wrote:
On 4/28/2023 6:40 AM, Richard Damon wrote:
https://www.dictionary.com/browse/lie
3 an inaccurate or untrue statement; falsehood:
When I went to school, history books were full of lies, and I won't teach lies to kids.
5 to express what is false; convey a false impression.
It does not ALWAYS require actual knowledge that the
statement is incorrect.
Yes it does and you are stupid for saying otherwise.
Then why does the definition I quoted say otherwise?
That just shows you are the one that is stupid, and a liar.
In this case you are proving to be stupid: (yet not a liar)
1. Traditional Definition of Lying
There is no universally accepted definition of lying to others. The dictionary definition of lying is “to make a false statement with the intention to deceive” (OED 1989) but there are numerous problems with this definition. It is both too narrow, since it requires falsity, and too broad, since it allows for lying about something other than what is being stated, and lying to someone who is believed to be listening in but who is not being addressed.
The most widely accepted definition of lying is the following: “A lie is a statement made by one who does not believe it with the intention that someone else shall be led to believe it” (Isenberg 1973, 248) (cf. “[lying is] making a statement believed to be false, with the intention of getting another to accept it as true” (Primoratz 1984, 54n2)). This definition does not specify the addressee, however. It may be restated as follows:
(L1) To lie =df to make a believed-false statement to another person with the intention that the other person believe that statement to be true.
L1 is the traditional definition of lying. According to L1, there are at least four necessary conditions for lying.
First, lying requires that a person make a statement (statement condition).
Second, lying requires that the person believe the statement to be false; that is, lying requires that the statement be untruthful (untruthfulness condition).
Third, lying requires that the untruthful statement be made to another person (addressee condition).
Fourth, lying requires that the person intend that that other person believe the untruthful statement to be true (intention to deceive the addressee condition).
https://plato.stanford.edu/entries/lying-definition/#TraDefLyi
So, you are trying to use arguments to justify that you can say
"false statements" and not be considered a liar.
The fact that you seem to have KNOWN that the generally accepted
truth differed from your ideas does not excuse you from claiming
that you can say them as FACT, and not be a liar.
When I say that an idea is a fact I mean that it is a semantic
tautology. That you don't understand things well enough to verify that
it is a semantic tautology does not even make my assertion false.
So, you admit that you don't know the actual meaning of a FACT.
I mean true in the absolute sense of the word true, such as:
2 + 3 = 5 is verified as necessarily true on the basis of its meaning.
Semantic tautologies are the only kind of facts that are necessarily true in all possible worlds.
The fact that your error has been pointed out an enormous number of times makes your blatant disregard for the actual truth a
suitable stand-in for your own belief.
The fact that no one has understood my semantic tautologies only proves
that no one has understood my semantic tautologies. It does not even prove that my assertion is incorrect.
No, the fact that you ACCEPT most existing logic as valid, but
then try to change the rules at the far end, without understanding
that you are accepting things your logic likely rejects, shows
that you don't understand how logic actually works.
That I do not have a complete grasp of every nuance of mathematical
logic does not show that I do not have a sufficient grasp of those
aspects that I refer to.
My next goal is to attain a complete understanding of all of the basic terminology of model theory. I had a key insight about model theory
sometime in the last month that indicates that I must master its basic terminology.
You present "semantic tautologies" based on FALSE definition and
results that you can not prove.
It may seem that way from the POV of not understanding what I am
saying.
The entire body of analytical truth is a set of semantic tautologies. That you are unfamiliar with the meaning of these terms is no actual rebuttal at all.
If you don't understand from all the instruction you have been given that you are wrong, you are just proved to be totally mentally
incapable.
If you want to claim that you are not a liar by reason of
insanity, make that plea, but that just becomes an admission
that you are a pathological liar, a liar because of a mental
illness.
That you continue to believe that lies do not require an
intention to deceive after the above has been pointed out makes you willfully
ignorant, yet still not a liar.
But, by the definition I use, since it has been made clear to you
that you are wrong, your continuing to spout words that have been proven incorrect makes YOU a pathological liar.
No it only proves that you continue to have no grasp of what a
semantic
tautology could possibly be. Any expression that is verified as
necessarily true entirely on the basis of its meaning is a semantic
tautology.
Except that isn't the meaning of a "Tautology".
In logic, a formula is satisfiable if it is true under at least one
interpretation, and thus a tautology is a formula whose negation is
unsatisfiable. In other words, it cannot be false. It cannot be untrue.
Right, but that means using the rules of the field, so only definitions
of that field.
Thus, your "Meaning of the Words" needs to quote ONLY actual
definitions that have been accepted in the field.
https://en.wikipedia.org/wiki/Tautology_(logic)#:~:text=In%20logic%2C%20a%20formula%20is,are%20known%20formally%20as%20contradictions.
What I actually mean is analytic truth, yet math people will have no
clue about this because all of math is syntactic rather than semantic.
https://plato.stanford.edu/entries/analytic-synthetic/
I thought you previously were claiming that all of mathematics had to
be analytic!
Everyone that knows philosophy of mathematics knows that this is true.
And why do you call out an article about analytic-synthetic when you
are making a distinction between semantic and syntactic? That seems to
be a non sequitur.
And math is NOT just syntactic, as syntax can't express many of the
properties used in math.
Because of this I coined my own term [semantic tautology] as the most
self-descriptive term that I could find as a place-holder for my notion.
Right, so you don't understand how math works, so you make up terms that
you can't actually define to fix it.
The COMMON definition is "the saying of the same thing twice in
different words, generally considered to be a fault of style (e.g.,
they arrived one after the other in succession)".
The Meaning in the field of Logic is "In mathematical logic, a
tautology (from Greek: ταυτολογία) is a formula or assertion that is
true in every possible interpretation."
So, neither of them point to the meaning of the words.
Did I say that I am limiting the application [semantic tautology] to
words?
You haven't given any other definition, so yes, by default you have.
Only for people that don't have a clue what the term [semantics] means.
You can't use the classic semantic of logic, since you disagree with
how that works, so you only have words. (Classic logic semantics lets
you show the principle of explosion works, so you can't be using that).
When dealing with logic a [semantic tautology] may simply be a
tautology(logic). When dealing with formalized natural language it may
be clearer to refer to it as a [semantic tautology], in that the
semantic meanings of natural language expressions are formalized as
axioms.
In other words, you don't know what you are talking about and using
word salad.
If you are just making up words, you are admitting you have lost
from the start.
The problem is that word meanings, especially for "natural" language,
are too ill-defined to be used to form the basis of formal logic. You
need to
Not when natural language is formalized.
Semantic Grammar and the Power of Computational Language
But then you need to use that formalize version, and be in a system
that uses it.
https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/
So, you are admitting you don't know how formal logic works.
Note, ChatGPT is proven to not understand how to get actually correct
answer (or at least doesn't always apply those rules).
work with FORMAL definitions, which become part of the Truth Makers
of the system. At that point, either your semantic tautologies are
real tautologies because they are always true in every model, or they
are not tautologies.
Cats are animals in the currently existing model of the world; cats may
not exist in other possible worlds. [semantic tautology] applies within
a model of the world.
WHICH model of the world?
Clearly you have never heard of possible worlds semantics. https://en.wikipedia.org/wiki/Possible_world
(Note, you didn't use any UUID's, so you can't argue with them)
Cats are also a type of tractor.
It depends on WHICH model of (what part of) the world you are working.
Also, it depends on actually being in a model of "the world" and not
something else.
You are just showing how little you understand about the basis of
formal logic.
I am mostly focusing on the philosophical foundation of logic rather
than logic itself. To most logicians this is just silly nonsense.
They don't care whether or not the rules are consistent; the rules
are the word of God to logicians.
Cats are animals is necessarily true even if no cats ever
physically existed.
Nope. If cats don't exist in the system, the statement is not
necessarily true. For instance, the statement is NOT true in the
system of the Natural Numbers.
Cats are animals at the semantic level in the current model of the
world. The model of the world has GUID placeholders for the notion of
{cats} and {animals} and for every other unique sense meaning.
No, you didn't use them, and the GUIDs only apply to the system that
actually defines them.
So in *A* model of the world, with the addition of the GUIDs on the
terms, you can make that claim.
There is not a unique "The" model of the world.
Sure, maybe the living animal "cat" has always been a ten-story office building and everyone has been fooled into thinking otherwise.
Also, I am not "ignorant", since that means not having knowledge
or awareness of something, but I do understand what you are saying >>>>>> and aware of your ideas, AND I POINT OUT YOUR ERRORS.
Until you fully understand what a semantic tautology is and why it is >>>>> necessarily true you remain sufficiently ignorant.
As far as you have explained, it is an illogical concept based on
undefined grounds. You refuse to state whether your "semantic" is
"by the meaning of the words" at which point you need understand
that either
When I refer to {semantic} and don't restrict this to the meaning of
words then it applies to every formal language expression, natural
language expression and formalized natural language expression.
So, you don't understand what you are talking about.
So, you admit that your system falls to the principle of explosion, as
the classic definition of semantic in classic logic is enough to allow
it.
I am not stupid enough to believe that
FALSE <proves> Donald Trump is the Christ.
Anyone with any sense rejects this nonsense:
ex falso [sequitur] quodlibet
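(Illustrative aside: for reference, this is the standard classical route to ex falso quodlibet that relevance and paraconsistent logics are designed to block. The snippet only restates the classical claim; it does not take a side in the dispute.)

from itertools import product

# Proof-theoretic route (classical):
#   1. p         (from the contradiction p and not-p)
#   2. p or q    (disjunction introduction, from 1)
#   3. not p     (from the contradiction)
#   4. q         (disjunctive syllogism, from 2 and 3)
# Semantic route: (p and not p) -> q has no falsifying row:
explosion = lambda p, q: (not (p and not p)) or q   # "->" encoded as "not A or B"
print(all(explosion(p, q) for p, q in product([False, True], repeat=2)))  # True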
That you assume otherwise is your mistake.
In other words, you don't know how to say things precisely,
you are using the "natural" meaning and break the rules of formal
logic, or you mean the formal meaning within the system, at which
point what is the difference between your "semantic" connections as
you define them and the classical meaning of semantic being related
to what is showable by a chain of connections to the truth makers of the
system.
We don't need to formalize the notions of {cats} and {animals} to know
that cats <are> animals according to the meaning of those terms.
Unless they are tractors, or something else using the word.
That is why I stipulated that in the hypothetical formal system that I
am referring to each unique sense meaning has its own GUID.
Note, if you take that latter definition, then either you need to
cripple the logic you allow or the implication operator and the
principle of explosion both exist in your system. (If you don't
define the implication operator as a base operation,
I have already said quite a few times that I am probably replacing
the implication operator with the Semantic Necessity operator: ⊨□
But are you removing the AND and OR and NOT operator, if not, anything
done by implication can be done with a combination of those.
Show me.
I don't think you actually understand how the operator works.
It is a freaking truth table.
Also, can you actually DEFINE (not just show an example of) what this
operator means?
Assume that everything known to mankind is in a type hierarchy, the GUID
is the identifier in the type hierarchy for each unique sense meaning.
That you can't seem to remember key points that I make and repeat many
times is very annoying.
The fact that you never actually define things, and ignore my comments
make that your fault.
I think the problem is you don't know how to do any of the things I
ask about, so when I keep asking you to do them, you get annoyed
because I keep showing how stupid you are.
but do include "not", "and" and "or" as operation, it can just be
defined in the system).
YOU are the ignorant one, as you don't seem to understand enough
to even comment about the rebuttals to your claims.
THAT shows ignorance, and stupidity.