H.q0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* H.qy // H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ halts
H.q0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* H.qn // H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ does not halt
Because H is required to always halt, we know that
Ĥ.Hq0 applied to ⟨Ĥ⟩ ⟨Ĥ⟩ transitions to Ĥ.Hqy or Ĥ.Hqn;
thus H merely needs to report on that.
// Ĥ.q0 ⟨Ĥ⟩ copies its input then transitions to Ĥ.Hq0
// Ĥ.Hq0 is the first state of The Linz hypothetical halt decider
// H transitions to Ĥ.Hqy for halts and Ĥ.Hqn for does not halt
// ∞ means an infinite loop has been appended to the Ĥ.Hqy state
//
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt
When Ĥ is applied to ⟨Ĥ⟩ it contradicts whatever value Ĥ.H returns, making Ĥ self-contradictory.
On 2/29/2024 4:38 AM, immibis wrote:
On 29/02/24 01:03, olcott wrote:
was there a purpose to posting this nonsense again? You might be
automatically spam-filtered if you keep posting the same post so many
times.
All of the rebuttals have been incorrect.
On 2/29/2024 5:32 PM, Richard Damon wrote:
On 2/29/24 10:49 AM, olcott wrote:
Yes, all of YOUR rebuttals have been incorrect.
You are just proving you don't understand what you are talking about.
The fact that you just repeat your claims shows that you have nothing
to support them.
That is the same form, rhetoric entirely bereft of any supporting
reasoning (REBoaSR), as most of your rebuttals. Those that have more
than this are addressed. Once I have proven my point, your rebuttal
becomes REBoaSR. The only way that I can tell that I have proved
my point to you is that your rebuttals become nonsense or REBoaSR.
On 2/29/2024 5:32 PM, Richard Damon wrote:
On 2/29/24 12:02 PM, olcott wrote:
All of the rebuttals have been incorrect.
Then why don't you explain how each one is incorrect?
I did and you ignored them.
Nope, you just made more incorrect claims.
The scope of my current work has changed. It is not that the
halting problem can be solved; it is that the halting problem
proofs were always wrong about the undecidability of the
halting problem.
One fundamental change that we can make to my prior presentations
is that we can now say that Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ gets the wrong answer
because it is not reporting on the behavior of the direct execution
of Ĥ ⟨Ĥ⟩.
The correct common assumption that two identical machines
operating on the same input will necessarily derive the
same result does not apply to
H ⟨Ĥ⟩ ⟨Ĥ⟩ versus Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ or H1(D,D) versus H(D,D).
The verifiably correct execution trace of H1(D,D)
includes as a part of it the verifiably correct execution
trace of H(D,D).
Disagreeing with these verifiably correct execution traces
is analogous to disagreeing with first grade arithmetic.
The part that you could never understand is that an input
to a machine that has not yet been aborted is not the same
input to an identical machine after this input has already
been aborted by another copy of itself.
You simply assumed that I must be wrong and refused to
examine the details.
On 2/29/2024 8:16 PM, Richard Damon wrote:
On 2/29/24 8:20 PM, olcott wrote:
So, you are admitting that you are confused.
When someone examines things much more deeply than anyone
else ever has, and they start from complete scratch utterly
ignoring every prior assumption, they get a progressively
deeper view than anyone else has ever had.
If you admit that we can't "solve" the Halting Problem, meaning making
an H that gets the right answer to all input, then BY DEFINITION, that
means the Halting Problem is uncomputable, which means it is undecidable.
No what I am saying is that H always could correctly determine
the halt status of the incorrectly presumed impossible input.
That term MEANS that there does not exist a Turing Machine that can
correctly compute that result for all inputs, which is EXACTLY what
you are conceding.
Right, and if H is a computation, which it must be if it is a Turing
Machine or Equivalent, then NO COPY of H can get the right answer.
The outer H can see that the inner Ĥ.H has already aborted
its simulation or otherwise already has transitioned to
either Ĥ.Hqy or Ĥ.Hqn. The inner Ĥ.H cannot see this so
H sees things that Ĥ.H cannot see.
I could never have understood this until I made the halting
problem 100% concrete. This was the only possible way for
me to see gaps in the reasoning that could not possibly
be otherwise uncovered.
H ⟨Ĥ⟩ ⟨Ĥ⟩ and Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ must give the same answer, as must H1 if it
is actually a copy of H.
Nope. Your "trace" just shows that H1 and H are not the same
computation, and imply that H never was one in the first place, so not
a Turing Machine Equivalent.
If you study it very carefully you will see that the x86
execution trace does emulate the x86 instructions of
D correctly.
Because I have been a software engineer for thirty years
and learned the x86 language back when it was new, this
may be much easier for me than for you.
Nope. Your making these false claims just shows you are an ignorant
pathological liar.
When one person asserts first grade arithmetic and the other
disagrees the one that disagrees is necessarily incorrect.
Do you understand this?
The x86 execution trace is this exact same thing in terms
of objectively verifiable truth yet much more difficult
to understand.
Do you know the x86 language at all?
How well do you know C?
On 2/29/2024 4:55 PM, wij wrote:
On Thu, 2024-02-29 at 13:46 -0600, olcott wrote:
On 2/29/2024 1:37 PM, Mikko wrote:
On 2024-02-29 15:51:56 +0000, olcott said:
H ⟨Ĥ⟩ ⟨Ĥ⟩ (in a separate memory space) merely needs
to report on
A Turing machine is not in any memory space.
That no memory space is specified because Turing machines
are imaginary fictions does not entail that they have no
memory space. The actual memory space of actual Turing
machines is the human memory where these ideas are located.
The entire notion of undecidability when it depends on
epistemological antinomies is incoherent.
People that learn these things by rote never notice this.
Philosophers that examine these things looking for incoherence find it.
...14 Every epistemological antinomy can likewise be used
for a similar undecidability proof...(Gödel 1931:43)
So, do you agree with what GUR says?
People believe GUR. Why struggle so painfully, playing the idiot
every day? Give in, my friend.
Graphical User Robots?
The survival of the species depends on a correct understanding of truth.
People who believe GUR are going to survive.
People who do not believe GUR are going to vanish.
What the Hell is GUR ?
Selective memory?
https://groups.google.com/g/comp.theory/c/_tbCYyMox9M/m/XgvkLGOQAwAJ
Basically, GUR says that no one, not even your god, can defy that the HP is undecidable.
I simplify that down to this.
...14 Every epistemological antinomy can likewise be used for
a similar undecidability proof...(Gödel 1931:43)
The general notion of decision problem undecidability is fundamentally
flawed in all of those cases where a decider is required to correctly
answer a self-contradictory (thus incorrect) question.
When we account for this, epistemological antinomies are always
excluded from the domain of every decision problem, making all of
these decision problems decidable.
It seems you are trying to change what the halting problem is again.
https://en.wikipedia.org/wiki/Halting_problem
In computability theory, the halting problem is the problem of
determining, from a description of an arbitrary computer program and
an input, whether the program will finish running, or continue to run
forever....
This wiki definition has been shown many times. But since your English
is terrible, you often read it as something else (actually, you
deliberately interpret it differently, a so-called 'lie').
If you want to refute the Halting Problem, you must first understand
what the problem is about, right? You never hit the target that
everyone can see, but POOP.
Note: My email was delivered strangely. It swapped to sci.logic!!!
If we have the decision problem that no one can answer this question:
Is this sentence true or false: "What time is it?"
This is not the halting problem.
Someone has to point out that there is something wrong with it.
This is another problem (not the HP either).
The halting problem is one of many problems that are
only "undecidable" because the notion of decidability
incorrectly requires a correct answer to a self-contradictory
(thus incorrect) question.
What is the 'correct answer' to all HP-like problems?
The correct answer to all undecidable decision problems
that rely on self-contradictory input to determine
undecidability is to reject this input as outside of the
domain of any and all decision problems. This applies
to the Halting Problem and many others.
So, what is the correct answer to this problem?: "Is this sentence
true or false: 'What time is it?'"
The same: what is the correct answer to the halting problem, in your
opinion?
All incorrect questions are rejected as invalid input.
The question is: "Is this sentence true or false: 'What time is it?'"
(or the halting problem).
Why do you answer with the incoherent "All incorrect questions are
rejected as invalid input."?
The key issue with decision theory is that deciders are required to
correctly answer self-contradictory (thus incorrect) questions.
The key difficulty with resolving this issue is that most modern-day
philosophers do not understand that both of these questions are
equally incorrect:
(a) Is this sentence true or false: "What time is it?"
(b) Is this sentence true or false: "This sentence is not true."
They do not understand that the Liar Paradox is simply not a truth bearer.
On 2/29/2024 10:14 PM, Richard Damon wrote:
On 2/29/24 10:27 PM, olcott wrote:
Nope, because it doesn't simulate the CALL H correctly.
That you don't know the x86 language well enough to verify that
the call is simulated correctly provides zero basis for that claim.
I don't show the steps of H and H1 because that generates 7,019
pages of text. There is a flag to turn their display on.
Remember, the H that D calls is part of D, and thus must be simulated.
HH(DD,DD) does a simulation of HH that only generates 251 pages of text.
H(D,D) can tell that D is calling itself and that proves
non-terminating behavior.
So, in 30 years, you have never used a "CALL" instruction, or never
knew what it did?
That is a ridiculous statement.
I don't show the steps of this call to avoid burying
the few lines of the trace of D mixed in with hundreds
or thousands of pages of text after the call is made.
There is a flag to turn this on. I could upload the 7,019
pages of the execution trace of H1(D,D). I just did that;
it is 32 MB.
But if the assertion of first grade arithmetic is that 1 + 2 = 12,
they are wrong.
Likewise, a correct simulator can be verified to be correct
when you examine the execution trace of H1(D,D), where D
calls H(D,D), and see that both H1 and H do correctly
simulate D.
_D()
[00001d12] 55 push ebp
[00001d13] 8bec mov ebp,esp
[00001d15] 51 push ecx
[00001d16] 8b4508 mov eax,[ebp+08]
[00001d19] 50 push eax
[00001d1a] 8b4d08 mov ecx,[ebp+08]
[00001d1d] 51 push ecx
[00001d1e] e81ff8ffff call 00001542
[00001d23] 83c408 add esp,+08
[00001d26] 8945fc mov [ebp-04],eax
[00001d29] 837dfc00 cmp dword [ebp-04],+00
[00001d2d] 7402 jz 00001d31
[00001d2f] ebfe jmp 00001d2f
[00001d31] 8b45fc mov eax,[ebp-04]
[00001d34] 8be5 mov esp,ebp
[00001d36] 5d pop ebp
[00001d37] c3 ret
Size in bytes:(0038) [00001d37]
_main()
[00001d42] 55 push ebp
[00001d43] 8bec mov ebp,esp
[00001d45] 68121d0000 push 00001d12
[00001d4a] 68121d0000 push 00001d12
[00001d4f] e8eef6ffff call 00001442
[00001d54] 83c408 add esp,+08
[00001d57] 50 push eax
[00001d58] 6863070000 push 00000763
[00001d5d] e820eaffff call 00000782
[00001d62] 83c408 add esp,+08
[00001d65] 33c0 xor eax,eax
[00001d67] 5d pop ebp
[00001d68] c3 ret
Size in bytes:(0039) [00001d68]
machine   stack     stack     machine    assembly
address   address   data      code       language
========  ========  ========  =========  =============
[00001d42][00102fe9][00000000] 55         push ebp
[00001d43][00102fe9][00000000] 8bec       mov ebp,esp
[00001d45][00102fe5][00001d12] 68121d0000 push 00001d12
[00001d4a][00102fe1][00001d12] 68121d0000 push 00001d12
[00001d4f][00102fdd][00001d54] e8eef6ffff call 00001442
H1: Begin Simulation Execution Trace Stored at:113095
Address_of_H1:1442
[00001d12][00113081][00113085] 55         push ebp
[00001d13][00113081][00113085] 8bec       mov ebp,esp
[00001d15][0011307d][00103051] 51         push ecx
[00001d16][0011307d][00103051] 8b4508     mov eax,[ebp+08]
[00001d19][00113079][00001d12] 50         push eax
[00001d1a][00113079][00001d12] 8b4d08     mov ecx,[ebp+08]
[00001d1d][00113075][00001d12] 51         push ecx
[00001d1e][00113071][00001d23] e81ff8ffff call 00001542
H: Begin Simulation Execution Trace Stored at:15dabd
Address_of_H:1542
[00001d12][0015daa9][0015daad] 55         push ebp
[00001d13][0015daa9][0015daad] 8bec       mov ebp,esp
[00001d15][0015daa5][0014da79] 51         push ecx
[00001d16][0015daa5][0014da79] 8b4508     mov eax,[ebp+08]
[00001d19][0015daa1][00001d12] 50         push eax
[00001d1a][0015daa1][00001d12] 8b4d08     mov ecx,[ebp+08]
[00001d1d][0015da9d][00001d12] 51         push ecx
[00001d1e][0015da99][00001d23] e81ff8ffff call 00001542
H: Recursive Simulation Detected Simulation Stopped
[00001d23][0011307d][00103051] 83c408     add esp,+08
[00001d26][0011307d][00000000] 8945fc     mov [ebp-04],eax
[00001d29][0011307d][00000000] 837dfc00   cmp dword [ebp-04],+00
[00001d2d][0011307d][00000000] 7402       jz 00001d31
[00001d31][0011307d][00000000] 8b45fc     mov eax,[ebp-04]
[00001d34][00113081][00113085] 8be5       mov esp,ebp
[00001d36][00113085][00001541] 5d         pop ebp
[00001d37][00113089][00001d12] c3         ret
H1: End Simulation Input Terminated Normally
[00001d54][00102fe9][00000000] 83c408     add esp,+08
[00001d57][00102fe5][00000001] 50         push eax
[00001d58][00102fe1][00000763] 6863070000 push 00000763
[00001d5d][00102fe1][00000763] e820eaffff call 00000782
Input_Halts = 1
[00001d62][00102fe9][00000000] 83c408     add esp,+08
[00001d65][00102fe9][00000000] 33c0       xor eax,eax
[00001d67][00102fed][00000018] 5d         pop ebp
[00001d68][00102ff1][00000000] c3         ret
Number of Instructions Executed(470247) == 7019 Pages
I don't show the 7,019 pages of H1 and H
Right, and it shows that H didn't correctly simulate the input
(because H doesn't actually simulate call instructions)
What I showed above proves that H1 and H simulate D correctly.
After all, your x86UTM shows that D(D) will Halt when run, and thus
when correctly simulated.
H1(D,D) is the equivalent of Linz H and H(D,D) is the
equivalent of Linz Ĥ. H1(D,D) does show that D(D) halts.
H(D,D) gets its output contradicted so cannot correctly
report on the behavior of D(D).
Only an idiot would think that a correct simulation can show something
that isn't what actually happens.
I changed my mind on this because it turns out that Linz H and
my H1 do correctly report on their self-contradictory inputs.
The Linz H case is more straightforward.
Ĥ merely breaks its own internal decider and has no
actual effect on H.
Do you know the x86 language at all?
How well do you know C?
VERY WELL.
I guess you have forgotten my background.
I was doing assembly programming in 1971 (admittedly, not the x86,
since it didn't exist yet).
I programmed the progenitors of the x86 processor, in assembly, before
the x86 existed.
I was programming in C before the ANSI standard came out.
Not very much experience with x86, yet probably
good enough to understand the language. The hard
part of assembly is knowing any assembly at all.
After that the next one is pretty easy.
I was programming in C when K&R was the standard.
I met Bjarne Stroustrup back when he was promoting
his brand new C++ programming language.
On 3/1/2024 5:28 AM, immibis wrote:
On 1/03/24 04:27, olcott wrote:
When someone examines things much more deeply than anyone
else ever has, and they start from complete scratch utterly
ignoring every prior assumption, they get a progressively
deeper view than anyone else has ever had.
It also means you are ignoring much accumulated knowledge and known
mistakes and the answers to those mistakes. For example, assuming that
x86utm has anything to do with Turing machines.
If that was true then mistakes in my reasoning could be pointed
out by reasoning instead of dogma.
No what I am saying is that H always could correctly determine
the halt status of the incorrectly presumed impossible input.
If you change what the problem is, then maybe, but then you solved a
different problem that is not the halting problem.
I have now reverted to the original problem in its
entirety and found that the halting problem proofs do not
show that halting is actually undecidable.
This is best explained in the Linz proof where the
counter-example contains the hypothetical halt decider.
All of the proofs where the original H is directly called
by D are not the way that Turing Machines actually work.
For the Linz proof the counter-example input can fool its
own embedded Ĥ.H yet cannot fool the actual Linz H.
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt
Ĥ contradicts Ĥ.H and does not contradict H, thus H is
able to correctly decide ⟨Ĥ⟩ ⟨Ĥ⟩.
H simply looks for whatever wrong answer that Ĥ.H returns
and reports on the halting or not halting behavior of that.
The outer H can see that the inner Ĥ.H has already aborted
its simulation or otherwise already has transitioned to
either Ĥ.Hqy or Ĥ.Hqn. The inner Ĥ.H cannot see this so
H sees things that Ĥ.H cannot see.
If you terminate a computation before the computation computes
something, that doesn't mean that it doesn't compute it. It just means
you terminated it too early.
H simply waits until Ĥ.H reaches Ĥ.Hqy or Ĥ.Hqn. When Ĥ.H
reaches Ĥ.Hqy H can see the infinite loop repeated state.
When Ĥ.H reaches Ĥ.Hqn H can see that Ĥ has halted.
I could never have understood this until I made the halting
problem 100% concrete.
The Turing machine halting problem is already 100% concrete. By making
it about x86utm, which has nothing to do with Turing machines, you
made a mistake.
If it was 100% concrete then every machine that can possibly be
encoded by the second ⊢* wildcard state transition would have
all of its steps actually listed. Since this would be an infinite
list it is impossible to make this proof 100% concrete.
My x86utm shows a 100% complete example that is isomorphic to
the Linz proof. We merely must construe the H/D combination as
the single Linz Ĥ machine and H1 as the Linz H.
The correct common assumption that two identical machines
operating on the same input will necessarily derive the
same result does not apply to
H ⟨Ĥ⟩ ⟨Ĥ⟩ versus Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ or H1(D,D) versus H(D,D).
It always applies unless you cheat. Or else please show me two Turing
machine execution traces where the same machine with the same initial
tape computes two different answers.
The execution trace of H1(D,D) such that D calls H(D,D)
already shows an isomorphic example. So far dozens of people have
rejected this on the basis of dogma and not on the basis of showing
any instruction of D that was simulated incorrectly.
H(D,D) does not simulate the call to itself because it has reached
the earliest possible non-termination criteria.
HH(DD,DD) does correctly simulate itself and uses the repeated
state as its non-termination criteria.
Whether or not these are computable is moot at this point because
numerous alternative criteria would work equally well. All that we
really need is some computable criteria for Ĥ.H to transition to
Ĥ.Hqy or Ĥ.Hqn. As long as some computable criteria exists then H
can correctly decide ⟨Ĥ⟩ ⟨Ĥ⟩.
If you study it very carefully you will see that the x86
execution trace does emulate the x86 instructions of
D correctly.
x86 execution traces have nothing to do with Turing machines unless
you can prove they do.
The Church-Turing thesis already sufficiently proves that they do.
The Church-Turing thesis (formerly commonly known simply as Church's
thesis) says that any real-world computation can be translated into an equivalent computation involving a Turing machine. https://mathworld.wolfram.com/Church-TuringThesis.html
*There are additional nuances that are not relevant at the current time*
As long as some computable criteria exists for Ĥ.H to transition to
Ĥ.Hqy or Ĥ.Hqn, then H has its basis to correctly decide ⟨Ĥ⟩ ⟨Ĥ⟩.
When one person asserts first grade arithmetic and the other
disagrees the one that disagrees is necessarily incorrect.
We assert first grade computer science: the same program with the same
input always makes the same output. You disagree. You are necessarily
incorrect.
*That is correct when all things are exactly equal*
(a) Ĥ.H embedded in Ĥ has its return values contradicted; H does not have its return values contradicted. This makes Ĥ.H and H different machines.
(b) H watches the behavior of Ĥ such that its Ĥ.H has executed far more steps than H, thus Ĥ.H and H are in entirely different machine states.
(c) These same two things equally apply to H(D,D) and H1(D,D).
The behavior of D after its simulation has been aborted is what
H1 sees. The behavior of D before its simulation has been aborted
is what H sees. You can't say this is impossible because the x86
execution trace proves that it is true.
(d) The same thing as (c) applies to H ⟨Ĥ⟩ ⟨Ĥ⟩. As long as some computable criteria exists for Ĥ.H to transition to Ĥ.Hqy or
Ĥ.Hqn, then H has its basis to correctly decide ⟨Ĥ⟩ ⟨Ĥ⟩.
The x86 execution trace is this exact same thing in terms
of objectively verifiable truth yet much more difficult
to understand.
Do you know the x86 language at all?
How well do you know C?
x86 execution traces have nothing to do with Turing machines unless
you can prove they do.
The Church-Turing thesis (formerly commonly known simply as Church's
thesis) says that any real-world computation can be translated into an equivalent computation involving a Turing machine. https://mathworld.wolfram.com/Church-TuringThesis.html
*There are additional nuances that are not relevant at the current time*
As long as some computable criteria exists for Ĥ.H to transition
to Ĥ.Hqy or Ĥ.Hqn, then H has its basis to correctly decide ⟨Ĥ⟩ ⟨Ĥ⟩.
On 3/1/2024 11:16 AM, Richard Damon wrote:
On 3/1/24 11:48 AM, olcott wrote:
On 3/1/2024 5:28 AM, immibis wrote:
On 1/03/24 04:27, olcott wrote:
When someone examines things much more deeply than anyone
else ever has, and starts from complete scratch utterly
ignoring every prior assumption, one gets a progressively
deeper view than anyone else ever has had.
It also means you are ignoring much accumulated knowledge and known
mistakes and the answers to those mistakes. For example, assuming
that x86utm has anything to do with Turing machines.
If that was true then mistakes in my reasoning could be pointed
out by reasoning instead of dogma.
Except that you falsely describe any reasoning as Dogma.
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt
When we hypothesize that a Turing equivalent RASP machine
can find out its own machine address (as H does) or we
refer to the earlier version of H named HH that looks
inside the internal state of its simulated machine as
Mike affirmed is computable.
Then when H1 simulates D that calls a simulated H(D,D)
then H(D,D) does have some criterion measure to apply to
its input. This means that Ĥ.H has some criterion measure
to apply to its input.
This proves that Ĥ.H does transition to Ĥ.Hqy or Ĥ.Hqn.
This proves that Linz H has a correct criterion measure
to apply to ⟨Ĥ⟩ ⟨Ĥ⟩ that is consistent with the behavior
of the directly executed Ĥ applied to ⟨Ĥ⟩.
If Linz H is modeled after Olcott H then we know that
Ĥ.H transitions to Ĥ.Hqn which is the wrong answer,
yet (as the execution trace shows below) this gives
Linz H and my H1 the correct basis to decide halting
on the counter-example input.
int D(int (*x)())
{
  int Halt_Status = H(x, x);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

int main()
{
  Output("Input_Halts = ", H1(D,D));
}
machine stack stack machine assembly
address address data code language
======== ======== ======== ========= =============
[00001d42][00102fe9][00000000] 55         push ebp      ; begin main()
[00001d43][00102fe9][00000000] 8bec       mov ebp,esp
[00001d45][00102fe5][00001d12] 68121d0000 push 00001d12 ; push D
[00001d4a][00102fe1][00001d12] 68121d0000 push 00001d12 ; push D
[00001d4f][00102fdd][00001d54] e8eef6ffff call 00001442 ; call H1(D,D)
H1: Begin Simulation   Execution Trace Stored at:113095
Address_of_H1:1442
[00001d12][00113081][00113085] 55         push ebp      ; begin D
[00001d13][00113081][00113085] 8bec       mov ebp,esp
[00001d15][0011307d][00103051] 51         push ecx
[00001d16][0011307d][00103051] 8b4508     mov eax,[ebp+08]
[00001d19][00113079][00001d12] 50         push eax      ; push D
[00001d1a][00113079][00001d12] 8b4d08     mov ecx,[ebp+08]
[00001d1d][00113075][00001d12] 51         push ecx      ; push D
[00001d1e][00113071][00001d23] e81ff8ffff call 00001542 ; call H(D,D)
H: Begin Simulation   Execution Trace Stored at:15dabd
Address_of_H:1542
[00001d12][0015daa9][0015daad] 55         push ebp      ; begin D
[00001d13][0015daa9][0015daad] 8bec       mov ebp,esp
[00001d15][0015daa5][0014da79] 51         push ecx
[00001d16][0015daa5][0014da79] 8b4508     mov eax,[ebp+08]
[00001d19][0015daa1][00001d12] 50         push eax      ; push D
[00001d1a][0015daa1][00001d12] 8b4d08     mov ecx,[ebp+08]
[00001d1d][0015da9d][00001d12] 51         push ecx      ; push D
[00001d1e][0015da99][00001d23] e81ff8ffff call 00001542 ; call H(D,D)
H: Recursive Simulation Detected Simulation Stopped (return 0 to caller)
[00001d23][0011307d][00103051] 83c408     add esp,+08   ; returned to D
[00001d26][0011307d][00000000] 8945fc     mov [ebp-04],eax
[00001d29][0011307d][00000000] 837dfc00   cmp dword [ebp-04],+00
[00001d2d][0011307d][00000000] 7402       jz 00001d31
[00001d31][0011307d][00000000] 8b45fc     mov eax,[ebp-04]
[00001d34][00113081][00113085] 8be5       mov esp,ebp
[00001d36][00113085][00001541] 5d         pop ebp
[00001d37][00113089][00001d12] c3         ret           ; exit D
H1: End Simulation   Input Terminated Normally (return 1 to caller)
[00001d54][00102fe9][00000000] 83c408     add esp,+08
[00001d57][00102fe5][00000001] 50         push eax      ; H1 return value
[00001d58][00102fe1][00000763] 6863070000 push 00000763 ; string address
[00001d5d][00102fe1][00000763] e820eaffff call 00000782 ; call Output
Input_Halts = 1
[00001d62][00102fe9][00000000] 83c408     add esp,+08
[00001d65][00102fe9][00000000] 33c0       xor eax,eax
[00001d67][00102fed][00000018] 5d         pop ebp
[00001d68][00102ff1][00000000] c3         ret           ; exit main()
Number of Instructions Executed(470247) == 7019 Pages
On 3/1/2024 9:05 AM, Richard Damon wrote:
On 3/1/24 12:54 AM, olcott wrote:
On 2/29/2024 10:14 PM, Richard Damon wrote:
On 2/29/24 10:27 PM, olcott wrote:
On 2/29/2024 8:16 PM, Richard Damon wrote:
On 2/29/24 8:20 PM, olcott wrote:
On 2/29/2024 5:32 PM, Richard Damon wrote:
On 2/29/24 12:02 PM, olcott wrote:
On 2/29/2024 10:00 AM, immibis wrote:
On 29/02/24 16:49, olcott wrote:
On 2/29/2024 4:38 AM, immibis wrote:
On 29/02/24 01:03, olcott wrote:
H.q0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* H.qy // H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ halts
H.q0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* H.qn // H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ does not halt
Because H is required to always halt we can know that
Ĥ.Hq0 applied to ⟨Ĥ⟩ ⟨Ĥ⟩ transitions to Ĥ.Hqy or Ĥ.Hqn
thus H merely needs to report on that.
// Ĥ.q0 ⟨Ĥ⟩ copies its input then transitions to Ĥ.Hq0
// Ĥ.Hq0 is the first state of The Linz hypothetical halt decider
// H transitions to Ĥ.Hqy for halts and Ĥ.Hqn for does not halt
// ∞ means an infinite loop has been appended to the Ĥ.Hqy state
//
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt
When Ĥ is applied to ⟨Ĥ⟩ it contradicts whatever value that Ĥ.H
returns, making Ĥ self-contradictory.
was there a purpose to posting this nonsense again? You
might be automatically spam-filtered if you keep posting the
same post so many times.
All of the rebuttals have been incorrect.
Then why don't you explain how each one is incorrect?
I did and you ignored them.
Nope, you just made more incorrect claims.
The scope of my current work has changed. It is not that the
halting problem can be solved, it is that the halting problem
proofs were always wrong about the undecidability of the
halting problem.
So, you are admitting that you are confused.
When someone examines things much more deeply than anyone
else ever has, and starts from complete scratch utterly
ignoring every prior assumption, one gets a progressively
deeper view than anyone else ever has had.
If you admit that we can't "solve" the Halting Problem, meaning
making an H that gets the right answer to all input, then BY
DEFINITION, that means the Halting Problem is uncomputable, which
means it is undecidable.
No what I am saying is that H always could correctly determine
the halt status of the incorrectly presumed impossible input.
That term MEANS, that there does not exist a Turing Machine that
can correctly compute that result for all inputs, which is EXACTLY
what you are conceding.
One fundamental change that we can make to my prior presentations
is that we can now say that Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ gets the wrong answer
because it is not reporting on the behavior of the direct execution
of Ĥ ⟨Ĥ⟩.
Right, and if H is a computation, which it must be if it is a
Turing Machine or Equivalent, then NO COPY of H can get the right
answer.
The outer H can see that the inner Ĥ.H has already aborted
its simulation or otherwise already has transitioned to
either Ĥ.Hqy or Ĥ.Hqn. The inner Ĥ.H cannot see this so
H sees things that Ĥ.H cannot see.
I could never have understood this until I made the halting
problem 100% concrete. This was the only possible way for
me to see gaps in the reasoning that could not possibly
be otherwise uncovered.
The correct common assumption that two identical machines
operating on the same input will necessarily derive the
same result does not apply to
H ⟨Ĥ⟩ ⟨Ĥ⟩ versus Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ or H1(D,D) versus H(D,D)
H ⟨Ĥ⟩ ⟨Ĥ⟩ and Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ must give the same answer, as must H1 if
it is actually a copy of H.
The verifiably correct execution trace of H1(D,D) includes
as a part of it the verifiably correct execution trace
of H(D,D).
Nope. Your "trace" just shows that H1 and H are not the same
computation, and imply that H never was one in the first place, so
not a Turing Machine Equivalent.
If you study it very carefully you will see that the x86
execution trace does emulate the x86 instructions of
D correctly.
Nope, because it doesn't simulate the CALL H correctly.
That you don't know the x86 language well enough to verify that
the call is simulated correctly provides zero basis for that claim.
How is it simulated "CORRECTLY"
The results of the call need to be the execution of the code that the
call goes to.
(a) For HH(DD,DD) it does do this yet does not show the
251 pages of text
(b) For H(D,D) it does not do this because H correctly
determines that this call results in nested simulation
before it even simulates this call.
I don't show the steps of H and H1 because that generates 7,019
pages of text. There is a flag to turn their display on.
But then you don't show the actual results of the call.
I need not show the results of this call because for H(D,D) and
HH(DD,DD) it can be analytically determined that a correct halt
status criterion has been met by simply looking at the execution
trace of the simulated D and DD.
Your "Meta-analysis" of what the simulated code sees as its
simulation, not what actually happens.
Remember, the H that D calls is part of D, and thus must be simulated.
HH(DD,DD) does simulate HH, which only generates 251 pages of text.
H(D,D) can tell that D is calling itself and that proves
non-terminating behavior.
Only if H isn't actually a computation, which has been shown.
I think that that HH(DD,DD) version may be more easily shown to be
computable than the H(D,D) version. Someone in this forum said that
a Turing Machine equivalent such as a RASP machine can somehow
compute its own machine address.
In any case HH(DD,DD) is simply a UTM sharing a portion of its own
Turing machine tape with the machine that it is simulating. This
provides HH complete access to the internal state of DD.
Your execution model is just incorrect and your decider just isn't a
computation, and your input D isn't a computation, so you can't make
the required Turing Machines out of them.
Because I have been a software engineer for thirty years
and learned the x86 language back when it was new this
may be much easier for me than for you.
So, in 30 years, you have never used a "CALL" instruction, or never
knew what it did?
That is a ridiculous statement.
But either that is true, or you are just a liar, or both to claim that
your trace of H's simulation of D(D) is "correct".
One cannot actually provide a correct rebuttal to a sequence
of steps that correspond to the specified x86 machine code.
One can foolishly disagree with this as if they were foolishly
disagreeing with first grade arithmetic.
The trace show
I don't show the steps of this call to avoid burying
the few lines of the trace of D mixed in with hundreds
or thousands of pages of text after the call is made.
There is a flag to turn this on. I could upload the 7,019
pages of the execution trace of H1(D,D). I just did that;
it is 32 MB.
You avoid showing the trace, because it proves you are lying.
By lying you mean persistently mistaken belief rather than intentional falsehood, thus proving that you know that you are lying about me lying.
The logic used ADMITS that it isn't looking at the actual input,
because it says that "IF" H doesn't abort then D is non-halting, but H
DOES abort, so H was looking at the wrong D.
None of this matters now. H/D are equivalent to Linz Ĥ and
H1 is equivalent to Linz H.
As long as some computable criteria exists for Ĥ.H to transition
to Ĥ.Hqy or Ĥ.Hqn, then H has its basis to correctly decide ⟨Ĥ⟩ ⟨Ĥ⟩.
Disagreeing with these verifiably correct execution traces
is analogous to disagreeing with first grade arithmetic.
Nope. You making the false claims just shows you are an ignorant
pathological liar.
When one person asserts first grade arithmetic and the other
disagrees the one that disagrees is necessarily incorrect.
But if the assertion of first grade arithmetic is that 1 + 2 = 12,
they are wrong.
Likewise a correct simulator can be verified to be correct
when you examine the execution trace of H1(D,D) where D
calls H(D,D), and see that both H1 and H do simulate
D correctly.
Nope, the "Correct Simulation" of a program is simulator invariant.
I have no idea what you mean by this.
The ultimate measure of correct simulation is that a simulator
correctly simulates the actual x86 steps that its input specifies
in the order that they are specified.
Any other criterion seems like some kind of double talk.
Thus if H and H1 show different results, one of them is wrong.
Not when it is empirically verified that H1(D,D) and H(D,D)
are correctly simulating the machine language of D in the
order that D specifies its steps to H1 and H.
_D()
[00001d12] 55 push ebp
[00001d13] 8bec mov ebp,esp
[00001d15] 51 push ecx
[00001d16] 8b4508 mov eax,[ebp+08]
[00001d19] 50 push eax
[00001d1a] 8b4d08 mov ecx,[ebp+08]
[00001d1d] 51 push ecx
[00001d1e] e81ff8ffff call 00001542
[00001d23] 83c408 add esp,+08
[00001d26] 8945fc mov [ebp-04],eax
[00001d29] 837dfc00 cmp dword [ebp-04],+00
[00001d2d] 7402 jz 00001d31
[00001d2f] ebfe jmp 00001d2f
[00001d31] 8b45fc mov eax,[ebp-04]
[00001d34] 8be5 mov esp,ebp
[00001d36] 5d pop ebp
[00001d37] c3 ret
Size in bytes:(0038) [00001d37]
_main()
[00001d42] 55 push ebp
[00001d43] 8bec mov ebp,esp
[00001d45] 68121d0000 push 00001d12
[00001d4a] 68121d0000 push 00001d12
[00001d4f] e8eef6ffff call 00001442
[00001d54] 83c408 add esp,+08
[00001d57] 50 push eax
[00001d58] 6863070000 push 00000763
[00001d5d] e820eaffff call 00000782
[00001d62] 83c408 add esp,+08
[00001d65] 33c0 xor eax,eax
[00001d67] 5d pop ebp
[00001d68] c3 ret
Size in bytes:(0039) [00001d68]
machine stack stack machine assembly
address address data code language
======== ======== ======== ========= =============
[00001d42][00102fe9][00000000] 55 push ebp
[00001d43][00102fe9][00000000] 8bec mov ebp,esp
[00001d45][00102fe5][00001d12] 68121d0000 push 00001d12
[00001d4a][00102fe1][00001d12] 68121d0000 push 00001d12
[00001d4f][00102fdd][00001d54] e8eef6ffff call 00001442
H1: Begin Simulation Execution Trace Stored at:113095
Address_of_H1:1442
[00001d12][00113081][00113085] 55 push ebp
[00001d13][00113081][00113085] 8bec mov ebp,esp
[00001d15][0011307d][00103051] 51 push ecx
[00001d16][0011307d][00103051] 8b4508 mov eax,[ebp+08]
[00001d19][00113079][00001d12] 50 push eax
[00001d1a][00113079][00001d12] 8b4d08 mov ecx,[ebp+08]
[00001d1d][00113075][00001d12] 51 push ecx
[00001d1e][00113071][00001d23] e81ff8ffff call 00001542
H: Begin Simulation Execution Trace Stored at:15dabd
Address_of_H:1542
[00001d12][0015daa9][0015daad] 55 push ebp
[00001d13][0015daa9][0015daad] 8bec mov ebp,esp
[00001d15][0015daa5][0014da79] 51 push ecx
[00001d16][0015daa5][0014da79] 8b4508 mov eax,[ebp+08]
[00001d19][0015daa1][00001d12] 50 push eax
[00001d1a][0015daa1][00001d12] 8b4d08 mov ecx,[ebp+08]
[00001d1d][0015da9d][00001d12] 51 push ecx
[00001d1e][0015da99][00001d23] e81ff8ffff call 00001542
H: Recursive Simulation Detected Simulation Stopped
[00001d23][0011307d][00103051] 83c408 add esp,+08
[00001d26][0011307d][00000000] 8945fc mov [ebp-04],eax
[00001d29][0011307d][00000000] 837dfc00 cmp dword [ebp-04],+00
[00001d2d][0011307d][00000000] 7402 jz 00001d31
[00001d31][0011307d][00000000] 8b45fc mov eax,[ebp-04]
[00001d34][00113081][00113085] 8be5 mov esp,ebp
[00001d36][00113085][00001541] 5d pop ebp
[00001d37][00113089][00001d12] c3 ret
H1: End Simulation Input Terminated Normally
[00001d54][00102fe9][00000000] 83c408 add esp,+08
[00001d57][00102fe5][00000001] 50 push eax
[00001d58][00102fe1][00000763] 6863070000 push 00000763
[00001d5d][00102fe1][00000763] e820eaffff call 00000782
Input_Halts = 1
[00001d62][00102fe9][00000000] 83c408 add esp,+08
[00001d65][00102fe9][00000000] 33c0 xor eax,eax
[00001d67][00102fed][00000018] 5d pop ebp
[00001d68][00102ff1][00000000] c3 ret
Number of Instructions Executed(470247) == 7019 Pages
I don't show the 7,019 pages of H1 and H
Do you understand this?
The x86 execution trace is this exact same thing in terms
of objectively verifiable truth yet much more difficult
to understand.
Right, and it shows that H didn't correctly simulate the input
(because H doesn't actually simulate call instructions)
What I showed above proves that H1 and H simulate D correctly.
After all, your x86UTM shows that D(D) will Halt when run, and thus
when correctly simulated.
H1(D,D) is the equivalent of Linz H and H(D,D) is the
equivalent of Linz Ĥ. H1(D,D) does show that D(D) halts.
H(D,D) gets its output contradicted so cannot correctly
report on the behavior of D(D).
Except that the D calls H instead of H1, and they act differently so
not identical computations, so D was built wrong.
Yet you could find no mistake in the execution traces of D
simulated by H and D simulated by H1 thus proving that you
are using dogma instead of reasoning.
*Here is an annotated version to make rebuttal easier*
(The machine language already posted remains the same)
int D(int (*x)())
{
  int Halt_Status = H(x, x);
  if (Halt_Status)
    HERE: goto HERE;
  return Halt_Status;
}

int main()
{
  Output("Input_Halts = ", H1(D,D));
}
machine stack stack machine assembly
address address data code language
======== ======== ======== ========= =============
[00001d42][00102fe9][00000000] 55         push ebp      ; begin main()
[00001d43][00102fe9][00000000] 8bec       mov ebp,esp
[00001d45][00102fe5][00001d12] 68121d0000 push 00001d12 ; push D
[00001d4a][00102fe1][00001d12] 68121d0000 push 00001d12 ; push D
[00001d4f][00102fdd][00001d54] e8eef6ffff call 00001442 ; call H1(D,D)
H1: Begin Simulation   Execution Trace Stored at:113095
Address_of_H1:1442
[00001d12][00113081][00113085] 55         push ebp      ; begin D
[00001d13][00113081][00113085] 8bec       mov ebp,esp
[00001d15][0011307d][00103051] 51         push ecx
[00001d16][0011307d][00103051] 8b4508     mov eax,[ebp+08]
[00001d19][00113079][00001d12] 50         push eax      ; push D
[00001d1a][00113079][00001d12] 8b4d08     mov ecx,[ebp+08]
[00001d1d][00113075][00001d12] 51         push ecx      ; push D
[00001d1e][00113071][00001d23] e81ff8ffff call 00001542 ; call H(D,D)
H: Begin Simulation   Execution Trace Stored at:15dabd
Address_of_H:1542
[00001d12][0015daa9][0015daad] 55         push ebp      ; begin D
[00001d13][0015daa9][0015daad] 8bec       mov ebp,esp
[00001d15][0015daa5][0014da79] 51         push ecx
[00001d16][0015daa5][0014da79] 8b4508     mov eax,[ebp+08]
[00001d19][0015daa1][00001d12] 50         push eax      ; push D
[00001d1a][0015daa1][00001d12] 8b4d08     mov ecx,[ebp+08]
[00001d1d][0015da9d][00001d12] 51         push ecx      ; push D
[00001d1e][0015da99][00001d23] e81ff8ffff call 00001542 ; call H(D,D)
H: Recursive Simulation Detected Simulation Stopped (return 0 to caller)
[00001d23][0011307d][00103051] 83c408     add esp,+08   ; returned to D
[00001d26][0011307d][00000000] 8945fc     mov [ebp-04],eax
[00001d29][0011307d][00000000] 837dfc00   cmp dword [ebp-04],+00
[00001d2d][0011307d][00000000] 7402       jz 00001d31
[00001d31][0011307d][00000000] 8b45fc     mov eax,[ebp-04]
[00001d34][00113081][00113085] 8be5       mov esp,ebp
[00001d36][00113085][00001541] 5d         pop ebp
[00001d37][00113089][00001d12] c3         ret           ; exit D
H1: End Simulation   Input Terminated Normally (return 1 to caller)
[00001d54][00102fe9][00000000] 83c408     add esp,+08
[00001d57][00102fe5][00000001] 50         push eax      ; H1 return value
[00001d58][00102fe1][00000763] 6863070000 push 00000763 ; string address
[00001d5d][00102fe1][00000763] e820eaffff call 00000782 ; call Output
Input_Halts = 1
[00001d62][00102fe9][00000000] 83c408     add esp,+08
[00001d65][00102fe9][00000000] 33c0       xor eax,eax
[00001d67][00102fed][00000018] 5d         pop ebp
[00001d68][00102ff1][00000000] c3         ret           ; exit main()
Number of Instructions Executed(470247) == 7019 Pages
On 3/1/2024 2:51 PM, Richard Damon wrote:
On 3/1/24 1:59 PM, olcott wrote:
On 3/1/2024 9:05 AM, Richard Damon wrote:
On 3/1/24 12:54 AM, olcott wrote:
On 2/29/2024 10:14 PM, Richard Damon wrote:
On 2/29/24 10:27 PM, olcott wrote:
On 2/29/2024 8:16 PM, Richard Damon wrote:
On 2/29/24 8:20 PM, olcott wrote:
On 2/29/2024 5:32 PM, Richard Damon wrote:
On 2/29/24 12:02 PM, olcott wrote:
On 2/29/2024 10:00 AM, immibis wrote:
On 29/02/24 16:49, olcott wrote:
On 2/29/2024 4:38 AM, immibis wrote:
On 29/02/24 01:03, olcott wrote:
H.q0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* H.qy // H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ halts
H.q0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* H.qn // H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ does not halt
Because H is required to always halt we can know that
Ĥ.Hq0 applied to ⟨Ĥ⟩ ⟨Ĥ⟩ transitions to Ĥ.Hqy or Ĥ.Hqn
thus H merely needs to report on that.
// Ĥ.q0 ⟨Ĥ⟩ copies its input then transitions to Ĥ.Hq0
// Ĥ.Hq0 is the first state of The Linz hypothetical halt decider
// H transitions to Ĥ.Hqy for halts and Ĥ.Hqn for does not halt
// ∞ means an infinite loop has been appended to the Ĥ.Hqy state
//
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt
When Ĥ is applied to ⟨Ĥ⟩ it contradicts whatever value that Ĥ.H
returns, making Ĥ self-contradictory.
was there a purpose to posting this nonsense again? You
might be automatically spam-filtered if you keep posting
the same post so many times.
All of the rebuttals have been incorrect.
Then why don't you explain how each one is incorrect?
I did and you ignored them.
Nope, you just made more incorrect claims.
The scope of my current work has changed. It is not that the
halting problem can be solved, it is that the halting problem
proofs were always wrong about the undecidability of the
halting problem.
So, you are admitting that you are confused.
When someone examines things much more deeply than anyone
else ever has, and starts from complete scratch utterly
ignoring every prior assumption, one gets a progressively
deeper view than anyone else ever has had.
If you admit that we can't "solve" the Halting Problem, meaning
making an H that gets the right answer to all input, then BY
DEFINITION, that means the Halting Problem is uncomputable,
which means it is undecidable.
No what I am saying is that H always could correctly determine
the halt status of the incorrectly presumed impossible input.
That term MEANS, that there does not exist a Turing Machine that
can correctly compute that result for all inputs, which is EXACTLY
what you are conceding.
One fundamental change that we can make to my prior presentations
is that we can now say that Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ gets the wrong answer
because it is not reporting on the behavior of the direct execution
of Ĥ ⟨Ĥ⟩.
Right, and if H is a computation, which it must be if it is a
Turing Machine or Equivalent, then NO COPY of H can get the
right answer.
The outer H can see that the inner Ĥ.H has already aborted
its simulation or otherwise already has transitioned to
either Ĥ.Hqy or Ĥ.Hqn. The inner Ĥ.H cannot see this so
H sees things that Ĥ.H cannot see.
I could never have understood this until I made the halting
problem 100% concrete. This was the only possible way for
me to see gaps in the reasoning that could not possibly
be otherwise uncovered.
The correct common assumption that two identical machines
operating on the same input will necessarily derive the
same result does not apply to
H ⟨Ĥ⟩ ⟨Ĥ⟩ versus Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ or H1(D,D) versus H(D,D)
H ⟨Ĥ⟩ ⟨Ĥ⟩ and Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ must give the same answer, as must H1
if it is actually a copy of H.
The verifiably correct execution trace of H1(D,D) that
includes as a part of it the verifiably correct execution
trace of H(D,D).
Nope. Your "trace" just shows that H1 and H are not the same
computation, and imply that H never was one in the first place,
so not a Turing Machine Equivalent.
If you study it very carefully you will see that the x86
execution trace does emulate the x86 instructions of
D correctly.
Nope, because it doesn't simulate the CALL H correctly.
That you don't know the x86 language well enough to verify that
the call is simulated correctly provides zero basis for that claim.
How is it simulated "CORRECTLY"
The results of the call need to be the execution of the code that
the call goes to.
(a) For HH(DD,DD) it does do this yet does not show the
251 pages of text
But from what I remember of HH(DD,DD) it wasn't an actual computation
that did it, but used hidden channels between layers of simulation.
Mike just verified my original design of H(DD,DD) in that
a UTM can pass a portion of its own tape down to its simulated
machines so that they can pass their execution trace data
back up to this UTM.
Thus, not valid.
(b) For H(D,D) it does not do this because H correctly
determines that this call results in nested simulation
before it even simulates this call.
But the answer isn't correct as D(D) will halt even though H(D,D) says
it won't.
Now that I have a better alternative I can say that you are correct
H does get the wrong answer in the same way and for the same reason
that Ĥ.H gets the wrong answer.
The H/D combination is isomorphic to the single Ĥ machine and according
to Turing machine conventions that is the only way that the H/D
combination can actually be implemented.
H1 is equivalent to Linz H and does get the correct answer.
Thus H is WRONG.
Yes H is wrong and this is the same thing as saying that Ĥ.H
is wrong yet not the same as saying that Linz H is wrong.
and you are caught in your LIE.
I don't show the steps of H and H1 because that generates 7,019
pages of text. There is a flag to turn their display on.
But then you don't show the actual results of the call.
I need not show the results of this call because for H(D,D) and
HH(DD,DD) it can be analytically determined that a correct halt
status criterion has been met by simply looking at the execution
trace of the simulated D and DD.
Only by your broken logic that says a Halting computation can be
correctly decided to be non-halting.
Because I have a better solution I changed my view on this.
Olcott H was always wrong. Olcott H1 was always right.
Olcott H/D is equivalent to Linz Ĥ and Olcott H1 is equivalent
to Linz H.
Your "Meta-analysis" of what the simulated code sees as its
simulation, not what actually happens.
Remember, the H that D calls is part of D, and thus must be
simulated.
HH(DD,DD) does simulate HH, which only generates 251 pages of text.
H(D,D) can tell that D is calling itself and that proves
non-terminating behavior.
Only if H isn't actually a computation, which has been shown.
I think that that HH(DD,DD) version may be more easily shown to be
computable than the H(D,D) version. Someone in this forum said that
a Turing Machine equivalent such as a RASP machine can somehow
compute its own machine address.
A RASP Machine is a Turing Equivalent system, in that all Turing
machines can be converted to a RASP machine, and all RASP Machine
COMPUTATIONS can be converted to a Turing Machine.
The point is that RASP machines don't have a separate "input" and thus
you end up needing to be careful how you set up the program in them to
actually be the required computation.
This is mostly moot except for a backup plan:
Can a RASP machine determine its own machine address?
In any case HH(DD,DD) is simply a UTM sharing a portion of its own
Turing machine tape with the machine that it is simulating. This
provides HH complete access to the internal state of DD.
The OUTER HH seeing the state of the DD it is simulating isn't a
problem. The issue is that DD, and the copy of HH that it is using
can't peek into the state of HH, or even know that it is there.
Only the outermost HH needs to see the trace that the inner ones
are producing never the other direction.
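The one-directional flow described here (inner simulations write trace data, only the outermost simulator reads it) can be sketched in C. The `Trace` type and the repeat test below are assumptions invented for this sketch, standing in for HH's shared tape region and its halt-status criterion; this is not the actual x86utm code.

```c
#include <stddef.h>

#define TRACE_CAP 64

/* One trace buffer owned by the outermost simulator; every nesting
   level appends to it, only the outermost level inspects it. */
typedef struct {
    int events[TRACE_CAP];
    size_t len;
} Trace;

/* Inner simulators only ever call this: information flows upward. */
void trace_push(Trace *t, int event)
{
    if (t->len < TRACE_CAP)
        t->events[t->len++] = event;
}

/* Only the outermost simulator reads the buffer, e.g. to spot the
   same event being reported twice in a row at different depths. */
int outer_sees_repeat(const Trace *t)
{
    return t->len >= 2 &&
           t->events[t->len - 1] == t->events[t->len - 2];
}
```

Nothing in this sketch lets an inner simulator read the buffer, which is the property under discussion: the simulated machines never see the state of their simulators.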
If DD (or the HH that it uses) reacts to the HH outside, then the HH
isn't doing a "correct" simulation.
The only reaction is when it stops being simulated then it stops running.
Your execution model is just incorrect and your decider just isn't a
computation, and your input D isn't a computation, so you can't make
the required Turing Machines out of them.
Because I have been a software engineer for thirty years
and learned the x86 language back when it was new, this
may be much easier for me than for you.
So, in 30 years, you have never used a "CALL" instruction, or
never knew what it did?
That is a ridiculous statement.
But either that is true, or you are just a liar, or both to claim
that your trace of H's simulation of D(D) is "correct".
One cannot actually provide a correct rebuttal to a sequence
of steps that correspond to the specified x86 machine code.
One can foolishly disagree with this as if they were foolishly
disagreeing with first grade arithmetic.
Except that we KNOW, that if H is actually a computation, then a call
to H(D,D) WILL always return the same value, and that is the value
that H(D,D) returns when directly called.
Thus, if H(D,D) returns 0, then so must the call to H(D,D) inside D,
and any other result is an error.
Only HH(DD,DD) can simulate itself simulating DD.
HH simulates DD(DD)
that calls HH(DD,DD)
that simulates DD(DD)
that calls HH(DD,DD)
without ever returning.
At this point the outermost HH aborts DD
which aborts the whole simulation chain.
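The abort chain described above can be modeled with a short C sketch. All names here (`HH_model`, `DD_model`, `MAX_DEPTH`) are inventions for illustration, not the actual x86utm code: a single abort by the outermost simulator unwinds every nested level at once.

```c
#include <setjmp.h>

static jmp_buf outermost;   /* where the outermost simulator resumes */
static int depth = 0;
#define MAX_DEPTH 3         /* hypothetical stand-in for the abort criterion */

void DD_model(void);

/* Returns 1 if the simulated input ran to completion on its own,
   0 if the outermost simulator had to abort to prevent its own
   non-termination. */
int HH_model(void (*input)(void))
{
    if (depth++ == 0) {
        if (setjmp(outermost) != 0) {
            depth = 0;              /* the abort unwound the whole chain */
            return 0;
        }
    } else if (depth > MAX_DEPTH) {
        longjmp(outermost, 1);      /* outermost abort stops every level */
    }
    input();                        /* "simulate" the input by running it */
    depth--;
    return 1;
}

/* DD calls its own decider on itself, mirroring D calling H(D,D). */
void DD_model(void) { HH_model(DD_model); }
```

Calling `HH_model(DD_model)` returns 0: the DD→HH→DD chain is cut at the outermost level. Note that the global `depth` counter is exactly the kind of shared hidden state that the objection in this thread says disqualifies the decider from being a pure computation.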
<snip>
The logic used ADMITS that it isn't looking at the actual input,
because it says that "IF" H doesn't abort then D is non-halting, but
H DOES abort, so H was looking at the wrong D.
None of this matters now. H/D are equivalent to Linz Ĥ and
HI is equivalent to Linz H.
But then H/D needs to use H1,
That is the same as saying that Ĥ must call H and is not
allowed to have its own copy Ĥ.H. You already acknowledged
that this is not the way that Turing machines work.
<snip>
I have no idea what you mean by this.
The ultimate measure of correct simulation is that a simulator
correctly simulates the actual x86 steps that its input specifies
in the order that they are specified.
Right, and the actual steps the program takes when it is run don't
depend on the simulator that will try to simulate it.
H can see that it must abort its simulation of D or itself will never
halt. H1 can see that it need not abort its simulation of D.
The only difference is that D references the machine address of
H and does not reference the machine address of H1.
Thus "Correct Simulation" of an input that is a computation doesn't
depend on the simulator simulating it.
When H can see that D is calling itself this is a different
execution trace than when H1 does not see that D is calling
it. H must act on this and H1 can simply wait and see.
Any other criterion seems like some kind of double talk.
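The claimed H/H1 asymmetry can be reduced to a toy C model. Every name here is invented for illustration (this is not the actual x86utm code): a "program" is represented only by the one fact that matters to the argument, namely which decider it calls, and each decider compares that address against its own.

```c
/* A "program" reduced to the address of the decider it calls. */
typedef struct Program Program;
typedef int Decider(const Program *);
struct Program { Decider *called_decider; };

/* Model criterion: if the input calls this very decider, report
   non-halting (0); otherwise report halting (1). */
static int decide(const Program *p, Decider *self)
{
    return p->called_decider == self ? 0 : 1;
}

int H(const Program *p)  { return decide(p, H);  }
int H1(const Program *p) { return decide(p, H1); }
```

With `Program D = { H };`, `H(&D)` reports 0 while `H1(&D)` reports 1. This captures both sides of the dispute: olcott's point that H and H1 see different execution traces of the same D, and Damon's point that two machines giving different answers on the same input are, by definition, not the same computation.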
Yes, sounds about right for you. You can parrot the correct answer and
not understand it, and not see how an equivalent statement matches.
The ONLY correct simulations for a "Call H" instruction are tracing
the instructions that the call goes to, or abridging it and continuing
past the call with the value the call generates (if you know it; if
you don't and can't correctly figure it out, you can't abridge it).
There really is no need to bury one page of execution interspersed
over 7,019 pages. When we see what H sees (based on knowing its own
machine address) or what HH sees (based on recognizing the ordinary
infinite recursion behavior pattern) then we know that H and HH are
required to abort the simulation of their input to prevent their own non-termination.
Thus if H and H1 show different results, one of them is wrong.
Not when it is empirically verified that H1(D,D) and H(D,D)
are correctly simulating the machine language of D in the
order that D specifies its steps to H1 and H.
HOW CAN THAT BE?
*Find a mistake in the execution trace*
*Find a mistake in the execution trace*
*Find a mistake in the execution trace*
*Find a mistake in the execution trace*
The simulation is of the same input, so the correct simulation MUST
be the same.
You are just asserting that HALTING == NON-HALTING
And thus admitting to being a LIAR.
Which instruction in the direct execution of D changed by it being
simulated by H versus H1?
Remember, H and H1 are DEFINED programs, with DEFINED behavior.
_D()
[00001d12] 55 push ebp
[00001d13] 8bec mov ebp,esp
[00001d15] 51 push ecx
[00001d16] 8b4508 mov eax,[ebp+08]
[00001d19] 50 push eax
[00001d1a] 8b4d08 mov ecx,[ebp+08]
[00001d1d] 51 push ecx
[00001d1e] e81ff8ffff call 00001542
[00001d23] 83c408 add esp,+08
[00001d26] 8945fc mov [ebp-04],eax
[00001d29] 837dfc00 cmp dword [ebp-04],+00
[00001d2d] 7402 jz 00001d31
[00001d2f] ebfe jmp 00001d2f
[00001d31] 8b45fc mov eax,[ebp-04]
[00001d34] 8be5 mov esp,ebp
[00001d36] 5d pop ebp
[00001d37] c3 ret
Size in bytes:(0038) [00001d37]
_main()
[00001d42] 55 push ebp
[00001d43] 8bec mov ebp,esp
[00001d45] 68121d0000 push 00001d12
[00001d4a] 68121d0000 push 00001d12
[00001d4f] e8eef6ffff call 00001442
[00001d54] 83c408 add esp,+08
[00001d57] 50 push eax
[00001d58] 6863070000 push 00000763
[00001d5d] e820eaffff call 00000782
[00001d62] 83c408 add esp,+08
[00001d65] 33c0 xor eax,eax
[00001d67] 5d pop ebp
[00001d68] c3 ret
Size in bytes:(0039) [00001d68]
machine stack stack machine assembly
address address data code language
======== ======== ======== ========= =============
[00001d42][00102fe9][00000000] 55 push ebp
[00001d43][00102fe9][00000000] 8bec mov ebp,esp
[00001d45][00102fe5][00001d12] 68121d0000 push 00001d12
[00001d4a][00102fe1][00001d12] 68121d0000 push 00001d12
[00001d4f][00102fdd][00001d54] e8eef6ffff call 00001442
H1: Begin Simulation Execution Trace Stored at:113095
Address_of_H1:1442
[00001d12][00113081][00113085] 55 push ebp
[00001d13][00113081][00113085] 8bec mov ebp,esp
[00001d15][0011307d][00103051] 51 push ecx
[00001d16][0011307d][00103051] 8b4508 mov eax,[ebp+08]
[00001d19][00113079][00001d12] 50 push eax
[00001d1a][00113079][00001d12] 8b4d08 mov ecx,[ebp+08]
[00001d1d][00113075][00001d12] 51 push ecx
[00001d1e][00113071][00001d23] e81ff8ffff call 00001542
H: Begin Simulation Execution Trace Stored at:15dabd
Address_of_H:1542
[00001d12][0015daa9][0015daad] 55 push ebp
[00001d13][0015daa9][0015daad] 8bec mov ebp,esp
[00001d15][0015daa5][0014da79] 51 push ecx
[00001d16][0015daa5][0014da79] 8b4508 mov eax,[ebp+08]
[00001d19][0015daa1][00001d12] 50 push eax
[00001d1a][0015daa1][00001d12] 8b4d08 mov ecx,[ebp+08]
[00001d1d][0015da9d][00001d12] 51 push ecx
[00001d1e][0015da99][00001d23] e81ff8ffff call 00001542
H: Recursive Simulation Detected Simulation Stopped
Which isn't a correct answer.
It is a correct answer to the question:
Do you have to abort your simulation to prevent your
own non-termination?
This is not the halting question so H gets the wrong answer to that.
Since H here returned 0 to the call to H(D,D) at the top level, the
correct simulation of the call H instruction above is to continue with
the answer of 0.
H just lied to itself, and you lied to the world making your claim
H answered the most important question correctly:
Do you have to abort your simulation to prevent your
own non-termination?
This is not the halting question so H gets the wrong answer to that.
You believe your own lies, so you have made yourself STUPID.
[00001d23][0011307d][00103051] 83c408 add esp,+08
[00001d26][0011307d][00000000] 8945fc mov [ebp-04],eax
[00001d29][0011307d][00000000] 837dfc00 cmp dword [ebp-04],+00
[00001d2d][0011307d][00000000] 7402 jz 00001d31
[00001d31][0011307d][00000000] 8b45fc mov eax,[ebp-04]
[00001d34][00113081][00113085] 8be5 mov esp,ebp
[00001d36][00113085][00001541] 5d pop ebp
[00001d37][00113089][00001d12] c3 ret
H1: End Simulation Input Terminated Normally
[00001d54][00102fe9][00000000] 83c408 add esp,+08
[00001d57][00102fe5][00000001] 50 push eax
[00001d58][00102fe1][00000763] 6863070000 push 00000763
[00001d5d][00102fe1][00000763] e820eaffff call 00000782
Input_Halts = 1
[00001d62][00102fe9][00000000] 83c408 add esp,+08
[00001d65][00102fe9][00000000] 33c0 xor eax,eax
[00001d67][00102fed][00000018] 5d pop ebp
[00001d68][00102ff1][00000000] c3 ret
Number of Instructions Executed(470247) == 7019 Pages
I don't show the 7,019 pages of H1 and H
Do you understand this?
The x86 execution trace is this exact same thing in terms
of objectively verifiable truth yet much more difficult
to understand.
Right, and it shows that H didn't correctly simulate the input
(because H doesn't actually simulate call instructions)
What I showed above proves that H1 and H simulate D correctly.
After all, your x86UTM shows that D(D) will Halt when run, and
thus when correctly simulated.
H1(D,D) is the equivalent of Linz H and H(D,D) is the
equivalent of Linz Ĥ. H1(D,D) does show that D(D) halts.
H(D,D) gets its output contradicted so cannot correctly
report on the behavior of D(D).
Except that D calls H instead of H1, and they act differently so they
are not identical computations, so D was built wrong.
Yet you could find no mistake in the execution traces of D
simulated by H and D simulated by H1 thus proving that you
are using dogma instead of reasoning.
But I DID.
I corrected you on that:
H answered the most important question correctly:
Do you have to abort your simulation to prevent your
own non-termination?
This is not the halting question so H gets the wrong answer to that.
On 3/1/2024 8:44 PM, Richard Damon wrote:
On 3/1/24 7:57 PM, olcott wrote:
On 3/1/2024 2:51 PM, Richard Damon wrote:
On 3/1/24 1:59 PM, olcott wrote:
On 3/1/2024 9:05 AM, Richard Damon wrote:
On 3/1/24 12:54 AM, olcott wrote:
On 2/29/2024 10:14 PM, Richard Damon wrote:
How is it simulated "CORRECTLY"?
On 2/29/24 10:27 PM, olcott wrote:
On 2/29/2024 8:16 PM, Richard Damon wrote:
On 2/29/24 8:20 PM, olcott wrote:
On 2/29/2024 5:32 PM, Richard Damon wrote:
On 2/29/24 12:02 PM, olcott wrote:
On 2/29/2024 10:00 AM, immibis wrote:
On 29/02/24 16:49, olcott wrote:
I did and you ignored them.
On 2/29/2024 4:38 AM, immibis wrote:
On 29/02/24 01:03, olcott wrote:
H.q0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* H.qy // H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ halts
H.q0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* H.qn // H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ does not halt
Because H is required to always halt we can know that
Ĥ.Hq0 applied to ⟨Ĥ⟩ ⟨Ĥ⟩ transitions to Ĥ.Hqy or Ĥ.Hqn
thus H merely needs to report on that.
// Ĥ.q0 ⟨Ĥ⟩ copies its input then transitions to Ĥ.Hq0
// Ĥ.Hq0 is the first state of The Linz hypothetical halt decider
// H transitions to Ĥ.Hqy for halts and Ĥ.Hqn for does not halt
// ∞ means an infinite loop has been appended to the Ĥ.Hqy state
//
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt
When Ĥ is applied to ⟨Ĥ⟩ it contradicts whatever value that Ĥ.H
returns making Ĥ self-contradictory.
was there a purpose to posting this nonsense again? You might be
automatically spam-filtered if you keep posting the same post so many
times.
All of the rebuttals have been incorrect.
Then why don't you explain how each one is incorrect?
Nope, you just made more incorrect claims.
The scope of my current work has changed. It is not that the
halting problem can be solved, it is that the halting problem
proofs were always wrong about the undecidability of the
halting problem.
So, you are admitting that you are confused.
When someone examines things much more deeply than anyone
else ever has and they start from complete scratch utterly
ignoring every prior assumption one gets a progressively
deeper view than anyone else ever has had.
If you admit that we can't "solve" the Halting Problem, meaning making
an H that gets the right answer to all input, then BY DEFINITION, that
means the Halting Problem is uncomputable, which means it is
undecidable.
No what I am saying is that H always could correctly determine >>>>>>>>> the halt status of the incorrectly presumed impossible input. >>>>>>>>>
That term MEANS, that there does not exist a Turing Machine that can
correctly compute that result for all inputs, which is EXACTLY what
you are conceding.
One fundamental change that we can make to my prior presentations
is that we can now say that Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ gets the wrong answer
because it is not reporting on the behavior of the direct execution
of Ĥ ⟨Ĥ⟩.
Right, and if H is a computation, which it must be if it is a Turing
Machine or Equivalent, then NO COPY of H can get the right answer.
The outer H can see that the inner Ĥ.H has already aborted
its simulation or otherwise already has transitioned to
either Ĥ.Hqy or Ĥ.Hqn. The inner Ĥ.H cannot see this so
H sees things that Ĥ.H cannot see.
I could never have understood this until I made the halting
problem 100% concrete. This was the only possible way for
me to see gaps in the reasoning that could not possibly
be otherwise uncovered.
The correct common assumption that two identical machines
operating on the same input will necessarily derive the
same result does not apply to
H ⟨Ĥ⟩ ⟨Ĥ⟩ versus Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ or H1(D,D) versus H(D,D)
H ⟨Ĥ⟩ ⟨Ĥ⟩ and Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ must give the same answer, as must H1 if it
is actually a copy of H.
The verifiably correct execution trace of H1(D,D) includes
as a part of it the verifiably correct execution
trace of H(D,D).
Nope. Your "trace" just shows that H1 and H are not the same
computation, and implies that H never was one in the first place, so not
a Turing Machine Equivalent.
If you study it very carefully you will see that the x86
execution trace does emulate the x86 instructions of
D correctly.
Nope, because it doesn't simulate the CALL H correctly.
That you don't know the x86 language well enough to verify that
the call is simulated correctly provides zero basis for that claim.
The results of the call need to be the execution of the code that the
call goes to.
(a) For HH(DD,DD) it does do this yet does not show the
251 pages of text
But from what I remember of HH(DD,DD) it wasn't an actual computation
that did it, but used hidden channels between layers of simulation.
Mike just verified my original design of H(DD,DD) in that
a UTM can pass a portion of its own tape down to its simulated
machines so that they can pass their execution trace data
back up to this UTM.
Nope, proving your utter stupidity again.
It can USE its tape for its own information (and in fact, since it is
the only memory, it likely needs to). This could be some trace history
of what THIS SIMULATOR has done.
No need for that. It needs to see the execution trace of its simulated input's simulated input recursively on down.
Or it only needs to know its own RASP machine address.
It will use another part of the tape to store the description of the
Turing Machine it is simulating, and another part to store the current
contents of the simulated machines tape.
The simulated machine can not detect that it is being simulated,
otherwise the simulation is not "correct" as the unsimulated machine
has nothing to talk to.
Termination analysis never seems to need this.
On 3/1/2024 8:44 PM, Richard Damon wrote:
On 3/1/24 7:57 PM, olcott wrote:
This is mostly moot except for a backup plan:
Can a RASP machine determine its own machine address?
Might depend on the actual instruction set. Of course, any
"sub-program" that does this is likely no longer a Computation.
RASP machines do not have the property that all sub-programs built on
them are computations.
I thought that this was necessarily entailed by Turing Equivalence.
two computers P and Q are called equivalent if P can simulate Q and Q
can simulate P. https://en.wikipedia.org/wiki/Turing_completeness
This is the strongest position of equivalence that I found.
In any case HH(DD,DD) is simply a UTM sharing a portion of its own
Turing machine tape with the machine that it is simulating. This
provides HH complete access to the internal state of DD.
The OUTER HH seeing the state of the DD it is simulating isn't a
problem. The issue is that DD, and the copy of HH that it is using
can't peek into the state of HH, or even know that it is there.
Only the outermost HH needs to see the trace that the inner ones
are producing never the other direction.
Right, but DD uses its HH which looks at the next level in.
If the outer HH waits for something, then the next level in will wait
for the same thing; this might cause the whole thing to never halt, or
the outer one will run out of its time before the next one in gets there.
The way that it has always worked is that H or HH would simulate D or DD until the outermost H or HH saw that its halt status criterion has been
met.
Only the outermost one knows its own execution trace of D or DD. This
causes it to meet its halt status criteria before any of the inner simulations.
H or HH either needs to know its own machine address or somehow have
access to all of the execution traces of D or DD by every nested
simulator. H has its own machine address HH has the complete list
of the simulations of DD.
On 3/2/2024 7:52 AM, Richard Damon wrote:
On 3/2/24 12:56 AM, olcott wrote:
<big snip>
Every C function that in any way simulates another C
function must use:
u32 DebugStep(Registers* master_state,
Registers* slave_state,
Decoded_Line_Of_Code* decoded) { return 0; }
Thus when the outermost decider aborts the simulation
of its input everything else that this virtual machine
invoked at every recursive depth is no longer pumped by
DebugStep().
It is an empirically verified fact that when a simulator
stops simulating its own input, every machine that
this machine invoked, including recursive simulations,
no longer has a process that is pumping each simulated
step. The inner machines of nested simulations no longer
have any machine simulating them.
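The pumping claim above can be sketched with a minimal C model (the names and the fixed three levels are assumptions made up for illustration, not the DebugStep implementation): each nesting depth only advances when the single outermost loop grants it a step, so stopping that loop freezes every depth at once.

```c
#define LEVELS 3

/* Step counters, one per nesting depth of simulation. */
static int steps[LEVELS];

/* The outermost pump: every simulated instruction at every depth
   executes only because this loop runs. */
void pump_all(int budget)
{
    int d;
    while (budget-- > 0)
        for (d = 0; d < LEVELS; d++)
            steps[d]++;
    /* Leaving this loop is the "abort": no depth ever advances again. */
}

int steps_at(int d) { return steps[d]; }
```

Whether freezing the inner levels says anything about the behavior of the machines they describe is the point contested in the reply that follows.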
In other words, you are admitting that your system mixes up the
address space of the programs and doesn't actually create a computation.
Note, a simulator aborting a simulation may stop the progress of the
simulation, but not of the actual behavior of the program it is
simulating.
THAT ALWAYS continues (mathematically) until it reaches a final state.
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt
When Ĥ.H sees that itself would never stop running unless
it aborts its simulation of ⟨Ĥ⟩ ⟨Ĥ⟩ the directly executed
version of Ĥ.H sees this same thing. They both transition
to Ĥ.Hqn correctly preventing their own non-termination
and incorrectly deciding halting for ⟨Ĥ⟩ ⟨Ĥ⟩.
On 3/2/2024 7:52 AM, Richard Damon wrote:
On 3/2/24 12:02 AM, olcott wrote:
<snip>
It can USE its tape for its own information (and in fact, since it
is the only memory, it likely needs to). This could be some trace
history of what THIS SIMULATOR has done.
No need for that. It needs to see the execution trace of its
simulated input's simulated input recursively on down.
Or it only needs to know its own RASP machine address.
Which makes it not a computation.
The second one may make it not a computation the first
one is still a computation.
The upward information flow from the simulated to the
simulator provides HH its criterion measure to know that
it must abort its simulation of DD to prevent its own
non-termination.
I reviewed this again and HH only needs a single upward
flow from a single simulated simulator.
On 3/1/2024 12:41 PM, Mike Terry wrote:
On 01/03/2024 17:55, olcott wrote:
... The original H was renamed to HH.
Because a UTM actually can share a portion of its own
tape with the machine it is simulating HH may actually
be the preferred version.
Obviously a simulator has access to the internal state
(tape contents etc.) of the simulated machine. No
problem there.
What isn't allowed is the simulated machine altering its
own behaviour by accessing data outside of its own state.
(I.e. accessing data from its parent simulators state.)
It will use another part of the tape to store the description of the
Turing Machine it is simulating, and another part to store the
current contents of the simulated machines tape.
The simulated machine can not detect that it is being simulated,
otherwise the simulation is not "correct" as the unsimulated machine
has nothing to talk to.
Termination analysis never seems to need this.
It seems the problem is you don't make them out of computations, so
you never even started on the problem.
Again proving that you are nothing more than a pathological lying idiot.
Thus, not valid.
(b) For H(D,D) it does not do this because H correctly
determines that this call results in nested simulation
before it even simulates this call.
But the answer isn't correct as D(D) will halt even though H(D,D)
says it won't.
Now that I have a better alternative I can say that you are correct
H does get the wrong answer in the same way and for the same reason
that Ĥ.H gets the wrong answer.
The H/D combination is isomorphic to the single Ĥ machine and according
to Turing machine conventions that is the only way that the H/D
combination can actually be implemented.
H1 is equivalent to Linz H and does get the correct answer.
But Linz H^ is built on Linz H
No. Ĥ.H is a part of Ĥ that Ĥ contradicts.
This has no effect on the external H at all.
Except that it must be the exact same computation.
You just can't seem to understand that, likely because you are too
mentally deficient due to your gaslighting yourself into the ignorant
pathological lying idiot you made yourself.
, so if H1 is a different machine, and it needs to be if it gives a
different answer, thus the H/D combination is NOT the Linz H^, it
would need to be a H1/D pairing.
When we understand that the combination of H/D is equivalent to Linz Ĥ
and we understand that H1 is equivalent to Linz H, then the execution
trace of H1(D,D), where D calls H(D,D), shows isomorphic behavior to
H applied to ⟨Ĥ⟩ ⟨Ĥ⟩.
LIES, as explained.
Linz H^ must be built on Linz H, not something sort of like. And they
must be actual computations.
Yours are not, so you have proven yourself to be a PATHOLOGICAL LIAR.
If you are correct that I am wrong and actually do know assembly
language then you could find an error in this execution trace
if there is one.
call H doesn't follow the execution path,
call HH(DD,DD) does follow the execution path
I only created H(D,D) because I initially thought
that HH(DD,DD) was not computable.
Turing machine Ĥ.H can implement H(D,D) by finite
string comparison instead of needing to know its
own machine address.
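The finite-string alternative mentioned here can be sketched in C. The function `embedded_decider` and the description strings are hypothetical inventions for this sketch (a Linz-style machine description would live on the tape): instead of knowing its own machine address, the embedded copy carries a description of the machine it is embedded in and compares its input against it.

```c
#include <string.h>

/* Model: the embedded decider recognizes the self-referential case
   by comparing finite strings rather than machine addresses.
   Returns 0 (treated as non-halting) on a match, else 1 (halting). */
int embedded_decider(const char *input_description,
                     const char *own_machine_description)
{
    return strcmp(input_description, own_machine_description) == 0 ? 0 : 1;
}
```

The sketch shows only that the comparison itself is computable; whether it escapes the objections raised elsewhere in the thread is exactly what is in dispute.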
I appreciate your feedback.
On 3/1/24 7:57 PM, olcott wrote:
Mike just verified my original design of H(DD,DD) in that
a UTM can pass a portion of its own tape down to its simulated
machines so that they can pass their execution trace data
back up to this UTM.
The upward information flow from the simulated to the
simulator provides HH its criterion measure to know that
it must abort its simulation of DD to prevent its own
non-termination.
On 3/1/2024 5:28 AM, immibis wrote:
On 1/03/24 04:27, olcott wrote:
When someone examines things much more deeply than anyone
else ever has and they start from complete scratch utterly
ignoring every prior assumption one gets a progressively
deeper view than anyone else ever has had.
It also means you are ignoring much accumulated knowledge and known
mistakes and the answers to those mistakes. For example, assuming that
x86utm has anything to do with Turing machines.
If that was true then mistakes in my reasoning could be pointed
out by reasoning instead of dogma.
No what I am saying is that H always could correctly determine
the halt status of the incorrectly presumed impossible input.
If you change what the problem is, then maybe, but then you solved a
different problem that is not the halting problem.
I have now reverted back to the original problem in its
entirety and found that the halting problem proofs do not
show that halting is actually undecidable.
All of the proofs where the original H is directly called
by D are not the way that Turing Machines actually work.
For the Linz proof the counter-example input can fool its
own embedded Ĥ.H yet cannot fool the actual Linz H.
I could never have understood this until I made the halting
problem 100% concrete.
The Turing machine halting problem is already 100% concrete. By making
it about x86utm, which has nothing to do with Turing machines, you
made a mistake. If it was 100% concrete then every machine that can
possibly be encoded by the second ⊢* wildcard state transition would
have all of its steps actually listed.
My x86utm shows a 100% complete example that is isomorphic to
the Linz proof.
The correct common assumption that two identical machines
operating on the same input will necessarily derive the
same result does not apply to
It always applies unless you cheat. Or else please show me two Turing
machine execution traces where the same machine with the same initial
tape computes two different answers.
The execution trace of H1(D,D) such that D calls H(D,D)
already shows an isomorphic example.
Whether or not these are computable is moot at this point because
numerous alternative criteria would work equally well. All that we
really need is some computable criteria for Ĥ.H to transition to
Ĥ.Hqy or Ĥ.Hqn.
It you study it very carefully you will see that the x86
execution trace does emulate the x86 instructions of
D correctly.
x86 execution traces have nothing to do with Turing machines unless
you can prove they do.
The Church-Turing thesis already sufficiently proves that they do.
As long as some computable criteria exists for Ĥ.H to transition to
Ĥ.Hqy or Ĥ.Hqn, then H has its basis to correctly decide ⟨Ĥ⟩ ⟨Ĥ⟩.
When one person asserts first grade arithmetic and the other
disagrees the one that disagrees is necessarily incorrect.
We assert first grade computer science: the same program with the same
input always makes the same output. You disagree. You are necessarily
incorrect.
*That is correct when all things are exactly equal*
On 3/3/2024 6:15 AM, Richard Damon wrote:
On 3/2/24 9:58 PM, olcott wrote:
On 3/2/2024 3:53 PM, Richard Damon wrote:
On 3/2/24 1:00 PM, olcott wrote:
On 3/2/2024 7:52 AM, Richard Damon wrote:
On 3/2/24 12:56 AM, olcott wrote:
<big snip>
Every C function that in any way simulates another C
function must use:
u32 DebugStep(Registers* master_state,
Registers* slave_state,
Decoded_Line_Of_Code* decoded) { return 0; }
Thus when the outermost decider aborts the simulation
of its input, everything else that this virtual machine
invoked at every recursive depth is no longer pumped by
DebugStep().
It is an empirically verified fact that when a simulator
stops simulating its own input, every machine that
this machine invoked, including recursive simulations,
no longer has a process that is pumping each simulated
step: the inner ones of the nested simulations no longer
have any machine simulating them.
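The pumping mechanism described above can be sketched abstractly (a hedged toy model; `Sim` and `step` are my own names, not the x86utm API): each nested simulation level advances only when its parent steps it, so once the outermost driver stops stepping, every inner level is frozen at its current step count.

```python
class Sim:
    """A simulation level that only advances when its parent pumps it."""
    def __init__(self, inner=None):
        self.count = 0
        self.inner = inner
    def step(self):
        self.count += 1
        if self.inner:            # pump the next nesting level down
            self.inner.step()

inner = Sim()
outer = Sim(inner)
for _ in range(3):
    outer.step()                  # the outermost driver pumps the whole chain
# the outer level "aborts": no further step() calls reach the chain,
# so the inner level can never advance again on its own
assert outer.count == 3 and inner.count == 3
```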
In other words, you are admitting that your system mixes up the
address space of the programs and doesn't actually create a
computation.
Note, a simulator aborting a simulation may stop the progress of
the simulation, but not of the actual behavior of the program it
is simulating.
THAT ALWAYS continues (mathematically) until it reaches a final
state.
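The distinction being drawn here can be made concrete with a small sketch (names and the crude step budget are my own illustration): a simulator that gives up early reports nothing about the simulated program, while direct execution of that same program still runs to completion.

```python
def countdown(n):
    """A program that plainly halts for every n."""
    while n > 0:
        n -= 1
    return "halted"

def simulate(f, arg, budget):
    """Crudely model a step-limited simulation by bounding the work allowed."""
    if arg > budget:              # simulator gives up: result is inconclusive
        return None
    return f(arg)

assert simulate(countdown, 1000, budget=10) is None  # simulation aborted early
assert countdown(1000) == "halted"                   # the program itself halts
```

Aborting the simulation changes only what the simulator sees, not what the simulated program mathematically does.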
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt
When Ĥ.H sees that itself would never stop running unless
it aborts its simulation of ⟨Ĥ⟩ ⟨Ĥ⟩ the directly executed
version of Ĥ.H sees this same thing. They both transition
to Ĥ.Hqn correctly preventing their own non-termination
and incorrectly deciding halting for ⟨Ĥ⟩ ⟨Ĥ⟩.
Right, so H^ (H^) will "determine" with H^.Hq0 (H^) (H^) that its
input is non-halting and go to qn and halt, thus H, which made that
decision, is wrong.
Remember, the question is: does the computation described by the
input halt? It DOES, so the correct answer should have been
HALTING, and thus the non-halting answer was just WRONG and INCORRECT.
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt
Humans can see that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly simulated by Ĥ.H
cannot possibly terminate unless this simulation is aborted.
Humans can also see that if Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ does
abort its simulation, then Ĥ will halt.
It seems quite foolish to believe that computers
cannot possibly ever see this too.
We are not "Computations", and in particular, we are not H.
And Yes, (if we are smart) we can see that there is no answer that H
can give and be correct.
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt
We can see that there is no answer that Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ can give that corresponds to the actual behavior of Ĥ applied to ⟨Ĥ⟩.
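This impossibility can be modeled executably (a minimal sketch of the Linz diagonal in my own notation, with looping represented symbolically rather than actually entered): whatever fixed verdict a claimed decider returns about the constructed machine, that machine's behavior contradicts the verdict.

```python
def make_d(h):
    """Build the diagonal machine: do the opposite of whatever h predicts."""
    def d():
        if h(d):                  # h says "d halts" ...
            return "LOOP"         # ... then d would loop (modeled symbolically)
        return "HALT"             # h says "d does not halt", so d halts
    return d

for verdict in (True, False):
    h = lambda prog, v=verdict: v # any fixed answer h could possibly give
    d = make_d(h)
    actual = d()
    # h's verdict never matches d's actual behavior
    assert verdict != (actual == "HALT")
```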
H merely needs to correctly simulate ⟨Ĥ⟩ ⟨Ĥ⟩ to see that Ĥ
applied to ⟨Ĥ⟩ halts.
Both H and Ĥ.H use the same algorithm that correctly detects
whether or not a correct simulation of their input would cause
their own infinite execution unless aborted.
Humans can see that this criterion derives different answers
for Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ than for H applied to ⟨Ĥ⟩ ⟨Ĥ⟩.
We can also see that for every possible program that could be put in
as an (incorrect) H, H^ (H^) will have a specific behavior, just
one that H doesn't give as its answer; thus the question about what it
does is valid.
Most humans can also tell that your logic is just broken and that you
have been nothing but an ignorant pathological liar.
Your "Logic" just shows how little you understand about what you talk
about, and thus no one (with any intelligence) is apt to look into your
ideas about truth, as clearly you don't understand what truth actually
is.