On 1/26/22 9:42 PM, olcott wrote:
On 1/26/2022 8:29 PM, Richard Damon wrote:
On 1/26/22 9:18 PM, olcott wrote:
On 1/26/2022 7:25 PM, Richard Damon wrote:
On 1/26/22 8:07 PM, olcott wrote:
On 1/26/2022 6:46 PM, Richard Damon wrote:
On 1/26/22 7:09 PM, olcott wrote:
On 1/26/2022 5:54 PM, Richard Damon wrote:
On 1/26/22 9:39 AM, olcott wrote:
YOU ARE MUCH DUMBER THAN A BOX OF ROCKS BECAUSE
On 1/26/2022 6:03 AM, Richard Damon wrote:
_Infinite_Loop()
[00000d9a](01) 55 push ebp
[00000d9b](02) 8bec mov ebp,esp
[00000d9d](02) ebfe jmp 00000d9d
[00000d9f](01) 5d pop ebp
[00000da0](01) c3 ret
Size in bytes:(0007) [00000da0]
You keep coming back to the idea that only an infinite simulation of an infinite sequence of configurations can recognize an infinite sequence of configurations.
That is ridiculously stupid.
You can detect SOME (not all) infinite execution in finite time due to patterns.
There is no finite pattern in the H^ based on an H that at some point goes to H.Qn that correctly detects the infinite behavior.
THAT is the point you miss, SOME infinite patterns are only really infinite when you work them out to infinity.
Part of your problem is that the traces you look at are wrong. When H simulates H^, it needs to trace out the actual execution path of the H that is part of H^, not switch to tracing what it was tracing.
You simply lack the intellectual capacity to understand that when embedded_H simulates ⟨Ĥ⟩ applied to ⟨Ĥ⟩ this is the pattern:
Ĥ copies its input ⟨Ĥ⟩ to ⟨Ĥ⟩ then embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩...
Ĥ copies its input ⟨Ĥ⟩ to ⟨Ĥ⟩ then embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩...
Ĥ copies its input ⟨Ĥ⟩ to ⟨Ĥ⟩ then embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩...
Ĥ copies its input ⟨Ĥ⟩ to ⟨Ĥ⟩ then embedded_H simulates ⟨Ĥ⟩ ⟨Ĥ⟩...
Which only happens if H NEVER aborts its simulation and thus can't give an answer.
If H DOES abort its simulation at ANY point, then the above is NOT the accurate trace of the behavior of the input.
_Infinite_Loop()
[00000d9a](01) 55 push ebp
[00000d9b](02) 8bec mov ebp,esp
[00000d9d](02) ebfe jmp 00000d9d
[00000d9f](01) 5d pop ebp
[00000da0](01) c3 ret
Size in bytes:(0007) [00000da0]
Your exact same jackass point equally applies to this case:
Unless H simulates the infinite loop infinitely it is not an
accurate simulation.
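For what it is worth, the finitely-recognizable nature of this particular loop can be shown in a few lines of C (a minimal sketch of my own, not anyone's halt decider; a real simulating detector would test the bytes at the simulated instruction pointer rather than scan statically, since EB FE could also occur as data):

/* machine code of _Infinite_Loop() exactly as listed above */
#include <stdio.h>
#include <stdint.h>

static const uint8_t infinite_loop[] = {
    0x55,              /* push ebp                        */
    0x8B, 0xEC,        /* mov  ebp,esp                    */
    0xEB, 0xFE,        /* jmp  00000d9d (jumps to itself) */
    0x5D,              /* pop  ebp                        */
    0xC3               /* ret                             */
};

int main(void)
{
    for (size_t i = 0; i + 1 < sizeof infinite_loop; i++) {
        if (infinite_loop[i] == 0xEB && infinite_loop[i + 1] == 0xFE) {
            printf("tight loop found at offset %zu: control can never pass it\n", i);
            return 0;
        }
    }
    printf("no tight-loop pattern found\n");
    return 0;
}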
So, no rebuttal, just red herring sushi.
The key point you miss is that if H does abort its simulation,
then it needs to take into account that the machine it is
simulating will do so too.
As long as H correctly determines that its simulated input cannot
possibly reach its final state in any finite number of steps it
has conclusively proved that this input never halts according to
the Linz definition:
But it needs to prove that the UTM of its input never halts, and for H^, that means even if the H inside H^ goes to H.Qn, which means that H^ goes to H^.Qn, which of course Halts.
As soon as embedded_H (not H) determines that its simulated input
⟨Ĥ⟩ applied to ⟨Ĥ⟩ cannot possibly reach its final state in any finite number of steps it terminates this simulation immediately
stopping every element of the entire chain of nested simulations.
If you are claiming that embedded_H and H behave differently then you
have been lying that you built H^ by the instruction of Linz, as the
copy of H inside H^ is IDENTICAL (except what happens AFTER getting
to H.Qy)
Now, IF H could make that proof, then it would be correct to go to
H.Qn, but it would need to take into account that H^ halts if its
copy of H goes to H.Qn, so this is NEVER possible.
FAIL
Then embedded_H transitions to Ĥ.qn which causes the original Ĥ
applied to ⟨Ĥ⟩ to halt. Since Ĥ applied to ⟨Ĥ⟩ is not an input to
embedded_H and a decider is only accountable for computing the
mapping from its actual inputs to an accept or reject state it makes
no difference that Ĥ applied to ⟨Ĥ⟩ halts.
Thus you have admitted to LYING about working on the Halting problem, as if you were [working on it], embedded_H would be the same algorithm as H, and the requirement on H is that it IS accountable for the machine its input represents.
You are simply too freaking stupid to understand that deciders thus
halt deciders are only accountable for computing the mapping from
their actual inputs (nothing else in the whole freaking universe
besides their actual inputs) to an accept or reject state.
An actual computer scientist would know this.
It seems you don't understand the difference between capabilities and requirements.
H is only CAPABLE of deciding based on what it can do. It can only compute a mapping based on what it actually can do.
It is REQUIRED to meet its requirements, which is to decide on the
behavior of what its input would do if given to a UTM.
On 1/26/22 10:59 PM, olcott wrote:
[...]
It seems you don't understand the difference between capabilities and
requirements.
H is only CAPABLE of deciding based on what it can do. It can only compute a mapping based on what it actually can do.
It is REQUIRED to meet its requirements, which is to decide on the
behavior of what its input would do if given to a UTM.
embedded_H must only determine whether or not its simulated input can ever reach its final state in any finite number of steps.
Again, you seem to be lying about working on the Halting Problem and
Linz proof.
If you were working on the Halting Problem and Linz proof then
embedded_H would be identical to H, as required by Linz, and the correct answer for the 'behavior' of the input to embedded_H <H^> <H^> would be
the behavior of UTM(<H^>,<H^>) which if embedded_H goes to H.Qn then we
know that H^ will go to H^.Qn and Halt, and thus H/embedded_H going to
H.Qn is incorrect.
So, you are just admitting that you are lying or are too stupid to understand what you are talking about.
Which is it?
On 1/26/22 11:37 PM, olcott wrote:
[...]
I will not tolerate any digression from the point at hand until we
have mutual agreement. This is verified as completely true entirely on
the basis of the meaning of its words:
embedded_H must only determine whether or not its simulated input can
ever reach its final state in any finite number of steps.
Translation: You will ignore any disagreement with your incorrect statement because you need to get people to accept your false premise for your unsound argument to work.
The problem with your statement is that you are showing that you actually mean something different than the true meaning of the words.
H (and thus embedded_H) needs to determine whether or not its simulated input will ever reach its final state for EVERY POSSIBLE finite number of steps, i.e. as determined by a UTM.
On 1/27/2022 5:56 AM, Richard Damon wrote:
[...]
H (and thus embedded_H) needs to determine whether or not its simulated input will ever reach its final state for EVERY POSSIBLE finite number of steps, i.e. as determined by a UTM.
∃N such that the pure simulation of the input to embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩
reaches ⟨Ĥ⟩.qy or ⟨Ĥ⟩.qn in N steps.
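For contrast, the condition Richard keeps appealing to (the standard halting condition, stated here in my own words rather than quoted from the thread) quantifies over the steps of a pure UTM rather than over the steps of embedded_H:
Ĥ applied to ⟨Ĥ⟩ halts ⇔ ∃N such that UTM ⟨Ĥ⟩ ⟨Ĥ⟩ reaches a final state of Ĥ in N steps.
The disagreement is over whose run that ∃N ranges over: a UTM that never aborts, or embedded_H, which may stop simulating.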
Richard Damon <Richard@Damon-Family.org> writes:
And the 'actual behavior of its actual inputs' is DEFINED to be what
the computation the input actually does when run as an independent
machine, or what a UTM will do when simulating that input.
If that isn't the meaning you are using, then you are just lying that
you are working on the halting problem, which is what seems to be the
case. (That you are lying that is).
It is certainly true that PO is not addressing the halting problem. He
has been 100% clear that false is, in his "opinion", the correct result
for at least one halting computation. This is not in dispute (unless
he's retracted that and I missed it).
To you and me, this means that he's not working on the halting problem, but I am not sure you can say he is lying about that. For one thing, how can he be intending to deceive (a core part of lying) when he's been clear that he accepts the wrong answer as being the right one? If someone claims to be working on "the addition problem", and also claims that 2+2=5 is correct, it's hard to consider either claim to be a lie. The person is just deeply confused.
But what sort of confused can explain this nonsense? I think the answer
lies in PO's background. The "binary square root" function is not
computable as far as a mathematician is concerned because no TM can halt with, say, sqrt(0b10) on the tape. But to an engineer, the function
poses no problem because we can get as close as we like. If
0b1.01101010000 is not good enough, just add more digits.
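A concrete illustration of the "as close as we like" point (my own example, not from the thread; it just generates successive binary digits of sqrt(2) by interval refinement):

#include <stdio.h>

int main(void)
{
    double target = 2.0;            /* we want sqrt(2) in binary              */
    double approx = 0.0, bit = 1.0; /* current lower bound and next bit value */
    char digits[32];
    int n = 0;

    for (int i = 0; i < 13; i++) {  /* 1 integer bit + 12 fractional bits     */
        if ((approx + bit) * (approx + bit) <= target) {
            approx += bit;
            digits[n++] = '1';
        } else {
            digits[n++] = '0';
        }
        if (i == 0)
            digits[n++] = '.';
        bit /= 2.0;
    }
    digits[n] = '\0';
    printf("sqrt(2) ~ 0b%s\n", digits);  /* prints 0b1.011010100000 */
    return 0;
}

Each finite prefix is computable; only the demand for the complete infinite expansion on the tape is not.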
The point is I think PO does not know what a formal, mathematical
problem really is. To him, anything about code, machines or programs is about solving an engineering problem "well enough" -- with "well enough"
open to be defined by PO himself.
More disturbing to me is that he is not even talking about Turing
machines, again as evidenced by his own plain words. It is not in
dispute that he claims that two (deterministic) TMs, one an identical
copy of the other, can transition to different states despite both being presented with identical input. These are not Turing machines but Magic machines, and I can't see how any discussion can be had while the action
of the things being considered is not a simple function of the input and
the state transition graph.
This is why I stopped replying. While there are things to say about
PO's Other Halting problem (principally that even the POOH problem can't
be solved), I had nothing more to say while the "machines" being
discussed are magic.
On 1/30/22 8:13 AM, olcott wrote:
[...]
Although Turing machines might not be able to tell that two computations differ on the basis of their machine address, x86 machines can do this.
But the proof is on Turing Machines, and not all x86 'programs' are equivalent to Turing Machines, only those that meet the requirements of being a Computation.
This seems to be the core of your problem, you don't understand what a computation actually is, and want to use the WRONG definition of it being anything a modern computer does. Wrong definitions, wrong results.
On 1/30/22 10:32 AM, olcott wrote:
[...]
When a halt decider bases its halt status decision on the behavior of
the correct simulation of a finite number of N steps of its input
there is nothing about this that is not a computation.
Except that that is the WRONG definition of Halting. You can NOT accurately determine halting with only a FIXED number N of steps.
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
When the copy of H embedded at Ĥ.qx correctly recognizes this
repeating pattern:
When Ĥ is applied to ⟨Ĥ⟩
Ĥ copies its input ⟨Ĥ1⟩ to ⟨Ĥ2⟩ then embedded_H simulates ⟨Ĥ1⟩ ⟨Ĥ2⟩
Then these steps would keep repeating:
Ĥ1 copies its input ⟨Ĥ2⟩ to ⟨Ĥ3⟩ then embedded_H simulates ⟨Ĥ2⟩ ⟨Ĥ3⟩
Ĥ2 copies its input ⟨Ĥ3⟩ to ⟨Ĥ4⟩ then embedded_H simulates ⟨Ĥ3⟩ ⟨Ĥ4⟩
Ĥ3 copies its input ⟨Ĥ4⟩ to ⟨Ĥ5⟩ then embedded_H simulates ⟨Ĥ4⟩ ⟨Ĥ5⟩...
https://www.researchgate.net/publication/358009319_Halting_problem_undecidability_and_infinitely_nested_simulation_V3
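A rough, self-contained C toy of the cut-off idea being described (my own sketch under stated assumptions, not olcott's actual H: the names embedded_H, H_hat and MAX_NESTING are illustrative, direct execution stands in for step-by-step simulation, and whether such a cut-off produces a correct verdict is exactly what the thread disputes):

/* Toy only: strings stand in for TM descriptions. */
#include <stdio.h>
#include <string.h>

#define MAX_NESTING 3            /* arbitrary cut-off playing the role of "four cycles" */

static int nesting = 0;          /* depth of nested "simulations" */

static int embedded_H(const char *machine, const char *input);

/* H_hat in the Linz style: copy the input and hand both copies
   to the embedded copy of the decider */
static int H_hat(const char *machine)
{
    return embedded_H(machine, machine);
}

/* returns 1 = judged halting, 0 = judged non-halting */
static int embedded_H(const char *machine, const char *input)
{
    (void)input;                 /* the toy only inspects the machine name */
    if (++nesting > MAX_NESTING) {
        --nesting;
        return 0;                /* "repeating pattern" cut-off fires */
    }
    /* "simulate" the input; here that is just direct execution, which is
       where this toy departs from a real step-by-step UTM */
    int verdict = (strcmp(machine, "H_hat") == 0) ? H_hat(machine) : 1;
    --nesting;
    return verdict;
}

int main(void)
{
    printf("embedded_H(<H_hat>, <H_hat>) = %d\n", embedded_H("H_hat", "H_hat"));
    return 0;
}

In this toy the outermost call returns 0 and then halts, which is precisely the behavior the two posters disagree about how to classify.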
On 1/30/22 2:18 PM, olcott wrote:
[...]
This is a stupid mistake on your part.
It is dead obvious that the correct simulation of ⟨Ĥ⟩ applied to ⟨Ĥ⟩
by embedded_H shows an infinitely repeating pattern in less than
four simulation cycles.
That you deny things that are dead obvious is why I call your mistakes stupid mistakes rather than simply mistakes.
You need to use the right definition, based on the Halting Problem.
computation that halts … the Turing machine will halt whenever it
enters a final state. (Linz:1990:234)
Right, NOT "When a halt decider bases its halt status decision on the behavior of the correct simulation of a finite number of N steps of its
input there is nothing about this that is not a computation."
We need to look at the ACTUAL TURING MACHINE, even if you want to call that coming in the 'back door'.
Because all simulating halt deciders are deciders they are only accountable for computing the mapping from their input finite strings to an accept or reject state.
The fact that you need to change it just proves that you haven't got a prayer with the right definitions.
NOTHING in the actual definition mentions anything about the behavior
of the decider in determining if the computation actually halts.
In fact, if you knew the first thing of Computation Theory, you would
know that such a definition that includes that would actually be
IMPOSSIBLE, as Halting is a Property of the Computation itself, and
needs to be the same no matter what decider tries to decide on it.
The fact that you rely on things that seem 'dead obvious' to you
shows that you just don't understand how actual logic and proofs
work. You don't start with things that are 'obvious', you start with
the things DEFINED to be true, and the things that have been proven
to be true based on those definition.
Using the 'obvious' is one of the biggest sources of fallacies.
We can show that your 'claim' is not true, at least for a H that
aborts its simulation and goes to H.Qn,
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
So in other words you believe that when embedded_H aborts the simulation of its input that this aborted input transitions to ⟨Ĥ⟩.qn even though it has been aborted?
You're mincing your words again.
The DEFINITION of the operation that determines the 'Halting Behavior of
the Input', is the ACTUAL RUNNING OF THE MACHINE REPRESENTED BY THE INPUT.
That machine does not halt just because H/embedded_H aborts its
simulation of its input.
So, YES, when embedded_H aborts ITS PARTIAL simulation of its input, the ACTUAL MACHINE it represents will continue on to H^.Qn.
On 1/30/22 6:35 PM, olcott wrote:
On 1/30/2022 5:30 PM, Richard Damon wrote:
On 1/30/22 6:12 PM, olcott wrote:
On 1/30/2022 4:39 PM, Richard Damon wrote:
On 1/30/22 4:21 PM, olcott wrote:
On 1/30/2022 2:54 PM, Richard Damon wrote:
On 1/30/22 3:09 PM, olcott wrote:
Because all simulating halt deciders are deciders they are only accountable for computing the mapping from their input finite strings to an accept or reject state on the basis of whether or not their correct simulation of this input can possibly reach the final state of this simulated input in any finite number of steps.
It is like you put a guard on the front door that is supposed to report anyone coming in the front door (the actual inputs). Then someone comes in the back door (non inputs) and the guard does not report this. Since the guard is only supposed to report people coming in the front door it is incorrect to say that the guard made a mistake by not reporting people that came in the back door.
embedded_H is not supposed to report on the halt status of the computation that it is contained within: Ĥ applied to ⟨Ĥ⟩.
So, you have just admitted that you aren't working on the Halting Problem, so any claims therein are just lies.
Since the definition of the Halting Problem refers to the ACTUAL behavior of the machine the input represents, and NOT the partial simulation that some simulating halt decider might do, you are admitting that your H is NOT using the Halting Problem definition and thus your claims that your results apply to the Halting problem are just lies.
For the Halting Problem, the correct results for the inputs are based on the actual behavior of the machine, or its equivalent, the simulation of the input with a REAL UTM. Thus the 'Front Door' to the problem is based on that, so either your guards are lying or, what seems more likely, you posted them to the wrong door.
You have basically just proved that you have totally wasted the last years of your life, as you have been working on the wrong problem, because you just don't understand what the problem you wanted to solve actually was.
FAIL.
int Sum(int X, int Y) { return X + Y; }
It is true that halt deciders must report on the actual behavior of their actual inputs in the same way that Sum(2,5) must return 7.
Right, and the correct answer for if H(wM, w) should report halting is if M applied to w will reach a final state in a finite number of steps. This is identical to if UTM(wM, w) will halt. Doesn't matter what you think otherwise, that IS the definition of the actual behavior.
It is NOT something based on the partial simulation that H does.
That you cannot understand how all kinds of infinite behavior patterns can be easily recognized in a finite number of steps is not any mistake on my part:
Yes, MANY can, but not ALL.
If you need to change the definition, then you are not working on the
halting problem.
I don't have to change the definition I merely make it much more precise:
Except that the original definition IS exactly precise. There is a single WELL DEFINED answer for any instance of the question. The fact that you see some ambiguity just shows you don't really understand the field.
(1) Halting is defined as reaching a final state.
But you change the 'of what'.
(2) Halt deciders like all deciders can and must ignore everything
that is not a direct input.
And the 'direct input' of <H^> <H^> directly refers to the computation
of H^ applied to <H^> by DEFINITION.
On 1/30/22 9:05 PM, olcott wrote:
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
These statements need the conditions, that H^ goes to H^.Qy/H^.Qn iff
H goes to that corresponding state.
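Spelled out with those conditions attached (my rendering of the Linz template, not a quotation from either poster), the two lines would read:
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞  if embedded_H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ reaches its accept state
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn    if embedded_H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ reaches its reject state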
⟨Ĥ⟩ ⟨Ĥ⟩ is syntactically specified as an input to embedded_H in the
same way that (5,3) is syntactically specified as an input to Sum(5,3)
Ĥ ⟨Ĥ⟩ is NOT syntactically specified as an input to embedded_H in the same way that (1,2) is NOT syntactically specified as an input to Sum(5,3)
Right, but perhaps you don't understand that from your above statement the right answer is based on if UTM(<H^>,<H^>) Halts, which by the definition of a UTM means if H^ applied to <H^> Halts.
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
On 1/30/22 9:05 PM, olcott wrote:
[...]
(1) Halting is defined as reaching a final state.
But you change the 'of what'.
A directly executed TM halts when it reaches the final state of this
directly executed TM.
A simulated TM description halts when the simulated TM description reaches its final state.
Right, but if the simulator isn't a real UTM and stops simulating,
the 'pure simulation' continues until it either halts or runs for ever.
H.q0 Wm W ⊢* H.qy
iff UTM Wm W reaches its final state
H.q0 Wm W ⊢* H.qn
iff UTM Wm W never reaches its final state
As your scatterbrained mind keeps repeating....
Right, and that is the REAL UTM, not H playing one on TV and stopping when it thinks it has an answer.