On 2/29/2024 4:24 PM, wij wrote:
On Thu, 2024-02-29 at 16:13 -0600, olcott wrote:
On 2/29/2024 4:06 PM, wij wrote:
On Thu, 2024-02-29 at 15:59 -0600, olcott wrote:
On 2/29/2024 3:50 PM, wij wrote:
On Thu, 2024-02-29 at 15:27 -0600, olcott wrote:
On 2/29/2024 3:15 PM, wij wrote:
On Thu, 2024-02-29 at 15:07 -0600, olcott wrote:
On 2/29/2024 3:00 PM, wij wrote:
On Thu, 2024-02-29 at 14:51 -0600, olcott wrote:
On 2/29/2024 2:48 PM, wij wrote:
On Thu, 2024-02-29 at 13:46 -0600, olcott wrote:
On 2/29/2024 1:37 PM, Mikko wrote:
On 2024-02-29 15:51:56 +0000, olcott said:
H ⟨Ĥ⟩ ⟨Ĥ⟩ (in a separate memory space) merely needs to report on
A Turing machine is not in any memory space.
That no memory space is specified because Turing machines
are imaginary fictions does not entail that they have no
memory space. The actual memory space of actual Turing
machines is the human memory where these ideas are located.

The entire notion of undecidability when it depends on
epistemological antinomies is incoherent.
People that learn these things by rote never notice this.
Philosophers that examine these things looking for
incoherence find it.

...14 Every epistemological antinomy can likewise be used
for a similar undecidability proof...(Gödel 1931:43)
So, do you agree with what GUR says?
People believe GUR. Why struggle so painfully, playing the
idiot every day?
Give in, my friend.
Graphical User Robots?
The survival of the species depends on a correct
understanding of truth.
People who believe GUR are going to survive.
People who do not believe GUR are going to vanish.
What the Hell is GUR ?
Selective memory?
https://groups.google.com/g/comp.theory/c/_tbCYyMox9M/m/XgvkLGOQAwAJ
Basically, GUR says that no one, not even your god, can defy that
the HP is undecidable.
I simplify that down to this.
...14 Every epistemological antinomy can likewise be used for
a similar undecidability proof...(Gödel 1931:43)

The general notion of decision problem undecidability is fundamentally
flawed in all of those cases where a decider is required to correctly
answer a self-contradictory (thus incorrect) question.

When we account for this then epistemological antinomies are always
excluded from the domain of every decision problem, making all of
these decision problems decidable.
It seems you are trying to change what the halting problem is, again.
https://en.wikipedia.org/wiki/Halting_problem
In computability theory, the halting problem is the problem of
determining, from a description of an arbitrary computer program
and an input, whether the program will finish running, or continue
to run forever....
This wiki definition has been shown many times. But, since your
English is terrible, you often read it as something else (actually,
deliberately interpreted it differently, a so-called 'lie').
If you want to refute the Halting Problem, you must first understand
what the problem is about, right? You never hit the target that
everyone can see, but POOP.
Note: My email was delivered strangely. It swapped to sci.logic !!!
If we have the decision problem that no one can answer this question:
Is this sentence true or false: "What time is it?"
This is not the halting problem.
Someone has to point out that there is something wrong with it.
This is another problem (not the HP either).
The halting problem is one of many problems that is
only "undecidable" because the notion of decidability
incorrectly requires a correct answer to a self-contradictory
(thus incorrect) question.
What is the 'correct answer' to all HP-like problems?
The correct answer to all undecidable decision problems
that rely on self-contradictory input to determine
undecidability is to reject this input as outside of the
domain of any and all decision problems. This applies
to the Halting Problem and many others.
All incorrect questions are rejected as invalid input.
On 3/1/2024 5:19 AM, Mikko wrote:
On 2024-03-01 02:28:34 +0000, Richard Damon said:
In other words, just define that some Turing Machines aren't actually
Turing Machines, or aren't Turing Machines if they are given certain
inputs.
That is just admitting that the system isn't actually decidable, by
trying to outlaw the problems.
The issue then is, you can't tell if a thing that looks like and acts
like a Turing Machine is actually a PO-Turing Machine, until you can
confirm that it doesn't have any of these contradictory properties.
My guess is that detecting that is probably non-computable, so you
can't tell for sure if what you have is actually a PO-Turing Machine
or not.
If the restrictions on the acceptability of a Turing machine are
sufficiently strong, both the restricted halting problem and the
membership of the restricted domain are Turing solvable. For example,
if the head can only move in one direction.
I have reverted to every detail of the original halting problem
thus now accept that a halt decider must report on the behavior
of the direct execution of its input.
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt
Ĥ contradicts Ĥ.H and does not contradict H, thus H is able to
correctly decide ⟨Ĥ⟩ ⟨Ĥ⟩.
As long as some computable criteria exists for Ĥ.H to transition
to Ĥ.Hqy or Ĥ.Hqn, then H has its basis to correctly decide ⟨Ĥ⟩ ⟨Ĥ⟩.
H simply looks for whatever wrong answer that Ĥ.H returns and
reports on the halting or not halting behavior of that.
On 3/1/2024 11:19 AM, Richard Damon wrote:
And thus isn't Ĥ.H, and so you LIE that you are following "every detail".
You are just proving that you are a PATHOLOGICAL LIAR.
You (and everyone else here) know that I honestly
believe what I say, thus you lie when you call me a liar.
You have been called out on this by others before.
On 3/2/2024 4:40 AM, Mikko wrote:
On 2024-03-01 17:03:39 +0000, olcott said:
Ĥ contradicts Ĥ.H and does not contradict H, thus H is able to
correctly decide ⟨Ĥ⟩ ⟨Ĥ⟩.
Hard to do if Ĥ.H says the same as H.
Hard to ensure that Ĥ.H does not say the same as H.
Both H and Ĥ.H simulate their inputs until they see that these
inputs must be aborted to prevent their own infinite execution.
When they find that they must abort the simulation of their
inputs they transition to their NO state.
This results in Ĥ.H transitioning to Ĥ.Hqn and H transitioning
to H.qy. I have already empirically proved that two identical
machines on identical input can transition to different final
states when one of these identical machines has a pathological
relationship with its input and the other does not.
*This principle seems to be sound*
Two identical machines must derive the same result when
applied to the same input.
*Yet seems contradicted by the execution trace shown below*
Because D calls H, and D does not call H1, the inputs are not
actually identical even though they have identical machine
code bytes.
H sees D(D) call itself; this forces H to abort D.
H1 does not see D(D) call itself; this does not force H1 to abort D.
int D(int (*x)())
{
int Halt_Status = H(x, x);
if (Halt_Status)
HERE: goto HERE;
return Halt_Status;
}
int main()
{
Output("Input_Halts = ", H1(D,D));
}
 machine   stack     stack     machine    assembly
 address   address   data      code       language
 ========  ========  ========  =========  =============
[00001d42][00102fe9][00000000] 55         push ebp            ; begin main()
[00001d43][00102fe9][00000000] 8bec       mov ebp,esp
[00001d45][00102fe5][00001d12] 68121d0000 push 00001d12       ; push D
[00001d4a][00102fe1][00001d12] 68121d0000 push 00001d12       ; push D
[00001d4f][00102fdd][00001d54] e8eef6ffff call 00001442       ; call H1(D,D)

H1: Begin Simulation   Execution Trace Stored at:113095
Address_of_H1:1442
[00001d12][00113081][00113085] 55         push ebp            ; begin D
[00001d13][00113081][00113085] 8bec       mov ebp,esp
[00001d15][0011307d][00103051] 51         push ecx
[00001d16][0011307d][00103051] 8b4508     mov eax,[ebp+08]
[00001d19][00113079][00001d12] 50         push eax            ; push D
[00001d1a][00113079][00001d12] 8b4d08     mov ecx,[ebp+08]
[00001d1d][00113075][00001d12] 51         push ecx            ; push D
[00001d1e][00113071][00001d23] e81ff8ffff call 00001542       ; call H(D,D)

H: Begin Simulation   Execution Trace Stored at:15dabd
Address_of_H:1542
[00001d12][0015daa9][0015daad] 55         push ebp            ; begin D
[00001d13][0015daa9][0015daad] 8bec       mov ebp,esp
[00001d15][0015daa5][0014da79] 51         push ecx
[00001d16][0015daa5][0014da79] 8b4508     mov eax,[ebp+08]
[00001d19][0015daa1][00001d12] 50         push eax            ; push D
[00001d1a][0015daa1][00001d12] 8b4d08     mov ecx,[ebp+08]
[00001d1d][0015da9d][00001d12] 51         push ecx            ; push D
[00001d1e][0015da99][00001d23] e81ff8ffff call 00001542       ; call H(D,D)
H: Recursive Simulation Detected Simulation Stopped (return 0 to caller)

[00001d23][0011307d][00103051] 83c408     add esp,+08         ; returned to D
[00001d26][0011307d][00000000] 8945fc     mov [ebp-04],eax
[00001d29][0011307d][00000000] 837dfc00   cmp dword [ebp-04],+00
[00001d2d][0011307d][00000000] 7402       jz 00001d31
[00001d31][0011307d][00000000] 8b45fc     mov eax,[ebp-04]
[00001d34][00113081][00113085] 8be5       mov esp,ebp
[00001d36][00113085][00001541] 5d         pop ebp
[00001d37][00113089][00001d12] c3         ret                 ; exit D
H1: End Simulation   Input Terminated Normally (return 1 to caller)

[00001d54][00102fe9][00000000] 83c408     add esp,+08
[00001d57][00102fe5][00000001] 50         push eax            ; H1 return value
[00001d58][00102fe1][00000763] 6863070000 push 00000763       ; string address
[00001d5d][00102fe1][00000763] e820eaffff call 00000782       ; call Output
Input_Halts = 1
[00001d62][00102fe9][00000000] 83c408     add esp,+08
[00001d65][00102fe9][00000000] 33c0       xor eax,eax
[00001d67][00102fed][00000018] 5d         pop ebp
[00001d68][00102ff1][00000000] c3         ret                 ; exit main()
Number of Instructions Executed(470247) == 7019 Pages
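The behavior the trace above describes can be sketched as a toy model. This is my own Python stand-in for the x86 H, H1, and D, not the author's actual code: "simulation" is modeled as a direct call, and a decider aborts its whole simulation (via an exception) when it detects that it has been re-entered on the same input, which is the "Recursive Simulation Detected" event in the trace.

```python
# Toy model of the H/H1/D scenario described above (an illustration under
# stated assumptions, not the author's x86 implementation).

class Abort(Exception):
    """Raised when a decider detects recursive simulation of itself."""

def make_decider():
    active = set()                 # inputs this decider is currently simulating
    def decider(f, x):
        key = (f, x)
        if key in active:          # recursive simulation detected:
            raise Abort            # unwind back to the outermost decider call
        active.add(key)
        try:
            f(x)                   # "simulate" the input by running it
            return 1               # input terminated normally: report halts
        except Abort:
            return 0               # simulation aborted: report does not halt
        finally:
            active.discard(key)
    return decider

H  = make_decider()                # the decider that D calls
H1 = make_decider()                # an identical but separate copy

def D(x):
    if H(x, x):                    # D does the opposite of H's verdict
        while True:                # loop forever if H says "halts"
            pass
    return 0                       # halt if H says "does not halt"

print("H(D, D)  =", H(D, D))       # prints H(D, D)  = 0
print("H1(D, D) =", H1(D, D))      # prints H1(D, D) = 1
```

As in the trace, H aborts the nested copy of D and returns 0, so the directly executed D(D) halts, and the separate copy H1, which D never calls, reports 1. The toy reproduces the claimed divergence only because D is hard-wired to call H and not H1.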
As long as some computable criteria exists for Ĥ.H to transition
to Ĥ.Hqy or Ĥ.Hqn, then H has its basis to correctly decide ⟨Ĥ⟩ ⟨Ĥ⟩.
That is not very long.
On 3/2/2024 3:53 PM, Richard Damon wrote:
On 3/2/24 11:24 AM, olcott wrote:
This results in Ĥ.H transitioning to Ĥ.Hqn and H transitioning
to H.qy. I have already empirically proved that two identical
machines on identical input can transition to different final
states when one of these identical machines has a pathological
relationship with its input and the other does not.
Why did they differ?
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt
Execution trace of Ĥ applied to ⟨Ĥ⟩
(a) Ĥ.q0 The input ⟨Ĥ⟩ is copied then transitions to Ĥ.H
(b) Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ (input and copy) simulates ⟨Ĥ⟩ applied to ⟨Ĥ⟩
(c) which begins at its own simulated ⟨Ĥ.q0⟩ to repeat the process
Simulation invariant: ⟨Ĥ⟩ correctly simulated by Ĥ.H never
reaches its own simulated final state of ⟨Ĥ.qn⟩
Humans can see that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly simulated by Ĥ.H
cannot possibly terminate unless this simulation is aborted.
Humans can also see that if Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ does abort
its simulation then Ĥ will halt.
It seems quite foolish to believe that computers cannot
possibly ever see this too.
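[Editor's sketch] The construction traced above can be shown in executable form. This is only a toy Python model, not anyone's actual H: bounded direct execution stands in for running a machine, and a fixed-verdict decider stands in for an arbitrary candidate H, because the contrarian wrapper defeats whichever verdict its embedded copy of H returns. All names here (make_H, run_halts, make_H_hat) are invented for the illustration.

```python
def make_H(verdict_for_hat):
    """A candidate 'halt decider' (toy). For this demo it returns a
    fixed verdict; the argument below works for ANY choice of verdict."""
    def H(prog, arg):
        return verdict_for_hat
    return H

def run_halts(prog, arg, limit=1000):
    """Ground truth by bounded direct execution: True if the
    generator-style program prog(arg) finishes within `limit` steps."""
    g = prog(arg)
    for _ in range(limit):
        try:
            next(g)
        except StopIteration:
            return True       # the program halted
    return False              # still running after `limit` steps

def make_H_hat(H):
    """The Linz-style wrapper: halt iff H predicts non-halting."""
    def H_hat(arg):
        if H(H_hat, arg):     # H predicts H_hat(arg) halts...
            while True:       # ...so do the opposite: never halt
                yield
        # otherwise the generator body simply ends, i.e. it halts
    return H_hat

# Whichever answer H gives about (H_hat, H_hat), direct execution
# shows the opposite:
for verdict in (True, False):
    H = make_H(verdict)
    H_hat = make_H_hat(H)
    assert run_halts(H_hat, H_hat) != H(H_hat, H_hat)
```

Under these assumptions the sketch shows only the classical point debated in the thread: no single verdict from the embedded decider matches the direct execution of the wrapper built against it.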
On 3/3/2024 6:15 AM, Richard Damon wrote:
On 3/2/24 10:15 PM, olcott wrote:
On 3/2/2024 3:53 PM, Richard Damon wrote:
On 3/2/24 11:24 AM, olcott wrote:
On 3/2/2024 4:40 AM, Mikko wrote:
On 2024-03-01 17:03:39 +0000, olcott said:
On 3/1/2024 5:19 AM, Mikko wrote:
On 2024-03-01 02:28:34 +0000, Richard Damon said:
On 2/29/24 5:29 PM, olcott wrote:
The general notion of decision problem undecidability is
fundamentally flawed in all of those cases where a decider is
required to correctly answer a self-contradictory (thus
incorrect) question.
When we account for this then epistemological antinomies are
always excluded from the domain of every decision problem,
making all of these decision problems decidable.
It seems you try to change what the halting problem is again.
https://en.wikipedia.org/wiki/Halting_problem
In computability theory, the halting problem is the problem of
determining, from a description of an arbitrary computer program
and an input, whether the program will finish running, or
continue to run forever....
This wiki definition had been shown many times. But, since your
English is terrible, you often read it as something else
(actually, deliberately interpreted it differently, so-called
'lying').
If you want to refute the Halting Problem, you must first
understand what the problem is about, right? You never hit the
target that every one can see, but POOP.
Note: My email was delivered strangely. It swapped to >>>>>>>>>>>>> sci.logic !!!
If we have the decision problem that no one can answer this
question:
Is this sentence true or false: "What time is it?"
This is not the halting problem.
Someone has to point out that there is something wrong with it.
This is another problem (not the HP either).
The halting problem is one of many problems that is
only "undecidable" because the notion of decidability
incorrectly requires a correct answer to a self-contradictory
(thus incorrect) question.
What is the 'correct answer' to all HP like problems ?
The correct answer to all undecidable decision problems
that rely on self-contradictory input to determine
undecidability is to reject this input as outside of the
domain of any and all decision problems. This applies
to the Halting Problem and many others.
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt
Execution trace of Ĥ applied to ⟨Ĥ⟩
(a) Ĥ.q0 The input ⟨Ĥ⟩ is copied then transitions to Ĥ.H
(b) Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ (input and copy) simulates ⟨Ĥ⟩ applied to ⟨Ĥ⟩
(c) which begins at its own simulated ⟨Ĥ.q0⟩ to repeat the process
Simulation invariant: ⟨Ĥ⟩ correctly simulated by Ĥ.H never
reaches its own simulated final state of ⟨Ĥ.qn⟩
So?
Humans can see that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly simulated by Ĥ.H
cannot possibly terminate unless this simulation is aborted.
Humans can also see that if Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ does abort
its simulation then Ĥ will halt.
It seems quite foolish to believe that computers cannot
possibly ever see this too.
We are not "Computations", and in particular, we are not H.
And Yes, (if we are smart) we can see that there is no answer that H
can give and be correct.
That there is no Ĥ.H that can correctly decide halting for ⟨Ĥ⟩ ⟨Ĥ⟩ does not actually entail that there is no H that can do this.
The key distinction that I recently realized the significance
of was that with actual Turing Machines the hypothetical halt
decider must be embedded within its input.
*This means that the input can only contradict itself*
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt
We can see that there is no answer that Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩
can derive that corresponds to the actual behavior of Ĥ applied to ⟨Ĥ⟩.
Both H and Ĥ.H use the same algorithm that correctly detects
whether or not a correct simulation of their input would cause
their own infinite execution unless aborted.
Humans can see that this criterion derives different answers
for Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ than for H applied to ⟨Ĥ⟩ ⟨Ĥ⟩.
H merely needs to correctly simulate ⟨Ĥ⟩ ⟨Ĥ⟩ to see that Ĥ
applied to ⟨Ĥ⟩ halts.
On 3/3/2024 2:40 PM, Richard Damon wrote:
On 3/3/24 2:05 PM, olcott wrote:
On 3/3/2024 6:15 AM, Richard Damon wrote:
On 3/2/24 10:15 PM, olcott wrote:
On 3/2/2024 3:53 PM, Richard Damon wrote:
On 3/2/24 11:24 AM, olcott wrote:
On 3/2/2024 4:40 AM, Mikko wrote:
On 2024-03-01 17:03:39 +0000, olcott said:
On 3/1/2024 5:19 AM, Mikko wrote:
On 2024-03-01 02:28:34 +0000, Richard Damon said:
On 2/29/24 5:29 PM, olcott wrote:
On 2/29/2024 4:24 PM, wij wrote:
On Thu, 2024-02-29 at 16:13 -0600, olcott wrote:
On 2/29/2024 4:06 PM, wij wrote:
On Thu, 2024-02-29 at 15:59 -0600, olcott wrote: >>>>>>>>>>>>>>>> On 2/29/2024 3:50 PM, wij wrote:
On Thu, 2024-02-29 at 15:27 -0600, olcott wrote: >>>>>>>>>>>>>>>>>> On 2/29/2024 3:15 PM, wij wrote:
On Thu, 2024-02-29 at 15:07 -0600, olcott wrote: >>>>>>>>>>>>>>>>>>>> On 2/29/2024 3:00 PM, wij wrote:
On Thu, 2024-02-29 at 14:51 -0600, olcott wrote: >>>>>>>>>>>>>>>>>>>>>> On 2/29/2024 2:48 PM, wij wrote:What the Hell is GUR ?
On Thu, 2024-02-29 at 13:46 -0600, olcott wrote: >>>>>>>>>>>>>>>>>>>>>>>> On 2/29/2024 1:37 PM, Mikko wrote: >>>>>>>>>>>>>>>>>>>>>>>>> On 2024-02-29 15:51:56 +0000, olcott said: >>>>>>>>>>>>>>>>>>>>>>>>>
H ⟨Ĥ⟩ ⟨Ĥ⟩ (in a separate memory space) merely
needs to report on
A Turing machine is not in any memory space. >>>>>>>>>>>>>>>>>>>>>>>>>
That no memory space is specified because Turing >>>>>>>>>>>>>>>>>>>>>>>> machines
are imaginary fictions does not entail that they >>>>>>>>>>>>>>>>>>>>>>>> have no
memory space. The actual memory space of actual >>>>>>>>>>>>>>>>>>>>>>>> Turing
machines is the human memory where these ideas >>>>>>>>>>>>>>>>>>>>>>>> are located.
The entire notion of undecidability when it >>>>>>>>>>>>>>>>>>>>>>>> depends on
epistemological antinomies is incoherent. >>>>>>>>>>>>>>>>>>>>>>>>
People that learn these things by rote never >>>>>>>>>>>>>>>>>>>>>>>> notice this.
Philosophers that examine these things looking for >>>>>>>>>>>>>>>>>>>>>>>> incoherence find it.
...14 Every epistemological antinomy can >>>>>>>>>>>>>>>>>>>>>>>> likewise be used
for a similar undecidability proof...(Gödel >>>>>>>>>>>>>>>>>>>>>>>> 1931:43)
So, do you agree what GUR says?
People believes GUR. Why struggle so painfully, >>>>>>>>>>>>>>>>>>>>>>> playing idiot everyday ?
Give in, my friend.
Graphical User Robots?
The survival of the species depends on a correct >>>>>>>>>>>>>>>>>>>>>> understanding of truth.
People believes GUR are going to survive. >>>>>>>>>>>>>>>>>>>>> People does not believe GUR are going to vanish. >>>>>>>>>>>>>>>>>>>>
Selective memory?
https://groups.google.com/g/comp.theory/c/_tbCYyMox9M/m/XgvkLGOQAwAJ
Basically, GUR says that no one even your god can >>>>>>>>>>>>>>>>>>> defy that HP is undecidable.
I simplify that down to this.
...14 Every epistemological antinomy can likewise be >>>>>>>>>>>>>>>>>> used for
a similar undecidability proof...(Gödel 1931:43) >>>>>>>>>>>>>>>>>>
The general notion of decision problem undecidability >>>>>>>>>>>>>>>>>> is fundamentally
flawed in all of those cases where a decider is >>>>>>>>>>>>>>>>>> required to correctly
answer a self-contradictory (thus incorrect) question. >>>>>>>>>>>>>>>>>>
When we account for this then epistemological >>>>>>>>>>>>>>>>>> antinomies are always
excluded from the domain of every decision problem >>>>>>>>>>>>>>>>>> making all of
these decision problems decidable.
It seems you try to change what the halting problem again. >>>>>>>>>>>>>>>>>
https://en.wikipedia.org/wiki/Halting_problem >>>>>>>>>>>>>>>>> In computability theory, the halting problem is the >>>>>>>>>>>>>>>>> problem of determining, from a description
of
an
arbitrary computer program and an input, whether the >>>>>>>>>>>>>>>>> program will finish running, or continue
to
run
forever....
This wiki definition had been shown many times. But, >>>>>>>>>>>>>>>>> since your English is
terrible, you often read it as something else >>>>>>>>>>>>>>>>> (actually, deliberately
interpreted it differently, so called 'lie') >>>>>>>>>>>>>>>>>
If you want to refute Halting Problem, you must first >>>>>>>>>>>>>>>>> understand what the
problem is about, right? You never hit the target that >>>>>>>>>>>>>>>>> every one can see, but POOP.
Note: My email was delivered strangely. It swapped to >>>>>>>>>>>>>>> sci.logic !!!
If we have the decision problem that no one can answer >>>>>>>>>>>>>>>> this question:This is not the halting problem.
Is this sentence true or false: "What time is it?" >>>>>>>>>>>>>>>
Someone has to point out that there is something wrong >>>>>>>>>>>>>>>> with it.
This is another problem (not the HP neither)
The halting problem is one of many problems that is >>>>>>>>>>>>>> only "undecidable" because the notion of decidability >>>>>>>>>>>>>> incorrectly requires a correct answer to a self-contradictory >>>>>>>>>>>>>> (thus incorrect) question.
What is the 'correct answer' to all HP like problems ? >>>>>>>>>>>>>
The correct answer to all undecidable decision problems >>>>>>>>>>>> that rely on self-contradictory input to determine
undecidability is to reject this input as outside of the >>>>>>>>>>>> domain of any and all decision problems. This applies
to the Halting Problem and many others.
In other words, just define that some Turing Machines aren't >>>>>>>>>>> actually Turing Machines, or aren't Turing Machines if they >>>>>>>>>>> are given certain inputs.
That is just admitting that the system isn't actually
decidable, by trying to outlaw the problems.
The issue then is, you can't tell if a thing that looks like >>>>>>>>>>> and acts lie a Turing Machine is actually a PO-Turing
Machine, until you can confirm that it doesn't have any of >>>>>>>>>>> these contradictory properties.
My guess is that detecting that is probably non-computable, >>>>>>>>>>> so you can't tell for sure if what you have is actually a >>>>>>>>>>> PO-Turing Machine or not
If the restrictions on the acceptability of a Turing macine >>>>>>>>>> are sufficiently
strong both the restricted halting problem and the membership >>>>>>>>>> or the
restricted domain are Turing solvable. For example, if the >>>>>>>>>> head can only move
in one direction.
I have reverted to every detail of the original halting problem >>>>>>>>> thus now accept that a halt decider must report on the behavior >>>>>>>>> of the direct execution of its input.
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does
not halt
Ĥ contradicts Ĥ.H and does not contradict H, thus H is able to >>>>>>>>> correctly decide ⟨Ĥ⟩ ⟨Ĥ⟩.
Hard to do if Ĥ.H says the same as H.
Hard to ensure that Ĥ.H does not say the same as H.
Both H and Ĥ.H simulate their inputs until they see that these
inputs must be aborted to prevent their own infinite execution.
When they find that they must abort the simulation of their
inputs they transition to their NO state.
This results in Ĥ.H transitioning to Ĥ.Hqn and H transitioning >>>>>>> to H.qy. I have already empirically proved that two identical
machines on identical input can transition to different final
states when one of these identical machines has a pathological
relationship with its input and the other does not
Why did they differ?
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt
Execution trace of Ĥ applied to ⟨Ĥ⟩
(a) Ĥ.q0 The input ⟨Ĥ⟩ is copied then transitions to Ĥ.H
(b) Ĥ.H applied ⟨Ĥ⟩ ⟨Ĥ⟩ (input and copy) simulates ⟨Ĥ⟩ applied to ⟨Ĥ⟩
(c) which begins at its own simulated ⟨Ĥ.q0⟩ to repeat the process >>>>>
Simulation invariant: ⟨Ĥ⟩ correctly simulated by Ĥ.H never
reaches its own simulated final state of ⟨Ĥ.qn⟩
So?
Humans can see that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly simulated by Ĥ.H
cannot possibly terminate unless this simulation is aborted.
Humans can also see that Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ does abort >>>>> its simulation then Ĥ will halt.
It seems quite foolish to believe that computers cannot
possibly ever see this too.
We are not "Computations", and in particular, we are not H.
And Yes, (if we are smart) we can see that there is no answer that H
can give and be correct.
That there is no Ĥ.H that can correctly decide halting for ⟨Ĥ⟩ ⟨Ĥ⟩
does not actually entail that there is no H that can do this.
Since they are the EXACT SAME ALGORITHM, it does.
Both H and Ĥ.H transition to their NO state when a correct and
complete simulation of their input would cause their own infinite
execution and otherwise transition to their YES state.
Humans can see that this criterion derives different answers
for Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ than for H applied to ⟨Ĥ⟩ ⟨Ĥ⟩.
*This principle seems to be sound*
Two identical machines must derive the same result when
applied to the same input.
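[Editor's sketch] This principle can be illustrated with a minimal Python example (the function names are invented for the illustration): a deterministic procedure returns the same result for the same input, and two copies can diverge only when something besides the declared input, in effect a hidden input, differs between them.

```python
def decide(x):
    # A deterministic procedure: the output depends only on the input.
    return x % 2 == 0

copy_a = decide
copy_b = decide                    # an identical copy of the same algorithm
assert copy_a(42) == copy_b(42)    # same input -> same result, always

def decide_with_address(x, self_id):
    # If the procedure also consults something that differs between
    # the copies (here an identity value standing in for a machine
    # address), the two "identical" machines can answer differently,
    # precisely because they no longer receive the same inputs.
    return (x + self_id) % 2 == 0

assert decide_with_address(42, 0) != decide_with_address(42, 1)
```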
On 3/4/2024 4:12 AM, Mikko wrote:
On 2024-03-02 16:24:45 +0000, olcott said:
*This principle seems to be sound*
Two identical machines must derive the same result when
applied to the same input.
It quite self-evidently is, as it follows from the meanings of
"identical" and "same" and other words.
Of course, two physical machines are never exactly identical
so one may malfunction in a way the other doesn't.
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt
Both H and Ĥ.H transition to their NO state when a correct and
complete simulation of their input would cause their own infinite
execution and otherwise transition to their YES state.
This has different results when Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ is embedded in
a machine that copies its input than when H ⟨Ĥ⟩ ⟨Ĥ⟩ is not
embedded in such a machine. The infinite loop appended to
Ĥ.H has no effect on this.
On 3/4/2024 6:16 PM, Richard Damon wrote:
On 3/4/24 2:05 PM, olcott wrote:
On 3/4/2024 4:12 AM, Mikko wrote:
On 2024-03-02 16:24:45 +0000, olcott said:
This has different results when Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ is embedded in
a machine that copies its input than when H ⟨Ĥ⟩ ⟨Ĥ⟩ is not
embedded in such a machine. The infinite loop appended to
Ĥ.H has no effect on this.
How does it have different results?
They are (or at least are claimed to be) the EXACT same algorithm, and
thus the exact same set of deterministic instructions, processing the
exact same input.
The input to Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ can cause it to fail to halt.
The input to Ĥ ⟨Ĥ⟩ ⟨Ĥ⟩ cannot possibly cause it to fail to halt. Can you see this?
I guess you are just admitting that you are either a total idiot
thinking that the impossible is going to happen, or are just an
ignorant pathological lying idiot (or likely BOTH).
You are just proving your STUPIDITY, and that you have ZERO regard
for what is TRUE.
Not at all. Correcting the incorrect foundation of the
notion of analytic truth is my whole reason for pursuing
these things.
When you take the incorrect foundation as your basis
you cannot see its error.
Correcting the incorrect foundation of the notion of analytic
truth is my whole reason for pursuing these things.
On 3/4/2024 8:12 PM, Richard Damon wrote:
On 3/4/24 7:56 PM, olcott wrote:
On 3/4/2024 6:16 PM, Richard Damon wrote:
On 3/4/24 2:05 PM, olcott wrote:
The input to Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ can cause it to fail to halt.
The input to Ĥ ⟨Ĥ⟩ ⟨Ĥ⟩ cannot possibly cause it to fail to halt.
Can you see this?
If H waits to see what H^.H does, then H^.H will also wait to see what
its simulated (H^) (H^) does when it gets to the simulated H^.H (H^)
(H^), and NOBODY ever halts to give an answer.
Both H and Ĥ.H transition to their NO state when a correct and
complete simulation of their input would cause their own infinite
execution and otherwise transition to their YES state.
When we much more clearly understand that H and Ĥ are in
separate memory addresses of a RASP machine where every
P knows its own address then it is much easier to see
that H ⟨Ĥ⟩ ⟨Ĥ⟩ and Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ will meet their identical criteria differently.
You seem to conveniently forget this fact, which is just a form of LYING.
Not at all. Correcting the incorrect foundation of the
notion of analytic truth is my whole reason for pursuing
these things.
Then DO SO, and do not try to work inside a system you claim is incorrect.
That is why I am tentatively switching to RASP machines
where every P knows its own address.
When you take the incorrect foundation as your basis
you cannot see its error.
And trying to change the foundation while keeping what was built on it
is impossible.
As I have said, you are WELCOME to start at your new foundation and
build up; just remember, you can't use anything that was built on
the foundation you rejected. You need to start ALL OVER.
I only reject the limitations of Turing Machines compared
to RASP machines where every P knows its own address.
I don't think you understand this, because you just don't understand
how logic works. This is what has turned you into the ignorant
pathological lying idiot you have made yourself.
Or I understand that the foundations of logic have errors
that cause my views to diverge from the herd.
HINT: This means start by listing out ALL of the basic truths you are
going to accept, and the rules of logic you are going to allow, and
then see what you can actually prove from it.
For computer science I only need a RASP machine where
every P knows its own address.
When we do this then H1 is the decider and H/D is
the counter-example input.
Of course, this means you may need to study the systems you are
rejecting to understand what parts you might want to keep and what
parts you are rejecting.
If a TM can do what H1(D,D) can do then my refutation
of the halting problem does not refute Church/Turing
otherwise it does refute Church/Turing.
On 3/5/2024 5:33 AM, Richard Damon wrote:
On 3/5/24 12:06 AM, olcott wrote:
On 3/4/2024 8:12 PM, Richard Damon wrote:
On 3/4/24 7:56 PM, olcott wrote:
When we much more clearly understand that H and Ĥ are in
separate memory addresses of a RASP machine where every
P knows its own address then it is much easier to see
that H ⟨Ĥ⟩ ⟨Ĥ⟩ and Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ will meet their identical
criteria differently.
A single RASP machine doesn't have multiple memory spaces.
No machine has multiple memory spaces.
A single RASP machine is just one single program
A point of confusion: two sets of instructions: Unlike the UTM,
the RASP model has two sets of instructions – the state machine
table of instructions (the "interpreter") and the "program" in
the holes. The two sets do not have to be drawn from the same set.
https://en.wikipedia.org/wiki/Random-access_stored-program_machine
When P also implements an interpreter then it and
its slave are not at the exact same physical location.
You are just proving that you are just a stupid ignorant pathological
lying idiot.
*By saying that you are proving that you*
*are biased against an honest dialogue*
Not at all. Correcting the incorrect foundation of the
notion of analytic truth is my whole reason for pursuing
these things.
Then DO SO, and do not try to work inside a system you claim is incorrect.
That is why I am tentatively switching to RASP machines
where every P knows its own address.
Which means your "programs" are no longer necessarily Computations,
unless you have been careful to include ALL their inputs in their
definition.
Alternatively every program has an implied
input that cannot possibly be forbidden to it.
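[Editor's sketch] The disagreement can be made concrete with a toy RASP-style loader in Python (all names here are illustrative, not anyone's actual design): every program receives its own load address as an explicit first argument, so two copies of the same code loaded at different addresses receive different inputs. Whether that address counts as part of the program's input is exactly what is in dispute.

```python
programs = []                        # "memory": list index = load address

def load(program):
    """Place a program in memory and return its load address."""
    programs.append(program)
    return len(programs) - 1

def run(addr, *inputs):
    # The interpreter always passes a program its own address,
    # so every loaded program "knows its own address".
    return programs[addr](addr, *inputs)

def P(self_addr, x):
    # Behavior that depends on where this copy happens to live.
    return x + self_addr

a = load(P)
b = load(P)                          # an identical copy, different address
assert run(a, 10) != run(b, 10)      # same code, same x, different results
```

On one reading this makes machines seem more capable than pure functions of their declared input; on the other reading the address is simply an extra input, which is the objection raised above.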
I am using your excellent feedback to continuously
refine my position.
I only reject the limitations of Turing Machines compared
to RASP machines where every P knows its own address.
Because you need your P to not be the required computation, so you can
lie about it.
Alternatively every program has an implied
input that cannot possibly be forbidden to it.
Or I understand that the foundations of logic have errors
that cause my views to diverge from the herd.
So, why are you using it?
If I started from scratch I would be long since
dead of old age before making any progress.
I am only redefining tiny key elements of the
foundations that are causing the errors.
For my immediate purposes in this dialogue I only
need machines to always be able to know their own
machine address.
This can be implemented as simply as every TM is only
executed by a master UTM that simulates the Turing
Machine Description of this machine and the machine
cannot be executed in any other way.
The master UTM becomes an operating system (like x86utm)
for all of its Turing machines. If this Olcott machine
can solve problems that Turing machines cannot solve
then Church/Turing would seem to be refuted.
You have a choice: use the system as it is defined, or create a
totally new system.
For computer science I only need a RASP machine where
every P knows its own address.
When we do this then H1 is the decider and H/D is
the counter-example input.
Not if the "Decider" used the RASP structure to be a non-computation
(i.e., use a hidden input, like its address).
If every machine always has access to its own address
and this cannot be denied to any machine then Olcott
Machines would still be computations that are possibly
more powerful than Turing machines.
If a TM can do what H1(D,D) can do then my refutation
of the halting problem does not refute Church/Turing
otherwise it does refute Church/Turing.
Nope, because your "Machines" are NOT "Computations", since they use a
"hidden input".
It simply becomes construed as an input to every machine.
Make it clear that your H is actually a function of its own address,
and suddenly Church/Turing shows that right result, and your "Counter
Example" is proven to be a lie.
Or Olcott machines simply refute Church/Turing.
You don't seem to understand that LYING about what you are doing (by
giving the functions a hidden input) doesn't prove anything.
When I propose alternatives to the dogma that you
memorized I am not lying.