*Everyone agrees that this is impossible*
No computer program H can correctly predict what another computer
program D will do when D has been programmed to do the opposite of
whatever H says.
H(D) is functional notation that specifies the return value from H(D)
Correct(H(D)==false) means that H(D) is correct that D does not halt
Correct(H(D)==true) means that H(D) is correct that D does halt
For all H ∈ TM there exists input D such that
(Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
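The construction just described can be sketched in a few lines of toy code (the names `make_D`, `H_yes`, and `H_no` are assumptions for illustration, not anything from the thread): given any fixed candidate decider H, build the input D that does the opposite of whatever H predicts about D run on itself.

```python
# Toy sketch of the diagonal construction: for ANY fixed decider H,
# construct a program D that does the opposite of H's prediction.

def make_D(H):
    """Build the adversarial input D for a given candidate decider H."""
    def D(x):
        if H(D, D):          # H predicts D(D) halts...
            while True:      # ...so D loops forever
                pass
        else:                # H predicts D(D) loops...
            return "halted"  # ...so D halts immediately
    return D

# Case 1: a decider that always answers "halts" (True).
H_yes = lambda prog, arg: True
D_yes = make_D(H_yes)
# H_yes(D_yes, D_yes) is True, but D_yes(D_yes) would loop forever,
# so H_yes is wrong about its D. (We don't run it, for obvious reasons.)

# Case 2: a decider that always answers "loops" (False).
H_no = lambda prog, arg: False
D_no = make_D(H_no)
print(D_no(D_no))  # D_no halts, contradicting H_no's "loops" answer
```

Either way the fixed H gives the wrong answer on its own D, which is exactly what the quantified statement above asserts.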
*No one pays attention to what this impossibility means*
The halting problem is defined by an unsatisfiable specification and is
thus isomorphic to a question that has been defined to have no correct
answer.
What time is it (yes or no)?
has no correct answer because there is something wrong with the
question. In this case we know to blame the question and not the one answering it.
When we understand that there are some inputs to every TM H that
contradict both Boolean return values that H could return, the
question "Does your input halt?" is essentially a self-contradictory
(thus incorrect) question in these cases.
The inability to correctly answer an incorrect question places no actual limit on anyone or anything.
This insight opens up an alternative treatment of these pathological
inputs, the same way that ZFC handled Russell's Paradox.
[Subject: Does the halting problem
actually limit what computers can do?]
First, that ISN'T necessarily a true statement, unless you are stating
On 10/29/2023 12:30 PM, olcott wrote:
Every H of the infinite set of all Turing machines gets the wrong
answer on its corresponding input D, because this input D
essentially derives a self-contradictory, thus incorrect, question
for this H.
As with the question "What time is it (yes or no)?",
the blame for the lack of a correct answer goes to the question
and not to the one attempting to answer it.
On 10/29/2023 1:44 PM, olcott wrote:
Changing the subject to a different H for this same input D is
the strawman deception.
Ignoring the context of who is asked the question deceptively
changes the meaning of the question.
The halting problem proofs merely show that the problem
definition is unsatisfiable, because every H of the infinite
set of all Turing Machines has an input that makes the
question "Does your input halt?" into a self-contradictory,
thus incorrect, question for this H.
The only rebuttals to this in the last two years rely
on one form of the strawman deception or another.
*Stupid or dishonest people may say otherwise*
That every D has a halt decider has nothing to do with
the claim that every H has an undecidable input.
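The quantifier-order distinction drawn above can be made concrete with a toy sketch (all names here are assumptions for illustration): for each individual program, a correct one-off decider trivially exists, because its answer can simply be hardcoded; that is a different claim from one single decider answering correctly for every program, which is what the diagonal argument rules out.

```python
# Two fixed programs with known halting behavior.
def loops_forever(x):
    while True:
        pass

def halts_immediately(x):
    return x

# For each FIXED program D there exists a correct one-off decider:
# just hardcode the right answer for that single case.
decider_for_loops = lambda prog, arg: False   # correct about loops_forever
decider_for_halts = lambda prog, arg: True    # correct about halts_immediately

# ForAll D, Exists H: Correct(H(D))  -- satisfiable by hardcoding, as above.
# Exists H, ForAll D: Correct(H(D))  -- the halting problem's requirement;
# any single fixed H has its own adversarial input D.
print(decider_for_loops(loops_forever, None),
      decider_for_halts(halts_immediately, None))
```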
On 10/29/2023 3:58 PM, olcott wrote:
I now have two University professors who agree with this.
My words may need some technical improvement...
[problem specification] is unsatisfiable
The idea is to convey the essence of many technical
papers in a single sound bite:
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
On 10/29/2023 5:38 PM, olcott wrote:
Anonymous experts are not "evidence"
and no "expert" can contradict the
actual definitions.
The whole thing is a matter of these definitions
semantically entailing additional nuances of meaning
that no one ever noticed before.
Computer scientists almost never pay any attention
at all to the philosophical underpinnings of the
foundations of concepts such as undecidability.
All of my related work in the last twenty years
has focused on these foundational underpinnings.
In the same way that incompleteness is proven whenever
any WFF of a formal system cannot be proven or refuted
in this formal system EVEN WHEN THE WFF IS SEMANTICALLY
SELF-CONTRADICTORY, the notion of undecidability is
determined even when the decider is required to correctly
answer a self-contradictory (thus incorrect) question.
This is the epiphany of my work for the last 20 years and
two professors agree that this does apply to the halting
problem specification.
I cannot form a proof on the basis of the conventional
definitions because the issue is that one of these
definitions semantically entails more meaning than
anyone ever noticed before.
That this applies generically to the notion of undecidability
seems to be an extension of these same ideas that these
professors only applied to the halting problem specification.
The lead of these two professors and I exchanged fifty emails
where he confirmed my verbatim paraphrase of his ideas using
my own terms such as "incorrect questions".
Then you are admitting that you can't do the
work in the formal system, so any claim you
make about anything IN the system is just invalid.
That the term "undecidability" semantically entails
previously unnoticed nuances of meaning can be understood
on the basis of the reasoning of myself and these two professors.
Just like incompleteness includes self-contradictory
expressions in its measure of incompleteness, undecidability
includes problem specifications that entail self-contradictory
questions. IF YOU WEREN'T STUCK IN REBUTTAL MODE YOU MIGHT SEE THIS
Maybe in a non-formal system or setting, but in Computability Theory it
means, and EXACTLY means, that there does not exist a Turing Machine
that can compute the "function".
What "nuances" are you claiming?
Remember also, that the "Function" mentioned is nothing more than a mathematical mapping of input objects to output values, defined for all elements of the input domain.
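The mapping/computation distinction can be made concrete with a toy
program language (my own illustration, unrelated to any real decider).
For this tiny class the total halting mapping happens to be computable
by structural recursion; Turing's theorem is precisely the claim that no
such computor exists for the full class of Turing machines:

```python
# Toy programs: ("halt",) halts, ("loop",) runs forever,
# ("run", p) halts iff the sub-program p halts.
def halts(p):
    """The halting mapping restricted to this toy class: total over
    the input domain and, here only, trivially computable."""
    tag = p[0]
    if tag == "halt":
        return True
    if tag == "loop":
        return False
    if tag == "run":
        return halts(p[1])
    raise ValueError("unknown program form")

print(halts(("run", ("run", ("halt",)))))  # True
print(halts(("run", ("loop",))))           # False
```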
Nope. You still don't understand the meaning of the words.
Completeness means PRECISELY, and nothing more, that all true statements
in the system can be proven in the system.
Incompleteness, thus, means that there exists at least ONE true
statement in the system that can not be proven in that system.
For Godel's proof, that statement is "there does not exist a natural
number g that satisfies a particular Primitive Recursive Relationship"
that was derived in a meta-system of the system, but said PRR is fully
defined in that system.
What is "self-contradictory" about that statement?
Remember, all the arguments about provability don't exist in the system,
and "self-contradiction" is a property in the system being discussed.
Your problem is you don't understand the logic of the proof enough to understand what the statement actually is.
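That description of Godel's sentence can be written schematically (a
standard textbook form; R is the primitive recursive relation and n_G
the Godel number of G, with details varying by presentation):

```latex
G \;\equiv\; \lnot \exists g \in \mathbb{N}\;.\; R(g,\, n_G)
```

The sentence G is a plain arithmetic statement inside the system; only
the meta-system's reasoning connects it to provability.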
Go ahead, try to actually answer one of the questions with an actual
logical answer based on FACTS.
My guess is you are going to, again, just restate your FALSE claims and
thus prove that you don't actually have any true basis for your claims.
I DARE YOU to try to answer.
On 10/29/2023 8:44 PM, olcott wrote:
On 10/29/2023 8:19 PM, olcott wrote:
On 10/29/2023 7:57 PM, olcott wrote:
On 10/29/2023 6:43 PM, olcott wrote:
On 10/29/2023 5:38 PM, olcott wrote:
On 10/29/2023 3:58 PM, olcott wrote:
On 10/29/2023 12:30 PM, olcott wrote:
*Everyone agrees that this is impossible*
No computer program H can correctly predict what another computer >>>>>>>> program D will do when D has been programmed to do the opposite of >>>>>>>> whatever H says.
H(D) is functional notation that specifies the return value from >>>>>>>> H(D)
Correct(H(D)==false) means that H(D) is correct that D does not >>>>>>>> halt
Correct(H(D)==true) means that H(D) is correct that D does halt >>>>>>>>
For all H ∈ TM there exists input D such that
(Correct(H(D)==false) ∨ (Correct(H(D)==true))==false
*No one pays attention to what this impossibility means*
The halting problem is defined as an unsatisfiable specification >>>>>>>> thus
isomorphic to a question that has been defined to have no correct >>>>>>>> answer.
What time is it (yes or no)?
has no correct answer because there is something wrong with the >>>>>>>> question. In this case we know to blame the question and not the >>>>>>>> one
answering it.
When we understand that there are some inputs to every TM H that >>>>>>>> contradict both Boolean return values that H could return then the >>>>>>>> question: Does your input halt? is essentially a self-contradictory >>>>>>>> (thus incorrect) question in these cases.
The inability to correctly answer an incorrect question places >>>>>>>> no actual
limit on anyone or anything.
This insight opens up an alternative treatment of these
pathological
inputs the same way that ZFC handled Russell's Paradox.
The halting problem proofs merely show that the problem
definition is unsatisfiable because every H of the infinite
set of all Turing Machines has an input that makes the
question: Does your input halt? into a self-contradictory
thus incorrect question for this H.
I now have two University professors that agree with this.
My words may need some technical improvement...
[problem specification] is unsatisfiable
The idea is to convey the essence of many technical
papers in a single sound bite:
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
Anonymous experts are not "evidence"
and no "expert" can contradict the
actual definitions.
The whole thing is a matter of these definitions
semantically entailing additional nuances of meaning
that no one ever noticed before.
Computer scientists almost never pay any attention
at all to the philosophical underpinnings of the
foundations of concepts such as undecidability.
All of my related work in the last twenty years
has focused on these foundational underpinnings.
In the same way that incompleteness is proven whenever
any WFF of a formal system cannot be proven or refuted
in this formal system EVEN WHEN THE WFF IS SEMANTICALLY
SELF-CONTRADICTORY
The notion of undecidability is determined even when the
decider is required to correctly answer a self-contradictory
(thus incorrect) question.
This is the epiphany of my work for the last 20 years and
two professors agree that this does apply to the halting
problem specification.
I cannot form a proof on the basis of the conventional
definitions because the issue is that one of these
definitions semantically entails more meaning than
anyone ever noticed before.
That this applies generically to the notion of undecidability
seems to be an extension of these sames ideas that these
professors only applied to the halting problem specification.
The lead of these two professors and I exchanged fifty emails
where he confirmed my verbatim paraphrase of his ideas using
my own terms such as "incorrect questions".
Then you are admtting that you can't do the
work in the formal system, so any claim you
make about anything IN the system is just invalid.
That the "term undecidability" semantically entails
previously unnoticed nuances of meaning can be understood
on the basis of the reasoning of myself and these two professors.
Just like incompleteness includes self-contradictory
expressions in its measure of incompleteness, undecidability
includes problem specifications that entail self-contradictory
questions.

IF YOU WEREN'T STUCK IN REBUTTAL MODE YOU MIGHT SEE THIS
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
On 10/29/2023 8:44 PM, olcott wrote:
On 10/29/2023 12:30 PM, olcott wrote:

*Everyone agrees that this is impossible*
No computer program H can correctly predict what another computer
program D will do when D has been programmed to do the opposite of
whatever H says.

H(D) is functional notation that specifies the return value from H(D)
Correct(H(D)==false) means that H(D) is correct that D does not halt
Correct(H(D)==true) means that H(D) is correct that D does halt

For all H ∈ TM there exists input D such that
(Correct(H(D)==false) ∨ Correct(H(D)==true)) == false

*No one pays attention to what this impossibility means*
The halting problem is defined as an unsatisfiable specification, thus
isomorphic to a question that has been defined to have no correct
answer.

What time is it (yes or no)?
has no correct answer because there is something wrong with the
question. In this case we know to blame the question and not the one
answering it.

When we understand that there are some inputs to every TM H that
contradict both Boolean return values that H could return, then the
question "Does your input halt?" is essentially a self-contradictory
(thus incorrect) question in these cases.

The inability to correctly answer an incorrect question places no
actual limit on anyone or anything.

This insight opens up an alternative treatment of these pathological
inputs the same way that ZFC handled Russell's Paradox.

The halting problem proofs merely show that the problem
definition is unsatisfiable because every H of the infinite
set of all Turing Machines has an input that makes the
question "Does your input halt?" into a self-contradictory
thus incorrect question for this H.

I now have two University professors that agree with this.
My words may need some technical improvement...
[problem specification] is unsatisfiable

The idea is to convey the essence of many technical
papers in a single sound bite:
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
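The construction the thread keeps referring to can be sketched in Python. This is only an illustration of the thread's H/D notation, assuming a hypothetical halt decider H(program, input) -> bool; no such H is claimed to exist, and `make_D` is an invented helper name:

```python
# Sketch of the diagonal construction in the thread's H/D notation.
# H is a hypothetical halt decider: H(program, input) -> bool,
# where True means "halts" and False means "does not halt".
# make_D builds the input D that does the opposite of whatever
# H predicts about D itself.

def make_D(H):
    def D():
        if H(D, D):        # H predicts: D halts ...
            while True:    # ... so D loops forever
                pass
        else:              # H predicts: D does not halt ...
            return         # ... so D halts immediately
    return D
```

For any candidate H you supply, `make_D(H)` produces the D whose behavior refutes H's verdict on it, which is exactly the case the formula above quantifies over.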
Once you pay enough attention to see that the reasoning does
entail this, then you will know that the two professors and I are
correct.
If you only want to provide a rebuttal no matter what the actual truth
is, then you will continue to pretend that you don't see this.
When you just glance at my words to form a superficial basis
for an incorrect rebuttal you won't see this.
When we hypothesize that this <is> literally true then it
has enormous consequences:
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
We had to boil it down to its sound-bite form to
sharply focus attention on a single point, so that
rebuttals based on the strawman deception or ad
hominem are easily seen as having no basis whatsoever.
Except that you haven't shown how it CAN be true,
since there actually is no "self-reference" to
lead to the "self-contradictory" question.
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
No computer program H can correctly predict what
another computer program D will do when D has been
programmed to do the opposite of whatever H says.
The fact that D contradicts both values that every
corresponding H can possibly return proves that input
D is isomorphic to a self-contradictory question for H.
If D only contradicted one of these values, then D
would merely be a contradictory question. Since D contradicts
both of these values, that makes D self-contradictory.
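The claim that D contradicts both possible return values can be exercised concretely. This Python sketch (with invented helper names `says_halts` and `says_loops`, not from the thread) pairs the construction with two toy deciders, one per fixed answer, to show each answer refuted:

```python
# Each fixed Boolean answer a decider could give about D is
# contradicted by D's actual behavior.

def make_D(H):
    def D():
        if H(D, D):        # H says "halts" ...
            while True:    # ... so D loops forever
                pass
        return             # H says "does not halt" ... so D halts
    return D

says_halts = lambda p, i: True    # always answers "D halts"
says_loops = lambda p, i: False   # always answers "D does not halt"

D_loops = make_D(says_halts)  # would loop forever if called,
                              # contradicting says_halts (so we don't call it)
D_halts = make_D(says_loops)
D_halts()                     # returns at once, contradicting says_loops
```

The same flip happens for any decider substituted for the toy ones, which is the sense in which the question posed to H about D has no correct Boolean answer.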
On 10/29/2023 9:27 PM, olcott wrote:
On 10/29/2023 9:12 PM, olcott wrote:
On 10/29/2023 8:44 PM, olcott wrote:
On 10/29/2023 8:19 PM, olcott wrote:
On 10/29/2023 7:57 PM, olcott wrote:
On 10/29/2023 6:43 PM, olcott wrote:
On 10/29/2023 5:38 PM, olcott wrote:
On 10/29/2023 3:58 PM, olcott wrote:
On 10/29/2023 12:30 PM, olcott wrote:
*Everyone agrees that this is impossible*
No computer program H can correctly predict what another computer >>>>>>>>>> program D will do when D has been programmed to do the
opposite of
whatever H says.
H(D) is functional notation that specifies the return value >>>>>>>>>> from H(D)
Correct(H(D)==false) means that H(D) is correct that D does >>>>>>>>>> not halt
Correct(H(D)==true) means that H(D) is correct that D does halt >>>>>>>>>>
For all H ∈ TM there exists input D such that
(Correct(H(D)==false) ∨ (Correct(H(D)==true))==false
*No one pays attention to what this impossibility means*
The halting problem is defined as an unsatisfiable
specification thus
isomorphic to a question that has been defined to have no correct >>>>>>>>>> answer.
What time is it (yes or no)?
has no correct answer because there is something wrong with the >>>>>>>>>> question. In this case we know to blame the question and not >>>>>>>>>> the one
answering it.
When we understand that there are some inputs to every TM H that >>>>>>>>>> contradict both Boolean return values that H could return then >>>>>>>>>> the
question: Does your input halt? is essentially a
self-contradictory
(thus incorrect) question in these cases.
The inability to correctly answer an incorrect question places >>>>>>>>>> no actual
limit on anyone or anything.
This insight opens up an alternative treatment of these
pathological
inputs the same way that ZFC handled Russell's Paradox.
The halting problem proofs merely show that the problem
definition is unsatisfiable, because every H of the infinite
set of all Turing Machines has an input that makes the
question "Does your input halt?" into a self-contradictory
(thus incorrect) question for this H.
I now have two university professors who agree with this.
My words may need some technical improvement...
[problem specification] is unsatisfiable
The idea is to convey the essence of many technical
papers in a single sound bite:
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
Anonymous experts are not "evidence"
and no "expert" can contradict the
actual definitions.
The whole thing is a matter of these definitions
semantically entailing additional nuances of meaning
that no one ever noticed before.
Computer scientists almost never pay any attention
at all to the philosophical underpinnings of the
foundations of concepts such as undecidability.
All of my related work in the last twenty years
has focused on these foundational underpinnings.
In the same way that incompleteness is proven whenever
any WFF of a formal system can be neither proven nor refuted
in that formal system EVEN WHEN THE WFF IS SEMANTICALLY
SELF-CONTRADICTORY, the notion of undecidability is established
even when the decider is required to correctly answer a
self-contradictory (thus incorrect) question.
This is the epiphany of my work for the last 20 years and
two professors agree that this does apply to the halting
problem specification.
I cannot form a proof on the basis of the conventional
definitions because the issue is that one of these
definitions semantically entails more meaning than
anyone ever noticed before.
That this applies generically to the notion of undecidability
seems to be an extension of these same ideas that these
professors only applied to the halting problem specification.
The lead of these two professors exchanged fifty emails with me,
in which he confirmed my verbatim paraphrase of his ideas using
my own terms such as "incorrect questions".
Then you are admitting that you can't do the
work in the formal system, so any claim you
make about anything IN the system is just invalid.
That the term "undecidability" semantically entails
previously unnoticed nuances of meaning can be understood
on the basis of the reasoning of myself and these two professors.
Just like incompleteness includes self-contradictory
expressions in its measure of incompleteness, undecidability
includes problem specifications that entail self-contradictory
questions. IF YOU WEREN'T STUCK IN REBUTTAL MODE YOU MIGHT SEE THIS
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
Once you pay enough attention to see that the reasoning does
entail this then you will know that I and the two professors are
correct.
If you only want to provide a rebuttal no matter what the actual truth
is then you will continue to pretend that you don't see this.
When you just glance at my words to form a superficial basis
for an incorrect rebuttal you won't see this.
When we hypothesize that this <is> literally true then it
has enormous consequences:
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
On 10/29/2023 10:01 PM, olcott wrote:
On 10/29/2023 9:27 PM, olcott wrote:
On 10/29/2023 9:12 PM, olcott wrote:
On 10/29/2023 8:44 PM, olcott wrote:
On 10/29/2023 8:19 PM, olcott wrote:
On 10/29/2023 7:57 PM, olcott wrote:
On 10/29/2023 6:43 PM, olcott wrote:
On 10/29/2023 5:38 PM, olcott wrote:
On 10/29/2023 3:58 PM, olcott wrote:
On 10/29/2023 12:30 PM, olcott wrote:
*Everyone agrees that this is impossible*
No computer program H can correctly predict what another computer >>>>>>>>>> program D will do when D has been programmed to do the
opposite of
whatever H says.
H(D) is functional notation that specifies the return value >>>>>>>>>> from H(D)
Correct(H(D)==false) means that H(D) is correct that D does >>>>>>>>>> not halt
Correct(H(D)==true) means that H(D) is correct that D does halt >>>>>>>>>>
For all H ∈ TM there exists input D such that
(Correct(H(D)==false) ∨ (Correct(H(D)==true))==false
*No one pays attention to what this impossibility means* >>>>>>>>>> The halting problem is defined as an unsatisfiable
specification thus
isomorphic to a question that has been defined to have no correct >>>>>>>>>> answer.
What time is it (yes or no)?
has no correct answer because there is something wrong with the >>>>>>>>>> question. In this case we know to blame the question and not >>>>>>>>>> the one
answering it.
When we understand that there are some inputs to every TM H that >>>>>>>>>> contradict both Boolean return values that H could return then >>>>>>>>>> the
question: Does your input halt? is essentially a
self-contradictory
(thus incorrect) question in these cases.
The inability to correctly answer an incorrect question places >>>>>>>>>> no actual
limit on anyone or anything.
This insight opens up an alternative treatment of these >>>>>>>>>> pathological
inputs the same way that ZFC handled Russell's Paradox. >>>>>>>>>>
The halting problem proofs merely show that the problem
definition is unsatisfiable because every H of the infinite >>>>>>>>> set of all Turing Machines has an input that makes the
question: Does your input halt? into a self-contradictory >>>>>>>>> thus incorrect question for this H.
I now have two University professors that agree with this.
My words may need some technical improvement...
[problem specification] is unsatisfiable
The idea is to convey the essence of many technical
papers in a single sound bite:
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
Anonymous experts are not "evidence"
and no "expert" can contradict the
actual definitions.
The whole thing is a matter of these definitions
semantically entailing additional nuances of meaning
that no one ever noticed before.
Computer scientists almost never pay any attention
at all to the philosophical underpinnings of the
foundations of concepts such as undecidability.
All of my related work in the last twenty years
has focused on these foundational underpinnings.
In the same way that incompleteness is proven whenever
any WFF of a formal system cannot be proven or refuted
in this formal system EVEN WHEN THE WFF IS SEMANTICALLY
SELF-CONTRADICTORY
The notion of undecidability is determined even when the
decider is required to correctly answer a self-contradictory
(thus incorrect) question.
This is the epiphany of my work for the last 20 years and
two professors agree that this does apply to the halting
problem specification.
I cannot form a proof on the basis of the conventional
definitions because the issue is that one of these
definitions semantically entails more meaning than
anyone ever noticed before.
That this applies generically to the notion of undecidability
seems to be an extension of these sames ideas that these
professors only applied to the halting problem specification.
The lead of these two professors and I exchanged fifty emails
where he confirmed my verbatim paraphrase of his ideas using
my own terms such as "incorrect questions".
Then you are admtting that you can't do the
work in the formal system, so any claim you
make about anything IN the system is just invalid.
That the "term undecidability" semantically entails
previously unnoticed nuances of meaning can be understood
on the basis of the reasoning of myself and these two professors.
Just like incompleteness includes self-contradictory
expressions in its measure of incompleteness, undecidability
includes problem specifications that entail self-contradictory
questions. IF YOU WEREN'T STUCK IN REBUTTAL MODE YOU MIGHT SEE THIS >>>>
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
Once you pay enough attention to see that the reasoning does
entail this then you will know that I and the two professors are
correct.
If you only want to provide a rebuttal no matter what the actual truth
is then you will continue to pretend that you don't see this.
When you just glance at my words to form a superficial basis
for an incorrect rebuttal you won't see this.
When we hypothesize that this <is> literally true then it
has enormous consequences:
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*

*A self-contradictory question is defined as*
Any yes/no question that contradicts both yes/no answers.

For every H in the set of all Turing Machines there exists a D
that derives a self-contradictory question for this H in that:
(a) if this H says that its D will halt, D loops
(b) if this H says that its D will loop, D halts
*Thus the question: Does D halt? is contradicted by some D for each H*
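Points (a) and (b) can be checked mechanically once D's behavior is modeled as "the opposite of H's verdict". A minimal sketch; the names `d_behavior` and `correct` are illustrative stand-ins for the post's Correct(...) notation, not functions from any real system:

```python
def d_behavior(h_verdict: bool) -> bool:
    """D's actual behavior as a function of H's prediction about D:
    True = D halts, False = D loops. Per (a) and (b) above, D is
    built to do the opposite of whatever H predicts."""
    return not h_verdict

def correct(h_verdict: bool) -> bool:
    """Models Correct(H(D)==v): the verdict v matches D's actual behavior."""
    return h_verdict == d_behavior(h_verdict)

# Neither Boolean verdict is correct, i.e.
# (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
assert correct(True) is False    # (a) H says "halts" -> D loops
assert correct(False) is False   # (b) H says "loops" -> D halts
```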
--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer
On 10/29/2023 9:27 PM, olcott wrote:
On 10/29/2023 9:12 PM, olcott wrote:
On 10/29/2023 8:44 PM, olcott wrote:
On 10/29/2023 8:19 PM, olcott wrote:
On 10/29/2023 7:57 PM, olcott wrote:
On 10/29/2023 6:43 PM, olcott wrote:
On 10/29/2023 5:38 PM, olcott wrote:
On 10/29/2023 3:58 PM, olcott wrote:
On 10/29/2023 12:30 PM, olcott wrote:
*Everyone agrees that this is impossible*
No computer program H can correctly predict what another computer >>>>>>>>>> program D will do when D has been programmed to do the
opposite of
whatever H says.
H(D) is functional notation that specifies the return value >>>>>>>>>> from H(D)
Correct(H(D)==false) means that H(D) is correct that D does >>>>>>>>>> not halt
Correct(H(D)==true) means that H(D) is correct that D does halt >>>>>>>>>>
For all H ∈ TM there exists input D such that
(Correct(H(D)==false) ∨ (Correct(H(D)==true))==false
*No one pays attention to what this impossibility means*
The halting problem is defined as an unsatisfiable
specification thus
isomorphic to a question that has been defined to have no correct >>>>>>>>>> answer.
What time is it (yes or no)?
has no correct answer because there is something wrong with the >>>>>>>>>> question. In this case we know to blame the question and not >>>>>>>>>> the one
answering it.
When we understand that there are some inputs to every TM H that >>>>>>>>>> contradict both Boolean return values that H could return then >>>>>>>>>> the
question: Does your input halt? is essentially a
self-contradictory
(thus incorrect) question in these cases.
The inability to correctly answer an incorrect question places >>>>>>>>>> no actual
limit on anyone or anything.
This insight opens up an alternative treatment of these
pathological
inputs the same way that ZFC handled Russell's Paradox.
The halting problem proofs merely show that the problem
definition is unsatisfiable because every H of the infinite
set of all Turing Machines has an input that makes the
question: Does your input halt? into a self-contradictory
thus incorrect question for this H.
I now have two University professors that agree with this.
My words may need some technical improvement...
[problem specification] is unsatisfiable
The idea is to convey the essence of many technical
papers in a single sound bite:
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
Anonymous experts are not "evidence"
and no "expert" can contradict the
actual definitions.
The whole thing is a matter of these definitions
semantically entailing additional nuances of meaning
that no one ever noticed before.
Computer scientists almost never pay any attention
at all to the philosophical underpinnings of the
foundations of concepts such as undecidability.
All of my related work in the last twenty years
has focused on these foundational underpinnings.
In the same way that incompleteness is proven whenever
any WFF of a formal system cannot be proven or refuted
in this formal system EVEN WHEN THE WFF IS SEMANTICALLY
SELF-CONTRADICTORY
The notion of undecidability is determined even when the
decider is required to correctly answer a self-contradictory
(thus incorrect) question.
This is the epiphany of my work for the last 20 years and
two professors agree that this does apply to the halting
problem specification.
I cannot form a proof on the basis of the conventional
definitions because the issue is that one of these
definitions semantically entails more meaning than
anyone ever noticed before.
That this applies generically to the notion of undecidability
seems to be an extension of these sames ideas that these
professors only applied to the halting problem specification.
The lead of these two professors and I exchanged fifty emails
where he confirmed my verbatim paraphrase of his ideas using
my own terms such as "incorrect questions".
Then you are admtting that you can't do the
work in the formal system, so any claim you
make about anything IN the system is just invalid.
That the "term undecidability" semantically entails
previously unnoticed nuances of meaning can be understood
on the basis of the reasoning of myself and these two professors.
Just like incompleteness includes self-contradictory
expressions in its measure of incompleteness, undecidability
includes problem specifications that entail self-contradictory
questions. IF YOU WEREN'T STUCK IN REBUTTAL MODE YOU MIGHT SEE THIS
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
Once you pay enough attention to see that the reasoning does
entail this then you will know that I and the two professors are
correct.
If you only want to provide a rebuttal no matter what the actual truth
is then you will continue to pretend that you don't see this.
When you just glance at my words to form a superficial basis
for an incorrect rebuttal you won't see this.
When we hypothesize that this <is> literally true then it
has enormous consequences:
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
On 10/29/2023 9:27 PM, olcott wrote:
On 10/29/2023 9:12 PM, olcott wrote:
On 10/29/2023 8:44 PM, olcott wrote:
On 10/29/2023 8:19 PM, olcott wrote:
On 10/29/2023 7:57 PM, olcott wrote:
On 10/29/2023 6:43 PM, olcott wrote:
On 10/29/2023 5:38 PM, olcott wrote:
On 10/29/2023 3:58 PM, olcott wrote:
On 10/29/2023 12:30 PM, olcott wrote:
*Everyone agrees that this is impossible*
No computer program H can correctly predict what another computer >>>>>>>>>> program D will do when D has been programmed to do the
opposite of
whatever H says.
H(D) is functional notation that specifies the return value >>>>>>>>>> from H(D)
Correct(H(D)==false) means that H(D) is correct that D does >>>>>>>>>> not halt
Correct(H(D)==true) means that H(D) is correct that D does halt >>>>>>>>>>
For all H ∈ TM there exists input D such that
(Correct(H(D)==false) ∨ (Correct(H(D)==true))==false
*No one pays attention to what this impossibility means*
The halting problem is defined as an unsatisfiable
specification thus
isomorphic to a question that has been defined to have no correct >>>>>>>>>> answer.
What time is it (yes or no)?
has no correct answer because there is something wrong with the >>>>>>>>>> question. In this case we know to blame the question and not >>>>>>>>>> the one
answering it.
When we understand that there are some inputs to every TM H that >>>>>>>>>> contradict both Boolean return values that H could return then >>>>>>>>>> the
question: Does your input halt? is essentially a
self-contradictory
(thus incorrect) question in these cases.
The inability to correctly answer an incorrect question places >>>>>>>>>> no actual
limit on anyone or anything.
This insight opens up an alternative treatment of these
pathological
inputs the same way that ZFC handled Russell's Paradox.
The halting problem proofs merely show that the problem
definition is unsatisfiable because every H of the infinite
set of all Turing Machines has an input that makes the
question: Does your input halt? into a self-contradictory
thus incorrect question for this H.
I now have two University professors that agree with this.
My words may need some technical improvement...
[problem specification] is unsatisfiable
The idea is to convey the essence of many technical
papers in a single sound bite:
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
Anonymous experts are not "evidence"
and no "expert" can contradict the
actual definitions.
The whole thing is a matter of these definitions
semantically entailing additional nuances of meaning
that no one ever noticed before.
Computer scientists almost never pay any attention
at all to the philosophical underpinnings of the
foundations of concepts such as undecidability.
All of my related work in the last twenty years
has focused on these foundational underpinnings.
In the same way that incompleteness is proven whenever
any WFF of a formal system cannot be proven or refuted
in this formal system EVEN WHEN THE WFF IS SEMANTICALLY
SELF-CONTRADICTORY
The notion of undecidability is determined even when the
decider is required to correctly answer a self-contradictory
(thus incorrect) question.
This is the epiphany of my work for the last 20 years and
two professors agree that this does apply to the halting
problem specification.
I cannot form a proof on the basis of the conventional
definitions because the issue is that one of these
definitions semantically entails more meaning than
anyone ever noticed before.
That this applies generically to the notion of undecidability
seems to be an extension of these sames ideas that these
professors only applied to the halting problem specification.
The lead of these two professors and I exchanged fifty emails
where he confirmed my verbatim paraphrase of his ideas using
my own terms such as "incorrect questions".
Then you are admtting that you can't do the
work in the formal system, so any claim you
make about anything IN the system is just invalid.
That the "term undecidability" semantically entails
previously unnoticed nuances of meaning can be understood
on the basis of the reasoning of myself and these two professors.
Just like incompleteness includes self-contradictory
expressions in its measure of incompleteness, undecidability
includes problem specifications that entail self-contradictory
questions. IF YOU WEREN'T STUCK IN REBUTTAL MODE YOU MIGHT SEE THIS
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
Once you pay enough attention to see that the reasoning does
entail this then you will know that I and the two professors are
correct.
If you only want to provide a rebuttal no matter what the actual truth
is then you will continue to pretend that you don't see this.
When you just glance at my words to form a superficial basis
for an incorrect rebuttal you won't see this.
When we hypothesize that this <is> literally true then it
has enormous consequences:
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
*Everyone agrees that this is impossible*
No computer program H can correctly predict what another computer
program D will do when D has been programmed to do the opposite of
whatever H says.
H(D) is functional notation that specifies the return value from H(D) Correct(H(D)==false) means that H(D) is correct that D does not halt Correct(H(D)==true) means that H(D) is correct that D does halt
For all H ∈ TM there exists input D such that
(Correct(H(D)==false) ∨ (Correct(H(D)==true))==false
*No one pays attention to what this impossibility means*
The halting problem is defined as an unsatisfiable specification thus isomorphic to a question that has been defined to have no correct
answer.
What time is it (yes or no)?
has no correct answer because there is something wrong with the
question. In this case we know to blame the question and not the one answering it.
When we understand that there are some inputs to every TM H that
contradict both Boolean return values that H could return then the
question: Does your input halt? is essentially a self-contradictory
(thus incorrect) question in these cases.
The inability to correctly answer an incorrect question places no actual limit on anyone or anything.
This insight opens up an alternative treatment of these pathological
inputs the same way that ZFC handled Russell's Paradox.
On 10/29/2023 10:01 PM, olcott wrote:
On 10/29/2023 9:27 PM, olcott wrote:
On 10/29/2023 9:12 PM, olcott wrote:
On 10/29/2023 8:44 PM, olcott wrote:
On 10/29/2023 8:19 PM, olcott wrote:
On 10/29/2023 7:57 PM, olcott wrote:
On 10/29/2023 6:43 PM, olcott wrote:
On 10/29/2023 5:38 PM, olcott wrote:
On 10/29/2023 3:58 PM, olcott wrote:
On 10/29/2023 12:30 PM, olcott wrote:
*Everyone agrees that this is impossible*
No computer program H can correctly predict what another >>>>>>>>>>> computer
program D will do when D has been programmed to do the
opposite of
whatever H says.
H(D) is functional notation that specifies the return value >>>>>>>>>>> from H(D)
Correct(H(D)==false) means that H(D) is correct that D does >>>>>>>>>>> not halt
Correct(H(D)==true) means that H(D) is correct that D does halt >>>>>>>>>>>
For all H ∈ TM there exists input D such that
(Correct(H(D)==false) ∨ (Correct(H(D)==true))==false
*No one pays attention to what this impossibility means* >>>>>>>>>>> The halting problem is defined as an unsatisfiable
specification thus
isomorphic to a question that has been defined to have no >>>>>>>>>>> correct
answer.
What time is it (yes or no)?
has no correct answer because there is something wrong with the >>>>>>>>>>> question. In this case we know to blame the question and not >>>>>>>>>>> the one
answering it.
When we understand that there are some inputs to every TM H that >>>>>>>>>>> contradict both Boolean return values that H could return >>>>>>>>>>> then the
question: Does your input halt? is essentially a
self-contradictory
(thus incorrect) question in these cases.
The inability to correctly answer an incorrect question
places no actual
limit on anyone or anything.
This insight opens up an alternative treatment of these
pathological
inputs the same way that ZFC handled Russell's Paradox.
The halting problem proofs merely show that the problem
definition is unsatisfiable because every H of the infinite >>>>>>>>>> set of all Turing Machines has an input that makes the
question: Does your input halt? into a self-contradictory
thus incorrect question for this H.
I now have two University professors that agree with this.
My words may need some technical improvement...
[problem specification] is unsatisfiable
The idea is to convey the essence of many technical
papers in a single sound bite:
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
Anonymous experts are not "evidence"
and no "expert" can contradict the
actual definitions.
The whole thing is a matter of these definitions
semantically entailing additional nuances of meaning
that no one ever noticed before.
Computer scientists almost never pay any attention
at all to the philosophical underpinnings of the
foundations of concepts such as undecidability.
All of my related work in the last twenty years
has focused on these foundational underpinnings.
In the same way that incompleteness is proven whenever
any WFF of a formal system cannot be proven or refuted
in this formal system EVEN WHEN THE WFF IS SEMANTICALLY
SELF-CONTRADICTORY
The notion of undecidability is determined even when the
decider is required to correctly answer a self-contradictory
(thus incorrect) question.
This is the epiphany of my work for the last 20 years and
two professors agree that this does apply to the halting
problem specification.
I cannot form a proof on the basis of the conventional
definitions because the issue is that one of these
definitions semantically entails more meaning than
anyone ever noticed before.
That this applies generically to the notion of undecidability
seems to be an extension of these sames ideas that these
professors only applied to the halting problem specification.
The lead of these two professors and I exchanged fifty emails
where he confirmed my verbatim paraphrase of his ideas using
my own terms such as "incorrect questions".
Then you are admtting that you can't do the
work in the formal system, so any claim you
make about anything IN the system is just invalid.
That the "term undecidability" semantically entails
previously unnoticed nuances of meaning can be understood
on the basis of the reasoning of myself and these two professors.
Just like incompleteness includes self-contradictory
expressions in its measure of incompleteness, undecidability
includes problem specifications that entail self-contradictory
questions. IF YOU WEREN'T STUCK IN REBUTTAL MODE YOU MIGHT SEE THIS
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
Once you pay enough attention to see that the reasoning does
entail this then you will know that I and the two professors are
correct.
If you only want to provide a rebuttal no matter what the actual truth
is then you will continue to pretend that you don't see this.
When you just glance at my words to form a superficial basis
for an incorrect rebuttal you won't see this.
When we hypothesize that this <is> literally true then it
has enormous consequences:
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
*A self-contradictory question is defined as*
Any yes/no question that contradicts both yes/no answers.
For every H in the set of all Turing Machines there exists a D
that derives a self-contradictory question for this H in that:
(a) When each H says that its D will halt, D loops
(b) When each H that says its D will loop it halts.
On 10/29/2023 10:01 PM, olcott wrote:
On 10/29/2023 9:27 PM, olcott wrote:
On 10/29/2023 9:12 PM, olcott wrote:
On 10/29/2023 8:44 PM, olcott wrote:
On 10/29/2023 8:19 PM, olcott wrote:
On 10/29/2023 7:57 PM, olcott wrote:
On 10/29/2023 6:43 PM, olcott wrote:
On 10/29/2023 5:38 PM, olcott wrote:
On 10/29/2023 3:58 PM, olcott wrote:
On 10/29/2023 12:30 PM, olcott wrote:
*Everyone agrees that this is impossible*
No computer program H can correctly predict what another >>>>>>>>>>> computer
program D will do when D has been programmed to do the
opposite of
whatever H says.
H(D) is functional notation that specifies the return value >>>>>>>>>>> from H(D)
Correct(H(D)==false) means that H(D) is correct that D does >>>>>>>>>>> not halt
Correct(H(D)==true) means that H(D) is correct that D does halt >>>>>>>>>>>
For all H ∈ TM there exists input D such that
(Correct(H(D)==false) ∨ (Correct(H(D)==true))==false
*No one pays attention to what this impossibility means* >>>>>>>>>>> The halting problem is defined as an unsatisfiable
specification thus
isomorphic to a question that has been defined to have no >>>>>>>>>>> correct
answer.
What time is it (yes or no)?
has no correct answer because there is something wrong with the >>>>>>>>>>> question. In this case we know to blame the question and not >>>>>>>>>>> the one
answering it.
When we understand that there are some inputs to every TM H that >>>>>>>>>>> contradict both Boolean return values that H could return >>>>>>>>>>> then the
question: Does your input halt? is essentially a
self-contradictory
(thus incorrect) question in these cases.
The inability to correctly answer an incorrect question
places no actual
limit on anyone or anything.
This insight opens up an alternative treatment of these
pathological
inputs the same way that ZFC handled Russell's Paradox.
The halting problem proofs merely show that the problem
definition is unsatisfiable because every H of the infinite >>>>>>>>>> set of all Turing Machines has an input that makes the
question: Does your input halt? into a self-contradictory
thus incorrect question for this H.
I now have two University professors that agree with this.
My words may need some technical improvement...
[problem specification] is unsatisfiable
The idea is to convey the essence of many technical
papers in a single sound bite:
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
Anonymous experts are not "evidence"
and no "expert" can contradict the
actual definitions.
The whole thing is a matter of these definitions
semantically entailing additional nuances of meaning
that no one ever noticed before.
Computer scientists almost never pay any attention
at all to the philosophical underpinnings of the
foundations of concepts such as undecidability.
All of my related work in the last twenty years
has focused on these foundational underpinnings.
In the same way that incompleteness is proven whenever
any WFF of a formal system cannot be proven or refuted
in this formal system EVEN WHEN THE WFF IS SEMANTICALLY
SELF-CONTRADICTORY
The notion of undecidability is determined even when the
decider is required to correctly answer a self-contradictory
(thus incorrect) question.
This is the epiphany of my work for the last 20 years and
two professors agree that this does apply to the halting
problem specification.
I cannot form a proof on the basis of the conventional
definitions because the issue is that one of these
definitions semantically entails more meaning than
anyone ever noticed before.
That this applies generically to the notion of undecidability
seems to be an extension of these sames ideas that these
professors only applied to the halting problem specification.
The lead of these two professors and I exchanged fifty emails
where he confirmed my verbatim paraphrase of his ideas using
my own terms such as "incorrect questions".
Then you are admtting that you can't do the
work in the formal system, so any claim you
make about anything IN the system is just invalid.
That the "term undecidability" semantically entails
previously unnoticed nuances of meaning can be understood
on the basis of the reasoning of myself and these two professors.
Just like incompleteness includes self-contradictory
expressions in its measure of incompleteness, undecidability
includes problem specifications that entail self-contradictory
questions. IF YOU WEREN'T STUCK IN REBUTTAL MODE YOU MIGHT SEE THIS
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
Once you pay enough attention to see that the reasoning does
entail this then you will know that I and the two professors are
correct.
If you only want to provide a rebuttal no matter what the actual truth
is then you will continue to pretend that you don't see this.
When you just glance at my words to form a superficial basis
for an incorrect rebuttal you won't see this.
When we hypothesize that this <is> literally true then it
has enormous consequences:
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
For every H in the set of all Turing Machines there exists a D
that derives a self-contradictory question for this H in that
(a) If this H says that its D will halt, D loops
(b) If this H that says its D will loop it halts.
*Thus the question: Does D halt? is contradicted by some D for each H*
On 10/29/2023 12:30 PM, olcott wrote:
*Everyone agrees that this is impossible*
No computer program H can correctly predict what another computer
program D will do when D has been programmed to do the opposite of
whatever H says.
H(D) is functional notation that specifies the return value from H(D)
Correct(H(D)==false) means that H(D) is correct that D does not halt
Correct(H(D)==true) means that H(D) is correct that D does halt
For all H ∈ TM there exists input D such that
(Correct(H(D)==false) ∨ (Correct(H(D)==true))==false
*No one pays attention to what this impossibility means*
The halting problem is defined as an unsatisfiable specification thus
isomorphic to a question that has been defined to have no correct
answer.
What time is it (yes or no)?
has no correct answer because there is something wrong with the
question. In this case we know to blame the question and not the one
answering it.
When we understand that there are some inputs to every TM H that
contradict both Boolean return values that H could return, then the
question: Does your input halt? is essentially a self-contradictory
(thus incorrect) question in these cases.
The inability to correctly answer an incorrect question places no actual
limit on anyone or anything.
This insight opens up an alternative treatment of these pathological
inputs, in the same way that ZFC handled Russell's Paradox.
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
*A self-contradictory question is defined as*
Any yes/no question that contradicts both yes/no answers.
For every H in the set of all Turing Machines there exists a D
that derives a self-contradictory question for this H in that
(a) If this H says that its D will halt, D loops.
(b) If this H says that its D will loop, D halts.
*Thus the question: Does D halt? is contradicted by some D for each H*
*proving that this is literally true*
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
Nope, since each specific question HAS
a correct answer, it shows that, by your
own definition, it isn't "Self-Contradictory"
*That is a deliberate strawman deception paraphrase*
There does not exist a solution to the halting problem because
*for every Turing Machine of the infinite set of all Turing machines*
there exists a D that makes the question:
Does your input halt?
a self-contradictory thus incorrect question.
Where does it say that a Turing
Machine must exist to do it?
*The only reason that no such Turing Machine exists is*
For every H in the set of all Turing Machines there exists a D
that derives a self-contradictory question for this H in that
(a) If this H says that its D will halt, D loops.
(b) If this H says that its D will loop, D halts.
*Thus the question: Does D halt? is contradicted by some D for each H*
*therefore*
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
The issue that you ignore is that you are
conflating a set of questions with a question,
and are basing your logic on a strawman.
It is not my mistake. Linguists understand that the
context of who is asked a question changes the meaning
of the question.
This can easily be shown to apply to decision problem
instances as follows:
Both H.true and H.false are the wrong answer when
D calls H to do the opposite of whatever value
H returns, whereas exactly one of H1.true or
H1.false is correct for this exact same D.
This proves that the question: "Does your input halt?"
has a different meaning across the H and H1 pairs.
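The H versus H1 contrast can be sketched as follows. This is a hedged toy, not a general decider: `H`, `H1`, and `D` are hypothetical stand-ins, and H1 "decides" only this particular D, by simply running it.

```python
def H(program):
    # The specific decider that D is wired to contradict;
    # suppose it answers False ("does not halt") on D.
    return False

def D():
    if H(D):            # D consults H, and only H ...
        while True:     # ... doing the opposite of H's verdict
            pass
    return              # H said False, so D halts

def H1(program):
    # A different decider that D never calls. For this one D it can
    # simply run D (which halts) and report True.
    program()
    return True

print(H(D))   # False, yet D() halts: H is wrong about D
print(H1(D))  # True: H1 is right about the very same D
```

The asymmetry comes entirely from which decider D consults: D contradicts H because it calls H, while H1, which D never calls, is free to answer correctly.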
It *CAN* if the question asks something about
the one being questioned.
But it *CAN'T* if the question doesn't in any
way refer to whom you ask.
D calls H, thus D DOES refer to H.
D does not call H1, therefore D does not refer to H1.
*That is a deliberate strawman deception paraphrase*
*That is a deliberate strawman deception paraphrase*
There does not exist a solution to the halting problem because
*for every Turing Machine of the infinite set of all Turing machines*
*for every Turing Machine of the infinite set of all Turing machines*
*for every Turing Machine of the infinite set of all Turing machines*
there exists a D that makes the question:
Does your input halt?
a self-contradictory thus incorrect question.
Where does it say that a Turing
Machine must exsit to do it?
*The only reason that no such Turing Machine exists is*
For every H in the set of all Turing Machines there exists a D
that derives a self-contradictory question for this H in that
(a) If this H says that its D will halt, D loops
(b) If this H that says its D will loop it halts.
*Thus the question: Does D halt? is contradicted by some D for each H*
*therefore*
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
The issue that you ignore is that you are
confalting a set of questions with a question,
and are baseing your logic on a strawman,
It is not my mistake. Linguists understand that the
context of who is asked a question changes the meaning
of the question.
This can easily be shown to apply to decision problem
instances as follows:
In that H.true and H.false are the wrong answer when
D calls H to do the opposite of whatever value that
either H returns.
Whereas exactly one of H1.true or H1.false is correct
for this exact same D.
This proves that the question: "Does your input halt?"
has a different meaning across the H and H1 pairs.
It *CAN* if the question ask something about
the person being questioned.
But it *CAN'T* if the question doesn't in any
way reffer to who you ask.
D calls H thus D DOES refer to H
D does not call H1 therefore D does not refer to H1
On 10/30/2023 6:17 PM, olcott wrote:
On 10/30/2023 5:46 PM, olcott wrote:
On 10/30/2023 5:10 PM, olcott wrote:
On 10/30/2023 3:11 PM, olcott wrote:
On 10/30/2023 1:08 PM, olcott wrote:
On 10/30/2023 12:23 PM, olcott wrote:
On 10/30/2023 11:57 AM, olcott wrote:
On 10/30/2023 11:29 AM, olcott wrote:
On 10/29/2023 12:30 PM, olcott wrote:
*Everyone agrees that this is impossible*
No computer program H can correctly predict what another computer
program D will do when D has been programmed to do the opposite of
whatever H says.

H(D) is functional notation that specifies the return value from H(D)
Correct(H(D)==false) means that H(D) is correct that D does not halt
Correct(H(D)==true) means that H(D) is correct that D does halt

For all H ∈ TM there exists input D such that
(Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
*No one pays attention to what this impossibility means*
The halting problem is defined as an unsatisfiable specification, thus
isomorphic to a question that has been defined to have no correct
answer.

What time is it (yes or no)?
has no correct answer because there is something wrong with the
question. In this case we know to blame the question and not the one
answering it.

When we understand that there are some inputs to every TM H that
contradict both Boolean return values that H could return, then the
question: Does your input halt? is essentially a self-contradictory
(thus incorrect) question in these cases.

The inability to correctly answer an incorrect question places no
actual limit on anyone or anything.

This insight opens up an alternative treatment of these pathological
inputs, the same way that ZFC handled Russell's Paradox.
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*

*A self-contradictory question is defined as*
Any yes/no question that contradicts both yes/no answers.

For every H in the set of all Turing Machines there exists a D
that derives a self-contradictory question for this H in that
(a) If this H says that its D will halt, D loops.
(b) If this H says that its D will loop, D halts.

*Thus the question: Does D halt? is contradicted by some D for each H*
*proving that this is literally true*
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
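The case analysis in clauses (a) and (b) can be sketched as runnable code. This is a minimal toy illustration under assumed names (`make_d`, `h_yes`, `h_no` are hypothetical; any real H would be some fixed program), not an implementation of any actual decider:

```python
# Sketch of the D described in (a) and (b): D embeds a copy of a claimed
# halt decider h and does the opposite of whatever h answers about D itself.

def make_d(h):
    """Given a claimed halt decider h, build the pathological program D."""
    def d():
        if h(d):           # (a) h says "D halts" ...
            while True:    # ... so D loops forever
                pass
        return             # (b) h says "D loops", so D halts

    return d

# Two toy deciders covering both possible boolean verdicts:
h_yes = lambda prog: True    # always answers "halts"
h_no = lambda prog: False    # always answers "loops"

d_yes = make_d(h_yes)        # h_yes says True, but d_yes() would loop forever
d_no = make_d(h_no)          # h_no says False, but d_no() halts immediately

assert h_yes(d_yes) is True
assert h_no(d_no) is False
```

For any total `h` that returns a bool, `h(make_d(h))` is guaranteed to be the wrong verdict about `make_d(h)`; this is the (a)/(b) case analysis in executable form.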
Nope, since each specific question HAS a correct answer, it shows
that, by your own definition, it isn't "Self-Contradictory".

*That is a deliberate strawman deception paraphrase*
There does not exist a solution to the halting problem because
*for every Turing Machine of the infinite set of all Turing machines*
there exists a D that makes the question:
Does your input halt?
a self-contradictory, thus incorrect, question.
Where does it say that a Turing
Machine must exist to do it?

*The only reason that no such Turing Machine exists is*
For every H in the set of all Turing Machines there exists a D
that derives a self-contradictory question for this H in that
(a) If this H says that its D will halt, D loops.
(b) If this H says that its D will loop, D halts.
*Thus the question: Does D halt? is contradicted by some D for each H*
*therefore*
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
The issue that you ignore is that you are
conflating a set of questions with a question,
and are basing your logic on a strawman.

It is not my mistake. Linguists understand that the
context of who is asked a question changes the meaning
of the question.
This can easily be shown to apply to decision problem
instances as follows:
Both H.true and H.false are the wrong answer when
D calls H to do the opposite of whatever value
H returns,
whereas exactly one of H1.true or H1.false is correct
for this exact same D.
This proves that the question: "Does your input halt?"
has a different meaning across the H and H1 pairs.
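The H/H1 contrast described here can be made concrete in the same toy setting (all names hypothetical): D inverts the answer of the one specific decider it calls, so a different machine that D never calls can answer correctly about the very same D:

```python
# Toy contrast between H (called by D) and H1 (never called by D).
# Illustrative sketch only; make_d builds D to invert one specific decider.

def make_d(h):
    def d():
        if h(d):
            while True:   # h said "halts", so D loops
                pass
        return            # h said "loops", so D halts
    return d

def h(prog):
    return True           # H's verdict on its input: "halts"

d = make_d(h)             # this D calls h, and only h

# H is wrong about its own D: h(d) is True, yet d() would loop forever.
# H1 is never consulted by d, so it can simply report what d does:
def h1(prog):
    return False          # correct: since h answers True, this d loops

assert h(d) is True       # the inverted (wrong) answer
assert h1(d) is False     # a different machine answers the same D correctly
```

The same D is fixed in both assertions; only the machine being asked changes, which is the point under dispute.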
It *CAN* if the question asks something about
the person being questioned.
But it *CAN'T* if the question doesn't in any
way refer to who you ask.

D calls H, thus D DOES refer to H.
D does not call H1, therefore D does not refer to H1.
The QUESTION doesn't refer to the person
being asked?
That D calls H doesn't REFER to the asker,
but to a specific machine.

For the H/D pair D does refer to the specific
machine being asked: Does your input halt?
D knows about and references H.
For the H1/D pair D does not refer to the specific
machine being asked: Does your input halt?
D does not know about or reference H1.
If these things were not extremely difficult to
understand they would have been addressed before
publication in 1936.
Nope. The question "does this input, representing
D(D), halt?" does NOT refer to any particular decider,
just whatever one it is given to.

*You can ignore that D calls H; none-the-less, when D*
*calls H this does mean that D <is> referencing H*
The only way that I can tell that I am proving my point
is that rebuttals from people that are stuck in rebuttal
mode become increasingly nonsensical.
"CALLING H doesn't REFER to the decider deciding it."

Sure it does: with H(D,D), D is calling the decider deciding it.
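The two-argument form H(D,D) mentioned here can be sketched the same way (illustrative names only): the decider takes a program and an input, and D consults H about the self-application of its own argument:

```python
# Sketch of the two-argument form H(program, input) and the self-application
# D(D) debated above. Names are illustrative, not from any real library.

def h(prog, inp):
    """A toy two-argument 'halt decider': always answers True ("halts")."""
    return True

def d(prog):
    """D does the opposite of H's verdict on prog applied to itself."""
    if h(prog, prog):     # D calls the decider that is deciding it
        while True:       # H said "halts" -> loop forever
            pass
    return                # H said "loops" -> halt

# d(d) would consult h(d, d), get True, and loop forever, so h's
# verdict True is the wrong answer for the input pair (d, d).
assert h(d, d) is True
```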
On 10/30/2023 6:17 PM, olcott wrote:
On 10/30/2023 5:46 PM, olcott wrote:
On 10/30/2023 5:10 PM, olcott wrote:
On 10/30/2023 3:11 PM, olcott wrote:
On 10/30/2023 1:08 PM, olcott wrote:
On 10/30/2023 12:23 PM, olcott wrote:
On 10/30/2023 11:57 AM, olcott wrote:
On 10/30/2023 11:29 AM, olcott wrote:
On 10/29/2023 12:30 PM, olcott wrote:
*Everyone agrees that this is impossible*
No computer program H can correctly predict what another computer >>>>>>>>>> program D will do when D has been programmed to do the
opposite of
whatever H says.
H(D) is functional notation that specifies the return value >>>>>>>>>> from H(D)
Correct(H(D)==false) means that H(D) is correct that D does >>>>>>>>>> not halt
Correct(H(D)==true) means that H(D) is correct that D does halt >>>>>>>>>>
For all H ∈ TM there exists input D such that
(Correct(H(D)==false) ∨ (Correct(H(D)==true))==false
*No one pays attention to what this impossibility means*
The halting problem is defined as an unsatisfiable
specification thus
isomorphic to a question that has been defined to have no correct >>>>>>>>>> answer.
What time is it (yes or no)?
has no correct answer because there is something wrong with the >>>>>>>>>> question. In this case we know to blame the question and not >>>>>>>>>> the one
answering it.
When we understand that there are some inputs to every TM H that >>>>>>>>>> contradict both Boolean return values that H could return then >>>>>>>>>> the
question: Does your input halt? is essentially a
self-contradictory
(thus incorrect) question in these cases.
The inability to correctly answer an incorrect question places >>>>>>>>>> no actual
limit on anyone or anything.
This insight opens up an alternative treatment of these
pathological
inputs the same way that ZFC handled Russell's Paradox.
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
*A self-contradictory question is defined as*
Any yes/no question that contradicts both yes/no answers. >>>>>>>>>
For every H in the set of all Turing Machines there exists a D >>>>>>>>> that derives a self-contradictory question for this H in that >>>>>>>>> (a) If this H says that its D will halt, D loops
(b) If this H that says its D will loop it halts.
*Thus the question: Does D halt? is contradicted by some D for >>>>>>>>> each H*
*proving that this is literally true*
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
Nope, since each specific question HAS
a correct answer, it shows that, by your
own definition, it isn't "Self-Contradictory"
*That is a deliberate strawman deception paraphrase*
*That is a deliberate strawman deception paraphrase*
*That is a deliberate strawman deception paraphrase*
There does not exist a solution to the halting problem because
*for every Turing Machine of the infinite set of all Turing machines*
there exists a D that makes the question:
Does your input halt?
a self-contradictory thus incorrect question.
Where does it say that a Turing
Machine must exist to do it?
*The only reason that no such Turing Machine exists is*
For every H in the set of all Turing Machines there exists a D
that derives a self-contradictory question for this H in that
(a) If this H says that its D will halt, D loops
(b) If this H says that its D will loop, D halts.
*Thus the question: Does D halt? is contradicted by some D for
each H*
*therefore*
*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*
The issue that you ignore is that you are
conflating a set of questions with a question,
and are basing your logic on a strawman.
It is not my mistake. Linguists understand that the
context of who is asked a question changes the meaning
of the question.
This can easily be shown to apply to decision problem
instances as follows:
In that H.true and H.false are both the wrong answer when
D calls H to do the opposite of whatever value
H returns,
whereas exactly one of H1.true or H1.false is correct
for this exact same D.
This proves that the question: "Does your input halt?"
has a different meaning across the H and H1 pairs.
It *CAN* if the question asks something about
the person being questioned.
But it *CAN'T* if the question doesn't in any
way refer to who you ask.
D calls H thus D DOES refer to H
D does not call H1 therefore D does not refer to H1
The QUESTION doesn't refer to the person
being asked?
That D calls H doesn't REFER to the asker,
but to a specific machine.
For the H/D pair D does refer to the specific
machine being asked: Does your input halt?
D knows about and references H.
Nope. The question "Does this input representing
D(D) halt?" does NOT refer to any particular decider,
just whatever one it is given to.
*You can ignore that D calls H, but when D*
*calls H this does mean that D <is> referencing H*
The only way that I can tell that I am proving my point
is that rebuttals from people that are stuck in rebuttal
mode become increasingly nonsensical.
"CALLING H doesn't REFER to the decider deciding it."
Sure it does with H(D,D) D is calling the decider deciding it.
On 10/30/2023 7:04 PM, olcott wrote:
Nope, D is calling the original H, no matter
WHAT decider is deciding it.
Duh? D calls the original decider when
the original decider is the one deciding it.
Because the halting problem and Tarski Undefinability
(attempting to formalize the notion of truth itself)
are different aspects of the same problem:
These same ideas can be used to automatically divide
truth from disinformation so that climate change
denial does not cause humans to become extinct.
Are you going to perpetually play head games?
On 10/30/23 5:39 PM, olcott wrote:
Nope, D is calling the original H, no matter
WHAT decider is deciding it.
Duh? calling the original decider when
the original decider is deciding it
Which doesn't mean the problem has a REFERENCE, because the code it uses
doesn't change.
I guess you DO think that the following code makes y a reference to x:
x = 1;
y = 1;
Which proves your stupidity.
Because the halting problem and Tarski Undefinability
(attempting to formalize the notion of truth itself)
are different aspects of the same problem:
So? Where does "Because" apply here?
My same ideas can be used to automatically divide
truth from disinformation so that climate change
denial does not cause humans to become extinct.
But clearly it isn't, as you are spreading disinformation, as has been proven.
Are you going to perpetually play head games?
No, I will continue to point out actual Truth.
YOU are the one playing Head Games.
To be a "Reference" it needs to always end up using the thing
referenced, which isn't what happens here.
You are just showing how IGNORANT you are of basic facts.
How can you possibly think you can determine what is truth when you
continue to base your arguments on LIES.
Or, is your intent to get rid of "Disinformation" by just saying it
doesn't exist because anything we want to be true we can make true.
That seems to be the basis of your logic.