When Ĥ is applied to ⟨Ĥ⟩
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞ // wrong answer
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn // wrong answer
The above pair of templates specify every encoding of Ĥ that can
possibly exist, an infinite set of Turing machines such that each one
gets the wrong answer when it is required to report its own halt status.
https://www.liarparadox.org/Linz_Proof.pdf
This proves that the halting problem counter-example
<is> isomorphic to the Liar Paradox.
On 2/8/2024 8:33 AM, Mikko wrote:
On 2024-02-08 14:14:55 +0000, olcott said:
When Ĥ is applied to ⟨Ĥ⟩
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞ // wrong answer
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn // wrong answer
The above pair of templates specify every encoding of Ĥ that can
possibly exist, an infinite set of Turing machines such that each one
gets the wrong answer when it is required to report its own halt status.
https://www.liarparadox.org/Linz_Proof.pdf
This proves that the halting problem counter-example
<is> isomorphic to the Liar Paradox.
Ĥ is not required to report anything. Linz only specifies how Ĥ is
constructed but not what it should do.
*Clearly you didn't read what he said on the link*
On 2/8/2024 9:11 AM, Mikko wrote:
On 2024-02-08 14:39:05 +0000, olcott said:
On 2/8/2024 8:33 AM, Mikko wrote:
On 2024-02-08 14:14:55 +0000, olcott said:
When Ĥ is applied to ⟨Ĥ⟩
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞ // wrong answer
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn // wrong answer
The above pair of templates specify every encoding of Ĥ that can
possibly exist, an infinite set of Turing machines such that each one
gets the wrong answer when it is required to report its own halt status.
https://www.liarparadox.org/Linz_Proof.pdf
This proves that the halting problem counter-example
<is> isomorphic to the Liar Paradox.
Ĥ is not required to report anything. Linz only specifies how Ĥ is
constructed but not what it should do.
*Clearly you didn't read what he said on the link*
The point is not what he said but what he didn't say. He didn't
say what Ĥ is required to do.
He did say what Ĥ is required to do
and you simply didn't read what he said.
On 2/9/2024 1:07 AM, Mikko wrote:
On 2024-02-08 15:15:54 +0000, olcott said:
On 2/8/2024 9:11 AM, Mikko wrote:
On 2024-02-08 14:39:05 +0000, olcott said:
On 2/8/2024 8:33 AM, Mikko wrote:
On 2024-02-08 14:14:55 +0000, olcott said:
When Ĥ is applied to ⟨Ĥ⟩
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞ // wrong answer
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn // wrong answer
The above pair of templates specify every encoding of Ĥ that can
possibly exist, an infinite set of Turing machines such that each one
gets the wrong answer when it is required to report its own halt status.
https://www.liarparadox.org/Linz_Proof.pdf
This proves that the halting problem counter-example
<is> isomorphic to the Liar Paradox.
Ĥ is not required to report anything. Linz only specifies how Ĥ is
constructed but not what it should do.
*Clearly you didn't read what he said on the link*
The point is not what he said but what he didn't say. He didn't
say what Ĥ is required to do.
He did say what Ĥ is required to do
and you simply didn't read what he said.
No, he didn't. Otherwise you would show where in that text is the word
"require" or something that means the same. But you don't because he
didn't say.
We can therefore legitimately ask what would happen if Ĥ is
applied to ŵ. (middle of page 3)
https://www.liarparadox.org/Linz_Proof.pdf
In my notational conventions it would be: Ĥ applied to ⟨Ĥ⟩.
When Ĥ is to report on its own behavior both Ĥ.qy and Ĥ.qn are the
wrong answer for every possible Ĥ applied to ⟨Ĥ⟩.
When every possible Ĥ of the infinite set of Ĥ is applied to
its own machine description: ⟨Ĥ⟩ then Ĥ is intentionally defined
to be self-contradictory.
On 2/10/2024 9:29 AM, Richard Damon wrote:
On 2/10/24 10:06 AM, olcott wrote:
On 2/10/2024 7:35 AM, Richard Damon wrote:
On 2/10/24 12:33 AM, olcott wrote:
On 2/9/2024 11:15 PM, Richard Damon wrote:
On 2/9/24 11:24 PM, olcott wrote:
On 2/9/2024 6:09 PM, Richard Damon wrote:
On 2/9/24 9:50 AM, olcott wrote:
On 2/9/2024 6:05 AM, Richard Damon wrote:
So?
On 2/9/24 12:22 AM, olcott wrote:
On 2/8/2024 9:44 PM, Richard Damon wrote:
On 2/8/24 10:34 PM, olcott wrote:
On 2/8/2024 8:40 PM, Richard Damon wrote:
On 2/8/24 7:48 PM, olcott wrote:
On 2/8/2024 5:50 PM, Richard Damon wrote:
On 2/8/24 1:28 PM, olcott wrote:
On 2/8/2024 12:15 PM, immibis wrote:
On 8/02/24 19:09, olcott wrote:
On 2/8/2024 10:32 AM, immibis wrote:
No, it proves the right answer is the opposite of what it says.
On 8/02/24 15:14, olcott wrote:
When Ĥ is applied to ⟨Ĥ⟩
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞ // wrong answer
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn // wrong answer
The above pair of templates specify every encoding of Ĥ that can
possibly exist, an infinite set of Turing machines such that each one
gets the wrong answer when it is required to report its own halt status.
This proves that it is impossible for any Ĥ to give the right answer
on all inputs.
It proves that asking Ĥ whether it halts or not is an incorrect
question where both yes and no are the wrong answer.
*This seems to be over your head*
A self-contradictory question never has any correct answer.
So the Halting Question, does the computation described by the input
Halt? isn't a self-contradictory question, as it always has a correct
answer, the opposite of what H gives (if it gives one).
Thus, your premise is false.
Maybe you need to carefully reread this fifty to sixty times before you
get it? (it took me twenty years to get it this simple)
When Ĥ is to report on its own behavior both Ĥ.qy and Ĥ.qn are the
wrong answer for every possible Ĥ applied to ⟨Ĥ⟩.
But Ĥ doesn't need to report on anything, the copy of H that is in it does.
Do you understand that every possible element of an infinite set is
more than one element?
Right, so the set isn't a specific input, so not the thing that the Halting
question is about.
The Halting problem is about making a decider that answers the Halting
question, which asks the decider about the SPECIFIC COMPUTATION (a
specific program/data) that the input describes.
Not about "sets" of Decider / Inputs
When an infinite set of decider/input pairs has no correct
answer then the question is rigged.
Except that EVERY element of that set had a correct answer, just not
the one the decider gave.
When Ĥ applied to ⟨Ĥ⟩ has been intentionally defined to contradict
every value that each embedded_H returns for the infinite set of
every Ĥ that can possibly exist then each and every element of
these Ĥ / ⟨Ĥ⟩ pairs is isomorphic to a self-contradictory question.
No, YOUR POOP question, is self-contradictory.
The Halting Question is not, as EVERY element of that set you talk
about has a correct answer to it, as every specific input describes a
Halting Computation or not.
When Ĥ is applied to ⟨Ĥ⟩
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
When every possible Ĥ of the infinite set of Ĥ is applied to
its own machine description: ⟨Ĥ⟩ then Ĥ is intentionally defined
to be self-contradictory.
Note, every possible Ĥ means every possible H, so all H are wrong.
The issue is not that the most powerful model of computation is
too weak. The issue is that an input was intentionally defined
to be self-contradictory.
But it shows that the simple problem, for which we have good reasons
for wanting an answer, can not be computed by this most powerful model
of computation.
Ĥ applied to ⟨Ĥ⟩ is asking Ĥ:
Do you halt on your own Turing Machine Description?
No, it is asking if the computation described by the input will halt when run.
Linz and I have been referring to the actual computation of
Ĥ applied to ⟨Ĥ⟩ with no simulators involved.
Right, and since your Ĥ (Ĥ) will halt, since your H (Ĥ) (Ĥ) goes to qn
to say the computation that its input (Ĥ) (Ĥ) represents, that is Ĥ (Ĥ),
will not halt.
Thus your H is just WRONG.
embedded_H could be encoded with every detail of all knowledge that can
be expressed using language. This means that embedded_H is not
restricted by typical conventions. embedded_H could output a text string
swearing at you in English for trying to trick it. This would not be a
wrong answer.
Embedded H is restricted to only be able to do what is computable.
Since Embedded_H is (at least by your claims) an exact copy of the
Turing Machine H, it can only do what a Turing Machine can do.
When Embedded_H has encoded within it all of human knowledge that can
be encoded within language then it ceases to be restricted to Boolean.
This enables Embedded_H to do anything that a human mind can do.
On 2/10/24 9:24 PM, olcott wrote:
When a machine contradicts every answer that this same machine
provides this is a ruse to try to show that computation is limited.
In other words, you don't understand what you are talking about.
You don't understand what a computation IS, so you don't understand
their limits.
So, it CAN'T do what you claim, so you are a LIAR.
enum Boolean {
TRUE,
FALSE,
NEITHER
};
Boolean True(English, "this sentence is not true")
would be required to do this same sort of thing.
But CAN it? Remember, programs can only do what programs can do, which
is based on the instructions they are composed of.
You are just too stupid to understand this.
It is not that I am stupid it is that you cannot think outside the box
of conventional wisdom. There is nothing impossible about a TM that
can communicate in English and understand the meaning of words to the
same extent that human experts do.
On 2/11/2024 6:37 AM, Richard Damon wrote:
On 2/10/24 10:45 PM, olcott wrote:
On 2/10/2024 9:26 PM, Richard Damon wrote:
On 2/10/24 9:59 PM, olcott wrote:
Mechanical and organic thinkers are either coherent or incorrect.
"Mechanical things" don't "think" in the normal sense the word is used.
They COMPUTE, based on fixed pre-defined rules.
LLMs can reconfigure themselves on the fly redefining
their own rules within a single dialogue.
But only in accordance to its existing programming, or your system
isn't a Computation.
The point is that they can reprogram themselves on the fly using modern machine learning. LLMs learn on their own.
On 2/12/2024 1:42 PM, immibis wrote:
On 12/02/24 19:37, olcott wrote:
On 2/12/2024 12:29 PM, immibis wrote:
On 12/02/24 19:14, olcott wrote:
Math and computer science are anchored in fundamental misconceptions
of the way that analytical truth really works.
It may seem that way to everyone that does not understand math,
computer science, and analytical truth.
Very few people understand analytical truth, most simply
disbelieve that it exists on the basis of Quine's nonsense
rebuttal.
Many people understand the halting problem. You are not one of them.
Many people understand that the halting problem proof has
no errors within the conventional notion of undecidability.
Very few people understand that conventional notion of
undecidability is itself incoherent.
On 2/12/24 1:37 PM, olcott wrote:
On 2/12/2024 12:29 PM, immibis wrote:
On 12/02/24 19:14, olcott wrote:
Math and computer science are anchored in fundamental misconceptions
of the way that analytical truth really works.
It may seem that way to everyone that does not understand math,
computer science, and analytical truth.
Very few people understand analytical truth, most simply
disbelieve that it exists on the basis of Quine's nonsense
rebuttal.
Two Dogmas of Empiricism Willard Van Orman Quine (1951)
https://michaelreno.org/wp-content/uploads/2020/01/QuineTwoDogmas.pdf
It's clear that YOU don't understand what you are talking about.
It is ANALYTICALLY TRUE, as PROVEN, that Halting is Undecidable.
On 2/13/2024 7:55 PM, immibis wrote:
On 13/02/24 23:53, olcott wrote:
On 2/13/2024 2:25 PM, immibis wrote:
On 13/02/24 01:11, olcott wrote:
On 2/12/2024 5:08 PM, immibis wrote:
On 12/02/24 22:49, olcott wrote:
On 2/12/2024 3:41 PM, immibis wrote:
On 12/02/24 21:34, olcott wrote:
On 2/12/2024 2:12 PM, Shvili, the Kookologist wrote:
On 2024-02-12, olcott <polcott2@gmail.com> wrote:
[...]
Self-contradictory inputs must be rejected as invalid.
Math and computer science don't understand this.
I'm curious... How can you possibly write things like this and not see
that you are (or at least will be seen as) a deluded crackpot?
*This proves that Gödel did not understand that*
...14 Every epistemological antinomy can likewise be used for a similar
undecidability proof...(Gödel 1931:43)
After you acknowledge that you understand that epistemological
antinomies cannot be used as the basis of any proof I will
elaborate further.
Linz paper is all concrete computer science. There are no
"epistemological antinomies", only computer science.
PhD computer science professors
Stoddart, Hehner and Macias disagree thus proving that
I am not a crank.
Unlike you, I actually read the Macias paper you referenced. He does
not agree with you and he does not prove anything.
The other two directly agree with me; Macias is a little more indirect.
Now run BAD(BAD) and consider what happens: ...
Note that these are the only two possible cases, and in either case
(whether HALT returns 0 or 1), HALT′s behavior is incorrect,
i.e., HALT fails to answer the Halting Problem correctly (Macias:2014)
So Macias agrees the halting problem cannot be solved by any program.
Three PhD computer science professors agree with my 2004
position that:
*the only reason the halting problem cannot be*
*solved is that there is something wrong with it*
So you agree it cannot be solved. Case closed. You can stop posting now.
Does the halting problem place an actual limit on computation?
https://www.researchgate.net/publication/374806722_Does_the_halting_problem_place_an_actual_limit_on_computation
*Incorrect questions place no limit on anyone or anything*
*Incorrect questions place no limit on anyone or anything*
*Incorrect questions place no limit on anyone or anything*
On 2/13/2024 2:35 AM, Mikko wrote:
On 2024-02-13 00:11:58 +0000, olcott said:
The other two directly agree with me; Macias is a little more indirect.
Now run BAD(BAD) and consider what happens: ...
Note that these are the only two possible cases, and in either case
(whether HALT returns 0 or 1), HALT′s behavior is incorrect,
i.e., HALT fails to answer the Halting Problem correctly (Macias:2014)
That is in no way indirect; that is a direct statement of an important
part of the proof of undecidability of halting.
Macias says this is what's wrong with the halting problem specification:
But there is a class of computer functions whose behavior is
dependent on the context in which they are called or used:
these may be called Context-Dependent Functions (CDFs). (Macias:2014)
Thus three PhD computer science professors agree (with me) that
there is something wrong with the halting problem specification.