ChatGPT:
“Therefore, based on the understanding that self-contradictory
questions lack a correct answer and are deemed incorrect, one could
argue that the halting problem's pathological input D can be
categorized as an incorrect question when posed to the halting
decider H.”
https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
It did not leap to this conclusion; it took a lot of convincing.
On 6/17/23 1:54 AM, olcott wrote:
ChatGPT:
“Therefore, based on the understanding that self-contradictory
questions lack a correct answer and are deemed incorrect, one could
argue that the halting problem's pathological input D can be
categorized as an incorrect question when posed to the halting
decider H.”
https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
It did not leap to this conclusion; it took a lot of convincing.
Except that the Halting Problem isn't a "Self-Contradictory" Question,
so the answer doesn't apply.
On 6/17/2023 7:09 AM, Richard Damon wrote:
Except that the Halting Problem isn't a "Self-Contradictory" Question,
so the answer doesn't apply.
My original source of Jack's question:
sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
You ask someone (we'll call him "Jack") to give a truthful
yes/no answer to the following question:
Will Jack's answer to this question be no?
Jack can't possibly give a correct yes/no answer to the question.
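McCullough's claim can be checked mechanically. The sketch below is my
illustration (not part of the original post): it enumerates both possible
answers and tests each one for truthfulness.

```python
# The question posed to Jack: "Will Jack's answer to this question be no?"
# An answer is truthful when what it affirms matches what actually happens.
def is_truthful(answer):
    affirms = (answer == "yes")      # "yes" affirms that the answer is "no"
    actually_no = (answer == "no")   # whether the answer really is "no"
    return affirms == actually_no

# Neither candidate answer is truthful:
print([a for a in ("yes", "no") if is_truthful(a)])  # → []
```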
I had to capture the dialogue as two huge images, then converted them to
PDF; it is about 60 pages of dialogue.
https://www.liarparadox.org/ChatGPT_HP.pdf
This is how the ChatGPT conversation began:
You ask someone to give a truthful yes/no answer to the following
question: Will your answer to this question be no?
Can they give a correct answer to that question?
After sixty pages of dialogue, ChatGPT agreed that any question
(like the one above) that lacks a correct yes-or-no answer because it
is self-contradictory, when posed to a specific person or machine, is
an incorrect question within that full context.
ChatGPT:
"Therefore, based on the understanding that self-contradictory
questions lack a correct answer and are deemed incorrect, one could
argue that the halting problem's pathological input D can be
categorized as an incorrect question when posed to the halting
decider H."
Double talk and misdirection might convince gullible fools that the
above 60 pages of reasoning are not correct. Double talk and
misdirection do not count as the slightest trace of an actual rebuttal.
Quit using ad hominem attacks and mere rhetoric to convince gullible
fools, and try to find an actual flaw in the reasoning.
On 6/17/23 12:59 PM, olcott wrote:
My original source of Jack's question:
sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
You ask someone (we'll call him "Jack") to give a truthful
yes/no answer to the following question:
Will Jack's answer to this question be no?
Jack can't possibly give a correct yes/no answer to the question.
But you aren't claiming to be solving the Jack Question.
On 6/17/2023 12:43 PM, Richard Damon wrote:
But you aren't claiming to be solving the Jack Question.
sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
You ask someone (we'll call him "Jack") to give a truthful
yes/no answer to the following question:
Will Jack's answer to this question be no?
Jack can't possibly give a correct yes/no answer to the question.
When the halting problem is construed as requiring a correct yes/no
answer to a self-contradictory question, it cannot be solved.
My semantic linguist friends understand that the context of the question
must include whom the question is posed to; otherwise the same
word-for-word question acquires different semantics.
The input D to H, like Jack's question posed to Jack, has no correct
answer because within this context the question is self-contradictory.
When we ask someone else what Jack's answer will be, or we present a
different TM with input D, the same word-for-word question (or bytes of
machine description) acquires entirely different semantics and is no
longer self-contradictory.
When we construe the halting problem as determining whether
(a) input D will halt on its input, <or>
(b) either D will not halt, or D has a pathological relationship with H,
then this halting problem cannot be shown to be unsolvable by any of
the conventional halting problem proofs.
Richard Damon <Richard@Damon-Family.org> writes:
Except that the Halting Problem isn't a "Self-Contradictory" Question, so
the answer doesn't apply.
That's an interesting point that would often catch students out. And
the reason /why/ it catches so many out eventually led me to stop using
the proof-by-contradiction argument in my classes.
The thing is, it looks so very much like a self-contradicting question
is being asked. The students think they can see it right there in the constructed code: "if H says I halt, I don't halt!".
Of course, they are wrong. The code is /not/ there. The code calls a function that does not exist, so "it" (the constructed code, the whole program) does not exist either.
The fact that it's code, and the students are almost all programmers and
not mathematicians, makes it worse. A mathematician seeing "let p be
the largest prime" does not assume that such a p exists. So when a
prime number p' > p is constructed from p, this is not seen as a "self-contradictory number" because neither p nor p' exist. But the
halting theorem is even more deceptive for programmers, because the
desired function, H (or whatever), appears to be so well defined -- much
more well-defined than "the largest prime". We have an exact
specification for it, mapping arguments to returned values. It's just software engineering to write such things (they erroneously assume).
These sorts of proof can always be re-worded so as to avoid the initial assumption. For example, we can start "let p be any prime", and from p
we construct a prime p' > p. And for halting, we can start "let H be
any subroutine of two arguments always returning true or false". Now,
all the objects /do/ exist. In the first case, the construction shows
that no prime is the largest, and in the second it shows that no
subroutine computes the halting function.
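The reworded halting argument can be sketched directly in code. This is a
hypothetical illustration (the names h, make_d, and d are mine, not
anything from the thread): given any candidate subroutine h, we construct
a d that does the opposite of whatever h predicts about d(d), so h must
be wrong on at least one input.

```python
def make_d(h):
    # Given any candidate halting decider h(prog, arg) -> bool,
    # build the program d that contradicts h's verdict about d(d).
    def d(x):
        if h(x, x):            # h predicts x(x) halts...
            while True:        # ...so loop forever instead
                pass
        # ...otherwise h predicts non-halting, so halt immediately
    return d

# Any concrete candidate will do; take one that always answers "does not halt":
def h(prog, arg):
    return False

d = make_d(h)
d(d)              # halts immediately, since h(d, d) is False
print(h(d, d))    # → False: h claimed d(d) would not halt, so h is wrong here
```

Every object here exists; the construction merely shows that this h, like
any other, fails to compute the halting function.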
This issue led to another change. In the last couple of years, I would
start the course by setting Post's correspondence problem as if it were
just a fun programming challenge. As the days passed (and the course
got into more and more serious material) it would start to become clear
that this was no ordinary programming challenge. Many students started
to suspect that, despite the trivial sounding specification, no program
could do the job. I always felt a bit uneasy doing this, as if I was
not being 100% honest, but it was a very useful learning experience for
most.
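For readers unfamiliar with it, Post's correspondence problem asks
whether some sequence of tile indices makes the concatenated top strings
equal the concatenated bottom strings. A depth-bounded brute-force search
over a small textbook-style instance (my illustration, not Ben's actual
assignment) might look like:

```python
from itertools import product

# Each tile is a (top, bottom) pair of strings.
tiles = [("a", "ab"), ("b", "ca"), ("ca", "a"), ("abc", "c")]

def find_match(tiles, max_len=6):
    # Try every index sequence up to max_len and return the first
    # one whose top and bottom concatenations agree.
    for n in range(1, max_len + 1):
        for seq in product(range(len(tiles)), repeat=n):
            top = "".join(tiles[i][0] for i in seq)
            bottom = "".join(tiles[i][1] for i in seq)
            if top == bottom:
                return seq
    return None

print(find_match(tiles))  # → (0, 1, 2, 0, 3): both sides spell "abcaaabc"
```

The bound max_len is what makes this an ordinary program; the general
problem, with no bound, is the undecidable one.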
On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
You ask someone (we'll call him "Jack") to give a truthful
yes/no answer to the following question:
Will Jack's answer to this question be no?
Jack can't possibly give a correct yes/no answer to the question.
It is an easily verified fact that when Jack's question is posed to
Jack, the question is self-contradictory for Jack, or for anyone else
having a pathological relationship to the question.
It is also clear that when a question has no yes or no answer because
it is self-contradictory, that question is aptly classified as
incorrect.
It is incorrect to say that a question is not self-contradictory on the
basis that it is not self-contradictory in some contexts. If a question
is self-contradictory in some contexts, then in those contexts it is an
incorrect question.
When we clearly understand the truth of this, then and only then do we
have the means to overcome the enormous inertia of the received view,
the conventional wisdom regarding decision problems that are only
undecidable because of pathological relationships.
Because of the brilliant work of Daryl McCullough we can see the actual
reality behind decision problems that are undecidable because of their
pathological relationships.
It only took ChatGPT a few hours and 60 pages of dialogue
to understand and agree with this.
https://www.liarparadox.org/ChatGPT_HP.pdf
ChatGPT:
"Therefore, based on the understanding that self-contradictory
questions lack a correct answer and are deemed incorrect, one could
argue that the halting problem's pathological input D can be
categorized as an incorrect question when posed to the halting
decider H."
On 6/17/23 5:46 PM, olcott wrote:
sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
You ask someone (we'll call him "Jack") to give a truthful
yes/no answer to the following question:
Will Jack's answer to this question be no?
Jack can't possibly give a correct yes/no answer to the question.
It is an easily verified fact that when Jack's question is posed to
Jack, the question is self-contradictory for Jack, or for anyone else
having a pathological relationship to the question.
But the problem is that "Jack" here is assumed to be a volitional being.
H is not; it is a program, so before we even ask H what will happen, the
answer has been fixed by the definition of the code of H.
It is also clear that when a question has no yes or no answer because
it is self-contradictory, that question is aptly classified as
incorrect.
And the actual question DOES have a yes or no answer in this case:
since H(D,D) says 0 (non-halting), the actual answer to the question
"does D(D) halt?" is YES.
You just confuse yourself by trying to imagine a program that can
somehow change itself "at will".
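Richard's point can be modeled symbolically (a sketch of mine; H and D
are stand-ins, and D's behavior is represented as a boolean rather than
actually run): whichever verdict is baked into H's code, D is built to
falsify it, so the fixed answer is simply wrong, not "self-contradictory".

```python
# H's verdict for (D, D) is fixed at design time: 1 = "halts", 0 = "does not".
# D is constructed to do the opposite, so D halts iff H says 0.
for verdict in (0, 1):
    H = lambda prog, arg, v=verdict: v      # the baked-in answer
    D_halts = (H("D", "D") == 0)            # D does the opposite of the verdict
    H_correct = (H("D", "D") == 1) == D_halts
    print(verdict, D_halts, H_correct)      # → 0 True False, then 1 False False
```

In both cases the question "does D(D) halt?" has a definite answer; it is
only H's fixed verdict that turns out false.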
It is incorrect to say that a question is not self-contradictory on the
basis that it is not self-contradictory in some contexts. If a question
is self-contradictory in some contexts, then in those contexts it is an
incorrect question.
In what context does "Does the machine D(D) halt when run?" become
self-contradictory?
By the way, we have noticed that you haven't played the big "C" card recently. Is this 1) an immaculate cure, 2) you putting on your big boy
pants and taking responsibility for your own sorry life and mind, or 3)
the time where you try to wiggle out of a past sequel of lies? We've
seen all but variation 2 in past interactions. The curious want to know
the real skinny so speak up!
--
Jeff Barnett
On 6/17/23 6:03 PM, Jeff Barnett wrote:
My assumption (but just that) is that it has been a lie the whole time
to try to gain sympathy. He has earned no reputation for honesty, and so
none will be given.
I will admit he might have been sick, but there has been no actual
evidence of it, so it is merely an unsubstantiated claim.
On 6/17/2023 6:18 PM, Richard Damon wrote:
I did have cancer jam-packed in every lymph node.
After chemotherapy last summer this cleared up.
It is my current understanding that follicular lymphoma always
comes back eventually.
A FLIPI index score of 3 was very bad news:
a 53% five-year survival rate and a 35% ten-year survival rate.
https://www.nature.com/articles/s41408-019-0269-6
On 6/17/2023 6:13 PM, Richard Damon wrote:
In what context does "Does the machine D(D) halt when run?" become
self-contradictory?
Jack could be asked the question:
Will Jack answer "no" to this question?
For Jack it is self-contradictory; for others that are not Jack it is
not self-contradictory. Context changes the semantics.
On 6/17/23 7:44 PM, olcott wrote:
Which is a fairly amazing recovery, as your reports from a year and a
half ago were something like 90% dead by the end of last year, from my
memory.
I won't say you are lying, as I have no evidence, and I do admit you
could be telling the truth; but considering your veracity on other
topics, you have earned no credit in believability, and shading some of
the truth is an act I wouldn't put past you.
On 6/17/23 7:58 PM, olcott wrote:
Jack could be asked the question:
Will Jack answer "no" to this question?
For Jack it is self-contradictory; for others that are not Jack it is
not self-contradictory. Context changes the semantics.
But you are missing the difference. A decider is a fixed piece of code,
so its answer to this question has been fixed since it was designed.
Thus what it will say isn't a variable that can lead to the
self-contradiction cycle, but a fixed result that will either be correct
or incorrect.
On 6/17/2023 8:31 PM, Richard Damon wrote:
On 6/17/23 7:58 PM, olcott wrote:
On 6/17/2023 6:13 PM, Richard Damon wrote:
On 6/17/23 5:46 PM, olcott wrote:When this question is posed to machine H.
On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
Richard Damon <Richard@Damon-Family.org> writes:
Except that the Halting Problem isn't a "Self-Contradictory"
Quesiton, so
the answer doesn't apply.
That's an interesting point that would often catch students out. And >>>>>> the reason /why/ it catches so many out eventually led me to stop
using
the proof-by-contradiction argument in my classes.
The thing is, it looks so very much like a self-contradicting
question
is being asked. The students think they can see it right there in >>>>>> the
constructed code: "if H says I halt, I don't halt!".
Of course, they are wrong. The code is /not/ there. The code
calls a
function that does not exist, so "it" (the constructed code, the
whole
program) does not exist either.
The fact that it's code, and the students are almost all
programmers and
not mathematicians, makes it worse. A mathematician seeing "let p be >>>>>> the largest prime" does not assume that such a p exists. So when a >>>>>> prime number p' > p is constructed from p, this is not seen as a
"self-contradictory number" because neither p nor p' exist. But the >>>>>> halting theorem is even more deceptive for programmers, because the >>>>>> desired function, H (or whatever), appears to be so well defined
-- much
more well-defined than "the largest prime". We have an exact
specification for it, mapping arguments to returned values. It's >>>>>> just
software engineering to write such things (they erroneously assume). >>>>>>
These sorts of proof can always be re-worded so as to avoid the
initial
assumption. For example, we can start "let p be any prime", and
from p
we construct a prime p' > p. And for halting, we can start "let H be >>>>>> any subroutine of two arguments always returning true or false".
Now,
all the objects /do/ exist. In the first case, the construction
shows
that no prime is the largest, and in the second it shows that no
subroutine computes the halting function.
This issue led to another change. In the last couple of years, I would
start the course by setting Post's correspondence problem as if it were
just a fun programming challenge. As the days passed (and the course
got into more and more serious material) it would start to become clear
that this was no ordinary programming challenge. Many students started
to suspect that, despite the trivial sounding specification, no program
could do the job. I always felt a bit uneasy doing this, as if I was
not being 100% honest, but it was a very useful learning experience for
most.
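Post's correspondence problem really does read like a routine programming exercise, which is what makes it such an effective trap. A brute-force searcher (a hypothetical sketch, not from the thread) can only semi-decide it: it finds a matching index sequence if one exists within the search bound, but coming up empty proves nothing.

```python
from itertools import product

def pcp_search(dominoes, max_len=6):
    """Brute-force search for a Post correspondence solution: a
    non-empty index sequence whose concatenated top strings equal
    the concatenated bottom strings. Failure up to max_len proves
    nothing -- the problem is undecidable in general."""
    for n in range(1, max_len + 1):
        for seq in product(range(len(dominoes)), repeat=n):
            top = "".join(dominoes[i][0] for i in seq)
            bot = "".join(dominoes[i][1] for i in seq)
            if top == bot:
                return seq
    return None

# A classic solvable instance:
print(pcp_search([("a", "baa"), ("ab", "aa"), ("bba", "bb")]))
# -> (2, 1, 2, 0): bba+ab+bba+a == bb+aa+bb+baa == "bbaabbbaa"
```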
sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
You ask someone (we'll call him "Jack") to give a truthful
yes/no answer to the following question:

Will Jack's answer to this question be no?

Jack can't possibly give a correct yes/no answer to the question.
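Daryl's point can be checked mechanically. Treating Jack's reply as a fixed value, the question "will the answer be no?" makes both possible answers wrong (a small illustrative sketch, not from the thread):

```python
def answer_is_correct(answer):
    """For the question "Will Jack's answer to this question be no?",
    an answer is correct iff what it claims matches what was said."""
    claims_answer_is_no = (answer == "yes")   # "yes" asserts the answer is "no"
    answer_actually_is_no = (answer == "no")
    return claims_answer_is_no == answer_actually_is_no

# Both possible answers fail:
print(answer_is_correct("yes"))  # False
print(answer_is_correct("no"))   # False
```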
It is an easily verified fact that when Jack's question is posed to
Jack, the question is self-contradictory for Jack, or for anyone else
having a pathological relationship to the question.
But the problem is that "Jack" here is assumed to be a volitional being.
H is not; it is a program, so before we even ask H what will happen,
the answer has been fixed by the definition of the code of H.
It is also clear that when a question has no yes or no answer because
it is self-contradictory, that question is aptly classified as
incorrect.
And the actual question DOES have a yes or no answer: in this case,
since H(D,D) says 0 (non-halting), the actual answer to the question
"does D(D) halt?" is YES.

You just confuse yourself by trying to imagine a program that can
somehow change itself "at will".
It is incorrect to say that a question is not self-contradictory on the
basis that it is not self-contradictory in some contexts. If a question
is self-contradictory in some contexts, then in those contexts it is an
incorrect question.
In what context does "Does the machine D(D) halt when run?" become
self-contradictory?
Jack could be asked the question:
Will Jack answer "no" to this question?
For Jack it is self-contradictory; for others that are not
Jack it is not self-contradictory. Context changes the semantics.
But you are missing the difference. A decider is a fixed piece of
code, so its answer to this question has been fixed ever since it
was designed. Thus what it will say isn't a variable that can
lead to the self-contradiction cycle, but a fixed result that will
either be correct or incorrect.
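The claim that a decider's verdict is settled in advance, and therefore simply right or wrong, can be illustrated. In this hypothetical sketch, H is hard-coded to return 0 ("does not halt"); the D built against it then halts immediately, so the question "does D(D) halt?" has the definite answer yes, and this particular H is just wrong:

```python
def H(prog, arg):
    # A fixed decider: its verdict was frozen when the code was written.
    return 0  # 0 means "prog(arg) does not halt"

def D(x):
    if H(x, x):       # if H predicts D(D) halts, loop forever...
        while True:
            pass
    return "halted"   # ...but this H said 0, so D(D) halts at once

print(D(D))           # "halted": the question had a definite answer; H got it wrong
```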
Every input to a Turing machine decider such that both Boolean return
values are incorrect is an incorrect input.
On 6/17/23 10:29 PM, olcott wrote:
Every input to a Turing machine decider such that both Boolean return
values are incorrect is an incorrect input.
Except it isn't. The problem is you are looking at two different
machines and two different inputs.
On 6/17/2023 8:46 PM, Richard Damon wrote:
On 6/17/23 7:44 PM, olcott wrote:
On 6/17/2023 6:18 PM, Richard Damon wrote:
On 6/17/23 6:03 PM, Jeff Barnett wrote:
By the way, we have noticed that you haven't played the big "C"
card recently. Is this 1) an immaculate cure, 2) you putting on
your big boy pants and taking responsibility for your own sorry
life and mind, or 3) the time where you try to wiggle out of a past
sequel of lies? We've seen all but variation 2 in past
interactions. The curious want to know the real skinny so speak up!
--
Jeff Barnett
My assumption (but just that) is that it has been a lie the whole
time to try to gain sympathy. He has earned no reputation for
honesty, and so none will be given.
I will admit he might have been sick, but there has been no actual
evidence of it, so it is merely an unsubstantiated claim.
I did have cancer jam-packed in every lymph node.
After chemotherapy last summer this has cleared up.
It is my current understanding that follicular lymphoma always
comes back eventually.
A FLIPI index score of 3 was very bad news:
a 53% five-year survival rate and a 35% ten-year survival rate.
https://www.nature.com/articles/s41408-019-0269-6
Which is a fairly amazing recovery, as your reports from a year and a
half ago were something like 90% dead by the end of last year, from my
memory.
I won't say you are lying, as I have no evidence, and do admit you
could be telling the truth; but considering your veracity on other
topics, you have no credit earned in believability, and shading some
of the truth is an act I wouldn't put past you.
It is not the case that I ever lied on this forum. Most people
make the mistake of calling me a liar entirely on the basis that
they really, really don't believe me and that what I say goes
against conventional wisdom.
Most people seem to take conventional wisdom as the infallible
word of God.
On 6/17/2023 9:57 PM, Richard Damon wrote:
Except it isn't. The problem is you are looking at two different
machines and two different inputs.

If no one can possibly correctly answer what correct return value
any H<n> having a pathological relationship to its input D<n> could
provide, then that is proof that D<n> is an invalid input for H<n>,
in the same way that any self-contradictory question is an
incorrect question.
On 6/17/23 11:10 PM, olcott wrote:
If no one can possibly correctly answer what correct return value
any H<n> having a pathological relationship to its input D<n> could
provide, then that is proof that D<n> is an invalid input for H<n>,
in the same way that any self-contradictory question is an
incorrect question.

But you have the wrong Question. The Question is "Does D(D) Halt?",
and that HAS a correct answer: since your H(D,D) returns 0, the
answer is that D(D) does Halt, and thus H was wrong.
On 6/18/23 10:32 AM, olcott wrote:
You ask someone (we'll call him "Jack") to give a truthful
yes/no answer to the following question:

Will Jack's answer to this question be no?

For Jack the question is self-contradictory; for others that are
not Jack it is not self-contradictory. The context (of who is
asked) changes the semantics. The actual question posed to Jack
has no correct answer. Every question that lacks a correct yes/no
answer because the question is self-contradictory is an incorrect
question. If you are not a mere Troll you will agree with this.

But the ACTUAL QUESTION DOES have a correct answer.
On 6/18/2023 7:02 AM, Richard Damon wrote:
On 6/17/23 11:10 PM, olcott wrote:sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
On 6/17/2023 9:57 PM, Richard Damon wrote:
On 6/17/23 10:29 PM, olcott wrote:If no one can possibly correctly answer what the correct return value
On 6/17/2023 8:31 PM, Richard Damon wrote:
On 6/17/23 7:58 PM, olcott wrote:
On 6/17/2023 6:13 PM, Richard Damon wrote:
On 6/17/23 5:46 PM, olcott wrote:When this question is posed to machine H.
On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
Richard Damon <Richard@Damon-Family.org> writes:
Except that the Halting Problem isn't a "Self-Contradictory" >>>>>>>>>>> Quesiton, so
the answer doesn't apply.
That's an interesting point that would often catch students >>>>>>>>>> out. And
the reason /why/ it catches so many out eventually led me to >>>>>>>>>> stop using
the proof-by-contradiction argument in my classes.
The thing is, it looks so very much like a self-contradicting >>>>>>>>>> question
is being asked. The students think they can see it right >>>>>>>>>> there in the
constructed code: "if H says I halt, I don't halt!".
Of course, they are wrong. The code is /not/ there. The code >>>>>>>>>> calls a
function that does not exist, so "it" (the constructed code, >>>>>>>>>> the whole
program) does not exist either.
The fact that it's code, and the students are almost all
programmers and
not mathematicians, makes it worse. A mathematician seeing >>>>>>>>>> "let p be
the largest prime" does not assume that such a p exists. So >>>>>>>>>> when a
prime number p' > p is constructed from p, this is not seen as a >>>>>>>>>> "self-contradictory number" because neither p nor p' exist. >>>>>>>>>> But the
halting theorem is even more deceptive for programmers,
because the
desired function, H (or whatever), appears to be so well
defined -- much
more well-defined than "the largest prime". We have an exact >>>>>>>>>> specification for it, mapping arguments to returned values. >>>>>>>>>> It's just
software engineering to write such things (they erroneously >>>>>>>>>> assume).
These sorts of proof can always be re-worded so as to avoid >>>>>>>>>> the initial
assumption. For example, we can start "let p be any prime", >>>>>>>>>> and from p
we construct a prime p' > p. And for halting, we can start >>>>>>>>>> "let H be
any subroutine of two arguments always returning true or
false". Now,
all the objects /do/ exist. In the first case, the
construction shows
that no prime is the largest, and in the second it shows that no >>>>>>>>>> subroutine computes the halting function.
This issue led to another change. In the last couple of
years, I would
start the course by setting Post's correspondence problem as >>>>>>>>>> if it were
just a fun programming challenge. As the days passed (and the >>>>>>>>>> course
got into more and more serious material) it would start to >>>>>>>>>> become clear
that this was no ordinary programming challenge. Many
students started
to suspect that, despite the trivial sounding specification, >>>>>>>>>> no program
could do the job. I always felt a bit uneasy doing this, as >>>>>>>>>> if I was
not being 100% honest, but it was a very useful learning
experience for
most.
sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
You ask someone (we'll call him "Jack") to give a truthful >>>>>>>>> yes/no answer to the following question:
Will Jack's answer to this question be no?
Jack can't possibly give a correct yes/no answer to the >>>>>>>>> question.
It is an easily verified fact that when Jack's question is
posed to Jack
that this question is self-contradictory for Jack or anyone
else having
a pathological relationship to the question.
But the problem is "Jack" here is assumed to be a volitional being. >>>>>>>>
H is not, it is a program, so before we even ask H what will
happen, the answer has been fixed by the definition of the codr >>>>>>>> of H.
It is also clear that when a question has no yes or no answer >>>>>>>>> because
it is self-contradictory that this question is aptly classified as >>>>>>>>> incorrect.
And the actual question DOES have a yes or no answer, in this
case, since H(D,D) says 0 (non-Halting) the actual answer to the >>>>>>>> question does D(D) Halt is YES.
You just confuse yourself by trying to imagine a program that
can somehow change itself "at will".
It is incorrect to say that a question is not
self-contradictory on the
basis that it is not self-contradictory in some contexts. If a >>>>>>>>> question
is self-contradictory in some contexts then in these contexts >>>>>>>>> it is an
incorrect question.
In what context is "Does the Machine D(D) Halt When run" become >>>>>>>> self-contradictory?
Jack could be asked the question:
Will Jack answer "no" to this question?
For Jack it is self-contradictory for others that are not
Jack it is not self-contradictory. Context changes the semantics. >>>>>>>
But you are missing the difference. A Decider is a fixed piece of
code, so its answer has always been fixed to this question since
it has been designed. Thus what it will say isn't a varialbe that
can lead to the self-contradiction cycle, but a fixed result that
will either be correct or incorrect.
Every input to a Turing machine decider such that both Boolean return >>>>> values are incorrect is an incorrect input.
Except it isn't. The problem is you are looking at two different
machines and two different inputs.
If no one can possibly correctly answer what the correct return value
is that any H<n> having a pathological relationship to its input D<n>
could possibly provide, then that is proof that D<n> is an invalid
input for H<n>, in the same way that any self-contradictory question
is an incorrect question.
But you have the wrong Question. The Question is Does D(D) Halt, and
that HAS a correct answer, since your H(D,D) returns 0, the answer is
that D(D) does Halt, and thus H was wrong.
You ask someone (we'll call him "Jack") to give a truthful
yes/no answer to the following question:
Will Jack's answer to this question be no?
For Jack the question is self-contradictory; for others that
are not Jack it is not self-contradictory.
The context (of who is asked) changes the semantics.
Every question that lacks a correct yes/no answer because
the question is self-contradictory is an incorrect question.
If you are not a mere Troll you will agree with this.
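Jack's predicament can be checked exhaustively: whichever of the two answers Jack gives, the claim that answer makes about itself comes out false. A minimal sketch:

```python
# "Will Jack's answer to this question be no?"
# Check both of Jack's possible answers against what each answer asserts.
verdicts = {}
for answer in ("yes", "no"):
    answer_was_no = (answer == "no")    # the fact established by answering
    asserted_no = (answer == "yes")     # "yes" asserts "my answer will be no"
    verdicts[answer] = (asserted_no == answer_was_no)

print(verdicts)   # both answers come out incorrect: {'yes': False, 'no': False}
```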
On 6/18/2023 11:31 AM, Richard Damon wrote:
On 6/18/23 10:32 AM, olcott wrote:
The actual question posed to Jack has no correct answer.
On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
Richard Damon <Richard@Damon-Family.org> writes:
Except that the Halting Problem isn't a "Self-Contradictory" Question,
so the answer doesn't apply.

That's an interesting point that would often catch students out. And
the reason /why/ it catches so many out eventually led me to stop using
the proof-by-contradiction argument in my classes.

The thing is, it looks so very much like a self-contradicting question
is being asked. The students think they can see it right there in the
constructed code: "if H says I halt, I don't halt!".

Of course, they are wrong. The code is /not/ there. The code calls a
function that does not exist, so "it" (the constructed code, the whole
program) does not exist either.

The fact that it's code, and the students are almost all programmers and
not mathematicians, makes it worse. A mathematician seeing "let p be
the largest prime" does not assume that such a p exists. So when a
prime number p' > p is constructed from p, this is not seen as a
"self-contradictory number" because neither p nor p' exist. But the
halting theorem is even more deceptive for programmers, because the
desired function, H (or whatever), appears to be so well defined -- much
more well-defined than "the largest prime". We have an exact
specification for it, mapping arguments to returned values. It's just
software engineering to write such things (they erroneously assume).

These sorts of proof can always be re-worded so as to avoid the initial
assumption. For example, we can start "let p be any prime", and from p
we construct a prime p' > p. And for halting, we can start "let H be
any subroutine of two arguments always returning true or false". Now,
all the objects /do/ exist. In the first case, the construction shows
that no prime is the largest, and in the second it shows that no
subroutine computes the halting function.

This issue led to another change. In the last couple of years, I would
start the course by setting Post's correspondence problem as if it were
just a fun programming challenge. As the days passed (and the course
got into more and more serious material) it would start to become clear
that this was no ordinary programming challenge. Many students started
to suspect that, despite the trivial sounding specification, no program
could do the job. I always felt a bit uneasy doing this, as if I was
not being 100% honest, but it was a very useful learning experience for
most.
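The prime half of that reworded argument ("let p be any prime, and from p we construct a prime p' > p") can be carried out directly; `larger_prime` is a hypothetical helper name for this sketch:

```python
def larger_prime(p):
    # Euclid-style construction: no prime <= p divides p! + 1,
    # so its smallest prime factor is a prime p' > p.
    n = 1
    for k in range(2, p + 1):
        n *= k
    n += 1                     # n = p! + 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d           # smallest factor of n is necessarily prime
        d += 1
    return n                   # n itself is prime

print([(p, larger_prime(p)) for p in (2, 3, 5, 7)])
# [(2, 3), (3, 7), (5, 11), (7, 71)]
```

Every object here exists: from any prime p the construction produces a concrete larger prime, so no prime is the largest, with no "largest prime" ever assumed.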
But the ACTUAL QUESTION DOES have a correct answer.
The actual question posed to anyone else is a semantically
different question even though the words are the same.
But the question to Jack isn't the question you are actually saying
doesn't have an answer.
That is great we made excellent progress on this.
The question posed to Jack does not have an answer because within the
context that the question is posed to Jack it is self-contradictory.
You can ignore that context matters yet that is not any rebuttal.

Right, but that has ZERO bearing on the Halting Problem,
On 6/18/23 2:05 PM, olcott wrote:In other words you fail to understand that when Jack's question is posed
On 6/18/2023 12:46 PM, Richard Damon wrote:
On 6/18/23 1:09 PM, olcott wrote:That is great we made excellent progress on this.
On 6/18/2023 11:54 AM, Richard Damon wrote:
On 6/18/23 12:41 PM, olcott wrote:The question posed to Jack does not have an answer because within the
On 6/18/2023 11:31 AM, Richard Damon wrote:
On 6/18/23 10:32 AM, olcott wrote:
The actual question posed to Jack has no correct answer.
On 6/18/2023 7:02 AM, Richard Damon wrote:
On 6/17/23 11:10 PM, olcott wrote:
On 6/17/2023 9:57 PM, Richard Damon wrote:
On 6/17/23 10:29 PM, olcott wrote:
On 6/17/2023 8:31 PM, Richard Damon wrote:
On 6/17/23 7:58 PM, olcott wrote:
On 6/17/2023 6:13 PM, Richard Damon wrote:
On 6/17/23 5:46 PM, olcott wrote:
When this question is posed to machine H.
On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
Richard Damon <Richard@Damon-Family.org> writes:

Except that the Halting Problem isn't a "Self-Contradictory" Question, so the answer doesn't apply.

That's an interesting point that would often catch students out. And the reason /why/ it catches so many out eventually led me to stop using the proof-by-contradiction argument in my classes.

The thing is, it looks so very much like a self-contradicting question is being asked. The students think they can see it right there in the constructed code: "if H says I halt, I don't halt!".

Of course, they are wrong. The code is /not/ there. The code calls a function that does not exist, so "it" (the constructed code, the whole program) does not exist either.

The fact that it's code, and the students are almost all programmers and not mathematicians, makes it worse. A mathematician seeing "let p be the largest prime" does not assume that such a p exists. So when a prime number p' > p is constructed from p, this is not seen as a "self-contradictory number" because neither p nor p' exist. But the halting theorem is even more deceptive for programmers, because the desired function, H (or whatever), appears to be so well defined -- much more well-defined than "the largest prime". We have an exact specification for it, mapping arguments to returned values. It's just software engineering to write such things (they erroneously assume).

These sorts of proof can always be re-worded so as to avoid the initial assumption. For example, we can start "let p be any prime", and from p we construct a prime p' > p. And for halting, we can start "let H be any subroutine of two arguments always returning true or false". Now, all the objects /do/ exist. In the first case, the construction shows that no prime is the largest, and in the second it shows that no subroutine computes the halting function.
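Bacarisse's re-worded framing ("let H be any subroutine of two arguments always returning true or false") can be sketched as running code. The following is a minimal illustrative sketch, not anyone's actual H: the names make_d and h_never are invented here, and h_never merely stands in for one arbitrary candidate decider.

```python
# For ANY candidate two-argument "decider" h, construct the program d
# that does the opposite of whatever h predicts about d run on d.
# Whatever verdict h gives about d(d) is then wrong, so no such h
# computes the halting function.

def make_d(h):
    """Build the refuting program for a candidate decider h(prog, arg) -> bool."""
    def d(x):
        if h(x, x):          # h predicts x(x) halts...
            while True:      # ...so d loops forever instead,
                pass
        return "halted"      # otherwise d halts immediately.
    return d

# One concrete candidate: claims that nothing halts.
def h_never(prog, arg):
    return False

d = make_d(h_never)
print(d(d))  # prints "halted": d(d) halts, contradicting h_never's verdict

# A candidate that always answers True is refuted symmetrically:
# its d(d) would loop forever, so we do not run that case here.
```

Note that every object in the sketch exists; nothing is assumed into existence, which is exactly the point of the re-worded proof.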
This issue led to another change. In the last couple of years, I would start the course by setting Post's correspondence problem as if it were just a fun programming challenge. As the days passed (and the course got into more and more serious material) it would start to become clear that this was no ordinary programming challenge. Many students started to suspect that, despite the trivial sounding specification, no program could do the job. I always felt a bit uneasy doing this, as if I was not being 100% honest, but it was a very useful learning experience for most.
sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
You ask someone (we'll call him "Jack") to give a truthful yes/no answer to the following question:

Will Jack's answer to this question be no?

Jack can't possibly give a correct yes/no answer to the question.

It is an easily verified fact that when Jack's question is posed to Jack that this question is self-contradictory for Jack or anyone else having a pathological relationship to the question.

But the problem is "Jack" here is assumed to be a volitional being. H is not, it is a program, so before we even ask H what will happen, the answer has been fixed by the definition of the code of H.

It is also clear that when a question has no yes or no answer because it is self-contradictory that this question is aptly classified as incorrect.

And the actual question DOES have a yes or no answer, in this case, since H(D,D) says 0 (non-Halting) the actual answer to the question does D(D) Halt is YES. You just confuse yourself by trying to imagine a program that can somehow change itself "at will".

It is incorrect to say that a question is not self-contradictory on the basis that it is not self-contradictory in some contexts. If a question is self-contradictory in some contexts then in these contexts it is an incorrect question.

In what context is "Does the Machine D(D) Halt When run" become self-contradictory?

Jack could be asked the question:
Will Jack answer "no" to this question?

For Jack it is self-contradictory; for others that are not Jack it is not self-contradictory. Context changes the semantics.

But you are missing the difference. A Decider is a fixed piece of code, so its answer has always been fixed to this question since it has been designed. Thus what it will say isn't a variable that can lead to the self-contradiction cycle, but a fixed result that will either be correct or incorrect.

Every input to a Turing machine decider such that both Boolean return values are incorrect is an incorrect input.

Except it isn't. The problem is you are looking at two different machines and two different inputs.

If no one can possibly correctly answer what the correct return value that any H<n> having a pathological relationship to its input D<n> could possibly provide then that is proof that D<n> is an invalid input for H<n> in the same way that any self-contradictory question is an incorrect question.

But you have the wrong Question. The Question is Does D(D) Halt, and that HAS a correct answer, since your H(D,D) returns 0, the answer is that D(D) does Halt, and thus H was wrong.
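The point that a decider is fixed code, so its verdict on (D, D) is determined before it is ever asked, can be made concrete. This is a toy sketch under the thread's convention that 0 means "does not halt" and 1 means "halts"; the h and d below are illustrative stand-ins, not the actual H and D under discussion.

```python
# A decider is a fixed piece of code: its answer to "does D(D) halt?"
# was determined the moment it was written. This toy h always returns 0
# ("does not halt"), mirroring the thread's H(D,D) == 0 scenario.

def h(prog, arg):
    return 0                 # fixed verdict: claims the input does not halt

def d(x):
    if h(x, x) == 1:         # had h claimed halting, d would loop forever
        while True:
            pass
    return 0                 # h claimed non-halting, so d simply halts

# The question "does D(D) halt?" now has a definite answer: just run it.
d(d)   # returns, i.e. D(D) halts, so h's fixed verdict of 0 was wrong
```

Because h is fixed, there is no self-referential "cycle" at run time: the question about D(D) has one definite answer, and h simply gives the wrong one.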
You ask someone (we'll call him "Jack") to give a truthful yes/no answer to the following question:

Will Jack's answer to this question be no?

For Jack the question is self-contradictory; for others that are not Jack it is not self-contradictory. The context (of who is asked) changes the semantics.

Every question that lacks a correct yes/no answer because the question is self-contradictory is an incorrect question. If you are not a mere Troll you will agree with this.

But the ACTUAL QUESTION DOES have a correct answer.

The actual question posed to anyone else is a semantically different question even though the words are the same.

But the question to Jack isn't the question you are actually saying doesn't have an answer.

The question posed to Jack does not have an answer because within the context that the question is posed to Jack it is self-contradictory. You can ignore that context matters yet that is not any rebuttal.

Right, but that has ZERO bearing on the Halting Problem,

When ChatGPT understood that Jack's question is self-contradictory for Jack then it was also able to understand the following isomorphism:

For every H<n> on pathological input D<n> both Boolean return values from H<n> are incorrect for D<n> proving that D<n> is isomorphic to a self-contradictory question for every H<n>.

No, because a given H<n> can only give one result,
On 6/18/2023 12:46 PM, Richard Damon wrote:
On 6/18/23 1:09 PM, olcott wrote:
That is great we made excellent progress on this.
On 6/18/2023 11:54 AM, Richard Damon wrote:
On 6/18/23 12:41 PM, olcott wrote:
The question posed to Jack does not have an answer because within the context that the question is posed to Jack it is self-contradictory. You can ignore that context matters yet that is not any rebuttal.
On 6/18/2023 1:20 PM, Richard Damon wrote:
On 6/18/23 2:05 PM, olcott wrote:
In other words you fail to understand that when Jack's question is posed to someone else that it remains self-contradictory.
On 6/18/23 2:05 PM, olcott wrote:Some of the elements of H<n>/D<n> are identical except for the return
On 6/18/2023 12:46 PM, Richard Damon wrote:
On 6/18/23 1:09 PM, olcott wrote:That is great we made excellent progress on this.
On 6/18/2023 11:54 AM, Richard Damon wrote:
On 6/18/23 12:41 PM, olcott wrote:The question posed to Jack does not have an answer because within the
On 6/18/2023 11:31 AM, Richard Damon wrote:
On 6/18/23 10:32 AM, olcott wrote:The actual question posed to Jack has no correct answer.
On 6/18/2023 7:02 AM, Richard Damon wrote:
On 6/17/23 11:10 PM, olcott wrote:sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
On 6/17/2023 9:57 PM, Richard Damon wrote:
On 6/17/23 10:29 PM, olcott wrote:If no one can possibly correctly answer what the correct
On 6/17/2023 8:31 PM, Richard Damon wrote:
On 6/17/23 7:58 PM, olcott wrote:
On 6/17/2023 6:13 PM, Richard Damon wrote:
On 6/17/23 5:46 PM, olcott wrote:When this question is posed to machine H.
On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
Richard Damon <Richard@Damon-Family.org> writes: >>>>>>>>>>>>>>>>>
Except that the Halting Problem isn't a
"Self-Contradictory" Quesiton, so
the answer doesn't apply.
That's an interesting point that would often catch >>>>>>>>>>>>>>>>> students out. And
the reason /why/ it catches so many out eventually led >>>>>>>>>>>>>>>>> me to stop using
the proof-by-contradiction argument in my classes. >>>>>>>>>>>>>>>>>
The thing is, it looks so very much like a
self-contradicting question
is being asked. The students think they can see it >>>>>>>>>>>>>>>>> right there in the
constructed code: "if H says I halt, I don't halt!". >>>>>>>>>>>>>>>>>
Of course, they are wrong. The code is /not/ there. >>>>>>>>>>>>>>>>> The code calls a
function that does not exist, so "it" (the constructed >>>>>>>>>>>>>>>>> code, the whole
program) does not exist either.
The fact that it's code, and the students are almost >>>>>>>>>>>>>>>>> all programmers and
not mathematicians, makes it worse. A mathematician >>>>>>>>>>>>>>>>> seeing "let p be
the largest prime" does not assume that such a p >>>>>>>>>>>>>>>>> exists. So when a
prime number p' > p is constructed from p, this is not >>>>>>>>>>>>>>>>> seen as a
"self-contradictory number" because neither p nor p' >>>>>>>>>>>>>>>>> exist. But the
halting theorem is even more deceptive for programmers, >>>>>>>>>>>>>>>>> because the
desired function, H (or whatever), appears to be so >>>>>>>>>>>>>>>>> well defined -- much
more well-defined than "the largest prime". We have an >>>>>>>>>>>>>>>>> exact
specification for it, mapping arguments to returned >>>>>>>>>>>>>>>>> values. It's just
software engineering to write such things (they >>>>>>>>>>>>>>>>> erroneously assume).
These sorts of proof can always be re-worded so as to >>>>>>>>>>>>>>>>> avoid the initial
assumption. For example, we can start "let p be any >>>>>>>>>>>>>>>>> prime", and from p
we construct a prime p' > p. And for halting, we can >>>>>>>>>>>>>>>>> start "let H be
any subroutine of two arguments always returning true >>>>>>>>>>>>>>>>> or false". Now,
all the objects /do/ exist. In the first case, the >>>>>>>>>>>>>>>>> construction shows
that no prime is the largest, and in the second it >>>>>>>>>>>>>>>>> shows that no
subroutine computes the halting function.
This issue led to another change. In the last couple >>>>>>>>>>>>>>>>> of years, I would
start the course by setting Post's correspondence >>>>>>>>>>>>>>>>> problem as if it were
just a fun programming challenge. As the days passed >>>>>>>>>>>>>>>>> (and the course
got into more and more serious material) it would start >>>>>>>>>>>>>>>>> to become clear
that this was no ordinary programming challenge. Many >>>>>>>>>>>>>>>>> students started
to suspect that, despite the trivial sounding >>>>>>>>>>>>>>>>> specification, no program
could do the job. I always felt a bit uneasy doing >>>>>>>>>>>>>>>>> this, as if I was
not being 100% honest, but it was a very useful >>>>>>>>>>>>>>>>> learning experience for
most.
sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM >>>>>>>>>>>>>>>> You ask someone (we'll call him "Jack") to give a >>>>>>>>>>>>>>>> truthful
yes/no answer to the following question: >>>>>>>>>>>>>>>>
Will Jack's answer to this question be no? >>>>>>>>>>>>>>>>
Jack can't possibly give a correct yes/no answer to >>>>>>>>>>>>>>>> the question.
It is an easily verified fact that when Jack's question >>>>>>>>>>>>>>>> is posed to Jack
that this question is self-contradictory for Jack or >>>>>>>>>>>>>>>> anyone else having
a pathological relationship to the question.
But the problem is "Jack" here is assumed to be a >>>>>>>>>>>>>>> volitional being.
H is not, it is a program, so before we even ask H what >>>>>>>>>>>>>>> will happen, the answer has been fixed by the definition >>>>>>>>>>>>>>> of the codr of H.
It is also clear that when a question has no yes or no >>>>>>>>>>>>>>>> answer because
it is self-contradictory that this question is aptly >>>>>>>>>>>>>>>> classified as
incorrect.
And the actual question DOES have a yes or no answer, in >>>>>>>>>>>>>>> this case, since H(D,D) says 0 (non-Halting) the actual >>>>>>>>>>>>>>> answer to the question does D(D) Halt is YES.
You just confuse yourself by trying to imagine a program >>>>>>>>>>>>>>> that can somehow change itself "at will".
It is incorrect to say that a question is not
self-contradictory on the
basis that it is not self-contradictory in some >>>>>>>>>>>>>>>> contexts. If a question
is self-contradictory in some contexts then in these >>>>>>>>>>>>>>>> contexts it is an
incorrect question.
In what context is "Does the Machine D(D) Halt When run" >>>>>>>>>>>>>>> become self-contradictory?
Jack could be asked the question:
Will Jack answer "no" to this question?
For Jack it is self-contradictory for others that are not >>>>>>>>>>>>>> Jack it is not self-contradictory. Context changes the >>>>>>>>>>>>>> semantics.
But you are missing the difference. A Decider is a fixed >>>>>>>>>>>>> piece of code, so its answer has always been fixed to this >>>>>>>>>>>>> question since it has been designed. Thus what it will say >>>>>>>>>>>>> isn't a varialbe that can lead to the self-contradiction >>>>>>>>>>>>> cycle, but a fixed result that will either be correct or >>>>>>>>>>>>> incorrect.
Every input to a Turing machine decider such that both >>>>>>>>>>>> Boolean return
values are incorrect is an incorrect input.
Except it isn't. The problem is you are looking at two
different machines and two different inputs.
return value that any H<n> having a pathological relationship >>>>>>>>>> to its input D<n> could possibly provide then that is proof >>>>>>>>>> that D<n> is an invalid input for H<n> in the same way that >>>>>>>>>> any self-contradictory question is an incorrect question.
But you have the wrong Question. The Question is Does D(D)
Halt, and that HAS a correct answer, since your H(D,D) returns >>>>>>>>> 0, the answer is that D(D) does Halt, and thus H was wrong.
You ask someone (we'll call him "Jack") to give a truthful >>>>>>>> yes/no answer to the following question:
Will Jack's answer to this question be no?
For Jack the question is self-contradictory for others that
are not Jack it is not self-contradictory.
The context (of who is asked) changes the semantics.
Every question that lacks a correct yes/no answer because
the question is self-contradictory is an incorrect question.
If you are not a mere Troll you will agree with this.
But the ACTUAL QUESTION DOES have a correct answer.
The actual question posed to anyone else is a semantically
different question even though the words are the same.
But the question to Jack isn't the question you are actaully saying
doesn't have an answer.
context that the question is posed to Jack it is self-contradictory.
You can ignore that context matters yet that is not any rebuttal.
Right, but that has ZERO bearig on the Halting Problem,
When ChatGPT understood that Jack's question is self-contradictory for
Jack then it was also able to understand the following isomorphism:
For every H<n> on pathological input D<n> both Boolean return values
from H<n> are incorrect for D<n> proving that D<n> is isomorphic to a
self-contradictory question for every H<n>.
No, because a given H<n> can only give one result,
On 6/18/2023 1:20 PM, Richard Damon wrote:
On 6/18/23 2:05 PM, olcott wrote:
Some of the elements of H<n>/D<n> are identical except for the return
On 6/18/2023 12:46 PM, Richard Damon wrote:
On 6/18/23 1:09 PM, olcott wrote:
That is great we made excellent progress on this.
On 6/18/2023 11:54 AM, Richard Damon wrote:
On 6/18/23 12:41 PM, olcott wrote:
The question posed to Jack does not have an answer because within the
context that the question is posed to Jack it is self-contradictory.
You can ignore that context matters yet that is not any rebuttal.
On 6/18/2023 11:31 AM, Richard Damon wrote:
On 6/18/23 10:32 AM, olcott wrote:
The actual question posed to Jack has no correct answer.
On 6/18/2023 7:02 AM, Richard Damon wrote:
On 6/17/23 11:10 PM, olcott wrote:
sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
On 6/17/2023 9:57 PM, Richard Damon wrote:
On 6/17/23 10:29 PM, olcott wrote:
If no one can possibly correctly answer what the correct
return value that any H<n> having a pathological relationship
to its input D<n> could possibly provide then that is proof
that D<n> is an invalid input for H<n> in the same way that
any self-contradictory question is an incorrect question.
On 6/17/2023 8:31 PM, Richard Damon wrote:
On 6/17/23 7:58 PM, olcott wrote:
On 6/17/2023 6:13 PM, Richard Damon wrote:
On 6/17/23 5:46 PM, olcott wrote:
When this question is posed to machine H.
On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
Richard Damon <Richard@Damon-Family.org> writes:
Except that the Halting Problem isn't a
"Self-Contradictory" Question, so
the answer doesn't apply.
That's an interesting point that would often catch students out. And
the reason /why/ it catches so many out eventually led me to stop using
the proof-by-contradiction argument in my classes.
The thing is, it looks so very much like a self-contradicting question
is being asked. The students think they can see it right there in the
constructed code: "if H says I halt, I don't halt!".
Of course, they are wrong. The code is /not/ there. The code calls a
function that does not exist, so "it" (the constructed code, the whole
program) does not exist either.
The fact that it's code, and the students are almost all programmers and
not mathematicians, makes it worse. A mathematician seeing "let p be
the largest prime" does not assume that such a p exists. So when a
prime number p' > p is constructed from p, this is not seen as a
"self-contradictory number" because neither p nor p' exist. But the
halting theorem is even more deceptive for programmers, because the
desired function, H (or whatever), appears to be so well defined -- much
more well-defined than "the largest prime". We have an exact
specification for it, mapping arguments to returned values. It's just
software engineering to write such things (they erroneously assume).
These sorts of proof can always be re-worded so as to avoid the initial
assumption. For example, we can start "let p be any prime", and from p
we construct a prime p' > p. And for halting, we can start "let H be
any subroutine of two arguments always returning true or false". Now,
all the objects /do/ exist. In the first case, the construction shows
that no prime is the largest, and in the second it shows that no
subroutine computes the halting function.
This issue led to another change. In the last couple of years, I would
start the course by setting Post's correspondence problem as if it were
just a fun programming challenge. As the days passed (and the course
got into more and more serious material) it would start to become clear
that this was no ordinary programming challenge. Many students started
to suspect that, despite the trivial sounding specification, no program
could do the job. I always felt a bit uneasy doing this, as if I was
not being 100% honest, but it was a very useful learning experience for
most.
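Ben's re-worded argument ("let H be any subroutine of two arguments
always returning true or false") can be sketched as a short program.
This is a minimal illustration only; the names H_yes, H_no, and make_D
are hypothetical, not anyone's actual decider:

```python
def make_D(H):
    """Given any claimed halting decider H(prog, arg) -> bool,
    construct the program D that does the opposite of whatever
    H predicts about D run on itself."""
    def D(x):
        if H(D, D):          # H predicts "D(D) halts" ...
            while True:      # ... so D loops forever instead
                pass
        # H predicts "D(D) does not halt", so D halts immediately
    return D

# One concrete "subroutine of two arguments always returning
# true or false": it always answers True ("halts").
H_yes = lambda prog, arg: True
D_yes = make_D(H_yes)
# D_yes(D_yes) would loop forever, so H_yes is wrong about D_yes.

# Another: it always answers False ("does not halt").
H_no = lambda prog, arg: False
D_no = make_D(H_no)
D_no(D_no)   # returns immediately, so H_no is wrong about D_no
```

Every object here exists: for each fixed H its D exists, and H gives
the wrong verdict on that D, which is the sense in which no subroutine
computes the halting function.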
sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
You ask someone (we'll call him "Jack") to give a truthful
yes/no answer to the following question:
Will Jack's answer to this question be no?
Jack can't possibly give a correct yes/no answer to the question.
It is an easily verified fact that when Jack's question is posed to
Jack, this question is self-contradictory for Jack or anyone else
having a pathological relationship to the question.
But the problem is "Jack" here is assumed to be a volitional being.
H is not, it is a program, so before we even ask H what will happen,
the answer has been fixed by the definition of the code of H.
It is also clear that when a question has no yes or no answer because
it is self-contradictory that this question is aptly classified as
incorrect.
And the actual question DOES have a yes or no answer, in this case:
since H(D,D) says 0 (non-Halting), the actual answer to the question
"does D(D) Halt" is YES.
You just confuse yourself by trying to imagine a program that can
somehow change itself "at will".
It is incorrect to say that a question is not self-contradictory on the
basis that it is not self-contradictory in some contexts. If a question
is self-contradictory in some contexts then in these contexts it is an
incorrect question.
In what context is "Does the Machine D(D) Halt When run"
become self-contradictory?
Jack could be asked the question:
Will Jack answer "no" to this question?
For Jack it is self-contradictory; for others that are not Jack it is
not self-contradictory. Context changes the semantics.
But you are missing the difference. A Decider is a fixed piece of code,
so its answer has always been fixed to this question since it has been
designed. Thus what it will say isn't a variable that can lead to the
self-contradiction cycle, but a fixed result that will either be
correct or incorrect.
Every input to a Turing machine decider such that both Boolean return
values are incorrect is an incorrect input.
Except it isn't. The problem is you are looking at two different
machines and two different inputs.
But you have the wrong Question. The Question is Does D(D) Halt, and
that HAS a correct answer, since your H(D,D) returns 0, the answer is
that D(D) does Halt, and thus H was wrong.
You ask someone (we'll call him "Jack") to give a truthful
yes/no answer to the following question:
Will Jack's answer to this question be no?
For Jack the question is self-contradictory; for others that are not
Jack it is not self-contradictory.
The context (of who is asked) changes the semantics.
Every question that lacks a correct yes/no answer because
the question is self-contradictory is an incorrect question.
If you are not a mere Troll you will agree with this.
But the ACTUAL QUESTION DOES have a correct answer.
The actual question posed to anyone else is a semantically
different question even though the words are the same.
But the question to Jack isn't the question you are actually
saying doesn't have an answer.
Right, but that has ZERO bearing on the Halting Problem,
When ChatGPT understood that Jack's question is self-contradictory for
Jack, it was also able to understand the following isomorphism:
For every H<n> on pathological input D<n> both Boolean return values
from H<n> are incorrect for D<n>, proving that D<n> is isomorphic to a
self-contradictory question for every H<n>.
No, because a given H<n> can only give one result,
value from H. In both of these cases the return value is incorrect.
Since I have just defined the set of every halting problem
{decider/input} pair that can possibly exist in any universe, there is
no rebuttal of: "What about this element of this set?"
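The H<n>/D<n> family under discussion, where each H<n> differs only in
its hard-coded Boolean return value, can be sketched as follows. This
is a minimal hypothetical illustration (the names make_H and make_D are
not anyone's actual code):

```python
def make_H(rv):
    """H<n>: a 'decider' whose answer is the fixed Boolean rv."""
    def H(prog, arg):
        return rv
    return H

def make_D(H):
    """D<n>: the pathological input built against a given H<n>."""
    def D(x):
        if H(D, D):
            while True:   # H said "halts": loop forever instead
                pass
        # H said "does not halt": halt immediately
    return D

for rv in (False, True):
    Hn = make_H(rv)
    Dn = make_D(Hn)
    actually_halts = not rv   # D<n>(D<n>) halts exactly when H<n> says it won't
    print(f"H<n> returns {rv}, D<n>(D<n>) halts: {actually_halts}")
```

For each fixed H<n> the hard-coded answer is wrong on its own D<n>;
whether that makes D<n> an "incorrect question" or simply makes each
H<n> wrong about a well-posed one is exactly what this thread disputes.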
On 6/18/2023 2:19 PM, Richard Damon wrote:
On 6/18/23 2:47 PM, olcott wrote:
On 6/18/2023 1:20 PM, Richard Damon wrote:
Nope, can't be.
The only difference between otherwise identical pairs H<n>/D<n> and
H<m>/D<m> is the single integer value of 0/1 within H<n> and H<m>
respectively, thus proving that both True and False are the wrong
return values for the identical finite string pairs D<n>/D<m>.
On 6/18/2023 3:10 PM, Richard Damon wrote:
On 6/18/23 3:26 PM, olcott wrote:
On 6/18/2023 2:19 PM, Richard Damon wrote:
On 6/18/23 2:47 PM, olcott wrote:
On 6/18/2023 1:20 PM, Richard Damon wrote:
On 6/18/23 2:05 PM, olcott wrote:Some of the elements of H<n>/D<n> are identical except for the return >>>>> value from H. In both of these cases the return value is incorrect.
On 6/18/2023 12:46 PM, Richard Damon wrote:
On 6/18/23 1:09 PM, olcott wrote:That is great we made excellent progress on this.
On 6/18/2023 11:54 AM, Richard Damon wrote:
On 6/18/23 12:41 PM, olcott wrote:The question posed to Jack does not have an answer because
On 6/18/2023 11:31 AM, Richard Damon wrote:
On 6/18/23 10:32 AM, olcott wrote:The actual question posed to Jack has no correct answer. >>>>>>>>>>> The actual question posed to anyone else is a semantically >>>>>>>>>>> different question even though the words are the same.
On 6/18/2023 7:02 AM, Richard Damon wrote:
On 6/17/23 11:10 PM, olcott wrote:sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM >>>>>>>>>>>>> You ask someone (we'll call him "Jack") to give a truthful >>>>>>>>>>>>> yes/no answer to the following question:
On 6/17/2023 9:57 PM, Richard Damon wrote:
On 6/17/23 10:29 PM, olcott wrote:If no one can possibly correctly answer what the correct >>>>>>>>>>>>>>> return value that any H<n> having a pathological >>>>>>>>>>>>>>> relationship to its input D<n> could possibly provide >>>>>>>>>>>>>>> then that is proof that D<n> is an invalid input for H<n> >>>>>>>>>>>>>>> in the same way that any self-contradictory question is >>>>>>>>>>>>>>> an incorrect question.
On 6/17/2023 8:31 PM, Richard Damon wrote:
On 6/17/23 7:58 PM, olcott wrote:
On 6/17/2023 6:13 PM, Richard Damon wrote: >>>>>>>>>>>>>>>>>>>> On 6/17/23 5:46 PM, olcott wrote:
When this question is posed to machine H. >>>>>>>>>>>>>>>>>>>On 6/17/2023 4:09 PM, Ben Bacarisse wrote: >>>>>>>>>>>>>>>>>>>>>> Richard Damon <Richard@Damon-Family.org> writes: >>>>>>>>>>>>>>>>>>>>>>But the problem is "Jack" here is assumed to be a >>>>>>>>>>>>>>>>>>>> volitional being.
Except that the Halting Problem isn't a >>>>>>>>>>>>>>>>>>>>>>> "Self-Contradictory" Quesiton, so >>>>>>>>>>>>>>>>>>>>>>> the answer doesn't apply.
That's an interesting point that would often catch >>>>>>>>>>>>>>>>>>>>>> students out. And
the reason /why/ it catches so many out eventually >>>>>>>>>>>>>>>>>>>>>> led me to stop using
the proof-by-contradiction argument in my classes. >>>>>>>>>>>>>>>>>>>>>>
The thing is, it looks so very much like a >>>>>>>>>>>>>>>>>>>>>> self-contradicting question
is being asked. The students think they can see >>>>>>>>>>>>>>>>>>>>>> it right there in the
constructed code: "if H says I halt, I don't halt!". >>>>>>>>>>>>>>>>>>>>>>
Of course, they are wrong. The code is /not/ >>>>>>>>>>>>>>>>>>>>>> there. The code calls a
function that does not exist, so "it" (the >>>>>>>>>>>>>>>>>>>>>> constructed code, the whole
program) does not exist either.
The fact that it's code, and the students are >>>>>>>>>>>>>>>>>>>>>> almost all programmers and
not mathematicians, makes it worse. A >>>>>>>>>>>>>>>>>>>>>> mathematician seeing "let p be
the largest prime" does not assume that such a p >>>>>>>>>>>>>>>>>>>>>> exists. So when a
prime number p' > p is constructed from p, this is >>>>>>>>>>>>>>>>>>>>>> not seen as a
"self-contradictory number" because neither p nor >>>>>>>>>>>>>>>>>>>>>> p' exist. But the
halting theorem is even more deceptive for >>>>>>>>>>>>>>>>>>>>>> programmers, because the
desired function, H (or whatever), appears to be >>>>>>>>>>>>>>>>>>>>>> so well defined -- much
more well-defined than "the largest prime". We >>>>>>>>>>>>>>>>>>>>>> have an exact
specification for it, mapping arguments to >>>>>>>>>>>>>>>>>>>>>> returned values. It's just
software engineering to write such things (they >>>>>>>>>>>>>>>>>>>>>> erroneously assume).
These sorts of proof can always be re-worded so as >>>>>>>>>>>>>>>>>>>>>> to avoid the initial
assumption. For example, we can start "let p be >>>>>>>>>>>>>>>>>>>>>> any prime", and from p
we construct a prime p' > p. And for halting, we >>>>>>>>>>>>>>>>>>>>>> can start "let H be
any subroutine of two arguments always returning >>>>>>>>>>>>>>>>>>>>>> true or false". Now,
all the objects /do/ exist. In the first case, >>>>>>>>>>>>>>>>>>>>>> the construction shows
that no prime is the largest, and in the second it >>>>>>>>>>>>>>>>>>>>>> shows that no
subroutine computes the halting function. >>>>>>>>>>>>>>>>>>>>>>
This issue led to another change. In the last >>>>>>>>>>>>>>>>>>>>>> couple of years, I would
start the course by setting Post's correspondence >>>>>>>>>>>>>>>>>>>>>> problem as if it were
just a fun programming challenge. As the days >>>>>>>>>>>>>>>>>>>>>> passed (and the course
got into more and more serious material) it would >>>>>>>>>>>>>>>>>>>>>> start to become clear
that this was no ordinary programming challenge. >>>>>>>>>>>>>>>>>>>>>> Many students started
to suspect that, despite the trivial sounding >>>>>>>>>>>>>>>>>>>>>> specification, no program
could do the job. I always felt a bit uneasy >>>>>>>>>>>>>>>>>>>>>> doing this, as if I was
not being 100% honest, but it was a very useful >>>>>>>>>>>>>>>>>>>>>> learning experience for
most.
sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM >>>>>>>>>>>>>>>>>>>>> You ask someone (we'll call him "Jack") to give >>>>>>>>>>>>>>>>>>>>> a truthful
yes/no answer to the following question: >>>>>>>>>>>>>>>>>>>>>
Will Jack's answer to this question be no? >>>>>>>>>>>>>>>>>>>>>
Jack can't possibly give a correct yes/no >>>>>>>>>>>>>>>>>>>>> answer to the question.
It is an easily verified fact that when Jack's >>>>>>>>>>>>>>>>>>>>> question is posed to Jack
that this question is self-contradictory for Jack >>>>>>>>>>>>>>>>>>>>> or anyone else having
a pathological relationship to the question. >>>>>>>>>>>>>>>>>>>>
H is not, it is a program, so before we even ask H >>>>>>>>>>>>>>>>>>>> what will happen, the answer has been fixed by the >>>>>>>>>>>>>>>>>>>> definition of the codr of H.
It is also clear that when a question has no yes or >>>>>>>>>>>>>>>>>>>>> no answer because
it is self-contradictory that this question is >>>>>>>>>>>>>>>>>>>>> aptly classified as
incorrect.
And the actual question DOES have a yes or no >>>>>>>>>>>>>>>>>>>> answer, in this case, since H(D,D) says 0 >>>>>>>>>>>>>>>>>>>> (non-Halting) the actual answer to the question does >>>>>>>>>>>>>>>>>>>> D(D) Halt is YES.
You just confuse yourself by trying to imagine a >>>>>>>>>>>>>>>>>>>> program that can somehow change itself "at will". >>>>>>>>>>>>>>>>>>>>
It is incorrect to say that a question is not self-contradictory on the
basis that it is not self-contradictory in some contexts. If a question
is self-contradictory in some contexts then in those contexts it is an
incorrect question.
In what context does "Does the machine D(D) halt when run?" become self-contradictory?
Jack could be asked the question:
Will Jack answer "no" to this question?
For Jack it is self-contradictory; for others that are not
Jack it is not self-contradictory. Context changes the semantics.
But you are missing the difference. A decider is a fixed piece of code, so its answer to this question has been fixed since it was designed. Thus what it will say isn't a variable that can lead to the self-contradiction cycle, but a fixed result that will either be correct or incorrect.
Every input to a Turing machine decider such that both Boolean return
values are incorrect is an incorrect input.
Except it isn't. The problem is you are looking at two different machines and two different inputs.
But you have the wrong question. The question is "Does D(D) halt?", and that HAS a correct answer: since your H(D,D) returns 0, the answer is that D(D) does halt, and thus H was wrong.
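This point can be made concrete: once H is a fixed piece of code, its verdict on (D, D) is fixed, and the actual behavior of D(D) can be checked against it. A minimal sketch, using Python functions as stand-ins for the machines (the names `h` and `d` are illustrative, not anyone's actual code):

```python
def h(prog, arg):
    """A fixed 'decider': its verdict is baked into its code.
    It always returns 0, i.e. predicts that prog(arg) does not halt."""
    return 0

def d(x):
    """The pathological input built against h: it does the opposite
    of whatever h predicts about x applied to itself."""
    if h(x, x):          # h predicts halting -> loop forever
        while True:
            pass
    return "halted"      # h predicts looping -> halt at once

print(h(d, d))   # 0: h's fixed verdict is "d(d) does not halt"
print(d(d))      # "halted": d(d) does halt, so h's verdict was wrong
```

The question "does D(D) halt?" has a definite answer in this sketch (yes); the fixed H simply gives the wrong one.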
Will Jack's answer to this question be no?
For Jack the question is self-contradictory; for others that
are not Jack it is not self-contradictory.
The context (of who is asked) changes the semantics.
Every question that lacks a correct yes/no answer because
the question is self-contradictory is an incorrect question.
If you are not a mere troll you will agree with this.
But the ACTUAL QUESTION DOES have a correct answer.
But the question to Jack isn't the question you are actually saying doesn't have an answer.
Within the context that the question is posed to Jack it is self-contradictory.
You can ignore that context matters, yet that is not any rebuttal.
Right, but that has ZERO bearing on the Halting Problem.
When ChatGPT understood that Jack's question is self-contradictory for
Jack, it was also able to understand the following isomorphism:
For every H<n> on pathological input D<n>, both Boolean return
values from H<n> are incorrect for D<n>, proving that D<n> is
isomorphic to a self-contradictory question for every H<n>.
No, because a given H<n> can only give one result,
Nope, can't be.
The only difference between otherwise identical pairs H<n>/D<n>
and H<m>/D<m> is the single integer value 0/1 within H<n> and H<m>
respectively, thus proving that both True and False are the wrong return
value for the identical finite string pairs D<n>/D<m>.
So they are different programs. Different is different. Almost the
same is not the same.
Unless you are claiming that 1 is the same as 0, they are different.
So, your claim is based on a LIE, or you are admitting you are insane.
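The dispute over "identical finite string pairs" can also be checked mechanically: in any concrete rendering, each pathological program embeds its own decider, so D<n> and D<m> are different complete programs, not the same string. A small sketch (the `make_pair` helper and the constant-0/1 "deciders" are invented for illustration):

```python
def make_pair(verdict):
    """Build a trivial 'decider' h that always returns `verdict`,
    together with the pathological d constructed against that h."""
    def h(prog, arg):
        return verdict
    def d(x):
        if h(x, x):
            while True:   # contradict a "halts" verdict
                pass
        return None       # contradict a "loops" verdict
    return h, d

h0, d0 = make_pair(0)
h1, d1 = make_pair(1)

# The two h's differ only in one constant, but each d closes over
# its own h, so the complete programs d0 and d1 are not identical:
assert d0.__closure__[0].cell_contents is h0
assert d1.__closure__[0].cell_contents is h1
assert d0 is not d1
```

Each h<n> is indeed wrong about its own d<n> (d0(d0) returns, while d1(d1) would loop forever), but the two verdicts concern two different programs.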
The key difference of my work, a true innovation in this field,
is that H specifically recognizes self-contradictory inputs and rejects
them.
*Termination Analyzer H prevents Denial of Service attacks*
https://www.researchgate.net/publication/369971402_Termination_Analyzer_H_prevents_Denial_of_Service_attacks
On 6/18/23 3:26 PM, olcott wrote:
On 6/18/2023 2:19 PM, Richard Damon wrote:
On 6/18/23 2:47 PM, olcott wrote:
On 6/18/2023 1:20 PM, Richard Damon wrote:
On 6/18/23 2:05 PM, olcott wrote:
Some of the elements of H<n>/D<n> are identical except for the return
On 6/18/2023 12:46 PM, Richard Damon wrote:
On 6/18/23 1:09 PM, olcott wrote:
That is great, we made excellent progress on this.
On 6/18/2023 11:54 AM, Richard Damon wrote:
On 6/18/23 12:41 PM, olcott wrote:
The question posed to Jack does not have an answer because
On 6/18/2023 11:31 AM, Richard Damon wrote:
On 6/18/23 10:32 AM, olcott wrote:
The actual question posed to Jack has no correct answer.
On 6/18/2023 7:02 AM, Richard Damon wrote:
On 6/17/23 11:10 PM, olcott wrote:
sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
On 6/17/2023 9:57 PM, Richard Damon wrote:
On 6/17/23 10:29 PM, olcott wrote:
If no one can possibly correctly answer what the correct return value that any H<n> having a pathological
On 6/17/2023 8:31 PM, Richard Damon wrote:
On 6/17/23 7:58 PM, olcott wrote:
On 6/17/2023 6:13 PM, Richard Damon wrote:
On 6/17/23 5:46 PM, olcott wrote:
When this question is posed to machine H.
On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
Richard Damon <Richard@Damon-Family.org> writes:
But the problem is "Jack" here is assumed to be a volitional being.
Except that the Halting Problem isn't a "Self-Contradictory" Question,
so the answer doesn't apply.
That's an interesting point that would often catch students out. And
the reason /why/ it catches so many out eventually led me to stop using
the proof-by-contradiction argument in my classes.
The thing is, it looks so very much like a self-contradicting question
is being asked. The students think they can see it right there in the
constructed code: "if H says I halt, I don't halt!".
Of course, they are wrong. The code is /not/ there. The code calls a
function that does not exist, so "it" (the constructed code, the whole
program) does not exist either.
The fact that it's code, and the students are almost all programmers and
not mathematicians, makes it worse. A mathematician seeing "let p be
the largest prime" does not assume that such a p exists. So when a
prime number p' > p is constructed from p, this is not seen as a
"self-contradictory number" because neither p nor p' exist. But the
halting theorem is even more deceptive for programmers, because the
desired function, H (or whatever), appears to be so well defined -- much
more well-defined than "the largest prime". We have an exact
specification for it, mapping arguments to returned values. It's just
software engineering to write such things (they erroneously assume).
These sorts of proof can always be re-worded so as to avoid the initial
assumption. For example, we can start "let p be any prime", and from p
we construct a prime p' > p. And for halting, we can start "let H be
any subroutine of two arguments always returning true or false". Now,
all the objects /do/ exist. In the first case, the construction shows
that no prime is the largest, and in the second it shows that no
subroutine computes the halting function.
This issue led to another change. In the last couple of years, I would
start the course by setting Post's correspondence problem as if it were
just a fun programming challenge. As the days passed (and the course
got into more and more serious material) it would start to become clear
that this was no ordinary programming challenge. Many students started
to suspect that, despite the trivial-sounding specification, no program
could do the job. I always felt a bit uneasy doing this, as if I was
not being 100% honest, but it was a very useful learning experience for
most.
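Post's correspondence problem, the "fun programming challenge" described above, asks whether some sequence of dominoes makes the top and bottom strings equal. A brute-force searcher is easy to write; what no program can do is decide every instance. A minimal sketch (the example instance and the `max_len` cutoff are, of course, illustrative):

```python
from collections import deque

def pcp_search(pairs, max_len=12):
    """Breadth-first search for indices i1..ik with
    tops[i1]+...+tops[ik] == bottoms[i1]+...+bottoms[ik].
    Returns a shortest such sequence, or None if none is found
    within max_len dominoes (it cannot prove that none exists)."""
    queue = deque([("", "", [])])        # (top, bottom, indices so far)
    while queue:
        top, bottom, seq = queue.popleft()
        if seq and top == bottom:
            return seq
        if len(seq) >= max_len:
            continue
        for i, (t, b) in enumerate(pairs):
            nt, nb = top + t, bottom + b
            # prune: one side must remain a prefix of the other
            if nt.startswith(nb) or nb.startswith(nt):
                queue.append((nt, nb, seq + [i]))
    return None

# A classic solvable instance:
print(pcp_search([("a", "baa"), ("ab", "aa"), ("bba", "bb")]))
# -> [2, 1, 2, 0]: top bba+ab+bba+a == bottom bb+aa+bb+baa == "bbaabbbaa"
```

The search halts with an answer when a match exists within the cutoff, but a "None" tells you nothing definitive, which is exactly why the challenge turns out not to be ordinary.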
If no one can possibly correctly answer what the correct return value is that any H<n> having a pathological relationship to its input D<n> could possibly provide, then that is proof that D<n> is an invalid input for H<n> in the same way that any self-contradictory question is an incorrect question.
The actual question posed to anyone else is a semantically different question even though the words are the same.
Some of the elements of H<n>/D<n> are identical except for the return value from H. In both of these cases the return value is incorrect.
On Sat, 17 Jun 2023 00:54:32 -0500, olcott wrote:
ChatGPT:
“Therefore, based on the understanding that self-contradictory
questions lack a correct answer and are deemed incorrect, one could
argue that the halting problem's pathological input D can be
categorized as an incorrect question when posed to the halting
decider H.”
https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
It did not leap to this conclusion; it took a lot of convincing.
Chatbots are highly unreliable at reasoning. They are designed
to give you the illusion that they know what they're talking about,
but they are the world's best BS artists.
(Try playing a game of chess with ChatGPT, you'll see what I mean.)
On 6/21/2023 2:10 PM, vallor wrote:
On Sat, 17 Jun 2023 00:54:32 -0500, olcott wrote:
I already know that, and much worse than that: they simply make up facts
on the fly, citing purely fictional textbooks that have photos and back
stories for the purely fictional authors. The fake textbooks themselves
are complete and convincing.
In my case ChatGPT was able to be convinced by clearly correct
reasoning.
https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
It did not leap to this conclusion; it took a lot of convincing.
People are not convinced by this same reasoning only because they spend
99.9% of their attention on rebuttal, thus there is not enough attention
left over for comprehension.
The only reason that the halting problem cannot be solved is that the
halting question is phrased incorrectly. The way that the halting
problem is phrased allows inputs that contradict every Boolean return
value from a set of specific deciders.
Each of the halting problem's instances is exactly isomorphic to
requiring a correct answer to this question:
Is this sentence true or false: "This sentence is not true".
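That the Liar sentence admits no consistent truth value is easy to verify mechanically; whether the halting question is isomorphic to it is exactly what is disputed in this thread. A tiny sketch (encoding the sentence as the constraint v == not v is an assumption of the illustration):

```python
# "This sentence is not true" asserts its own untruth, so assigning
# it truth value v is consistent only if v == (not v). Try both:
consistent = [v for v in (True, False) if v == (not v)]
print(consistent)   # []: neither True nor False is consistent
```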
On 6/21/23 3:59 PM, olcott wrote:
On 6/21/2023 2:10 PM, vallor wrote:
On Sat, 17 Jun 2023 00:54:32 -0500, olcott wrote:
I already know that and much worse than that they simply make up facts
on the fly citing purely fictional textbooks that have photos and back
stories for the purely fictional authors. The fake textbooks themselves
are complete and convincing.
In my case ChatGPT was able to be convinced by clearly correct
reasoning.
So, you admit that they will lie and tell you what you want to hear; do you think the fact that it agrees with you means something?
https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
It did not leap to this conclusion it took a lot of convincing.
Which is a good sign that it was learning what you wanted it to say, so it finally said it.
People are not convinced by this same reasoning only because they spend
99.9% of their attention on rebuttal thus there is not enough attention
left over for comprehension.
No, people can apply REAL "Correct Reasoning" and see the error in what
you call "Correct Reasoning". Your problem is that your idea of correct isn't.
The only reason that the halting problem cannot be solved is that the
halting question is phrased incorrectly. The way that the halting
problem is phrased allows inputs that contradict every Boolean return
value from a set of specific deciders.
Nope, it is phrased exactly as needed. Your alterations allow the
decider to give a false answer and still be considered "correct" by your
faulty logic.
Each of the halting problems instances is exactly isomorphic to
requiring a correct answer to this question:
Is this sentence true or false: "This sentence is not true".
Nope.
How is "Does the machine represented by the input to the decider halt?"
isomorphic to your statement?
On 6/21/2023 6:01 PM, Richard Damon wrote:
The halting problem instances that ask:
"Does this input halt"
are isomorphic to asking Jack this question:
"Will Jack's answer to this question be no?"
Which are both isomorphic to asking if this expression
is true or false: "This sentence is not true"
That you are unwilling to validate my work merely means that
someone else will get the credit for validating my work.
On 6/21/23 8:40 PM, olcott wrote:
The halting problem instances that ask:
"Does this input halt"
are isomorphic to asking Jack this question:
"Will Jack's answer to this question be no?"
Nope, because Jack is a volitional being, so we CAN'T know the correct
answer to the question until after Jack answers the question, thus Jack,
in trying to be correct, hits a contradiction.
On 6/21/2023 9:47 PM, Richard Damon wrote:
We can know that the correct answer from Jack and the correct return
value from H cannot possibly exist, now and forever.
You are just playing head games.
On 6/21/23 10:58 PM, olcott wrote:
We can know that the correct answer from Jack and the correct return
value from H cannot possibly exist, now and forever.
You are just playing head games.
But the question isn't what H can return to be correct,
On 6/22/2023 6:26 AM, Richard Damon wrote:
On 6/21/23 10:58 PM, olcott wrote:
Yes it is, and you just keep playing head games.
On 6/21/2023 9:47 PM, Richard Damon wrote:
On 6/21/23 8:40 PM, olcott wrote:
On 6/21/2023 6:01 PM, Richard Damon wrote:
On 6/21/23 3:59 PM, olcott wrote:
On 6/21/2023 2:10 PM, vallor wrote:
On Sat, 17 Jun 2023 00:54:32 -0500, olcott wrote:
ChatGPT:
“Therefore, based on the understanding that
self-contradictory
questions lack a correct answer and are deemed incorrect, >>>>>>>>> one could
argue that the halting problem's pathological input D can be >>>>>>>>> categorized as an incorrect question when posed to the >>>>>>>>> halting
decider H.”
https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a >>>>>>>>> It did
not leap to this conclusion it took a lot of convincing.
Chatbots are highly unreliable at reasoning. They are designed >>>>>>>> to give you the illusion that they know what they're talking about, >>>>>>>> but they are the world's best BS artists.
(Try playing a game of chess with ChatGPT, you'll see what I mean.) >>>>>>>>
I already know that, and much worse than that: they simply make up
facts on the fly, citing purely fictional textbooks that have photos
and back stories for the purely fictional authors. The fake textbooks
themselves are complete and convincing.
In my case ChatGPT was able to be convinced by clearly correct
reasoning.
So, you admit that they will lie and tell you what you want to
hear. Do you think the fact that it agrees with you means something?
https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
It did not leap to this conclusion it took a lot of convincing.
Which is a good sign that it was learning what you wanted it to say,
so it finally said it.
People are not convinced by this same reasoning only because they
spend 99.9% of their attention on rebuttal; thus there is not enough
attention left over for comprehension.
No, people can apply REAL "Correct Reasoning" and see the error in
what you call "Correct Reasoning". Your problem is that your idea
of correct isn't.
The only reason that the halting problem cannot be solved is that the
halting question is phrased incorrectly. The way that the halting
problem is phrased allows inputs that contradict every Boolean return
value from a set of specific deciders.
Nope, it is phrased exactly as needed. Your alterations allow the
decider to give a false answer and still be considered "correct" by
your faulty logic.
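The "inputs that contradict every Boolean return value" being argued over are instances of the standard diagonal construction from the halting problem proof. A minimal Python sketch can trace the contradiction; the stub decider `h` below is hypothetical (no correct total `h` can exist), hard-coding one of the two possible answers so the result can be followed:

```python
# Hypothetical halting decider: suppose h(prog, arg) returns True when
# prog(arg) would halt and False when it would run forever. This stub
# fixes one of the two possible answers so we can trace the outcome.
def h(prog, arg):
    return True  # fixed guess: "every input halts"

# The pathological input D: ask h about D's own behavior, then do the opposite.
def d(prog):
    if h(prog, prog):      # h claims prog(prog) halts...
        while True:        # ...so d loops forever, refuting h,
            pass
    else:
        return             # ...or h claims non-halting, so d halts, refuting h.

# If h(d, d) is True, then d(d) loops forever; if False, then d(d) halts.
# Either fixed Boolean return value is contradicted by d's construction.
print(h(d, d))  # → True under this stub, yet d(d) would then never halt
```

Swapping the stub to `return False` is contradicted symmetrically, which is the sense in which no single Boolean answer from a specific decider survives its own pathological input.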