• ChatGPT agrees that the halting problem input can be construed as an incorrect question

    From olcott@21:1/5 to All on Sat Jun 17 00:54:32 2023
    XPost: comp.theory, sci.logic

    ChatGPT:
    “Therefore, based on the understanding that self-contradictory
    questions lack a correct answer and are deemed incorrect, one could
    argue that the halting problem's pathological input D can be
    categorized as an incorrect question when posed to the halting
    decider H.”

    https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
    It did not leap to this conclusion; it took a lot of convincing.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sat Jun 17 08:09:09 2023
    XPost: comp.theory, sci.logic

    On 6/17/23 1:54 AM, olcott wrote:
    ChatGPT:
       “Therefore, based on the understanding that self-contradictory
       questions lack a correct answer and are deemed incorrect, one could
       argue that the halting problem's pathological input D can be
       categorized as an incorrect question when posed to the halting
       decider H.”

    https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
    It did not leap to this conclusion; it took a lot of convincing.


    Except that the Halting Problem isn't a "Self-Contradictory" Question,
    so the answer doesn't apply.

    H^ doesn't contradict ITSELF, it contradicts H. Thus, the question of
    the halting behavior of a specific input always has a definite
    answer, as all machine/input combinations will either halt or not. What
    IS self-contradictory is the design process of trying to make an H that
    can answer the template correctly. THAT has no solution, showing you
    can't make a correct halt decider that works for all inputs. So you
    have been proven wrong about having refuted the problem, because you
    don't understand what the problem is in the first place.

    Also, you do know that ChatGPT can lie, especially if you lead it a lot.
    Its programming was based, in part, on telling its conversation
    partner the things it thinks they want to hear.

    You are just showing you don't understand what you are talking about or
    even how this sort of AI works.


    YOU FAIL.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Sat Jun 17 11:59:02 2023
    XPost: comp.theory, sci.logic

    On 6/17/2023 7:09 AM, Richard Damon wrote:
    On 6/17/23 1:54 AM, olcott wrote:
    ChatGPT:
        “Therefore, based on the understanding that self-contradictory
    questions lack a correct answer and are deemed incorrect, one could
    argue that the halting problem's pathological input D can be
        categorized as an incorrect question when posed to the halting
        decider H.”

    https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
    It did not leap to this conclusion; it took a lot of convincing.


    Except that the Halting Problem isn't a "Self-Contradictory" Question,
    so the answer doesn't apply.


    My original source of Jack's question:
    sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM

    You ask someone (we'll call him "Jack") to give a truthful
    yes/no answer to the following question:

    Will Jack's answer to this question be no?

    Jack can't possibly give a correct yes/no answer to the question.
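
    The predicament can be checked mechanically. A tiny C sketch (invented
    here, not part of the original exchange) enumerates both of Jack's
    possible answers and tests each against what the question then requires:

        #include <stdio.h>

        /* Enumerate both of Jack's possible answers to "Will Jack's
           answer to this question be no?" and check each one. */
        int main(void)
        {
            for (int says_no = 0; says_no <= 1; says_no++) {
                /* The correct answer is "yes" exactly when Jack says "no". */
                int correct_is_yes = says_no;
                int jack_said_yes  = !says_no;
                printf("Jack answers %s -> correct answer was %s -> Jack is %s\n",
                       says_no ? "no" : "yes",
                       correct_is_yes ? "yes" : "no",
                       jack_said_yes == correct_is_yes ? "right" : "wrong");
            }
            return 0;
        }

    Both iterations print "wrong": neither answer Jack can give is correct.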



    I had to capture the dialogue as two huge images.
    Then I converted them to PDF. It is about 60 pages of dialogue.
    https://www.liarparadox.org/ChatGPT_HP.pdf

    This is how the ChatGPT conversation began:

    You ask someone to give a truthful yes/no answer to the following
    question: Will your answer to this question be no?
    Can they give a correct answer to that question?

    After sixty pages of dialogue ChatGPT understood that
    any question (like the above question) that lacks a
    correct yes or no answer because it is self-contradictory
    when posed to a specific person/machine is an incorrect
    question within this full context.

    ChatGPT:
    "Therefore, based on the understanding that self-contradictory
    questions lack a correct answer and are deemed incorrect, one could
    argue that the halting problem's pathological input D can be
    categorized as an incorrect question when posed to the halting
    decider H."

    Double talk and misdirection might convince gullible fools that the
    above 60 pages of reasoning is not correct. Double talk and misdirection
    do not count as the slightest trace of any actual rebuttal.

    Quit using ad hominem attacks and mere rhetoric to convince gullible
    fools, and try to find an actual flaw in the reasoning.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sat Jun 17 13:43:20 2023
    XPost: comp.theory, sci.logic

    On 6/17/23 12:59 PM, olcott wrote:
    On 6/17/2023 7:09 AM, Richard Damon wrote:
    On 6/17/23 1:54 AM, olcott wrote:
    ChatGPT:
        “Therefore, based on the understanding that self-contradictory
        questions lack a correct answer and are deemed incorrect, one could
        argue that the halting problem's pathological input D can be
        categorized as an incorrect question when posed to the halting
        decider H.”

    https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
    It did not leap to this conclusion; it took a lot of convincing.


    Except that the Halting Problem isn't a "Self-Contradictory" Question,
    so the answer doesn't apply.


    My original source of Jack's question:
    sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM

       You ask someone (we'll call him "Jack") to give a truthful
       yes/no answer to the following question:

       Will Jack's answer to this question be no?

       Jack can't possibly give a correct yes/no answer to the question.



    But you aren't claiming to be solving the Jack Question.

    You are being asked the question "does D(D) halt?" where D is a fully
    defined program, which means H is a fully defined program. This question
    ALWAYS has a definite answer.

    Since this H DOES abort its simulation of D(D) and return 0 (to say
    non-halting), this D(D) halts, so the correct answer is Halting, and H
    returned the wrong answer.

    There is no "Self-Contradictory" behavior, at least not once you
    actually create your H. Yes, D acted contrary to the return value of H,
    but since they are DIFFERENT (but related) programs, there is no "Self"
    attribute.

    The only point where you hit "self-contradictory" is when you try to
    apply logic to designing H; at that point, you hit the
    self-contradiction that a correct H needs to give the answer opposite
    to the one it gives. This means that no such H can exist, which proves
    the theorem rather than refuting it, because you FIRST need to generate
    an H, and only then can you test it.
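
    That point can be made concrete with a minimal C sketch in the style of
    the thread's H/D examples. The stub body of H is an assumption made so
    the sketch runs (matching the thread's premise that H(D,D) returns 0);
    it is not anyone's actual decider:

        #include <stdio.h>

        typedef int (*ptr)();   /* function-pointer type, as in the thread */

        /* Stand-in decider: this fixed body is an assumption for the
           sketch; any other fixed body fails in a symmetric way. */
        int H(ptr p, ptr i) { return 0; }   /* always says "does not halt" */

        int D(ptr p)
        {
            if (H(p, p))    /* if H claims p(p) halts ...            */
                for (;;) ;  /* ... contradict it by looping forever, */
            return 0;       /* otherwise halt immediately.           */
        }

        int main(void)
        {
            /* H(D,D) == 0 predicts that D(D) never halts, yet the call
               below returns: D(D) halts, so H gave the wrong answer. */
            printf("H(D,D) = %d, yet D(D) returned %d (it halted)\n",
                   H(D, D), D(D));
            return 0;
        }

    Once H's body is fixed, D(D)'s behavior is fixed too; the contradiction
    lives in the design requirement on H, not in the question about this
    particular D.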



    I had to capture the dialogue as two huge images.
    Then I converted them to PDF. It is about 60 pages of dialogue.
    https://www.liarparadox.org/ChatGPT_HP.pdf

    This is how the ChatGPT conversation began:

    You ask someone to give a truthful yes/no answer to the following
    question: Will your answer to this question be no?
    Can they give a correct answer to that question?

    After sixty pages of dialogue ChatGPT understood that
    any question (like the above question) that lacks a
    correct yes or no answer because it is self-contradictory
    when posed to a specific person/machine is an incorrect
    question within this full context.

    ChatGPT:
      "Therefore, based on the understanding that self-contradictory
       questions lack a correct answer and are deemed incorrect, one could
       argue that the halting problem's pathological input D can be
       categorized as an incorrect question when posed to the halting
       decider H."

    Double talk and misdirection might convince gullible fools that the
    above 60 pages of reasoning is not correct. Double talk and misdirection
    do not count as the slightest trace of any actual rebuttal.

    Quit using ad hominem attacks and mere rhetoric to convince gullible
    fools, and try to find an actual flaw in the reasoning.


    So, which of my rebuttals are you going to try to refute?

    You haven't actually pointed out a logical error in ANY of them,
    because it seems you are incapable.

    Note, you also don't understand what an "ad hominem" attack is. That
    would be saying your argument is wrong BECAUSE of something about you.
    That isn't what I have been saying.

    I have been pointing out the error of your logic on the basis of the
    logic itself, and pointing out the attributes of you that can be inferred
    from the fact that you put forward such bad logic.

    A correct rebuttal would be to point out what part of my statements
    refuting your logic is incorrect, which you have been unable to do. All
    you have done in this thread is continue an "Appeal to Authority" to
    ChatGPT, which is laughable since ChatGPT isn't an accepted authority on
    logic, and has in fact been proven to make many provably false
    statements, so it is NOT, in fact, a source of knowledge.

    Of course, your problem is that you don't seem to understand the nature
    of Truth and Knowledge and seem to think that computers can actually
    "Know" something in the same way people do. There is a reason it is
    called ARTIFICIAL intelligence, because it isn't actually a real
    intelligence.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Sat Jun 17 13:23:11 2023
    XPost: comp.theory, sci.logic

    On 6/17/2023 12:43 PM, Richard Damon wrote:
    On 6/17/23 12:59 PM, olcott wrote:
    On 6/17/2023 7:09 AM, Richard Damon wrote:
    On 6/17/23 1:54 AM, olcott wrote:
    ChatGPT:
        “Therefore, based on the understanding that self-contradictory
        questions lack a correct answer and are deemed incorrect, one could
        argue that the halting problem's pathological input D can be
        categorized as an incorrect question when posed to the halting
        decider H.”

    https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
    It did not leap to this conclusion; it took a lot of convincing.


    Except that the Halting Problem isn't a "Self-Contradictory"
    Question, so the answer doesn't apply.


    My original source of Jack's question:
    sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM

        You ask someone (we'll call him "Jack") to give a truthful
        yes/no answer to the following question:

        Will Jack's answer to this question be no?

        Jack can't possibly give a correct yes/no answer to the question.



    But you aren't claiming to be solving the Jack Question.



    sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM

    You ask someone (we'll call him "Jack") to give a truthful
    yes/no answer to the following question:

    Will Jack's answer to this question be no?

    Jack can't possibly give a correct yes/no answer to the question.

    When the halting problem is construed as requiring a correct yes/no
    answer to a self-contradictory question, it cannot be solved.

    My semantic linguist friends understand that the context of the question
    must include who the question is posed to; otherwise the same word-for-
    word question acquires different semantics.

    The input D to H is the same as Jack's question posed to Jack: it
    has no correct answer because within this context the question is
    self-contradictory.

    When we ask someone else what Jack's answer will be, or we present a
    different TM with input D, the same word-for-word question (or bytes of
    machine description) acquires entirely different semantics and is no
    longer self-contradictory.

    When we construe the halting problem as determining whether or not
    (a) Input D will halt on its input <or>
    (b) Either D will not halt or D has a pathological relationship with H

    Then this halting problem cannot be shown to be unsolvable by any of
    the conventional halting problem proofs.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sat Jun 17 16:27:13 2023
    XPost: comp.theory, sci.logic

    On 6/17/23 2:23 PM, olcott wrote:
    On 6/17/2023 12:43 PM, Richard Damon wrote:
    On 6/17/23 12:59 PM, olcott wrote:
    On 6/17/2023 7:09 AM, Richard Damon wrote:
    On 6/17/23 1:54 AM, olcott wrote:
    ChatGPT:
        “Therefore, based on the understanding that self-contradictory
        questions lack a correct answer and are deemed incorrect, one could
        argue that the halting problem's pathological input D can be
        categorized as an incorrect question when posed to the halting
        decider H.”

    https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
    It did not leap to this conclusion; it took a lot of convincing.


    Except that the Halting Problem isn't a "Self-Contradictory"
    Question, so the answer doesn't apply.


    My original source of Jack's question:
    sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM

        You ask someone (we'll call him "Jack") to give a truthful
        yes/no answer to the following question:

        Will Jack's answer to this question be no?

        Jack can't possibly give a correct yes/no answer to the question.


    But you aren't claiming to be solving the Jack Question.



    sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM

       You ask someone (we'll call him "Jack") to give a truthful
       yes/no answer to the following question:

       Will Jack's answer to this question be no?

       Jack can't possibly give a correct yes/no answer to the question.

    When the halting problem is construed as requiring a correct yes/no
    answer to a self-contradictory question, it cannot be solved.

    Right, and


    My semantic linguist friends understand that the context of the question
    must include who the question is posed to; otherwise the same word-for-
    word question acquires different semantics.


    No, it doesn't in this case, because the answer to the question isn't
    based on who you ask. Remember, the actual question is: does the machine
    and input described halt when run? That question isn't a function of who
    you ask.

    Do you think the actual answer to the question of who won the last
    Presidential election in the United States of America depends on whom
    you ask?


    The input D to H is the same as Jack's question posed to Jack: it
    has no correct answer because within this context the question is
    self-contradictory.

    Nope, we can ask that question to ANY halt decider.

    The thing you keep forgetting is that H needs to have already been
    defined, so its answer to this input has been fixed for all time by the
    algorithm coded into H, so we can give a description of this D to any
    decider we want.
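
    That last point can be sketched in C with two fixed deciders given the
    same input; both bodies below are assumptions invented for the sketch:

        #include <stdio.h>

        typedef int (*ptr)();

        /* The decider that D was built to contradict. */
        int H(ptr p, ptr i) { return 0; }

        int D(ptr p)
        {
            if (H(p, p)) for (;;) ;
            return 0;
        }

        /* A DIFFERENT decider given a description of the same D: it
           answers 1 ("halts"), which for this D happens to be correct.
           The question has one fixed answer no matter who is asked. */
        int H2(ptr p, ptr i) { return 1; }

        int main(void)
        {
            printf("H(D,D) = %d (wrong), H2(D,D) = %d (right), D(D) = %d\n",
                   H(D, D), H2(D, D), D(D));
            return 0;
        }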


    When we ask someone else what Jack's answer will be, or we present a
    different TM with input D, the same word-for-word question (or bytes of
    machine description) acquires entirely different semantics and is no
    longer self-contradictory.


    Except in this case, Jack is an automaton with a fixed response for
    every question, so his answer is determinable. Machines don't have
    "free will", but apparently you don't understand that.

    When we construe the halting problem as determining whether or not
    (a) Input D will halt on its input <or>
    (b) Either D will not halt or D has a pathological relationship with H

    Nope, not the definition of the Halting Problem, so you are just
    admitting you have wasted your life on the wrong problem.

    You don't get to change the problem.


    Then this halting problem cannot be shown to be unsolvable by any of
    the conventional halting problem proofs.


    Except it isn't the halting problem any more, so your logic is based on
    a false premise.


    Remember, the fact that you are incapable of understanding the simple
    problem doesn't give you the power to redefine it and still correctly
    claim you are working on it.

    You have just admitted you are an utter failure.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to Richard Damon on Sat Jun 17 22:09:03 2023
    XPost: comp.theory, sci.logic

    Richard Damon <Richard@Damon-Family.org> writes:

    Except that the Halting Problem isn't a "Self-Contradictory" Question, so
    the answer doesn't apply.

    That's an interesting point that would often catch students out. And
    the reason /why/ it catches so many out eventually led me to stop using
    the proof-by-contradiction argument in my classes.

    The thing is, it looks so very much like a self-contradicting question
    is being asked. The students think they can see it right there in the constructed code: "if H says I halt, I don't halt!".

    Of course, they are wrong. The code is /not/ there. The code calls a
    function that does not exist, so "it" (the constructed code, the whole
    program) does not exist either.

    The fact that it's code, and the students are almost all programmers and
    not mathematicians, makes it worse. A mathematician seeing "let p be
    the largest prime" does not assume that such a p exists. So when a
    prime number p' > p is constructed from p, this is not seen as a "self-contradictory number" because neither p nor p' exist. But the
    halting theorem is even more deceptive for programmers, because the
    desired function, H (or whatever), appears to be so well defined -- much
    more well-defined than "the largest prime". We have an exact
    specification for it, mapping arguments to returned values. It's just
    software engineering to write such things (they erroneously assume).

    These sorts of proof can always be re-worded so as to avoid the initial assumption. For example, we can start "let p be any prime", and from p
    we construct a prime p' > p. And for halting, we can start "let H be
    any subroutine of two arguments always returning true or false". Now,
    all the objects /do/ exist. In the first case, the construction shows
    that no prime is the largest, and in the second it shows that no
    subroutine computes the halting function.
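
    Ben's reworded form can be illustrated with objects that all exist.
    Here the construction is applied to two concrete (deliberately trivial)
    candidate subroutines; the names are invented for this sketch:

        #include <stdio.h>

        typedef int (*ptr)();

        /* Two subroutines that DO exist: each takes two arguments and
           always returns true or false. */
        int H_yes(ptr p, ptr i) { return 1; }   /* always says "halts"        */
        int H_no (ptr p, ptr i) { return 0; }   /* always says "doesn't halt" */

        /* Construction applied to H_no: D_no(D_no) halts immediately,
           yet H_no(D_no, D_no) == 0, so H_no is wrong on this input. */
        int D_no(ptr p)
        {
            if (H_no(p, p)) for (;;) ;
            return 0;
        }

        /* Construction applied to H_yes: D_yes(D_yes) loops forever,
           yet H_yes(D_yes, D_yes) == 1, so H_yes is wrong too.
           (main never calls D_yes(D_yes) for that reason.) */
        int D_yes(ptr p)
        {
            if (H_yes(p, p)) for (;;) ;
            return 0;
        }

        int main(void)
        {
            printf("H_no(D_no,D_no) = %d, D_no(D_no) returned %d (halted)\n",
                   H_no(D_no, D_no), D_no(D_no));
            printf("H_yes(D_yes,D_yes) = %d, but D_yes(D_yes) never halts\n",
                   H_yes(D_yes, D_yes));
            return 0;
        }

    Nothing here assumes a halt decider exists; every object above exists,
    and the construction exhibits an input on which each candidate fails.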

    This issue led to another change. In the last couple of years, I would
    start the course by setting Post's correspondence problem as if it were
    just a fun programming challenge. As the days passed (and the course
    got into more and more serious material) it would start to become clear
    that this was no ordinary programming challenge. Many students started
    to suspect that, despite the trivial sounding specification, no program
    could do the job. I always felt a bit uneasy doing this, as if I was
    not being 100% honest, but it was a very useful learning experience for
    most.

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jeff Barnett@21:1/5 to All on Sat Jun 17 16:03:41 2023
    XPost: comp.theory, sci.logic

    Ben was describing an improved approach to teaching some theoretical
    results to CS pupils. Those pupils were assumed to have some grounding
    in practical aspects such as programming and at least a small interest
    and competence in basic mathematics. You seemed to not be there when god
    handed out those basic components of a human brain. You are neither the
    exception nor the rule; just an arrogant dumb fuck.

    By the way, we have noticed that you haven't played the big "C" card
    recently. Is this 1) an immaculate cure, 2) you putting on your big boy
    pants and taking responsibility for your own sorry life and mind, or 3)
    the time where you try to wiggle out of a past sequel of lies? We've
    seen all but variation 2 in past interactions. The curious want to know
    the real skinny so speak up!
    --
    Jeff Barnett

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Ben Bacarisse on Sat Jun 17 16:46:34 2023
    XPost: comp.theory, sci.logic

    On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
    Richard Damon <Richard@Damon-Family.org> writes:

    Except that the Halting Problem isn't a "Self-Contradictory" Question, so
    the answer doesn't apply.

    That's an interesting point that would often catch students out. And
    the reason /why/ it catches so many out eventually led me to stop using
    the proof-by-contradiction argument in my classes.

    The thing is, it looks so very much like a self-contradicting question
    is being asked. The students think they can see it right there in the constructed code: "if H says I halt, I don't halt!".

    Of course, they are wrong. The code is /not/ there. The code calls a function that does not exist, so "it" (the constructed code, the whole program) does not exist either.

    The fact that it's code, and the students are almost all programmers and
    not mathematicians, makes it worse. A mathematician seeing "let p be
    the largest prime" does not assume that such a p exists. So when a
    prime number p' > p is constructed from p, this is not seen as a "self-contradictory number" because neither p nor p' exist. But the
    halting theorem is even more deceptive for programmers, because the
    desired function, H (or whatever), appears to be so well defined -- much
    more well-defined than "the largest prime". We have an exact
    specification for it, mapping arguments to returned values. It's just software engineering to write such things (they erroneously assume).

    These sorts of proof can always be re-worded so as to avoid the initial assumption. For example, we can start "let p be any prime", and from p
    we construct a prime p' > p. And for halting, we can start "let H be
    any subroutine of two arguments always returning true or false". Now,
    all the objects /do/ exist. In the first case, the construction shows
    that no prime is the largest, and in the second it shows that no
    subroutine computes the halting function.

    This issue led to another change. In the last couple of years, I would
    start the course by setting Post's correspondence problem as if it were
    just a fun programming challenge. As the days passed (and the course
    got into more and more serious material) it would start to become clear
    that this was no ordinary programming challenge. Many students started
    to suspect that, despite the trivial sounding specification, no program
    could do the job. I always felt a bit uneasy doing this, as if I was
    not being 100% honest, but it was a very useful learning experience for
    most.


    sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
    You ask someone (we'll call him "Jack") to give a truthful
    yes/no answer to the following question:

    Will Jack's answer to this question be no?

    Jack can't possibly give a correct yes/no answer to the question.

    It is an easily verified fact that when Jack's question is posed to Jack,
    this question is self-contradictory for Jack or anyone else having
    a pathological relationship to the question.

    It is also clear that when a question has no yes or no answer because
    it is self-contradictory, it is aptly classified as
    incorrect.

    It is incorrect to say that a question is not self-contradictory on the
    basis that it is not self-contradictory in some contexts. If a question
    is self-contradictory in some contexts then in these contexts it is an
    incorrect question.

    When we clearly understand the truth of this, then and only then do we have
    the means to overcome the enormous inertia of the [received view] of
    the conventional wisdom regarding decision problems that are only
    undecidable because of pathological relationships.

    Because of the brilliant work of Daryl McCullough we can see the actual
    reality behind decision problems that are undecidable because of their pathological relationships.

    It only took ChatGPT a few hours and 60 pages of dialogue
    to understand and agree with this.
    https://www.liarparadox.org/ChatGPT_HP.pdf

    ChatGPT:
    "Therefore, based on the understanding that self-contradictory
    questions lack a correct answer and are deemed incorrect, one could
    argue that the halting problem's pathological input D can be
    categorized as an incorrect question when posed to the halting
    decider H."

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sat Jun 17 19:13:19 2023
    XPost: comp.theory, sci.logic

    On 6/17/23 5:46 PM, olcott wrote:
    On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
    Richard Damon <Richard@Damon-Family.org> writes:

    Except that the Halting Problem isn't a "Self-Contradictory"
    Question, so
    the answer doesn't apply.

    That's an interesting point that would often catch students out.  And
    the reason /why/ it catches so many out eventually led me to stop using
    the proof-by-contradiction argument in my classes.

    The thing is, it looks so very much like a self-contradicting question
    is being asked.  The students think they can see it right there in the
    constructed code: "if H says I halt, I don't halt!".

    Of course, they are wrong.  The code is /not/ there.  The code calls a
    function that does not exist, so "it" (the constructed code, the whole
    program) does not exist either.

    The fact that it's code, and the students are almost all programmers and
    not mathematicians, makes it worse.  A mathematician seeing "let p be
    the largest prime" does not assume that such a p exists.  So when a
    prime number p' > p is constructed from p, this is not seen as a
    "self-contradictory number" because neither p nor p' exist.  But the
    halting theorem is even more deceptive for programmers, because the
    desired function, H (or whatever), appears to be so well defined -- much
    more well-defined than "the largest prime".  We have an exact
    specification for it, mapping arguments to returned values.  It's just
    software engineering to write such things (they erroneously assume).

    These sorts of proof can always be re-worded so as to avoid the initial
    assumption.  For example, we can start "let p be any prime", and from p
    we construct a prime p' > p.  And for halting, we can start "let H be
    any subroutine of two arguments always returning true or false".  Now,
    all the objects /do/ exist.  In the first case, the construction shows
    that no prime is the largest, and in the second it shows that no
    subroutine computes the halting function.

    This issue led to another change.  In the last couple of years, I would
    start the course by setting Post's correspondence problem as if it were
    just a fun programming challenge.  As the days passed (and the course
    got into more and more serious material) it would start to become clear
    that this was no ordinary programming challenge.  Many students started
    to suspect that, despite the trivial sounding specification, no program
    could do the job.  I always felt a bit uneasy doing this, as if I was
    not being 100% honest, but it was a very useful learning experience for
    most.


    sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
       You ask someone (we'll call him "Jack") to give a truthful
       yes/no answer to the following question:

       Will Jack's answer to this question be no?

       Jack can't possibly give a correct yes/no answer to the question.

    It is an easily verified fact that when Jack's question is posed to Jack,
    this question is self-contradictory for Jack or anyone else having
    a pathological relationship to the question.

    But the problem is "Jack" here is assumed to be a volitional being.

    H is not, it is a program, so before we even ask H what will happen, the
    answer has been fixed by the definition of the code of H.
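
    This can be sketched with a "Jack" that is literally a program; the
    fixed body below is an arbitrary assumption invented for the sketch:

        #include <stdio.h>

        /* A "Jack" that is a program rather than a volitional being:
           its yes/no answer is fixed by its code. */
        int jack_says_no(void) { return 1; }   /* always answers "no" */

        int main(void)
        {
            /* "Will Jack's answer be no?" now has a definite answer (yes),
               and Jack's own fixed answer (no) is simply incorrect; the
               question itself is not self-contradictory. */
            printf("Jack answers %s; the correct answer is %s\n",
                   jack_says_no() ? "no" : "yes",
                   jack_says_no() ? "yes" : "no");
            return 0;
        }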


    It is also clear that when a question has no yes or no answer because
    it is self-contradictory, it is aptly classified as
    incorrect.

    And the actual question DOES have a yes or no answer: in this case,
    since H(D,D) says 0 (non-halting), the actual answer to the question
    "does D(D) halt?" is YES.

    You just confuse yourself by trying to imagine a program that can
    somehow change itself "at will".


    It is incorrect to say that a question is not self-contradictory on the
    basis that it is not self-contradictory in some contexts. If a question
    is self-contradictory in some contexts then in these contexts it is an
    incorrect question.

    In what context does "Does the machine D(D) halt when run?" become
    self-contradictory?

    Remember, to ask the question, D has to have been defined, which means H
    has been defined, so there is no arguing about "if H acted differently",
    since the specific example can't act differently.


    When we clearly understand the truth of this, then and only then do we have
    the means to overcome the enormous inertia of the [received view] of
    the conventional wisdom regarding decision problems that are only
    undecidable because of pathological relationships.

    No, you have poisoned your brain to think that reality doesn't actually
    matter. You have made yourself an idiot.

    H does what it does, and arguing about what would happen if it did
    something else is like claiming cats can bark, because if a cat were a
    dog, it could do that.


    Because of the brilliant work of Daryl McCullough we can see the actual reality behind decision problems that are undecidable because of their pathological relationships.

    It only took ChatGPT a few hours and 60 pages of dialogue
    to understand and agree with this.
    https://www.liarparadox.org/ChatGPT_HP.pdf

    ChatGPT:
      "Therefore, based on the understanding that self-contradictory
       questions lack a correct answer and are deemed incorrect, one could
       argue that the halting problem's pathological input D can be
       categorized as an incorrect question when posed to the halting
       decider H."



    And, as pointed out, that isn't the question being asked, so your
    argument just shows you are wrong.

    If you think that a given machine's halting property when it is run
    depends on who you ask, that shows you are just STUPID.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Sat Jun 17 18:58:35 2023
    XPost: comp.theory, sci.logic

    On 6/17/2023 6:13 PM, Richard Damon wrote:
    On 6/17/23 5:46 PM, olcott wrote:
    On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
    Richard Damon <Richard@Damon-Family.org> writes:

    Except that the Halting Problem isn't a "Self-Contradictory"
    Question, so
    the answer doesn't apply.

    That's an interesting point that would often catch students out.  And
    the reason /why/ it catches so many out eventually led me to stop using
    the proof-by-contradiction argument in my classes.

    The thing is, it looks so very much like a self-contradicting question
    is being asked.  The students think they can see it right there in the
    constructed code: "if H says I halt, I don't halt!".

    Of course, they are wrong.  The code is /not/ there.  The code calls a
    function that does not exist, so "it" (the constructed code, the whole
    program) does not exist either.

    The fact that it's code, and the students are almost all programmers and
    not mathematicians, makes it worse.  A mathematician seeing "let p be
    the largest prime" does not assume that such a p exists.  So when a
    prime number p' > p is constructed from p, this is not seen as a
    "self-contradictory number" because neither p nor p' exist.  But the
    halting theorem is even more deceptive for programmers, because the
    desired function, H (or whatever), appears to be so well defined -- much
    more well-defined than "the largest prime".  We have an exact
    specification for it, mapping arguments to returned values.  It's just
    software engineering to write such things (they erroneously assume).

    These sorts of proof can always be re-worded so as to avoid the initial
    assumption.  For example, we can start "let p be any prime", and from p
    we construct a prime p' > p.  And for halting, we can start "let H be
    any subroutine of two arguments always returning true or false".  Now,
    all the objects /do/ exist.  In the first case, the construction shows
    that no prime is the largest, and in the second it shows that no
    subroutine computes the halting function.

    This issue led to another change.  In the last couple of years, I would
    start the course by setting Post's correspondence problem as if it were
    just a fun programming challenge.  As the days passed (and the course
    got into more and more serious material) it would start to become clear
    that this was no ordinary programming challenge.  Many students started
    to suspect that, despite the trivial sounding specification, no program
    could do the job.  I always felt a bit uneasy doing this, as if I was
    not being 100% honest, but it was a very useful learning experience for
    most.


    sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
        You ask someone (we'll call him "Jack") to give a truthful
        yes/no answer to the following question:

        Will Jack's answer to this question be no?

        Jack can't possibly give a correct yes/no answer to the question.

    It is an easily verified fact that when Jack's question is posed to Jack,
    this question is self-contradictory for Jack or anyone else having
    a pathological relationship to the question.

    But the problem is "Jack" here is assumed to be a volitional being.

    H is not, it is a program, so before we even ask H what will happen, the
    answer has been fixed by the definition of the code of H.


    It is also clear that when a question has no yes or no answer because
    it is self-contradictory, it is aptly classified as
    incorrect.

    And the actual question DOES have a yes or no answer: in this case,
    since H(D,D) says 0 (non-halting), the actual answer to the question
    "does D(D) halt?" is YES.

    You just confuse yourself by trying to imagine a program that can
    somehow change itself "at will".


    It is incorrect to say that a question is not self-contradictory on the
    basis that it is not self-contradictory in some contexts. If a question
    is self-contradictory in some contexts then in these contexts it is an
    incorrect question.

    In what context does "Does the machine D(D) halt when run?" become
    self-contradictory?
    When this question is posed to machine H.

    Jack could be asked the question:
    Will Jack answer "no" to this question?

    For Jack it is self-contradictory; for others that are not
    Jack it is not self-contradictory. Context changes the semantics.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to Jeff Barnett on Sat Jun 17 19:18:21 2023
    XPost: comp.theory, sci.logic

    On 6/17/23 6:03 PM, Jeff Barnett wrote:

    By the way, we have noticed that you haven't played the big "C" card recently. Is this 1) an immaculate cure, 2) you putting on your big boy
    pants and taking responsibility for your own sorry life and mind, or 3)
    the time where you try to wiggle out of a past sequel of lies? We've
    seen all but variation 2 in past interactions. The curious want to know
    the real skinny so speak up!
    --
    Jeff Barnett


    My assumption (but just that) is that it has been a lie the whole time
    to try to gain sympathy. He has earned no reputation for honesty, and so
    none will be given.

    I will admit he might have been sick, but there has been no actual
    evidence of it, so it is merely an unsubstantiated claim.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Sat Jun 17 18:44:16 2023
    XPost: comp.theory, sci.logic

    On 6/17/2023 6:18 PM, Richard Damon wrote:
    On 6/17/23 6:03 PM, Jeff Barnett wrote:

    By the way, we have noticed that you haven't played the big "C" card
    recently. Is this 1) an immaculate cure, 2) you putting on your big
    boy pants and taking responsibility for your own sorry life and mind,
    or 3) the time where you try to wiggle out of a past sequel of lies?
    We've seen all but variation 2 in past interactions. The curious want
    to know the real skinny so speak up!
    --
    Jeff Barnett


    My assumption (but just that) is that it has been a lie the whole time
    to try to gain sympathy. He has earned no reputation for honesty, and so
    none will be given.

    I will admit he might have been sick, but there has been no actual
    evidence of it, so it is merely an unsubstantiated claim.

    I did have cancer jam packed in every lymph node.
    After chemotherapy last summer this has cleared up.

    It is my current understanding that Follicular Lymphoma always
    comes back eventually.

    A FLIPI index score of 3 was very bad news.
    A 53% five-year survival rate and a 35% 10-year survival rate.
    https://www.nature.com/articles/s41408-019-0269-6




    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sat Jun 17 21:46:13 2023
    XPost: comp.theory, sci.logic

    On 6/17/23 7:44 PM, olcott wrote:
    On 6/17/2023 6:18 PM, Richard Damon wrote:
    On 6/17/23 6:03 PM, Jeff Barnett wrote:

    By the way, we have noticed that you haven't played the big "C" card
    recently. Is this 1) an immaculate cure, 2) you putting on your big
    boy pants and taking responsibility for your own sorry life and mind,
    or 3) the time where you try to wiggle out of a past sequel of lies?
    We've seen all but variation 2 in past interactions. The curious want
    to know the real skinny so speak up!
    --
    Jeff Barnett


    My assumption (but just that) is that it has been a lie the whole time
    to try to gain sympathy. He has earned no reputation for honesty, and
    so none will be given.

    I will admit he might have been sick, but there has been no actual
    evidence of it, so it is merely an unsubstantiated claim.

    I did have cancer jam packed in every lymph node.
    After chemotherapy last summer this has cleared up.

    It is my current understanding that Follicular Lymphoma always
    comes back eventually.

    A FLIPI index score of 3 was very bad news.
    A 53% five-year survival rate and a 35% 10-year survival rate.
    https://www.nature.com/articles/s41408-019-0269-6


    Which is a fairly amazing recovery, as your reports from a year and a
    half ago were something like 90% dead by the end of last year from my
    memory.

    I won't say you are lying, as I have no evidence, and do admit you could
    be telling the truth, but considering your veracity on other topics, you
    have no credit earned in believability, and shading some of the truth is
    an act I wouldn't put past you.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sat Jun 17 21:31:58 2023
    XPost: comp.theory, sci.logic

    On 6/17/23 7:58 PM, olcott wrote:
    On 6/17/2023 6:13 PM, Richard Damon wrote:
    On 6/17/23 5:46 PM, olcott wrote:
    On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
    Richard Damon <Richard@Damon-Family.org> writes:

    Except that the Halting Problem isn't a "Self-Contradictory"
    Question, so
    the answer doesn't apply.

    That's an interesting point that would often catch students out.  And
    the reason /why/ it catches so many out eventually led me to stop using
    the proof-by-contradiction argument in my classes.

    The thing is, it looks so very much like a self-contradicting question
    is being asked.  The students think they can see it right there in the
    constructed code: "if H says I halt, I don't halt!".

    Of course, they are wrong.  The code is /not/ there.  The code calls a
    function that does not exist, so "it" (the constructed code, the whole
    program) does not exist either.

    The fact that it's code, and the students are almost all programmers and
    not mathematicians, makes it worse.  A mathematician seeing "let p be
    the largest prime" does not assume that such a p exists.  So when a
    prime number p' > p is constructed from p, this is not seen as a
    "self-contradictory number" because neither p nor p' exist.  But the
    halting theorem is even more deceptive for programmers, because the
    desired function, H (or whatever), appears to be so well defined -- much
    more well-defined than "the largest prime".  We have an exact
    specification for it, mapping arguments to returned values.  It's just
    software engineering to write such things (they erroneously assume).

    These sorts of proof can always be re-worded so as to avoid the initial
    assumption.  For example, we can start "let p be any prime", and from p
    we construct a prime p' > p.  And for halting, we can start "let H be
    any subroutine of two arguments always returning true or false".  Now,
    all the objects /do/ exist.  In the first case, the construction shows
    that no prime is the largest, and in the second it shows that no
    subroutine computes the halting function.

    This issue led to another change.  In the last couple of years, I would
    start the course by setting Post's correspondence problem as if it were
    just a fun programming challenge.  As the days passed (and the course
    got into more and more serious material) it would start to become clear
    that this was no ordinary programming challenge.  Many students started
    to suspect that, despite the trivial sounding specification, no program
    could do the job.  I always felt a bit uneasy doing this, as if I was
    not being 100% honest, but it was a very useful learning experience for
    most.


    sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
        You ask someone (we'll call him "Jack") to give a truthful
        yes/no answer to the following question:

        Will Jack's answer to this question be no?

        Jack can't possibly give a correct yes/no answer to the question.

    It is an easily verified fact that when Jack's question is posed to Jack,
    this question is self-contradictory for Jack or anyone else having
    a pathological relationship to the question.

    But the problem is "Jack" here is assumed to be a volitional being.

    H is not, it is a program, so before we even ask H what will happen,
    the answer has been fixed by the definition of the code of H.


    It is also clear that when a question has no yes or no answer because
    it is self-contradictory, it is aptly classified as
    incorrect.

    And the actual question DOES have a yes or no answer: in this case,
    since H(D,D) says 0 (non-halting), the actual answer to the question
    "does D(D) halt?" is YES.

    You just confuse yourself by trying to imagine a program that can
    somehow change itself "at will".


    It is incorrect to say that a question is not self-contradictory on the
    basis that it is not self-contradictory in some contexts. If a question
    is self-contradictory in some contexts then in these contexts it is an
    incorrect question.

    In what context does "Does the machine D(D) halt when run?" become
    self-contradictory?
    When this question is posed to machine H.

    Jack could be asked the question:
    Will Jack answer "no" to this question?

    For Jack it is self-contradictory; for others that are not
    Jack it is not self-contradictory. Context changes the semantics.


    But you are missing the difference. A decider is a fixed piece of code,
    so its answer to this question has been fixed ever since it was
    designed. Thus what it will say isn't a variable that can lead to the
    self-contradiction cycle, but a fixed result that will either be correct
    or incorrect.

    A given H can't help but give the answer its program says it will give,
    and thus it doesn't matter that we are asking H itself, as its answer is
    already fixed.

    You are confusing logic about volitional beings with logic about fixed procedures.

    Add in that if you actually did it right, and the input had a new copy
    of a program equivalent to H, then the method used by H to detect the
    "pathological" interaction becomes impossible. (This is why you need to
    precisely define what you mean by "pathological relationship": you will
    find that either your H can't detect it, or we can make a variation on H
    that D can use that doesn't meet your definition of pathological but
    still makes H wrong.)
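
    A sketch of that last point, with invented names: if D2 calls its own
    copy H2 of H's algorithm rather than H itself, a detector inside H that
    looks for calls to H finds nothing, yet H is still wrong. The trivial
    bodies are assumptions made so the sketch runs:

        #include <stdio.h>

        typedef int (*ptr)();

        int D2(ptr p);   /* forward declaration */

        /* H2: a separate, line-for-line copy of H's algorithm. */
        int H2(ptr p, ptr i) { return 1; }

        /* H: since D2 calls H2 rather than H, a "does the input call me?"
           detector finds nothing, and H just reports "halts" (1). */
        int H(ptr p, ptr i) { return 1; }

        int D2(ptr p)
        {
            if (H2(p, p)) for (;;) ;   /* H2 says "halts", so D2 loops */
            return 0;
        }

        int main(void)
        {
            /* H(D2,D2) == 1 claims D2(D2) halts, but D2(D2) never
               returns: the undetected copy makes H wrong anyway. */
            printf("H(D2,D2) = %d, but D2(D2) never halts\n", H(D2, D2));
            return 0;
        }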

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Sat Jun 17 21:35:14 2023
    XPost: comp.theory, sci.logic

    On 6/17/2023 8:46 PM, Richard Damon wrote:
    On 6/17/23 7:44 PM, olcott wrote:
    On 6/17/2023 6:18 PM, Richard Damon wrote:
    On 6/17/23 6:03 PM, Jeff Barnett wrote:

    By the way, we have noticed that you haven't played the big "C" card
    recently. Is this 1) an immaculate cure, 2) you putting on your big
    boy pants and taking responsibility for your own sorry life and
    mind, or 3) the time where you try to wiggle out of a past sequel of
    lies? We've seen all but variation 2 in past interactions. The
    curious want to know the real skinny so speak up!
    --
    Jeff Barnett


    My assumption (but just that) is that it has been a lie the whole
    time to try to gain sympathy. He has earned no reputation for honesty,
    and so none will be given.

    I will admit he might have been sick, but there has been no actual
    evidence of it, so it is merely an unsubstantiated claim.

    I did have cancer jam packed in every lymph node.
    After chemotherapy last summer this has cleared up.

    It is my current understanding that Follicular Lymphoma always
    comes back eventually.

    A FLIPI index score of 3 was very bad news.
    A 53% five-year survival rate and a 35% 10-year survival rate.
    https://www.nature.com/articles/s41408-019-0269-6


    Which is a fairly amazing recovery, as your reports from a year and a
    half ago were something like 90% dead by the end of last year from my
    memory.

    I won't say you are lying, as I have no evidence, and do admit you could
    be telling the truth, but considering your veracity on other topics, you
    have no credit earned in believability, and shading some of the truth is
    an act I wouldn't put past you.


    It is not the case that I ever lied on this forum. Most people
    make the mistake of calling me a liar entirely on the basis that
    they really really don't believe me and what I say goes against
    conventional wisdom.

    Most people seem to take conventional wisdom as the infallible
    word of God.


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Sat Jun 17 21:29:50 2023
    XPost: comp.theory, sci.logic

    On 6/17/2023 8:31 PM, Richard Damon wrote:
    On 6/17/23 7:58 PM, olcott wrote:
    On 6/17/2023 6:13 PM, Richard Damon wrote:
    On 6/17/23 5:46 PM, olcott wrote:
    On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
    Richard Damon <Richard@Damon-Family.org> writes:

    Except that the Halting Problem isn't a "Self-Contradictory"
    Question, so
    the answer doesn't apply.

    That's an interesting point that would often catch students out.  And
    the reason /why/ it catches so many out eventually led me to stop using
    the proof-by-contradiction argument in my classes.

    The thing is, it looks so very much like a self-contradicting question
    is being asked.  The students think they can see it right there in the
    constructed code: "if H says I halt, I don't halt!".

    Of course, they are wrong.  The code is /not/ there.  The code calls a
    function that does not exist, so "it" (the constructed code, the whole
    program) does not exist either.

    The fact that it's code, and the students are almost all programmers and
    not mathematicians, makes it worse.  A mathematician seeing "let p be
    the largest prime" does not assume that such a p exists.  So when a
    prime number p' > p is constructed from p, this is not seen as a
    "self-contradictory number" because neither p nor p' exist.  But the
    halting theorem is even more deceptive for programmers, because the
    desired function, H (or whatever), appears to be so well defined -- much
    more well-defined than "the largest prime".  We have an exact
    specification for it, mapping arguments to returned values.  It's just
    software engineering to write such things (they erroneously assume).

    These sorts of proof can always be re-worded so as to avoid the initial
    assumption.  For example, we can start "let p be any prime", and from p
    we construct a prime p' > p.  And for halting, we can start "let H be
    any subroutine of two arguments always returning true or false".  Now,
    all the objects /do/ exist.  In the first case, the construction shows
    that no prime is the largest, and in the second it shows that no
    subroutine computes the halting function.

    This issue led to another change.  In the last couple of years, I would
    start the course by setting Post's correspondence problem as if it were
    just a fun programming challenge.  As the days passed (and the course
    got into more and more serious material) it would start to become clear
    that this was no ordinary programming challenge.  Many students started
    to suspect that, despite the trivial sounding specification, no program
    could do the job.  I always felt a bit uneasy doing this, as if I was
    not being 100% honest, but it was a very useful learning experience for
    most.


    sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
        You ask someone (we'll call him "Jack") to give a truthful
        yes/no answer to the following question:

        Will Jack's answer to this question be no?

        Jack can't possibly give a correct yes/no answer to the question.

    It is an easily verified fact that when Jack's question is posed to
    Jack that this question is self-contradictory for Jack, or anyone else
    having a pathological relationship to the question.

    But the problem is "Jack" here is assumed to be a volitional being.

    H is not, it is a program, so before we even ask H what will happen,
    the answer has been fixed by the definition of the code of H.


    It is also clear that when a question has no yes or no answer because
    it is self-contradictory, this question is aptly classified as
    incorrect.

    And the actual question DOES have a yes or no answer: in this case,
    since H(D,D) says 0 (non-Halting), the actual answer to the question
    "does D(D) Halt?" is YES.

    You just confuse yourself by trying to imagine a program that can
    somehow change itself "at will".


    It is incorrect to say that a question is not self-contradictory on the
    basis that it is not self-contradictory in some contexts. If a question
    is self-contradictory in some contexts then in these contexts it is an
    incorrect question.

    In what context does "Does the Machine D(D) Halt When run" become
    self-contradictory?
    When this question is posed to machine H.

    Jack could be asked the question:
    Will Jack answer "no" to this question?

    For Jack it is self-contradictory; for others that are not
    Jack it is not self-contradictory. Context changes the semantics.


    But you are missing the difference. A Decider is a fixed piece of code,
    so its answer to this question has been fixed ever since it was
    designed. Thus what it will say isn't a variable that can lead to the
    self-contradiction cycle, but a fixed result that will either be correct
    or incorrect.


    Every input to a Turing machine decider such that both Boolean return
    values are incorrect is an incorrect input.
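
    To make the construction being argued over concrete, here is a minimal
    sketch in C (the names H and D follow the thread's usage; the stub body
    of H is an illustrative assumption, since no real halt decider exists
    to call):

        #include <stdio.h>

        typedef void (*prog)(void *);

        /* Assumed decider: H(M, x) is supposed to return 1 if M(x) halts
           and 0 if it does not.  This stub hard-codes the answer 0. */
        int H(prog M, void *x) { (void)M; (void)x; return 0; }

        /* The "pathological" input: D does the opposite of whatever H
           predicts about D. */
        void D(void *x) {
            if (H((prog)x, x))   /* H says "halts"?              */
                for (;;) ;       /* ...then loop forever.        */
            /* H says "does not halt"? ...then halt immediately. */
        }

        int main(void) {
            int verdict = H(D, (void *)D);   /* 0: "does not halt"   */
            D((void *)D);                    /* halts, so H is wrong */
            printf("H(D,D) = %d, yet D(D) just halted\n", verdict);
            return 0;
        }

    Had the stub answered 1 instead, D(D) would loop forever and H would
    again be wrong; that forced error on this one input is what both sides
    keep circling.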



    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sat Jun 17 22:57:19 2023
    XPost: comp.theory, sci.logic

    On 6/17/23 10:29 PM, olcott wrote:
    [...]
    Every input to a Turing machine decider such that both Boolean return
    values are incorrect is an incorrect input.


    Except it isn't. The problem is you are looking at two different
    machines and two different inputs.

    If you define your H0 to return 0 when given the input <D0> <D0>, for
    the D0 built on H0, then since D0 applied to <D0> will halt, the correct
    answer is 1. If H0 returned that answer, it would have been correct, but
    since H0 was defined with code that answers 0, that is the only thing
    that it can answer.

    On the other hand, if you instead defined a DIFFERENT machine H1 that
    uses similar logic, but instead of returning Non-Halting returned
    Halting, then H1 applied to <D0> <D0> would abort its simulation and
    return 1, and it would have been correct. The problem here is that since
    H1 is a different machine, its "pathological" program is different
    (since it will be built on H1, not H0), and H1 applied to <D1> <D1> will
    abort its simulation and return 1, but D1 applied to <D1> will go into
    an infinite loop, so the correct answer should have been 0.

    So, the problem is that the two cases you are looking at are DIFFERENT
    inputs, because they are built on DIFFERENT machines. You don't seem to
    understand that a machine WILL generate the results that machine is
    programmed for, so "hypotheticals" about it doing something different
    are just looking at impossible actions.
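
    A small sketch may pin the point down: H0 and H1 below are stand-in
    deciders with opposite hard-coded verdicts (illustrative stubs, not
    real deciders), and D0 and D1 are the corresponding "pathological"
    programs, each built on its own decider:

        #include <stdio.h>

        typedef void (*prog)(void *);

        int H0(prog M, void *x) { (void)M; (void)x; return 0; } /* "never halts"  */
        int H1(prog M, void *x) { (void)M; (void)x; return 1; } /* "always halts" */

        /* Same template, different decider called => different program. */
        void D0(void *x) { if (H0((prog)x, x)) for (;;) ; }
        void D1(void *x) { if (H1((prog)x, x)) for (;;) ; }

        int main(void) {
            printf("H0(D0,D0)=%d, but D0(D0) halts -> H0 wrong\n",
                   H0(D0, (void *)D0));
            printf("H1(D0,D0)=%d, and D0(D0) halts -> H1 right on D0\n",
                   H1(D0, (void *)D0));
            printf("H1(D1,D1)=%d, but D1(D1) loops -> H1 wrong on D1\n",
                   H1(D1, (void *)D1));
            /* D1((void *)D1) would loop forever, so it is not run here. */
            return 0;
        }

    Each decider is only beaten by the input built on itself; the input
    that defeats H0 is not the input that defeats H1.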

    So, it isn't the case that both answers are wrong for the same question;
    it is that the question changes when you alter your decider, and
    whatever answer you make your decider give will be wrong, and the other
    one right.

    Other deciders can get the correct answer for THAT input, but there will
    be a different input, based on them, that they will get wrong.

    You just seem to have a blind spot about what needs to stay the same,
    and what changes when you play your mind games.

    You dig your gas-lit hole because you seem to naturally do the deceptive
    thing of not giving new names to things when you change them, but try to
    hide the fact that you changed things by reusing names. This is a sign
    of potentially intentional deception.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Sat Jun 17 22:10:50 2023
    XPost: comp.theory, sci.logic

    On 6/17/2023 9:57 PM, Richard Damon wrote:
    [...]
    Except it isn't. The problem is you are looking at two different
    machines and two different inputs.

    If no one can possibly say what correct return value any H<n> having a
    pathological relationship to its input D<n> could provide, then that is
    proof that D<n> is an invalid input for H<n>, in the same way that any
    self-contradictory question is an incorrect question.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sat Jun 17 23:03:26 2023
    XPost: comp.theory, sci.logic

    On 6/17/23 10:35 PM, olcott wrote:
    [...]


    It is not the case that I ever lied on this forum. Most people
    make the mistake of calling me a liar entirely on the basis that
    they really really don't believe me and what I say goes against
    conventional wisdom.

    That is not true. There have been several cases where you have said that someone said something that just wasn't true.

    You also twist the words of people claiming they gave your ideas
    support, when they did no such thing.

    You also engage in great deception by improper trimming of quotations,
    removing "inconvient" (to you) parts of statements to change the meaning
    of them.


    Most people seem to take conventional wisdom as the infallible
    word of God.



    While you think your own words are that infallible word of God, as you
    think you are him.

    You don't understand the difference between "conventional Wisdom" and
    the DEFINITION of what something is, in part, because you just don't
    understand what Truth actually is.

    You seem incapable of actually dealing with truth, which is why you are
    a pathological liar. I don't think your mind can actually handle how
    truth actually works.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sun Jun 18 08:02:23 2023
    XPost: comp.theory, sci.logic

    On 6/17/23 11:10 PM, olcott wrote:
    [...]

    If no one can possibly say what correct return value any H<n> having a
    pathological relationship to its input D<n> could provide, then that is
    proof that D<n> is an invalid input for H<n>, in the same way that any
    self-contradictory question is an incorrect question.


    But you have the wrong Question. The Question is "Does D(D) Halt?", and
    that HAS a correct answer: since your H(D,D) returns 0, the answer is
    that D(D) does Halt, and thus H was wrong.

    It isn't a proper question to ask what a given machine should return,
    since what it returns is determined by what its code is.

    The DESIGN question, of what we can design H to return for this input,
    is the one that NOW actually becomes pathological and creates a
    self-contradiction, and that just shows, as you are trying to use logic
    to show, that such a design is impossible.

    This means that the problem is unsolvable, and thus no universally
    correct halt decider can exist. It doesn't mean the problem was
    incorrect. MANY problems prove impossible to solve, but are still valid
    problems; it's just that their answer is that no solution exists.

    For instance, what are the real roots of x*x + 1 = 0? That is a problem
    with no solutions, but is still a perfectly valid question. Your logic
    system is very poor if you don't allow the asking of questions without
    solutions; in fact, such a system becomes nearly worthless, since you
    can't ask about things until you actually know the question has an
    answer, but you can't even ask whether it has an answer until you know
    whether THAT question has an answer, and so on.
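
    Spelling that example out with the standard quadratic discriminant (a
    worked line only, nothing specific to this thread):

        \Delta = b^2 - 4ac = 0^2 - 4 \cdot 1 \cdot 1 = -4 < 0

    so x*x + 1 = 0 has no real roots (over the complex numbers they are
    x = i and x = -i), yet "what are its real roots?" remains a perfectly
    well-posed question whose correct answer is "there are none".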

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Sun Jun 18 09:32:22 2023
    XPost: comp.theory, sci.logic

    On 6/18/2023 7:02 AM, Richard Damon wrote:
    [...]


    But you have the wrong Question. The Question is "Does D(D) Halt?", and
    that HAS a correct answer: since your H(D,D) returns 0, the answer is
    that D(D) does Halt, and thus H was wrong.

    sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
    You ask someone (we'll call him "Jack") to give a truthful
    yes/no answer to the following question:

    Will Jack's answer to this question be no?

    For Jack the question is self-contradictory; for others that
    are not Jack it is not self-contradictory.

    The context (of who is asked) changes the semantics.

    Every question that lacks a correct yes/no answer because
    the question is self-contradictory is an incorrect question.

    If you are not a mere Troll you will agree with this.




    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Sun Jun 18 11:41:32 2023
    XPost: comp.theory, sci.logic

    On 6/18/2023 11:31 AM, Richard Damon wrote:
    [...]
    But the ACTUAL QUESTION DOES have a correct answer.
    The actual question posed to Jack has no correct answer.
    The actual question posed to anyone else is a semantically
    different question even though the words are the same.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sun Jun 18 12:31:42 2023
    XPost: comp.theory, sci.logic

    On 6/18/23 10:32 AM, olcott wrote:
    [...]

    sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
       You ask someone (we'll call him "Jack") to give a truthful
       yes/no answer to the following question:

       Will Jack's answer to this question be no?

    For Jack the question is self-contradictory; for others that
    are not Jack it is not self-contradictory.

    The context (of who is asked) changes the semantics.

    Every question that lacks a correct yes/no answer because
    the question is self-contradictory is an incorrect question.

    If you are not a mere Troll you will agree with this.


    But the ACTUAL QUESTION DOES have a correct answer.

    You are just stuck with the wrong question.

    The Question is, "Does D(D) Halt?", asked by giving the decider the
    appropriate representation.

    Since your H(D,D) answers 0 (Non-Halting), the D, the only D in view,
    will Halt when given the input D.

    That is the correct answer no matter who you ask, and thus there is no "self-contradiction" around. We can ask H, and because H is the program
    that H is, it MUST answer 0, and is thus wrong.

    When you hypothesize that H does something different, that is just a
    LIE, because this H CAN'T do something different, not and be this H.

    You can hypothesize what would happen if H was instead H1, which acted
    differently, but then you need to be clear on what you are doing: are
    you asking H1 about D(D), or about another hypothetical D1(D1) for the
    D1 built on it?

    H1(D,D) can correctly answer the question, but that doesn't prove
    anything.

    If you look at H1(D1,D1) you see that D1(D1) is non-halting, but D1 is
    a different machine than D, so different behavior is understandable.

    Thus, your whole argument is based on the deception of assuming that
    the machine H can become a different machine (called H1 above) but still
    be the "same" H, and so the same question.

    That is just a LIE and not a valid "Hypothetical", because something
    can't be something else and still be itself. That is the thinking of
    insanity.

    You are just proving that you can not think correctly, but are stuck
    with totally invalid and unsound logic rules stuck in your mind.

    You LIE about what you are doing, and about what is true.

    YOU FAIL.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sun Jun 18 12:54:25 2023
    XPost: comp.theory, sci.logic

    On 6/18/23 12:41 PM, olcott wrote:
    On 6/18/2023 11:31 AM, Richard Damon wrote:
    On 6/18/23 10:32 AM, olcott wrote:
    On 6/18/2023 7:02 AM, Richard Damon wrote:
    On 6/17/23 11:10 PM, olcott wrote:
    On 6/17/2023 9:57 PM, Richard Damon wrote:
    On 6/17/23 10:29 PM, olcott wrote:
    On 6/17/2023 8:31 PM, Richard Damon wrote:
    On 6/17/23 7:58 PM, olcott wrote:
    On 6/17/2023 6:13 PM, Richard Damon wrote:
    On 6/17/23 5:46 PM, olcott wrote:
    On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
    Richard Damon <Richard@Damon-Family.org> writes:

    Except that the Halting Problem isn't a
    "Self-Contradictory" Quesiton, so
    the answer doesn't apply.

    That's an interesting point that would often catch students >>>>>>>>>>>> out. And
    the reason /why/ it catches so many out eventually led me to >>>>>>>>>>>> stop using
    the proof-by-contradiction argument in my classes.

    The thing is, it looks so very much like a
    self-contradicting question
    is being asked.  The students think they can see it right >>>>>>>>>>>> there in the
    constructed code: "if H says I halt, I don't halt!".

    Of course, they are wrong.  The code is /not/ there.  The >>>>>>>>>>>> code calls a
    function that does not exist, so "it" (the constructed code, >>>>>>>>>>>> the whole
    program) does not exist either.

    The fact that it's code, and the students are almost all >>>>>>>>>>>> programmers and
    not mathematicians, makes it worse.  A mathematician seeing >>>>>>>>>>>> "let p be
    the largest prime" does not assume that such a p exists.  So >>>>>>>>>>>> when a
    prime number p' > p is constructed from p, this is not seen >>>>>>>>>>>> as a
    "self-contradictory number" because neither p nor p' exist. >>>>>>>>>>>> But the
    halting theorem is even more deceptive for programmers, >>>>>>>>>>>> because the
    desired function, H (or whatever), appears to be so well >>>>>>>>>>>> defined -- much
    more well-defined than "the largest prime".  We have an exact >>>>>>>>>>>> specification for it, mapping arguments to returned values. >>>>>>>>>>>> It's just
    software engineering to write such things (they erroneously >>>>>>>>>>>> assume).

    These sorts of proof can always be re-worded so as to avoid >>>>>>>>>>>> the initial
    assumption.  For example, we can start "let p be any prime", >>>>>>>>>>>> and from p
    we construct a prime p' > p.  And for halting, we can start >>>>>>>>>>>> "let H be
    any subroutine of two arguments always returning true or >>>>>>>>>>>> false". Now,
    all the objects /do/ exist.  In the first case, the
    construction shows
    that no prime is the largest, and in the second it shows >>>>>>>>>>>> that no
    subroutine computes the halting function.

    This issue led to another change.  In the last couple of >>>>>>>>>>>> years, I would
    start the course by setting Post's correspondence problem as >>>>>>>>>>>> if it were
    just a fun programming challenge.  As the days passed (and >>>>>>>>>>>> the course
    got into more and more serious material) it would start to >>>>>>>>>>>> become clear
    that this was no ordinary programming challenge.  Many >>>>>>>>>>>> students started
    to suspect that, despite the trivial sounding specification, >>>>>>>>>>>> no program
    could do the job.  I always felt a bit uneasy doing this, as >>>>>>>>>>>> if I was
    not being 100% honest, but it was a very useful learning >>>>>>>>>>>> experience for
    most.


    sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
        You ask someone (we'll call him "Jack") to give a truthful >>>>>>>>>>>     yes/no answer to the following question:

        Will Jack's answer to this question be no?

        Jack can't possibly give a correct yes/no answer to the >>>>>>>>>>> question.

    It is an easily verified fact that when Jack's question is >>>>>>>>>>> posed to Jack
    that this question is self-contradictory for Jack or anyone >>>>>>>>>>> else having
    a pathological relationship to the question.

    But the problem is "Jack" here is assumed to be a volitional >>>>>>>>>> being.

    H is not, it is a program, so before we even ask H what will >>>>>>>>>> happen, the answer has been fixed by the definition of the >>>>>>>>>> codr of H.


    It is also clear that when a question has no yes or no answer >>>>>>>>>>> because
    it is self-contradictory that this question is aptly
    classified as
    incorrect.

    And the actual question DOES have a yes or no answer, in this >>>>>>>>>> case, since H(D,D) says 0 (non-Halting) the actual answer to >>>>>>>>>> the question does D(D) Halt is YES.

    You just confuse yourself by trying to imagine a program that >>>>>>>>>> can somehow change itself "at will".


    It is incorrect to say that a question is not
    self-contradictory on the
    basis that it is not self-contradictory in some contexts. If >>>>>>>>>>> a question
    is self-contradictory in some contexts then in these contexts >>>>>>>>>>> it is an
    incorrect question.

    In what context is "Does the Machine D(D) Halt When run"
    become self-contradictory?
    When this question is posed to machine H.

    Jack could be asked the question:
    Will Jack answer "no" to this question?

    For Jack it is self-contradictory for others that are not
    Jack it is not self-contradictory. Context changes the semantics. >>>>>>>>>

    But you are missing the difference. A Decider is a fixed piece
    of code, so its answer has always been fixed to this question
    since it has been designed. Thus what it will say isn't a
    variable that can lead to the self-contradiction cycle, but a
    fixed result that will either be correct or incorrect.


    Every input to a Turing machine decider such that both Boolean
    return values are incorrect is an incorrect input.


    Except it isn't. The problem is you are looking at two different
    machines and two different inputs.

    If no one can possibly say what correct return value any H<n> having a
    pathological relationship to its input D<n> could provide, then that is
    proof that D<n> is an invalid input for H<n>, in the same way that any
    self-contradictory question is an incorrect question.


    But you have the wrong question. The question is "Does D(D) halt?",
    and that HAS a correct answer: since your H(D,D) returns 0, the answer
    is that D(D) does halt, and thus H was wrong.

    sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
        You ask someone (we'll call him "Jack") to give a truthful
        yes/no answer to the following question:

        Will Jack's answer to this question be no?

    For Jack the question is self-contradictory; for others that
    are not Jack it is not self-contradictory.

    The context (of who is asked) changes the semantics.

    Every question that lacks a correct yes/no answer because
    the question is self-contradictory is an incorrect question.

    If you are not a mere Troll you will agree with this.


    But the ACTUAL QUESTION DOES have a correct answer.

    The actual question posed to Jack has no correct answer.
    The actual question posed to anyone else is a semantically
    different question even though the words are the same.


    But the question to Jack isn't the question you are actually saying
    doesn't have an answer.

    Yes, asking Jack (a volitional being) about what he will do in the
    future can lead to this form of self-contradiction.

    Asking a "Program" (which isn't volitional, but deterministic) doesn't,
    since the answer was fixed when the program was written.

    It is like asking Jack if the answer to the LAST question was no, but
    constraining him that he also must answer it the same as that last
    question.

    It is the constraint that makes it impossible to get a right answer,
    just as it is the fundamental constraint on a program to always give
    the same answer on the same input that makes it impossible for H to
    give the right answer.
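
    To make that fixed-answer point concrete, here is a minimal C sketch
    (an invented stand-in, NOT the actual code under discussion, with H
    stubbed to always answer 0 for "does not halt"): because H's code is
    fixed, the value of H(D,D) is fixed before the question is ever asked,
    and D is built to do the opposite of that fixed answer.

        #include <stdio.h>

        /* Invented stub, not the decider under discussion: its answer is
           fixed by its code. 0 means "does not halt", 1 means "halts". */
        int H(void *program, void *input)
        {
            (void)program;
            (void)input;
            return 0;   /* fixed at design time: always "does not halt" */
        }

        /* The pathological input: do the opposite of whatever H predicts.
           (Casting a function pointer through void * is a sketch-level
           liberty; it works on mainstream platforms.) */
        void D(void *p)
        {
            if (H(p, p))     /* if H says D(D) halts ...          */
                for (;;) ;   /* ... then loop forever             */
            /* H said "does not halt", so D returns, i.e. halts   */
        }

        int main(void)
        {
            printf("H(D,D) = %d\n", H((void *)D, (void *)D)); /* prints 0 */
            D((void *)D);   /* returns immediately: D(D) halts */
            printf("D(D) halted, so H's fixed answer 0 was simply wrong\n");
            return 0;
        }

    Whichever constant the stub is committed to, D inverts it; the question
    "does D(D) halt?" still has a definite answer for this fixed H, it is
    just the answer this H does not give.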

    You just seem to be unable to tell the difference between things that
    are different, and don't seem to understand the fundamental nature of
    programs. You don't seem to understand the actual nature of Truth and
    Knowledge or Intelligence, thinking that an "Artificial Intelligence" is
    just the same as an "Intelligent and volitional Being". This seems to be
    your insanity.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Sun Jun 18 12:09:50 2023
    XPost: comp.theory, sci.logic

    On 6/18/2023 11:54 AM, Richard Damon wrote:
    But the question to Jack isn't the question you are actually saying
    doesn't have an answer.

    The question posed to Jack does not have an answer because, within the
    context that the question is posed to Jack, it is self-contradictory.
    You can ignore that context matters, yet that is not any rebuttal.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Sun Jun 18 13:05:26 2023
    XPost: comp.theory, sci.logic

    On 6/18/2023 12:46 PM, Richard Damon wrote:
    But the question to Jack isn't the question you are actually saying
    doesn't have an answer.

    The question posed to Jack does not have an answer because, within the
    context that the question is posed to Jack, it is self-contradictory.
    You can ignore that context matters, yet that is not any rebuttal.


    Right, but that has ZERO bearing on the Halting Problem,
    That is great; we made excellent progress on this.

    When ChatGPT understood that Jack's question is self-contradictory for
    Jack, it was also able to understand the following isomorphism:

    For every H<n> on pathological input D<n> both Boolean return values
    from H<n> are incorrect for D<n>, proving that D<n> is isomorphic to a
    self-contradictory question for every H<n>.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sun Jun 18 13:46:00 2023
    XPost: comp.theory, sci.logic

    On 6/18/23 1:09 PM, olcott wrote:
    But the question to Jack isn't the question you are actually saying
    doesn't have an answer.

    The question posed to Jack does not have an answer because, within the
    context that the question is posed to Jack, it is self-contradictory.
    You can ignore that context matters, yet that is not any rebuttal.


    Right, but that has ZERO bearing on the Halting Problem, and the fact
    that you don't see that and have gotten stuck on this Red Herring
    just shows your ignorance.

    You are just proving that your personal logic system is filled with all
    the logical fallacies in the book, so we can't trust anything you say to
    have actual meaning.

    You have relegated yourself to the ash heap of history.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Sun Jun 18 13:30:33 2023
    XPost: comp.theory, sci.logic

    On 6/18/2023 1:20 PM, Richard Damon wrote:
    Right, but that has ZERO bearing on the Halting Problem,
    That is great; we made excellent progress on this.

    When ChatGPT understood that Jack's question is self-contradictory for
    Jack, it was also able to understand the following isomorphism:

    For every H<n> on pathological input D<n> both Boolean return values
    from H<n> are incorrect for D<n>, proving that D<n> is isomorphic to a
    self-contradictory question for every H<n>.


    No, because a given H<n> can only give one result,
    In other words you fail to understand that when Jack's question is posed
    to someone else it remains self-contradictory.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sun Jun 18 14:20:44 2023
    XPost: comp.theory, sci.logic

    On 6/18/23 2:05 PM, olcott wrote:
    But the question to Jack isn't the question you are actually saying
    doesn't have an answer.

    The question posed to Jack does not have an answer because, within the
    context that the question is posed to Jack, it is self-contradictory.
    You can ignore that context matters, yet that is not any rebuttal.


    Right, but that has ZERO bearing on the Halting Problem,
    That is great; we made excellent progress on this.

    When ChatGPT understood that Jack's question is self-contradictory for
    Jack, it was also able to understand the following isomorphism:

    For every H<n> on pathological input D<n> both Boolean return values
    from H<n> are incorrect for D<n>, proving that D<n> is isomorphic to a
    self-contradictory question for every H<n>.


    No, because a given H<n> can only give one result, the result that its
    code will generate. The other "possible output" is actually impossible
    for THAT H<n> to generate, and thus talking about it doing so is
    invalid logic.

    For example, for your defined H in your sample code, since H(D,D)
    returns 0, the correct answer is 1, so it is not the case that both
    answers are incorrect; only the answer that H gives is.

    Your logic is like asking "what is the color of a black cat that is
    white?" The question has an illogical premise (that something that is
    black can be white), just like your question does: that a given program
    COULD return both answers. The things that produce the two answers are
    different programs.

    The key point is that whatever value a given H generates, the OTHER
    value would have been correct, due to the "pathological" nature of D<n>.

    Thus, for every questions "Does D<n>(D<n>) Halt" there IS a correct
    value that a correct halt decider should return. The problem is that, by
    its code, H<n> doesn't happen to generate that value.
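
    A hedged C sketch of that point, with invented names H0/H1/D0/D1
    standing in for the H<n>/D<n> family: each candidate decider is
    committed by its code to one of the two Boolean answers, and in each
    case the OTHER answer is the correct one for its own pathological input.

        #include <stdio.h>

        /* Two-member family of candidate deciders, one per fixed answer. */
        int H0(void *p, void *i) { (void)p; (void)i; return 0; } /* "never halts"  */
        int H1(void *p, void *i) { (void)p; (void)i; return 1; } /* "always halts" */

        /* Each D<n> contradicts its own H<n>. */
        void D0(void *p) { if (H0(p, p)) for (;;) ; }  /* H0 says 0; D0 halts */
        void D1(void *p) { if (H1(p, p)) for (;;) ; }  /* H1 says 1; D1 loops */

        int main(void)
        {
            printf("H0(D0,D0) = %d\n", H0((void *)D0, (void *)D0)); /* 0 */
            D0((void *)D0);  /* returns: the correct answer for D0 was 1 */
            printf("yet D0(D0) halted, so 1 was the correct answer\n");
            /* D1(D1) would loop forever, so 0 was the correct answer for
               D1; we do not call it here. Either way a correct answer
               exists; the committed H<n> just fails to produce it. */
            return 0;
        }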

    You are confusing the volitional Jack with the deterministic H<n>. Jack,
    because his future choice isn't fixed, and he has free will to choose
    his answer, sees the self-contradiction. H<n>, which doesn't have
    free will, and whose answer has been fixed by its programming, is just
    wrong; the correct answer does exist, it just doesn't give it.

    You get yourself stuck on the WRONG question, the one actually put to
    the free-will designer: what should I program my H to generate for a
    problem generated by this template? Yes, THAT question is
    self-contradictory, which shows that the programmer can't write a valid
    program to give the right answer, because the correct answer is not
    computable. The key is that the actual Halting Question can't possibly
    be asked until the programmer commits themselves to a claimed answer,
    as the input program can't exist until the claimed decider exists as an
    actual program, and once you do that, the self-contradiction has been
    resolved, and the decider is proven wrong.
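
    One way to picture that, as a hedged C sketch with invented names
    (D_template, committed_H, H_commit are illustrative, not anyone's
    actual code): D is only a template over the decider, so the input
    program, and with it the halting question about it, does not exist
    until the designer commits to a concrete H; once committed, the
    behavior of the resulting D is settled.

        /* Invented illustration of the "template" idea. */
        typedef int (*decider)(void *program, void *input);

        static decider committed_H;  /* unset until the designer commits */

        /* This cannot run, and "does D_template(D_template) halt?" cannot
           even be asked, until committed_H names an actual program.     */
        void D_template(void *p)
        {
            if (committed_H(p, p))
                for (;;) ;   /* contradict a "halts" verdict             */
            /* contradict a "does not halt" verdict by returning/halting */
        }

        /* One possible commitment: a decider fixed to answer 0. */
        static int H_commit(void *p, void *i) { (void)p; (void)i; return 0; }

        int main(void)
        {
            committed_H = H_commit;          /* the designer commits ...   */
            D_template((void *)D_template);  /* ... and D(D) halts, so the
                                                committed answer 0 is wrong */
            return 0;
        }

    Setting committed_H to any concrete decider resolves the apparent
    self-contradiction into a plain fact: that decider's fixed verdict on
    its own D is wrong, and the other verdict was correct.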

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sun Jun 18 14:43:22 2023
    XPost: comp.theory, sci.logic

    On 6/18/23 2:30 PM, olcott wrote:
    Right, but that has ZERO bearig on the Halting Problem,
    That is great we made excellent progress on this.

    When ChatGPT understood that Jack's question is self-contradictory for
    Jack then it was also able to understand the following isomorphism:

    For every H<n> on pathological input D<n> both Boolean return values
    from H<n> are incorrect for D<n> proving that D<n> is isomorphic to a
    self-contradictory question for every H<n>.


    No, because a given H<n> can only give one result,
    In other words you fail to understand that when Jack's question is posed
    to someone else that it remains self-contradictory.


    And you fail to understand that the nature of the halting question is
    fundamentally different from the question to Jack.

    The question to Jack is about the future behavior of a volitional
    being, so it doesn't have a "correct" answer until some point after
    it is answered. The halting problem is about the result of a
    deterministic computation, which has a correct answer from the moment
    the question can be asked (arguably even somewhat before).

    There are philosophical arguments about whether the Jack question,
    asked of someone besides Jack, even HAS a correct answer at the point
    it is asked, since the answer doesn't acquire a truth value until
    Jack answers his next question.

    On the other hand, the question about the behavior of D(D) has a
    correct answer as soon as D is actually constructed (or defined),
    which requires that H be constructed (or fully defined). At that
    point, the answer is fixed, and just happens to be the one that makes
    H incorrect (if H answers).

    The question you keep looking at isn't the actual halting question,
    but a design question on the path of trying to make a correct decider
    H. The fact that THIS question leads you to the impossible state
    shows that there cannot be a correct H, not that the halting question
    is malformed. It just shows that the halting question isn't
    computable. It is answerable, just maybe not by a "computation".
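
    As a minimal sketch of this point (the concrete H below is
    illustrative; the only thing that matters is that a fixed H must
    commit to some one answer, here 0):

    #include <stdio.h>

    typedef void (*fn)(void);

    int H(fn M, fn x);         /* the hypothetical fixed decider         */

    void D(void)               /* the input constructed against this H   */
    {
        if (H(D, D))           /* if H says "D(D) halts" ...             */
            for (;;) {}        /* ... then loop forever                  */
    }                          /* if H says "doesn't halt", then halt    */

    /* A fixed program commits to one value; suppose this H answers 0. */
    int H(fn M, fn x) { (void)M; (void)x; return 0; }

    int main(void)
    {
        D();                   /* completes, so D(D) halts               */
        printf("H(D,D) = %d, yet D(D) halted.\n", H(D, D));
        return 0;
    }

    Swapping in an H that returns 1 instead gives a different program
    whose D loops forever, so that H is wrong in the other direction.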

    You are just showing your inability to actually distinguish between
    things that are categorically different, because you have lost your
    connection to reality.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Sun Jun 18 13:47:44 2023
    XPost: comp.theory, sci.logic

    On 6/18/2023 1:20 PM, Richard Damon wrote:
    [...]

    When ChatGPT understood that Jack's question is self-contradictory for
    Jack then it was also able to understand the following isomorphism:

    For every H<n> on pathological input D<n> both Boolean return values
    from H<n> are incorrect for D<n> proving that D<n> is isomorphic to a
    self-contradictory question for every H<n>.


    No, because a given H<n> can only give one result,
    Some of the pairs H<n>/D<n> are identical to each other except for
    the return value from H. In both of these cases the return value is
    incorrect.

    Since I have just defined the set of every halting problem
    {decider / input} pair that can possibly exist in any universe, there
    is no rebuttal of: "What about this element of the set?"



    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sun Jun 18 15:19:09 2023
    XPost: comp.theory, sci.logic

    On 6/18/23 2:47 PM, olcott wrote:
    [...]

    No, because a given H<n> can only give one result,
    Some of the elements of H<n>/D<n> are identical except for the return
    value from H. In both of these cases the return value is incorrect.

    Nope, can't be. The code of H<n> fixes the return value that H<n>
    returns when given the input D<n>, D<n>, so there MUST be a
    difference besides what the code returns.

    Please show us a specific H<n> that returns both values. It must be
    exactly the identical code for H in the two cases (since that is what
    defines a program).

    Then show where in the execution of this H it gets into two different
    states from the same input (which is all that is allowed to affect the
    results of the computation) so that it can return two different values.

    This question has been put to you before, and you keep on ducking it
    because it calls your bluff.

    Failure to answer is an admission that you are just being a pathological
    liar about the actual behavior of H.
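
    As a minimal sketch of what is being asked for (H0/H1/D0/D1 are
    illustrative names): two programs that differ only in the 0/1
    constant are two different programs, each paired with its own
    counterexample input, and each returns exactly one fixed value:

    #include <stdio.h>

    typedef void (*fn)(void);

    /* Two DIFFERENT fixed programs, identical except for the constant. */
    int H0(fn M, fn x) { (void)M; (void)x; return 0; }
    int H1(fn M, fn x) { (void)M; (void)x; return 1; }

    /* Each decider gets its OWN counterexample, built from it. */
    void D0(void) { if (H0(D0, D0)) for (;;) {} } /* H0 says 0: D0 halts */
    void D1(void) { if (H1(D1, D1)) for (;;) {} } /* H1 says 1: D1 loops
                                                     forever (never run
                                                     here)               */

    int main(void)
    {
        /* Each fixed program yields one fixed value every time it runs:
           H0 is wrong about D0, H1 is wrong about D1, and neither ever
           returns "both values" on the same input. */
        printf("H0(D0,D0) = %d, H1(D1,D1) = %d\n",
               H0(D0, D0), H1(D1, D1));
        return 0;
    }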


    Since I have just defined the set of every halting problem {decider /
    input} pair that can possibly exist in any universe there is no rebuttal
    of: What about this element of this set?


    Yes, you have postulated EVERY possible H and its corresponding D,
    and ALL the H's return the wrong value for their D, and thus you have
    just repeated the proof that a correct H cannot exist.

    No element of your set meets the requirements.

    Note "Programs" are not "Sets of Programs", you are making a categorial
    error.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sun Jun 18 16:10:49 2023
    XPost: comp.theory, sci.logic

    On 6/18/23 3:26 PM, olcott wrote:
    [...]

    No, because a given H<n> can only give one result,
    Some of the elements of H<n>/D<n> are identical except for the return
    value from H. In both of these cases the return value is incorrect.

    Nope, can't be.

    The only difference between otherwise identical pairs of pairs
    H<n>/D<n> and H<m>/D<m> is the single integer value of 0/1 within
    H<n> and H<m> respectively, thus proving that both True and False are
    the wrong return value for the identical finite string pairs
    D<n>/D<m>.



    So they are different programs. Different is different. Almost the same
    is not the same.

    Unless you are claiming that 1 is the same as 0, they are different.

    So, your claim is based on a LIE, or you are admitting you are insane.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Sun Jun 18 14:26:12 2023
    XPost: comp.theory, sci.logic

    On 6/18/2023 2:19 PM, Richard Damon wrote:
    [...]

    No, because a given H<n> can only give one result,
    Some of the elements of H<n>/D<n> are identical except for the return
    value from H. In both of these cases the return value is incorrect.

    Nope, can't be.

    The only difference between otherwise identical pairs of pairs
    H<n>/D<n> and H<m>/D<m> is the single integer value of 0/1 within
    H<n> and H<m> respectively, thus proving that both True and False are
    the wrong return value for the identical finite string pairs
    D<n>/D<m>.
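
    Read as code, this amounts to the D text being one and the same
    string in both pairs, with the 0/1 constant inside H as the only
    difference. A minimal sketch of that reading (assuming D names its
    decider by the fixed identifier H, and the pairing is done when the
    two files are linked):

    #include <stdio.h>

    /* D.c -- one finite string, textually identical in every pair */
    extern int H(void (*M)(void), void (*x)(void));

    void D(void)
    {
        if (H(D, D))        /* do the opposite of whatever H answers */
            for (;;) {}     /* H said "halts": loop forever          */
    }                       /* H said "doesn't halt": return (halt)  */

    /* Hn.c -- the only textual difference between pairs is here */
    int H(void (*M)(void), void (*x)(void)) { return 0; }
    /* Hm.c would be identical except that it reads:    return 1; */

    int main(void)
    {
        D();                /* with the return-0 H linked in, D halts */
        printf("the paired H answered 0, yet D halted\n");
        return 0;
    }

    Whether the two pairs then count as "the same input" turns on whether
    the input is taken to be the D text alone or the complete linked
    program including its H; that is exactly the point in dispute in
    these posts.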


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sun Jun 18 19:59:13 2023
    XPost: comp.theory, sci.logic

    On 6/18/23 7:43 PM, olcott wrote:
    [...]


    So they are different programs. Different is different. Almost the
    same is not the same.

    Unless you are claiming that 1 is the same as 0, they are different.

    So, your claim is based on a LIE, or you are admitting you are insane.



    The key difference with my work that is a true innovation in this field
    is that H specifically recognizes self-contradictory inputs and rejects
    them.

    *Termination Analyzer H prevents Denial of Service attacks* https://www.researchgate.net/publication/369971402_Termination_Analyzer_H_prevents_Denial_of_Service_attacks



    Except the input isn't self-contradictory, since the input can't
    exist until H is defined, and once H is defined, the input has
    definite behavior, so there is no self-contradiction possible, only
    error.

    Since the H that you are analyzing isn't actually a program yet,
    since its behavior has not been fixed, the point where you hit your
    contradiction is just in the DESIGN phase, showing that no H that
    meets the requirements can be built, proving the theorem you claim to
    be refuting, showing yourself to be a LIAR.

    You are just showing you don't understand what a program actually is.
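
    As a minimal sketch of that design-phase point (the two candidates
    below are illustrative stand-ins for the infinitely many possible
    deciders): for any candidate supplied as a parameter, the contrarian
    D is fully determined, and every candidate turns out wrong about its
    own D:

    #include <stdio.h>

    /* A candidate halt decider: given (a code number for) a program,
       answer 1 = "halts" or 0 = "does not halt".                     */
    typedef int (*decider)(int prog);

    static decider current;            /* the candidate under test */

    /* Models the contrarian D built against `current`: by construction
       D halts exactly when the candidate predicts that it will not.  */
    static int D_halts(int self) { return !current(self); }

    static int cand_yes(int p) { (void)p; return 1; }
    static int cand_no (int p) { (void)p; return 0; }

    int main(void)
    {
        decider candidates[] = { cand_yes, cand_no };
        for (int i = 0; i < 2; i++) {
            current = candidates[i];
            int verdict = current(0);  /* candidate's answer about D   */
            int actual  = D_halts(0);  /* fixed once current is fixed  */
            printf("candidate %d: verdict=%d, actual=%d -> %s\n",
                   i, verdict, actual,
                   verdict == actual ? "right" : "wrong");
        }
        return 0;
    }

    The failure occurs for every candidate, which is the theorem: the
    contradiction lives in the requirement placed on H, not in any one
    well-defined input.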

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Sun Jun 18 18:43:31 2023
    XPost: comp.theory, sci.logic

    On 6/18/2023 3:10 PM, Richard Damon wrote:
    On 6/18/23 3:26 PM, olcott wrote:
    On 6/18/2023 2:19 PM, Richard Damon wrote:
    On 6/18/23 2:47 PM, olcott wrote:
    On 6/18/2023 1:20 PM, Richard Damon wrote:
    On 6/18/23 2:05 PM, olcott wrote:
    On 6/18/2023 12:46 PM, Richard Damon wrote:
    On 6/18/23 1:09 PM, olcott wrote:
    On 6/18/2023 11:54 AM, Richard Damon wrote:
    On 6/18/23 12:41 PM, olcott wrote:
    On 6/18/2023 11:31 AM, Richard Damon wrote:
    On 6/18/23 10:32 AM, olcott wrote:
    On 6/18/2023 7:02 AM, Richard Damon wrote:
    On 6/17/23 11:10 PM, olcott wrote:
    On 6/17/2023 9:57 PM, Richard Damon wrote:
    On 6/17/23 10:29 PM, olcott wrote:
    On 6/17/2023 8:31 PM, Richard Damon wrote:
    On 6/17/23 7:58 PM, olcott wrote:
    On 6/17/2023 6:13 PM, Richard Damon wrote:
    On 6/17/23 5:46 PM, olcott wrote:
    On 6/17/2023 4:09 PM, Ben Bacarisse wrote:
    Richard Damon <Richard@Damon-Family.org> writes:

    Except that the Halting Problem isn't a "Self-Contradictory"
    Question, so the answer doesn't apply.

    That's an interesting point that would often catch students out.  And
    the reason /why/ it catches so many out eventually led me to stop
    using the proof-by-contradiction argument in my classes.

    The thing is, it looks so very much like a self-contradicting
    question is being asked.  The students think they can see it right
    there in the constructed code: "if H says I halt, I don't halt!".

    Of course, they are wrong.  The code is /not/ there.  The code calls
    a function that does not exist, so "it" (the constructed code, the
    whole program) does not exist either.

    The fact that it's code, and the students are almost all programmers
    and not mathematicians, makes it worse.  A mathematician seeing "let
    p be the largest prime" does not assume that such a p exists.  So
    when a prime number p' > p is constructed from p, this is not seen as
    a "self-contradictory number" because neither p nor p' exist.  But
    the halting theorem is even more deceptive for programmers, because
    the desired function, H (or whatever), appears to be so well defined
    -- much more well-defined than "the largest prime".  We have an exact
    specification for it, mapping arguments to returned values.  It's
    just software engineering to write such things (they erroneously
    assume).

    These sorts of proof can always be re-worded so as to avoid the
    initial assumption.  For example, we can start "let p be any prime",
    and from p we construct a prime p' > p.  And for halting, we can
    start "let H be any subroutine of two arguments always returning true
    or false".  Now, all the objects /do/ exist.  In the first case, the
    construction shows that no prime is the largest, and in the second it
    shows that no subroutine computes the halting function.

    This issue led to another change.  In the last couple of years, I
    would start the course by setting Post's correspondence problem as if
    it were just a fun programming challenge.  As the days passed (and
    the course got into more and more serious material) it would start to
    become clear that this was no ordinary programming challenge.  Many
    students started to suspect that, despite the trivial sounding
    specification, no program could do the job.  I always felt a bit
    uneasy doing this, as if I was not being 100% honest, but it was a
    very useful learning experience for most.
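
    A short C sketch of that re-worded construction (illustrative names;
    this is not code from the thread): given any total candidate h, the
    counterexample built from it exists and defeats it.

        #include <stdbool.h>

        typedef bool (*candidate)(void *prog, void *input);

        static candidate h;        /* ANY subroutine of two arguments
                                      that always returns true or false */

        void counterexample(void *x)
        {
            if (h(x, x))           /* h says "x(x) halts"...            */
                for (;;) {}        /* ...so loop forever                */
                                   /* h says "x(x) loops": halt instead */
        }

    Whatever h(counterexample, counterexample) returns is wrong about
    counterexample(counterexample), so no such h computes the halting
    function; and every object here exists, since h is given.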


    sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
        You ask someone (we'll call him "Jack") to give a truthful
        yes/no answer to the following question:

        Will Jack's answer to this question be no?

        Jack can't possibly give a correct yes/no answer to the question.

    It is an easily verified fact that when Jack's question is posed to
    Jack, this question is self-contradictory for Jack, or for anyone
    else having a pathological relationship to the question.
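
    The impossibility can be checked by brute force (a minimal C sketch;
    the encoding 1 = "yes", 0 = "no" is an assumption for illustration):
    a truthful answer b to "Will Jack's answer to this question be no?"
    must satisfy b == !b, which no Boolean value does.

        #include <stdio.h>

        int main(void)
        {
            for (int b = 0; b <= 1; ++b)    /* b: Jack's answer */
                printf("answer \"%s\": %s\n",
                       b ? "yes" : "no",
                       b == !b ? "truthful" : "not truthful");
            return 0;                       /* neither is truthful */
        }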
    But the problem is that "Jack" here is assumed to be a volitional
    being.

    H is not; it is a program, so before we even ask H what will happen,
    the answer has been fixed by the definition of the code of H.


    It is also clear that when a question has no yes or no answer because
    it is self-contradictory, this question is aptly classified as
    incorrect.

    And the actual question DOES have a yes or no answer in this case:
    since H(D,D) says 0 (non-halting), the actual answer to the question
    "does D(D) halt?" is YES.
    You just confuse yourself by trying to imagine a program that can
    somehow change itself "at will".

    It is incorrect to say that a question is not self-contradictory on
    the basis that it is not self-contradictory in some contexts. If a
    question is self-contradictory in some contexts then in these
    contexts it is an incorrect question.

    In what context does "Does the machine D(D) halt when run?" become
    self-contradictory?
    When this question is posed to machine H.

    Jack could be asked the question:
    Will Jack answer "no" to this question?

    For Jack it is self-contradictory; for others that are not Jack it is
    not self-contradictory. Context changes the semantics.


    But you are missing the difference. A decider is a fixed piece of
    code, so its answer to this question has been fixed since it was
    designed. Thus what it will say isn't a variable that can lead to the
    self-contradiction cycle, but a fixed result that will either be
    correct or incorrect.


    Every input to a Turing machine decider such that both Boolean return
    values are incorrect is an incorrect input.


    Except it isn't. The problem is you are looking at two different
    machines and two different inputs.

    If no one can possibly say which correct return value any H<n> having
    a pathological relationship to its input D<n> could provide, then
    that is proof that D<n> is an invalid input for H<n>, in the same way
    that any self-contradictory question is an incorrect question.


    But you have the wrong question. The question is "Does D(D) halt?",
    and that HAS a correct answer: since your H(D,D) returns 0, the
    answer is that D(D) does halt, and thus H was wrong.
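
    A concrete trace of that point (sketch; this H is a stand-in for
    whatever fixed decider was actually shipped, not anyone's real code):

        #include <stdbool.h>

        bool H(void *prog, void *input)
        {
            (void)prog; (void)input;
            return false;          /* H(D,D) reports "does not halt"  */
        }

        void D(void *p)
        {
            if (H(p, p))           /* false: branch not taken...      */
                for (;;) {}
        }                          /* ...so D(D) returns, i.e. halts,
                                      and the fixed verdict was wrong */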

    sci.logic Daryl McCullough Jun 25, 2004, 6:30:39 PM
        You ask someone (we'll call him "Jack") to give a truthful
        yes/no answer to the following question:

        Will Jack's answer to this question be no?

    For Jack the question is self-contradictory; for others that are not
    Jack it is not self-contradictory.

    The context (of who is asked) changes the semantics.

    Every question that lacks a correct yes/no answer because the
    question is self-contradictory is an incorrect question.
    If you are not a mere Troll you will agree with this.


    But the ACTUAL QUESTION DOES have a correct answer.
    The actual question posed to Jack has no correct answer.
    The actual question posed to anyone else is a semantically different
    question even though the words are the same.


    But the question to Jack isn't the question you are actually saying
    doesn't have an answer.

    The question posed to Jack does not have an answer because, within
    the context in which the question is posed to Jack, it is
    self-contradictory.
    You can ignore that context matters, yet that is not any rebuttal.

    Right, but that has ZERO bearing on the Halting Problem,
    That is great; we made excellent progress on this.

    When ChatGPT understood that Jack's question is self-contradictory
    for Jack, it was also able to understand the following isomorphism:
    For every H<n> on pathological input D<n>, both Boolean return
    values from H<n> are incorrect for D<n>, proving that D<n> is
    isomorphic to a self-contradictory question for every H<n>.


    No, because a given H<n> can only give one result,
    Some of the elements of H<n>/D<n> are identical except for the return
    value from H. In both of these cases the return value is incorrect.

    Nope, can't be.

    The only difference between otherwise identical pairs H<n>/D<n> and
    H<m>/D<m> is the single integer value (0 or 1) within H<n> and H<m>
    respectively, thus proving that both True and False are the wrong
    return value for the identical finite string pairs D<n>/D<m>.
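
    The two cases being argued over can be laid out side by side (a
    sketch; H0/H1 and D0/D1 are illustrative names): the deciders differ
    only in the constant they return, and each is paired with an input
    built from itself.

        #include <stdbool.h>

        bool H0(void *p, void *i) { (void)p; (void)i; return false; }
        bool H1(void *p, void *i) { (void)p; (void)i; return true;  }

        void D0(void *p) { if (H0(p, p)) for (;;) {} } /* D0(D0) halts: H0 wrong */
        void D1(void *p) { if (H1(p, p)) for (;;) {} } /* D1(D1) loops: H1 wrong */

    Note that D0 and D1 are built from different deciders, so they are
    different finite strings even though they differ only in that one
    constant.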



    So they are different programs. Different is different. Almost the same
    is not the same.

    Unless you are claiming that 1 is the same as 0, they are different.

    So, your claim is based on a LIE, or you are admitting you are insane.



    The key difference of my work, and a true innovation in this field,
    is that H specifically recognizes self-contradictory inputs and
    rejects them.

    *Termination Analyzer H prevents Denial of Service attacks* https://www.researchgate.net/publication/369971402_Termination_Analyzer_H_prevents_Denial_of_Service_attacks
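
    A purely hypothetical sketch of the three-outcome interface such an
    analyzer would expose (an illustration only; this is not code from
    the linked paper):

        typedef enum { VERDICT_HALTS, VERDICT_LOOPS, VERDICT_REJECTED } verdict;

        /* A hypothetical analyze() would return VERDICT_REJECTED when
           it detects that the input invokes the analyzer on its own
           description, instead of committing to a Boolean answer. */
        verdict analyze(void *prog, void *input);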


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From vallor@21:1/5 to olcott on Wed Jun 21 19:10:26 2023
    XPost: comp.theory, sci.logic

    On Sat, 17 Jun 2023 00:54:32 -0500, olcott wrote:

    ChatGPT:
    “Therefore, based on the understanding that self-contradictory
    questions lack a correct answer and are deemed incorrect, one could
    argue that the halting problem's pathological input D can be
    categorized as an incorrect question when posed to the halting
    decider H.”

    https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a It did
    not leap to this conclusion; it took a lot of convincing.

    Chatbots are highly unreliable at reasoning. They are designed
    to give you the illusion that they know what they're talking about,
    but they are the world's best BS artists.

    (Try playing a game of chess with ChatGPT, you'll see what I mean.)

    --
    -v

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From vallor@21:1/5 to vallor on Wed Jun 21 19:23:50 2023
    XPost: comp.theory, sci.logic, comp.ai.shells

    On Wed, 21 Jun 2023 19:10:26 -0000 (UTC), vallor wrote:
    Chatbots are highly unreliable at reasoning. They are designed to give
    you the illusion that they know what they're talking about,
    but they are the world's best BS artists.

    (Try playing a game of chess with ChatGPT, you'll see what I mean.)

    Can't even get two moves into the game:

    https://chat.openai.com/share/8a315ec0-f0c4-4a4e-8019-dcb070790e5c

    --
    -v

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to vallor on Wed Jun 21 14:59:52 2023
    XPost: comp.theory, sci.logic

    On 6/21/2023 2:10 PM, vallor wrote:
    On Sat, 17 Jun 2023 00:54:32 -0500, olcott wrote:

    ChatGPT:
    “Therefore, based on the understanding that self-contradictory
    questions lack a correct answer and are deemed incorrect, one could
    argue that the halting problem's pathological input D can be
    categorized as an incorrect question when posed to the halting
    decider H.”

    https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a It did
    not leap to this conclusion; it took a lot of convincing.

    Chatbots are highly unreliable at reasoning. They are designed
    to give you the illusion that they know what they're talking about,
    but they are the world's best BS artists.

    (Try playing a game of chess with ChatGPT, you'll see what I mean.)


    I already know that, and much worse than that: they simply make up
    facts on the fly, citing purely fictional textbooks that have photos
    and back stories for the purely fictional authors. The fake textbooks
    themselves are complete and convincing.

    In my case ChatGPT was able to be convinced by clearly correct
    reasoning.

    https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
    It did not leap to this conclusion; it took a lot of convincing.

    People are not convinced by this same reasoning only because they
    spend 99.9% of their attention on rebuttal, thus there is not enough
    attention left over for comprehension.

    The only reason that the halting problem cannot be solved is that the
    halting question is phrased incorrectly. The way that the halting
    problem is phrased allows inputs that contradict every Boolean return
    value from a set of specific deciders.

    Each of the halting problem's instances is exactly isomorphic to
    requiring a correct answer to this question:
    Is this sentence true or false: "This sentence is not true".

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Wed Jun 21 19:01:04 2023
    XPost: comp.theory, sci.logic

    On 6/21/23 3:59 PM, olcott wrote:
    On 6/21/2023 2:10 PM, vallor wrote:
    On Sat, 17 Jun 2023 00:54:32 -0500, olcott wrote:

    ChatGPT:
         “Therefore, based on the understanding that self-contradictory
         questions lack a correct answer and are deemed incorrect, one
         could argue that the halting problem's pathological input D can
         be categorized as an incorrect question when posed to the
         halting decider H.”

    https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a It did
    not leap to this conclusion; it took a lot of convincing.

    Chatbots are highly unreliable at reasoning.  They are designed
    to give you the illusion that they know what they're talking about,
    but they are the world's best BS artists.

    (Try playing a game of chess with ChatGPT, you'll see what I mean.)


    I already know that, and much worse than that: they simply make up
    facts on the fly, citing purely fictional textbooks that have photos
    and back stories for the purely fictional authors. The fake textbooks
    themselves are complete and convincing.

    In my case ChatGPT was able to be convinced by clearly correct
    reasoning.


    So, you admit that they will lie and tell you what you want to hear;
    do you think the fact that it agrees with you means something?

    https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
    It did not leap to this conclusion; it took a lot of convincing.

    Which is a good sign that it was learning what you wanted it to say,
    so it finally said it.


    People are not convinced by this same reasoning only because they
    spend 99.9% of their attention on rebuttal, thus there is not enough
    attention left over for comprehension.

    No, people can apply REAL "Correct Reasoning" and see the error in what
    you call "Correct Reasoning". Your problem is that your idea of correct
    isn't.


    The only reason that the halting problem cannot be solved is that the
    halting question is phrased incorrectly. The way that the halting
    problem is phrased allows inputs that contradict every Boolean return
    value from a set of specific deciders.

    Nope, it is phrased exactly as needed. Your alterations allow the
    decider to give a false answer and still be considered "correct" by
    your faulty logic.


    Each of the halting problem's instances is exactly isomorphic to
    requiring a correct answer to this question:
    Is this sentence true or false: "This sentence is not true".


    Nope.

    How is "Does the Machine represented by the input to the decider?"
    isomopric to your statement.

    Note, the actual Halting Problem question always has a definite answer.

    Your claimed isomorph does not.

    So they CAN'T be isomorphic.

    Note, your altered question of what H can return isn't the actual
    question, but you don't seem to be able to understand that.

    Your question is asked before H exists, and the impossibility of
    answering it says a correct H can't actually exist.

    The actual question can only be asked once H is fully defined, and at
    that point H is just wrong; you can't ask what it could return to be
    right, since it can only return one answer.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Wed Jun 21 19:40:45 2023
    XPost: comp.theory, sci.logic

    On 6/21/2023 6:01 PM, Richard Damon wrote:
    On 6/21/23 3:59 PM, olcott wrote:
    On 6/21/2023 2:10 PM, vallor wrote:
    On Sat, 17 Jun 2023 00:54:32 -0500, olcott wrote:

    ChatGPT:
         “Therefore, based on the understanding that self-contradictory
         questions lack a correct answer and are deemed incorrect, one
         could argue that the halting problem's pathological input D can
         be categorized as an incorrect question when posed to the
         halting decider H.”

    https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a It did
    not leap to this conclusion; it took a lot of convincing.

    Chatbots are highly unreliable at reasoning.  They are designed
    to give you the illusion that they know what they're talking about,
    but they are the world's best BS artists.

    (Try playing a game of chess with ChatGPT, you'll see what I mean.)


    I already know that, and much worse than that: they simply make up
    facts on the fly, citing purely fictional textbooks that have photos
    and back stories for the purely fictional authors. The fake textbooks
    themselves are complete and convincing.

    In my case ChatGPT was able to be convinced by clearly correct
    reasoning.


    So, you admit that they will lie and tell you what you want to hear;
    do you think the fact that it agrees with you means something?

    https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
    It did not leap to this conclusion; it took a lot of convincing.

    Which is a good sign that it was learning what you wanted it to say,
    so it finally said it.


    People are not convinced by this same reasoning only because they
    spend 99.9% of their attention on rebuttal, thus there is not enough
    attention left over for comprehension.

    No, people can apply REAL "Correct Reasoning" and see the error in what
    you call "Correct Reasoning". Your problem is that your idea of correct isn't.


    The only reason that the halting problem cannot be solved is that the
    halting question is phrased incorrectly. The way that the halting
    problem is phrased allows inputs that contradict every Boolean return
    value from a set of specific deciders.

    Nope, it is phrased exactly as needed. Your alterations allow the
    decider to give a false answer and still be considered "correct" by
    your faulty logic.


    Each of the halting problem's instances is exactly isomorphic to
    requiring a correct answer to this question:
    Is this sentence true or false: "This sentence is not true".


    Nope.

    How is "Does the Machine represented by the input to the decider?"
    isomopric to your statement.


    The halting problem instances that ask:
    "Does this input halt"

    are isomorphic to asking Jack this question:
    "Will Jack's answer to this question be no?"

    Both are isomorphic to asking whether this expression is true or
    false: "This sentence is not true".

    That you are unwilling to validate my work merely means that
    someone else will get the credit for validating my work.


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Wed Jun 21 22:47:37 2023
    XPost: comp.theory, sci.logic

    On 6/21/23 8:40 PM, olcott wrote:
    On 6/21/2023 6:01 PM, Richard Damon wrote:
    On 6/21/23 3:59 PM, olcott wrote:
    On 6/21/2023 2:10 PM, vallor wrote:
    On Sat, 17 Jun 2023 00:54:32 -0500, olcott wrote:

    ChatGPT:
         “Therefore, based on the understanding that self-contradictory
         questions lack a correct answer and are deemed incorrect, one
         could argue that the halting problem's pathological input D can
         be categorized as an incorrect question when posed to the
         halting decider H.”

    https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a It did
    not leap to this conclusion; it took a lot of convincing.

    Chatbots are highly unreliable at reasoning.  They are designed
    to give you the illusion that they know what they're talking about,
    but they are the world's best BS artists.

    (Try playing a game of chess with ChatGPT, you'll see what I mean.)


    I already know that, and much worse than that: they simply make up
    facts on the fly, citing purely fictional textbooks that have photos
    and back stories for the purely fictional authors. The fake textbooks
    themselves are complete and convincing.

    In my case ChatGPT was able to be convinced by clearly correct
    reasoning.


    So, you admit that they will lie and tell you what you want to hear;
    do you think the fact that it agrees with you means something?

    https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
    It did not leap to this conclusion; it took a lot of convincing.

    Which is a good sign that it was learning what you wanted it to say,
    so it finally said it.


    People are not convinced by this same reasoning only because they
    spend 99.9% of their attention on rebuttal, thus there is not enough
    attention left over for comprehension.

    No, people can apply REAL "Correct Reasoning" and see the error in
    what you call "Correct Reasoning". Your problem is that your idea of
    correct isn't.


    The only reason that the halting problem cannot be solved is that the
    halting question is phrased incorrectly. The way that the halting
    problem is phrased allows inputs that contradict every Boolean return
    value from a set of specific deciders.

    Nope, it is phrased exactly as needed. Your alterations allow the
    decider to give a false answer and still be considered "correct" by
    your faulty logic.


    Each of the halting problem's instances is exactly isomorphic to
    requiring a correct answer to this question:
    Is this sentence true or false: "This sentence is not true".


    Nope.

    How is "Does the Machine represented by the input to the decider?"
    isomopric to your statement.


    The halting problem instances that ask:
    "Does this input halt"

    are isomorphic to asking Jack this question:
    "Will Jack's answer to this question be no?"

    Nope, because Jack is a volitional being, so we CAN'T know the correct
    answer to the question until after Jack answers the question, thus Jack,
    in trying to be correct, hits a contradiction.

    The correct answer to the Halting Problem question was available as
    soon as the machine being asked about was defined, so the decider
    doesn't hit a contradiction in logic; it is just wrong, because it
    CAN'T "try" to give the other answer: it just does as it was
    programmed.

    All your logic is in designing the machine, and there the contradiction
    just points out that you can't make a correct machine, which is an
    acceptable answer. Not all problems are computable, so we can't always
    make a machine give the answer.
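
    One way to see why the answer always exists even when no total
    decider computes it (a sketch; the name halts_semi is an illustrative
    assumption): direct execution semi-decides halting.

        #include <stdbool.h>

        /* Returns true if m(i) halts; never returns if m(i) runs
           forever.  It is never wrong, it just may never answer; that
           is exactly the gap between "the answer exists" and "a machine
           can always report it". */
        bool halts_semi(void (*m)(void *), void *i)
        {
            m(i);
            return true;
        }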


    Which are both isomorphic to asking if this expression
    is true or false: "This sentence is not true"

    Nope. Show how they CAN be.

    The Halting Problem question ALWAYS has a valid yes or no answer,
    since the machine it is being asked about must be defined in order to
    ask it, and thus its behavior is FIXED by its code.

    You just don't seem to understand what a program is, so I guess you
    faked it when you were working as a programmer.


    That you are unwilling to validate my work merely means that
    someone else will get the credit for validating my work.



    I can't "Validate" your work, as it is just incorrect.

    You think to things of different kind are the same, which is impossible,
    so your statements are just incorrect.

    You don't seem to understand that compuations don't have volition, so,
    you basically don't understand what a computation is at all, and nothing
    you have done reguarding them has any hope of having a factual basis.

    You also clearly don't understand how logic works too.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Wed Jun 21 21:58:25 2023
    XPost: comp.theory, sci.logic

    On 6/21/2023 9:47 PM, Richard Damon wrote:
    On 6/21/23 8:40 PM, olcott wrote:
    On 6/21/2023 6:01 PM, Richard Damon wrote:
    On 6/21/23 3:59 PM, olcott wrote:
    On 6/21/2023 2:10 PM, vallor wrote:
    On Sat, 17 Jun 2023 00:54:32 -0500, olcott wrote:

    ChatGPT:
         “Therefore, based on the understanding that self-contradictory
         questions lack a correct answer and are deemed incorrect, one
         could argue that the halting problem's pathological input D can
         be categorized as an incorrect question when posed to the
         halting decider H.”

    https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a It did
    not leap to this conclusion; it took a lot of convincing.

    Chatbots are highly unreliable at reasoning.  They are designed
    to give you the illusion that they know what they're talking about,
    but they are the world's best BS artists.

    (Try playing a game of chess with ChatGPT, you'll see what I mean.)


    I already know that, and much worse than that: they simply make up
    facts on the fly, citing purely fictional textbooks that have photos
    and back stories for the purely fictional authors. The fake textbooks
    themselves are complete and convincing.

    In my case ChatGPT was able to be convinced by clearly correct
    reasoning.


    So, you admit that they will lie and tell you what you want to hear;
    do you think the fact that it agrees with you means something?

    https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
    It did not leap to this conclusion; it took a lot of convincing.

    Which is a good sign that it was learning what you wanted it to say,
    so it finally said it.


    People are not convinced by this same reasoning only because they
    spend 99.9% of their attention on rebuttal, thus there is not enough
    attention left over for comprehension.

    No, people can apply REAL "Correct Reasoning" and see the error in
    what you call "Correct Reasoning". Your problem is that your idea of
    correct isn't.


    The only reason that the halting problem cannot be solved is that the
    halting question is phrased incorrectly. The way that the halting
    problem is phrased allows inputs that contradict every Boolean return
    value from a set of specific deciders.

    Nope, it is phrased exactly as needed. Your alterations allow the
    decider to give a false answer and still be considered "correct" by
    your faulty logic.


    Each of the halting problem's instances is exactly isomorphic to
    requiring a correct answer to this question:
    Is this sentence true or false: "This sentence is not true".


    Nope.

    How is "Does the Machine represented by the input to the decider?"
    isomopric to your statement.


    The halting problem instances that ask:
    "Does this input halt"

    are isomorphic to asking Jack this question:
    "Will Jack's answer to this question be no?"

    Nope, because Jack is a volitional being, so we CAN'T know the correct
    answer to the question until after Jack answers the question, thus Jack,
    in trying to be correct, hits a contradiction.


    We can know that the correct answer from Jack and the correct return
    value from H cannot possibly exist, now and forever.

    You are just playing head games.


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Thu Jun 22 07:26:48 2023
    XPost: comp.theory, sci.logic

    On 6/21/23 10:58 PM, olcott wrote:
    On 6/21/2023 9:47 PM, Richard Damon wrote:
    On 6/21/23 8:40 PM, olcott wrote:
    On 6/21/2023 6:01 PM, Richard Damon wrote:
    On 6/21/23 3:59 PM, olcott wrote:
    On 6/21/2023 2:10 PM, vallor wrote:
    On Sat, 17 Jun 2023 00:54:32 -0500, olcott wrote:

    ChatGPT:
         “Therefore, based on the understanding that self-contradictory
         questions lack a correct answer and are deemed incorrect, one
         could argue that the halting problem's pathological input D can
         be categorized as an incorrect question when posed to the
         halting decider H.”

    https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a It did
    not leap to this conclusion; it took a lot of convincing.

    Chatbots are highly unreliable at reasoning.  They are designed
    to give you the illusion that they know what they're talking about,
    but they are the world's best BS artists.

    (Try playing a game of chess with ChatGPT, you'll see what I mean.)

    I already know that, and much worse than that: they simply make up
    facts on the fly, citing purely fictional textbooks that have photos
    and back stories for the purely fictional authors. The fake textbooks
    themselves are complete and convincing.

    In my case ChatGPT was able to be convinced by clearly correct
    reasoning.


    So, you admit that they will lie and tell you what you want to hear;
    do you think the fact that it agrees with you means something?

    https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
    It did not leap to this conclusion; it took a lot of convincing.

    Which is a good sign that it was learning what you wanted it to say,
    so it finally said it.


    People are not convinced by this same reasoning only because they
    spend 99.9% of their attention on rebuttal, thus there is not enough
    attention left over for comprehension.

    No, people can apply REAL "Correct Reasoning" and see the error in
    what you call "Correct Reasoning". Your problem is that your idea of
    correct isn't.


    The only reason that the halting problem cannot be solved is that the >>>>> halting question is phrased incorrectly. The way that the halting
    problem is phrased allows inputs that contradict every Boolean return >>>>> value from a set of specific deciders.

    Nope, it is phrased exactly as needed. Your alterations allow the
    decider to give a false answer and still be considered "correct" by
    your faulty logic.


    Each of the halting problem's instances is exactly isomorphic to
    requiring a correct answer to this question:
    Is this sentence true or false: "This sentence is not true".


    Nope.

    How is "Does the Machine represented by the input to the decider?"
    isomopric to your statement.


    The halting problem instances that ask:
    "Does this input halt"

    are isomorphic to asking Jack this question:
    "Will Jack's answer to this question be no?"

    Nope, because Jack is a volitional being, so we CAN'T know the correct
    answer to the question until after Jack answers the question, thus
    Jack, in trying to be correct, hits a contradiction.


    We can know that the correct answer from Jack and the correct return
    value from H cannot possibly exist, now and forever.

    You are just playing head games.



    But the question isn't what H can return to be correct, since the
    only possible answer that H can return is what it does return by its
    programming, which will either BE correct or not. (In this case NOT.)

    Therefore, the question of what H SHOULD HAVE returned (to be
    correct) has an answer, so the question actually HAS a correct answer.

    You clearly don't understand the difference between a volitional
    being and a deterministic machine. This shows your stupidity and
    ignorance. Maybe you have lost your free will and ability to think
    because of the evil in your life, and are condemned to keep repeating
    the same error over and over, proving your insanity and stupidity.

    I guess you are now shown to be a Hypocritical Ignorant Pathological
    Lying insane idiot.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Thu Jun 22 09:18:45 2023
    XPost: comp.theory, sci.logic

    On 6/22/2023 6:26 AM, Richard Damon wrote:
    On 6/21/23 10:58 PM, olcott wrote:
    On 6/21/2023 9:47 PM, Richard Damon wrote:
    On 6/21/23 8:40 PM, olcott wrote:
    On 6/21/2023 6:01 PM, Richard Damon wrote:
    On 6/21/23 3:59 PM, olcott wrote:
    On 6/21/2023 2:10 PM, vallor wrote:
    On Sat, 17 Jun 2023 00:54:32 -0500, olcott wrote:

    ChatGPT:
         “Therefore, based on the understanding that self-contradictory
         questions lack a correct answer and are deemed incorrect, one
         could argue that the halting problem's pathological input D can
         be categorized as an incorrect question when posed to the
         halting decider H.”

    https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a It did
    not leap to this conclusion; it took a lot of convincing.

    Chatbots are highly unreliable at reasoning.  They are designed
    to give you the illusion that they know what they're talking about,
    but they are the world's best BS artists.

    (Try playing a game of chess with ChatGPT, you'll see what I mean.)

    I already know that, and much worse than that: they simply make up
    facts on the fly, citing purely fictional textbooks that have photos
    and back stories for the purely fictional authors. The fake textbooks
    themselves are complete and convincing.

    In my case ChatGPT was able to be convinced by clearly correct
    reasoning.


    So, you admit that they will lie and tell you what you want to hear;
    do you think the fact that it agrees with you means something?

    https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
    It did not leap to this conclusion; it took a lot of convincing.

    Which is a good sign that it was learning what you wanted it to say,
    so it finally said it.


    People are not convinced by this same reasoning only because they
    spend 99.9% of their attention on rebuttal, thus there is not enough
    attention left over for comprehension.

    No, people can apply REAL "Correct Reasoning" and see the error in
    what you call "Correct Reasoning". Your problem is that your idea
    of correct isn't.


    The only reason that the halting problem cannot be solved is that the >>>>>> halting question is phrased incorrectly. The way that the halting
    problem is phrased allows inputs that contradict every Boolean return >>>>>> value from a set of specific deciders.

    Nope, it is phrased exactly as needed. Your alterations allow the
    decider to give a false answer and still be considered "correct" by
    your faulty logic.


    Each of the halting problem's instances is exactly isomorphic to
    requiring a correct answer to this question:
    Is this sentence true or false: "This sentence is not true".


    Nope.

    How is "Does the Machine represented by the input to the decider?"
    isomopric to your statement.


    The halting problem instances that ask:
    "Does this input halt"

    are isomorphic to asking Jack this question:
    "Will Jack's answer to this question be no?"

    Nope, because Jack is a volitional being, so we CAN'T know the
    correct answer to the question until after Jack answers the question,
    thus Jack, in trying to be correct, hits a contradiction.


    We can know that the correct answer from Jack and the correct return
    value from H cannot possibly exist, now and forever.

    You are just playing head games.



    But the question isn't what H can return to be correct,
    Yes it is, and you just keep playing head games.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Thu Jun 22 21:06:16 2023
    XPost: comp.theory, sci.logic

    On 6/22/23 10:18 AM, olcott wrote:
    On 6/22/2023 6:26 AM, Richard Damon wrote:
    On 6/21/23 10:58 PM, olcott wrote:
    On 6/21/2023 9:47 PM, Richard Damon wrote:
    On 6/21/23 8:40 PM, olcott wrote:
    On 6/21/2023 6:01 PM, Richard Damon wrote:
    On 6/21/23 3:59 PM, olcott wrote:
    On 6/21/2023 2:10 PM, vallor wrote:
    On Sat, 17 Jun 2023 00:54:32 -0500, olcott wrote:

    ChatGPT:
         “Therefore, based on the understanding that self-contradictory
         questions lack a correct answer and are deemed incorrect, one
         could argue that the halting problem's pathological input D can
         be categorized as an incorrect question when posed to the
         halting decider H.”

    https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a It did
    not leap to this conclusion; it took a lot of convincing.

    Chatbots are highly unreliable at reasoning.  They are designed
    to give you the illusion that they know what they're talking about,
    but they are the world's best BS artists.

    (Try playing a game of chess with ChatGPT, you'll see what I mean.)

    I already know that, and much worse than that: they simply make up
    facts on the fly, citing purely fictional textbooks that have photos
    and back stories for the purely fictional authors. The fake textbooks
    themselves are complete and convincing.

    In my case ChatGPT was able to be convinced by clearly correct
    reasoning.


    So, you admit that they will lie and tell you what you want to hear;
    do you think the fact that it agrees with you means something?

    https://chat.openai.com/c/2aae46ef-e7be-444d-a046-b76c1f971c5a
    It did not leap to this conclusion; it took a lot of convincing.

    Which is a good sign that it was learning what you wanted it to say,
    so it finally said it.


    People are not convinced by this same reasoning only because they
    spend 99.9% of their attention on rebuttal, thus there is not enough
    attention left over for comprehension.

    No, people can apply REAL "Correct Reasoning" and see the error in >>>>>> what you call "Correct Reasoning". Your problem is that your idea
    of correct isn't.


    The only reason that the halting problem cannot be solved is that the
    halting question is phrased incorrectly. The way that the halting
    problem is phrased allows inputs that contradict every Boolean return
    value from a set of specific deciders.

    Nope, it is phrased exactly as needed. Your alterations allow the
    decider to give a false answer and still be considered "correct" by
    your faulty logic.


    Each of the halting problem's instances is exactly isomorphic to
    requiring a correct answer to this question:
    Is this sentence true or false: "This sentence is not true".


    Nope.

    How is "Does the Machine represented by the input to the decider?" >>>>>> isomopric to your statement.


    The halting problem instances that ask:
    "Does this input halt"

    are isomorphic to asking Jack this question:
    "Will Jack's answer to this question be no?"

    Nope, because Jack is a volitional being, so we CAN'T know the
    correct answer to the question until after Jack answers the
    question, thus Jack, in trying to be correct, hits a contradiction.


    We can know that the correct answer from Jack and the correct return
    value from H cannot possibly exist, now and forever.

    You are just playing head games.



    But the question isn't what H can return to be correct,
    Yes it is, and you just keep playing head games.


    So, you aren't talking about the Halting Problem, and your definition
    of "head games" must be having your mistakes corrected.


    The question of the Halting Problem is: does the machine that the
    input describes halt? It makes no reference to H itself. H, to be
    correct, needs to get the right answer, but the question isn't what
    it needs to return to be correct, since once you define H, its answer
    is fixed, so the only answer it CAN give is what it DOES give.

    You seem to not understand that programs are deterministic entities and
    have no option of "choice", so we can't ask what they can do to be
    correct, because they will only do what they do.
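
    Determinism in one assertion (a sketch; this fixed H is an
    illustrative stand-in): a function with no hidden state maps the same
    arguments to the same result on every call, so "what could H return
    here?" has exactly one answer, namely whatever its fixed body
    computes.

        #include <assert.h>
        #include <stdbool.h>

        bool H(void *p, void *i) { (void)p; (void)i; return false; }

        int main(void)
        {
            void *d = 0;
            assert(H(d, d) == H(d, d));   /* always holds: no "choice" */
            return 0;
        }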

    Your head games seem to be about assuming things might do what they
    don't actually do, and thus thinking about lies of pure fantasy.

    You also seem to not understand the difference between a volitional
    being and a deterministic process. Maybe because you have lost your
    own determinism and gave it to your insanity, and now you are stuck
    forever trying to do what you incorrectly thought of.

    Clearly you have lost the intelligence that comes out of volition, as
    you show yourself to be too stupid and ignorant to understand the
    basics presented to you.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)