• Does the halting problem actually limit what computers can do?

    From olcott@21:1/5 to All on Sun Oct 29 12:30:11 2023
    XPost: sci.math, sci.logic, comp.theory

    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
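    The claimed impossibility can be sketched concretely (a minimal
    illustration with invented names, using Python functions to stand in
    for Turing machines; make_D builds the pathological input for a given
    candidate decider H, where H(p, x) is supposed to return True iff
    p(x) halts):

```python
def make_D(H):
    """Build the pathological input for a specific decider H."""
    def D(p):
        if H(p, p):        # H predicts D(D) halts...
            while True:    # ...so D loops forever instead
                pass
        return 0           # H predicts D(D) loops, so D halts at once
    return D

# Any concrete total H is wrong on the D built from it, e.g. a
# decider that always answers "does not halt":
H_no = lambda p, x: False
D = make_D(H_no)
# H_no says D(D) does not halt, yet D(D) returns 0 immediately.
assert D(D) == 0
```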

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification, thus
    isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one
    answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no actual
    limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sun Oct 29 11:12:49 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/23 10:30 AM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.


    Good that you admit that.

    H(D) is functional notation that specifies the return value from H(D) Correct(H(D)==false) means that H(D) is correct that D does not halt Correct(H(D)==true) means that H(D) is correct that D does halt

    Except that it should be H(D,D), since you need to give H the input that
    D needs to be given.

    So, your "Correct" function is false since H(D,D) will, as you just
    agreed, never return the right answer for the D designed for it.

    Note also, the FUNCTION Correct must return the value false if the H
    given as its input doesn't return a value in a finite number of steps,
    as that makes H not actually a decider, so it is not a "correct
    decider".


    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ (Correct(H(D)==true))==false

    Nope, try to give the case. You are just LYING here and showing your
    ignorance.

    Remember, each H above is a SPECIFIC Turing machine (and for each H
    there will be a SPECIFIC D, based on that SPECIFIC H, for which that
    SPECIFIC H will get the answer wrong).

    Remember, for EVERY actual SPECIFIC Turing Machine D (with input x) D(x)
    will either Halt or Not.

    For every actual SPECIFIC Turing Machine H, it will either give the
    correct answer, so Correct will answer True, or H will not answer or
    will give an incorrect answer, so Correct will answer False.

    There is no case for a SPECIFIC H, and a SPECIFIC D that Correct(H(D))
    doesn't have a True or False answer. Try to show the case.

    Remember H is a SPECIFIC TM, (since H ∈ TM) not a "set" of Turing
    Machines. Your "Correct" predicate doesn't take a "set" of Turing
    Machines, but an individual Turing Machine, and the "Pathological" D
    isn't built on a "Set" of Turing Machine, but an individual one.

    The actual question is about a specific input, and that ALWAYS has a
    correct answer; it's just that some machines won't get it right. And we
    can show that for EVERY decider we can make, there WILL be some specific
    input (depending on the specific decider we are looking at) that the
    decider WILL get wrong.

    Thus, non-computable valid problems exist, as shown by theory.


    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus isomorphic to a question that has been defined to have no correct
    answer.


    Nope, again your ignorance of the problem.


    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one answering it.

    Right, THAT question has no correct answer.

    "Does D halt?" HAS a correct answer, H just doesn't give it.

    DIFFERENCE.

    Shows you don't understand the problem.


    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    But there IS a "Correct Answer", so the QUESTION isn't actually
    self-contradictory.

    You are showing your stupidity.


    The inability to correctly answer an incorrect question places no actual limit on anyone or anything.

    Sure it does, but you are too stupid to understand.


    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.


    Nope. ZFC handled Russell's Paradox by deciding that we can't actually
    logically talk about a truly "Universal" set of all possible sets.

    At best, your equivalence is just the admission that there IS a
    limitation to computability: that there exists a class of properties of
    Turing Machines that does exist and is valid (as the property is
    defined for all Turing Machines) but cannot be computed by another
    Turing machine, given a proper description of the machine to be decided
    on.

    That is EXACTLY the statement you have been trying to DISPROVE for all
    these years, but seem to now be accepting, but still saying it doesn't
    affect anything.

    You are ADMITTING some things are not computable, and then saying this
    fact doesn't limit what a computation can do.

    That is like saying I know I can't get this car over 80 MPH, but there
    is no limit to how fast this car can go.

    Just a pitiful LIE.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jim Burns@21:1/5 to olcott on Sun Oct 29 14:26:27 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/2023 1:30 PM, olcott wrote:

    [Subject: Does the halting problem
    actually limit what computers can do?]

    The inability to correctly answer
    an incorrect question places
    no actual limit on anyone or anything.

    The inability of a computer program
    to correctly answer all halting-questions
    *places* no actual limit on anyone or anything.

    That's not how a theorem works.

    Nothing which a theorem is about _changes_
    in response to a proof.

    _We_ change in response to a proof.
    Our state of knowledge changes.

    Before we know that
    no computer program decides all halting questions,
    no computer program decides all halting questions.

    The difference, before and after,
    is in _what we know_

    ----
    We finites are able to learn of
    the existence of a wall of infinitely-many bricks
    without our having stacked infinitely-many bricks
    one on another.

    All I am saying is:
    Nice!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to olcott on Sun Oct 29 13:36:55 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ (Correct(H(D)==true))==false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no actual limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.


    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    Every H of the infinite set of all Turing machines gets the wrong
    answer on their corresponding input D because this input D
    essentially derives a self-contradictory thus incorrect question
    for this H.

    Like the question: What time is it (yes or no)?
    the blame for the lack of a correct answer goes to the question
    and not the one attempting to answer it.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sun Oct 29 11:39:34 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/23 10:30 AM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    I noticed that I misread what "Correct" was defined as.

    Note that Correct(H(D) == value), where value is True/False, can only
    be true for the one value that H(D) actually returns; for the other
    value it can NEVER be true.

    Correct, as you have defined it, can't be used to determine if a
    question actually has a correct value, only if H is correct in giving
    its answer.


    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ (Correct(H(D)==true))==false
    First, that ISN'T necessarily a true statement, unless you are stating
    that D is a dependent variable such that:

    for all H ∈ TM, there exists a D ∈ representation(TM) such that
    (Correct(H(D)==false) ∨ (Correct(H(D)==true))==false

    So, all you are saying here is that for all H there exists a D for
    which H(D) happens to get the wrong answer. So what.

    To point out the limitation of your "Correct" predicate, imagine that
    H, instead of being a Halt Detector, was a Prime detector, but was
    incorrectly programmed and it thought 2 was not prime, then

    H(2) == False

    Correct(H(2) == true) is false since H(2) doesn't return true, so it
    wasn't correct in saying 2 is prime, and

    Correct(H(2) == false) is false, since 2 is prime, so H is not correct
    in saying it is not prime.

    Thus: (Correct(H(D)==false) ∨ (Correct(H(D)==true))==false

    Doesn't say that the question is invalid, just that H got the answer wrong.

    The fact that you can say the same for ALL possible Turing Machines
    still doesn't make the question "Wrong", just uncomputable.

    You don't seem to understand that H(D) is a FIXED VALUE based on the
    program of H, and that value can legitimately be WRONG.
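    The prime-detector analogy can be made concrete (a sketch with
    invented names; correct() models the "Correct" predicate as defined
    above, applied to the property of primality):

```python
def is_prime(n):
    # Straightforward trial division, correct for all n >= 0.
    return n >= 2 and all(n % k for k in range(2, int(n**0.5) + 1))

def broken_prime_detector(n):
    # Deliberately buggy: misclassifies 2 as non-prime.
    return False if n == 2 else is_prime(n)

def correct(h, n, claimed):
    # Correct(h(n)==claimed): h actually returned `claimed`
    # AND that claim matches reality.
    return h(n) == claimed and is_prime(n) == claimed

# Both disjuncts come out false for n = 2 ...
assert correct(broken_prime_detector, 2, True) is False
assert correct(broken_prime_detector, 2, False) is False
# ... yet "Is 2 prime?" has a perfectly good answer: h is simply wrong.
```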


    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no actual limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to olcott on Sun Oct 29 13:44:16 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ (Correct(H(D)==true))==false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no actual limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.




    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    Every H of the infinite set of all Turing machines gets the wrong
    answer on their corresponding input D because this input D
    essentially derives a self-contradictory thus incorrect question
    for this H.

    Like the question: What time is it (yes or no)?
    the blame for the lack of a correct answer goes to the question
    and not the one attempting to answer it.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sun Oct 29 12:14:25 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/23 11:44 AM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ (Correct(H(D)==true))==false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus
    isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one
    answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no actual
    limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.




    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    So?

    Who says they need to be able to do it?

    That is EXACTLY what the Theorem is proving, and which you admit, but
    you want to refuse the logical consequence of it, because you don't
    like it.

    Every H of the infinite set of all Turing machines gets the wrong
    answer on their corresponding input D because this input D
    essentially derives a self-contradictory thus incorrect question
    for this H.

    Nope, you are confused by mixing sets with objects in the set.

    Nice Category error there.


    Every question in that set had a correct answer, which might have been
    given by some of the deciders in that set. That shows that the actual
    QUESTION is VALID and not "self-contradictory".

    The fact that every instance of the question has a correct answer, makes
    it VALID.

    The fact that every decider has such a question that it can't answer,
    makes it uncomputable.

    The fact that your Strawman version (What can H return to be correct)
    doesn't have an answer is just part of the proof that the actual theorem
    is proven, and just shows your ignorance of the subject.


    Like the question: What time is it (yes or no)?
    the blame for the lack of a correct answer goes to the question
    and not the one attempting to answer it.



    Nope, Strawman. You like strawmen, I guess, because they are just as
    smart as you.

    What time is it (yes or no)? doesn't have an answer.

    Does a particular D(D) Halt DOES have an answer, and it will always be
    the opposite of what the H(D,D) returns for the SPECIFIC H that D was
    built to refute.

    A question that has an answer which ONE machine can't give correctly is
    not the same as a question that doesn't actually have an answer (due to
    a category error in this case).

    Your thinking they are the same just proves your stupidity.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to olcott on Sun Oct 29 14:25:00 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/2023 1:44 PM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ (Correct(H(D)==true))==false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus
    isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one
    answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no actual
    limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.




    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    Every H of the infinite set of all Turing machines gets the wrong
    answer on their corresponding input D because this input D
    essentially derives a self-contradictory thus incorrect question
    for this H.


    Changing the subject to a different H for this same input D is
    the strawman deception.

    Ignoring the context of who is asked the question deceptively
    changes the meaning of the question.

    Like the question: What time is it (yes or no)?
    the blame for the lack of a correct answer goes to the question
    and not the one attempting to answer it.


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sun Oct 29 13:03:07 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/23 12:25 PM, olcott wrote:
    On 10/29/2023 1:44 PM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ (Correct(H(D)==true))==false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus
    isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one
    answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no actual
    limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.




    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    Every H of the infinite set of all Turing machines gets the wrong
    answer on their corresponding input D because this input D
    essentially derives a self-contradictory thus incorrect question
    for this H.


    Changing the subject to a different H for this same input D is
    the strawman deception.

    YOU'RE the one that said "for all H", so the strawman is YOURS


    Ignoring the context of who is asked the question deceptively
    changes the meaning of the question.

    Except that the question's answer isn't affected by the context in
    which it is asked.

    "Does a SPECIFIED D(D) Halt?" is INDEPENDENT of who you ask.

    So, you are just showing your deceitfulness because, while the question
    is each time about a SPECIFIC input, you try to change it to the input
    associated with the decider deciding it, which is not a valid input.
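    That independence can be illustrated with a toy version of the
    diagonal construction (hypothetical names; H_yes is a different
    machine asked the very same question that H_no gets wrong):

```python
def make_D(H):
    # Build the pathological input for a specific decider H,
    # where H(p, x) is meant to return True iff p(x) halts.
    def D(p):
        if H(p, p):
            while True:
                pass
        return 0
    return D

H_no  = lambda p, x: False   # the decider D was built to refute
H_yes = lambda p, x: True    # a different machine, same question

D = make_D(H_no)
assert D(D) == 0             # D(D) really does halt...
assert H_yes(D, D) is True   # ...and H_yes answers that correctly,
                             # while H_no (which D targets) gets it wrong.
```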

    You are just showing your stupidity by the form of your arguments.


    Like the question: What time is it (yes or no)?
    the blame for the lack of a correct answer goes to the question
    and not the one attempting to answer it.



    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to olcott on Sun Oct 29 15:10:27 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ (Correct(H(D)==true))==false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no actual limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.


    Every H of the infinite set of all Turing machines gets the wrong
    answer

    on their corresponding input D
    on their corresponding input D
    on their corresponding input D
    on their corresponding input D
    on their corresponding input D
    on their corresponding input D
    on their corresponding input D
    on their corresponding input D
    on their corresponding input D
    on their corresponding input D
    on their corresponding input D
    on their corresponding input D
    on their corresponding input D

    because this input D essentially derives a self-contradictory thus
    incorrect question for this H.


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to olcott on Sun Oct 29 15:15:33 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ (Correct(H(D)==true))==false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no actual limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.



    Every H of the infinite set of all Turing machines gets the wrong
    answer

    on their corresponding input D
    on their corresponding input D
    on their corresponding input D
    on their corresponding input D
    on their corresponding input D

    because this input D
    because this input D
    because this input D
    because this input D
    because this input D

    essentially derives a self-contradictory thus

    incorrect question for this H.
    incorrect question for this H.
    incorrect question for this H.
    incorrect question for this H.
    incorrect question for this H.


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sun Oct 29 13:36:42 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/23 1:10 PM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ (Correct(H(D)==true))==false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus
    isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one
    answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no actual
    limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.


    Every H of the infinite set of all Turing machines gets the wrong
    answer

    on their corresponding input D
    on their corresponding input D
    on their corresponding input D
    on their corresponding input D
    on their corresponding input D
    on their corresponding input D
    on their corresponding input D
    on their corresponding input D
    on their corresponding input D
    on their corresponding input D
    on their corresponding input D
    on their corresponding input D
    on their corresponding input D

    because this input D essentially derives a self-contradictory thus
    incorrect question for this H.



    Almost, but each is a DIFFERENT Question, and all the questions have
    an answer, and thus are VALID.

    D isn't "Self-Contradictory", it is contradictory to a DIFFERENT machine
    than itself.

    I guess you are just showing you don't know the meaning of "self"
    because you are too stupid.

    (and acting like a two year old in repeating your erroneous claim over
    and over as a BIG LIE thinking that makes it more correct.)

    You still refuse to actually try to point out the actual errors in my
    statement but continue to repeat your proven wrong statements, showing
    that you are just a pitiful logical idiot.

    "Does (a specific) D(D) as specified by the input Halt?" is a valid
    question as it has a correct answer.

    The fact we can come up with a D (different in each case) for ANY H, as
    you have admitted, means the question is not computable.
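
    That "for ANY H" claim can be checked mechanically. The sketch below
    (illustrative names only; deciders are modeled as plain Python
    callables, and D's behavior is computed from its design rather than
    by running it) shows several candidate deciders each failing on the
    D built against them:

```python
def make_D(H):
    # D halts iff H predicts that D does not halt.
    def D():
        if H(D):
            while True:
                pass
    return D

def actually_halts(D, H):
    # We can determine D's behavior without running it:
    # D enters its infinite loop exactly when H(D) is True.
    return not H(D)

# Several candidate deciders; each fails on its own diagonal input.
candidates = [
    lambda p: True,            # "everything halts"
    lambda p: False,           # "nothing halts"
    lambda p: id(p) % 2 == 0,  # an arbitrary heuristic
]
for H in candidates:
    D = make_D(H)
    assert H(D) != actually_halts(D, H)  # H is wrong on its own D
print("every candidate decider failed on its diagonal input")
```

    Note that a different D is constructed for each H, which is exactly
    the distinction the reply above insists on.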


    Maybe you should try to prove your point with more than just an appeal
    to a (proven incorrect) authority (namely you).

    Try starting out with some actual accepted definition of the terms and
    use some sound logic (not sure you know any) to try to make your point.

    Remember, the question you are trying to prove invalid is:

    "Does the specific computation described by the input Halt when run?"

    and not "What does H need to return to get the right answer?" (which is
    an invalid question, as for ANY specific H, it CAN only return the
    answer that its algorithm will compute, and a given H has a specified
    specific algorithm).

    and also not, "Does an H exist that can return the right value for the
    D(D) derived from it?" as that is asking not about a specific input, but
    about the existence of a machine to compute something. Non-existence of
    machines to do something is NOT an "error", but a sign the problem is
    uncomputable, which is exactly the type of question that Computability
    Theory investigates: what sort of questions ARE computable, and which
    are not. Not being computable is an acceptable state for a problem.

  • From Richard Damon@21:1/5 to olcott on Sun Oct 29 13:40:00 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/23 1:15 PM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus
    isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one
    answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no actual
    limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.



    Every H of the infinite set of all Turing machines gets the wrong
    answer

    on their corresponding input D
    on their corresponding input D
    on their corresponding input D
    on their corresponding input D
    on their corresponding input D

    because this input D
    because this input D
    because this input D
    because this input D
    because this input D

    essentially derives a self-contradictory thus

    incorrect question for this H.
    incorrect question for this H.
    incorrect question for this H.
    incorrect question for this H.
    incorrect question for this H.



    Repeating your answer because you weren't two-year-old enough the first time?

    You also have a category error as you are conflating H as an "every"
    machine of the set with THIS machine of the set.

    For THIS machine of the set, and THIS D of the set, there IS an answer,
    so the question is valid.

    And each is a DIFFERENT Question, and all the questions have an
    answer, and thus are also VALID.

    D isn't "Self-Contradictory", it is contradictory to a DIFFERENT machine
    than itself.

    I guess you are just showing you don't know the meaning of "self"
    because you are too stupid.

    (and acting like a two year old in repeating your erroneous claim over
    and over as a BIG LIE thinking that makes it more correct.)

    You still refuse to actually try to point out the actual errors in my
    statement but continue to repeat your proven wrong statements, showing
    that you are just a pitiful logical idiot.

    "Does (a specific) D(D) as specified by the input Halt?" is a valid
    question as it has a correct answer.

    The fact we can come up with a D (different in each case) for ANY H, as
    you have admitted, means the question is not computable.


    Maybe you should try to prove your point with more than just an appeal
    to a (proven incorrect) authority (namely you).

    Try starting out with some actual accepted definition of the terms and
    use some sound logic (not sure you know any) to try to make your point.

    Remember, the question you are trying to prove invalid is:

    "Does the specific computation described by the input Halt when run?"

    and not "What does H need to return to get the right answer?" (which is
    an invalid question, as for ANY specific H, it CAN only return the
    answer that its algorithm will compute, and a given H has a specified
    specific algorithm).

    and also not, "Does an H exist that can return the right value for the
    D(D) derived from it?" as that is asking not about a specific input, but
    about the existence of a machine to compute something. Non-existence of
    machines to do something is NOT an "error", but a sign the problem is
    uncomputable, which is exactly the type of question that Computability
    Theory investigates: what sort of questions ARE computable, and which
    are not. Not being computable is an acceptable state for a problem.

  • From olcott@21:1/5 to olcott on Sun Oct 29 15:58:11 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no actual limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.


    The halting problem proofs merely show that the problem
    definition is unsatisfiable because every H of the infinite
    set of all Turing Machines has an input that makes the
    question: Does your input halt? into a self-contradictory
    thus incorrect question for this H.

    The only rebuttals to this in the last two years rely
    on one form of the strawman deception of another.

    *Stupid or dishonest people may say otherwise*
    That every D has a halt decider has nothing to do with
    the claim that every H has an undecidable input.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

  • From Richard Damon@21:1/5 to olcott on Sun Oct 29 14:45:58 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/23 1:58 PM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus
    isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one
    answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no actual
    limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.


    The halting problem proofs merely show that the problem
    definition is unsatisfiable because every H of the infinite
    set of all Turing Machines has an input that makes the
    question: Does your input halt? into a self-contradictory
    thus incorrect question for this H.

    So, you are just showing that you don't know what "satisfiable" means in
    logic, just showing off your ignorance (even though you have been told
    before, I guess you are too stupid to learn).

    You also seem to not understand what the "self" part of
    "self-contradictory" means, again, because you are too stupid to
    understand when taught.

    You also are repeating your category error by confusing specific
    questions for sets of questions.


    The only rebuttals to this in the last two years rely
    on one form of the strawman deception of another.


    Nope, your failure to actually point to an error shows that you don't understand how logic works.

    If my replies are strawman, you can point to the claim that isn't
    actually correct, and reference the accepted definition of the problem
    to show where they differ.

    The problem here is that you are just projecting, as a fundamental part
    of the problem is you try to change the fundamental nature of the
    problem by building your own strawmen, and when I knock them down, you
    claim my reassertion of the actual problem is a strawman, because you
    can't recognise the actual problem.

    *Stupid or dishonest people may say otherwise*
    That every D has a halt decider has nothing to do with
    the claim that every H has an undecidable input.


    So, more stupid errors.

    the "input" is not "undecidable", as for every specific H, there is a
    specific D(D), and that input has a definite behavior so the question of
    whether it halts is valid.

    Also, due to the limited nature of your H's design, that inputs behavior
    IS decidable by another decider, and "decidable" just requires that
    there exist SOME decider (which doesn't need to be your H) that can
    answer the question correctly, and that exists, you have even shown how
    to build it (your H1).

    Thus, it isn't that the "input" is undecidable, it is that the PROBLEM
    is, as no one machine can compute the answer for every possible input.
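
    The "decidable by another decider" point can be made concrete. In the
    sketch below (illustrative names only; H1 is a hypothetical second
    decider, loosely playing the role the reply assigns to it) the D built
    against H has a perfectly definite halting status, and a different
    machine answers it correctly:

```python
def make_D(H):
    def D():
        if H(D):
            while True:
                pass
    return D

def H(program):   # the decider that D was built to contradict
    return True

D = make_D(H)

def H1(program):
    # A different decider. For this particular D it answers correctly:
    # H(D) is True, so D enters its infinite loop and never halts.
    return False

# D's actual behavior follows from its design: it loops iff H(D) is True.
d_halts = not H(D)
print(H(D) == d_halts)   # False: H is wrong about D
print(H1(D) == d_halts)  # True:  H1 is right about the very same D
```

    So the input itself is not "undecidable"; only the particular machine
    it was constructed against gets it wrong.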

    AGAIN, you are showing your STUPIDITY and IGNORANCE.

  • From olcott@21:1/5 to olcott on Sun Oct 29 17:38:17 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/2023 3:58 PM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus
    isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one
    answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no actual
    limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.


    The halting problem proofs merely show that the problem
    definition is unsatisfiable because every H of the infinite
    set of all Turing Machines has an input that makes the
    question: Does your input halt? into a self-contradictory
    thus incorrect question for this H.

    I now have two University professors that agree with this.
    My words may need some technical improvement...

    [problem specification] is unsatisfiable

    The idea is to convey the essence of many technical
    papers in a single sound bite:

    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

  • From Richard Damon@21:1/5 to olcott on Sun Oct 29 16:29:34 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/23 3:38 PM, olcott wrote:
    On 10/29/2023 3:58 PM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus
    isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one
    answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no actual
    limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.


    The halting problem proofs merely show that the problem
    definition is unsatisfiable because every H of the infinite
    set of all Turing Machines has an input that makes the
    question: Does your input halt? into a self-contradictory
    thus incorrect question for this H.

    I now have two University professors that agree with this.
    My words may need some technical improvement...

    [problem specification] is unsatisfiable

    The idea is to convey the essence of many technical
    papers in a single sound bite:

    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*



    Anonymous experts are not "evidence" and no "expert" can contradict the
    actual definitions.

    Especially when you don't even quote the actual words used, since you
    have shown yourself to misinterpret what they are saying, or have used
    misleading wording where they will interpret your words to mean what
    they are supposed to mean, and not your corrupted meaning.

    You are just continuing to prove that you do not understand how logic
    works, and by not even trying to refute the rebuttals are accepting them
    as correct responses, and thus admitting you are just a stupid liar.

    As pointed out, the actual questions DO have answers, so you are just an
    unsound liar by your arguments that they do not.

    You are just making sure that your name will be MUD for as long as it
    is remembered, until it falls in the trash heap of history.

    This will also mean that any good ideas you might have had have been
    poisoned and worthless.

    You have just gas-lighted your self into being just a babbling idiot
    that can only repeat the lies he convinced himself of, with no actual
    logical backing.

    Too bad.

  • From olcott@21:1/5 to olcott on Sun Oct 29 18:43:54 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/2023 5:38 PM, olcott wrote:
    On 10/29/2023 3:58 PM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus
    isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one
    answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no actual
    limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.


    The halting problem proofs merely show that the problem
    definition is unsatisfiable because every H of the infinite
    set of all Turing Machines has an input that makes the
    question: Does your input halt? into a self-contradictory
    thus incorrect question for this H.

    I now have two University professors that agree with this.
    My words may need some technical improvement...

    [problem specification] is unsatisfiable

    The idea is to convey the essence of many technical
    papers in a single sound bite:

    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

    Anonymous experts are not "evidence"
    and no "expert" can contradict the
    actual definitions.

    The whole thing is a matter of these definitions
    semantically entailing additional nuances of meaning
    that no one ever noticed before.

    Computer scientists almost never pay any attention
    at all to the philosophical underpinnings of the
    foundations of concepts such as undecidability.

    All of my related work in the last twenty years
    has focused on these foundational underpinnings.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

  • From Richard Damon@21:1/5 to olcott on Sun Oct 29 17:44:42 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/23 4:43 PM, olcott wrote:
    On 10/29/2023 5:38 PM, olcott wrote:
    On 10/29/2023 3:58 PM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus
    isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one
    answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no actual
    limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.


    The halting problem proofs merely show that the problem
    definition is unsatisfiable because every H of the infinite
    set of all Turing Machines has an input that makes the
    question: Does your input halt? into a self-contradictory
    thus incorrect question for this H.

    I now have two University professors that agree with this.
    My words may need some technical improvement...

    [problem specification] is unsatisfiable

    The idea is to convey the essence of many technical
    papers in a single sound bite:

    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

       Anonymous experts are not "evidence"
       and no "expert" can contradict the
       actual definitions.

    The whole thing is a matter of these definitions
    semantically entailing additional nuances of meaning
    that no one ever noticed before.

    Since you are so bad at the actual definition of words, it seems more
    like you are imagining things that aren't there.

    If you HAVE found an actual "nuances" that hasn't been noticed before,
    maybe if you try an actual step by step proof showing that "nuance".

    I don't think you can, and this is just another case of an idiot
    shooting at a target that just doesn't exist.


    Computer scientists almost never pay any attention
    at all to the philosophical underpinnings of the
    foundations of concepts such as undecidability.

    Maybe it is the philosophers that don't understand that undecidability
    is a PRECISELY defined concept.

    The thing that you don't seem to understand is that in Formal Systems,
    the rules are very important, and the things you are talking about are
    well established by those rules.

    If you want to change the "Rules" of the system, then you are in a very
    real sense needing to START OVER and build back up from the ground up.

    It seems that you are so ignorant that you don't understand that many
    of your "new" ideas already exist, but because of their discovered
    limitations are just parts of fringe systems.

    Yes, you can have systems where all true statements are provable, but
    the resulting system ends up very limited in scope, and can't be used to
    form anything like the mathematics that support things like Computation
    Theory.


    All of my related work in the last twenty years
    has focused on these foundational underpinnings.


    And is a pile of rubbish, because you don't actually seem to know what
    the things actually mean.

    Maybe if you were willing to actually LEARN about the systems you want
    to talk about, but your stated fear of "Learning error by rote" has put
    you in the state of Being in Error by Ignorance.

    Your idea of building a system from "First Principles" requires you to
    first actually LEARN those "First Principles". And for a "Formal Logic
    System" that means at least enough to know all the basic rules and
    definitions of the system. Things you have at times just admitted you
    never knew, which sort of negates any "First Principle" development you
    might have done.

    I will say that many of your errors where known about 100 years ago, so
    it shows a glaring hole in your education.

  • From olcott@21:1/5 to olcott on Sun Oct 29 19:57:28 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/2023 6:43 PM, olcott wrote:
    On 10/29/2023 5:38 PM, olcott wrote:
    On 10/29/2023 3:58 PM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus
    isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one
    answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no actual
    limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.


    The halting problem proofs merely show that the problem
    definition is unsatisfiable because every H of the infinite
    set of all Turing Machines has an input that makes the
    question: Does your input halt? into a self-contradictory
    thus incorrect question for this H.

    I now have two University professors that agree with this.
    My words may need some technical improvement...

    [problem specification] is unsatisfiable

    The idea is to convey the essence of many technical
    papers in a single sound bite:

    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

       Anonymous experts are not "evidence"
       and no "expert" can contradict the
       actual definitions.

    The whole thing is a matter of these definitions
    semantically entailing additional nuances of meaning
    that no one ever noticed before.

    Computer scientists almost never pay any attention
    at all to the philosophical underpinnings of the
    foundations of concepts such as undecidability.

    All of my related work in the last twenty years
    has focused on these foundational underpinnings.


    In the same way that incompleteness is proven whenever
    any WFF of a formal system cannot be proven or refuted
    in this formal system EVEN WHEN THE WFF IS SEMANTICALLY
    SELF-CONTRADICTORY

    The notion of undecidability is determined even when the
    decider is required to correctly answer a self-contradictory
    (thus incorrect) question.

    This is the epiphany of my work for the last 20 years and
    two professors agree that this does apply to the halting
    problem specification.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

  • From Richard Damon@21:1/5 to olcott on Sun Oct 29 18:08:02 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/23 5:57 PM, olcott wrote:
    On 10/29/2023 6:43 PM, olcott wrote:
    On 10/29/2023 5:38 PM, olcott wrote:
    On 10/29/2023 3:58 PM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus
    isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one
    answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no
    actual limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.


    The halting problem proofs merely show that the problem
    definition is unsatisfiable because every H of the infinite
    set of all Turing Machines has an input that makes the
    question: Does your input halt? into a self-contradictory
    thus incorrect question for this H.

    I now have two University professors that agree with this.
    My words may need some technical improvement...

    [problem specification] is unsatisfiable

    The idea is to convey the essence of many technical
    papers in a single sound bite:

    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

        Anonymous experts are not "evidence"
        and no "expert" can contradict the
        actual definitions.

    The whole thing is a matter of these definitions
    semantically entailing additional nuances of meaning
    that no one ever noticed before.

    Computer scientists almost never pay any attention
    at all to the philosophical underpinnings of the
    foundations of concepts such as undecidability.

    All of my related work in the last twenty years
    has focused on these foundational underpinnings.


    In the same way that incompleteness is proven whenever
    any WFF of a formal system cannot be proven or refuted
    in this formal system EVEN WHEN THE WFF IS SEMANTICALLY
    SELF-CONTRADICTORY

    Except it isn't, because you don't understand the logic.


    The notion of undecidability is determined even when the
    decider is required to correctly answer a self-contradictory
    (thus incorrect) question.

    Which it isn't, and you don't understand the term.


    This is the epiphany of my work for the last 20 years and
    two professors agree that this does apply to the halting
    problem specification.


    yes, your "epiphany" is just your delusion from stupidity.

    You have PROVEN you don't understand a thing about what you are talking
    about and thus prove yourself a liar.

    As I mentioned, if you really think you have something, try to actually
    show it with a real formal proof starting from the actual accepted
    definitions.

    Your problem seems to be that you just don't understand the fields well
    enough to know what you can actually start with, or logic enough to
    actually form a real logical proof.

    You're just repeating your INCORRECT claims, which just proves that you
    have gaslighted yourself into believing your lies, and that you actually have
    nothing to base your work on, except your own stupid lies.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to olcott on Sun Oct 29 20:19:26 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/2023 7:57 PM, olcott wrote:
    On 10/29/2023 6:43 PM, olcott wrote:
    On 10/29/2023 5:38 PM, olcott wrote:
    On 10/29/2023 3:58 PM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus
    isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one
    answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no
    actual limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.


    The halting problem proofs merely show that the problem
    definition is unsatisfiable because every H of the infinite
    set of all Turing Machines has an input that makes the
    question: Does your input halt? into a self-contradictory
    thus incorrect question for this H.

    I now have two University professors that agree with this.
    My words may need some technical improvement...

    [problem specification] is unsatisfiable

    The idea is to convey the essence of many technical
    papers in a single sound bite:

    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

        Anonymous experts are not "evidence"
        and no "expert" can contradict the
        actual definitions.

    The whole thing is a matter of these definitions
    semantically entailing additional nuances of meaning
    that no one ever noticed before.

    Computer scientists almost never pay any attention
    at all to the philosophical underpinnings of the
    foundations of concepts such as undecidability.

    All of my related work in the last twenty years
    has focused on these foundational underpinnings.


    In the same way that incompleteness is proven whenever
    any WFF of a formal system cannot be proven or refuted
    in this formal system EVEN WHEN THE WFF IS SEMANTICALLY
    SELF-CONTRADICTORY

    The notion of undecidability is determined even when the
    decider is required to correctly answer a self-contradictory
    (thus incorrect) question.

    This is the epiphany of my work for the last 20 years and
    two professors agree that this does apply to the halting
    problem specification.


    I cannot form a proof on the basis of the conventional
    definitions because the issue is that one of these
    definitions semantically entails more meaning than
    anyone ever noticed before.

    That this applies generically to the notion of undecidability
    seems to be an extension of these same ideas that these
    professors only applied to the halting problem specification.

    The lead of these two professors and I exchanged fifty emails
    where he confirmed my verbatim paraphrase of his ideas using
    my own terms such as "incorrect questions".

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sun Oct 29 18:37:27 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/23 6:19 PM, olcott wrote:
    On 10/29/2023 7:57 PM, olcott wrote:
    On 10/29/2023 6:43 PM, olcott wrote:
    On 10/29/2023 5:38 PM, olcott wrote:
    On 10/29/2023 3:58 PM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of >>>>>> whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus
    isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one
    answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no
    actual limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological >>>>>> inputs the same way that ZFC handled Russell's Paradox.


    The halting problem proofs merely show that the problem
    definition is unsatisfiable because every H of the infinite
    set of all Turing Machines has an input that makes the
    question: Does your input halt? into a self-contradictory
    thus incorrect question for this H.

    I now have two University professors that agree with this.
    My words may need some technical improvement...

    [problem specification] is unsatisfiable

    The idea is to convey the essence of many technical
    papers in a single sound bite:

    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

        Anonymous experts are not "evidence"
        and no "expert" can contradict the
        actual definitions.

    The whole thing is a matter of these definitions
    semantically entailing additional nuances of meaning
    that no one ever noticed before.

    Computer scientists almost never pay any attention
    at all to the philosophical underpinnings of the
    foundations of concepts such as undecidability.

    All of my related work in the last twenty years
    has focused on these foundational underpinnings.


    In the same way that incompleteness is proven whenever
    any WFF of a formal system cannot be proven or refuted
    in this formal system EVEN WHEN THE WFF IS SEMANTICALLY
    SELF-CONTRADICTORY

    The notion of undecidability is determined even when the
    decider is required to correctly answer a self-contradictory
    (thus incorrect) question.

    This is the epiphany of my work for the last 20 years and
    two professors agree that this does apply to the halting
    problem specification.


    I cannot form a proof on the basis of the conventional
    definitions because the issue is that one of these
    definitions semantically entails more meaning than
    anyone ever noticed before.

    Then you are admitting that you can't do the work in the formal system,
    so any claim you make about anything IN the system is just invalid.

    IF you want to try to change the definitions, you need to just re-derive
    the system from the ground up with your new rules. (I doubt you can do
    that).

    Or, you could try to get some help by trying to clearly explain the
    error in the fundamental rules you think are wrong.

    Note, to do that you need to actually show the real problem that the
    rule is causing.

    Your idea that undecidable problem are actually invalid isn't going to
    fly, as many of the undecidable problems are actually quite important.

    The fact that you can't understand that means you are going to have a
    hard time convincing others of your ideas.


    That this applies generically to the notion of undecidability
    seems to be an extension of these sames ideas that these
    professors only applied to the halting problem specification.

    You have very bad professors if they only apply "undecidability" to just
    the Halting Problem, as MANY problems are "undecidable".


    The lead of these two professors and I exchanged fifty emails
    where he confirmed my verbatim paraphrase of his ideas using
    my own terms such as "incorrect questions".


    And, until your provide the names and actual statements, this claim is
    worth exactly NOTHING.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to olcott on Sun Oct 29 20:44:22 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/2023 8:19 PM, olcott wrote:
    On 10/29/2023 7:57 PM, olcott wrote:
    On 10/29/2023 6:43 PM, olcott wrote:
    On 10/29/2023 5:38 PM, olcott wrote:
    On 10/29/2023 3:58 PM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus
    isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one
    answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no
    actual limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.


    The halting problem proofs merely show that the problem
    definition is unsatisfiable because every H of the infinite
    set of all Turing Machines has an input that makes the
    question: Does your input halt? into a self-contradictory
    thus incorrect question for this H.

    I now have two University professors that agree with this.
    My words may need some technical improvement...

    [problem specification] is unsatisfiable

    The idea is to convey the essence of many technical
    papers in a single sound bite:

    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

        Anonymous experts are not "evidence"
        and no "expert" can contradict the
        actual definitions.

    The whole thing is a matter of these definitions
    semantically entailing additional nuances of meaning
    that no one ever noticed before.

    Computer scientists almost never pay any attention
    at all to the philosophical underpinnings of the
    foundations of concepts such as undecidability.

    All of my related work in the last twenty years
    has focused on these foundational underpinnings.


    In the same way that incompleteness is proven whenever
    any WFF of a formal system cannot be proven or refuted
    in this formal system EVEN WHEN THE WFF IS SEMANTICALLY
    SELF-CONTRADICTORY

    The notion of undecidability is determined even when the
    decider is required to correctly answer a self-contradictory
    (thus incorrect) question.

    This is the epiphany of my work for the last 20 years and
    two professors agree that this does apply to the halting
    problem specification.


    I cannot form a proof on the basis of the conventional
    definitions because the issue is that one of these
    definitions semantically entails more meaning than
    anyone ever noticed before.

    That this applies generically to the notion of undecidability
    seems to be an extension of these same ideas that these
    professors only applied to the halting problem specification.

    The lead of these two professors and I exchanged fifty emails
    where he confirmed my verbatim paraphrase of his ideas using
    my own terms such as "incorrect questions".


    Then you are admitting that you can't do the
    work in the formal system, so any claim you
    make about anything IN the system is just invalid.

    That the "term undecidability" semantically entails
    previously unnoticed nuances of meaning can be understood
    on the basis of the reasoning of myself and these two professors.

    Just like incompleteness includes self-contradictory
    expressions in its measure of incompleteness, undecidability
    includes problem specifications that entail self-contradictory
    questions. IF YOU WEREN'T STUCK IN REBUTTAL MODE YOU MIGHT SEE THIS

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sun Oct 29 19:02:20 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/23 6:44 PM, olcott wrote:
    On 10/29/2023 8:19 PM, olcott wrote:
    On 10/29/2023 7:57 PM, olcott wrote:
    On 10/29/2023 6:43 PM, olcott wrote:
    On 10/29/2023 5:38 PM, olcott wrote:
    On 10/29/2023 3:58 PM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer >>>>>>> program D will do when D has been programmed to do the opposite of >>>>>>> whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus
    isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one
    answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no actual
    limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.


    The halting problem proofs merely show that the problem
    definition is unsatisfiable because every H of the infinite
    set of all Turing Machines has an input that makes the
    question: Does your input halt? into a self-contradictory
    thus incorrect question for this H.

    I now have two University professors that agree with this.
    My words may need some technical improvement...

    [problem specification] is unsatisfiable

    The idea is to convey the essence of many technical
    papers in a single sound bite:

    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

        Anonymous experts are not "evidence"
        and no "expert" can contradict the
        actual definitions.

    The whole thing is a matter of these definitions
    semantically entailing additional nuances of meaning
    that no one ever noticed before.

    Computer scientists almost never pay any attention
    at all to the philosophical underpinnings of the
    foundations of concepts such as undecidability.

    All of my related work in the last twenty years
    has focused on these foundational underpinnings.


    In the same way that incompleteness is proven whenever
    any WFF of a formal system cannot be proven or refuted
    in this formal system EVEN WHEN THE WFF IS SEMANTICALLY
    SELF-CONTRADICTORY

    The notion of undecidability is determined even when the
    decider is required to correctly answer a self-contradictory
    (thus incorrect) question.

    This is the epiphany of my work for the last 20 years and
    two professors agree that this does apply to the halting
    problem specification.


    I cannot form a proof on the basis of the conventional
    definitions because the issue is that one of these
    definitions semantically entails more meaning than
    anyone ever noticed before.

    That this applies generically to the notion of undecidability
    seems to be an extension of these same ideas that these
    professors only applied to the halting problem specification.

    The lead of these two professors and I exchanged fifty emails
    where he confirmed my verbatim paraphrase of his ideas using
    my own terms such as "incorrect questions".


    Then you are admitting that you can't do the
       work in the formal system, so any claim you
       make about anything IN the system is just invalid.

    That the "term undecidability" semantically entails
    previously unnoticed nuances of meaning can be understood
    on the basis of the reasoning of myself and these two professors.

    Maybe in a non-formal system or setting, but in Computability Theory, it
    means, and EXACTLY means that there does not exist a Turing Machine that
    can compute the "function".


    What "nuances" are you claiming?


    Remember also, that the "Function" mentioned is nothing more than a mathematical mapping of input objects to output values, defined for all elements of the input domain.


    Just like incompleteness includes self-contradictory
    expressions in its measure of incompleteness, undecidability
    includes problem specifications that entail self-contradictory
    questions. IF YOU WEREN'T STUCK IN REBUTTAL MODE YOU MIGHT SEE THIS


    Nope. You still don't understand the meaning of the words.

    Completeness means PRECISELY, and nothing more, that all true statements
    in the system can be proven in the system.

    Incompleteness, thus, means that there exists at least ONE true
    statement in the system that can not be proven in that system.

    For Godel's proof, that statement is "that there does not exist a natural
    number g that satisfies a particular Primitive Recursive Relationship"
    that was derived in a meta-system of the system, but said PRR is fully
    defined in that system.


    What is "self-contradictory" of that statement?


    Remember, all the arguments about provability don't exist in the system,
    and "self-contradiction" is a property in the system being discussed.

    Your problem is you don't understand the logic of the proof enough to understand what the statement actually is.


    Go ahead, try to actually answer one of the questions with an actual
    logical answer based on FACTS.

    My guess is you are going to again, just restate your FALSE claims and
    thus prove that you don't actually have any true basis for your claims.

    I DARE YOU to try to answer.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
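    [Editorial note: the point above, that "undecidable" refers to a total
    mathematical mapping that no single machine computes, can be illustrated
    with a toy sketch (the mini-language of "halt"/"loop" ops is invented for
    this example). Bounded simulation can confirm halting but can never, by
    itself, soundly report non-halting, which is why simulation alone does
    not yield a total decider.]

    ```python
    def run_bounded(program, max_steps):
        """Simulate a toy program (a list of "halt" / "loop" ops) for at
        most max_steps steps.  Returns True if it provably halted, or
        None if the step budget ran out (no verdict either way)."""
        pc = 0
        for _ in range(max_steps):
            if pc >= len(program) or program[pc] == "halt":
                return True       # fell off the end or hit "halt"
            if program[pc] == "loop":
                continue          # stay on the same op forever
        return None               # budget exhausted: "don't know"

    # Halting programs are eventually confirmed (halting is semi-decidable),
    # but no finite budget ever justifies answering "does not halt":
    assert run_bounded(["halt"], 10) is True
    assert run_bounded(["loop"], 10) is None
    assert run_bounded(["loop"], 10_000) is None
    ```

    [No increase of the step budget turns None into a sound "does not halt";
    that totality gap is what the undecidability theorem formalizes.]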
  • From olcott@21:1/5 to olcott on Sun Oct 29 21:12:16 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/2023 8:44 PM, olcott wrote:
    On 10/29/2023 8:19 PM, olcott wrote:
    On 10/29/2023 7:57 PM, olcott wrote:
    On 10/29/2023 6:43 PM, olcott wrote:
    On 10/29/2023 5:38 PM, olcott wrote:
    On 10/29/2023 3:58 PM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer >>>>>>> program D will do when D has been programmed to do the opposite of >>>>>>> whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus
    isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one
    answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no actual
    limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.


    The halting problem proofs merely show that the problem
    definition is unsatisfiable because every H of the infinite
    set of all Turing Machines has an input that makes the
    question: Does your input halt? into a self-contradictory
    thus incorrect question for this H.

    I now have two University professors that agree with this.
    My words may need some technical improvement...

    [problem specification] is unsatisfiable

    The idea is to convey the essence of many technical
    papers in a single sound bite:

    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

        Anonymous experts are not "evidence"
        and no "expert" can contradict the
        actual definitions.

    The whole thing is a matter of these definitions
    semantically entailing additional nuances of meaning
    that no one ever noticed before.

    Computer scientists almost never pay any attention
    at all to the philosophical underpinnings of the
    foundations of concepts such as undecidability.

    All of my related work in the last twenty years
    has focused on these foundational underpinnings.


    In the same way that incompleteness is proven whenever
    any WFF of a formal system cannot be proven or refuted
    in this formal system EVEN WHEN THE WFF IS SEMANTICALLY
    SELF-CONTRADICTORY

    The notion of undecidability is determined even when the
    decider is required to correctly answer a self-contradictory
    (thus incorrect) question.

    This is the epiphany of my work for the last 20 years and
    two professors agree that this does apply to the halting
    problem specification.


    I cannot form a proof on the basis of the conventional
    definitions because the issue is that one of these
    definitions semantically entails more meaning than
    anyone ever noticed before.

    That this applies generically to the notion of undecidability
    seems to be an extension of these same ideas that these
    professors only applied to the halting problem specification.

    The lead of these two professors and I exchanged fifty emails
    where he confirmed my verbatim paraphrase of his ideas using
    my own terms such as "incorrect questions".


    Then you are admitting that you can't do the
       work in the formal system, so any claim you
       make about anything IN the system is just invalid.

    That the "term undecidability" semantically entails
    previously unnoticed nuances of meaning can be understood
    on the basis of the reasoning of myself and these two professors.

    Just like incompleteness includes self-contradictory
    expressions in its measure of incompleteness, undecidability
    includes problem specifications that entail self-contradictory
    questions. IF YOU WEREN'T STUCK IN REBUTTAL MODE YOU MIGHT SEE THIS


    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*
    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to Richard Damon on Sun Oct 29 19:05:01 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/23 7:02 PM, Richard Damon wrote:
    On 10/29/23 6:44 PM, olcott wrote:
    On 10/29/2023 8:19 PM, olcott wrote:
    On 10/29/2023 7:57 PM, olcott wrote:
    On 10/29/2023 6:43 PM, olcott wrote:
    On 10/29/2023 5:38 PM, olcott wrote:
    On 10/29/2023 3:58 PM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer >>>>>>>> program D will do when D has been programmed to do the opposite of >>>>>>>> whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt
    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus
    isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one
    answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no actual
    limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.


    The halting problem proofs merely show that the problem
    definition is unsatisfiable because every H of the infinite
    set of all Turing Machines has an input that makes the
    question: Does your input halt? into a self-contradictory
    thus incorrect question for this H.
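The construction being argued over can be made concrete with a minimal Python sketch (an editorial illustration, not code from the thread; `make_D`, `H_always_true`, and `H_always_false` are invented toy names): whatever Boolean verdict a claimed halt decider H returns for its pathological input D, D behaves the opposite way.

```python
def make_D(H):
    """Build the 'pathological' input D for a claimed halt decider H."""
    def D():
        if H(D):           # H predicts "D halts" ...
            while True:    # ... so D loops forever, refuting H
                pass
        # H predicts "D does not halt", so D halts immediately, refuting H
    return D

def H_always_true(program):
    """Toy 'decider' that answers True for every program."""
    return True

def H_always_false(program):
    """Toy 'decider' that answers False for every program."""
    return False

# Whichever verdict H gives for its own D, that verdict is wrong:
D_false = make_D(H_always_false)
assert D_false() is None        # H said "never halts", yet D_false halts

D_true = make_D(H_always_true)  # H says "halts", yet running D_true
                                # would loop forever, so we do not call it
```

Any real H, however sophisticated, faces the same dilemma for its own D; the toy deciders only make the two failure directions explicit.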

    I now have two University professors that agree with this.
    My words may need some technical improvement...

    [problem specification] is unsatisfiable

    The idea is to convey the essence of many technical
    papers in a single sound bite:

    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

        Anonymous experts are not "evidence"
        and no "expert" can contradict the
        actual definitions.

    The whole thing is a matter of these definitions
    semantically entailing additional nuances of meaning
    that no one ever noticed before.

    Computer scientists almost never pay any attention
    at all to the philosophical underpinnings of the
    foundations of concepts such as undecidability.

    All of my related work in the last twenty years
    has focused on these foundational underpinnings.


    In the same way that incompleteness is proven whenever
    any WFF of a formal system cannot be proven or refuted
    in this formal system EVEN WHEN THE WFF IS SEMANTICALLY
    SELF-CONTRADICTORY

    The notion of undecidability is determined even when the
    decider is required to correctly answer a self-contradictory
    (thus incorrect) question.

    This is the epiphany of my work for the last 20 years and
    two professors agree that this does apply to the halting
    problem specification.


    I cannot form a proof on the basis of the conventional
    definitions because the issue is that one of these
    definitions semantically entails more meaning than
    anyone ever noticed before.

    That this applies generically to the notion of undecidability
    seems to be an extension of these same ideas that these
    professors only applied to the halting problem specification.

    The lead of these two professors and I exchanged fifty emails
    where he confirmed my verbatim paraphrase of his ideas using
    my own terms such as "incorrect questions".


        Then you are admitting that you can't do the
        work in the formal system, so any claim you
        make about anything IN the system is just invalid.

    That the "term undecidability" semantically entails
    previously unnoticed nuances of meaning can be understood
    on the basis of the reasoning of myself and these two professors.

    Maybe in a non-formal system or setting, but in Computability Theory it
    means, and EXACTLY means, that there does not exist a Turing Machine
    that can compute the "function".


    What "nuances" are you claiming?


    Remember also that the "Function" mentioned is nothing more than a
    mathematical mapping of input objects to output values, defined for
    all elements of the input domain.
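That point, a function as a mapping that exists whether or not any machine can compute it, can be illustrated with a small Python sketch (editorial, not from the thread; the toy programs `spin` and `done` are invented names). Over a finite toy domain the halting mapping can simply be written down, even though no general algorithm computes it for arbitrary programs.

```python
def spin():
    """A toy program that never halts."""
    while True:
        pass

def done():
    """A toy program that halts immediately."""
    return 42

# The halting *function* is just a total mapping from programs to
# booleans. Over this two-element toy domain we can tabulate it directly;
# its existence as a mapping does not depend on any machine being able
# to compute it in general.
HALTS = {
    spin: False,  # spin never halts
    done: True,   # done halts
}

assert HALTS[done] and not HALTS[spin]
```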


    Just like incompleteness includes self-contradictory
    expressions in its measure of incompleteness, undecidability
    includes problem specifications that entail self-contradictory
    questions. IF YOU WEREN'T STUCK IN REBUTTAL MODE YOU MIGHT SEE THIS


    Nope. You still don't understand the meaning of the words.

    Completeness means PRECISELY, and nothing more, that all true
    statements in the system can be proven in the system.

    Incompleteness, thus, means that there exists at least ONE true
    statement in the system that cannot be proven in that system.

    For Godel's proof, that statement is "there does not exist a natural
    number g that satisfies a particular Primitive Recursive Relationship"
    that was derived in a meta-system of the system, but said PRR is fully
    defined in that system.


    What is "self-contradictory" about that statement?


    Remember, all the arguments about provability don't exist in the
    system, and "self-contradiction" is a property in the system being
    discussed.

    Your problem is you don't understand the logic of the proof enough to understand what the statement actually is.


    Go ahead, try to actually answer one of the questions with an actual
    logical answer based on FACTS.

    My guess is you are going to just restate your FALSE claims again, and
    thus prove that you don't actually have any true basis for your claims.

    I DARE YOU to try to answer.

    I will add that "the results prove something I don't like" is not
    grounds for saying something is wrong.

    You need to show an ACTUAL contradiction in the system by the
    definitions in the system (not something added).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to olcott on Sun Oct 29 21:27:52 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/2023 9:12 PM, olcott wrote:
    [...]

    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

    Once you pay enough attention to see that the reasoning does
    entail this, then you will know that the two professors and I are
    correct.

    If you only want to provide a rebuttal no matter what the actual truth
    is, then you will continue to pretend that you don't see this.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sun Oct 29 19:21:14 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/23 7:12 PM, olcott wrote:
    [...]

    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

    So, as predicted, you couldn't answer any of the questions put to you;
    you just repeated your LIE again, thus proving your argument has no
    basis.

    You still don't understand that "self-contradictory" needs to refer to
    "self", but nothing in the Halting Problem proof actually "referred" to
    "self".

    And the question posed does have a single correct answer, so it
    can't be "contradictory".

    Thus proving you are just a LIAR.

    Your refusal to actually answer any of the errors pointed out is just
    hammering nails into the coffin of your argument, which died years ago;
    you have spent your last years just beating a dead red herring.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sun Oct 29 19:50:02 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/23 7:27 PM, olcott wrote:
    [...]

    Once you pay enough attention to see that the reasoning does
    entail this then you will know that I and the two professors are
    correct.

    You haven't given a "correct" reason, only arguments based on incorrect
    definitions, and those are unsound.

    And citing anonymous supporters without even quoting exactly what they
    agreed to just makes you look foolish.

    My guess is you aren't going to quote what they actually said, as you
    know you are misinterpreting their statements and don't want that
    pointed out, like your error with Prof Sipser.


    If you only want to provide a rebuttal no matter what the actual truth
    is then you will continue to pretend that you don't see this.


    That is what YOU are doing. I give reasons based on the actual
    definitions and logical argument. You give "reasons" based on your
    incorrect definitions that you cannot support, and you don't even try
    to build a Formal Argument.

    If you want to get out of unsound non-rebutting mode, maybe you should
    try to answer some of the questions put to you.

    Until then, you are just proving yourself to be the idiot.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to olcott on Sun Oct 29 22:01:24 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/2023 9:27 PM, olcott wrote:
    [...]

    If you only want to provide a rebuttal no matter what the actual truth
    is then you will continue to pretend that you don't see this.


    When you just glance at my words to form a superficial basis
    for an incorrect rebuttal you won't see this.

    When we hypothesize that this <is> literally true then it
    has enormous consequences:

    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

    We had to boil it down to its sound-bite form to sharply focus
    attention on a single point, so that rebuttals based on the strawman
    deception or ad hominem are easily seen as having no basis whatsoever.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sun Oct 29 20:36:38 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/23 8:01 PM, olcott wrote:
    On 10/29/2023 9:27 PM, olcott wrote:
    On 10/29/2023 9:12 PM, olcott wrote:
    On 10/29/2023 8:44 PM, olcott wrote:
    On 10/29/2023 8:19 PM, olcott wrote:
    On 10/29/2023 7:57 PM, olcott wrote:
    On 10/29/2023 6:43 PM, olcott wrote:
    On 10/29/2023 5:38 PM, olcott wrote:
    On 10/29/2023 3:58 PM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer >>>>>>>>>> program D will do when D has been programmed to do the
    opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value >>>>>>>>>> from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does >>>>>>>>>> not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt >>>>>>>>>>
    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ (Correct(H(D)==true))==false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable
    specification thus
    isomorphic to a question that has been defined to have no correct >>>>>>>>>> answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the >>>>>>>>>> question. In this case we know to blame the question and not >>>>>>>>>> the one
    answering it.

    When we understand that there are some inputs to every TM H that >>>>>>>>>> contradict both Boolean return values that H could return then >>>>>>>>>> the
    question: Does your input halt? is essentially a
    self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places >>>>>>>>>> no actual
    limit on anyone or anything.

    This insight opens up an alternative treatment of these
    pathological
    inputs the same way that ZFC handled Russell's Paradox.


    The halting problem proofs merely show that the problem
    definition is unsatisfiable because every H of the infinite
    set of all Turing Machines has an input that makes the
    question: Does your input halt? into a self-contradictory
    thus incorrect question for this H.

    I now have two University professors that agree with this.
    My words may need some technical improvement...

    [problem specification] is unsatisfiable

    The idea is to convey the essence of many technical
    papers in a single sound bite:

    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

        Anonymous experts are not "evidence"
        and no "expert" can contradict the
        actual definitions.

    The whole thing is a matter of these definitions
    semantically entailing additional nuances of meaning
    that no one ever noticed before.

    Computer scientists almost never pay any attention
    at all to the philosophical underpinnings of the
    foundations of concepts such as undecidability.

    All of my related work in the last twenty years
    has focused on these foundational underpinnings.


    In the same way that incompleteness is proven whenever
    any WFF of a formal system cannot be proven or refuted
    in this formal system EVEN WHEN THE WFF IS SEMANTICALLY
    SELF-CONTRADICTORY

    The notion of undecidability is determined even when the
    decider is required to correctly answer a self-contradictory
    (thus incorrect) question.

    This is the epiphany of my work for the last 20 years and
    two professors agree that this does apply to the halting
    problem specification.


    I cannot form a proof on the basis of the conventional
    definitions because the issue is that one of these
    definitions semantically entails more meaning than
    anyone ever noticed before.

    That this applies generically to the notion of undecidability
    seems to be an extension of these same ideas that these
    professors only applied to the halting problem specification.

    The lead of these two professors and I exchanged fifty emails
    where he confirmed my verbatim paraphrase of his ideas using
    my own terms such as "incorrect questions".


        Then you are admitting that you can't do the
        work in the formal system, so any claim you
        make about anything IN the system is just invalid.

    That the "term undecidability" semantically entails
    previously unnoticed nuances of meaning can be understood
    on the basis of the reasoning of myself and these two professors.

    Just like incompleteness includes self-contradictory
    expressions in its measure of incompleteness, undecidability
    includes problem specifications that entail self-contradictory
    questions. IF YOU WEREN'T STUCK IN REBUTTAL MODE YOU MIGHT SEE THIS


    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

    Once you pay enough attention to see that the reasoning does
    entail this then you will know that I and the two professors are
    correct.

    If you only want to provide a rebuttal no matter what the actual truth
    is then you will continue to pretend that you don't see this.


    When you just glance at my words to form a superficial basis
    for an incorrect rebuttal you won't see this.

    It doesn't take more than a glance to see your errors.

    Your failure to actually point out an error in my statements says that
    you don't even attempt an "incorrect rebuttal" but are just accepting
    the errors I have pointed out as actual errors.

    YOU seem to be the one just taking a glance at MY words.

    You do seem to project a lot of your errors on others, just like Trump.

    You actually remind me a lot of him: even though you claim to be
    fighting him, you use methods similar to those you claim to be
    fighting against.


    When we hypothesize that this <is> literally true then it
    has enormous consequences:

    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

    Except that you haven't shown how it CAN be true, since there actually is
    no "self-reference" to lead to the "self-contradictory" question.

    Thus, it can't be true.

    If you want to try to show an actual contradiction in the QUESTION
    ITSELF go ahead and try.

    The problem is that the actual question has a definite, provable answer,
    so it gets very hard to show that to be "contradictory".

    Remember, each H gives a DIFFERENT question, as it creates a DIFFERENT
    program D to decide on, and for each of those D(D)'s there is a correct
    answer.

    If a given H(D,D) returns false, saying it predicts its input to be
    non-halting, then we can show that D(D) will in fact be halting, so
    there is no opportunity for the answer to have a contradiction.

    Remember, the question says nothing about what the decider actually
    does, only what answer it needs to give to be correct, without requiring
    it to be correct (that just means this machine isn't actually a correct
    halt decider).

    This is the flaw in your argument: you somehow want to force that the
    Halting Function must actually be decidable, and THAT assumption leads
    to the contradiction, which shows that such an assumption must be
    incorrect.
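    The construction described above can be sketched in Python. All of the
    names below (make_D, H_always_true, H_always_false) are hypothetical
    illustrations, not code from this thread; the point is only that each
    candidate H yields its own D, and that D(D) then has a definite halting
    status that contradicts H's single prediction.

```python
# Sketch of the diagonal construction (hypothetical toy names).
# For a candidate halt decider H, build the "pathological" program D
# that does the opposite of whatever H predicts about D run on itself.

def make_D(H):
    """Each H yields its own D, so each H faces a different question."""
    def D(x):
        if H(x, x):          # H predicts x(x) halts...
            while True:      # ...so D loops forever instead,
                pass
        return None          # otherwise D halts immediately.
    return D

# Two deliberately naive candidates (a real halt decider cannot exist):
def H_always_true(prog, arg):    # always predicts "halts"
    return True

def H_always_false(prog, arg):   # always predicts "never halts"
    return False

D1 = make_D(H_always_true)
# H_always_true predicts D1(D1) halts, so D1(D1) would loop forever:
# the prediction is wrong, yet "does D1(D1) halt?" still has the
# definite answer "no". (We never actually call D1(D1).)
print(H_always_true(D1, D1))     # True

D2 = make_D(H_always_false)
print(D2(D2))                    # None -- D2(D2) halts, so this
                                 # prediction is wrong too
```

    In each case "does D(D) halt?" has exactly one correct answer; it is
    only the fixed candidate H that is forced to give the other one.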


    We had to boil it down to its sound-bite form to
    sharply focus attention on a single point, so that
    rebuttals based on strawman deception or ad
    hominem are easily seen as having no basis whatsoever.


    No, trying to oversimplify it into a "sound bite" removes any of
    your ability to actually express your logic.

    Saying a short LIE doesn't help your cause.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to olcott on Sun Oct 29 22:53:48 2023
    XPost: sci.math, sci.logic, comp.theory

    When we hypothesize that this <is> literally true then it
    has enormous consequences:


    Except that you haven't shown how it CAN be true,
    since there actually is no "self-reference" to
    lead to the "self-contradictory" question.

    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

    No computer program H can correctly predict what
    another computer program D will do when D has been
    programmed to do the opposite of whatever H says.

    The fact that D contradicts both values that every
    corresponding H can possibly return proves that input
    D is isomorphic to a self-contradictory question for H.

    If D would only contradict one of these values then D
    would be a contradictory question. Since D contradicts
    both of these values that makes D self-contradictory.

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sun Oct 29 21:01:55 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/23 8:53 PM, olcott wrote:
    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

    No, and you are just proving you have the maturity of a two-year-old by
    repeating the same LIES.

    What is wrong with the answer that I have given you?

    Remember, the "correct answer" to the halting question doesn't need to
    be the result given by H, and "halting" (for the H's you have
    proposed) IS that correct answer.


    No computer program H can correctly predict what
    another computer program D will do when D has been
    programmed to do the opposite of whatever H says.

    So you agree with the Halting Theorem


    The fact that D contradicts both values that every
    corresponding H can possibly return proves that input
    D is isomorphic to a self-contradictory question for H.

    What BOTH? That shows your stupidity.

    A given H can only give ONE value for a given input, the value its
    algorithm produces.
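    The determinism point above can be illustrated with a minimal sketch
    (the decision procedure H below is an arbitrary, hypothetical stand-in):
    a fixed algorithm applied to a fixed input produces exactly one answer,
    so D only ever has one answer to contradict.

```python
# A fixed algorithm applied to a fixed input yields ONE value.
def H(prog, arg):
    # some arbitrary, fixed decision procedure (hypothetical)
    return (len(prog.__name__) + len(arg.__name__)) % 2 == 0

def example(x):
    return x

# Repeated runs never produce "both" answers:
answers = {H(example, example) for _ in range(100)}
print(len(answers))   # 1
```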


    If D would only contradict one of these values then D
    would be a contradictory question. Since D contradicts
    both of these values that makes D self-contradictory.


    Nope, it only needs to contradict the ONE answer that this H can give.

    Your logic just proves you totally don't understand what a computer
    program is.

    And it proves you to be a total IDIOT.

    TRY TO PROVE ME WRONG.

    Show me an H that could possibly give both answers FOR THE EXACT SAME
    PROGRAM.

    TRY IT, I DOUBLE DARE YOU.

    You are just a chicken idiot,

    You are just digging a deeper hole to bury your stupidity into.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to olcott on Sun Oct 29 23:22:05 2023
    XPost: sci.math, sci.logic, comp.theory

    When we hypothesize that this <is> literally true then it
    has enormous consequences:

    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

    *A self-contradictory question is defined as*
    Any yes/no question that contradicts both yes/no answers.

    Every D derives a self-contradictory question for every
    corresponding H in that:
    (a) when each H says that its D will halt, D loops;
    (b) when each H says that its D will loop, D halts.


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Stockbauer@21:1/5 to olcott on Mon Oct 30 05:41:37 2023
    On Sunday, October 29, 2023 at 11:22:10 PM UTC-5, olcott wrote:
    Right up there at the front you talk about one program doing the opposite
    of what another program does.

    So let's say I have a program that adds up 10 numbers. What's the opposite
    of that?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to olcott on Mon Oct 30 11:12:43 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/29/2023 10:01 PM, olcott wrote:
    On 10/29/2023 9:27 PM, olcott wrote:
    On 10/29/2023 9:12 PM, olcott wrote:
    On 10/29/2023 8:44 PM, olcott wrote:
    On 10/29/2023 8:19 PM, olcott wrote:
    On 10/29/2023 7:57 PM, olcott wrote:
    On 10/29/2023 6:43 PM, olcott wrote:
    On 10/29/2023 5:38 PM, olcott wrote:
    On 10/29/2023 3:58 PM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification, thus
    isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one
    answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return, then the
    question "Does your input halt?" is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no actual
    limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs, the same way that ZFC handled Russell's Paradox.


    The halting problem proofs merely show that the problem
    definition is unsatisfiable because every H of the infinite
    set of all Turing Machines has an input that makes the
    question: Does your input halt? into a self-contradictory
    thus incorrect question for this H.

    I now have two University professors that agree with this.
    My words may need some technical improvement...

    [problem specification] is unsatisfiable

    The idea is to convey the essence of many technical
    papers in a single sound bite:

    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

        Anonymous experts are not "evidence"
        and no "expert" can contradict the
        actual definitions.

    The whole thing is a matter of these definitions
    semantically entailing additional nuances of meaning
    that no one ever noticed before.

    Computer scientists almost never pay any attention
    at all to the philosophical underpinnings of the
    foundations of concepts such as undecidability.

    All of my related work in the last twenty years
    has focused on these foundational underpinnings.


    In the same way that incompleteness is proven whenever
    any WFF of a formal system cannot be proven or refuted
    in this formal system EVEN WHEN THE WFF IS SEMANTICALLY
    SELF-CONTRADICTORY

    The notion of undecidability is determined even when the
    decider is required to correctly answer a self-contradictory
    (thus incorrect) question.

    This is the epiphany of my work for the last 20 years and
    two professors agree that this does apply to the halting
    problem specification.


    I cannot form a proof on the basis of the conventional
    definitions because the issue is that one of these
    definitions semantically entails more meaning than
    anyone ever noticed before.

    That this applies generically to the notion of undecidability
    seems to be an extension of these same ideas that these
    professors only applied to the halting problem specification.

    The lead of these two professors and I exchanged fifty emails
    where he confirmed my verbatim paraphrase of his ideas using
    my own terms such as "incorrect questions".


        Then you are admitting that you can't do the
        work in the formal system, so any claim you
        make about anything IN the system is just invalid.

    That the "term undecidability" semantically entails
    previously unnoticed nuances of meaning can be understood
    on the basis of the reasoning of myself and these two professors.

    Just like incompleteness includes self-contradictory
    expressions in its measure of incompleteness, undecidability
    includes problem specifications that entail self-contradictory
    questions. IF YOU WEREN'T STUCK IN REBUTTAL MODE YOU MIGHT SEE THIS


    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

    Once you pay enough attention to see that the reasoning does
    entail this then you will know that I and the two professors are
    correct.

    If you only want to provide a rebuttal no matter what the actual truth
    is then you will continue to pretend that you don't see this.


    When you just glance at my words to form a superficial basis
    for an incorrect rebuttal you won't see this.

    When we hypothesize that this <is> literally true then it
    has enormous consequences:

    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*


    *A self-contradictory question is defined as*
    Any yes/no question that contradicts both yes/no answers.

    For every H in the set of all Turing Machines there exists a D
    that derives a self-contradictory question for this H in that:
    (a) When each H says that its D will halt, D loops
    (b) When each H says that its D will loop, D halts.


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to olcott on Mon Oct 30 11:18:04 2023
    XPost: sci.math, sci.logic, comp.theory

    When we hypothesize that this <is> literally true then it
    has enormous consequences:

    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

    For every H in the set of all Turing Machines there exists a D
    that derives a self-contradictory question for this H in that
    (a) If this H says that its D will halt, D loops
    (b) If this H says that its D will loop, D halts.
    *Thus the question: Does D halt? is contradicted by some D for each H*


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to olcott on Mon Oct 30 11:29:51 2023
    XPost: sci.math, sci.logic, comp.theory

    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

    *A self-contradictory question is defined as*
    Any yes/no question that contradicts both yes/no answers.

    For every H in the set of all Turing Machines there exists a D
    that derives a self-contradictory question for this H in that
    (a) If this H says that its D will halt, D loops
    (b) If this H says that its D will loop, D halts.
    *Thus the question: Does D halt? is contradicted by some D for each H*
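    The quantifier order in the claim above ("for every H there exists a D") can be sketched as a constructor. The names build_D, H1, and H2 below are hypothetical toy examples, not anyone's actual implementation:

```python
# Hypothetical sketch of the quantifier order in the claim above:
# for EVERY candidate decider H there EXISTS a D built from that H.

def build_D(H):
    """Given any claimed halting decider H, return the D that
    inverts H's verdict about D itself."""
    def D():
        if H(D):          # H says "this D halts"...
            while True:   # ...then D loops forever
                pass
        # H says "this D loops": then D halts by returning
    return D

H1 = lambda p: True    # toy decider that always answers "halts"
H2 = lambda p: False   # toy decider that always answers "loops"
D1, D2 = build_D(H1), build_D(H2)

# Each H is wrong about ITS OWN D, yet each D still has a definite
# behavior: D1 loops (so H2 would be right about it), D2 halts.
print(H1(D1), H2(D2))  # -> True False
```

    Note the design point the sketch makes visible: each D is tied to one specific H, so there is no single input that defeats every decider, only a matching adversary per decider.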


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Mon Oct 30 09:39:29 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/30/23 9:12 AM, olcott wrote:
    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*


    *A self-contradictory question is defined as*
      Any yes/no question that contradicts both yes/no answers.

    And it doesn't for the ACTUAL question

    When we ask "Does the computation described by the input Halt" by
    calling H(D,D), we are asking about this particular D(D).

    Since H(D,D) has a DEFINED value for the SPECIFIC H, we can find
    the behavior of the SPECIFIC D (designed for that H) when invoked as D(D).

    Thus, the actual question has a definite answer and your claim is wrong,
    and because you have been told this many times, it becomes a LIE as you
    should know better. Perhaps it shows you to be a TOTAL IDIOT.


    For every H in the set of all Turing Machines there exists a D
    that derives a self-contradictory question for this H in that:
    (a) When each H says that its D will halt, D loops
    (b) When each H says that its D will loop, D halts.



    Right, so if the H that this D was built on says that D will Halt, then
    the correct answer is Non-Halting, and if the H that this D was built on
    says that D will not Halt, then the correct answer is Halting.

    Since "The H that this D was built on" is a specific computation, it has
    a definite answer so we know which branch of the logic to use, and which
    answer is correct.

    Thus, there IS a correct answer.

    You don't seem to understand this fundamental fact about programs, that
    for a given program we have deterministic results from it.

    You can't talk about a SPECIFIC H giving both answers, as it just can't,
    it will only give one.

    When you start to say "for every H" we are introducing not a single
    question to answer, but a set of questions to answer, and the answers do
    not need to be the same.
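    The determinism point above can be sketched concretely. The code below is a hypothetical illustration (a toy H, not a real decider) of how fixing H fixes D's behavior, so the question about that specific D has exactly one correct answer:

```python
# Hypothetical sketch of the point above: once H is one FIXED program,
# D's behavior is fully determined, so "does D halt?" has exactly one
# correct answer -- it just is not the answer this particular H gives.

def H(program):
    """One specific, fixed guesser: it always answers False ('loops')."""
    return False

def D():
    if H(D):           # this fixed H never says "halts"...
        while True:    # ...so this branch is unreachable
            pass
    return "halted"    # H said "loops", therefore D halts

result = D()           # deterministic: D halts
correct_answer = True  # the question "does D halt?" has this answer
h_answer = H(D)        # ...but this H answers False, which is wrong
print(result, correct_answer, h_answer)  # -> halted True False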

    Each of those questions HAS an answer, so NONE of the questions were contradictory.

    It seems that the "self' that you are trying to describe is some
    "infinite set", but the actual question isn't about a "set of inputs"
    but about a specific input, so your argument is just another stupid
    category error.

    It does turn out that your "self-contradictory" question is sort of like
    one asked in the proof, and maybe that is what is getting you confused.
    The proof asks if we can make an H that could answer a machine of this
    form, and the contradiction that comes out when we assume we can, shows
    that we can't make an H to answer an input formed this way. This shows
    that the problem is uncomputable, not self-contradictory, as questions
    about computability do NOT imply that the function is, in fact, computable.

    You are just, AGAIN, showing your ignorance of the topic, and logic in
    general.

    You just continue to spout off your lies about your incorrect logic, and
    never answer the errors pointed out.

    I sort of suspect the issue is that you are too ignorant of the subject to understand the corrections being given, and to you they sound just like babbling, so you just act like your two-year-old self and just repeat
    your errors over and over because you are incapable of learning what
    things mean (perhaps in part due to self-inflicted deliberate ignorance).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Mon Oct 30 09:45:53 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/30/23 9:18 AM, olcott wrote:
    For every H in the set of all Turing Machines there exists a D
    that derives a self-contradictory question for this H in that
    (a) If this H says that its D will halt, D loops
    (b) If this H says that its D will loop, D halts.
    *Thus the question: Does D halt? is contradicted by some D for each H*



    So?

    For each D, and the question is ALWAYS about a specific D, there is a
    correct answer, so the question is not contradictory.

    The fact that for every H there is some input that it gets wrong just
    shows that no H is correct for every input, and thus no H is a correct
    decider and thus the problem is uncomputable.

    For the problem to be contradictory, there would need to be a SPECIFIC
    input that didn't have an answer to the question, but every specific
    input has a specific answer, it just isn't the answer that the specific
    H that this specific input was built on gives.

    You are just making the same category error of confusing sets of
    deciders and inputs for a single input.

    I guess to you a large set of somethings is the exact same thing as one
    of the somethings.

    That is the same as saying that everyone is the same person.

    That is the stupidity of your logic.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Mon Oct 30 09:47:11 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/30/23 9:29 AM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ (Correct(H(D)==true))==false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus
    isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one
    answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no actual
    limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.


    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

    *A self-contradictory question is defined as*
      Any yes/no question that contradicts both yes/no answers.

    For every H in the set of all Turing Machines there exists a D
    that derives a self-contradictory question for this H in that
    (a) If this H says that its D will halt, D loops
    (b) If this H that says its D will loop it halts.
    *Thus the question: Does D halt? is contradicted by some D for each H*




    So?

    For each D, and the question is ALWAYS about a specific D, there is a
    correct answer, so the question is not contradictory.

    The fact that for every H there is some input that it gets wrong, just
    shows that no H is correct for every input, and thus no H is a correct
    decider and thus the problem is uncomputable.

    For the problem to be contradictory, there would need to be a SPECIFIC
    input that didn't have an answer to the question, but every specific
    input has a specific answer; it just isn't the answer that the specific
    H that this specific input was built on gives.

    You are just making the same category error of confusing sets of
    deciders and inputs for a single input.

    I guess to you a large set of somethings is the exact same thing as one
    of the somethings.

    That is the same as saying that everyone is the same person.

    That is the stupidity of your logic.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to olcott on Mon Oct 30 11:57:04 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/30/2023 11:29 AM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus
    isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one
    answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no actual
    limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.


    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

    *A self-contradictory question is defined as*
      Any yes/no question that contradicts both yes/no answers.

    For every H in the set of all Turing Machines there exists a D
    that derives a self-contradictory question for this H in that
    (a) If this H says that its D will halt, D loops
    (b) If this H says that its D will loop, D halts.
    *Thus the question: Does D halt? is contradicted by some D for each H*

    *proving that this is literally true*
    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

  • From Richard Damon@21:1/5 to olcott on Mon Oct 30 10:05:14 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/30/23 9:57 AM, olcott wrote:
    On 10/30/2023 11:29 AM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus
    isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one
    answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no actual
    limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.


    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

    *A self-contradictory question is defined as*
       Any yes/no question that contradicts both yes/no answers.

    For every H in the set of all Turing Machines there exists a D
    that derives a self-contradictory question for this H in that
    (a) If this H says that its D will halt, D loops
    (b) If this H says that its D will loop, D halts.
    *Thus the question: Does D halt? is contradicted by some D for each H*

    And since for a specific D, based on a specific H, that H will only
    answer one of the ways, there IS a correct answer, as D has definite
    behavior.


    *proving that this is literally true*
    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*



    Nope, since each specific question HAS a correct answer, it shows that,
    by your own definition, it isn't "Self-Contradictory"

    You are just proving your stupidity by repeating this category error.

  • From olcott@21:1/5 to olcott on Mon Oct 30 12:23:06 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/30/2023 11:57 AM, olcott wrote:
    On 10/30/2023 11:29 AM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus
    isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one
    answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no actual
    limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.


    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

    *A self-contradictory question is defined as*
       Any yes/no question that contradicts both yes/no answers.

    For every H in the set of all Turing Machines there exists a D
    that derives a self-contradictory question for this H in that
    (a) If this H says that its D will halt, D loops
    (b) If this H says that its D will loop, D halts.
    *Thus the question: Does D halt? is contradicted by some D for each H*

    *proving that this is literally true*
    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*


    Nope, since each specific question HAS
    a correct answer, it shows that, by your
    own definition, it isn't "Self-Contradictory"

    *That is a deliberate strawman deception paraphrase*
    *That is a deliberate strawman deception paraphrase*
    *That is a deliberate strawman deception paraphrase*

    There does not exist a solution to the halting problem because
    *for every Turing Machine of the infinite set of all Turing machines*
    *for every Turing Machine of the infinite set of all Turing machines*
    *for every Turing Machine of the infinite set of all Turing machines*

    there exists a D that makes the question:
    Does your input halt?
    a self-contradictory thus incorrect question.


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

  • From Richard Damon@21:1/5 to olcott on Mon Oct 30 10:58:06 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/30/23 10:23 AM, olcott wrote:
    On 10/30/2023 11:57 AM, olcott wrote:
    On 10/30/2023 11:29 AM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus
    isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one
    answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no
    actual
    limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.


    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

    *A self-contradictory question is defined as*
       Any yes/no question that contradicts both yes/no answers.

    For every H in the set of all Turing Machines there exists a D
    that derives a self-contradictory question for this H in that
    (a) If this H says that its D will halt, D loops
    (b) If this H says that its D will loop, D halts.
    *Thus the question: Does D halt? is contradicted by some D for each H*

    *proving that this is literally true*
    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*


       Nope, since each specific question HAS
       a correct answer, it shows that, by your
       own definition, it isn't "Self-Contradictory"

    *That is a deliberate strawman deception paraphrase*
    *That is a deliberate strawman deception paraphrase*
    *That is a deliberate strawman deception paraphrase*

    Nope. Consider your statement that "Self-Contradictory questions have no
    correct answer", plus the fact that the question is about a SPECIFIC
    input being asked about (each input is thus a separate question to
    answer), plus the fact that for each of these inputs there is a correct
    answer.

    We thus have the logical argument

    Define A: The problem is self-contradictory
    Define B: The Problem has no correct answer.

    Your statement: A -> B

    Because every question does have a correct answer, we have ~B

    From the definition of Implication

    A -> B
    ~B

    Therefore ~A

    so the problem can not be self-contradictory.

    Don't you understand basic logic?
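
    The implication argument above (A → B, plus ¬B, therefore ¬A) is
    ordinary modus tollens; a quick exhaustive truth-table check:

```python
def implies(a, b):
    # Material implication: A -> B is false only when A is true and B is false.
    return (not a) or b

# Modus tollens: from (A -> B) and not-B, conclude not-A.
# Verify over every truth assignment of A and B.
for A in (False, True):
    for B in (False, True):
        if implies(A, B) and not B:
            assert not A, "modus tollens failed"
```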


    There does not exist a solution to the halting problem because
    *for every Turing Machine of the infinite set of all Turing machines*
    *for every Turing Machine of the infinite set of all Turing machines*
    *for every Turing Machine of the infinite set of all Turing machines*

    there exists a D that makes the question:
    Does your input halt?
    a self-contradictory thus incorrect question.



    Where does it say that a Turing Machine must exist to do it?

    That is the definition of Decidability/Computability of the Problem, not validity of the problem.

    The issue is that for every instance D, there IS a correct answer, and
    thus the problem is VALID.

    As shown above, your claim that the problem is self-contradictory has
    been proven FALSE, and thus your whole logic turns unsound.

    (As seems to be your whole mind).

  • From olcott@21:1/5 to olcott on Mon Oct 30 13:08:04 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/30/2023 12:23 PM, olcott wrote:
    On 10/30/2023 11:57 AM, olcott wrote:
    On 10/30/2023 11:29 AM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus
    isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one
    answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no
    actual
    limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.


    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

    *A self-contradictory question is defined as*
       Any yes/no question that contradicts both yes/no answers.

    For every H in the set of all Turing Machines there exists a D
    that derives a self-contradictory question for this H in that
    (a) If this H says that its D will halt, D loops
    (b) If this H says that its D will loop, D halts.
    *Thus the question: Does D halt? is contradicted by some D for each H*

    *proving that this is literally true*
    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*


       Nope, since each specific question HAS
       a correct answer, it shows that, by your
       own definition, it isn't "Self-Contradictory"

    *That is a deliberate strawman deception paraphrase*
    *That is a deliberate strawman deception paraphrase*
    *That is a deliberate strawman deception paraphrase*

    There does not exist a solution to the halting problem because
    *for every Turing Machine of the infinite set of all Turing machines*
    *for every Turing Machine of the infinite set of all Turing machines*
    *for every Turing Machine of the infinite set of all Turing machines*

    there exists a D that makes the question:
    Does your input halt?
    a self-contradictory thus incorrect question.

    Where does it say that a Turing
    Machine must exist to do it?

    *The only reason that no such Turing Machine exists is*

    For every H in the set of all Turing Machines there exists a D
    that derives a self-contradictory question for this H in that
    (a) If this H says that its D will halt, D loops
    (b) If this H says that its D will loop, D halts.
    *Thus the question: Does D halt? is contradicted by some D for each H*

    *therefore*

    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*


    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

  • From Richard Damon@21:1/5 to olcott on Mon Oct 30 12:00:35 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/30/23 11:08 AM, olcott wrote:
    On 10/30/2023 12:23 PM, olcott wrote:
    On 10/30/2023 11:57 AM, olcott wrote:
    On 10/30/2023 11:29 AM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus
    isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one
    answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no
    actual
    limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.


    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

    *A self-contradictory question is defined as*
       Any yes/no question that contradicts both yes/no answers.

    For every H in the set of all Turing Machines there exists a D
    that derives a self-contradictory question for this H in that
    (a) If this H says that its D will halt, D loops
    (b) If this H says that its D will loop, D halts.
    *Thus the question: Does D halt? is contradicted by some D for each H*

    *proving that this is literally true*
    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*


        Nope, since each specific question HAS
        a correct answer, it shows that, by your
        own definition, it isn't "Self-Contradictory"

    *That is a deliberate strawman deception paraphrase*
    *That is a deliberate strawman deception paraphrase*
    *That is a deliberate strawman deception paraphrase*

    There does not exist a solution to the halting problem because
    *for every Turing Machine of the infinite set of all Turing machines*
    *for every Turing Machine of the infinite set of all Turing machines*
    *for every Turing Machine of the infinite set of all Turing machines*

    there exists a D that makes the question:
    Does your input halt?
    a self-contradictory thus incorrect question.

       Where does it say that a Turing
    Machine must exist to do it?

    *The only reason that no such Turing Machine exists is*

    For every H in the set of all Turing Machines there exists a D
    that derives a self-contradictory question for this H in that
    (a) If this H says that its D will halt, D loops
    (b) If this H says that its D will loop, D halts.
    *Thus the question: Does D halt? is contradicted by some D for each H*

    *therefore*

    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*


    Nope. UNSOUND logic (from an unsound mind) as has been explained, and
    your refusal to understand it shows your stupidity, and ignorance of how
    logic works.

    Your repeating it shows that you have the maturity of a Two-year-old.

    The issue that you ignore is that you are conflating a set of questions
    with a question, and are basing your logic on a strawman, which by your
    own statements makes you a stinking lying bastard.

    The ACTUAL question is: does the SPECIFIC input to the decider
    describe a SPECIFIC computation that will Halt in finite time when
    performed? For ANY of these Ds you reference above, there IS an answer
    for that D, and thus the question has an answer and is valid and thus
    can not be "self-contradictory".

    Your "Strawman Question" of what can H return to get the right answer,
    is just that, a Strawman question, and in fact, an illogical question,
    as a given H can only return one answer for a given input, the one that
    its algorithm will generate. Its giving some other answer would be self-contradictory.

    The fact that you keep repeating this LIE just shows that you are either
    a total idiot incapable of understanding even the basics of logic, or
    that you are just a pathological liar that has gaslit himself into
    believing his own lies.

    Either way, you have disqualified yourself from being a reliable source
    to look to about questions of logic.

    Sorry, that is just the facts.

    Your refusal to actually try to deal with the errors pointed out just
    shows your utter lack of intelligence in this field, and your moral
    bankruptcy by your attempt to use a Big Lie to press your point.

  • From olcott@21:1/5 to olcott on Mon Oct 30 15:11:50 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/30/2023 1:08 PM, olcott wrote:
    On 10/30/2023 12:23 PM, olcott wrote:
    On 10/30/2023 11:57 AM, olcott wrote:
    On 10/30/2023 11:29 AM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus
    isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one
    answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no
    actual
    limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.


    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

    *A self-contradictory question is defined as*
       Any yes/no question that contradicts both yes/no answers.

    For every H in the set of all Turing Machines there exists a D
    that derives a self-contradictory question for this H in that
    (a) If this H says that its D will halt, D loops
    (b) If this H says that its D will loop, D halts.
    *Thus the question: Does D halt? is contradicted by some D for each H*

    *proving that this is literally true*
    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*


        Nope, since each specific question HAS
        a correct answer, it shows that, by your
        own definition, it isn't "Self-Contradictory"

    *That is a deliberate strawman deception paraphrase*
    *That is a deliberate strawman deception paraphrase*
    *That is a deliberate strawman deception paraphrase*

    There does not exist a solution to the halting problem because
    *for every Turing Machine of the infinite set of all Turing machines*
    *for every Turing Machine of the infinite set of all Turing machines*
    *for every Turing Machine of the infinite set of all Turing machines*

    there exists a D that makes the question:
    Does your input halt?
    a self-contradictory thus incorrect question.

       Where does it say that a Turing
       Machine must exsit to do it?

    *The only reason that no such Turing Machine exists is*

    For every H in the set of all Turing Machines there exists a D
    that derives a self-contradictory question for this H in that
    (a) If this H says that its D will halt, D loops
    (b) If this H says that its D will loop, D halts.
    *Thus the question: Does D halt? is contradicted by some D for each H*

    *therefore*

    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

    The issue that you ignore is that you are
    conflating a set of questions with a question,
    and are basing your logic on a strawman,

    It is not my mistake. Linguists understand that the
    context of who is asked a question changes the meaning
    of the question.

    This can easily be shown to apply to decision problem
    instances as follows:

    In that H.true and H.false are both the wrong answer
    when D calls H to do the opposite of whatever
    value H returns.

    Whereas exactly one of H1.true or H1.false is correct
    for this exact same D.

    This proves that the question: "Does your input halt?"
    has a different meaning across the H and H1 pairs.
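
    The H/H1 claim can be illustrated with a toy pair of verdict functions
    (placeholders, not real deciders): D is built against H, H answers it
    wrongly, while a different H1 happens to answer the very same D
    correctly:

```python
def H(prog):
    """The decider D was built against: it claims every program halts."""
    return True

def H1(prog):
    """A different decider: it claims every program loops."""
    return False

def D():
    """Does the opposite of what H predicts about D."""
    if H(D):
        while True:    # H said "halts", so D actually loops forever
            pass

# D's actual behavior is fixed: H(D) is True, so D would loop forever.
# Hence H's verdict about D is wrong, while H1's verdict about the
# very same D is right (we only inspect the verdicts; D itself is
# never called, since it would not return):
assert H(D) is True     # H: "D halts"  -- incorrect for this D
assert H1(D) is False   # H1: "D loops" -- correct for this D
```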




    --
    Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
    hits a target no one else can see." Arthur Schopenhauer

  • From olcott@21:1/5 to olcott on Mon Oct 30 17:10:15 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/30/2023 3:11 PM, olcott wrote:
    On 10/30/2023 1:08 PM, olcott wrote:
    On 10/30/2023 12:23 PM, olcott wrote:
    On 10/30/2023 11:57 AM, olcott wrote:
    On 10/30/2023 11:29 AM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another computer
    program D will do when D has been programmed to do the opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt

    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false

    *No one pays attention to what this impossibility means*
    The halting problem is defined as an unsatisfiable specification thus
    isomorphic to a question that has been defined to have no correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the
    question. In this case we know to blame the question and not the one
    answering it.

    When we understand that there are some inputs to every TM H that
    contradict both Boolean return values that H could return then the
    question: Does your input halt? is essentially a self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question places no
    actual
    limit on anyone or anything.

    This insight opens up an alternative treatment of these pathological
    inputs the same way that ZFC handled Russell's Paradox.


    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

    *A self-contradictory question is defined as*
       Any yes/no question that contradicts both yes/no answers.

    For every H in the set of all Turing Machines there exists a D
    that derives a self-contradictory question for this H in that
    (a) If this H says that its D will halt, D loops
    (b) If this H says that its D will loop, D halts.
    *Thus the question: Does D halt? is contradicted by some D for each H*
    *proving that this is literally true*
    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*


        Nope, since each specific question HAS
        a correct answer, it shows that, by your
        own definition, it isn't "Self-Contradictory"

    *That is a deliberate strawman deception paraphrase*
    *That is a deliberate strawman deception paraphrase*
    *That is a deliberate strawman deception paraphrase*

    There does not exist a solution to the halting problem because
    *for every Turing Machine of the infinite set of all Turing machines*
    *for every Turing Machine of the infinite set of all Turing machines*
    *for every Turing Machine of the infinite set of all Turing machines*

    there exists a D that makes the question:
    Does your input halt?
    a self-contradictory thus incorrect question.

        Where does it say that a Turing
        Machine must exsit to do it?

    *The only reason that no such Turing Machine exists is*

    For every H in the set of all Turing Machines there exists a D
    that derives a self-contradictory question for this H in that
    (a) If this H says that its D will halt, D loops
    (b) If this H says that its D will loop, D halts.
    *Thus the question: Does D halt? is contradicted by some D for each H*

    *therefore*

    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

       The issue that you ignore is that you are
       conflating a set of questions with a question,
       and are basing your logic on a strawman.

    It is not my mistake. Linguists understand that the
    context of who is asked a question changes the meaning
    of the question.

    This can easily be shown to apply to decision problem
    instances as follows:

    Both H.true and H.false are the wrong answer when
    D calls H to do the opposite of whatever value
    H returns.

    Whereas exactly one of H1.true or H1.false is correct
    for this exact same D.

    This proves that the question: "Does your input halt?"
    has a different meaning across the H and H1 pairs.
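The H/H1 claim above can be modeled concretely. In this hedged sketch H always answers False, D is built against that specific H, and H1 is a different decider that D never consults; the "loops forever" string is a symbolic stand-in for non-termination, and all names follow the thread's notation rather than any formal definition.

```python
def H(program):
    # the decider D is built against; it answers False ("does not halt")
    return False

def H1(program):
    # a different decider that D never consults; it answers True ("halts")
    return True

def D():
    # D consults H specifically and does the opposite of H's verdict
    if H(D):
        return "loops forever"  # symbolic stand-in for non-termination
    return "halts"

# D's actual behavior: H(D) is False, so D halts.
assert D() == "halts"
assert H(D) is False   # H's verdict about its own D is wrong
assert H1(D) is True   # H1, never consulted by D, answers correctly
```

Exactly one of H1's two possible answers (here: True) is correct for this D, while the answer the embedded H actually gives is wrong — which is the asymmetry both sides are arguing about.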

    It *CAN* if the question asks something about
    the person being questioned.

    But it *CAN'T* if the question doesn't in any
    way refer to who you ask.

    D calls H thus D DOES refer to H
    D does not call H1 therefore D does not refer to H1

  • From Richard Damon@21:1/5 to olcott on Mon Oct 30 14:53:56 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/30/23 1:11 PM, olcott wrote:
       The issue that you ignore is that you are
       conflating a set of questions with a question,
       and are basing your logic on a strawman.

    It is not my mistake. Linguists understand that the
    context of who is asked a question changes the meaning
    of the question.

    It *CAN* if the question asks something about the person being questioned.

    But it *CAN'T* if the question doesn't in any way refer to who you ask.

    If you ask "what is 1 + 2?", it doesn't matter who you ask; the answer is
    always 3.

    If you ask "what is the third planet around the star Sol?", the answer is
    always Earth.

    If you ask "if the specific program D, based on the specific program H
    that when invoked as H(D,D) will return false, when invoked as D(D) will
    Halt?", the answer is always True.

    If you ask "if the specific program D, based on the specific program H
    that when invoked as H(D,D) will return True, when invoked as D(D) will
    Halt?", the answer is always False.

    Thus, since any instance of the halting problem in your set is one of
    those last two questions, there is always a correct answer to the
    question, so THAT question is not "Contradictory", as "Contradictory
    Questions" never have a correct answer. (from your own definition)



    This can easily be shown to apply to decision problem
    instances as follows:

    Both H.true and H.false are the wrong answer when
    D calls H to do the opposite of whatever value
    H returns.

    Nope, because H CAN only go to one of H.false or H.true based on its programming.

    THAT being the wrong answer doesn't make the problem invalid.

    You are just DECEPTIVELY assuming a property of H that just doesn't exist.

    Note, An H1 that goes to H1.true when given D1 is a different program
    AND a different input than the H2 that goes to H2.false when given D2

    So, you can't treat this as the same question to try to show that you
    have a contradiction.


    Whereas exactly one of H1.true or H1.false is correct
    for this exact same D.

    Yes, ONE of the answers would have been correct for the D given.

    It will be the one that the H that D was built on didn't go to.

    THAT is valid, and results in a valid question.



    This proves that the question: "Does your input halt?"
    has a different meaning across the H and H1 pairs.


    Nope. Remember, to compare the questions "Does your input Halt?" you
    need to give them the exact same input.

    A given input D is built on one SPECIFIC H, not whatever H we are giving
    the input to.

    Remember also, the ACTUAL question is: "Does the machine represented by
    your input Halt?" The D in the input has specific behavior, and thus the
    actual answer to "does D(D) Halt?" is defined, and the same for all
    deciders given this exact same D.

    Note, this is why Linz uses the ^ notation. Given a decider by whatever
    name, we can make the ^ program from it, thus H1 is given H1^, H2 is
    given H2^ and thus it is clear that each different decider has a
    different input.
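The ^ construction described above can be sketched as follows; `make_hat`, the toy deciders H1 and H2, and the symbolic "loops"/"halts" return values are illustrative assumptions invented here, not Linz's actual formal machinery.

```python
def make_hat(h):
    """Linz-style ^ construction: from a decider h, build its specific foil h^."""
    def hat():
        return "loops" if h(hat) else "halts"   # do the opposite of h's verdict
    return hat

def H1(program): return False   # toy decider: always answers "does not halt"
def H2(program): return True    # toy decider: always answers "halts"

H1_hat = make_hat(H1)   # H1 is given H1^
H2_hat = make_hat(H2)   # H2 is given H2^

assert H1_hat is not H2_hat    # different deciders get different inputs
assert H1_hat() == "halts"     # H1 answered False about H1^, which halts: wrong
assert H2_hat() == "loops"     # H2 answered True about H2^, which "loops": wrong
assert H2(H1_hat) is True      # yet H2 answers correctly about the other input H1^
```

Each decider fails only on the foil built from itself, which is why treating H1^ and H2^ as "the same input" conflates distinct questions.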

    You are just being deceptive trying to call all the different inputs by
    the same name. That or you are just too dumb to understand the error in
    doing so.

    I will challenge you to write an actual program that meets the
    requirements of a computation that can change its behavior based on who
    is deciding on it.

    Note, with your example "H/D" program, the H that D calls is part of the
    definition of D, so when you give it to a decider, you need to give that H to
    said decider.

    You are just proving your utter stupidity by repeating factually
    incorrect claims with no backing other than the clearly flawed reasoning.

    If you want to show my reasoning is incorrect, quote the message and
    show the actual logical error in the statement (not just that you think
    it is wrong)

  • From olcott@21:1/5 to olcott on Mon Oct 30 17:46:52 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/30/2023 5:10 PM, olcott wrote:
    D calls H thus D DOES refer to H
    D does not call H1 therefore D does not refer to H1


    The QUESTION doesn't refer to the person
    being asked?

    That D calls H doesn't REFER to the asker,
    but to a specific machine.

    For the H/D pair D does refer to the specific
    machine being asked: Does your input halt?
    D knows about and references H.

    For the H1/D pair D does not refer to the specific
    machine being asked: Does your input halt?
    D does not know about or reference H1.

    If these things were not extremely difficult to
    understand, they would have been addressed before
    publication in 1936.




  • From Richard Damon@21:1/5 to olcott on Mon Oct 30 15:27:01 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/30/23 3:10 PM, olcott wrote:
    D calls H thus D DOES refer to H
    D does not call H1 therefore D does not refer to H1


    The QUESTION doesn't refer to the person being asked?

    That D calls H doesn't REFER to the asker, but to a specific machine.

    Thus, nothing in the question refers to the asker.

    Does "What is Joe Blow's age?" depend on who you are asking? Even if you
    are asking Joe Blow?

    NO.

    So, you are just continuing to prove your stupidity.

  • From Richard Damon@21:1/5 to olcott on Mon Oct 30 15:58:30 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/30/23 3:46 PM, olcott wrote:
    For the H/D pair D does refer to the specific
    machine being asked: Does your input halt?
    D knows about and references H.

    Nope. The question "does this input representing D(D) Halt?" does NOT refer
    to any particular decider, just whatever one it is given to.


    For the H1/D pair D does not refer to the specific
    machine being asked: Does your input halt?
    D does not know about or reference H1.

    If these things were not extremely difficult to
    understand they would have been addressed before
    publication in 1936.


    They are only "extremely difficult to understand" because they are FALSE statements.

    You are just too stupid to understand that the Halting question,
    "Does the computation represented by the input Halt?", doesn't have
    ANYTHING in it that refers to the machine doing the deciding, and the
    input being represented also doesn't refer to the machine doing the
    deciding, but only to a particular decider that it is designed to foil.

    Just because we give it to that one, doesn't make it "refer" to the one
    being asked.

    You are just FAILING basic logic theory, because you are showing
    yourself to be a total idiot.

    Please find references for your "claims" and definitions from reliable sources.

  • From Richard Damon@21:1/5 to olcott on Mon Oct 30 16:44:36 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/30/23 4:17 PM, olcott wrote:
    For the H/D pair D does refer to the specific
    machine being asked: Does your input halt?
    D knows about and references H.

      Nope. The question "Does this input, representing
      D(D), halt?" does NOT refer to any particular decider,
      just whatever one it is given to.

    *You can ignore that D calls H; nonetheless, when D*
    *calls H this does mean that D <is> referencing H*

    The only way that I can tell that I am proving my point
    is that rebuttals from people that are stuck in rebuttal
    mode become increasingly nonsensical.


    CALLING H doesn't REFER to the decider deciding it.

    Note the key difference: a Turing machine can have a copy of the code for
    another machine, but it doesn't "refer" to it, as any changes to that
    machine after making the first machine don't change it.

    That is the key point you miss.

    D has the code for the H that you are claiming gives the right value;
    when you try to vary it to prove something, that DOESN'T change D, as D
    had a copy of the original code of H, not a "reference" to H.

    Also, D has a copy of the code of a very specific H, and if you give D
    to a different decider, that doesn't change what D does.

    Thus the behavior of D(D) is NOT dependent on the machine that is
    deciding it, and thus, the answer to the question "Does the machine
    represented by the input halt?" can't change based on the decider.
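The copy-not-reference point above can be sketched directly: D embeds its own copy of H's description, the way a Turing-machine description embeds another machine's table, so mutating the original afterwards changes nothing. (Python sketch; `h_table` and the "verdict" field are illustrative stand-ins for H's code.)

```python
import copy

h_table = {"verdict": True}            # stand-in for H's "code"

def make_d(table):
    table = copy.deepcopy(table)       # D embeds its own private copy
    def d(x):
        return "loop" if table["verdict"] else "halt"
    return d

d = make_d(h_table)
h_table["verdict"] = False             # "varying H" after D was built
assert d(d) == "loop"                  # d still uses the original copy
```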

    That is like saying that for most machines 1+2 = 3, but for some, it
    might be correct to say 1+2 = 4.

    YOU FAIL.

    You are showing you just don't understand basic English even, let alone
    the technical language of Computability Theory or Logic.

    You are just proving you are too Stupid to understand what is being
    talked about.

  • From olcott@21:1/5 to olcott on Mon Oct 30 18:17:55 2023
    XPost: sci.math, sci.logic, comp.theory

    The only way that I can tell that I am proving my point
    is that rebuttals from people that are stuck in rebuttal
    mode become increasingly nonsensical.

  • From olcott@21:1/5 to olcott on Mon Oct 30 19:04:38 2023
    XPost: sci.math, sci.logic, comp.theory

    "CALLING H doesn't REFER to the decider deciding it."

    Sure it does: with H(D,D), D is calling the decider deciding it.

  • From Richard Damon@21:1/5 to olcott on Mon Oct 30 17:29:50 2023
    XPost: sci.math, sci.logic, comp.theory

       "CALLING H doesn't REFER to the decider deciding it."

    Sure it does: with H(D,D), D is calling the decider deciding it.


    Nope, D is calling the original H, no matter WHAT decider is deciding it.

    Coincidence is NOT "Reference"

    The computation of D(D) can be given to ANY decider, and it always will
    use the H that it was originally defined for.

    So, it does not "Reference" the decider deciding it. It uses the decider
    it was defined for.

    So, it seems you still don't understand what it means to "Refer to the
    Decider Deciding it".

    Just because we happen to have the same decider there doesn't mean it
    was referenced.

    It seems you just don't understand what a reference is.

    For example

    The code

    x = 1;
    y = 1;

    does not leave y with a "reference" to x, but you seem to think it does.
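This contrast can be shown in runnable form; the list case is added to show what a genuine reference would look like, by way of comparison:

```python
# Coincidence is not reference: y merely happens to equal x.
x = 1
y = 1
x = 2
assert y == 1          # changing x leaves y untouched

# An actual reference: two names bound to the same mutable object.
a = [1]
b = a                  # b refers to the same list as a
a[0] = 2
assert b == [2]        # a change through a is visible through b
```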

  • From olcott@21:1/5 to olcott on Mon Oct 30 19:39:51 2023
    XPost: sci.math, sci.logic, comp.theory

       "CALLING H doesn't REFER to the decider deciding it."

    Sure it does: with H(D,D), D is calling the decider deciding it.


    Nope, D is calling the original H, no matter
    WHAT decider is deciding it.

    Duh? Calling the original decider when
    the original decider is deciding it.

    Because the halting problem and Tarski Undefinability
    (attempting to formalize the notion of truth itself)
    are different aspects of the same problem:

    My same ideas can be used to automatically divide
    truth from disinformation so that climate change
    denial does not cause humans to become extinct.

    Are you going to perpetually play head games?

  • From Richard Damon@21:1/5 to olcott on Mon Oct 30 17:58:30 2023
    XPost: sci.math, sci.logic, comp.theory

    On 10/30/23 5:39 PM, olcott wrote:
    On 10/30/2023 7:04 PM, olcott wrote:
    On 10/30/2023 6:17 PM, olcott wrote:
    On 10/30/2023 5:46 PM, olcott wrote:
    On 10/30/2023 5:10 PM, olcott wrote:
    On 10/30/2023 3:11 PM, olcott wrote:
    On 10/30/2023 1:08 PM, olcott wrote:
    On 10/30/2023 12:23 PM, olcott wrote:
    On 10/30/2023 11:57 AM, olcott wrote:
    On 10/30/2023 11:29 AM, olcott wrote:
    On 10/29/2023 12:30 PM, olcott wrote:
    *Everyone agrees that this is impossible*
    No computer program H can correctly predict what another >>>>>>>>>>> computer
    program D will do when D has been programmed to do the
    opposite of
    whatever H says.

    H(D) is functional notation that specifies the return value >>>>>>>>>>> from H(D)
    Correct(H(D)==false) means that H(D) is correct that D does >>>>>>>>>>> not halt
    Correct(H(D)==true) means that H(D) is correct that D does halt >>>>>>>>>>>
    For all H ∈ TM there exists input D such that
    (Correct(H(D)==false) ∨ (Correct(H(D)==true))==false

    *No one pays attention to what this impossibility means* >>>>>>>>>>> The halting problem is defined as an unsatisfiable
    specification thus
    isomorphic to a question that has been defined to have no >>>>>>>>>>> correct
    answer.

    What time is it (yes or no)?
    has no correct answer because there is something wrong with the >>>>>>>>>>> question. In this case we know to blame the question and not >>>>>>>>>>> the one
    answering it.

    When we understand that there are some inputs to every TM H that >>>>>>>>>>> contradict both Boolean return values that H could return >>>>>>>>>>> then the
    question: Does your input halt? is essentially a
    self-contradictory
    (thus incorrect) question in these cases.

    The inability to correctly answer an incorrect question
    places no actual
    limit on anyone or anything.

    This insight opens up an alternative treatment of these
    pathological
    inputs the same way that ZFC handled Russell's Paradox.


    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

    *A self-contradictory question is defined as*
       Any yes/no question that contradicts both yes/no answers. >>>>>>>>>>
    For every H in the set of all Turing Machines there exists a D >>>>>>>>>> that derives a self-contradictory question for this H in that >>>>>>>>>> (a) If this H says that its D will halt, D loops
    (b) If this H that says its D will loop it halts.
    *Thus the question: Does D halt? is contradicted by some D for >>>>>>>>>> each H*

    *proving that this is literally true*
    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*


        Nope, since each specific question HAS
        a correct answer, it shows that, by your
        own definition, it isn't "Self-Contradictory"

    *That is a deliberate strawman deception paraphrase*
    *That is a deliberate strawman deception paraphrase*
    *That is a deliberate strawman deception paraphrase*

    There does not exist a solution to the halting problem because >>>>>>>> *for every Turing Machine of the infinite set of all Turing
    machines*
    *for every Turing Machine of the infinite set of all Turing
    machines*
    *for every Turing Machine of the infinite set of all Turing
    machines*

    there exists a D that makes the question:
    Does your input halt?
    a self-contradictory thus incorrect question.

        Where does it say that a Turing
        Machine must exsit to do it?

    *The only reason that no such Turing Machine exists is*

    For every H in the set of all Turing Machines there exists a D
    that derives a self-contradictory question for this H in that
    (a) If this H says that its D will halt, D loops
    (b) If this H that says its D will loop it halts.
    *Thus the question: Does D halt? is contradicted by some D for
    each H*

    *therefore*

    *The halting problem proofs merely show that*
    *self-contradictory questions have no correct answer*

        The issue that you ignore is that you are
        confalting a set of questions with a question,
        and are baseing your logic on a strawman,

    It is not my mistake. Linguists understand that the
    context of who is asked a question changes the meaning
    of the question.

    This can easily be shown to apply to decision problem
    instances as follows:

    Both H.true and H.false are the wrong answer when
    D calls H to do the opposite of whatever value
    either H returns.

    Whereas exactly one of H1.true or H1.false is correct
    for this exact same D.

    This proves that the question: "Does your input halt?"
    has a different meaning across the H and H1 pairs.
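
    The H/H1 claim above can also be sketched in Python (all names
    hypothetical): the same D, built to contradict one specific decider H,
    judged by a second decider H1 that played no part in D's construction.

    ```python
    # Hypothetical sketch: one D, two deciders. D contradicts only the
    # decider it was built against.

    def make_D(H):
        def D():
            if H(D):          # H says "halts": D loops to contradict it
                while True:
                    pass
        return D

    def H(prog):              # the decider D was built against: always "halts"
        return True

    def H1(prog):             # a different decider: always "does not halt"
        return False

    D = make_D(H)
    # By construction D loops forever (H answers True, so D enters the
    # loop), so we do not call D() here. H's verdict is contradicted by
    # the D built from it, while H1's verdict matches D's actual behavior.
    print(H(D), H1(D))        # True False
    ```

    This illustrates only the mechanics under dispute: D's behavior is
    fixed by the decider it embeds, not by the decider currently judging it.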

        It *CAN* if the question asks something about
        the person being questioned.

        But it *CAN'T* if the question doesn't in any
        way refer to who you ask.

    D calls H thus D DOES refer to H
    D does not call H1 therefore D does not refer to H1


        The QUESTION doesn't refer to the person
        being asked?

        That D calls H doesn't REFER to the asker,
        but to a specific machine.

    For the H/D pair D does refer to the specific
    machine being asked: Does your input halt?
    D knows about and references H.

       Nope. The question "Does this input, representing D(D), halt?"
       does NOT refer to any particular decider, just whatever one
       it is given to.

    *You can ignore that D calls H; nonetheless, when D*
    *calls H, this does mean that D <is> referencing H*

    The only way that I can tell that I am proving my point
    is that rebuttals from people that are stuck in rebuttal
    mode become increasingly nonsensical.


        "CALLING H doesn't REFER to the decider deciding it."

    Sure it does: with H(D,D), D is calling the decider deciding it.


       Nope, D is calling the original H, no matter
       WHAT decider is deciding it.

    Duh? calling the original decider when
    the original decider is deciding it

    Which doesn't mean the problem has a REFERENCE, because the code it
    uses doesn't change.

    I guess you DO think that the following code makes y a reference to x:

    x = 1;
    y = 1;

    Which proves your stupidity.


    Because the halting problem and Tarski Undefinability
    (attempting to formalize the notion of truth itself)
    are different aspects of the same problem:

    So? Where does "Because" apply here?


    My same ideas can be used to automatically divide
    truth from disinformation so that climate change
    denial does not cause humans to become extinct.


    But clearly they can't, as you are spreading disinformation, as has
    been proven.


    Are you going to perpetually play head games?


    No, I will continue to point out actual Truth.

    YOU are the one playing Head Games.

    To be a "Reference" it needs to always end up using the thing
    referenced, which isn't what happens here.

    You are just showing how IGNORANT you are of basic facts.

    How can you possibly think you can determine what is truth when you
    continue to base your arguments on LIES?

    Or, is your intent to get rid of "Disinformation" by just saying it
    doesn't exist, because anything we want to be true we can make true?

    That seems to be the basis of your logic.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Don Stockbauer@21:1/5 to Richard Damon on Tue Oct 31 05:57:49 2023
    On Monday, October 30, 2023 at 7:58:35 PM UTC-5, Richard Damon wrote:
    Instead of arguing like this, what you two could do to put an end to
    this infinite argument: there's a Scandinavian form of fighting where
    one man's right arm is tied to the other man's left arm, facing each
    other, and they're given small knives to use to stab each other to
    death.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)