• Re: When the Linz Ĥ is required to report on its own behavior

    From Mikko@21:1/5 to olcott on Thu Feb 8 16:33:32 2024
    On 2024-02-08 14:14:55 +0000, olcott said:

    When Ĥ is applied to ⟨Ĥ⟩
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞ // wrong answer
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn // wrong answer

    The above pair of templates specify every encoding of Ĥ that can
    possibly exist, an infinite set of Turing machines such that each one
    gets the wrong answer when it is required to report its own halt status.
    https://www.liarparadox.org/Linz_Proof.pdf

    This proves that the halting problem counter-example
    <is> isomorphic to the Liar Paradox.

    Ĥ is not required to report anything. Linz only specifies how Ĥ is constructed but not what it should do.
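
    For readers following the notation, here is a minimal C sketch of the
    construction being discussed (all names hypothetical; halts() stands for
    the assumed decider H, which cannot actually exist, so the stub below
    just returns a fixed guess so that the sketch compiles and runs):

    #include <stdio.h>

    typedef const char *Desc;   /* stands in for a machine description <M> */

    /* Stub for the assumed halt decider H: returns 1 if the machine
     * described by p halts on input i, else 0. Hypothetical: any actual
     * implementation is forced wrong on the diagonal case below. */
    static int halts(Desc p, Desc i) {
        (void)p; (void)i;
        return 0;               /* fixed guess, for illustration only */
    }

    /* Linz's H_hat: duplicate the input, run the embedded copy of H
     * on (p, p), then do the opposite of whatever H predicted. */
    static void H_hat(Desc p) {
        if (halts(p, p))
            for (;;) ;          /* H said "halts": loop forever (qy) */
        /* H said "does not halt": halt immediately (qn) */
    }

    int main(void) {
        /* Apply H_hat to its own description: the stub answers 0
         * ("does not halt"), so H_hat halts -- i.e., the stub's answer
         * about H_hat(<H_hat>) was wrong; answering 1 fails the other way. */
        H_hat("<H_hat>");
        puts("H_hat(<H_hat>) halted, contradicting halts() == 0");
        return 0;
    }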

    --
    Mikko

  • From Mikko@21:1/5 to olcott on Thu Feb 8 17:11:59 2024
    On 2024-02-08 14:39:05 +0000, olcott said:

    On 2/8/2024 8:33 AM, Mikko wrote:
    On 2024-02-08 14:14:55 +0000, olcott said:

    When Ĥ is applied to ⟨Ĥ⟩
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞ // wrong answer
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn   // wrong answer

    The above pair of templates specify every encoding of Ĥ that can
    possibly exist, an infinite set of Turing machines such that each one
    gets the wrong answer when it is required to report its own halt status.
    https://www.liarparadox.org/Linz_Proof.pdf

    This proves that the halting problem counter-example
    <is> isomorphic to the Liar Paradox.

    Ĥ is not required to report anything. Linz only specifies how Ĥ is
    constructed but not what it should do.


    *Clearly you didn't read what he said on the link*

    The point is not what he said but what he didn't say. He didn't
    say what Ĥ is required to do.

    --
    Mikko

  • From Mikko@21:1/5 to olcott on Fri Feb 9 09:07:17 2024
    On 2024-02-08 15:15:54 +0000, olcott said:

    On 2/8/2024 9:11 AM, Mikko wrote:
    On 2024-02-08 14:39:05 +0000, olcott said:

    On 2/8/2024 8:33 AM, Mikko wrote:
    On 2024-02-08 14:14:55 +0000, olcott said:

    When Ĥ is applied to ⟨Ĥ⟩
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞ // wrong answer
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn   // wrong answer

    The above pair of templates specify every encoding of Ĥ that can
    possibly exist, an infinite set of Turing machines such that each one
    gets the wrong answer when it is required to report its own halt status.
    https://www.liarparadox.org/Linz_Proof.pdf

    This proves that the halting problem counter-example
    <is> isomorphic to the Liar Paradox.

    Ĥ is not required to report anything. Linz only specifies how Ĥ is
    constructed but not what it should do.


    *Clearly you didn't read what he said on the link*

    The point is not what he said but what he didn't say. He didn't
    say what Ĥ is required to do.


    He did say what Ĥ is required to do
    and you simply didn't read what he said.

    No, he didn't. Otherwise you would show where in that text the word
    "require" (or something that means the same) appears. But you don't,
    because he didn't say.

    --
    Mikko

  • From Mikko@21:1/5 to olcott on Sat Feb 10 10:30:06 2024
    On 2024-02-09 14:37:26 +0000, olcott said:

    On 2/9/2024 1:07 AM, Mikko wrote:
    On 2024-02-08 15:15:54 +0000, olcott said:

    On 2/8/2024 9:11 AM, Mikko wrote:
    On 2024-02-08 14:39:05 +0000, olcott said:

    On 2/8/2024 8:33 AM, Mikko wrote:
    On 2024-02-08 14:14:55 +0000, olcott said:

    When Ĥ is applied to ⟨Ĥ⟩
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞ // wrong answer
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn   // wrong answer

    The above pair of templates specify every encoding of Ĥ that can
    possibly exist, an infinite set of Turing machines such that each one
    gets the wrong answer when it is required to report its own halt status.
    https://www.liarparadox.org/Linz_Proof.pdf

    This proves that the halting problem counter-example
    <is> isomorphic to the Liar Paradox.

    Ĥ is not required to report anything. Linz only specifies how Ĥ is
    constructed but not what it should do.


    *Clearly you didn't read what he said on the link*

    The point is not what he said but what he didn't say. He didn't
    say what Ĥ is required to do.


    He did say what Ĥ is required to do
    and you simply didn't read what he said.

    No, he didn't. Otherwise you would show where in that text the word
    "require" (or something that means the same) appears. But you don't,
    because he didn't say.


    We can therefore legitimately ask what would happen if Ĥ is
    applied to ŵ. (middle of page 3)
    https://www.liarparadox.org/Linz_Proof.pdf

    In my notational conventions it would be: Ĥ applied to ⟨Ĥ⟩.

    When Ĥ is to report on its own behavior both Ĥ.qy and Ĥ.qn are the
    wrong answer for every possible Ĥ applied to ⟨Ĥ⟩.

    Linz says nothing about "Ĥ is to report". If you interpreted Ĥ(⟨Ĥ⟩)
    halting in state Ĥ.qn to mean that Ĥ(⟨Ĥ⟩) does not halt, that would
    obviously be wrong, but there is no reason to interpret it that way.
    Ĥ.qy cannot be an answer because Ĥ never halts in that state. Anyway,
    whether Ĥ halts in Ĥ.qn, in some other state, or not at all, it does
    not violate any requirement specified by Linz.

    When every possible Ĥ of the infinite set of Ĥ is applied to
    its own machine description: ⟨Ĥ⟩ then Ĥ is intentionally defined
    to be self-contradictory.

    The term "self-contradictory" is not really applicable to Ĥ
    because Ĥ is a Turing machine and a Turing machine cannot have
    a truth value.

    --
    Mikko

  • From Mikko@21:1/5 to olcott on Sun Feb 11 11:00:10 2024
    On 2024-02-10 16:18:05 +0000, olcott said:

    On 2/10/2024 9:29 AM, Richard Damon wrote:
    On 2/10/24 10:06 AM, olcott wrote:
    On 2/10/2024 7:35 AM, Richard Damon wrote:
    On 2/10/24 12:33 AM, olcott wrote:
    On 2/9/2024 11:15 PM, Richard Damon wrote:
    On 2/9/24 11:24 PM, olcott wrote:
    On 2/9/2024 6:09 PM, Richard Damon wrote:
    On 2/9/24 9:50 AM, olcott wrote:
    On 2/9/2024 6:05 AM, Richard Damon wrote:
    On 2/9/24 12:22 AM, olcott wrote:
    On 2/8/2024 9:44 PM, Richard Damon wrote:
    On 2/8/24 10:34 PM, olcott wrote:
    On 2/8/2024 8:40 PM, Richard Damon wrote:
    On 2/8/24 7:48 PM, olcott wrote:
    On 2/8/2024 5:50 PM, Richard Damon wrote:
    On 2/8/24 1:28 PM, olcott wrote:
    On 2/8/2024 12:15 PM, immibis wrote:
    On 8/02/24 19:09, olcott wrote:
    On 2/8/2024 10:32 AM, immibis wrote:
    On 8/02/24 15:14, olcott wrote:
    When Ĥ is applied to ⟨Ĥ⟩
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞ // wrong answer
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn   // wrong answer

    The above pair of templates specify every encoding of Ĥ that can
    possibly exist, an infinite set of Turing machines such that each one
    gets the wrong answer when it is required to report its own halt status.

    This proves that it is impossible for any Ĥ to give the right answer
    on all inputs.

    It proves that asking Ĥ whether it halts or not is an incorrect
    question where both yes and no are the wrong answer.
    No, it proves the right answer is the opposite of what it says.


    *This seems to be over your head*
    A self-contradictory question never has any correct answer.

    So the Halting Question, does the computation described by the input
    Halt? isn't a self-contradictory question, as it always has a correct
    answer, the opposite of what H gives (if it gives one).
    Thus, your premise is false.

    Maybe you need to carefully reread this fifty to sixty times before you
    get it? (it took me twenty years to get it this simple)
    When Ĥ is to report on its own behavior both Ĥ.qy and Ĥ.qn are the
    wrong answer for every possible Ĥ applied to ⟨Ĥ⟩.

    But Ĥ doesn't need to report on anything, the copy of H that is in it does.


    Do you understand that every possible element of an infinite set is
    more than one element?


    Right, so the set isn't a specific input, so not the thing that the
    Halting question is about.

    The Halting problem is about making a decider that answers the Halting
    Question, which asks the decider about the SPECIFIC COMPUTATION (a
    specific program/data) that the input describes.

    Not about "sets" of Decider / Inputs


    When an infinite set of decider/input pairs has no correct
    answer then the question is rigged.


    Except that EVERY element of that set had a correct answer, just not
    the one the decider gave.

    When Ĥ applied to ⟨Ĥ⟩ has been intentionally defined to contradict
    every value that each embedded_H returns for the infinite set of
    every Ĥ that can possibly exist then each and every element of
    these Ĥ / ⟨Ĥ⟩ pairs is isomorphic to a self-contradictory question.

    No, YOUR POOP question is self-contradictory.

    The Halting Question is not, as EVERY element of that set you talk
    about has a correct answer to it, as every specific input describes a
    Halting Computation or not.

    When Ĥ is applied to ⟨Ĥ⟩
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    When every possible Ĥ of the infinite set of Ĥ is applied to
    its own machine description: ⟨Ĥ⟩ then Ĥ is intentionally defined
    to be self-contradictory.

    So?

    Note, every possible Ĥ means every possible H, so all H are wrong.
    The issue is not that the most powerful model of computation is
    too weak. The issue is that an input was intentionally defined
    to be self-contradictory.

    But it shows that the simple problem, for which we have good reasons
    for wanting an answer, cannot be computed by this most powerful model
    of computation.


    Ĥ applied to ⟨Ĥ⟩ is asking Ĥ:
    Do you halt on your own Turing Machine Description?

    No, it is asking if the computation described by the input will halt when run.


    Linz and I have been referring to the actual computation of
    Ĥ applied to ⟨Ĥ⟩ with no simulators involved.

    Right, and your Ĥ (Ĥ) will Halt, since your H (Ĥ) (Ĥ) goes to qn
    to say that the computation its input (Ĥ) (Ĥ) represents, that is
    Ĥ (Ĥ), will not halt.

    Thus your H is just WRONG.


    embedded_H could be encoded with every detail of all knowledge that can
    be expressed using language. This means that embedded_H is not
    restricted by typical conventions. embedded_H could output a text string
    swearing at you in English for trying to trick it. This would not be a
    wrong answer.

    Embedded H is restricted to only be able to do what is computable.

    Since Embedded_H is (at least by your claims) an exact copy of the
    Turing Machine H, it can only do what a Turing Machine can do.


    When Embedded_H has encoded within it all of human knowledge that can
    be encoded within language then it ceases to be restricted to Boolean.
    This enables Embedded_H to do anything that a human mind can do.

    A human mind cannot solve the halting problem of a complex program with
    complex input.

    --
    Mikko

  • From Mikko@21:1/5 to Richard Damon on Sun Feb 11 11:34:39 2024
    On 2024-02-11 03:27:05 +0000, Richard Damon said:

    On 2/10/24 9:24 PM, olcott wrote:

    When a machine contradicts every answer that this same machine
    provides, this is a ruse to try to show that computation is limited.

    In other words, you don't understand what you are talking about.

    You don't understand what a computation IS, so you don't understand
    its limits.

    Is there any evidence that Olcott can understand anything at all?

    --
    Mikko

  • From Mikko@21:1/5 to olcott on Sun Feb 11 11:18:43 2024
    On 2024-02-10 16:18:05 +0000, olcott said:

    [... same quoted exchange as in Mikko's Sun Feb 11 11:00:10 message above ...]

    embedded_H could be encoded with every detail of all knowledge that can
    be expressed using language. This means that embedded_H is not
    restricted by typical conventions. embedded_H could output a text string
    swearing at you in English for trying to trick it. This would not be a
    wrong answer.

    Embedded H is restricted to only be able to do what is computable.

    Since Embedded_H is (at least by your claims) an exact copy of the
    Turing Machine H, it can only do what a Turing Machine can do.


    When Embedded_H has encoded within it all of human knowledge that can
    be encoded within language then it ceases to be restricted to Boolean.
    This enables Embedded_H to do anything that a human mind can do.

    So, it CAN'T do what you claim, so you are a LIAR.

    enum Boolean {
       TRUE,
       FALSE,
       NEITHER
    };

    Boolean True(English, "this sentence is not true")
    would be required to do this same sort of thing.



    But CAN it? Remember, programs can only do what programs can do, which
    is based on the instructions they are composed of.

    You are just too stupid to understand this.

    It is not that I am stupid; it is that you cannot think outside the box
    of conventional wisdom. There is nothing impossible about a TM that
    can communicate in English and understand the meaning of words to the
    same extent that human experts do.

    Everyone whose opinion matters can easily see, after reading some of
    Olcott's old (e.g. https://groups.google.com/g/comp.theory/c/eMq0M4JuqIE)
    and new posts, whether he really is so stupid.

    A Turing machine that can communicate in English and understand the
    meaning of words to the same extent that human experts do still cannot
    solve the halting problem.

    --
    Mikko

  • From Mikko@21:1/5 to olcott on Mon Feb 12 12:34:13 2024
    On 2024-02-11 14:57:55 +0000, olcott said:

    On 2/11/2024 6:37 AM, Richard Damon wrote:
    On 2/10/24 10:45 PM, olcott wrote:
    On 2/10/2024 9:26 PM, Richard Damon wrote:
    On 2/10/24 9:59 PM, olcott wrote:

    Mechanical and organic thinkers are either coherent or incorrect.


    "Mechanical things" don't "think" in the normal sense it us used.

    They COMPUTE, based on fixed pre-defined rules.



    LLMs can reconfigure themselves on the fly, redefining
    their own rules within a single dialogue.


    But only in accordance with its existing programming, or your system
    isn't a Computation.


    The point is that they can reprogram themselves on the fly using modern machine learning. LLMs learn on their own.

    They can only learn what they are programmed to learn.

    --
    Mikko

  • From Mikko@21:1/5 to olcott on Tue Feb 13 10:09:25 2024
    On 2024-02-12 20:44:01 +0000, olcott said:

    On 2/12/2024 1:42 PM, immibis wrote:
    On 12/02/24 19:37, olcott wrote:
    On 2/12/2024 12:29 PM, immibis wrote:
    On 12/02/24 19:14, olcott wrote:

    Math and computer science are anchored in fundamental misconceptions
    of the way that analytical truth really works.

    It may seem that way to everyone that does not understand math,
    computer science, and analytical truth.


    Very few people understand analytical truth, most simply
    disbelieve that it exists on the basis of Quine's nonsense
    rebuttal.

    Many people understand the halting problem. You are not one of them.


    Many people understand that the halting problem proof has
    no errors within the conventional notion of undecidability.

    Very few people understand that conventional notion of
    undecidability is itself incoherent.

    In particular, Olcott does not understand that. Otherwise he would
    demonstrate that understanding.

    --
    Mikko

  • From Mikko@21:1/5 to Richard Damon on Tue Feb 13 10:18:09 2024
    On 2024-02-13 02:10:47 +0000, Richard Damon said:

    On 2/12/24 1:37 PM, olcott wrote:
    On 2/12/2024 12:29 PM, immibis wrote:
    On 12/02/24 19:14, olcott wrote:

    Math and computer science are anchored in fundamental misconceptions
    of the way that analytical truth really works.

    It may seem that way to everyone that does not understand math,
    computer science, and analytical truth.


    Very few people understand analytical truth, most simply
    disbelieve that it exists on the basis of Quine's nonsense
    rebuttal.

    Two Dogmas of Empiricism Willard Van Orman Quine (1951)
    https://michaelreno.org/wp-content/uploads/2020/01/QuineTwoDogmas.pdf



    It's clear that YOU don't understand what you are talking about.

    It is ANALYTICALLY TRUE, as PROVEN, that Halting is Undecidable.

    It is possible that somebody does not understand the word "undecidable",
    so it is better to say that "No Turing machine is a halting decider",
    which is proven and therefore analytically true.
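
    Stated a bit more formally (a compressed LaTeX formulation of the
    theorem being paraphrased here; the notation is assumed, not quoted
    from Linz): there is no Turing machine $H$ such that, for every
    machine $M$ and input $w$,
    $$ H(\langle M \rangle, w) = \begin{cases}
         \text{accept} & \text{if } M \text{ halts on } w \\
         \text{reject} & \text{otherwise.}
       \end{cases} $$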

    --
    Mikko

  • From Mikko@21:1/5 to olcott on Wed Feb 14 13:58:32 2024
    On 2024-02-14 02:54:57 +0000, olcott said:

    On 2/13/2024 7:55 PM, immibis wrote:
    On 13/02/24 23:53, olcott wrote:
    On 2/13/2024 2:25 PM, immibis wrote:
    On 13/02/24 01:11, olcott wrote:
    On 2/12/2024 5:08 PM, immibis wrote:
    On 12/02/24 22:49, olcott wrote:
    On 2/12/2024 3:41 PM, immibis wrote:
    On 12/02/24 21:34, olcott wrote:
    On 2/12/2024 2:12 PM, Shvili, the Kookologist wrote:
    On 2024-02-12, olcott <polcott2@gmail.com> wrote:
    [...]

    Self-contradictory inputs must be rejected as invalid.
    Math and computer science don't understand this.

    I'm curious... How can you possibly write things like this and not see
    that you are (or at least will be seen as) a deluded crackpot?

    *This proves that Gödel did not understand that*
    ...14 Every epistemological antinomy can likewise be used for a similar
    undecidability proof...(Gödel 1931:43)

    After you acknowledge that you understand that epistemological
    antinomies cannot be used as the basis of any proof I will
    elaborate further.


    Linz paper is all concrete computer science. There are no
    "epistemological antinomies", only computer science.

    PhD computer science professors
    Stoddart, Hehner and Macias disagree thus proving that
    I am not a crank.


    Unlike you, I actually read the Macias paper you referenced. He does
    not agree with you and he does not prove anything.

    The other two directly agree with me; Macias is a little more indirect.
        Now run BAD(BAD) and consider what happens: ...
        Note that these are the only two possible cases, and in either case
        (whether HALT returns 0 or 1), HALT's behavior is incorrect,
        i.e., HALT fails to answer the Halting Problem correctly (Macias:2014)
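
    The same diagonal construction, in the HALT/BAD naming Macias uses,
    can be sketched in C (hypothetical names; HALT is the assumed decider
    and cannot actually be implemented, so the stub just guesses 1):

    #include <stdio.h>

    typedef const char *Desc;    /* a program description, e.g. "<BAD>" */

    /* Assumed decider: HALT(p, i) returns 1 if the program described
     * by p halts on input i, else 0. Hypothetical stub. */
    static int HALT(Desc p, Desc i) {
        (void)p; (void)i;
        return 1;
    }

    /* Macias's BAD: ask HALT about BAD run on its own description,
     * then do the opposite. */
    static void BAD(Desc self) {
        if (HALT(self, self))
            for (;;) ;           /* HALT said "halts": loop forever */
        /* HALT said "loops": halt immediately */
    }

    int main(void) {
        (void)BAD;               /* not called: with this stub it would loop */
        /* With the stub answering 1, BAD("<BAD>") would loop forever,
         * making that answer wrong; answering 0 is wrong the other way.
         * These are exactly the two cases in the quoted passage. */
        printf("HALT(<BAD>, <BAD>) = %d\n", HALT("<BAD>", "<BAD>"));
        return 0;
    }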



    So Macias agrees the halting problem cannot be solved by any program.


    Three PhD computer science professors agree with my 2004
    position that:

    *the only reason the halting problem cannot be*
    *solved is that there is something wrong with it*

    So you agree it cannot be solved. Case closed. You can stop posting now.

    Does the halting problem place an actual limit on computation?
    https://www.researchgate.net/publication/374806722_Does_the_halting_problem_place_an_actual_limit_on_computation


    *Incorrect questions place no limit on anyone or anything*
    *Incorrect questions place no limit on anyone or anything*
    *Incorrect questions place no limit on anyone or anything*

    If you think so then show a Turing machine that can correctly answer
    at least some incorrect question.

    --
    Mikko

  • From Mikko@21:1/5 to olcott on Wed Feb 14 13:56:02 2024
    On 2024-02-13 15:06:29 +0000, olcott said:

    On 2/13/2024 2:35 AM, Mikko wrote:
    On 2024-02-13 00:11:58 +0000, olcott said:

    The other two directly agree with me; Macias is a little more indirect.

        Now run BAD(BAD) and consider what happens: ...
        Note that these are the only two possible cases, and in either case
        (whether HALT returns 0 or 1), HALT's behavior is incorrect,
        i.e., HALT fails to answer the Halting Problem correctly (Macias:2014)

    That is in no way indirect; that is a direct statement of an important
    part of the proof of undecidability of halting.


    Macias says this is what's wrong with the halting problem specification:
    But there is a class of computer functions whose behavior is
    dependent on the context in which they are called or used:
    these may be called Context-Dependent Functions (CDFs). (Macias:2014)

    Thus three PhD computer science professors agree (with me) that
    there is something wrong with the halting problem specification.

    One can say that certain computer functions are context dependent
    but no Turing machine is. Therefore that statement does not apply
    to the halting problem of Turing machines.
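
    A toy C illustration of the distinction being made here (hypothetical
    example, not taken from Macias's paper): a C function can observe
    context that is not among its arguments, whereas a Turing machine's
    next step depends only on its current state and tape contents.

    #include <stdio.h>

    static int call_count = 0;   /* hidden context outside the arguments */

    /* "Context-dependent" in the loose sense above: the result depends
     * on how many times the caller has already invoked it. */
    static int context_dependent(int x) {
        call_count++;
        return x + call_count;
    }

    /* A pure computation: the result depends on the argument alone,
     * as a Turing machine's behavior depends only on state and tape. */
    static int pure(int x) {
        return x + 1;
    }

    int main(void) {
        printf("%d\n", context_dependent(1));  /* prints 2 */
        printf("%d\n", context_dependent(1));  /* same argument, prints 3 */
        printf("%d\n", pure(1));               /* prints 2 */
        printf("%d\n", pure(1));               /* prints 2 again */
        return 0;
    }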

    --
    Mikko
