• Re: Linz's proofs and other undecidable decision problems

    From Richard Damon@21:1/5 to olcott on Thu Feb 29 21:28:34 2024
    XPost: sci.logic

    On 2/29/24 5:29 PM, olcott wrote:
    On 2/29/2024 4:24 PM, wij wrote:
    On Thu, 2024-02-29 at 16:13 -0600, olcott wrote:
    On 2/29/2024 4:06 PM, wij wrote:
    On Thu, 2024-02-29 at 15:59 -0600, olcott wrote:
    On 2/29/2024 3:50 PM, wij wrote:
    On Thu, 2024-02-29 at 15:27 -0600, olcott wrote:
    On 2/29/2024 3:15 PM, wij wrote:
    On Thu, 2024-02-29 at 15:07 -0600, olcott wrote:
    On 2/29/2024 3:00 PM, wij wrote:
    On Thu, 2024-02-29 at 14:51 -0600, olcott wrote:
    On 2/29/2024 2:48 PM, wij wrote:
    On Thu, 2024-02-29 at 13:46 -0600, olcott wrote:
    On 2/29/2024 1:37 PM, Mikko wrote:
    On 2024-02-29 15:51:56 +0000, olcott said:

    H ⟨Ĥ⟩ ⟨Ĥ⟩ (in a separate memory space) merely needs to
    report on

    A Turing machine is not in any memory space.


    That no memory space is specified because Turing machines
    are imaginary fictions does not entail that they have no
    memory space. The actual memory space of actual Turing
    machines is the human memory where these ideas are located.

    The entire notion of undecidability when it depends on
    epistemological antinomies is incoherent.

    People that learn these things by rote never notice this.
    Philosophers that examine these things looking for
    incoherence find it.

    ...14 Every epistemological antinomy can likewise be used
    for a similar undecidability proof...(Gödel 1931:43)

    So, do you agree what GUR says?

    People believe GUR. Why struggle so painfully, playing
    idiot every day?
    Give in, my friend.

    Graphical User Robots?
    The survival of the species depends on a correct
    understanding of truth.

    People who believe GUR are going to survive.
    People who do not believe GUR are going to vanish.

    What the Hell is GUR ?

    Selective memory?
    https://groups.google.com/g/comp.theory/c/_tbCYyMox9M/m/XgvkLGOQAwAJ

    Basically, GUR says that no one, not even your god, can defy that
    the HP is undecidable.

    I simplify that down to this.

    ...14 Every epistemological antinomy can likewise be used for
    a similar undecidability proof...(Gödel 1931:43)

    The general notion of decision problem undecidability is fundamentally
    flawed in all of those cases where a decider is required to correctly
    answer a self-contradictory (thus incorrect) question.

    When we account for this then epistemological antinomies are always
    excluded from the domain of every decision problem, making all of
    these decision problems decidable.


    It seems you try to change what the halting problem again.

    https://en.wikipedia.org/wiki/Halting_problem
    In computability theory, the halting problem is the problem of
    determining, from a description of an arbitrary computer program and an
    input, whether the program will finish running, or continue to run
    forever....

    This wiki definition has been shown many times. But, since your
    English is terrible, you often read it as something else (actually,
    deliberately interpret it differently, a so-called 'lie').

    If you want to refute the Halting Problem, you must first understand
    what the problem is about, right? You never hit the target that
    everyone can see, but POOP.




    Note: My email was delivered strangely. It swapped to sci.logic !!!

    If we have the decision problem that no one can answer this question:
    Is this sentence true or false: "What time is it?"

    This is not the halting problem.

    Someone has to point out that there is something wrong with it.


    This is another problem (not the HP either)


    The halting problem is one of many problems that is
    only "undecidable" because the notion of decidability
    incorrectly requires a correct answer to a self-contradictory
    (thus incorrect) question.


    What is the 'correct answer' to all HP-like problems?


    The correct answer to all undecidable decision problems
    that rely on self-contradictory input to determine
    undecidability is to reject this input as outside of the
    domain of any and all decision problems. This applies
    to the Halting Problem and many others.



    In other words, just define that some Turing Machines aren't actually
    Turing Machines, or aren't Turing Machines if they are given certain inputs.

    That is just admitting that the system isn't actually decidable, by
    trying to outlaw the problems.

    The issue then is, you can't tell if a thing that looks like and acts
    like a Turing Machine is actually a PO-Turing Machine, until you can
    confirm that it doesn't have any of these contradictory properties.

    My guess is that detecting that is probably non-computable, so you can't
    tell for sure if what you have is actually a PO-Turing Machine or not.
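    The objection above, that outlawing the "contradictory" inputs just
    concedes undecidability, tracks the classical diagonal proof. Here is a
    minimal Python sketch of that construction; the names make_diagonal, h
    and d are illustrative assumptions, not anyone's actual code:

```python
def make_diagonal(h):
    """Given any candidate halting decider h(prog, arg) -> bool,
    build the program d that does the opposite of h's verdict on itself."""
    def d(x):
        if h(d, x):          # h claims d(x) halts ...
            while True:      # ... so loop forever instead
                pass
        return "halted"      # h claims d(x) loops, so halt immediately
    return d

# Try a candidate decider that always answers "does not halt":
h = lambda prog, arg: False
d = make_diagonal(h)
print(d(d))  # prints "halted": d(d) halts, refuting h's verdict on (d, d)
```

    Whichever boolean a total h returns for (d, d), d does the opposite, so
    no candidate decider answers every case correctly; declaring d "not a
    valid input" is exactly the move being criticized here.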

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From immibis@21:1/5 to olcott on Fri Mar 1 12:35:05 2024
    On 1/03/24 01:17, olcott wrote:
    All incorrect questions are rejected as invalid input.

    Turing machines are defined where every sequence of alphabet symbols is
    valid input. If you think that should not be true, you need to invent a
    new type of machine and say what the valid inputs are. You can call it
    an Olcott machine, if you like.

    Turing machines are useful because everything is completely specified.
    There are no unknowns. There are no gaps. There is no ambiguity. There
    is no self-contradiction or anything else. Can you make Olcott machines
    like this?
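    The point that a Turing machine's behavior is fully specified on every
    input can be seen in a few lines: a machine is just a finite transition
    table. The simulator below is a sketch; its encoding conventions (a
    sparse dict tape, a distinguished "halt" state) are my own assumptions:

```python
def run_tm(delta, tape, state="q0", blank="_", max_steps=10_000):
    """Simulate a one-tape Turing machine.
    delta maps (state, symbol) -> (new_state, write_symbol, 'L' or 'R');
    the machine halts on entering state 'halt'."""
    cells = dict(enumerate(tape))           # sparse tape, blank elsewhere
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            lo, hi = min(cells, default=0), max(cells, default=0)
            out = "".join(cells.get(i, blank) for i in range(lo, hi + 1))
            return True, out.strip(blank)
        sym = cells.get(pos, blank)
        state, cells[pos], move = delta[(state, sym)]
        pos += 1 if move == "R" else -1
    return False, None                      # step budget exhausted

# A machine that flips every bit, then halts on the first blank:
FLIP = {("q0", "0"): ("q0", "1", "R"),
        ("q0", "1"): ("q0", "0", "R"),
        ("q0", "_"): ("halt", "_", "R")}
```

    run_tm(FLIP, "0110") returns (True, "1001"). Because delta is total
    over the machine's tape alphabet, there is no tape the machine can
    "reject as invalid", which is the point being made above.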

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Fri Mar 1 12:19:04 2024
    XPost: sci.logic

    On 3/1/24 12:03 PM, olcott wrote:
    On 3/1/2024 5:19 AM, Mikko wrote:
    On 2024-03-01 02:28:34 +0000, Richard Damon said:

    [...]

    If the restrictions on the acceptability of a Turing machine are
    sufficiently strong, both the restricted halting problem and membership
    of the restricted domain are Turing solvable. For example, if the head
    can only move in one direction.


    I have reverted to every detail of the original halting problem
    thus now accept that a halt decider must report on the behavior
    of the direct execution of its input.

    Except that you don't have Ĥ.H being the same machine as H, as somehow
    they can give different answers.


    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does not halt

    Ĥ contradicts Ĥ.H and does not contradict H, thus H is able to
    correctly decide ⟨Ĥ⟩ ⟨Ĥ⟩.

    As long as some computable criteria exists for Ĥ.H to transition
    to Ĥ.Hqy or Ĥ.Hqn, then H has its basis to correctly decide ⟨Ĥ⟩ ⟨Ĥ⟩.

    H simply looks for whatever wrong answer Ĥ.H returns and
    reports on the halting or not-halting behavior of that.


    And thus isn't Ĥ.H, and so you LIE that you are following "every detail"

    You are just proving that you are a PATHOLOGICAL LIAR.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Fri Mar 1 15:57:37 2024
    XPost: sci.logic

    On 3/1/24 3:44 PM, olcott wrote:
    On 3/1/2024 11:19 AM, Richard Damon wrote:
    And thus isn't Ĥ.H, and so you LIE that you are following "every detail"

    You are just proving that you are a PATHOLOGICAL LIAR.

    You (and everyone else here) know that I honestly
    believe what I say; thus you lie when you call me a liar.
    You have been called out on this by others before.




    Yes, and that makes you a PATHOLOGICAL LIAR, as you believe your own
    lies despite seeing that they can not be true.

    Blatant disregard for the truth does not absolve you of being a LIAR.

    It also makes you an IDIOT.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mikko@21:1/5 to olcott on Sat Mar 2 12:40:27 2024
    On 2024-03-01 17:03:39 +0000, olcott said:

    On 3/1/2024 5:19 AM, Mikko wrote:
    On 2024-03-01 02:28:34 +0000, Richard Damon said:

    [...]

    If the restrictions on the acceptability of a Turing machine are
    sufficiently strong, both the restricted halting problem and membership
    of the restricted domain are Turing solvable. For example, if the head
    can only move in one direction.


    I have reverted to every detail of the original halting problem
    thus now accept that a halt decider must report on the behavior
    of the direct execution of its input.

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt

    Ĥ contradicts Ĥ.H and does not contradict H, thus H is able to
    correctly decide ⟨Ĥ⟩ ⟨Ĥ⟩.

    Hard to do if Ĥ.H says the same as H.
    Hard to ensure that Ĥ.H does not say the same as H.

    As long as some computable criteria exists for Ĥ.H to transition
    to Ĥ.Hqy or Ĥ.Hqn, then H has its basis to correctly decide ⟨Ĥ⟩ ⟨Ĥ⟩.

    That is not very long.

    H simply looks for whatever wrong answer that Ĥ.H returns and
    reports on the halting or not halting behavior of that.

    --
    Mikko

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sat Mar 2 16:53:05 2024
    On 3/2/24 11:24 AM, olcott wrote:
    On 3/2/2024 4:40 AM, Mikko wrote:
    On 2024-03-01 17:03:39 +0000, olcott said:

    On 3/1/2024 5:19 AM, Mikko wrote:
    On 2024-03-01 02:28:34 +0000, Richard Damon said:

    [...]


    I have reverted to every detail of the original halting problem
    thus now accept that a halt decider must report on the behavior
    of the direct execution of its input.

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does not halt

    Ĥ contradicts Ĥ.H and does not contradict H, thus H is able to
    correctly decide ⟨Ĥ⟩ ⟨Ĥ⟩.

    Hard to do if Ĥ.H says the same as H.
    Hard to ensure that Ĥ.H does not say the same as H.


    Both H and Ĥ.H simulate their inputs until they see that these
    inputs must be aborted to prevent their own infinite execution.
    When they find that they must abort the simulation of their
    inputs they transition to their NO state.

    This results in Ĥ.H transitioning to Ĥ.Hqn and H transitioning
    to H.qy. I have already empirically proved that two identical
    machines on identical input can transition to different final
    states when one of these identical machines has a pathological
    relationship with its input and the other does not.

    Why did they differ?

    Your "empirical test" just shows that your H and H1 were never
    computations and you have been an ignorant pathological lying idiot for
    years.


    *This principle seems to be sound*
    Two identical machines must derive the same result when
    applied to the same input.

    *Yet seems contradicted by the execution trace shown below*

    Nope, because you INTENTIONALLY HID the point of the difference.

    H and H1 use a HIDDEN INPUT, thus making them non-computations and your
    whole argument just a LIE.


    Because D calls H and D does not call H1 the inputs are not
    actually identical even though they have identical machine
    code bytes.


    But that isn't enough, as some machine code instructions extract
    "hidden" input data (the address of the code).

    So, you are just proving that you are an ignorant pathological lying
    idiot.
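    The "hidden input" charge can be illustrated with a toy pair of
    byte-for-byte identical closures that consult their own identity,
    information that is not part of the argument prog. This is a
    hypothetical sketch, not the actual H/H1:

```python
def make_decider(own_name):
    # The body below is textually identical for H and H1; the only
    # difference is the hidden input own_name, the closure's own identity.
    def decider(prog):
        return own_name in prog     # answer depends on who is asking
    return decider

H  = make_decider("H")
H1 = make_decider("H1")
prog = "int D() { return H(D, D); }"   # one and the same input string
# H(prog) and H1(prog) disagree even though their code and input match,
# which is only possible because each reads data outside its input.
```

    Two machines that disagree on the same input while running the same
    code are, by definition, not computing a function of that input alone.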

    H sees D(D) call itself; this forces H to abort D.
    H1 does not see D(D) call itself; this does not force H1 to abort D.

    int D(int (*x)())
    {
      int Halt_Status = H(x, x);
      if (Halt_Status)
        HERE: goto HERE;
      return Halt_Status;
    }

    int main()
    {
      Output("Input_Halts = ", H1(D,D));
    }

     machine   stack     stack     machine    assembly
     address   address   data      code       language
     ========  ========  ========  =========  =============
    [00001d42][00102fe9][00000000] 55         push ebp      ; begin main()
    [00001d43][00102fe9][00000000] 8bec       mov  ebp,esp
    [00001d45][00102fe5][00001d12] 68121d0000 push 00001d12 ; push D
    [00001d4a][00102fe1][00001d12] 68121d0000 push 00001d12 ; push D
    [00001d4f][00102fdd][00001d54] e8eef6ffff call 00001442 ; call H1(D,D)

    H1: Begin Simulation   Execution Trace Stored at:113095
    Address_of_H1:1442
    [00001d12][00113081][00113085] 55         push ebp      ; begin D
    [00001d13][00113081][00113085] 8bec       mov  ebp,esp
    [00001d15][0011307d][00103051] 51         push ecx
    [00001d16][0011307d][00103051] 8b4508     mov  eax,[ebp+08]
    [00001d19][00113079][00001d12] 50         push eax      ; push D
    [00001d1a][00113079][00001d12] 8b4d08     mov  ecx,[ebp+08]
    [00001d1d][00113075][00001d12] 51         push ecx      ; push D
    [00001d1e][00113071][00001d23] e81ff8ffff call 00001542 ; call H(D,D)

    H: Begin Simulation   Execution Trace Stored at:15dabd
    Address_of_H:1542
    [00001d12][0015daa9][0015daad] 55         push ebp      ; begin D
    [00001d13][0015daa9][0015daad] 8bec       mov  ebp,esp
    [00001d15][0015daa5][0014da79] 51         push ecx
    [00001d16][0015daa5][0014da79] 8b4508     mov  eax,[ebp+08]
    [00001d19][0015daa1][00001d12] 50         push eax      ; push D
    [00001d1a][0015daa1][00001d12] 8b4d08     mov  ecx,[ebp+08]
    [00001d1d][0015da9d][00001d12] 51         push ecx      ; push D
    [00001d1e][0015da99][00001d23] e81ff8ffff call 00001542 ; call H(D,D)
    H: Recursive Simulation Detected Simulation Stopped (return 0 to caller)

    [00001d23][0011307d][00103051] 83c408     add  esp,+08  ; returned to D
    [00001d26][0011307d][00000000] 8945fc     mov  [ebp-04],eax
    [00001d29][0011307d][00000000] 837dfc00   cmp  dword [ebp-04],+00
    [00001d2d][0011307d][00000000] 7402       jz   00001d31
    [00001d31][0011307d][00000000] 8b45fc     mov  eax,[ebp-04]
    [00001d34][00113081][00113085] 8be5       mov  esp,ebp
    [00001d36][00113085][00001541] 5d         pop  ebp
    [00001d37][00113089][00001d12] c3         ret           ; exit D
    H1: End Simulation   Input Terminated Normally (return 1 to caller)

    [00001d54][00102fe9][00000000] 83c408     add  esp,+08
    [00001d57][00102fe5][00000001] 50         push eax      ; H1 return value
    [00001d58][00102fe1][00000763] 6863070000 push 00000763 ; string address
    [00001d5d][00102fe1][00000763] e820eaffff call 00000782 ; call Output
    Input_Halts = 1
    [00001d62][00102fe9][00000000] 83c408     add  esp,+08
    [00001d65][00102fe9][00000000] 33c0       xor  eax,eax
    [00001d67][00102fed][00000018] 5d         pop  ebp
    [00001d68][00102ff1][00000000] c3         ret           ; exit main()
    Number of Instructions Executed(470247) == 7019 Pages
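    The trace above shows H aborting a nested simulation of D while the
    outer H1 sees that same D halt. The abort-on-recursion criterion, and
    the objection to it, can be sketched in Python; the exception-based
    abort and the threaded _seen tuple are my own illustrative assumptions,
    not the actual implementation:

```python
class Aborted(Exception):
    """Raised when a simulated configuration repeats."""

def H(prog, arg, _seen=()):
    # Toy simulating decider: abort and answer 0 ("does not halt") as
    # soon as the same (prog, arg) configuration reappears.
    if (prog, arg) in _seen:
        raise Aborted
    try:
        prog(arg, _seen + ((prog, arg),))   # simulate one nesting level
    except Aborted:
        return 0                            # aborted => report non-halting
    return 1                                # simulated input halted

def D(x, _seen=()):
    # D does the opposite of whatever H predicts about D(x).
    if H(x, x, _seen):
        while True:
            pass
    return 0

# H reports 0 ("does not halt") for D(D), yet the direct execution of
# D(D) halts, returning 0: the verdict and the behavior disagree.
```

    This mirrors the dispute exactly: the abort rule always produces some
    answer, but for the input built to contradict it, that answer does not
    match the direct execution.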


    As long as some computable criteria exists for Ĥ.H to transition
    to Ĥ.Hqy or Ĥ.Hqn, then H has its basis to correctly decide ⟨Ĥ⟩ ⟨Ĥ⟩.

    That is not very long.

    H simply looks for whatever wrong answer that Ĥ.H returns and
    reports on the halting or not halting behavior of that.



    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sun Mar 3 07:15:19 2024
    XPost: sci.logic

    On 3/2/24 10:15 PM, olcott wrote:
    On 3/2/2024 3:53 PM, Richard Damon wrote:
    On 3/2/24 11:24 AM, olcott wrote:
    On 3/2/2024 4:40 AM, Mikko wrote:
    On 2024-03-01 17:03:39 +0000, olcott said:

    On 3/1/2024 5:19 AM, Mikko wrote:
    On 2024-03-01 02:28:34 +0000, Richard Damon said:

    On 2/29/24 5:29 PM, olcott wrote:
    On 2/29/2024 4:24 PM, wij wrote:
    On Thu, 2024-02-29 at 16:13 -0600, olcott wrote:
    On 2/29/2024 4:06 PM, wij wrote:
    On Thu, 2024-02-29 at 15:59 -0600, olcott wrote:
    On 2/29/2024 3:50 PM, wij wrote:
    On Thu, 2024-02-29 at 15:27 -0600, olcott wrote:
    On 2/29/2024 3:15 PM, wij wrote:
    On Thu, 2024-02-29 at 15:07 -0600, olcott wrote:
    On 2/29/2024 3:00 PM, wij wrote:
    On Thu, 2024-02-29 at 14:51 -0600, olcott wrote:
    On 2/29/2024 2:48 PM, wij wrote:
    On Thu, 2024-02-29 at 13:46 -0600, olcott wrote:
    On 2/29/2024 1:37 PM, Mikko wrote:
    On 2024-02-29 15:51:56 +0000, olcott said:

    H ⟨Ĥ⟩ ⟨Ĥ⟩ (in a separate memory space) merely needs to report on

    A Turing machine is not in any memory space.

    That no memory space is specified because Turing machines
    are imaginary fictions does not entail that they have no
    memory space. The actual memory space of actual Turing
    machines is the human memory where these ideas are located.

    The entire notion of undecidability when it depends on
    epistemological antinomies is incoherent.

    People that learn these things by rote never notice this.
    Philosophers that examine these things looking for
    incoherence find it.

    ...14 Every epistemological antinomy can likewise be used
    for a similar undecidability proof...(Gödel 1931:43)

    So, do you agree with what GUR says?

    People believe GUR. Why struggle so painfully, playing
    the idiot every day? Give in, my friend.

    Graphical User Robots?
    The survival of the species depends on a correct
    understanding of truth.

    People who believe GUR are going to survive.
    People who do not believe GUR are going to vanish.

    What the Hell is GUR?

    Selective memory?
    https://groups.google.com/g/comp.theory/c/_tbCYyMox9M/m/XgvkLGOQAwAJ

    Basically, GUR says that no one, not even your god, can defy
    the fact that the HP is undecidable.

    I simplify that down to this.

    ...14 Every epistemological antinomy can likewise be used for
    a similar undecidability proof...(Gödel 1931:43)

    The general notion of decision problem undecidability is
    fundamentally flawed in all of those cases where a decider
    is required to correctly answer a self-contradictory (thus
    incorrect) question.

    When we account for this then epistemological antinomies are
    always excluded from the domain of every decision problem,
    making all of these decision problems decidable.

    It seems you are trying to change what the halting problem is again.

    https://en.wikipedia.org/wiki/Halting_problem
    In computability theory, the halting problem is the problem
    of determining, from a description of an arbitrary computer
    program and an input, whether the program will finish running,
    or continue to run forever....

    This wiki definition has been shown many times. But, since
    your English is terrible, you often read it as something else
    (actually, deliberately interpret it differently, a so-called 'lie').

    If you want to refute the Halting Problem, you must first
    understand what the problem is about, right? You never hit
    the target that everyone can see, but POOP.

    Note: My email was delivered strangely. It swapped to
    sci.logic !!!

    If we have the decision problem that no one can answer this question:
    Is this sentence true or false: "What time is it?"

    This is not the halting problem.

    Someone has to point out that there is something wrong with it.

    This is another problem (not the HP either)

    The halting problem is one of many problems that is
    only "undecidable" because the notion of decidability
    incorrectly requires a correct answer to a self-contradictory
    (thus incorrect) question.

    What is the 'correct answer' to all HP-like problems?

    The correct answer to all undecidable decision problems
    that rely on self-contradictory input to determine
    undecidability is to reject this input as outside of the
    domain of any and all decision problems. This applies
    to the Halting Problem and many others.

    In other words, just define that some Turing Machines aren't
    actually Turing Machines, or aren't Turing Machines if they are
    given certain inputs.

    That is just admitting that the system isn't actually decidable,
    by trying to outlaw the problems.

    The issue then is, you can't tell if a thing that looks like and
    acts like a Turing Machine is actually a PO-Turing Machine, until
    you can confirm that it doesn't have any of these contradictory
    properties.

    My guess is that detecting that is probably non-computable, so
    you can't tell for sure if what you have is actually a PO-Turing
    Machine or not.

    If the restrictions on the acceptability of a Turing machine are
    sufficiently strong, both the restricted halting problem and the
    membership of the restricted domain are Turing solvable. For
    example, if the head can only move in one direction.


    I have reverted to every detail of the original halting problem and
    thus now accept that a halt decider must report on the behavior
    of the direct execution of its input.

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does not halt

    Ĥ contradicts Ĥ.H and does not contradict H, thus H is able to
    correctly decide ⟨Ĥ⟩ ⟨Ĥ⟩.

    Hard to do if Ĥ.H says the same as H.
    Hard to ensure that Ĥ.H does not say the same as H.


    Both H and Ĥ.H simulate their inputs until they see that these
    inputs must be aborted to prevent their own infinite execution.
    When they find that they must abort the simulation of their
    inputs they transition to their NO state.

    This results in Ĥ.H transitioning to Ĥ.Hqn and H transitioning
    to H.qy. I have already empirically proved that two identical
    machines on identical input can transition to different final
    states when one of these identical machines has a pathological
    relationship with its input and the other does not.

    Why did they differ?

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does not halt

    Execution trace of Ĥ applied to ⟨Ĥ⟩
    (a) Ĥ.q0 The input ⟨Ĥ⟩ is copied then transitions to Ĥ.H
    (b) Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ (input and copy) simulates ⟨Ĥ⟩ applied to ⟨Ĥ⟩
    (c) which begins at its own simulated ⟨Ĥ.q0⟩ to repeat the process

    Simulation invariant: ⟨Ĥ⟩ correctly simulated by Ĥ.H never
    reaches its own simulated final state of ⟨Ĥ.qn⟩

    So?


    Humans can see that ⟨Ĥ⟩ ⟨Ĥ⟩ correctly simulated by Ĥ.H
    cannot possibly terminate unless this simulation is aborted.

    Humans can also see that if Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ does abort
    its simulation then Ĥ will halt.

    It seems quite foolish to believe that computers cannot
    possibly ever see this too.



    We are not "Computations", and in particular, we are not H.

    And Yes, (if we are smart) we can see that there is no answer that H can
    give and be correct. We can also see that for every possible program
    that could be put in as an (incorrect) H, H^ (H^) will have a
    specific behavior, just one that H doesn't give as its answer, thus the
    question about what it does is valid.

    Most Humans can also tell that your logic is just broken and that you
    have been nothing but an ignorant pathological liar.

    Your "Logic" just shows how little you understand about what you talk
    about, and thus no one (with any intelligence) is apt to look into your
    ideas about truth, as clearly you don't understand what truth actually is.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sun Mar 3 15:40:56 2024
    XPost: sci.logic

    On 3/3/24 2:05 PM, olcott wrote:
    On 3/3/2024 6:15 AM, Richard Damon wrote:
    We are not "Computations", and in particular, we are not H.

    And Yes, (if we are smart) we can see that there is no answer that H
    can give and be correct.

    That there is no Ĥ.H that can correctly decide halting for ⟨Ĥ⟩ ⟨Ĥ⟩
    does not actually entail that there is no H that can do this.

    Since they are the EXACT SAME ALGORITHM, it does.


    The key distinction that I recently realized the significance
    of was that with actual Turing Machines the hypothetical halt
    decider must be embedded within its input.

    *This means that the input can only contradict itself*

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn   // Ĥ applied to ⟨Ĥ⟩ does not halt

    We can see that there is no answer that Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩
    can derive that corresponds to the actual behavior of Ĥ applied to ⟨Ĥ⟩.

    Both H and Ĥ.H use the same algorithm that correctly detects
    whether or not a correct simulation of their input would cause
    their own infinite execution unless aborted.

    Humans can see that this criterion derives different answers
    for Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ than for H applied to ⟨Ĥ⟩ ⟨Ĥ⟩.

    H merely needs to correctly simulate ⟨Ĥ⟩ ⟨Ĥ⟩ to see that Ĥ
    applied to ⟨Ĥ⟩ halts.



    Nope. If H^.H aborts its simulation on some condition, then H sees
    exactly the same conditions and will abort its simulation and give the
    same wrong answer. Remember, they ARE the same algorithm given the same
    input.

    If H doesn't abort its simulation, because it never saw the needed
    condition, then neither will H^.H when it does its simulation (that H
    is simulating), so it will just continue forever and H will never answer.

    You are just imagining a false state, because you just don't understand
    what happens, because you have gas-lit yourself into being an ignorant
    pathological lying idiot.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Sun Mar 3 20:20:24 2024
    XPost: sci.logic

    On 3/3/24 8:08 PM, olcott wrote:
    On 3/3/2024 2:40 PM, Richard Damon wrote:
    On 3/3/24 2:05 PM, olcott wrote:
    On 3/3/2024 6:15 AM, Richard Damon wrote:

    We are not "Computations", and in particular, we are not H.

    And Yes, (if we are smart) we can see that there is no answer that H
    can give and be correct.

    That there is no Ĥ.H that can correctly decide halting for ⟨Ĥ⟩ ⟨Ĥ⟩
    does not actually entail that there is no H that can do this.

    Since they are the EXACT SAME ALGORITHM, it does.

    Both H and Ĥ.H transition to their NO state when a correct and
    complete simulation of their input would cause their own infinite
    execution and otherwise transition to their YES state.

    Humans can see that this criterion derives different answers
    for Ĥ.H applied to ⟨Ĥ⟩ ⟨Ĥ⟩ than for H applied to ⟨Ĥ⟩ ⟨Ĥ⟩.



    But they must be the same algorithm, and thus give the same answer.

    You are just admitting that your H^.H isn't the needed copy and thus
    your whole argument is just a LIE.

    Yes, Humans can see that H is in a bind, but they also know that it is
    the same machine as H^.H so it needs to give the same wrong answer (or
    just doesn't answer), and your statements are just ignorant
    pathological lies.

    We can also see that H needs to make its decision before it sees the
    decision that H^.H makes as there is no number N such that N+1 < N,
    which would be needed for it to see the results before it makes its
    decision.

    You are just proving that you are just an ignorant pathological liar.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Mikko@21:1/5 to olcott on Mon Mar 4 12:12:42 2024
    On 2024-03-02 16:24:45 +0000, olcott said:

    *This principle seems to be sound*
    Two identical machines must derive the same result when
    applied to the same input.

    It quite self-evidently is, as it follows from the meanings of
    "identical" and "same" and other words.

    Of course, two physical machines are never exactly identical
    so one may malfunction in a way the other doesn't.

    --
    Mikko

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Mon Mar 4 19:16:58 2024
    On 3/4/24 2:05 PM, olcott wrote:
    On 3/4/2024 4:12 AM, Mikko wrote:
    On 2024-03-02 16:24:45 +0000, olcott said:

    *This principle seems to be sound*
    Two identical machines must derive the same result when
    applied to the same input.

    It quite self-evidently is, as it follows from the meanings of
    "identical" and "same" and other words.

    Of course, two physical machines are never exactly identical
    so one may malfunction in a way the other doesn't.


    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt

    Both H and Ĥ.H transition to their NO state when a correct and
    complete simulation of their input would cause their own infinite
    execution and otherwise transition to their YES state.

    This has different results when Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ is embedded in
    a machine that copies its input than when H ⟨Ĥ⟩ ⟨Ĥ⟩ is not
    embedded in such a machine. The infinite loop appended to
    Ĥ.H has no effect on this.


    How does it have different results?

    They are (or at least are claimed to be) the EXACT same algorithm, and
    thus the exact same set of deterministic instructions, processing the
    exact same input.

    I guess you are just admitting that you are either a total idiot
    thinking that the impossible is going to happen, or are just an
    ignorant pathological lying idiot (or likely BOTH).

    You are just proving your STUPIDITY, and that you have ZERO regard for
    what is TRUE.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Mon Mar 4 21:12:30 2024
    On 3/4/24 7:56 PM, olcott wrote:
    On 3/4/2024 6:16 PM, Richard Damon wrote:
    On 3/4/24 2:05 PM, olcott wrote:

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt

    Both H and Ĥ.H transition to their NO state when a correct and
    complete simulation of their input would cause their own infinite
    execution and otherwise transition to their YES state.

    This has different results when Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ is embedded in
    a machine that copies its input than when H ⟨Ĥ⟩ ⟨Ĥ⟩ is not
    embedded in such a machine. The infinite loop appended to
    Ĥ.H has no effect on this.


    How does it have different results?

    They are (or at least are claimed to be) the EXACT same algorithm, and
    thus the exact same set of deterministic instructions, processing the
    exact same input.


    The input to Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ can cause it to fail to halt.
    The input to Ĥ ⟨Ĥ⟩ ⟨Ĥ⟩ cannot possibly cause it to fail to halt.
    Can you see this?

IF H waits to see what H^.H does, then H^.H will also wait to see what
its simulated (H^) (H^) does when it gets to the simulated H^.H (H^) (H^)
and NOBODY ever halts to give an answer.
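The mutual-waiting regress described above can be sketched in Python. The names H and H_hat and the depth cutoff are illustrative stand-ins: an actual simulating decider would recurse without any bound at all.

```python
def H(program, data, depth=0, max_depth=5):
    """Hypothetical 'simulating decider' that waits on a complete
    simulation of its input. When the input re-invokes H on itself,
    the simulation recurses without bound; max_depth merely stands in
    for the unbounded regress."""
    if depth >= max_depth:
        raise RecursionError("simulation never reaches a verdict")
    # "Simulate" the input by invoking it, tracking nesting depth.
    return program(data, depth + 1, max_depth)

def H_hat(data, depth=0, max_depth=5):
    """The Linz-style H^: asks the embedded copy of H about itself."""
    return H(H_hat, data, depth, max_depth)

try:
    H(H_hat, H_hat)
except RecursionError as e:
    print(e)  # the nested simulations wait on each other forever
```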

You seem to conveniently forget this fact, which is just a form of LYING.


    I guess you are just admitting that you are either a total idiot
thinking that the impossible is going to happen, or are just an
    ignorant pathological lying idiot (or likely BOTH).

You are just proving your STUPIDITY, and that you have ZERO regard
    for what is TRUE.

    Not at all. Correcting the incorrect foundation of the
    notion of analytic truth is my whole reason for pursuing
    these things.

Then DO SO, and not try to work inside a system you claim is incorrect.


    When you take the incorrect foundation as your basis
    you cannot see its error.


    And trying to change the foundation while keeping what was built on it
is impossible.

    As I have said, you are WELCOME to start at your new foundation and
build up; just remember, you can't just use anything that was built on
    the foundation you rejected. You need to start ALL OVER.

    I don't think you understand this, because you just don't understand how
    logic works. This is what has turned you into the ignorant pathological
    lying idiot you have made yourself.


    HINT: This means start by listing out ALL of the basic truths you are
    going to accept, and the rules of logic you are going to allow, and then
    see what you can actually prove from it.

    Of course, this means you may need to study the systems you are
    rejecting to understand what parts you might want to keep and what parts
you are rejecting.

  • From Mikko@21:1/5 to olcott on Tue Mar 5 11:17:22 2024
    On 2024-03-05 00:56:19 +0000, olcott said:

    Correcting the incorrect foundation of the notion of analytic
    truth is my whole reason for pursuing these things.

    Correcting means replacing. If you replace the foundation you
    must also replace the term "analytic truth" as you cannot
replace the meaning of an existing term.

    --
    Mikko

  • From Richard Damon@21:1/5 to olcott on Tue Mar 5 06:33:46 2024
    XPost: sci.logic

    On 3/5/24 12:06 AM, olcott wrote:
    On 3/4/2024 8:12 PM, Richard Damon wrote:
    On 3/4/24 7:56 PM, olcott wrote:
    On 3/4/2024 6:16 PM, Richard Damon wrote:
    On 3/4/24 2:05 PM, olcott wrote:
    On 3/4/2024 4:12 AM, Mikko wrote:
    On 2024-03-02 16:24:45 +0000, olcott said:

    *This principle seems to be sound*
    Two identical machines must derive the same result when
    applied to the same input.

    It quite self-evidently is, as it follows from the meanings of
    "identical" and "same" and other words.

    Of course, two physical machines are never exactly identical
    so one may malfunction in a way the other doesn't.


    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt

    Both H and Ĥ.H transition to their NO state when a correct and
    complete simulation of their input would cause their own infinite
    execution and otherwise transition to their YES state.

This has different results when Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ is embedded in
a machine that copies its input than when H ⟨Ĥ⟩ ⟨Ĥ⟩ is not
embedded in such a machine. The infinite loop appended to
    Ĥ.H has no effect on this.


How does it have different results?

    They are (or at least are claimed to be) the EXACT same algorithm,
    and thus the exact same set of deterministic instructions,
    processing the exact same input.


    The input to Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ can cause it to fail to halt.
The input to Ĥ ⟨Ĥ⟩ ⟨Ĥ⟩ cannot possibly cause it to fail to halt.
Can you see this?

IF H waits to see what H^.H does, then H^.H will also wait to see what
its simulated (H^) (H^) does when it gets to the simulated H^.H (H^)
(H^) and NOBODY ever halts to give an answer.


    Both H and Ĥ.H transition to their NO state when a correct and
    complete simulation of their input would cause their own infinite
    execution and otherwise transition to their YES state.

    When we much more clearly understand that H and Ĥ are in
    separate memory addresses of a RASP machine where every
    P knows its own address then it is much easier to see
that H ⟨Ĥ⟩ ⟨Ĥ⟩ and Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ will meet their
identical criteria differently.

    A single RASP machine doesn't have multiple memory spaces.

A single RASP machine is just one single program

    You are just proving that you are just a stupid ignorant pathological
    lying idiot.


You seem to conveniently forget this fact, which is just a form of LYING.


    I guess you are just admitting that you are either a total idiot
thinking that the impossible is going to happen, or are just an
    ignorant pathological lying idiot (or likely BOTH).

You are just proving your STUPIDITY, and that you have ZERO regard
    for what is TRUE.

    Not at all. Correcting the incorrect foundation of the
    notion of analytic truth is my whole reason for pursuing
    these things.

Then DO SO, and not try to work inside a system you claim is incorrect.

    That is why I am tentatively switching to RASP machines
    where every P knows its own address.

Which means your "programs" are no longer necessarily Computations,
    unless you have been careful to include ALL their inputs in their
    definition.



    When you take the incorrect foundation as your basis
    you cannot see its error.


    And trying to change the foundation while keeping what was built on it
is impossible.

    As I have said, you are WELCOME to start at your new foundation and
build up; just remember, you can't just use anything that was built on
    the foundation you rejected. You need to start ALL OVER.


    I only reject the limitations of Turing Machines compared
    to RASP machines where every P knows its own address.

    Because you need your P to not be the required computation, so you can
    lie about it.


    I don't think you understand this, because you just don't understand
    how logic works. This is what has turned you into the ignorant
    pathological lying idiot you have made yourself.


    Or I understand that the foundations of logic have errors
    that cause my views to diverge from the herd.

    So, why are you using it?

    You have a choice, use the system as it is defined, or create a totally
    new system. Yu



    HINT: This means start by listing out ALL of the basic truths you are
    going to accept, and the rules of logic you are going to allow, and
    then see what you can actually prove from it.


    For computer science I only need a RASP machine where
    every P knows its own address.

    When we do this then H1 is the decider and H/D is
    the counter-example input.

    Not if the "Decider" used the RASP structure to be a non-computation
    (i.e., use a hidden input, like its address).


    Of course, this means you may need to study the systems you are
    rejecting to understand what parts you might want to keep and what
parts you are rejecting.

    If a TM can do what H1(D,D) can do then my refutation
of the halting problem does not refute Church/Turing;
    otherwise it does refute Church/Turing.


    Nope, because your "Machines" are NOT "Computations", since they use a
    "hidden input".

Make it clear that your H is actually a function of its own address,
and suddenly Church/Turing shows the right result, and your "Counter
    Example" is proven to be a lie.

    You don't seem to understand that LYING about what you are doing (by
giving the functions a hidden input) doesn't prove anything.

  • From Richard Damon@21:1/5 to olcott on Tue Mar 5 20:27:14 2024
    XPost: sci.logic

    On 3/5/24 4:56 PM, olcott wrote:
    On 3/5/2024 5:33 AM, Richard Damon wrote:
    On 3/5/24 12:06 AM, olcott wrote:
    On 3/4/2024 8:12 PM, Richard Damon wrote:
    On 3/4/24 7:56 PM, olcott wrote:
    On 3/4/2024 6:16 PM, Richard Damon wrote:
    On 3/4/24 2:05 PM, olcott wrote:
    On 3/4/2024 4:12 AM, Mikko wrote:
    On 2024-03-02 16:24:45 +0000, olcott said:

    *This principle seems to be sound*
    Two identical machines must derive the same result when
    applied to the same input.

It quite self-evidently is, as it follows from the meanings of
"identical" and "same" and other words.

    Of course, two physical machines are never exactly identical
    so one may malfunction in a way the other doesn't.


    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqy ∞ // Ĥ applied to ⟨Ĥ⟩ halts
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.Hq0 ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.Hqn // Ĥ applied to ⟨Ĥ⟩ does not halt

    Both H and Ĥ.H transition to their NO state when a correct and
complete simulation of their input would cause their own infinite
execution and otherwise transition to their YES state.

This has different results when Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ is embedded in
a machine that copies its input than when H ⟨Ĥ⟩ ⟨Ĥ⟩ is not
embedded in such a machine. The infinite loop appended to
    Ĥ.H has no effect on this.


How does it have different results?

They are (or at least are claimed to be) the EXACT same algorithm,
and thus the exact same set of deterministic instructions,
    processing the exact same input.


    The input to Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ can cause it to fail to halt.
The input to Ĥ ⟨Ĥ⟩ ⟨Ĥ⟩ cannot possibly cause it to fail to halt.
    Can you see this?

IF H waits to see what H^.H does, then H^.H will also wait to see
what its simulated (H^) (H^) does when it gets to the simulated H^.H
(H^) (H^) and NOBODY ever halts to give an answer.


    Both H and Ĥ.H transition to their NO state when a correct and
    complete simulation of their input would cause their own infinite
    execution and otherwise transition to their YES state.

    When we much more clearly understand that H and Ĥ are in
    separate memory addresses of a RASP machine where every
    P knows its own address then it is much easier to see
    that H ⟨Ĥ⟩ ⟨Ĥ⟩ and Ĥ.H ⟨Ĥ⟩ ⟨Ĥ⟩ will meet their identical
    criteria differently.

    A single RASP machine doesn't have multiple memory spaces.

    No machine has multiple memory spaces.

    Nope, the machine you are using has multiple memory spaces created by
    the MMU it is using. That is the major purpose of using a "protected
    mode" opererating system.

    Note, the "RASP" program model doesn't include hooks to do that.

    Note also, a proper simulator will also CREATE a separate (virtual)
    "address space" for the simulated program, so the simulator doesn't
interfere with the program being simulated.

    This feature is missing in your x86UTM program, which makes it not a
    proper simulator, but that missing virtual address space was needed by
    your design so you can detect the "recursive" simulation by address
    detection. Only by forcing all calls to H to be calls to the same
    specific address can you activate your cheat.


A single RASP machine is just one single program


    A point of confusion: two sets of instructions: Unlike the UTM,
    the RASP model has two sets of instructions – the state machine
    table of instructions (the "interpreter") and the "program" in
the holes. The two sets do not have to be drawn from the same set.
https://en.wikipedia.org/wiki/Random-access_stored-program_machine
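The "two sets of instructions" the article describes can be seen in a toy interpreter: the fixed dispatch loop below plays the role of the RASP state machine, while the numbers in `memory` are the program "in the holes". The opcodes are invented for this sketch, not taken from the article:

```python
def run_rasp(memory, pc=0, steps=100):
    """Minimal RASP interpreter sketch. This fixed loop is one
    instruction set (the 'interpreter'); the Natural Numbers stored
    in `memory` are the other (the stored program)."""
    acc = 0
    for _ in range(steps):
        op = memory[pc]
        if op == 0:                        # HALT
            return acc
        elif op == 1:                      # LOAD immediate
            acc = memory[pc + 1]; pc += 2
        elif op == 2:                      # ADD from address
            acc += memory[memory[pc + 1]]; pc += 2
        elif op == 3:                      # STORE to address
            memory[memory[pc + 1]] = acc; pc += 2
    raise RuntimeError("step budget exhausted")

# program "in the holes": load 2, add the value at cell 8 (40), halt
prog = [1, 2, 2, 8, 0, 0, 0, 0, 40]
print(run_rasp(prog))  # -> 42
```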

    Nope, that article is incorrect.

The RASP MACHINE itself is a theoretical programming model (just like
    a Turing Machine is) with a memory space defined as an "infinite number"
of cells that can store any Natural Number in them, indexed by a
    Natural Number to select the memory location to use.

    There is also a set of instructions mapping the Natural values stored in
the cell pointed to by the "Program Counter" register (and using
    following locations for some parameters).

    The article is going by the concept that the proof that a Turing Machine
can do anything that a RASP machine can do is to build a RASP machine
simulator out of a Turing Machine, similar to the concept of a UTM, but
simulating a different "language" of computation.

    So if you are talking about a "RASP" program as simulated by a Turing
    Machine, YES, you have two programs, at different levels of description.
    The RASP Simulator, described as a Turing Machine, which is a set of
Tuples of:
    (Current State, Current Symbol, Next State, New Symbol, Tape Operation)

    This is a FIXED program (and perhaps some fixed data on the tape to
    support it) that defines the RASP Machine architecture for the second
    part, the RASP program, which becomes just a description of the program
in the format needed for the RASP simulator. The instruction set for
this WILL be different from the Turing Machine's.


When P also implements an interpreter then it and
its slave are not at the exact same physical location.

    And if you do that, then P can't compare the address of the 'H' being
    called to the address of the H that is doing the simulation.


    You are just proving that you are just a stupid ignorant pathological
    lying idiot.

    *By saying that you are proving that you*
    *are biased against an honest dialogue*

    No, I am stating a FACT that you are demonstrating ZERO understanding of
    the things you are talking about, and continue to just LIE about what
    you are doing.

    Start being Honest, and I will stop calling you a LIAR.

    Note, this means actually LOOKING at the rebuttals, and either accepting
    them or showing an actual error in the rebuttal.

    Of course, that also means you need to accept the rules of the logic
    system that you are talking about, even if you think it is wrong.

    You can't be honest and also be lying about the rules, and the rules ARE
    what have been established in the field, unless you are clear that you
    are in another field, and have actually DEFINED that field.



You seem to conveniently forget this fact, which is just a form of LYING.

    I guess you are just admitting that you are either a total idiot
thinking that the impossible is going to happen, or are just an
    ignorant pathological lying idiot (or likely BOTH).

    You are just proving your  STUPIDITY, and that you have ZERO
regard for what is TRUE.

    Not at all. Correcting the incorrect foundation of the
    notion of analytic truth is my whole reason for pursuing
    these things.

Then DO SO, and not try to work inside a system you claim is incorrect.

    That is why I am tentatively switching to RASP machines
    where every P knows its own address.

Which means your "programs" are no longer necessarily Computations,
    unless you have been careful to include ALL their inputs in their
    definition.


    Alternatively every program has an implied
    input that cannot possibly be forbidden to it.

    And if it uses it, then it can't be a computation that doesn't have it
    as in input.

Since Computations are used to generate defined Mappings, if the
mapping doesn't have that factor as an input, then the algorithm that
is trying to compute it can't either.


    I am using your excellent feedback to continuously
    refine my position.

But you still seem to have no understanding of what a Computation
actually is.




    When you take the incorrect foundation as your basis
    you cannot see its error.


    And trying to change the foundation while keeping what was built on
it is impossible.

    As I have said, you are WELCOME to start at your new foundation and
build up; just remember, you can't just use anything that was built
    on the foundation you rejected. You need to start ALL OVER.


    I only reject the limitations of Turing Machines compared
    to RASP machines where every P knows its own address.

    Because you need your P to not be the required computation, so you can
    lie about it.

    Alternatively every program has an implied
    input that cannot possibly be forbidden to it.

Yes, it can be, if the program is to compute a specific mapping.

    If the mapping doesn't have that parameter, then the algorithm can't
    have it either.

    Would an algorithm that computes the factorial of its input be correct
    if it generated different values depending on where the program was
    loaded in memory, or what day of the week it was?
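That rhetorical question can be made concrete in Python. `bad_factorial` is a hypothetical example, invented here for illustration, of an "algorithm" whose output leaks its own memory address, so it no longer computes the factorial mapping at all:

```python
def factorial(n):
    """A proper computation: the output depends only on the input."""
    return 1 if n <= 1 else n * factorial(n - 1)

def bad_factorial(n):
    """Hypothetical counter-illustration: mixing in the function's own
    memory address (via id()) makes the result depend on where the
    code happens to live, not just on n."""
    return factorial(n) + (id(bad_factorial) % 2)

print(factorial(5))  # -> 120, every run, everywhere
# bad_factorial(5) may yield 120 or 121 depending on the address,
# so it does not compute the factorial mapping.
```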

    In the same way, the "Halting Status" of a given particular Computation
    (fully defined algorithm and input data) doesn't depend on which
    decider, or which version of a given decider, or what special secret
    input that decider got, as none of that affect that answer for that
    SPECIFIC input.

    Only by LYING and changing the question, from the actually OBJECTIVE
criteria to one that is subjective, can you try to make your point, but
    that is shown to be just a LIE, since you aren't working on the question
    that you claim you are, the ACTUAL Halting Problem.



    I don't think you understand this, because you just don't understand
    how logic works. This is what has turned you into the ignorant
    pathological lying idiot you have made yourself.


    Or I understand that the foundations of logic have errors
    that cause my views to diverge from the herd.

    So, why are you using it?


    If I started from scratch I would be long since
    dead of old age before making any progress.

    Likely.


    I am only redefining tiny key elements of the
    foundations that are causing the errors.

    Doesn't work that way.

That is what gets you in trouble, and proves that you are just that
ignorant hypocritical pathological lying idiot.


    For my immediate purposes in this dialogue I only
    need machines to always be able to know their own
    machine address.

Which then becomes not the required computation, and everything else
just becomes a LIE.

    You are just proving you are so ignorant you don't understand that.


    This can be implemented as simply as every TM is only
    executed by a master UTM that simulates the Turing
    Machine Description of this machine and the machine
    cannot be executed in any other way.

    Nope, you just need to be able to break the rules


    The master UTM becomes an operating system (like x86utm)
    for all of its Turing machines. If this Olcott machine
    can solve problems that Turing machines cannot solve
    then Church/Turing would seem to be refuted.

    But if it WAS actually a UTM, then its behavior wouldn't matter.

You are just caught in your own ignorant lies caused by your stupidity.


    You have a choice, use the system as it is defined, or create a
    totally new system. Yu



    HINT: This means start by listing out ALL of the basic truths you
    are going to accept, and the rules of logic you are going to allow,
    and then see what you can actually prove from it.


    For computer science I only need a RASP machine where
    every P knows its own address.

    When we do this then H1 is the decider and H/D is
    the counter-example input.

    Not if the "Decider" used the RASP structure to be a non-computation
    (i.e., use a hidden input, like its address).


    If every machine always has access to its own address
    and this cannot be denied to any machine then Olcott
    Machines would still be computations that are possibly
    more powerful than Turing machines.

But you couldn't actually express the Halting Question, or any actual
    question we might want to ask, as any Computation could vary its answer
from the correct one just by using its "address".

    In other words, you need to turn your logic system into a system of LIES.



    Of course, this means you may need to study the systems you are
    rejecting to understand what parts you might want to keep and what
parts you are rejecting.

    If a TM can do what H1(D,D) can do then my refutation
of the halting problem does not refute Church/Turing;
    otherwise it does refute Church/Turing.


    Nope, because your "Machines" are NOT "Computations", since they use a
    "hidden input".


    It simply becomes construed as an input to every machine.

    And thus, your system can't express a mapping that doesn't use that input.

Only if you actually ALLOW the mapping to say, "The output must match
    this, for all extra input values".

    Thus the definition of a Halt Decider H would be:

    H(<M>, d, a) goes to Qy if M(d) halts and to Qn if M(d) doesn't halt for
    ALL values of a.

Thus your H, which could be H(M,d,0), and H1, which could be H(M,d,1),
would be REQUIRED, to be a Halt Decider, to agree on the value for
every value of that third parameter (since it doesn't affect the
behavior of the machine described).

Thus, since H^ uses H(<H^>,<H^>,0) and that goes to qn and thus H^ will
halt, just because H(<H^>, <H^>,1) got the right answer, H isn't a halt
decider since it doesn't get the answer right for ALL input values,
    including that "Hidden" one.

    Making it explicit means it needs to be right, while when it was hidden,
you could try to make it only right for a specific copy (like you were
    doing).
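The requirement stated here, that a decider taking an extra parameter must give the same correct verdict for every value of that parameter, can be sketched as follows; the toy `halts` table and both deciders are invented purely for illustration:

```python
def halts(machine_desc, data):
    """The mapping being decided; a toy stand-in lookup table."""
    return {"loop": False, "ret": True}[machine_desc]

def H(machine_desc, data, a):
    """A pretend decider that lets the extra input `a` leak into its
    verdict -- exactly the cheat described above."""
    if a == 1:
        return halts(machine_desc, data)
    return not halts(machine_desc, data)

# H qualifies as a halt decider only if its verdict on a given
# (machine, data) pair is the same for ALL values of a:
verdicts = {H("loop", None, a) for a in range(3)}
print(len(verdicts) == 1)  # -> False: the verdict varies with a
```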


    Make it clear that your H is actually a function of its own address,
and suddenly Church/Turing shows the right result, and your "Counter
    Example" is proven to be a lie.

    Or Olcott machines simply refute Church/Turing.

    Unlikely.

Of course, as you have admitted, you don't think you have time to fully
    work out the logic.


    You don't seem to understand that LYING about what you are doing (by
giving the functions a hidden input) doesn't prove anything.

    When I propose alternatives to the dogma that you
    memorized I am not lying.


    You are if you change existing systems to be not the system they were.

That is like saying something about the United States, but first presuming
    that it has been changed to a dictatorship instead of our current
    "democracy".

    That is just a LIE.

    It is basically a LIE to say you can just change a fundamental and not
bother looking at the full effects of making that change, because in
    truth, you can't do that.
