• Re: Refuting the HP proofs (adapted for software engineers)[ Ordinary so

    From olcott@21:1/5 to Richard Damon on Sun Jun 5 22:41:23 2022
    XPost: comp.theory, sci.logic, sci.math

    On 6/5/2022 10:26 PM, Richard Damon wrote:
    On 6/5/22 11:03 PM, olcott wrote:
    On 6/5/2022 9:52 PM, Richard Damon wrote:

    On 6/5/22 10:37 PM, olcott wrote:
    On 6/5/2022 9:20 PM, Richard Damon wrote:
    On 6/5/22 10:11 PM, olcott wrote:
    On 6/5/2022 8:58 PM, Mike Terry wrote:
    On 06/06/2022 01:24, Jeff Barnett wrote:
    On 6/5/2022 5:59 PM, Mike Terry wrote:
    <..snip..>>>
    Sure.
    The question right now is what you would call a TM which
    evaluates the first 10 steps of a computation, and then does
    something else. What is it doing while evaluating those 10 steps?

    What would I call it? POOP! It just goes to show the accuracy
    and flexibility of Ben's acronym for any Peter-related concept.

    But PO didn't invent the concept of (partially) simulating a
    computation in order to compute certain properties of that
    computation!  It's been around since Turing's days, and is very
    useful.

    Mike.

    That is true. I am apparently the first one that ever thought this
    through well enough so that machine descriptions matching the
    following pattern could be correctly determined to be non-halting:

          For any program H that might determine if programs halt, a
          "pathological" program P, called with some input, can pass its
          own source and its input to H and then specifically do the
          opposite of what H predicts P will do. No H can exist that
          handles this case.
    https://en.wikipedia.org/wiki/Halting_problem

    Insofar as a partial simulation correctly predicts the behavior
    of a complete simulation, it can be used to recognize infinite
    behavior patterns.
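
    As a minimal illustration of that idea (a toy instruction set with
    hypothetical names, not the H under discussion), a step-limited
    simulator can prove non-halting for the one pattern it knows how to
    recognize, an unconditional jump to the current instruction, and
    report "unknown" otherwise:

    /* Toy sketch only: a partial simulator for a tiny instruction set.
     * It proves non-halting for one trivially sound pattern (a jump to
     * the current instruction, the x86 "EB FE" idiom that appears at
     * address 00001369 in the listing below) and gives up otherwise.    */
    #include <stdio.h>

    typedef enum { OP_NOP, OP_JMP, OP_HALT } Op;
    typedef struct { Op op; int target; } Insn;

    /* 1 = halts, 0 = provably never halts, -1 = undecided within budget */
    static int partial_simulate(const Insn *prog, int len, int max_steps)
    {
        int pc = 0;
        for (int step = 0; step < max_steps && pc >= 0 && pc < len; ++step) {
            Insn i = prog[pc];
            if (i.op == OP_HALT)
                return 1;                  /* reached a final state        */
            if (i.op == OP_JMP && i.target == pc)
                return 0;                  /* "jmp $" never leaves this pc */
            pc = (i.op == OP_JMP) ? i.target : pc + 1;
        }
        return -1;                         /* budget spent, no verdict     */
    }

    int main(void)
    {
        Insn spin[] = { { OP_NOP, 0 }, { OP_JMP, 1 } };   /* never halts  */
        Insn done[] = { { OP_NOP, 0 }, { OP_HALT, 0 } };  /* halts        */
        printf("spin: %d\n", partial_simulate(spin, 2, 100));   /* 0 */
        printf("done: %d\n", partial_simulate(done, 2, 100));   /* 1 */
        return 0;
    }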


    Except that it doesn't, since P(P) Halts if H(P,P) returns 0, while,
    by the DEFINITION of the requirements of the Halting Problem,
    H(P,P) needs to accept (return 1) if P(P) Halts,
    This seems to be brand new computer science that I just discovered.

    Previously no one understood that it was possible for the correct
    simulation of the input to H(P,P) to be computationally distinct
    from (thus not equivalent to) the direct execution of P(P).

    What definition of "correct" are you using?


    Ordinary software engineering proves that a correct and complete x86
    emulation of the input to H(P,P) never reaches its "ret" instruction.


    So, I guess this just shows that you don't actually know a definition
    that shows this, so this is just another of your lies.

    Ordinary software engineering proves that a correct and complete x86
    emulation of the input to H(P,P) never reaches its "ret" instruction.

    _P()
    [00001352](01)  55              push ebp
    [00001353](02)  8bec            mov ebp,esp
    [00001355](03)  8b4508          mov eax,[ebp+08]
    [00001358](01)  50              push eax      // push P
    [00001359](03)  8b4d08          mov ecx,[ebp+08]
    [0000135c](01)  51              push ecx      // push P
    [0000135d](05)  e840feffff      call 000011a2 // call H
    [00001362](03)  83c408          add esp,+08
    [00001365](02)  85c0            test eax,eax
    [00001367](02)  7402            jz 0000136b
    [00001369](02)  ebfe            jmp 00001369
    [0000136b](01)  5d              pop ebp
    [0000136c](01)  c3              ret
    Size in bytes:(0027) [0000136c]

    It is completely obvious that when H(P,P) correctly emulates its input
    it must emulate the first seven instructions of P. Because the seventh
    instruction of P repeats this process we can know with complete
    certainty that the emulated P never reaches its final “ret”
    instruction, thus never halts.
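
    A minimal sketch of the repetition being claimed (hypothetical names;
    this models a purely simulating H that never aborts, not the actual
    code under discussion): every nesting level re-reaches P's call to H
    and starts another level, so the cap below exists only so the
    demonstration itself terminates.

    /* Model of the claim for a never-aborting, purely simulating H:
     * each level re-reaches P's "call H" at 0000135d, so the tail of _P
     * (test eax / jz / ret) is never simulated at any level.            */
    #include <stdio.h>

    #define SHOW_LEVELS 5              /* demo cap, not part of the claim */

    static void simulate_P(int level)
    {
        /* the first seven instructions of _P: build the frame, push P
           twice, and call H(P,P)                                        */
        printf("level %d: P pushes P,P and calls H(P,P)\n", level);
        if (level >= SHOW_LEVELS) {
            printf("... and so on, one more level per simulation\n");
            return;                    /* cap reached: stop the demo      */
        }
        /* a never-aborting H now simulates its input, which is P(P) ... */
        simulate_P(level + 1);
        /* in the un-capped case control never returns here              */
    }

    int main(void)
    {
        simulate_P(1);
        return 0;
    }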



    --
    Copyright 2022 Pete Olcott

    "Talent hits a target no one else can hit;
    Genius hits a target no one else can see."
    Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Mon Jun 6 00:17:22 2022
    XPost: comp.theory, sci.logic, sci.math

    On 6/5/22 11:41 PM, olcott wrote:
    On 6/5/2022 10:26 PM, Richard Damon wrote:
    On 6/5/22 11:03 PM, olcott wrote:
    On 6/5/2022 9:52 PM, Richard Damon wrote:

    On 6/5/22 10:37 PM, olcott wrote:
    On 6/5/2022 9:20 PM, Richard Damon wrote:
    On 6/5/22 10:11 PM, olcott wrote:
    On 6/5/2022 8:58 PM, Mike Terry wrote:
    On 06/06/2022 01:24, Jeff Barnett wrote:
    On 6/5/2022 5:59 PM, Mike Terry wrote:
    <..snip..>>>
    Sure.
    The question right now is what you would call a TM which
    evaluates the first 10 steps of a computation, and then does
    something else. What is it doing while evaluating those 10 steps?

    What would I call it? POOP! It just goes to show the accuracy
    and flexibility of Ben's acronym for any Peter-related concept.

    But PO didn't invent the concept of (partially) simulating a
    computation in order to compute certain properties of that
    computation!  It's been around since Turing's days, and is very
    useful.

    Mike.

    That is true. I am apparently the first one that ever thought
    this through well enough so that machine descriptions matching
    the following pattern could be correctly determined to be
    non-halting:

          For any program H that might determine if programs halt, a
          "pathological" program P, called with some input, can pass its
          own source and its input to H and then specifically do the
          opposite of what H predicts P will do. No H can exist that
          handles this case.
    https://en.wikipedia.org/wiki/Halting_problem

    Insofar as a partial simulation correctly predicts the behavior
    of a complete simulation, it can be used to recognize infinite
    behavior patterns.


    Except that it doesn't, since P(P) Halts if H(P,P) returns 0, while,
    by the DEFINITION of the requirements of the Halting Problem,
    H(P,P) needs to accept (return 1) if P(P) Halts,
    This seems to be brand new computer science that I just discovered.

    Previously no one understood that it was possible for the correct
    simulation of the input to H(P,P) to be computationally distinct
    from (thus not equivalent to) the direct execution of P(P).

    What definition of "correct" are you using?


    Ordinary software engineering proves that a correct and complete x86
    emulation of the input to H(P,P) never reaches its "ret" instruction.


    So, I guess this just shows that you don't actually know a definition
    that shows this, so this is just another of your lies.

    Ordinary software engineering proves that a correct and complete x86
    emulation of the input to H(P,P) never reaches its "ret" instruction.

    Nope. Not if H(P,P) returns 0, as simple facts show that if H(P,P)
    returns 0 then P(P) will halt.

    Good engineering NEVER contradicts actual facts.


    _P()
    [00001352](01)  55              push ebp
    [00001353](02)  8bec            mov ebp,esp
    [00001355](03)  8b4508          mov eax,[ebp+08]
    [00001358](01)  50              push eax      // push P
    [00001359](03)  8b4d08          mov ecx,[ebp+08]
    [0000135c](01)  51              push ecx      // push P
    [0000135d](05)  e840feffff      call 000011a2 // call H
    [00001362](03)  83c408          add esp,+08
    [00001365](02)  85c0            test eax,eax
    [00001367](02)  7402            jz 0000136b
    [00001369](02)  ebfe            jmp 00001369
    [0000136b](01)  5d              pop ebp
    [0000136c](01)  c3              ret
    Size in bytes:(0027) [0000136c]

    It is completely obvious that when H(P,P) correctly emulates its input
    it must emulate the first seven instructions of P. Because the seventh
    instruction of P repeats this process we can know with complete
    certainty that the emulated P never reaches its final “ret”
    instruction, thus never halts.


    Maybe to you, but it is wrong (which might be why it is obvious to you).

    Let us start by assuming that H doesn't just abort its simulation until
    it has enough information to be obviously wrong.

    H(P,P) will first emulate the first 7 instructions of P as you say.

    Then it emulates the result of the call to H, so it emulates the start
    of the emulation of the input to this H, which is also P,P.

    That proceeds, as you almost say, through the emulation of the emulation
    of the first 7 instructions of P. While doing this, the outer emulation
    will notice that the machine it has been emulating has been checking
    conditions along the way.

    We then get to the point where the emulation of the emulation reaches
    the 2nd level call to H, and the top level emulator will see the
    emulator it is emulating checking whether things have recursed too far.

    At this point, the outer H has seen enough that it can see that if it
    had aborted its simulation as soon as it saw P calling H(P,P), matching
    that input to H against the input IT had, and calling that an
    infinite repeat pattern, it would have been wrong, as a correct
    emulation of that input would have continued and reached THIS point
    where the H it was emulating made that same decision.

    In fact, if the decider were smart enough, it could see that if it
    simulates N levels of calls and decides that this was enough to call
    the input infinitely recursive, then at the N+1th call (which it won't
    get to, but which a CORRECT COMPLETE emulator would), the emulator it
    is emulating would reach N calls and do the same as it did, thus
    showing that it was wrong to make that decision.

    The ONLY way that the correct emulation of the input never reaches the
    ret instruction is if H actually never aborts its emulation, and thus
    never actually gives the answer that the input is non-halting.

    So, the ONLY H that creates a non-halting input is the H that fails to
    give the answer that is only correct for that H.
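
    The mechanics behind that statement can be shown with a minimal,
    self-contained sketch (hypothetical stand-ins, not the code under
    discussion): once the H that P calls actually returns 0, the directly
    executed P(P) takes the jz branch and reaches its "ret", i.e. it halts.

    /* Whatever reasoning an H uses internally, if the H that P calls
     * returns 0, the directly executed P skips its infinite loop and
     * falls through to "ret".                                           */
    #include <stdio.h>
    #include <stdint.h>

    /* stand-in for an H that has already decided "non-halting" */
    static int H_returns_0(uintptr_t p, uintptr_t i)
    {
        (void)p; (void)i;
        return 0;
    }

    static void P(uintptr_t x)
    {
        if (H_returns_0(x, x))      /* returns 0, so the test is false    */
            for (;;) ;              /* the "jmp 00001369" branch: skipped */
        /* falls through: this is the "ret" at 0000136c                   */
    }

    int main(void)
    {
        P((uintptr_t)P);            /* same pointer-as-integer convention
                                       as the posted (u32)P casts         */
        printf("P(P) returned, i.e. it halted\n");
        return 0;
    }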

    Your brain just doesn't seem to be able to keep enough information to
    handle the levels that happen in this problem.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Mon Jun 6 10:28:55 2022
    XPost: comp.theory, sci.logic, sci.math

    On 6/5/2022 11:17 PM, Richard Damon wrote:

    On 6/5/22 11:41 PM, olcott wrote:
    On 6/5/2022 10:26 PM, Richard Damon wrote:
    On 6/5/22 11:03 PM, olcott wrote:
    On 6/5/2022 9:52 PM, Richard Damon wrote:

    On 6/5/22 10:37 PM, olcott wrote:
    On 6/5/2022 9:20 PM, Richard Damon wrote:
    On 6/5/22 10:11 PM, olcott wrote:
    On 6/5/2022 8:58 PM, Mike Terry wrote:
    On 06/06/2022 01:24, Jeff Barnett wrote:
    On 6/5/2022 5:59 PM, Mike Terry wrote:
    <..snip..>>>
    Sure.
    The question right now is what you would call a TM which
    evaluates the first 10 steps of a computation, and then does
    something else. What is it doing while evaluating those 10 steps?

    What would I call it? POOP! It just goes to show the accuracy
    and flexibility of Ben's acronym for any Peter-related concept.

    But PO didn't invent the concept of (partially) simulating a
    computation in order to compute certain properties of that
    computation!  It's been around since Turing's days, and is very
    useful.

    Mike.

    That is true. I am apparently the first one that ever thought
    this through well enough so that machine descriptions matching
    the following pattern could be correctly determined to be
    non-halting:

          For any program H that might determine if programs halt, a
          "pathological" program P, called with some input, can pass its
          own source and its input to H and then specifically do the
          opposite of what H predicts P will do. No H can exist that
          handles this case.
    https://en.wikipedia.org/wiki/Halting_problem

    Insofar as a partial simulation correctly predicts the behavior
    of a complete simulation, it can be used to recognize infinite
    behavior patterns.


    Except that it doesn't, since P(P) Halts if H(P,P) returns 0, while,
    by the DEFINITION of the requirements of the Halting Problem,
    H(P,P) needs to accept (return 1) if P(P) Halts,
    This seems to be brand new computer science that I just discovered.

    Previously no one understood that it was possible for the correct
    simulation of the input to H(P,P) to be computationally distinct
    from (thus not equivalent to) the direct execution of P(P).

    What definition of "correct" are you using?


    Ordinary software engineering proves that a correct and complete x86
    emulation of the input to H(P,P) never reaches its "ret" instruction.


    So, I guess this just shows that you don't actually know a definition
    that shows this, so this is just another of your lies.

    Ordinary software engineering proves that a correct and complete x86
    emulation of the input to H(P,P) never reaches its "ret" instruction.

    Nope. Not if H(P,P) returns 0, as simple facts show that if H(P,P)
    returns 0 then P(P) will halt.

    Good engineering NEVER contradicts actual facts.


    _P()
    [00001352](01)  55              push ebp
    [00001353](02)  8bec            mov ebp,esp
    [00001355](03)  8b4508          mov eax,[ebp+08]
    [00001358](01)  50              push eax      // push P
    [00001359](03)  8b4d08          mov ecx,[ebp+08]
    [0000135c](01)  51              push ecx      // push P
    [0000135d](05)  e840feffff      call 000011a2 // call H
    [00001362](03)  83c408          add esp,+08
    [00001365](02)  85c0            test eax,eax
    [00001367](02)  7402            jz 0000136b
    [00001369](02)  ebfe            jmp 00001369
    [0000136b](01)  5d              pop ebp
    [0000136c](01)  c3              ret
    Size in bytes:(0027) [0000136c]

    It is completely obvious that when H(P,P) correctly emulates its
    input it must emulate the first seven instructions of P. Because
    the seventh instruction of P repeats this process we can know with
    complete certainty that the emulated P never reaches its final “ret”
    instruction, thus never halts.


    Maybe to you, but it is wrong (which might be why it is obvious to you).



    Software engineers competent in C and the x86 language will verify
    that when H(P,P) correctly emulates its input with an x86 emulator
    this emulation would never stop running. This provides the basis for
    H(P,P) to correctly reject its input as non-halting.

    For any program H that might determine if programs halt, a
    "pathological" program P, called with some input, can pass its own
    source and its input to H and then specifically do the opposite of
    what H predicts P will do. No H can exist that handles this case.
    https://en.wikipedia.org/wiki/Halting_problem

    H determines the halt status of its input by watching the behavior of
    this input when it is correctly simulated by H using an x86 emulator.
    When H correctly matches an infinite behavior pattern it aborts the
    emulation of this input and returns 0.

    #include <stdint.h>
    #define u32 uint32_t

    void P(u32 x)
    {
      if (H(x, x))
        HERE: goto HERE;
      return;
    }

    int main()
    {
      Output("Input_Halts = ", H((u32)P, (u32)P));
    }
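
    Given the P above, the kind of repetition pattern such an H might try
    to match can be sketched as follows (a self-contained toy with
    hypothetical names, not the poster's actual code; whether matching
    this pattern is sound is exactly what the rest of the thread
    disputes):

    /* Toy pattern check: log each call event seen while simulating; if
     * the same callee is entered again with the same argument, report
     * the repetition (1 = matched, 0 = not matched).                    */
    #include <stdio.h>

    typedef struct { char callee; char arg; } CallEvent;

    static int matches_repeated_call(const CallEvent *trace, int n)
    {
        for (int i = 0; i < n; ++i)
            for (int j = i + 1; j < n; ++j)
                if (trace[i].callee == trace[j].callee &&
                    trace[i].arg    == trace[j].arg)
                    return 1;              /* same call seen twice        */
        return 0;
    }

    int main(void)
    {
        /* the trace a simulating H(P,P) would log: P calls H(P,P), and
           the simulated P calls H(P,P) again with identical arguments   */
        CallEvent trace[] = { { 'H', 'P' }, { 'H', 'P' } };
        printf("pattern matched: %d\n", matches_repeated_call(trace, 2));
        return 0;
    }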

    _P()
    [00001352](01)  55              push ebp
    [00001353](02)  8bec            mov ebp,esp
    [00001355](03)  8b4508          mov eax,[ebp+08]
    [00001358](01)  50              push eax      // push P
    [00001359](03)  8b4d08          mov ecx,[ebp+08]
    [0000135c](01)  51              push ecx      // push P
    [0000135d](05)  e840feffff      call 000011a2 // call H
    [00001362](03)  83c408          add esp,+08
    [00001365](02)  85c0            test eax,eax
    [00001367](02)  7402            jz 0000136b
    [00001369](02)  ebfe            jmp 00001369
    [0000136b](01)  5d              pop ebp
    [0000136c](01)  c3              ret
    Size in bytes:(0027) [0000136c]

    It is completely obvious that when H(P,P) correctly emulates its input
    it must emulate the first seven instructions of P. Because the seventh
    instruction repeats this process we can know with complete certainty
    that the emulated P never reaches its final “ret” instruction, thus
    never halts.





    --
    Copyright 2022 Pete Olcott

    "Talent hits a target no one else can hit;
    Genius hits a target no one else can see."
    Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to olcott on Mon Jun 6 21:04:34 2022
    XPost: comp.theory, sci.logic, sci.math

    On 6/6/22 11:28 AM, olcott wrote:
    On 6/5/2022 11:17 PM, Richard Damon wrote:

    On 6/5/22 11:41 PM, olcott wrote:
    On 6/5/2022 10:26 PM, Richard Damon wrote:
    On 6/5/22 11:03 PM, olcott wrote:
    On 6/5/2022 9:52 PM, Richard Damon wrote:

    On 6/5/22 10:37 PM, olcott wrote:
    On 6/5/2022 9:20 PM, Richard Damon wrote:
    On 6/5/22 10:11 PM, olcott wrote:
    On 6/5/2022 8:58 PM, Mike Terry wrote:
    On 06/06/2022 01:24, Jeff Barnett wrote:
    On 6/5/2022 5:59 PM, Mike Terry wrote:
    <..snip..>>>
    Sure.
    The question right now is what you would call a TM which
    evaluates the first 10 steps of a computation, and then does
    something else. What is it doing while evaluating those 10 steps?

    What would I call it? POOP! It just goes to show the accuracy
    and flexibility of Ben's acronym for any Peter-related concept.

    But PO didn't invent the concept of (partially) simulating a
    computation in order to compute certain properties of that
    computation!  It's been around since Turing's days, and is
    very useful.

    Mike.

    That is true. I am apparently the first one that ever thought
    this through well enough so that machine descriptions matching
    the following pattern could be correctly determined to be
    non-halting:

          For any program H that might determine if programs halt, a
          "pathological" program P, called with some input, can pass its
          own source and its input to H and then specifically do the
          opposite of what H predicts P will do. No H can exist that
          handles this case.
    https://en.wikipedia.org/wiki/Halting_problem

    Insofar as a partial simulation correctly predicts the
    behavior of a complete simulation, it can be used to recognize
    infinite behavior patterns.


    Except that it doesn't, since P(P) Halts if H(P,P) returns 0, while,
    by the DEFINITION of the requirements of the Halting Problem,
    H(P,P) needs to accept (return 1) if P(P) Halts,
    This seems to be brand new computer science that I just discovered.

    Previously no one understood that it was possible for the correct
    simulation of the input to H(P,P) to be computationally distinct
    from (thus not equivalent to) the direct execution of P(P).

    What definition of "correct" are you using?


    Ordinary software engineering proves that a correct and complete
    x86 emulation of the input to H(P,P) never reaches its "ret"
    instruction.


    So, I guess this just shows that you don't actually know a
    definition that shows this, so this is just another of your lies.

    Ordinary software engineering proves that a correct and complete x86
    emulation of the input to H(P,P) never reaches its "ret" instruction.

    Nope. Not if H(P,P) returns 0, as simple facts show that if H(P,P)
    returns 0 then P(P) will halt.

    Good engineering NEVER contradicts actual facts.


    _P()
    [00001352](01)  55              push ebp
    [00001353](02)  8bec            mov ebp,esp
    [00001355](03)  8b4508          mov eax,[ebp+08]
    [00001358](01)  50              push eax      // push P
    [00001359](03)  8b4d08          mov ecx,[ebp+08]
    [0000135c](01)  51              push ecx      // push P
    [0000135d](05)  e840feffff      call 000011a2 // call H
    [00001362](03)  83c408          add esp,+08
    [00001365](02)  85c0            test eax,eax
    [00001367](02)  7402            jz 0000136b
    [00001369](02)  ebfe            jmp 00001369
    [0000136b](01)  5d              pop ebp
    [0000136c](01)  c3              ret
    Size in bytes:(0027) [0000136c]

    It is completely obvious that when H(P,P) correctly emulates its
    input it must emulate the first seven instructions of P. Because
    the seventh instruction of P repeats this process we can know with
    complete certainty that the emulated P never reaches its final “ret”
    instruction, thus never halts.


    Maybe to you, but it is wrong (which might be why it is obvious to you).



    Software engineers competent in C and the x86 language will verify
    that when H(P,P) correctly emulates its input with an x86 emulator
    this emulation would never stop running. This provides the basis for
    H(P,P) to correctly reject its input as non-halting.

    Yes, when H is actually defined to correctly emulate its input, which
    means, BY DEFINITION, that it doesn't stop its emulation until it
    reaches a final state, then it can be shown that this emulation never
    stops running.

    This can NOT be used to show that any other H is correct to say that the
    P built on that other H is non-halting, as it is a different program P,
    since the program P includes the H that it is built on.

    That you keep missing this fact speaks of your mental ability (or lack
    thereof).


    For any program H that might determine if programs halt, a
    "pathological" program P, called with some input, can pass its own
    source and its input to H and then specifically do the opposite of
    what H predicts P will do. No H can exist that handles this case.
    https://en.wikipedia.org/wiki/Halting_problem

    H determines the halt status of its input by watching the behavior of
    this input when it is correctly simulated by H using an x86 emulator.
    When H correctly matches an infinite behavior pattern it aborts the
    emulation of this input and returns 0.

    Except that it can't see a full correct simulation of its input, and
    the partial simulation that it sees never shows an actual correctly
    defined infinite behavior pattern, as can be proved by the fact that if
    you include that pattern in H, then P(P), for the P built on the H that
    includes that pattern and returns 0 for H(P,P), will Halt when the H
    that it calls returns that answer.

    You can't use the partial simulation that H used, as partial simulations
    do not, by themselves, prove non-halting, and for a pattern to be
    correctly determined to be non-halting, ALL programs that show that
    pattern must be non-halting. But as described above, ANY pattern that H
    sees in its simulation of its input to H(P,P), if added to its list of
    patterns to call non-halting, will produce a Halting P(P), and thus is
    proved not to be correct.
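
    A minimal, self-contained model of that consequence (hypothetical
    names; direct execution plus setjmp stands in for x86 emulation, so
    this is only a sketch of the mechanics both posts describe, not the
    actual code under discussion): the H below aborts its simulation as
    soon as it sees the nested H(P,P) call and returns 0, and the
    directly executed P(P) built on that H then reaches its return.

    /* H "simulates" its input by running it and aborts via longjmp the
     * moment the simulated input calls H with the same argument again
     * (the repeated-call pattern), then returns 0. The directly executed
     * P built on this H therefore skips its loop and returns.           */
    #include <stdio.h>
    #include <setjmp.h>

    static jmp_buf abort_simulation;
    static int     simulating = 0;

    static void P(void);   /* P's own address is implicit in this sketch */

    static int H(void (*p)(void), void (*i)(void))
    {
        (void)p;                         /* p and i are the same P here   */
        if (simulating)                  /* nested H(P,P): pattern fires  */
            longjmp(abort_simulation, 1);
        simulating = 1;
        if (setjmp(abort_simulation)) {  /* simulation was aborted        */
            simulating = 0;
            return 0;                    /* report: input does not halt   */
        }
        i();                             /* "simulate" P(P) by running it */
        simulating = 0;
        return 1;                        /* input reached its return      */
    }

    static void P(void)
    {
        if (H(P, P))
            for (;;) ;                   /* loop only if H reported 1     */
        /* otherwise fall through and return                              */
    }

    int main(void)
    {
        P();                             /* direct execution of P(P)      */
        printf("P(P) reached its return, i.e. it halted\n");
        return 0;
    }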



        #include <stdint.h>
        #define u32 uint32_t

        void P(u32 x)
        {
          if (H(x, x))
            HERE: goto HERE;
          return;
        }

        int main()
        {
          Output("Input_Halts = ", H((u32)P, (u32)P));
        }

        _P()
        [00001352](01)  55              push ebp
        [00001353](02)  8bec            mov ebp,esp
        [00001355](03)  8b4508          mov eax,[ebp+08]
        [00001358](01)  50              push eax      // push P
        [00001359](03)  8b4d08          mov ecx,[ebp+08]
        [0000135c](01)  51              push ecx      // push P
        [0000135d](05)  e840feffff      call 000011a2 // call H
        [00001362](03)  83c408          add esp,+08
        [00001365](02)  85c0            test eax,eax
        [00001367](02)  7402            jz 0000136b
        [00001369](02)  ebfe            jmp 00001369
        [0000136b](01)  5d              pop ebp
        [0000136c](01)  c3              ret
        Size in bytes:(0027) [0000136c]

    It is completely obvious that when H(P,P) correctly emulates its input
    it must emulate the first seven instructions of P. Because the seventh
    instruction repeats this process we can know with complete certainty
    that the emulated P never reaches its final “ret” instruction, thus
    never halts.


    It may be "obvious" to you, but "obvious" is the source of many errors.
    You need to deal with what is PROVEN, not with what is "obvious".

    You apparently don't actually believe in your own rules: don't YOU say
    that all Truths need to be proven? Since you can't actually PROVE
    that rule, you can't claim it to be true.

    The fact that it is proven FALSE just shows your Hypocrisy.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)