• Re: Refuting the HP proofs (adapted for software engineers)[ brand new computer science ]

    From olcott@21:1/5 to Richard Damon on Sun Jun 5 21:37:53 2022
    XPost: comp.theory, sci.logic, sci.math

    On 6/5/2022 9:20 PM, Richard Damon wrote:
    On 6/5/22 10:11 PM, olcott wrote:
    On 6/5/2022 8:58 PM, Mike Terry wrote:
    On 06/06/2022 01:24, Jeff Barnett wrote:
    On 6/5/2022 5:59 PM, Mike Terry wrote:
    <..snip..>
    Sure.
    The question right now is what you would call a TM which evaluates
    the first 10 steps of a computation, and then does something else.
    What is it doing while evaluating those 10 steps?

    What would I call it? POOP! It just goes to show the accuracy and
    flexibility of Ben's acronym for any Peter-related concept.


    But PO didn't invent the concept of (partially) simulating a
    computation in order to compute certain properties of that
    computation!  It's been around since Turing's days, and is very useful.
    Mike.

    That is true. I am apparently the first one that ever thought this
    through well enough so that machine descriptions matching the
    following pattern could be correctly determined to be non-halting:

          For any program H that might determine if programs halt, a
          "pathological" program P, called with some input, can pass its own
          source and its input to H and then specifically do the opposite of
          what H predicts P will do. No H can exist that handles this case.
    https://en.wikipedia.org/wiki/Halting_problem
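
    A minimal C sketch of the construction described in that quote (H and P
    below are placeholders standing in for any claimed halt decider and its
    "pathological" caller, not anyone's actual code; the stub body of H is
    arbitrary, since the whole point is that no body can work, and the
    sketch is meant only to compile, not to be a finished program):

    /* H(p, i) is supposed to return 1 if program p halts on input i,
       and 0 if it does not.  The body here is an arbitrary placeholder
       so that the sketch compiles. */
    static int H(void *p, void *i)
    {
        (void)p; (void)i;
        return 1;
    }

    /* The "pathological" program: do the opposite of whatever H predicts
       about P applied to its own description. */
    static void P(void *x)
    {
        if (H(x, x))       /* H predicts P(P) halts ...           */
            for (;;) ;     /* ... so P loops forever              */
                           /* otherwise H predicts P(P) loops,    */
                           /* so P returns (halts) right here     */
    }

    /* Calling P with its own description then does the opposite of
       whatever H(P, P) answered, which is why no H gets this case right. */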

    In that a partial simulation does correctly predict the behavior of a
    complete simulation it can be used to recognize infinite behavior
    patterns.


    Except that it doesn't since P(P) Halts if H(P,P) returns 0, which, by
    the DEFINITION of the requirements of the Halting Problem, H(P,P) needs
    to accept (return 1) if P(P) Halts,

    This seems to be brand new computer science that I just discovered.

    Previously no one understood that it was possible for the correct
    simulation of the input to H(P,P) to be computationally distinct from
    (thus not equivalent to) the direct execution of P(P).

    Because of this all of the computer science textbooks refer to the
    halting behavior of P(P) as what must be decided by the halt decider.

    This same computer science also knows that a decider must compute the
    mapping of its input finite string to an accept or reject state on the
    basis of a property of this input encoded in this finite string.
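
    As a concrete reminder of what that means, here is a trivial decider for
    one decidable property (whether the input string contains an even number
    of '1' characters).  The example and its names are invented here purely
    to illustrate "finite string in, accept or reject out":

    #include <stdio.h>
    #include <string.h>

    /* Returns 1 (accept) iff the input contains an even number of '1's,
       0 (reject) otherwise.  It halts on every finite string, which is
       what makes it a decider. */
    static int decide_even_ones(const char *input)
    {
        int ones = 0;
        for (size_t i = 0; i < strlen(input); i++)
            if (input[i] == '1')
                ones++;
        return ones % 2 == 0;
    }

    int main(void)
    {
        printf("%d\n", decide_even_ones("1011"));  /* three '1's: reject (0) */
        printf("%d\n", decide_even_ones("1001"));  /* two '1's:   accept (1) */
        return 0;
    }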

    THE FOLLOWING CRITERION ALWAYS WORKS
    H computes the mapping from its input finite strings to its accept or
    reject state on the basis of the actual behavior specified by the actual
    input as measured by the correct UTM simulation of this input by H.

    It has been completely proven that partial simulations do correctly
    predict the behavior of some complete simulations.
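
    The narrow claim that a partial simulation can correctly establish
    non-halting for some inputs is easy to illustrate.  Here is a minimal
    sketch for a toy machine model invented purely for this example: a
    deterministic step function on a small finite state space, so revisiting
    a state provably means the complete run never halts.

    #include <stdio.h>

    #define N 8            /* states are 0..N-1; state N means "halted" */
    enum { HALTED = N };

    typedef int (*step_fn)(int state);   /* returns next state or HALTED */

    /* Partial simulation: at most N+1 steps are ever taken.  Because the
       toy machine has no memory besides its current state, seeing any
       state twice proves the complete run can never halt, so the verdict
       returned here is always correct. */
    static int halts(step_fn step, int start)
    {
        int seen[N] = {0};
        int s = start;
        while (s != HALTED) {
            if (seen[s])
                return 0;          /* cycle detected: never halts */
            seen[s] = 1;
            s = step(s);
        }
        return 1;                  /* reached the halt state */
    }

    static int counts_down(int s) { return s == 0 ? HALTED : s - 1; }
    static int ping_pong(int s)   { return s == 0 ? 1 : 0; }

    int main(void)
    {
        printf("counts_down(3): %s\n", halts(counts_down, 3) ? "halts" : "never halts");
        printf("ping_pong(0):   %s\n", halts(ping_pong, 0)   ? "halts" : "never halts");
        return 0;
    }

    This only covers machines whose entire configuration fits in that tiny
    state space; it is meant as an illustration of the bounded-simulation
    idea, nothing more.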

    --
    Copyright 2022 Pete Olcott

    "Talent hits a target no one else can hit;
    Genius hits a target no one else can see."
    Arthur Schopenhauer

  • From Richard Damon@21:1/5 to olcott on Sun Jun 5 22:52:58 2022
    XPost: comp.theory, sci.logic, sci.math

    On 6/5/22 10:37 PM, olcott wrote:
    On 6/5/2022 9:20 PM, Richard Damon wrote:
    On 6/5/22 10:11 PM, olcott wrote:
    On 6/5/2022 8:58 PM, Mike Terry wrote:
    On 06/06/2022 01:24, Jeff Barnett wrote:
    On 6/5/2022 5:59 PM, Mike Terry wrote:
    <..snip..>
    Sure.
    The question right now is what you would call a TM which evaluates
    the first 10 steps of a computation, and then does something else.
    What is it doing while evaluating those 10 steps?

    What would I call it? POOP! It just goes to show the accuracy and
    flexibility of Ben's acronym for any Peter-related concept.


    But PO didn't invent the concept of (partially) simulating a
    computation in order to compute certain properties of that
    computation!  It's been around since Turing's days, and is very useful.
    Mike.

    That is true. I am apparently the first one that ever thought this
    through well enough so that machine descriptions matching the
    following pattern could be correctly determined to be non-halting:

          For any program H that might determine if programs halt, a
          "pathological" program P, called with some input, can pass its own
          source and its input to H and then specifically do the opposite of
          what H predicts P will do. No H can exist that handles this case.
    https://en.wikipedia.org/wiki/Halting_problem

    In that a partial simulation does correctly predict the behavior of a
    complete simulation it can be used to recognize infinite behavior
    patterns.


    Except that it doesn't since P(P) Halts if H(P,P) returns 0, which, by
    the DEFINITION of the requirements of the Halting Problem, H(P,P)
    needs to accept (return 1) if P(P) Halts,

    This seems to be brand new computer science that I just discovered.

    Previously no one understood that it was possible for the correct
    simulation of the input to H(P,P) to be computationally distinct from
    (thus not equivalent to) the direct execution of P(P).

    What definition of "Correct" are you using?

    That is like saying I can be correct in saying 1+2 = 4 if I just redefine
    1+2 to be 4.

    Note, if your "correct" deviates IN ANY WAY from the UTM definition, you
    can't use it in your definition of the Halting Problem.


    Because of this all of the computer science textbooks refer to the
    halting behavior of P(P) as what must be decided by the halt decider.

    Because that IS what must be decided by the Halt Decider.


    This same computer science also knows that a decider must compute the
    mapping of its input finite string to an accept or reject state on the
    basis of a property of this input encoded in this finite string.

    Right, and the property is the Halting status of UTM(P,P).

    That is a FULLY DEFINED property.


    THE FOLLOWING CRITERION ALWAYS WORKS
    H computes the mapping from its input finite strings to its accept or
    reject state on the basis of the actual behavior specified by the actual
    input as measured by the correct UTM simulation of this input by H.

    Right, UTM simulation, and UTM simulation of its input is DEFINED to
    match the behavior of the machine its input represents.

    Thus BY DEFINITION UTM(<M>, w) matches the behavior of M(w). It isn't
    "computationally distinct" as you tried to claim above.

    So UTM(P,P) matches P(P), and your answer is proved incorrect, since P(P)
    Halts if H(P,P) returns 0, and thus UTM(P,P) will halt, and thus H(P,P)
    needed to accept (return 1), which it didn't, so it is wrong.
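
    One self-contained way to see the case analysis being made here is to
    stub H so that it answers 0 ("does not halt") and watch what P(P) then
    does.  Everything below is an illustrative placeholder, not anyone's
    actual implementation:

    #include <stdio.h>

    /* A stand-in for an H that rejects, i.e. H(P, P) == 0. */
    static int H(void *p, void *i) { (void)p; (void)i; return 0; }

    static void P(void *x)
    {
        if (H(x, x))       /* not taken, because H(P, P) == 0 */
            for (;;) ;
        /* falls through: P(P) reaches its return and halts */
    }

    int main(void)
    {
        P((void *)P);      /* pass P its own address; terminates immediately */
        puts("P(P) halted even though H(P,P) returned 0");
        return 0;
    }

    With this stub, P(P) plainly halts, which is exactly the situation in
    which a decider that answered 0 has answered wrongly.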


    It has been completely proven that partial simulations do correctly
    predict the behavior of some complete simulations.


    yes, SOME, but not this one.

    FALLACY of proof by example.

    1st grade error.

  • From olcott@21:1/5 to Richard Damon on Sun Jun 5 22:03:40 2022
    XPost: comp.theory, sci.logic, sci.math

    On 6/5/2022 9:52 PM, Richard Damon wrote:

    On 6/5/22 10:37 PM, olcott wrote:
    On 6/5/2022 9:20 PM, Richard Damon wrote:
    On 6/5/22 10:11 PM, olcott wrote:
    On 6/5/2022 8:58 PM, Mike Terry wrote:
    On 06/06/2022 01:24, Jeff Barnett wrote:
    On 6/5/2022 5:59 PM, Mike Terry wrote:
    <..snip..>
    Sure.
    The question right now is what you would call a TM which
    evaluates the first 10 steps of a computation, and then does
    something else. What is it doing while evaluating those 10 steps?
    What would I call it? POOP! It just goes to show the accuracy and
    flexibility of Ben's acronym for any Peter-related concept.


    But PO didn't invent the concept of (partially) simulating a
    computation in order to compute certain properties of that
    computation!  It's been around since Turing's days, and is very
    useful.

    Mike.

    That is true. I am apparently the first one that ever thought this
    through well enough so that machine descriptions matching the
    following pattern could be correctly determined to be non-halting:

          For any program H that might determine if programs halt, a
          "pathological" program P, called with some input, can pass its own
          source and its input to H and then specifically do the opposite of
          what H predicts P will do. No H can exist that handles this case.
    https://en.wikipedia.org/wiki/Halting_problem

    In that a partial simulation does correctly predict the behavior of
    a complete simulation it can be used to recognize infinite behavior
    patterns.


    Except that it doesn't since P(P) Halts if H(P,P) returns 0, which,
    by the DEFINITION of the requirements of the Halting Problem, H(P,P)
    needs to accept (return 1) if P(P) Halts,

    This seems to be brand new computer science that I just discovered.

    Previously no one understood that it was possible for the correct
    simulation of the input to H(P,P) to be computationally distinct from
    (thus not equivalent to) the direct execution of P(P).

    What definition of "Correct" are you using?


    Ordinary software engineering proves that a correct and complete x86
    emulation of the input to H(P,P) never reaches its "ret" instruction.

    _P()
    [00001352](01) 55 push ebp
    [00001353](02) 8bec mov ebp,esp
    [00001355](03) 8b4508 mov eax,[ebp+08]
    [00001358](01) 50 push eax // push P
    [00001359](03) 8b4d08 mov ecx,[ebp+08]
    [0000135c](01) 51 push ecx // push P
    [0000135d](05) e840feffff call 000011a2 // call H
    [00001362](03) 83c408 add esp,+08
    [00001365](02) 85c0 test eax,eax
    [00001367](02) 7402 jz 0000136b
    [00001369](02) ebfe jmp 00001369
    [0000136b](01) 5d pop ebp
    [0000136c](01) c3 ret
    Size in bytes:(0027) [0000136c]
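
    For readers who do not read x86, the listing above corresponds to C
    roughly along these lines.  This is a reconstruction from the
    instructions shown, not the poster's published source; the u32 typedef
    and the prototype of H are assumptions, and the snippet is meant only
    to mirror the listing, not to be linked and run:

    typedef unsigned int u32;

    int H(u32 p, u32 i);          /* the routine at 000011a2 */

    void P(u32 x)                 /* 00001352..00001355: function prologue   */
    {
        if (H(x, x))              /* 00001358..00001367: push x twice, call H,
                                     add esp,+08, test eax, jz 0000136b      */
            for (;;) ;            /* 00001369: jmp 00001369 (self-jump)      */
                                  /* 0000136b..0000136c: pop ebp / ret       */
    }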


    --
    Copyright 2022 Pete Olcott

    "Talent hits a target no one else can hit;
    Genius hits a target no one else can see."
    Arthur Schopenhauer

  • From Richard Damon@21:1/5 to olcott on Sun Jun 5 23:26:56 2022
    XPost: comp.theory, sci.logic, sci.math

    On 6/5/22 11:03 PM, olcott wrote:
    On 6/5/2022 9:52 PM, Richard Damon wrote:

    On 6/5/22 10:37 PM, olcott wrote:
    On 6/5/2022 9:20 PM, Richard Damon wrote:
    On 6/5/22 10:11 PM, olcott wrote:
    On 6/5/2022 8:58 PM, Mike Terry wrote:
    On 06/06/2022 01:24, Jeff Barnett wrote:
    On 6/5/2022 5:59 PM, Mike Terry wrote:
    <..snip..>
    Sure.
    The question right now is what you would call a TM which
    evaluates the first 10 steps of a computation, and then does
    something else. What is it doing while evaluating those 10 steps?

    What would I call it? POOP! It just goes to show the accuracy and
    flexibility of Ben's acronym for any Peter-related concept.


    But PO didn't invent the concept of (partially) simulating a
    computation in order to compute certain properties of that
    computation!  It's been around since Turing's days, and is very
    useful.

    Mike.

    That is true. I am apparently the first one that ever thought this
    through well enough so that machine descriptions matching the
    following pattern could be correctly determined to be non-halting:

          For any program H that might determine if programs halt, a
          "pathological" program P, called with some input, can pass its own
          source and its input to H and then specifically do the opposite of
          what H predicts P will do. No H can exist that handles this case.
    https://en.wikipedia.org/wiki/Halting_problem

    In that a partial simulation does correctly predict the behavior of
    a complete simulation it can be used to recognize infinite behavior
    patterns.


    Except that it doesn't since P(P) Halts if H(P,P) returns 0, which,
    by the DEFINITION of the requirements of the Halting Problem, H(P,P)
    needs to accept (return 1) if P(P) Halts,

    This seems to be brand new computer science that I just discovered.

    Previously no one understood that it was possible for the correct
    simulation of the input to H(P,P) to be computationally distinct from
    (thus not equivalent to) the direct execution of P(P).

    What definition of "Correct" are you using?


    Ordinary software engineering proves that a correct and complete x86 emulation of the input to H(P,P) never reaches its "ret" instruction.


    So, I guess this just shows that you don't actually know a definition
    that shows this, so this is just another of your lies.

    Also, it is shown that P(P) Halts if H(P,P) returns 0, and even YOU have accepted that fact, so that also shows that you must be lying, since
    Ordinary Software Engineering doesn't 'prove' lies.

    Remember, correct and complete x86 emulation behaves exactly like the
    x86 program that is being emulated, in this case, P(P).

    Maybe YOUR knowledge of software engineering says that, but that says
    more about your (lack of) knowledge of real software engineering.

    _P()
    [00001352](01)  55              push ebp
    [00001353](02)  8bec            mov ebp,esp
    [00001355](03)  8b4508          mov eax,[ebp+08]
    [00001358](01)  50              push eax      // push P
    [00001359](03)  8b4d08          mov ecx,[ebp+08]
    [0000135c](01)  51              push ecx      // push P
    [0000135d](05)  e840feffff      call 000011a2 // call H
    [00001362](03)  83c408          add esp,+08
    [00001365](02)  85c0            test eax,eax
    [00001367](02)  7402            jz 0000136b
    [00001369](02)  ebfe            jmp 00001369
    [0000136b](01)  5d              pop ebp
    [0000136c](01)  c3              ret
    Size in bytes:(0027) [0000136c]


