• Re: Reasoning from first principles [key error]( infinite behavior is p

    From olcott@21:1/5 to All on Tue Mar 1 09:50:57 2022
    XPost: comp.theory, sci.logic, sci.math

    On 2/25/2022 1:00 PM, André G. Isaak wrote:
    On 2022-02-25 10:32, olcott wrote:
    On 2/25/2022 11:17 AM, André G. Isaak wrote:
    On 2022-02-25 09:11, olcott wrote:
    On 2/25/2022 9:45 AM, Richard Damon wrote:
    On 2/25/22 10:26 AM, olcott wrote:

    THIS IS ALWAYS EXACTLY THE SAME
    Simulating halt deciders continue to simulate their input until
    they determine that this simulated input would never reach its
    final state.


    But how do they determine that?


    The fact that humans can see that in both cases the simulated input
    never reaches its final state in any finite number of simulated
    steps conclusively proves that it is possible to correctly detect
    the infinite loop and the infinitely nested simulation.

    What humans can do provides no evidence at all about what algorithms
    can do. Humans are not algorithms.

    If humans can do thing X then thing X is proven to be possible to do.

    People can win olympic pole vaulting competitions. It doesn't follow
    from this that a Turing Machine can.

    And you, the human, are recognizing something making use of a piece of information with which the TM is *NOT* provided; you are aware of the
    fact that the input happens to be a representation of Ĥ, a machine which includes a copy of embedded_H.

    embedded_H, on the other hand, is *not* provided with this information.

    So how does your embedded_H recognize that the input string includes a
    copy of itself?

    (and what humans can do with information x, y, and z tells us even
    less about what an algorithm can do with only x and y).

    If you want to claim it is possible for an algorithm to recognize
    infinitely recursive simulation, you need to actually show how that
    algorithm works.


    The first step of this elaboration requires acknowledgement that:
    If humans can do thing X then thing X is proven to be possible to do.
    ∴ if embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn then embedded_H is correct.

    I can't possibly acknowledge anything about embedded_H if you won't
    provide the details of how it works such as the one I ask about above.

    André


    It is stipulated that the key aspect of simulating halt deciders is that
    they examine the behavior patterns of their simulated input.

    It is universally true that whenever a simulating halt decider must
    abort the simulation of its input to prevent the infinite simulation of
    this input, this input specifies an infinite sequence of
    configurations.

    On the basis of the above it is self evident that the pure simulation of ⟨Ĥ⟩ ⟨Ĥ⟩ by embedded_H would never stop running, thus infinite behavior
    is proved. Because infinite behavior is proved then a transition to Ĥ.qn
    is proved to be correct.

    The above by itself is much further than anyone has ever gotten with the halting problem.

    Because humans can easily see that ⟨Ĥ⟩ ⟨Ĥ⟩ specifies an infinite sequence of configurations to simulating halt decider embedded_H it is reasonable to conclude that a computation could do likewise.

    --
    Copyright 2021 Pete Olcott "Talent hits a target no one else can hit;
    Genius hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to All on Tue Mar 1 12:32:59 2022
    XPost: comp.theory, sci.logic, sci.math

    On 3/1/2022 11:46 AM, André G. Isaak wrote:
    On 2022-03-01 08:50, olcott wrote:
    On 2/25/2022 1:00 PM, André G. Isaak wrote:
    On 2022-02-25 10:32, olcott wrote:
    On 2/25/2022 11:17 AM, André G. Isaak wrote:
    On 2022-02-25 09:11, olcott wrote:
    On 2/25/2022 9:45 AM, Richard Damon wrote:
    On 2/25/22 10:26 AM, olcott wrote:

    THIS IS ALWAYS EXACTLY THE SAME
    Simulating halt deciders continue to simulate their input until
    they determine that this simulated input would never reach its
    final state.


    But how do they determine that?


    The fact that humans can see that in both cases the simulated
    input never reaches its final state in any finite number of
    simulated steps conclusively proves that it is possible to
    correctly detect the infinite loop and the infinitely nested
    simulation.

    What humans can do provides no evidence at all about what
    algorithms can do. Humans are not algorithms.

    If humans can do thing X then thing X is proven to be possible to do.

    People can win olympic pole vaulting competitions. It doesn't follow
    from this that a Turing Machine can.

    And you, the human, are recognizing something making use of a piece
    of information with which the TM is *NOT* provided; you are aware of
    the fact that the input happens to be a representation of Ĥ, a
    machine which includes a copy of embedded_H.

    embedded_H, on the other hand, is *not* provided with this information.

    So how does your embedded_H recognize that the input string includes
    a copy of itself?

    (and what humans can do with information x, y, and z tells us even
    less about what an algorithm can do with only x and y).

    If you want to claim it is possible for an algorithm to recognize
    infinitely recursive simulation, you need to actually show how that
    algorithm works.


    The first step of this elaboration requires acknowledgement that:
    If humans can do thing X then thing X is proven to be possible to do.
    ∴ if embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn then embedded_H is correct.
    I can't possibly acknowledge anything about embedded_H if you won't
    provide the details of how it works such as the one I ask about above.

    André


    It is stipulated that the key aspect of simulating halt deciders is
    that they examine the behavior patterns of their simulated input.

    "Stipulating" this doesn't magically make it possible.

    In this case it does. You did not pay perfect attention to the exact
    words that I used.

    It is universally true that whenever a simulating halt decider
    must abort the simulation of its input to prevent the infinite
    simulation of this input, this input specifies an infinite
    sequence of configurations.

    On the basis of the above it is self evident that the pure simulation
    of ⟨Ĥ⟩ ⟨Ĥ⟩ by embedded_H would never stop running, thus infinite
    behavior is proved. Because infinite behavior is proved then a
    transition to Ĥ.qn is proved to be correct.

    The above by itself is much further than anyone has ever gotten with
    the halting problem.

    Because humans can easily see that ⟨Ĥ⟩ ⟨Ĥ⟩ specifies an infinite
    sequence of configurations to simulating halt decider embedded_H it is
    reasonable to conclude that a computation could do likewise.

    That's not a proof. What a human can do tells us nothing about what is possible for an algorithm. And remember that the human has knowledge
    which the Turing Machine does not.


    I have a computer program that can do this too, yet it is simply ruled
    as not counting on the basis that it does not conform to the limitations
    of computable functions.

    So how exactly does the Turing Machine Ĥ determine whether its input string is a representation of itself?

    A RASP machine simply sees that its input is at its same machine address.

    Unless it has some way of doing
    that, it has no way of recognizing that it is being called recursively.

    André


    https://en.wikipedia.org/wiki/Random-access_stored-program_machine

    If a RASP machine is strictly more powerful than a Turing machine then
    simply abandon the TM model as inherently inferior.


  • From olcott@21:1/5 to All on Tue Mar 1 17:23:26 2022
    XPost: comp.theory, sci.logic, sci.math

    On 3/1/2022 5:02 PM, André G. Isaak wrote:
    On 2022-03-01 13:59, olcott wrote:
    On 3/1/2022 1:59 PM, André G. Isaak wrote:
    On 2022-03-01 11:32, olcott wrote:
    On 3/1/2022 11:46 AM, André G. Isaak wrote:
    On 2022-03-01 08:50, olcott wrote:
    On 2/25/2022 1:00 PM, André G. Isaak wrote:
    On 2022-02-25 10:32, olcott wrote:
    On 2/25/2022 11:17 AM, André G. Isaak wrote:
    On 2022-02-25 09:11, olcott wrote:
    On 2/25/2022 9:45 AM, Richard Damon wrote:
    On 2/25/22 10:26 AM, olcott wrote:

    THIS IS ALWAYS EXACTLY THE SAME
    Simulating halt deciders continue to simulate their input
    until they determine that this simulated input would never
    reach its final state.


    But how do they determine that?


    The fact that humans can see that in both cases the simulated
    input never reaches its final state in any finite number of
    simulated steps conclusively proves that it is possible to
    correctly detect the infinite loop and the infinitely nested
    simulation.

    What humans can do provides no evidence at all about what
    algorithms can do. Humans are not algorithms.

    If humans can do thing X then thing X is proven to be possible to do.

    People can win olympic pole vaulting competitions. It doesn't
    follow from this that a Turing Machine can.

    And you, the human, are recognizing something making use of a
    piece of information with which the TM is *NOT* provided; you are
    aware of the fact that the input happens to be a representation
    of Ĥ, a machine which includes a copy of embedded_H.

    embedded_H, on the other hand, is *not* provided with this
    information.

    So how does your embedded_H recognize that the input string
    includes a copy of itself?

    (and what humans can do with information x, y, and z tells us
    even less about what an algorithm can do with only x and y).
    If you want to claim it is possible for an algorithm to
    recognize infinitely recursive simulation, you need to actually
    show how that algorithm works.


    The first step of this elaboration requires acknowledgement that:
    If humans can do thing X then thing X is proven to be possible to do.
    ∴ if embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn then embedded_H is correct.

    I can't possibly acknowledge anything about embedded_H if you
    won't provide the details of how it works such as the one I ask
    about above.

    André


    It is stipulated that the key aspect of simulating halt deciders
    is that they examine the behavior patterns of their simulated input.
    "Stipulating" this doesn't magically make it possible.

    In this case it does. You did not pay perfect attention to the exact
    words that I used.

    It is universally true that whenever a simulating halt decider
    must abort the simulation of its input to prevent the infinite
    simulation of this input, this input specifies an infinite
    sequence of configurations.

    On the basis of the above it is self evident that the pure
    simulation of ⟨Ĥ⟩ ⟨Ĥ⟩ by embedded_H would never stop running, thus
    infinite behavior is proved. Because infinite behavior is proved
    then a transition to Ĥ.qn is proved to be correct.

    The above by itself is much further than anyone has ever gotten
    with the halting problem.

    Because humans can easily see that ⟨Ĥ⟩ ⟨Ĥ⟩ specifies an infinite
    sequence of configurations to simulating halt decider embedded_H
    it is reasonable to conclude that a computation could do likewise.

    That's not a proof. What a human can do tells us nothing about what
    is possible for an algorithm. And remember that the human has
    knowledge which the Turing Machine does not.


    I have a computer program that can do this too, yet it is simply
    ruled as not counting on the basis that it does not conform to the
    limitations of computable functions.

    So how exactly does the Turing Machine Ĥ determine whether its
    input string is a representation of itself?

    A RASP machine simply sees that its input is at its same machine
    address.

    Unless it has some way of doing that, it has no way of recognizing
    that it is being called recursively.

    André


    https://en.wikipedia.org/wiki/Random-access_stored-program_machine

    If a RASP machine is strictly more powerful than a Turing machine
    then simply abandon the TM model as inherently inferior.

    You are horridly confused here.

    RASP machines or x86 machines are *not* more powerful than Turing
    Machines.

    Anything your computer can do is a computation. But it doesn't follow
    from that that any arbitrary C function or piece of x86 code is a
    computation, because a computation involves a *self-contained*
    algorithm.

    Thus, unless a C function or piece of x86 code can run *on its own*,
    it is not a computation. If it depends on other functions (including
    Operating System functions) then the *combination* of that function
    and all of the other functions it depends on can be construed as a
    computation, but the function on its own cannot be (and your computer
    wouldn't be able to run it in absence of those dependencies).


    Ah so a C function that has its own machine address directly encoded
    into one of its x86 instructions is still a computation. This would be
    the machine address of a label ---> __asm lea eax, HERE

    Sure, but it's a computation that has that address as part of its
    *input*. That means you're not dealing with the same computation as the halting problem.

    Again, when dealing with computations, 'input' doesn't just mean things
    which are passed to a function as formal parameters; it means *any* data which the function has access to and makes use of.

    The problem with all of your solutions to the halting problem isn't
    that they aren't computations, but that they are the *wrong*
    computation.


    So you assume by simply ignoring the self-evident truth of this
    statement:
    It is universally true that whenever a simulating halt decider
    must abort the simulation of its input to prevent the infinite
    simulation of this input, this input specifies an infinite
    sequence of configurations.


    The input to a computation consists of *all* of the information which
    is made available to that computation, not just those bits of
    information which are passed as formal parameters to a C function.

    Given some C function defined as


    My C/x86 H merely needs to know its own machine address, then it can
    tell that as soon as P calls H with identical parameters that this is
    infinitely nested simulation.

    Boolean foo(someType *X) {...}

    This function does not necessarily compute a function from X to a
    boolean value because a C function can in principle access all sorts
    of other data besides its formal parameter X. *Any* data which the
    function makes use of must be considered part of its input even if it
    isn't declared as a formal parameter.

    If a function refers to both some data in memory and the address of
    that data, those are two *separate* pieces of information, and when
    such a function is construed as a computation, both of those pieces
    of information must be treated as part of its input.

    The halting problem specifies exactly which information the input to
    a halt decider includes, and if a C function makes use of any data
    other than that information then the function may be a computation,
    but it is *not* the computation that the halting problem is concerned
    with.

    That seems like an arbitrary constraint.

    There's nothing arbitrary about it. When we ask whether W is computable
    from X and Y, and a program makes use of X, Y, and Z to calculate W then
    it isn't addressing the question being asked; it's addressing whether W
    is computable from X, Y, and Z, which is an entirely different question.


    So when we ask whether or not the halting problem can be solved, and
    the solution requires someone to manually enter the current value of
    the S&P 500, then this solution that can decide the halt status of any
    input simply does not count.

    As long as there are no inputs that a function cannot decide the halt
    status of, then the halting problem is only impossible when arbitrary
    constraints are placed upon it.

    We could for example say that it is impossible to clap your hands
    together and then restrict the solution set to people that have no hands.

    The halting problem is concerned with whether it is possible to
    determine whether a given computation is finite given *only* a
    description of a Turing Machine and its input string. There's nothing
    in that input which includes information about where in memory this
    information might be stored in an x86 implementation.


    Still seems like a purely arbitrary constraint.

    This is why both C and x86 assembly language are absolutely terrible
    models for discussing computational theory. Neither of these things
    is designed as a model of computation, and consequently the code for
    C or x86 routines doesn't necessarily make it clear exactly what it is
    that is being computed. When declaring a C function you must declare
    its formal parameters, but these parameters do *not* necessarily
    correspond to the input of the algorithm which that function implements.
    This is why you really ought to simply abandon C/x86 and focus on TMs
    if you actually want to learn computational theory and what
    computations are. Because of the way they work, it is always entirely
    clear exactly which computation is involved and what the input is
    when working with TMs. Not so with C and/or x86 code.

    André


    The problem with this approach is that key details must be totally
    ignored because TMs are simply not expressive enough to express
    algorithms concisely.

    No. The problem is that C/x86 allow you to make use of inputs which are
    not *explicitly* declared. But they are still inputs.

    André


    So what? If every time that a halt decider needed to decide the halt
    status of an input it required carefully analyzing every detail of all
    of the news in the last year (as additional input), as long as it comes
    up with the correct halt status it is a halt decider.


  • From olcott@21:1/5 to All on Tue Mar 1 22:13:33 2022
    XPost: comp.theory, sci.logic, sci.math

    On 3/1/2022 1:59 PM, André G. Isaak wrote:
    On 2022-03-01 11:32, olcott wrote:
    On 3/1/2022 11:46 AM, André G. Isaak wrote:
    On 2022-03-01 08:50, olcott wrote:
    On 2/25/2022 1:00 PM, André G. Isaak wrote:
    On 2022-02-25 10:32, olcott wrote:
    On 2/25/2022 11:17 AM, André G. Isaak wrote:
    On 2022-02-25 09:11, olcott wrote:
    On 2/25/2022 9:45 AM, Richard Damon wrote:
    On 2/25/22 10:26 AM, olcott wrote:

    THIS IS ALWAYS EXACTLY THE SAME
    Simulating halt deciders continue to simulate their input
    until they determine that this simulated input would never
    reach its final state.


    But how do they determine that?


    The fact that humans can see that in both cases the simulated
    input never reaches its final state in any finite number of
    simulated steps conclusively proves that it is possible to
    correctly detect the infinite loop and the infinitely nested
    simulation.

    What humans can do provides no evidence at all about what
    algorithms can do. Humans are not algorithms.

    If humans can do thing X then thing X is proven to be possible to do.
    People can win olympic pole vaulting competitions. It doesn't
    follow from this that a Turing Machine can.

    And you, the human, are recognizing something making use of a piece
    of information with which the TM is *NOT* provided; you are aware
    of the fact that the input happens to be a representation of Ĥ, a
    machine which includes a copy of embedded_H.

    embedded_H, on the other hand, is *not* provided with this
    information.

    So how does your embedded_H recognize that the input string
    includes a copy of itself?

    (and what humans can do with information x, y, and z tells us
    even less about what an algorithm can do with only x and y).

    If you want to claim it is possible for an algorithm to recognize
    infinitely recursive simulation, you need to actually show how
    that algorithm works.


    The first step of this elaboration requires acknowledgement that:
    If humans can do thing X then thing X is proven to be possible to do.
    ∴ if embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn then embedded_H is correct.

    I can't possibly acknowledge anything about embedded_H if you won't
    provide the details of how it works such as the one I ask about above.
    André


    It is stipulated that the key aspect of simulating halt deciders is
    that they examine the behavior patterns of their simulated input.

    "Stipulating" this doesn't magically make it possible.

    In this case it does. You did not pay perfect attention to the exact
    words that I used.

    It is universally true that whenever a simulating halt decider
    must abort the simulation of its input to prevent the infinite
    simulation of this input, this input specifies an infinite
    sequence of configurations.

    On the basis of the above it is self evident that the pure
    simulation of ⟨Ĥ⟩ ⟨Ĥ⟩ by embedded_H would never stop running, thus
    infinite behavior is proved. Because infinite behavior is proved
    then a transition to Ĥ.qn is proved to be correct.

    The above by itself is much further than anyone has ever gotten with
    the halting problem.

    Because humans can easily see that ⟨Ĥ⟩ ⟨Ĥ⟩ specifies an infinite
    sequence of configurations to simulating halt decider embedded_H it
    is reasonable to conclude that a computation could do likewise.

    That's not a proof. What a human can do tells us nothing about what
    is possible for an algorithm. And remember that the human has
    knowledge which the Turing Machine does not.


    I have a computer program that can do this too, yet it is simply ruled
    as not counting on the basis that it does not conform to the
    limitations of computable functions.

    So how exactly does the Turing Machine Ĥ determine whether its
    input string is a representation of itself?

    A RASP machine simply sees that its input is at its same machine address.

    Unless it has some way of doing that, it has no way of recognizing
    that it is being called recursively.

    André


    https://en.wikipedia.org/wiki/Random-access_stored-program_machine

    If a RASP machine is strictly more powerful than a Turing machine then
    simply abandon the TM model as inherently inferior.

    You are horridly confused here.

    RASP machines or x86 machines are *not* more powerful than Turing Machines.

    Anything your computer can do is a computation. But it doesn't follow
    from that that any arbitrary C function or piece of x86 code is a computation, because a computation involves a *self-contained* algorithm.

    Thus, unless a C function or piece of x86 code can run *on its own*, it
    is not a computation. If it depends on other functions (including
    Operating System functions) then the *combination* of that function and
    all of the other functions it depends on can be construed as a
    computation, but the function on its own cannot be (and your computer wouldn't be able to run it in absence of those dependencies).

    That is great; I have a better understanding of this now.

    Since H needs nothing outside of itself to determine its own machine
    address, H is still a computable function, even knowing its own machine
    address.

    In computer programming, a pure function is a function that has the
    following properties:[1][2]

    (1) the function return values are identical for identical arguments (no variation with local static variables, non-local variables, mutable
    reference arguments or input streams), and

    (2) the function application has no side effects (no mutation of local
    static variables, non-local variables, mutable reference arguments or input/output streams).

    https://en.wikipedia.org/wiki/Pure_function

    Even though a C function would know its own machine address, it still
    meets the requirements of (1) and (2) because it will always
    consistently derive the same accept or reject state on the basis of the
    same input and has no side effects.

    If H is a computable function and a pure function that seems to be enough.




  • From olcott@21:1/5 to Richard Damon on Wed Mar 2 09:09:29 2022
    XPost: comp.theory, sci.logic, sci.math

    On 3/2/2022 6:09 AM, Richard Damon wrote:
    On 3/1/22 11:13 PM, olcott wrote:
    On 3/1/2022 1:59 PM, André G. Isaak wrote:
    On 2022-03-01 11:32, olcott wrote:
    On 3/1/2022 11:46 AM, André G. Isaak wrote:
    On 2022-03-01 08:50, olcott wrote:
    On 2/25/2022 1:00 PM, André G. Isaak wrote:
    On 2022-02-25 10:32, olcott wrote:
    On 2/25/2022 11:17 AM, André G. Isaak wrote:
    On 2022-02-25 09:11, olcott wrote:
    On 2/25/2022 9:45 AM, Richard Damon wrote:
    On 2/25/22 10:26 AM, olcott wrote:

    THIS IS ALWAYS EXACTLY THE SAME
    Simulating halt deciders continue to simulate their input
    until they determine that this simulated input would never
    reach its final state.


    But how do they determine that?


    The fact that humans can see that in both cases the simulated
    input never reaches its final state in any finite number of
    simulated steps conclusively proves that it is possible to
    correctly detect the infinite loop and the infinitely nested
    simulation.

    What humans can do provides no evidence at all about what
    algorithms can do. Humans are not algorithms.

    If humans can do thing X then thing X is proven to be possible to do.

    People can win olympic pole vaulting competitions. It doesn't
    follow from this that a Turing Machine can.

    And you, the human, are recognizing something making use of a
    piece of information with which the TM is *NOT* provided; you are
    aware of the fact that the input happens to be a representation
    of Ĥ, a machine which includes a copy of embedded_H.

    embedded_H, on the other hand, is *not* provided with this
    information.

    So how does your embedded_H recognize that the input string
    includes a copy of itself?

    (and what humans can do with information x, y, and z tells us
    even less about what an algorithm can do with only x and y).
    If you want to claim it is possible for an algorithm to
    recognize infinitely recursive simulation, you need to actually
    show how that algorithm works.


    The first step of this elaboration requires acknowledgement that:
    If humans can do thing X then thing X is proven to be possible to do.
    ∴ if embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn then embedded_H is correct.

    I can't possibly acknowledge anything about embedded_H if you
    won't provide the details of how it works such as the one I ask
    about above.

    André


    It is stipulated that the key aspect of simulating halt deciders
    is that they examine the behavior patterns of their simulated input.
    "Stipulating" this doesn't magically make it possible.

    In this case it does. You did not pay perfect attention to the exact
    words that I used.

    It is universally true that whenever a simulating halt decider
    must abort the simulation of its input to prevent the infinite
    simulation of this input, this input specifies an infinite
    sequence of configurations.

    On the basis of the above it is self evident that the pure
    simulation of ⟨Ĥ⟩ ⟨Ĥ⟩ by embedded_H would never stop running, thus
    infinite behavior is proved. Because infinite behavior is proved
    then a transition to Ĥ.qn is proved to be correct.

    The above by itself is much further than anyone has ever gotten
    with the halting problem.

    Because humans can easily see that ⟨Ĥ⟩ ⟨Ĥ⟩ specifies an infinite
    sequence of configurations to simulating halt decider embedded_H
    it is reasonable to conclude that a computation could do likewise.

    That's not a proof. What a human can do tells us nothing about what
    is possible for an algorithm. And remember that the human has
    knowledge which the Turing Machine does not.


    I have a computer program that can do this too, yet it is simply
    ruled as not counting on the basis that it does not conform to the
    limitations of computable functions.

    So how exactly does the Turing Machine Ĥ determine whether its
    input string is a representation of itself?

    A RASP machine simply sees that its input is at its same machine
    address.

    Unless it has some way of doing that, it has no way of recognizing
    that it is being called recursively.

    André


    https://en.wikipedia.org/wiki/Random-access_stored-program_machine

    If a RASP machine is strictly more powerful than a Turing machine
    then simply abandon the TM model as inherently inferior.

    You are horridly confused here.

    RASP machines or x86 machines are *not* more powerful than Turing
    Machines.

    Anything your computer can do is a computation. But it doesn't follow
    from that that any arbitrary C function or piece of x86 code is a
    computation, because a computation involves a *self-contained*
    algorithm.

    Thus, unless a C function or piece of x86 code can run *on its own*,
    it is not a computation. If it depends on other functions (including
    Operating System functions) then the *combination* of that function
    and all of the other functions it depends on can be construed as a
    computation, but the function on its own cannot be (and your computer
    wouldn't be able to run it in absence of those dependencies).

    That is great; I have a better understanding of this now.

    Since H needs nothing outside of itself to determine its own machine
    address, H is still a computable function, even knowing its own machine
    address.

    If the behavior of a function changes based on where it is loaded into memory, then its loading address is an input to the function.


    It merely needs to know where it is loaded so that it can see when it is called.

    A Halt Decider doesn't get that as an input, so if it gets its address,
    and uses it to change its behavior, it isn't being the needed computation.


    The requirement of a pure function is that:
    "the function return values are identical for identical arguments"
    as long as there is no variation of this when it uses
    local static variables, non-local variables, mutable reference arguments
    or input streams. FIND AN ORIGINAL SOURCE THAT SAYS OTHERWISE

    Yes, you can define some computation H(wM, w, addr) that does what you
    are trying to define, but it isn't a correct Halting Decider, BY DEFINITION.


    If it derives the correct halt status decision then it is a correct halt decider.


    In computer programming, a pure function is a function that has the
    following properties:[1][2]

    (1) the function return values are identical for identical arguments
    (no variation with local static variables, non-local variables,
    mutable reference arguments or input streams), and

    (2) the function application has no side effects (no mutation of local
    static variables, non-local variables, mutable reference arguments or
    input/output streams).

    https://en.wikipedia.org/wiki/Pure_function
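    A minimal illustration of the two cited properties (Python used for
    illustration; the function names are invented for the example):

```python
# Pure: identical return values for identical arguments, no side effects.
def add(a, b):
    return a + b

# Impure on both counts: it reads non-local state (violating property 1)
# and mutates it (violating property 2), so repeated calls with the same
# arguments return different values.
counter = 0
def add_and_count(a, b):
    global counter
    counter += 1
    return a + b + counter
```

    `add(2, 3)` is 5 on every call, while successive calls to
    `add_and_count(2, 3)` return 6, 7, 8, ... as the hidden counter grows.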

    And being a 'Pure Function' isn't sufficient for a piece of code to be a Computation of its formal parameters.


    The requirement of a pure function is that:
    "the function return values are identical for identical arguments"
    FIND AN ORIGINAL SOURCE THAT SAYS OTHERWISE



    Even though a C function would know its own machine address, it still
    meets the requirements of (1) and (2) because it will always
    consistently derive the same accept or reject state on the basis of
    the same input and has no side effects.

    If H is a computable function and a pure function that seems to be
    enough.


    Nope, it needs to meet the requirements of a COMPUTATION, which means
    that all copies of it must give the same answer for the same input.

    It is absurd to require a sequence of configurations that has been
    aborted to have the same behavior as this same sequence that has not
    been aborted.


    You just don't understand the meaning of the word REQUIREMENTS or
    DEFINITION do you?

    Maybe you should just give up since you don't have time to learn all
    this stuff you just forgot to learn and are making very bad guesses about.

    There seems to be no formal technical source that agrees with you on
    these things. FIND AN ORIGINAL SOURCE THAT SAYS OTHERWISE



    --
    Copyright 2021 Pete Olcott "Talent hits a target no one else can hit;
    Genius hits a target no one else can see." Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to All on Thu Mar 3 10:15:34 2022
    XPost: comp.theory, sci.logic, sci.math

    On 3/1/2022 11:46 AM, André G. Isaak wrote:
    On 2022-03-01 08:50, olcott wrote:
    On 2/25/2022 1:00 PM, André G. Isaak wrote:
    On 2022-02-25 10:32, olcott wrote:
    On 2/25/2022 11:17 AM, André G. Isaak wrote:
    On 2022-02-25 09:11, olcott wrote:
    On 2/25/2022 9:45 AM, Richard Damon wrote:
    On 2/25/22 10:26 AM, olcott wrote:

    THIS IS ALWAYS EXACTLY THE SAME
    Simulating halt deciders continue to simulate their input until they determine that this simulated input would never reach its final state.


    But how do they determine that?


    The fact that humans can see that in both cases the simulated
    input never reaches its final state in any finite number of
    simulated steps conclusively proves that it is possible to
    correctly detect the infinite loop and the infinitely nested
    simulation.

    What humans can do provides no evidence at all about what
    algorithms can do. Humans are not algorithms.

    If humans can do thing X then thing X is proven to be possible to do.

    People can win Olympic pole vaulting competitions. It doesn't follow
    from this that a Turing Machine can.

    And you, the human, are recognizing something making use of a piece
    of information with which the TM is *NOT* provided; you are aware of
    the fact that the input happens to be a representation of Ĥ, a
    machine which includes a copy of embedded_H.

    embedded_H, on the other hand, is *not* provided with this information.

    So how does your embedded_H recognize that the input string includes
    a copy of itself?

    (and what humans can do with information x, y, and z tells us even
    less about what an algorithm can do with only x and y).

    If you want to claim it is possible for an algorithm to recognize
    infinitely recursive simulation, you need to actually show how that
    algorithm works.


    The first step of this elaboration requires acknowledgement that:
    If humans can do thing X then thing X is proven to be possible to do.
    ∴ if embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn then embedded_H is correct.
    I can't possibly acknowledge anything about embedded_H if you won't
    provide the details of how it works such as the one I ask about above.

    André


    It is stipulated that the key aspect of simulating halt deciders is
    that they examine the behavior patterns of their simulated input.

    "Stipulating" this doesn't magically make it possible.

    It is universally true that when-so-ever a simulating halt decider
    must abort the simulation of its input to prevent the infinite
    simulation of this input that this input specifies an infinite
    sequence of configurations.

    On the basis of the above it is self evident that the pure simulation
    of ⟨Ĥ⟩ ⟨Ĥ⟩ by embedded_H would never stop running, thus infinite behavior is proved. Because infinite behavior is proved then a
    transition to Ĥ.qn is proved to be correct.

    The above by itself is much further than anyone has ever gotten with
    the halting problem.

    Because humans can easily see that ⟨Ĥ⟩ ⟨Ĥ⟩ specifies an infinite sequence of configurations to simulating halt decider embedded_H it is
    reasonable to conclude that a computation could do likewise.

    That's not a proof. What a human can do tells us nothing about what is possible for an algorithm. And remember that the human has knowledge
    which the Turing Machine does not.

    So how exactly does the Turing Machine Ĥ determine whether its input string is a representation of itself? Unless it has some way of doing
    that, it has no way of recognizing that it is being called recursively.

    André

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    NON-HALTING CRITERION MEASURE
    It is universally true that when-so-ever a simulating halt decider must
    abort the simulation of its input to prevent the infinite simulation of
    this input that this input specifies an infinite sequence of
    configurations.

    On the basis of the above it is self evident that the pure simulation of ⟨Ĥ⟩ ⟨Ĥ⟩ by embedded_H would never stop running, thus infinite behavior
    is proved. Because infinite behavior is proved then a transition to Ĥ.qn
    is proved to be correct.
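    Whatever one makes of the criterion stated above, the general technique of a
    simulating decider that watches for a repeating pattern can be sketched for a
    toy machine model (Python for illustration; this is a hypothetical example of
    the detection idea, not of embedded_H itself, whose internals are never given
    in this thread, and repeated-configuration detection is only sound for
    finite-state deterministic systems, not for Turing machines in general):

```python
def simulate_and_decide(program, state, max_steps=10_000):
    """Toy simulating 'decider' for a deterministic step function.

    program: maps a state to the next state, or to None to halt.
    Non-halting is reported when a configuration repeats, which is a
    sound criterion only because these toy programs are finite-state
    and deterministic.
    """
    seen = set()
    for _ in range(max_steps):
        if state is None:
            return "halts"
        if state in seen:
            return "does not halt"   # configuration repeated: a loop
        seen.add(state)
        state = program(state)
    return "undetermined"            # gave up without finding a pattern

# A loop: 0 -> 1 -> 0 -> ...
looping = lambda s: (s + 1) % 2
# A halting program: counts down, then signals halt with None.
halting = lambda s: s - 1 if s > 0 else None
```

    On these toy inputs the decider aborts the simulation of `looping` as soon
    as state 0 recurs, and simulates `halting` to completion.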

    Because all simulating halt deciders are deciders they ONLY compute the
    mapping from their inputs to an accept or reject state. This means that embedded_H does not determine the halt status of itself or the
    computation that contains it: Ĥ ⟨Ĥ⟩ because these are not inputs.


