• Re: Concise refutation of halting problem proofs V63 [ know its own address ]

    From olcott@21:1/5 to Ben Bacarisse on Sat Feb 26 22:26:04 2022
    XPost: comp.theory, sci.logic, sci.math

    On 2/26/2022 6:34 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 2/23/2022 6:41 PM, Ben Bacarisse wrote:
    olcott <polcott2@gmail.com> writes:

    On 2/22/2022 9:27 PM, Ben Bacarisse wrote:
    olcott <polcott2@gmail.com> writes:

    On 2/19/2022 6:33 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 2/19/2022 4:59 PM, Ben Bacarisse wrote:
    olcott <polcott2@gmail.com> writes:

    Deciders only compute the mapping from their actual inputs to accept
    or reject state.

    Too many words. Deciders accept or reject any and all inputs.

    My words are precisely technically accurate.

    There are just too many of them. Waffle is not always wrong. You think
    using lots of words makes you sound all technical and "sciency", but
    that's because you've not spent much time around smart technical people.

    Anything such as the behavior of Ĥ ⟨Ĥ⟩ has no relevance to
    the halt status decision because it is not an actual input.

    This is so silly. I tried to help once by suggesting you specify a
    much simpler TM, but you don't like being given helpful exercises.
    Except for the most trivial examples, TMs are always specified in terms
    of what the input encodes, rather than what the "actual input" is.
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn

    The key distinction is that the exact point of the execution trace
    matters.

    No, that's your big mistake (B). For Turing machines, all that matters
    is the state and the tape, and that's what those lines you keep writing
    specify. H ⟨Ĥ⟩ ⟨Ĥ⟩ will transition to qn if, and only if, Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩
    transitions to qn.

    How does the direct execution of Ĥ ⟨Ĥ⟩ vary from the simulation of
    ⟨Ĥ⟩ ⟨Ĥ⟩?

    That's not the question. To get out of this hole you need to read what
    people write and address the points they make.

    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
    Ĥ.q0 ⟨Ĥ⟩ ⊢* Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
    You claim that Ĥ.qx ⟨Ĥ⟩ ⟨Ĥ⟩ and H ⟨Ĥ⟩ ⟨Ĥ⟩ will not transition to the
    same (actually corresponding) states.

    I have reversed my view on this on the basis of a deeper understanding
    of the notion of computable functions.
    Everyone else knew it from understanding what a Turing machine is, but
    kudos to you for (almost) saying you were wrong.


    I was shocked to find out that a Turing machine cannot do what every C
    program can do, because TMs only implement computable functions.

    Really? Did you think a TM could let you post to Usenet? Maybe after
    a few decades of pontificating about them, you will eventually know what
    Turing machines are.

    This says nothing about computable functions:
    The Church-Turing thesis (formerly commonly known simply as Church's
    thesis) says that any real-world computation can be translated into an
    equivalent computation involving a Turing machine.
    https://mathworld.wolfram.com/Church-TuringThesis.html

    It has always been my understanding that anything any real-world
    computer can do, a TM can do. Now people are telling me that a computable
    function cannot even know its own machine address in memory.

    --
    Copyright 2021 Pete Olcott

    "Talent hits a target no one else can hit;
    Genius hits a target no one else can see."
    Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Sun Feb 27 09:19:25 2022
    XPost: comp.theory, sci.logic, sci.math

    On 2/27/2022 5:46 AM, Richard Damon wrote:
    On 2/26/22 11:26 PM, olcott wrote:
    [snip: the exchange with Ben Bacarisse quoted above in full]


    And your understanding isn't quite correct. Any program that a computer
    can run and that meets the requirements of a Computation can be done by
    a Turing Machine. This generally means that any ENTIRE program loaded
    into a computer (including the operating system it runs under), taken as
    a whole, will be a Computation if the I/O is somewhat limited to meet
    the model. In particular, if one computer talks to another, that can't
    be directly modeled by a Turing Machine, but the combined system can be
    mostly modeled.

    The same applies to multiple cores. The main issue with multiple cores
    or multiple computers is that there is now a bit of randomness added
    into the processing mix that a Turing Machine can't handle; but if the
    algorithm doesn't try to exploit that behavior, it can still be modeled.

    The big thing that a Turing Machine can't duplicate is the behavior of
    some program SEGMENTS which don't act like a Computation. This can
    happen if they receive input from something not considered an input to
    the algorithm, as this breaks the Computation model.

    Thus, yes, if a 'function' uses its address in memory, and that address
    hasn't been defined as an input to it, then it has ceased to be a
    computation.
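
    To make that distinction concrete, here is a minimal C sketch
    (illustrative only, not code from this thread): the first function is a
    Computation because its output depends only on its declared input; the
    second is not, because its result also depends on where the code happens
    to be loaded in memory, which was never given to it as an input.

        #include <stdio.h>
        #include <stdint.h>

        /* A Computation in the sense used above: the output depends only on
         * the declared input, so a Turing machine can implement the same
         * input-to-output mapping. */
        static unsigned int sum_of_digits(unsigned int n)
        {
            unsigned int sum = 0;
            while (n != 0) {
                sum += n % 10;
                n /= 10;
            }
            return sum;
        }

        /* NOT a Computation of its declared input alone: the result also
         * depends on the function's own load address (obtained here via a
         * function-pointer-to-integer conversion, a common extension). */
        static uintptr_t depends_on_own_address(unsigned int n)
        {
            uintptr_t self = (uintptr_t)&depends_on_own_address; /* hidden input */
            return self + n;
        }

        int main(void)
        {
            printf("%u\n", sum_of_digits(1234));   /* always 10 */
            printf("%lu\n", (unsigned long)depends_on_own_address(5)); /* varies with load address */
            return 0;
        }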


    Yet an x86 function can derive its own address without needing to be
    told it, even as an offset from other known addresses that are hard-coded
    into its instructions. This would seem to make an x86 function strictly
    more powerful than a TM.
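
    One conventional way for x86 code to discover where it is executing is
    the call/pop idiom. The sketch below uses GCC inline assembly and is
    illustrative only; it is not taken from the H/P code under discussion.

        #include <stdio.h>
        #include <stdint.h>

        /* Returns (roughly) the address of the instruction being executed.
         * CALL pushes the address of the next instruction; POP retrieves it.
         * x86/x86-64 with GCC or Clang assumed. */
        static uintptr_t where_am_i(void)
        {
            uintptr_t here;
            __asm__ volatile (
                "call 1f\n"
                "1: pop %0\n"
                : "=r"(here));
            return here;
        }

        int main(void)
        {
            printf("executing near %p\n", (void *)where_am_i());
            return 0;
        }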

    The mere fact that you make this statement should help you realize that
    you don't really understand what Computation Theory is about. You seem
    to think it is about 'Computers', when it is about 'Computations', and
    the definition of 'Computation' that it uses vastly predates the modern
    digital computer; in fact, at that time a 'Computer' was a person with a
    pad of paper, maybe an adding machine, and a detailed set of
    instructions.

    This is your problem with trying to start from your concept of 'First
    Principles': to do that sort of system analysis, you need to actually
    START from the REAL FIRST PRINCIPLES. If you start with a wrong base
    principle, your whole analysis is just wrong.


    --
    Copyright 2021 Pete Olcott

    Talent hits a target no one else can hit;
    Genius hits a target no one else can see.
    Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From olcott@21:1/5 to Richard Damon on Sun Feb 27 15:22:29 2022
    XPost: comp.theory, sci.logic, sci.math

    On 2/27/2022 2:49 PM, Richard Damon wrote:
    On 2/27/22 3:22 PM, olcott wrote:
    On 2/27/2022 2:19 PM, Richard Damon wrote:
    On 2/27/22 10:19 AM, olcott wrote:
    On 2/27/2022 5:46 AM, Richard Damon wrote:
    On 2/26/22 11:26 PM, olcott wrote:
    On 2/26/2022 6:34 PM, Ben Bacarisse wrote:
    [snip: the earlier exchange, including Ben Bacarisse's posts and Richard
    Damon's reply of 5:46 AM, quoted above in full]


    Yet an x86 function can derive its own address without needing to be
    told it, even as an offset from other known addresses that are hard-coded
    into its instructions. This would seem to make an x86 function strictly
    more powerful than a TM.

    Right, because code SEGMENTS are not limited to computations.

    And your second claim just shows that you do not understand the
    meaning of a Computation. It is IMPOSSIBLE to make a program that
    performs an ACTUAL COMPUTATION that can not be done on a Turing
    Machine.

    So all the many things that an x86 machine can do that a TM cannot
    would make the x86 machine strictly more powerful than a TM.

    But not in a Computation Theory sort of way.


    My C/x86 H/P does correctly determine that the correct pure simulation
    of its input cannot reach the final state of this input, thus correctly
    deciding that the "impossible" input to the halting problem proofs does
    not halt. H only needs to know its own machine address to do this.

    The machine description of a TM would necessarily have to know its own
    address as the tape location of the description of its start state.
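
    For readers following the thread, the "impossible" input referred to
    here has the standard shape from the halting problem proofs. The sketch
    below is illustrative only: the names H and P match the thread's usage,
    but the bodies are placeholders rather than the C/x86 implementation
    being described.

        #include <stdbool.h>
        #include <stdio.h>

        typedef void (*prog)(void *);

        /* Placeholder for a claimed halt decider: here it simply answers
         * "does not halt" for every input. Any fixed answer is forced to be
         * wrong by the construction of P below. */
        static bool H(prog p, void *i)
        {
            (void)p; (void)i;
            return false;            /* "p(i) does not halt" */
        }

        /* The "impossible" input: it does the opposite of whatever H
         * reports about P applied to its own description. */
        static void P(void *x)
        {
            if (H((prog)x, x))
                for (;;) ;           /* H said "halts", so loop forever */
            /* H said "does not halt", so halt (return) instead */
        }

        int main(void)
        {
            P((void *)P);            /* function-pointer-to-void* conversion
                                        is a common extension, used only to
                                        keep the sketch short */
            puts("P(P) halted, contradicting H's verdict");
            return 0;
        }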

    There is NO Computation, per the definition in Computation Theory, that
    your x86 computer can do that a Turing Machine can not.

    That is like saying there is no 2-pound weight that a puny little
    weakling cannot lift. When an enormous bodybuilder lifts 500 pounds,
    that is ruled as not counting because it is not a 2-pound weight.


    And, because your computer is finite, there are computations that the
    Turing Machine can carry out that your x86 can't.


    The meaning of a computation is an algorithm that performs a unique
    mapping of any given input to a unique output.

    All that is possible is to write code segments that do not produce an
    actual Computation, and yes, you can not write a Turing Machine to
    match those. Your failure to understand (or lie by mis-definition of)
    the meaning of a Computation means you don't understand what that
    statement says.










    --
    Copyright 2021 Pete Olcott

    Talent hits a target no one else can hit;
    Genius hits a target no one else can see.
    Arthur Schopenhauer

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)