• Simply defining Gödel Incompleteness and Tarski

    From Ben Bacarisse@21:1/5 to olcott on Thu Aug 13 04:11:46 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:

    On 8/12/2020 6:39 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/12/2020 4:51 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/12/2020 10:04 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>

    The brand new concept of Turing Complete computations as opposed to
    and contrast with Turing Complete language and Turing Complete
    machines provides for this.

    What are you babbling about? What new concept? Do you mean you've just
    heard about it? This is a settled topic. The details need more care
    than you are willing to take, but the equivalence of various notions of
    "effective procedure" is not in question here.

    So it is commonly understood that a language or abstract machine can
    be considered Turing complete for a subset of problems?

    Can compute a subset of problems, yes. There's no need to add another
    misuse of the technical phrase "Turing complete". Misusing the term
    does not create a "brand new concept", it just incorrectly describes a
    conventional one.

    I really can't stress enough the value that education can have in cases
    like this. There are many books out there that might enable you to know
    what others already know.

    I want to establish the theoretical foundation that an x86 based
    refutation of the conventional halting problem counter-examples would
    immediately and directly apply to Turing Machines.

    I can't think of a better way to be sure no one takes you seriously.
    Since (details aside) all programming notations are just as good for
    this purpose,

    OK great then everyone simply assumes Church-Turing.

    No they don't. That was my "(details aside)". There are notations known
    to be weaker than TMs (but you won't know any). There are others that
    are stronger (but you won't know any). And there are some not yet proved
    to be equivalent (but you won't know any).

    In other words, as far as you are concerned, all high-level programming languages have been proven to be equivalent to TMs. Note proven.

    One thing that other mere programming notations are not good for is
    providing an actual execution trace.

    Nonsense.

    int main()
    {
    uint32_t* S1;
    S1 = (uint32_t*) 16; // size param passed to AllocateHeap
    AllocateHeap(&S1); // Allocates 16 uint32_t
    SaveState(&S1); // Saves 16 32-bit registers
    RestoreState(&S1); // Process context switch
    __asm hlt
    }

    Shocking. But then you are not a programmer are you?

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to olcott on Thu Aug 13 04:25:57 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:

    On 8/12/2020 7:21 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/12/2020 5:00 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    On 8/12/2020 3:36 PM, Ben Bacarisse wrote:
    <cut>
    You babbled. Throwing terms around that you don't understand is not a
    good look for anyone. That whole post was a dog's dinner.

    Let me break it down for you using more precise terms:

    No. I know what you are saying because it's not at all complex, unusual
    or new. Your response stems from the same problem that leads you to think
    people are mad or lying or playing mind games. Sometimes people think
    you are babbling, not because they don't understand, but because you are
    tossing a word salad.

    A precise degree of partial Turing completeness can be defined as:
    The subset of Turing-computable functions that are computable on an
    abstract machine or programming language.

    The second sentence is fine. It's standard fare in computability
    theory, though we go a bit further by not limiting ourselves to Turing
    computable functions.

    The babbling comes from using phrases like Turing completeness
    inappropriately. The latest, "partial Turing completeness", is of
    course an oxymoron. Were you aiming for some archaic rhetorical effect?
    I want to establish the theoretical foundation that an x86 based
    refutation of the conventional halting problem counter-examples would
    immediately and directly apply to Turing Machines.

    <screeching of brakes> Reality check: what does what you call "partial
    Turing completeness" have to do with your claim? Are you planning to
    demonstrate something about a computational model that can't compute all
    the functions a Turing machine can? Do you plan to extrapolate that up
    to Turing computability?

    I want to explicitly account for the cases that may be computable in
    finite memory that are not computable in unlimited memory. It was
    proposed that such cases can possibly exist.

    There are no such cases.

    To do this it must be universally understood that the computability of
    an x86 based machine is identical to the computability of a Turing
    machine on a subset of functions.

    Two years ago, when you were still lying about having a Turing machine,
    people (certainly me but I think others too) counselled you to use
    anything but a Turing machine because, due to computational equivalence,
    there was no need for the mess that is an actual Turing machine.

    Apparently some people believe that there is a difference between
    Turing complete and Turing equivalent

    Of course there is. Essentially it's the difference between X >= Y and
    X = Y. But it does not matter for what you claim.

    https://scienceblogs.com/goodmath/2007/01/05/turing-equivalent-vs-turing-co

    Not a good article in my opinion.

    But you, being you, have managed to pick the worst possible alternative.
    I am not sure that this is an accident. A pile of x86 is a great place
    to hide.

    Not in this case. I did it this way to make my solution as easy as
    possible to fully understand. I could have buried it in a few billion
    Turing Machine interpreter instructions where simply accessing the
    memory location @ 0xffff takes 65,000 steps.

    I was suggesting a high-level language. Of course even there you could
    bury your error, but it would be harder.

    It boils down to this: If Turing, Gödel and Tarski are correct then
    truth itself is fundamentally broken.

    You mean mathematical truth is not what you always thought it was.

    Since truth itself cannot possibly be broken human understanding of
    the nature of truth is necessarily incorrect.

    Mathematics does not bend to your pronouncements. Mathematical truth is
    what it is, and your declaring it broken has no effect at all.

    Since provability is an aspect of satisfiability true and provable
    cannot possibly diverge.

    PO-provability may be an aspect of PO-satisfiability and PO-truth and PO-provable may not diverge, but the sentence as you wrote it (with
    words you don't yet understand) is just wrong.

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to olcott on Thu Aug 13 11:50:04 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:

    On 8/12/2020 10:25 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
    It boils down to this: If Turing, Gödel and Tarski are correct then
    truth itself is fundamentally broken.

    You mean mathematical truth is not what you always thought it was.

    Since truth itself cannot possibly be broken human understanding of
    the nature of truth is necessarily incorrect.

    Mathematics does not bend to your pronouncements. Mathematical truth is
    what it is, and your declaring it broken has no effect at all.

    Since provability is an aspect of satisfiability true and provable
    cannot possibly diverge.

    PO-provability may be an aspect of PO-satisfiability and PO-truth and
    PO-provable may not diverge, but the sentence as you wrote it (with
    words you don't yet understand) is just wrong.

    If a proposition is true you must have some basis to know that it is
    true besides a wild guess or a hunch.

    Every arithmetical formula is either true or false. Not every
    arithmetical formula is provable or refutable. Deal with it.

    What-so-ever definitive basis that you have to show that a proposition
    is definitely true is the proof of this proposition.

    In a sound system, if you have a proof, the proposition is true. No one
    doubts that. But in the absence of a proof every proposition P
    corresponds to one as yet unknown truth: either P or ~P.

    THERE IS NO WAY OUT OF THIS

    Yes there is: education. You don't have to keep using words you don't
    understand.

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to olcott on Thu Aug 13 20:38:04 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:

    On 8/13/2020 5:50 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/12/2020 10:25 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
    Since provability is an aspect of satisfiability true and provable
    cannot possibly diverge.

    PO-provability may be an aspect of PO-satisfiability and PO-truth and
    PO-provable may not diverge, but the sentence as you wrote it (with
    words you don't yet understand) is just wrong.

    If a proposition is true you must have some basis to know that it is
    true besides a wild guess or a hunch.

    Every arithmetical formula is either true or false. Not every
    arithmetical formula is provable or refutable. Deal with it.

    The only possible way to correctly conclude that a WFF is true or
    false is some kind of proof.

    Do you see how that is not a reply to my point? Probably not.

    If anyone concludes that a WFF is true or false without some kind of
    proof this is necessarily mere presumption.

    If... So many things if... But fortunately the antecedent is false.
    No one (apart from you) does that.

    (Do you remember that way back in 2012 when you declared: "If God can
    not solve the Halting Problem, then there is something wrong with the problem"?)

    What-so-ever definitive basis that you have to show that a proposition
    is definitely true is the proof of this proposition.

    In a sound system, if you have a proof, the proposition is true. No one
    doubts that. But in the absence of a proof every proposition P
    corresponds to one as yet unknown truth: either P or ~P.


    Sure that is correct, but that is not what is asserted by the Gödel
    proof.

    Yes it is. You've never read it, have you? You would fall at the first page.

    The Gödel proof asserts that G is definitely true and definitely
    unprovable.

    The paper has an extra sting in the tail: that g must be true. But this
    is a metamathematical deduction from outside PM.

    You once thought it would be wise to learn what words like "true" mean
    in a mathematical context, but that did not last long. You stumbled
    when it turned out you did not know what a function was.

    That is the same thing as calculating the sum of two
    integers without using any numbers (integers are numbers).

    Analogies like this are only useful for people like me to see just how
    little you understand. They won't persuade anyone, but the fact that
    you think this is an analogy is very revealing.

    THERE IS NO WAY OUT OF THIS

    Yes there is: education. You don't have to keep using words you don't
    understand.

    You briefly thought you might try this way out...

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to olcott on Thu Aug 13 20:46:29 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:
    <cut>
    The only thing that I can share is details about my x86 based UTM.

    So boring. And you are still misusing the term UTM.

    I will not share any details about my halt decider until a very long
    time after I post the x86 based UTM to a code repository and have made
    all of the changes suggested by peer review.

    So only another 20-30 years maybe? The "UTM" (which, when you first
    mentioned it really sounded like a UTM because you were lying about
    working with TMs at the time) was going to take a week or so, two years
    ago. What went wrong?

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to olcott on Thu Aug 13 22:47:37 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:

    On 8/13/2020 1:34 PM, Keith Thompson wrote:
    <cut>
    I presume you have reasons not to use the standard malloc function.

    Yes. The master UTM is its own self-contained operating system and all
    memory is a single contiguous block as if it was an actual UTM tape.

    Oh dear. You have just said that your UTM (which is not a TM) is not U.
    All three letters are inaccurate. Of course, the PO-UTM part of all
    this appears to be pointless anyway, but if you think its universality
    matters to whatever claim you are making, you have blown it.

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to olcott on Thu Aug 13 23:17:58 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:

    On 8/13/2020 2:38 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/13/2020 5:50 AM, Ben Bacarisse wrote:
    <cut>
    In a sound system, if you have a proof, the proposition is true. No one
    doubts that. But in the absence of a proof every proposition P
    corresponds to one as yet unknown truth: either P or ~P.

    Sure that is correct, but that is not what is asserted by the Gödel
    proof.

    Yes it is. You've never read it, have you? You would fall at the first page.

    The Gödel proof asserts that G is definitely true and definitely
    unprovable.

    The paper has an extra sting in the tail: that g must be true. But this
    is a metamathematical deduction from outside PM.

    Ah great you are directly delving into the actual heart of it. The key mistake is that true outside of PM does not count as true inside of
    PM.

    Nonsense. There is no sign that you want to know what "true" means
    anymore so it would seem pointless to try to explain. Until you know,
    it would be absurd for me to reply. I only know the conventional
    meanings for words.

    You once thought it would be wise to learn what words like "true" mean
    in a mathematical context, but that did not last long. You stumbled
    when it turned out you did not know what a function was.

    We can get back to that. I still need to know the conventional
    meanings of the related set of terms.

    "We" can't. Only you can. How many more silly posts will you make before
    you decide to learn what it is you are saying?

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to olcott on Fri Aug 14 22:56:10 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:

    On 8/13/2020 5:17 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/13/2020 2:38 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/13/2020 5:50 AM, Ben Bacarisse wrote:
    <cut>
    In a sound system, if you have a proof, the proposition is true. No one
    doubts that. But in the absence of a proof every proposition P
    corresponds to one as yet unknown truth: either P or ~P.

    Sure that is correct, but that is not what is asserted by the Gödel
    proof.

    Yes it is. You've never read it, have you? You would fall at the first page.

    The Gödel proof asserts that G is definitely true and definitely
    unprovable.

    The paper has an extra sting in the tail: that g must be true. But this
    is a metamathematical deduction from outside PM.

    Ah great you are directly delving into the actual heart of it. The key
    mistake is that true outside of PM does not count as true inside of
    PM.

    Nonsense. There is no sign that you want to know what "true" means
    anymore so it would seem pointless to try to explain. Until you know,
    it would be absurd for me to reply. I only know the conventional
    meanings for words.

    The problem is that the conventional terms of the art from which the
    formal notion of truth is composed:

    {Interpretation, Satisfaction, logical consequence, decidability}
    (please let me know if I missed any)

    Suffer from the Sapir–Whorf hypothesis thus making the correct nature
    of analytical truth inexpressible.

    Why are you using them then? I have often urged you to use your own
    words but you keep using words that (a) you don't understand, and (b)
    you now say are not even capable of conveying your meaning.

    Shall I help out by prefixing every word I think you may not be using
    correctly with PO-? Then you can say if you think the word is up to the
    job and that you mean what it is conventionally used to mean. The PO-
    part can then come off. But if you decide you need the new word, you
    will have to define PO-whatever so we all stand a chance of knowing what
    you mean. Deal?

    https://en.wikipedia.org/wiki/Linguistic_relativity

    It seems that the key error of the conventional notion of truth is the concept of decidability. The assumption that every closed WFF maps to
    a Boolean value is simply a false assumption.

    It's not an assumption. Maybe you mean PO-assumption? And, since you
    are wrong, maybe you meant PO-closed? Or PO-WFF? Maybe it's the maps
    that should be PO-maps?

    But if you mean that every sentence of a first-order formal language is
    either true or false, for any given interpretation, (I'm using the
    conventional words here) then you have a simple route to greatness.
    Just give an example!

    No need to fuss over what a UTM is or isn't. No need to read any of the
    papers you are criticising. Just write one sentence in a first-order
    language, along with one interpretation for which it is neither true nor
    false, and you will be famous. Just one!

    But sadly for you, you don't mean that. You mean something more like
    "the PO-assumption that every PO-closed PO-WFF PO-maps to a Boolean
    PO-value is simply a PO-false PO-assumption" and no one cares about
    that.

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to olcott on Sat Aug 15 00:48:47 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:

    32-bit x86 function calls already use signed 32-bit relative addressing.

    void Bar(int N)
    {
    Foo(5);
    }


    void Foo(int N)
    {
    Bar(6);
    }

    101: 55 push ebp
    102: 8BEC mov ebp, esp
    104: 6A05 push 0x5
    106: E823000000 call 0x12e ; +23h bytes little endian
    10b: 83C404 add esp, 0x4 ; of 00000023h
    10e: 5D pop ebp
    10f: C3 ret

    12e: 55 push ebp
    12f: 8BEC mov ebp, esp
    131: 6A06 push 0x6
    133: E8C9FFFFFF call 0x101 ; -37h bytes little endian
    138: 83C404 add esp, 0x4 ; of FFFFFFC9h
    13b: 5D pop ebp
    13c: C3 ret

    The x86 architecture specifies a Turing complete model of computation
    for its control flow operations:

    Signed 32-bit relative addressing for function calls. (shown above).
    Signed 32-bit relative addressing for unconditional branches.
    Signed relative addressing for all conditional branches.

    There are two notions knocking around that we must distinguish between.
    There is an informal, rather casual, notion of Turing complete that
    glosses over all the reasons why some notation or other is not really
    Turing complete. Then there is the more formal demonstration that some notation really is Turing complete by demonstrating (often in general
    terms) how that notation can simulate any Turing machine computation.

    From the above, you seem to be going for the casual meaning because you
    don't explain how you would deal with having, say, 87 trillion
    functions. Just doing 32-bit relative calls won't work in that case.

    You keep writing UTM. To perfectly simulate any TM computation, the
    only barrier can be running out of resources (the heat-death of the
    universe, or the exhausting of the Martian silicon mines). There must
    be no barrier in the abstract notation. Thus in C, you can't simulate a
    TM's tape with a character array that you malloc and realloc because the definition of C specifies fixed-width pointers. For similar reasons,
    you can't use an ordinary linked list. You have to come up with a data structure that has no limits imposed by the notation. I am not certain
    it's possible. In fact I think it isn't (but I am not 100% sure), and
    the same (actually more) problems exist for pure x86 code.

    (There's a simple way to get round this for C -- but not for x86 code --
    and that is to imagine an unbounded collection of implementations. If
    your TM runs out of tape with 64-bit pointers in the linked list, just
    imagine an implementation with 1024-bit pointers. And if that's not
    enough we can conceive of an implementation with pointers that are a
    megabyte in size. I feel, though, that this does not really meet the requirements.)
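
    (For reference, a minimal C sketch, not from the thread, of the
    malloc/realloc tape Ben describes; the names Tape and tape_reserve are
    illustrative. It works for any practical input, but size_t and the
    pointer returned by realloc are fixed width, which is exactly the bound
    Ben is pointing at.)

    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        char  *cells;   /* tape contents, '_' used as the blank symbol */
        size_t len;     /* number of cells currently allocated         */
    } Tape;

    /* Grow the tape so that cell index i exists, blank-filling new cells.
       Returns 0 on success, -1 when the request cannot be met. */
    static int tape_reserve(Tape *t, size_t i)
    {
        if (i < t->len)
            return 0;
        size_t new_len = t->len ? t->len : 1024;
        while (new_len <= i) {
            if (new_len > (size_t)-1 / 2)
                return -1;              /* the fixed-width limit, reached */
            new_len *= 2;
        }
        char *p = realloc(t->cells, new_len);
        if (p == NULL)
            return -1;
        memset(p + t->len, '_', new_len - t->len);
        t->cells = p;
        t->len   = new_len;
        return 0;
    }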

    It can also be understood (although more difficult) that the above
    Turing complete model of computation applied to the control flow of
    the x86 architecture can be leveraged to provide access to unlimited
    data.

    Nothing you have shown comes close to demonstrating a Turing complete
    model of computation.

    I won't provide the details of this to avoid getting into endless
    debate with people that do not understand that they do not understand.

    Hey, that's an excellent way to avoid being wrong. Assert a fact
    without evidence and then say you won't discuss it with anyone who can't
    see you are correct!

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to olcott on Sat Aug 15 01:05:55 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:
    <cut>
    I am going to go with the code that is the easiest to understand at the "C" level.

    uint32_t* master_state;
    master_state = AllocateHeap(16);

    How long have you been working on this? In Dec 2018 you said all that
    was needed was the UTM and that should take a week or so.

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to olcott on Sat Aug 15 02:13:26 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:

    On 8/14/2020 6:48 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    32-bit x86 function calls already use signed 32-bit relative addressing.
    void Bar(int N)
    {
    Foo(5);
    }


    void Foo(int N)
    {
    Bar(6);
    }

    101: 55 push ebp
    102: 8BEC mov ebp, esp
    104: 6A05 push 0x5
    106: E823000000 call 0x12e ; +23h bytes little endian
    10b: 83C404 add esp, 0x4 ; of 00000023h
    10e: 5D pop ebp
    10f: C3 ret

    12e: 55 push ebp
    12f: 8BEC mov ebp, esp
    131: 6A06 push 0x6
    133: E8C9FFFFFF call 0x101 ; -37h bytes little endian
    138: 83C404 add esp, 0x4 ; of FFFFFFC9h
    13b: 5D pop ebp
    13c: C3 ret

    The x86 architecture specifies a Turing complete model of computation
    for its control flow operations:

    Signed 32-bit relative addressing for function calls. (shown above).
    Signed 32-bit relative addressing for unconditional branches.
    Signed relative addressing for all conditional branches.

    There are two notions knocking around that we must distinguish between.
    There is an informal, rather casual, notion of Turing complete that
    glosses over all the reasons why some notation or other is not really
    Turing complete. Then there is the more formal demonstration that some
    notation really is Turing complete by demonstrating (often in general
    terms) how that notation can simulate any Turing machine computation.

    From the above, you seem to be going for the casual meaning because you
    don't explain how you would deal with having, say, 87 trillion
    functions. Just doing 32-bit relative calls won't work in that case.

    We don't have to actually have the hardware. We only really need the
    abstract model of computation.

    But you are declaring it to be x86 code. You can banish the hardware,
    but x86 code embodies that hardware in the notation.

    To specify 87 trillion functions merely requires a linked list with 87 trillion elements.

    What does that look like in x86 code? And whatever answer you come up
    with, be sure it extends further. There must be no finite limit. This
    is a genuine question. I know you like to just ignore questions and
    move on to say more wrong stuff, but please take a moment to prove me
    wrong. How is this data structure implemented in x86 code?

    Even if you can do it, picking an actual machine language was a silly
    idea. High-level languages are usually defined without reference to
    hardware, so they have no such limits in the notation. You don't have
    to do any of this work in, say, Haskell.

    On 8/14/2020 6:34 AM, Andy Walker wrote:
    every standard m/c code
    that I am familiar with is perfectly capable of encoding any one of
    the standard UTMs [and therefore any TM computation]. All that is
    needed beyond a quite simple FSM, such as that on the front cover of
    Minsky's "Computation", is a mechanism for accessing arbitrarily much
    storage...

    In V41 (x86 is Turing Complete)
    I replace the last two sentences with this:

    On 8/14/2020 2:32 PM, olcott wrote:
    Data access is also based on relative addressing therefore: The x86
    language specifies an abstract model of computation that is Turing
    complete.

    proving (to anyone knowing how easy it is to simulate a Turing Machine
    such as Andy and I)

    How do you choose whose claims you accept without proof?

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to olcott on Sat Aug 15 17:49:01 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:

    On 8/14/2020 8:13 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
    We don't have to actually have the hardware. We only really need the
    abstract model of computation.

    But you are declaring it to be x86 code. You can banish the hardware,
    but x86 code embodies that hardware in the notation.

    The abstract model of computation that the x86 language specifies is
    Turing complete.

    I am glad you believe that. I wonder why you worked so hard in the past
    (and continue to do so below) to suggest you were working within finite
    limits.

    To specify 87 trillion functions merely requires a linked list with 87
    trillion elements.

    What does that look like in x86 code? And whatever answer you come up
    with, be sure it extends further. There must be no finite limit. This
    is a genuine question. I know you like to just ignore questions and
    move on to say more wrong stuff, but please take a moment to prove me
    wrong. How is this data structure implemented in x86 code?

    Any software engineer of ordinary skill in the art would be able to
    create a linked list of memory elements of unlimited length on the
    basis of a signed 32-bit relative jump instruction.

    I asked what the data structure would look like. Jumps have nothing to
    do with that. What does the data structure for an essentially unbounded
    linked list look like?

    How do you choose whose claims you accept without proof?

    That I directly know how easy it is to simulate a Turing machine
    because I studied this one
    http://www.lns.mit.edu/~dsw/turing/turing.html

    That is not a Turing-complete implementation.

    This all appears to be a rabbit hole down which there is nothing to be
    learned. No one doubts that something very like x86 code (as assembler,
    not machine code) can be tweaked to remove limits and would therefore be Turing-complete. But you were at pains to point out how this was really
    x86 code in every detail.

    The second point is that I no longer know what you are claiming. Simply
    that some abstract model of computation (based loosely or tightly round
    the x86 architecture) is Turing complete or that you have implemented
    (or more likely just thought about) some x86 code that can simulate
    Turing machine computations?

    The third point is what on Earth this has to do with your original
    claim, two years ago, to have something impossible. We still don't know
    what that was (and, yes, I know you've said you won't ever say because
    you've "moved on").

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to olcott on Sat Aug 15 20:31:23 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:

    I can now say that the abstract model of computation specified by the
    x86 language is Turing complete on the basis that this model can
    easily simulate this quintuple based Turing machine:
    http://www.lns.mit.edu/~dsw/turing/turing.html

    Yes, you can say it. But you also said it would take a week or two to
    code up the UTM you needed... over two years ago. And you also said
    that you had an actual Turing machine pair -- in fact "exactly and
    precisely the Peter Linz H and Ĥ". None of which was true.

    The TM simulator you keep citing does not prove that "the x86 language
    is Turing complete" because it is not a full simulator.

    and Relative addressing for control flow and data access provides the abstract model for unlimited memory access.

    Again, easy to say, but not true.

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to olcott on Sat Aug 15 22:34:28 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:

    On 8/15/2020 11:49 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/14/2020 8:13 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
    To specify 87 trillion functions merely requires a linked list with 87
    trillion elements.

    What does that look like in x86 code? And whatever answer you come up
    with, be sure it extends further. There must be no finite limit. This
    is a genuine question. I know you like to just ignore questions and
    move on to say more wrong stuff, but please take a moment to prove me
    wrong. How is this data structure implemented in x86 code?

    Any software engineer of ordinary skill in the art would be able to
    create a linked list of memory elements of unlimited length on the
    basis of a signed 32-bit relative jump instruction.

    I asked what the data structure would look like. Jumps have nothing to
    do with that. What does the data structure for an essentially unbounded
    linked list look like?

    We have an unlimited number of memory address sized blocks of memory
    where the beginning of each block has a relative jump into the prior
    block and the end of each block has a relative jump into the next
    block.

    What do jumps have to do with it?

    You are either totally lost or you are trying to construct a
    distraction. Since it seems you can't describe the data structure, just
    show the code that changes every 0 to a 1 (these can be any fixed size)
    data in the list elements. This loop will make the data structure clear
    by showing how the code moves from one element to the next, and how the
    end of the list is detected.

    How do you choose whose claims you accept without proof?

    That I directly know how easy it is to simulate a Turing machine
    because I studied this one
    http://www.lns.mit.edu/~dsw/turing/turing.html

    That is not a Turing-complete implementation.

    http://www.lns.mit.edu/~dsw/turing/doc/tm_manual.txt
    Here is a Turing Complete adaptation, that I derived two years ago:

    All integers are specified as hexadecimal digits [0-9A-F] encoded as:
    H. Each element of the quintuple is space delimited. One Turing
    Machine Description language instruction per line:

    The original input language is Turing complete. It's the implementation
    that is not.

    H+ HHHHHHHH HHHHHHHH H+ [RL]

    States are of arbitrary finite length strings of hexadecimal digits.
    Tape elements are unsigned 32-bit integers.

    That was a pointless change from the point of view of the TM notation.
    The original and your extended one are both Turing complete. (It's an odd
    term for a notation that is simply a way of writing TMs.)

    And if you make corresponding changes to the implementation (I don't
    think you did) I very much suspect it was no more Turing complete than
    the original.

    You should stick with the casual meaning. No one will object to a bit
    of hand-waving on this point (unless it is somehow central to whatever
    it is you are claiming these days). The trouble starts when you
    over-specify and state, as facts, things that need to be proven.

    This all appears to be a rabbit hole down which there is nothing to be
    learned. No one doubts that something very like x86 code (as assembler,
    not machine code) can be tweaked to remove limits and would therefore be
    Turing-complete. But you were at pains to point out how this was really
    x86 code in every detail.

    My halting problem research will be presented as a program that runs
    on an x86 based UTM.

    In what sense is this x86 "thing" U and in what sense is it a TM? Are
    these both PO- terms? It sounds like it's just an x86 simulator but
    surely not?

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to olcott on Sat Aug 15 23:40:51 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:

    On 8/15/2020 4:34 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/15/2020 11:49 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/14/2020 8:13 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
    To specify 87 trillion functions merely requires a linked list with 87
    trillion elements.

    What does that look like in x86 code? And whatever answer you come up
    with, be sure it extends further. There must be no finite limit. This
    is a genuine question. I know you like to just ignore questions and
    move on to say more wrong stuff, but please take a moment to prove me
    wrong. How is this data structure implemented in x86 code?

    Any software engineer of ordinary skill in the art would be able to
    create a linked list of memory elements of unlimited length on the
    basis of a signed 32-bit relative jump instruction.

    I asked what the data structure would look like. Jumps have nothing to
    do with that. What does the data structure for an essentially unbounded
    linked list look like?

    We have an unlimited number of memory address sized blocks of memory
    where the beginning of each block has a relative jump into the prior
    block and the end of each block has a relative jump into the next
    block.

    What do jumps have to do with it?

    You are either totally lost or you are trying to construct a
    distraction. Since it seems you can't describe the data structure, just
    show the code that changes every 0 to a 1 (these can be any fixed size)
    data in the list elements. This loop will make the data structure clear
    by showing how the code moves from one element to the next, and how the
    end of the list is detected.

    How do you choose whose claims you accept without proof?

    That I directly know how easy it is to simulate a Turing machine
    because I studied this one
    http://www.lns.mit.edu/~dsw/turing/turing.html

    That is not a Turing-complete implementation.

    http://www.lns.mit.edu/~dsw/turing/doc/tm_manual.txt
    Here is a Turing Complete adaptation, that I derived two years ago:

    All integers are specified as hexadecimal digits [0-9A-F] encoded as:
    H. Each element of the quintuple is space delimited. One Turing
    Machine Description language instruction per line:

    The original input language is Turing complete. It's the implementation
    that is not.

    H+ HHHHHHHH HHHHHHHH H+ [RL]

    States are of arbitrary finite length strings of hexadecimal digits.
    Tape elements are unsigned 32-bit integers.

    That was a pointless change from the point of view of the TM notation.
    The original and your extended one are both Turing complete. (It's an odd
    term for a notation that is simply a way of writing TMs.)

    He had a very small finite number of states.

    That's bad (just from a design point of view) but it does not matter
    from the completeness point of view. I don't have time or the
    inclination to try to teach you this topic, but limited state and
    alphabet sets can be compensated for by using an encoding on the tape.
    You only need a handful of states and tape symbols to encode a universal
    Turing machine.

    And if you make corresponding changes to the implementation (I don't
    think you did) I very much suspect it was no more Turing complete than
    the original.

    My redesign obviously has an unlimited number of states.

    Yes, and that lets you represent more TM's directly, but it does not
    alter the completeness (or otherwise) of the notation.

    My halting problem research will be presented as a program that runs
    on an x86 based UTM.

    In what sense is this x86 "thing" U and in what sense is it a TM?

    This is apparently over your head.

    By which you mean you won't answer. Nothing new there. Anything to
    avoid being wrong again.

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to olcott on Sun Aug 16 13:23:09 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:

    On 8/15/2020 8:36 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
    I have shown that the abstract model of computation specified by the
    x86 language is Turing complete pertaining to access to unlimited
    memory.

    No, you have asserted it. In fact you've dodged requests to show it.
    Write the loop I talked about. That will overcome the problem you have
    in describing data structures.

    Your jmp examples are just silly (EIP is fixed-width and it's no help
    for data accesses).

    Within the abstract model of computation specified by the x86 language
    we would understand that jumping into the next 4GB block resets the EIP
    to an offset within this block. Since this is not currently defined
    within the current specification it would be a required adjustment to
    the current definition. You have definitely made a valid point
    here. There might even be other teeny tiny little adjustments that
    would be required.

    Yes, since jumps are of no use for the key data structure, you need the
    same "teeny tiny" change to the data addressing modes and, hey presto,
    you have access to unbounded storage implicitly defined in the abstract machine. The notation itself takes on the burden of there always being
    a "next" data block. You can't do it in pure x86 code.

    So now, since you can't use x86 addressing to access unbounded store,
    why bother with the mess that is x86 code? You could invent your own
    abstract machine language with the few simple instructions you need. Or
    you could use an existing one. Or you could use a high-level language
    that has the right concepts already defined.

    x86 code is the wrong choice. Taken literally it does not work. You
    need to invent some new entirely abstract addressing mode for it but you
    still have all the complication of it being, in all other respects, an
    actual machine language.

    It is common knowledge in the field that requirements of simulating a
    Turing Machine are trivial.

    You have no idea what is common knowledge in this field. What certainly
    is common is a lot of hand-waving about limits being arbitrary and so
    not crucial. And, to be frank, that's pretty much all anyone cares
    about. But the technical detail -- can some abstract notation really
    describe potentially unbounded access to storage -- is much less
    commonly discussed.

    I invite you to show that it's possible.

    I already did, although your caveat was totally apt.

    No you did not. You are either more ignorant than I have dared to
    assume, or you are a mendacious intellectual coward. You have neither
    defined the x86 data structure that represents an unbounded list, nor
    (as per my simpler request) shown the x86 code that traverses such a
    list. Maybe you know that in trying you will fail?

    I would probably prefer to use my unstated assumption that the EIP is
    only relative to the current 4GB memory block because this lets me
    keep the x86 language essentially as it is.

    Except for the fundamental changes that make it Turing complete you
    mean? And stop wittering on about EIP. You've latched onto relative
    jumps because you think you can sweep a huge change under the carpet that
    way (and thus make it look like a "teeny tiny" change). But you need a
    new addressing mode for data. Be brave. Try to write the loop I
    suggested. Do some actual intellectual work for a change.

    This seems to indicate that there is some sort of rigorous formal
    definition of UTM:

    Of course there is. It's defined in Turing's 1936 paper though modesty prevented him from using his name for it. If you ever read a book you'd
    know this stuff. That is why I have been complaining about your misuse
    of the term. Whatever you have now is neither U nor a TM.

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to olcott on Sun Aug 16 17:41:34 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:

    On 8/16/2020 7:23 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/15/2020 8:36 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
    I have shown that the abstract model of computation specified by the
    x86 language is Turing complete pertaining to access to unlimited
    memory.

    No, you have asserted it. In fact you've dodged requests to show it.
    Write the loop I talked about. That will overcome the problem you have
    in describing data structures.

    Your jmp examples are just silly (EIP is fixed-width and it's no help
    for data accesses).

    Within the abstract model of computation specified by the x86 language
    we would understand that jumping into the next 4GB block resets the EIP
    to an offset within this block. Since this is not currently defined
    within the current specification it would be a required adjustment to
    the current definition. You have definitely made a valid point
    here. There might even be other teeny tiny little adjustments that
    would be required.

    Yes, since jumps are of no use for the key data structure, you need the

    Jumps <are> moving the tape head.

    Show me a loop that scans the tape replacing zeros with ones. I'm not
    going to spend any more time trying to fathom what you are doing. If you
    don't want to show code, keep it all secret and keep making your empty
    boasts. I really don't mind. No one is going to know what you are
    doing much less be taken in by your claims, so your postings are safe
    from needing correction.

    same "teeny tiny" change to the data addressing modes and, hey presto,

    Data addressing modes are already relative.

    But not unbounded. Not a small change. You are now not dealing with
    the x86 as designed but a new imaginary instruction set that just
    happens to have all the awful crud associated with the x86.

    you have access to unbounded storage implicitly defined in the abstract
    machine. The notation itself takes on the burden of there always being
    a "next" data block. You can't do it in pure x86 code.

    It can certainly be done in pure x86 code. Current x86 hardware does
    not interpret the x86 code this way.

    Show me a loop that scans the tape replacing zeros with ones. Stop
    asserting things and do some work.

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to olcott on Sun Aug 16 22:23:17 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:

    On 8/16/2020 2:52 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/16/2020 2:13 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/16/2020 11:41 AM, Ben Bacarisse wrote:
    <cut>
    Show me a loop that scans the tape replacing zeros with ones.

    I have already shown you enough code that any coder would get it.

    How curious, though, that you have not shown the simple loop I
    requested. Is it a "trade secret"?

    You are apparently not a coder.

    You know, I don't think that will be the conclusion that other readers
    draw!

    I already have other readers of exceptional credibility that have
    certified that my analysis is precisely correct:

    The relative addressing of the x86 language does specify an abstract
    model of computation with access to unlimited memory.

    Who said that? I need to go and correct that post too.

    Who said that? When I ask a question it is in the hope of getting an
    answer. You see, I think you have either misunderstood or are lying.
    In the absence of an answer, I'll have to pick one for myself.

    Meanwhile, a simple unbounded loop is still not possible (or at least,
    you refuse to post your secret code).

    It is self-evident to any software engineer and incomprehensible to
    people without sufficient software engineering background.

    Kaz can probably tell you how it is done and you may not understand
    him either, yet you would believe him.

    And yet /you/ can't.

    This is the simplest explanation of the process that I can make:

    If we reduce your problem to filling all of unlimited memory with
    zeroes we simply have the code keep moving itself forward as it fills
    the memory area before it with zeroes. To minimize the number of moves
    it always moves itself exactly as far as it can reach with its
    relative addressing.

    No, that was not the issue. You posited an unbounded linked list in a
    previous post. I asked to see it.

    It is curious that "the simplest explanation of the process" (a) does
    not address the question, and (b) is not code. The simplest explanation
    of such a loop would be the code for the loop.

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to olcott on Sat Aug 22 01:10:07 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:

    On 8/21/2020 5:20 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
    So you don't have the gumption to admit when you have been proved
    wrong?

    Me: Since it seems you can't describe the data structure, just show the
    code that changes every 0 to a 1 (these can be any fixed size) data
    in the list elements. This loop will make the data structure clear
    by showing how the code moves from one element to the next, and how
    the end of the list is detected.

    So you are saying that you are incapable of this?

    Still waiting...

    Still waiting for you to admit you lied when you said you had an "actual
    Turing machine" pair "exactly and precisely the Peter Linz H and Ĥ" way
    back in 2018.

    Still waiting for any evidence at all that you have anything to post
    other than empty boasts.

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to olcott on Sun Aug 23 17:51:12 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:

    On 8/23/2020 7:00 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/22/2020 8:04 PM, Ben Bacarisse wrote:
    <cut>
    So don't prove me wrong! I don't mind.

    But I wonder what you gain by not posting the x86 program (the one that
    scans a linked list changing zeros to ones, but where the list is not in
    any way bounded in size). It's odd that you have decided not to show
    /anyone/ that you are right, because you think I am biased. Why give me
    that power? The only loser in that strategy is you.

    Its a pearls before swine thing.

    As I say, you owe me nothing (though an apology for calling me a pig
    would not go amiss). By all means, keep all your great discoveries
    secret forever, but I don't see how that benefits you one iota.

    Oddly, when I got bored telling you you were wrong about the x86 and stopped
    replying you complained about being ignored. It seems only being told
    you are right is acceptable.

    You consistently prove that you lack the required prerequisites:

    As does anyone who knows the x86. The required prerequisite is to
    accept your extended architecture as being called "x86".

    No one disputes that suitable abstract instruction sets, unbounded
    registers,

    Yet you continue to fail to understand (or acknowledge) that the
    32-bit wide fixed width registers defined by one of the two x86
    abstract models of computation would by themselves provide access to
    unlimited memory without having to actually change the abstract model of
    computation defined by this x86 language at all.

    I can not "understand" a falsehood. Why do you say "would" not "do"?
    Maybe you really do know that the x86 does not provide access to
    theoretically unlimited memory, so you merely say that it "would"? Why
    the subjunctive implying conditionality? You mean it would if it could?

    It is the relative addressing of the x86 abstract model of computation defined by the x86 language that provides memory access and control
    flow access to an unlimited number of 4GB blocks of memory.

    No. The x86 has no way to access unbounded memory.

    It is only hardware implementations of this abstract model that place
    any limit on memory access. These hardware limits are implementation
    details that are not any aspect of the abstract model itself.

    No, it is the design of the x86 that prevents it. Did you try to write
    the loop I suggested?

    This is a very definitive statement from me: it's not possible. In
    fact, I may be wrong. There could be some corner of the specification
    that would permit access to some unbounded data (like the PDP-11 paper
    tape) so I am relying on other people to review what I write (and they
    may all have given up). I know /you/ have not demonstrated it, but

    The x86 language itself allows the instruction pointer and/or the data pointer

    There is no "data pointer" in this context. Relative addressing is RIP-relative only. You can (when not in IA-32 and compatibility mode)
    use RIP-relative addressing to access data but your remark suggests it
    applies to registers other than the instruction pointer. Mind you, you
    have not been clear if you are talking about 64 bit mode. All your
    previous posts seemed to suggest IA-32.

    to move forward in 1 to 0x7fffffff increments an unlimited number of
    times from their current location.

    To see why you are wrong (even about that) you need to read the details
    about that addressing mode. But you don't do details.
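
    (A small C illustration, not from the thread, of the detail Ben refers
    to: in 64-bit mode a position-independent build typically reads a
    global like the one below with a RIP-relative instruction, something
    like mov eax, DWORD PTR counter[rip]. The displacement in that encoding
    is a signed 32-bit field, so the reach from the current instruction is
    still bounded. The variable and function names are illustrative.)

    static int counter;          /* some global data */

    int read_counter(void)
    {
        return counter;          /* typically a RIP-relative load on x86-64 */
    }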

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to olcott on Sun Aug 23 22:14:03 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:

    On 8/23/2020 10:42 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
    In other words you continue to refrain from agreeing that this is true:

    If the implementation detail of the memory architecture of the x86
    language was an unlimited sequence of contiguous 4GB blocks and the
    absolute addressing modes of the x86 language only referred to
    addresses within the current 4GB block then it becomes rather obvious
    that the x86 language exactly as it currently exists specifies access
    to unlimited memory.

    The antecedent is false,

    You and I both know that implementation details are not a part of an
    abstract model of computation itself.

    We do.

    You and I both know that the capability of moving a tape head forward relative to its current location an unlimited number of times would
    define access to unlimited storage.

    Provided you can read and write the data, and go back as well as
    forward. But given your disliking of details, that's quibbling.

    That the abstract model of computation defined by the x86 language
    allows its instruction pointer and its data pointer to move forward
    relative to their current location an unlimited number of times also
    very obviously defines access to unlimited storage.

    No, we don't know that. The architecture of the x86 is very clear about
    what relative addresses are and how they are calculated. RIP is a
    fixed-width register.

    You and I also both know that when the physical hardware
    implementations of this abstract model place limits on the access to
    storage that these limits are not a part of the abstract model itself.

    Yes, we do.

    Thus we can conclude that the abstract model of computation defined by
    the x86 language is fully Turing complete pertaining to the requirement
    that a Turing machine has access to unlimited memory.

    No we can't conclude that.

    Write the loop I suggested and I will admit I was wrong. Alternatively,
    admit that after your 12 hours trying you did not decide that I did not
    deserve to see it, but rather that you simply failed in the task.

    This discussion is beyond pointless. You are convinced you are right.
    Great. Don't post anything substantive and keep thinking you are right.
    All good yes? Why keep posting things that make people like me want to
    point out the errors? You are sure there are none, so what do you gain
    by posting those messages and having to waste time with people like me?

    Can we be frank for a moment? Anyone who did not know the world of
    Internet cranks would wonder why you and Newberry and Graham Cooper
    (both of whom have also sorted the halting problem) don't just get
    together and write a joint paper. Or maybe thrash it out and find out
    which one of you got it wrong so the others can write the paper.

    But that's not how this place works. The key resource here is people
    who know stuff. You want to be able to tell them (people who know
    stuff) that they are wrong or being disrespectful, or that they are mad
    or deluded. Boasting and feeling superior is the main aim, and there is
    no point in trying to feel superior to another crank.

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to olcott on Sun Aug 23 22:12:55 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:

    On 8/23/2020 11:51 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/23/2020 7:00 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/22/2020 8:04 PM, Ben Bacarisse wrote:
    <cut>
    So don't prove me wrong! I don't mind.

    But I wonder what you gain by not posting the x86 program (the one that
    scans a linked list changing zeros to ones, but where the list is not in
    any way bounded in size). It's odd that you have decided not to show
    /anyone/ that you are right, because you think I am biased. Why give me
    that power? The only loser in that strategy is you.

    It's a pearls before swine thing.

    As I say, you owe me nothing (though an apology for calling me a pig
    would not go amiss). By all means, keep all your great discoveries
    secret forever, but I don't see how that benefits you one iota.

    Oddly, when I got bored of telling you that you were wrong about the x86
    and stopped replying, you complained about being ignored. It seems only
    being told you are right is acceptable.

    You consistently prove that you lack the required prerequisites:

    As does anyone who knows the x86. The required prerequisite is to
    accept your extended architecture as being called "x86".

    No one disputes that suitable abstract instruction sets, unbounded
    registers,

    Yet you continue to fail to understand (or acknowledge) that the
    32-bit wide fixed width registers defined by one of the two x86
    abstract models of computation would by themselves provide access to
    unlimited memory without having to actually change the abstract model of
    computation defined by this x86 language at all.

    I can not "understand" a falsehood. Why do you say "would" not "do"?
    Maybe you really do know that the x86 does not provide access to
    theoretically unlimited memory, so you merely say that it "would"? Why
    the subjunctive implying conditionality? You mean it would if it could?

    It is the relative addressing of the x86 abstract model of computation
    defined by the x86 language that provides memory access and control
    flow access to an unlimited number of 4GB blocks of memory.

    No. The x86 has no way to access unbounded memory.

    It is only hardware implementations of this abstract model that place
    any limit on memory access. These hardware limits are implementation
    details that are not any aspect of the abstract model itself.

    No, it is the design of the x86 that prevents it. Did you try to write
    the loop I suggested?

    This is a very definitive statement from me: it's not possible. In
    fact, I may be wrong. There could be some corner of the specification
    that would permit access to some unbounded data (like the PDP-11 paper
    tape) so I am relying on other people to review what I write (and they
    may all have given up). I know /you/ have not demonstrated it, but

    The x86 language itself allows the instruction pointer and/or the data
    pointer

    There is no "data pointer" in this context. Relative addressing is
    RIP-relative only. You can (when not in IA-32 and compatibility mode)
    use RIP-relative addressing to access data but your remark suggests it
    applies to registers other than the instruction pointer. Mind you, you
    have not been clear if you are talking about 64 bit mode. All your
    previous posts seemed to suggest IA-32.
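
    A hedged sketch, in C, of how a RIP-relative effective address is formed
    in 64-bit mode; rip_relative_ea is an invented name for illustration,
    not an Intel-defined function.

    #include <stdint.h>

    /* The encoded displacement is a signed 32-bit value added to the RIP of
       the *next* instruction, and the result is a 64-bit address -- so one
       RIP-relative access reaches at most about 2 GiB either side of RIP.  */
    uint64_t rip_relative_ea(uint64_t rip_of_next_instr, int32_t disp32)
    {
        return rip_of_next_instr + (int64_t)disp32;   /* wraps modulo 2^64 */
    }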

    to move forward in 1 to 0x7fffffff increments an unlimited number of
    times from their current location.

    To see why you are wrong (even about that) you need to read the details
    about that addressing mode. But you don't do details.

    Ah great you have finally provided some evidence that you can
    understand these things.

    But you have not.

    Here is a detail that you missed.

    See my other post for why you say these things. (As it happens I
    address the points below as well, but that is not going to be useful to
    you.)

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to olcott on Tue Aug 25 01:59:23 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:

    On 8/23/2020 7:38 PM, Ben Bacarisse wrote:
    <cut>
    But suit yourself. As I said, boasting and insulting is the main
    pleasure for a crank like you, and you won't stop just because I call
    you out on it. Have fun.

    I don't disrespect you. I disrespect your disrespecting me.
    The code is completed; it is only 65 bytes of x86 machine code.

    Another empty boast -- still no code. You can't seem to stop doing it.
    Or maybe repeating the excuse for not posting code -- that it would be
    "pearls before swine" -- would not look good next to your claim that you
    don't disrespect me!

    The code begins @ machine address 0x100. Fills the 256 bytes preceding
    it with "@@@@". Dynamically modifies its own first instruction to
    refer to its new code address. Copies itself to a new address that is
    exactly 0x100 bytes above its current code address and then does a
    relative jump to this new address.

    Instead of 65 bytes of unambiguous code, you post 312 bytes of poorly
    worded description. Despite the poor wording, this is clearly not the
    code you said you could produce (it's not in any way a linked list) but
    that's OK. You say enough for me to know that even this more limited
    loop does not meet your objective. Post the code and you can find out
    why it can only access a finite amount of memory. Don't post the code and
    you can boast away to your heart's content. It's up to you.

    I had the emulator terminate its execution after the first four full
    cycles. I output the first 1024 bytes of memory before and after
    execution and output a complete execution trace of all of the execution.
    This is 23 pages long.

    Wow! You wrote code and got 23 pages of output. You must be right.
    I'll call the editor of the JACM right away.

    According to the meaning of the syntax of the x86 language this
    program would continue to operate after it completed its execution @
    machine address: FFFFFF00.

    By studiously hiding the code you can avoid anyone being able to be
    specific, but assuming a reasonable interpretation of your non-code
    waffle, the jumps will always be to addresses within the current code
    segment. I'm pretty sure there are other problems with it (hint: you
    talk of EIP and not RIP so data access is also a problem) but your
    hiding the code protects it from more specific analysis.
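
    Taking the description above at face value, a small C simulation
    (illustrative only) of that copy-and-jump loop with a 32-bit EIP shows
    why it cannot reach unlimited memory: the code address repeats after
    16,777,216 hops, so the loop only ever touches the same 4 GiB.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t eip  = 0x100;      /* code starts at 0x100, per the description */
        uint64_t hops = 0;
        do {
            eip += 0x100;           /* "copy itself 0x100 bytes higher and jump" */
            hops++;
        } while (eip != 0x100);     /* stop when the address comes round again   */
        printf("distinct code addresses before wrap-around: %llu\n",
               (unsigned long long)hops);   /* prints 16777216 */
        return 0;
    }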

    The only reason that it would not continue to operate would be that
    the physical implementation of the abstract model did not implement
    the semantics specified by the language syntax.

    No. For more details, post the code.

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to olcott on Tue Aug 25 03:54:52 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:

    When you understand RIP relative addressing better you will more
    directly see how it makes the instruction pointer equivalent to a
    Turing machine tape head.

    No it doesn't. But if you think it does, why did you waste time talking
    about jumps and EIP rather than simple data accesses via RIP? Turing completeness does not need unbounded code, only unbounded data. My
    guess? Because you didn't know about RIP-relative data addressing until
    a few minutes ago.

    RIP relative addressing is much simpler: data can be directly read
    and written relative to the instruction pointer. It is much easier
    to see that RIP relative addressing more directly provides an
    abstract model of computation with unlimited memory access.

    See? You did not know about it, or you would not have gone the
    complicated route. (Of course you are wrong about both approaches, but
    you should have been wrong only about the simpler one.)

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to olcott on Tue Aug 25 23:35:33 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:

    On 8/25/2020 2:29 PM, Keith Thompson wrote:
    <cut>
    A relative JMP is defined in terms of its effect on EIP, which is
    specified in that document as a 32-bit register. This directly
    contradicts your claim.
    <cut>
    I have switched gears now.
    I have a much more effective approach.

    To make the abstract model of computation implied by RIP relative
    addressing have Turing complete memory access for both code and
    data we merely fail to assume any specific size of the instruction
    pointer.

    Finally! (Bravo Keith.) You will never admit that this change of gears
    means you were wrong, but at least there is now no need for you to keep
    saying the same wrong thing over and over again.

    Now, what was it you had when you falsely claimed to have Linz's H and H^?
    That's the only interesting part of this whole sorry mess -- your claim
    to have something impossible. That unbounded register machines are
    Turing complete, unlike the x86, is just computability 101, despite it
    taking you weeks to get there.
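
    For reference, a minimal C sketch of one "unbounded register" in the
    counter-machine sense, represented in unary as a linked list. The names
    reg_inc, reg_dec and reg_is_zero are illustrative, and the sketch
    assumes, as any abstract model must, that allocation never fails.

    #include <stdlib.h>

    /* Two such registers plus increment, decrement and zero-test are
       already enough for Turing completeness (a Minsky counter machine). */
    typedef struct Node { struct Node *next; } Node;
    typedef struct { Node *top; } Register;

    void reg_inc(Register *r)
    {
        Node *n = malloc(sizeof *n);   /* assumed never to fail */
        n->next = r->top;
        r->top  = n;
    }

    int reg_is_zero(const Register *r) { return r->top == NULL; }

    void reg_dec(Register *r)
    {
        if (r->top) { Node *n = r->top; r->top = n->next; free(n); }
    }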

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to olcott on Sat Aug 29 17:47:50 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:
    <cut>
    The "C" programming language can be construed to be functionally
    identical to a Turing machine if it can be mapped to an abstract model
    of computation that is functionally identical to a Turing machine.

    I am not sure what this conditional, hedged and vague remark is intended
    to say, but the C programming language (without the standard IO library)
    is not Turing complete[1]. I suspect it isn't Turing complete even
    /with/ the standard IO library, but I'd have to think about that. It
    all comes down to what parts of the language are bounded by definition.

    [1] I am not 100% sure of this. I have seen no proof, but I can't see
    any way round the bounded nature of the language. C appears to be too low-level to be formally Turing complete, but I'd love to find out that
    I'm wrong about this.
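
    One concrete way to see the bound being referred to (a short, standard C
    program, not a proof): object pointers have a fixed width, so a program
    can distinguish at most 2^(CHAR_BIT * sizeof(void *)) addresses, which
    makes its reachable state space large but finite.

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        printf("bits in an object pointer: %zu\n",
               (size_t)CHAR_BIT * sizeof(void *));
        return 0;
    }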

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to olcott on Sat Aug 29 20:03:32 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:

    On 8/29/2020 11:47 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
    The "C" programming language can be construed to be functionally
    identical to a Turing machine if it can be mapped to an abstract model
    of computation that is functionally identical to a Turing machine.

    I am not sure what this conditional, hedged and vague remark is intended
    to say, but the C programming language (without the standard IO library)
    is not Turing complete[1]. I suspect it isn't Turing complete even
    /with/ the standard IO library, but I'd have to think about that. It
    all comes down to what parts of the language are bounded by definition.

    [1] I am not 100% sure of this. I have seen no proof, but I can't see
    any way round the bounded nature of the language. C appears to be too
    low-level to be formally Turing complete, but I'd love to find out that
    I'm wrong about this.

    When the "C" programming language is mapped to this abstract model of computation it becomes functionally identical to a Turing Machine if
    the following abstract model is functionally identical to a Turing
    Machine:

    No. This is a technical question that can't be resolved with words like "mapped". Any non-Turing complete model such as a pushdown automaton
    can be "mapped" to a Turing machine (which is certainly Turing
    complete). This proves nothing other than the ability to waffle.

    If someone has a technical argument, I'd like to hear it.

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to olcott on Sat Aug 29 21:35:27 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:

    On 8/29/2020 2:03 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/29/2020 11:47 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
    The "C" programming language can be construed to be functionally
    identical to a Turing machine if it can be mapped to an abstract model >>>>> of computation that is functionally identical to a Turing machine.

    I am not sure what this conditional, hedged and vague remark is intended >>>> to say, but the C programming language (without the standard IO library) >>>> is not Turing complete[1]. I suspect it isn't Turing complete even
    /with/ the standard IO library, but I'd have to think about that. It
    all comes down to what parts of the language are bounded by definition. >>>>
    [1] I am not 100% sure of this. I have seen no proof, but I can't see >>>> any way round the bounded nature of the language. C appears to be too >>>> low-level to be formally Turing complete, but I'd love to find out that >>>> I'm wrong about this.

    When the "C" programming language is mapped to this abstract model of
    computation it becomes functionally identical to a Turing Machine if
    the following abstract model is functionally identical to a Turing
    Machine:

    No. This is a technical question that can't be resolved with words like
    "mapped". Any non-Turing complete model such as a pushdown automaton
    can be "mapped" to a Turing machine (which is certainly Turing
    complete).

    You made that point.

    Obviously, but did you understand it?

    We map the "C" programming language to the tiny subset of the x64
    programming language and then validate that this "C" language can
    recognize the set of recursively enumerable languages.

    I don't think you know what you are talking about. Sorry, I'll
    re-phrase that. I am sure you don't know what you are talking about.

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to olcott on Sat Aug 29 23:53:31 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:
    <cut>
    I think that the only higher level construct required to make "C"
    truly Turing complete is the capability to create an unlimited number
    of totally unlimited linked lists.

    You need much less than this, but that would do it.

    I have almost all of the details
    figured out how to do that.

    Since it's a purely theoretical notation you just stipulate an interface
    for it. It takes no time at all. How did you define it in Dec 2018?
    Your snippet of C-like code was, you claim, written in a Turing complete notation.
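
    A hypothetical example of such a stipulated interface, written as C
    declarations; the Cell type and the cell_* functions are invented for
    illustration and deliberately have no portable C definition -- which is
    why a notation that simply assumes them is C-like rather than C.

    typedef struct Cell Cell;     /* one tape cell, linked both ways        */

    Cell *cell_origin(void);      /* the initial, blank cell                */
    Cell *cell_left  (Cell *c);   /* neighbouring cells, created on demand, */
    Cell *cell_right (Cell *c);   /*   without any limit                    */
    int   cell_read  (Cell *c);
    void  cell_write (Cell *c, int v);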

    Can you see how adding this to "C" would make "C" Turing complete?

    It would then not be C. It would be C-like or quasi C or you could even
    pick a new name. But your track record in misusing names suggests you
    will keep calling it C no matter how radically different its
    computational properties are.

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to olcott on Sun Aug 30 20:05:50 2020
    XPost: comp.theory, comp.ai.philosophy, sci.lang.semantics

    olcott <NoOne@NoWhere.com> writes:

    On 8/30/2020 9:38 AM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:

    On 8/29/2020 8:35 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    <cut>
    The Turing complete stuff would be machine code that the "C" does not
    know about or care about. If "C" needs unlimited integers it makes
    them out of ASCII digits.

    You are confused. "Implementing" a non-Turing complete language by
    using a Turing complete notation (even a Turing machine itself) does not
    make the original language Turing complete.

    You already agreed that it does.

    You are having trouble understanding what people (it's not just me) are
    saying to you. I'm not bothered if you don't know what I am saying, but
    I am happy to answer questions about it if you want to find out.

    Obviously it's simpler for me if you continue as you are.

    On 8/29/2020 5:53 PM, Ben Bacarisse wrote:
    olcott <NoOne@NoWhere.com> writes:
    I think that the only higher level construct required to make "C"
    truly Turing complete is the capability to create an unlimited number
    of totally unlimited linked lists.

    You need much less than this, but that would do it.

    Would you like some help to understand how this stuff works?

    Not interested?

    I have defined two terms:
    (a) Functionally equivalent machines / computations.
    (b) Identical computations

    Yes, I saw that. It's clear from those posts that you are not
    interested in learning about this topic.

    You keep posting ill-formed musings. Few of them have enough technical
    detail to be wrong so they don't need to be corrected. But it seems you
    are having fun, so everyone is happy for the time being.

    My intention is that functionally equivalent machines are exactly the
    same thing as Turing equivalent machines.

    They never will be actual Turing machines though (unless they were to
    start with, of course). Is all this recent rambling really just a
    lengthy attempt to retrospectively absolve yourself for lying? Move
    on, mate.

    I've snipped your happy ramblings. I hope you did not want me to
    comment on them.

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)